WorldWideScience

Sample records for modeling techniques required

  1. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The requirement engineering process starts with the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to capture system requirements accurately is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during the elicitation process is a communication-intensive activity that involves verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing their non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  2. An EMG-assisted model calibration technique that does not require MVCs.

    Science.gov (United States)

    Dufour, Jonathan S; Marras, William S; Knapik, Gregory G

    2013-06-01

    As personalized biologically-assisted models of the spine have evolved, the normalization of raw electromyographic (EMG) signals has become increasingly important. The traditional method of normalizing myoelectric signals, relative to measured maximum voluntary contractions (MVCs), is susceptible to error and is problematic for evaluating symptomatic low back pain (LBP) patients. Additionally, efforts to circumvent MVCs have not been validated during complex free-dynamic exertions. Therefore, the objective of this study was to develop an MVC-independent biologically-assisted model calibration technique that overcomes the limitations of previous normalization efforts, and to validate this technique over a variety of complex free-dynamic conditions including symmetrical and asymmetrical lifting. The newly developed technique (non-MVC) eliminates the need to collect MVCs by combining gain (maximum strength per unit area) and MVC into a single muscle property (gain ratio) that can be determined during model calibration. Ten subjects (five male, five female) were evaluated to compare gain ratio prediction variability, spinal load predictions, and model fidelity between the new non-MVC and established MVC-based model calibration techniques. The new non-MVC model calibration technique demonstrated at least as low gain ratio prediction variability, similar spinal loads, and similar model fidelity when compared to the MVC-based technique, indicating that it is a valid alternative to traditional MVC-based EMG normalization. Spinal loading for individuals who are unwilling or unable to produce reliable MVCs can now be evaluated. In particular, this technique will be valuable for evaluating symptomatic LBP patients, which may provide significant insight into the underlying nature of the LBP disorder.
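
    As a rough algebraic sketch of the gain-ratio idea described above (the notation below is illustrative, not the authors' own), folding the MVC into the gain turns two muscle properties into one quantity that can be identified during calibration:

```latex
% Traditional MVC-normalized muscle force (illustrative notation):
\[ F_j(t) \;=\; g_j \,\frac{\mathrm{EMG}_j(t)}{\mathrm{MVC}_j}\, A_j \]
% Folding MVC into the gain defines a single "gain ratio" \gamma_j:
\[ \gamma_j \;=\; \frac{g_j}{\mathrm{MVC}_j}, \qquad
   F_j(t) \;=\; \gamma_j \,\mathrm{EMG}_j(t)\, A_j \]
% \gamma_j can be identified during model calibration from submaximal
% exertions, so no maximum voluntary contraction trial is required.
```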

  3. Non-formal techniques for requirements elicitation, modeling, and early assessment for services

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Vyas, Dhaval; Dittmar, A.; Forbig, P.

    2011-01-01

    Designing systems for multiple stakeholders requires frequent collaboration with multiple stakeholders from the start. In many cases at least some stakeholders lack a professional habit of formal modeling. We report observations from two case studies of stakeholder involvement in early design where

  4. Testing agile requirements models

    Institute of Scientific and Technical Information of China (English)

    BOTASCHANJAN Jewgenij; PISTER Markus; RUMPE Bernhard

    2004-01-01

    This paper discusses a model-based approach to validating software requirements in agile development processes by simulation and, in particular, automated testing. The use of models as the central development artifact needs to be added to the portfolio of software engineering techniques to further increase the efficiency and flexibility of development, beginning already in the early requirements definition phase. Testing requirements is one of the most important techniques for giving feedback and increasing the quality of the result. Therefore, testing of artifacts should be introduced as early as possible, even in the requirements definition phase.

  5. Modelling biomechanical requirements of a rider for different horse-riding techniques at trot

    NARCIS (Netherlands)

    Cocq, de P.; Muller, M.; Clayton, H.M.; Leeuwen, van J.L.

    2013-01-01

    The simplest model possible for bouncing systems consists of a point mass bouncing passively on a mass-less spring without viscous losses. This type of spring–mass model has been used to describe the stance period of symmetric running gaits. In this study, we investigated the interaction between
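
    A minimal numerical sketch of the spring-mass model described above: a point mass bouncing on a mass-less, loss-free linear spring. All parameter values and the simple integrator are illustrative stand-ins, not taken from the study.

```python
import numpy as np

# Point mass on a mass-less, loss-free linear spring: the simplest
# "bouncing" model used to describe the stance phase of symmetric gaits.
m, k, g, L0 = 70.0, 20000.0, 9.81, 1.0   # mass [kg], stiffness [N/m], gravity, rest leg length [m]
y, v, dt = 1.05, 0.0, 1e-4               # start in flight, slightly above touchdown height

t, T = 0.0, 2.0
heights = []
while t < T:
    if y < L0:                            # stance: spring is compressed
        a = k * (L0 - y) / m - g
    else:                                 # flight: purely ballistic motion
        a = -g
    v += a * dt                           # semi-implicit Euler step
    y += v * dt
    heights.append(y)
    t += dt

print(f"min height (max spring compression): {min(heights):.3f} m")
print(f"max height (apex of flight):         {max(heights):.3f} m")
```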

  6. Comparison of Various Requirements Elicitation Techniques

    National Research Council Canada - National Science Library

    Masooma Yousuf; M Asger

    2015-01-01

    No requirements elicitation technique has the capability of finding all of the software requirements, so a variety of techniques must be used to help cover all the requirements, resulting...

  7. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis, namely the Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated with communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  8. Quasi-monte carlo simulation and variance reduction techniques substantially reduce computational requirements of patient-level simulation models: An application to a discrete event simulation model

    NARCIS (Netherlands)

    Treur, M.; Postma, M.

    2014-01-01

    Objectives: Patient-level simulation models provide increased flexibility to overcome the limitations of cohort-based approaches in health-economic analysis. However, the computational requirements of reaching convergence are a notorious barrier. The objective was to assess the impact of using quasi-Monte Carlo...
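
    A toy illustration of the idea (the two-variable "cost" integrand below is invented for the example; it is not the authors' discrete event model): scrambled Sobol points can replace pseudo-random draws with one line of scipy.

```python
import numpy as np
from scipy.stats import qmc

# Plain Monte Carlo vs. quasi-Monte Carlo (Sobol) estimation of a mean
# outcome, as a stand-in for a patient-level simulation model.
def cost(u):
    # u: uniform(0,1) draws for two stochastic patient attributes
    return 1000.0 * u[:, 0] ** 2 + 500.0 * np.sin(np.pi * u[:, 1])

rng = np.random.default_rng(42)
n = 2 ** 12                               # 4096 simulated patients

mc = cost(rng.random((n, 2)))             # pseudo-random sampling
sobol = qmc.Sobol(d=2, scramble=True, seed=42)
qmc_est = cost(sobol.random_base2(m=12))  # 2**12 scrambled Sobol points

print(f"MC  estimate of mean cost: {mc.mean():.2f}")
print(f"QMC estimate of mean cost: {qmc_est.mean():.2f}")
# Over repeated runs the QMC estimate typically shows much lower
# variance at the same n, which is the computational saving at issue.
```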

  9. Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool.

    Science.gov (United States)

    Staccini, Pascal M; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate their conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning of the clinical context while providing elements for analysis. One of the solutions for filling this gap is to consider the process model itself in the role of a hub, as a centralized means of facilitating communication between team members. Starting with a robust and descriptive technique for process modeling called IDEF0/SADT, we refined the basic data model by extracting concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed, notably regarding its ability to generate data dictionaries and to be used as a navigation tool through the medium of hospital-wide documentation.

  10. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  11. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM is currently a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques, which come down to the way features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.

  12. Requirements Prioritization: Challenges and Techniques for Quality Software Development

    Directory of Open Access Journals (Sweden)

    Muhammad Abdullah Awais

    2016-04-01

    Full Text Available Every organization is aware of the consequences and importance of requirements for the development of a quality software product, whether local or global. The requirement engineering phase of development, with its focus on the prioritization of requirements, is under intensive research because in any development methodology not all requirements can be implemented at the same time; requirements are prioritized so that a solution can be delivered as early as possible, in phases scheduled in an incremental fashion. Numerous frameworks and practices have been devised, are in progress, or are being discovered day by day. With such a huge knowledge base and body of research available, it has always been confusing to decide which technique to follow to gain maximum results. Many projects fail because of the wrong choice in requirement prioritization, as it is genuinely difficult to employ the right technique and framework at the right time. The problems do not end there: due to strict deadlines, it is often best to develop a system in parts by different team members dispersed globally, with diverse methodologies and differences, and in this situation it becomes even more difficult to prioritize requirements. The main focus here is on ETVX-based prioritization [1] for in-house development and on requirement prioritization for software developed globally by diverse team members [2]. This paper provides an overview of different prioritization techniques for software requirements; a critical analysis of the ETVX-based model is presented to highlight issues and challenges in this proposed model of requirement prioritization [1], an improved version of this model is presented, and an analysis of requirement prioritization for software developed in a global environment [2] is also presented.

  13. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  14. A Framework for Modelling Software Requirements

    Directory of Open Access Journals (Sweden)

    Dhirendra Pandey

    2011-05-01

    Full Text Available Requirement engineering plays an important role in producing quality software products. In recent years, several requirement framework approaches have been designed to provide an end-to-end solution for the system development life cycle. Textual requirements specifications are difficult to learn, design, understand, review, and maintain, whereas pictorial modelling is widely recognized as an effective requirement analysis tool. In this paper, we present a requirement modelling framework together with an analysis of modern requirements modelling techniques. We also discuss various domains of requirement engineering with the help of modelling elements such as a semantic map of business concepts, lifecycles of business objects, business processes, business rules, a system context diagram, use cases and their scenarios, constraints, and user interface prototypes. The proposed framework is illustrated with a case study of an inventory management system.

  15. Analysis of Requirement Engineering Processes, Tools/Techniques and Methodologies

    Directory of Open Access Journals (Sweden)

    Tousif ur Rehman

    2013-02-01

    Full Text Available Requirement engineering is an integral part of the software development lifecycle, since the basis for developing successful software depends on comprehending its requirements in the first place. Requirement engineering involves a number of processes for gathering requirements in accordance with the needs and demands of users and stakeholders of the software product. In this paper, we have reviewed the prominent processes, tools and technologies used in the requirement gathering phase. The study is useful for perceiving the current state of affairs pertaining to requirement engineering research and for understanding the strengths and limitations of existing requirement engineering techniques. The study also summarizes the best practices and how to use a blend of requirement engineering techniques as an effective methodology to successfully conduct the requirement engineering task. The study also highlights the importance of security requirements: although they are part of the non-functional requirements, they are naturally considered fundamental to secure software development.

  16. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  18. Small-scale (flash) flood early warning in the light of operational requirements: opportunities and limits with regard to user demands, driving data, and hydrologic modeling techniques

    Science.gov (United States)

    Philipp, Andy; Kerl, Florian; Büttner, Uwe; Metzkes, Christine; Singer, Thomas; Wagner, Michael; Schütze, Niels

    2016-05-01

    In recent years, the Free State of Saxony (Eastern Germany) was repeatedly hit both by extensive riverine flooding and by flash flood events emerging foremost from convective heavy rainfall. Especially after a couple of small-scale yet disastrous events in 2010, the preconditions, drivers, and methods for deriving flash flood related early warning products are being investigated. This is to clarify the feasibility and the limits of envisaged early warning procedures for small catchments hit by flashy heavy rain events. Early warning about potentially flash flood prone situations (i.e., with a suitable lead time with regard to the reaction-time needs of the stakeholders involved in flood risk management) needs to take into account not only hydrological, but also meteorological as well as communication issues. Therefore, we propose a threefold methodology to identify potential benefits and limitations in a real-world warning/reaction context. First, the user demands (with respect to desired/required warning products, preparation times, etc.) are investigated. Second, focusing on small catchments of some hundred square kilometers, two quantitative precipitation forecasts (QPFs) are verified. Third, considering the user needs as well as the input parameter uncertainty (i.e., foremost emerging from an uncertain QPF), a feasible yet robust hydrological modeling approach is proposed on the basis of pilot studies, employing deterministic, data-driven, and simple scoring methods.

  19. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking of legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  20. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, Wilco; Jonkers, Henk; Sinderen, van Marten

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for enterprise...

  1. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, W.; Jonkers, Henk; van Sinderen, Marten J.

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for enterprise...

  2. Agent Based Multiviews Requirements Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on current research on viewpoint-oriented requirements engineering and intelligent agents, we present the concept of a viewpoint agent and its abstract model based on a meta-language for multiview requirements engineering. It provides a basis for consistency checking and integration of different viewpoint requirements; at the same time, this checking and integration can be realized automatically by virtue of an intelligent agent's autonomy, proactiveness and social ability. Finally, we introduce a practical application of the model through a case study of the data flow diagram.

  3. Requirement Defect Mitigation Technique: An Early Stage Implementation

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-09-01

    Full Text Available Identifying and mitigating requirement defects at an early stage of the Software Development Life Cycle (SDLC) is far more cost-effective than doing so at later stages. Requirements analysis in the requirement engineering process is the critical and major foundation of requirement defect identification. A poor requirement analysis process may lead to a software requirement specification (SRS) full of defects, such as misplaced, ambiguous, incompatible, misinterpreted, and incomplete requirements. In this paper, requirement defects are identified and properly mitigated according to their severity in the requirement phase, to avoid major rework and the extra cost and effort it would demand at later stages. A Defect Mitigation Technique (DMT) is proposed for mitigating the identified requirement defects, and the reliability of the requirements is assessed to deliver reliable requirements. The proposed algorithm supports the DMT in its processing, defect mitigation and reliability assessment. The prime motive of this study is to prevent requirements-stage defects from entering the later stages of the SDLC.

  4. A fuzzy model for exploiting customer requirements

    Directory of Open Access Journals (Sweden)

    Zahra Javadirad

    2016-09-01

    Full Text Available Nowadays, quality function deployment (QFD) is one of the total quality management tools, in which customers' views and requirements are captured and various techniques are used to improve production requirements and operations. The QFD department, after identification and analysis of the competitors, takes customers' feedback to meet the customers' demands for the products compared with the competitors. In this study, a comprehensive model for assessing the importance of the customer requirements in the products or services of an organization is proposed. The proposed study uses linguistic variables, as a more comprehensive approach, to increase the precision of the expression evaluations. The importance of these requirements specifies the strengths and weaknesses of the organization in meeting the requirements relative to competitors. The results of these experiments show that the proposed method performs better than the other methods.

  5. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article reports research on the differences among business process modeling techniques. For each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented for Somerleyton Animal Park. Each technique description ends with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use and serves as the basis for evaluating further modelling techniques.

  6. An Evaluation of Requirement Prioritization Techniques with ANP

    Directory of Open Access Journals (Sweden)

    Javed ali Khan

    2016-07-01

    Full Text Available This article presents an evaluation of seven software requirements prioritization methods (ANP, binary search tree, AHP, hierarchy AHP, spanning tree matrix, priority group and bubble sort). Based on the case study of a local project (automation of the Mobilink franchise system), the experiment was conducted by students in the Requirement Engineering course in the Department of Software Engineering at the University of Science and Technology Bannu, Khyber Pakhtunkhwa, Pakistan. The parameters/measures on which the requirements prioritization techniques are evaluated are consistency indication, scale of measurement, interdependence, required number of decisions, total time consumption, time consumption per decision, ease of use, reliability of results and fault tolerance. The results of the experiment show that ANP is the most successful prioritization methodology among all those available.
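
    For orientation, a minimal sketch of the AHP computation that several of the compared methods build on: a pairwise comparison matrix for three hypothetical requirements, the priority vector from its principal eigenvector, and Saaty's consistency ratio.

```python
import numpy as np

# AHP sketch: invented pairwise comparisons of three requirements on
# Saaty's 1-9 scale; priorities come from the principal eigenvector.
A = np.array([[1.0, 3.0, 5.0],     # R1 vs R1, R2, R3
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
w = np.abs(vecs[:, i].real)
w /= w.sum()                        # normalized priority vector

n = A.shape[0]
ci = (vals[i].real - n) / (n - 1)   # consistency index
cr = ci / 0.58                      # random index RI = 0.58 for n = 3

print("priorities:", np.round(w, 3))
print(f"consistency ratio: {cr:.3f}  (commonly accepted if < 0.10)")
```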

  7. Deriving Earth Science Data Analytics Tools/Techniques Requirements

    Science.gov (United States)

    Kempler, S. J.

    2015-12-01

    Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, goals which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.

  8. Extending enterprise architecture modelling with business goals and requirements

    Science.gov (United States)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  9. Requirements Analyses Integrating Goals and Problem Analysis Techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    One of the difficulties that goal-oriented requirements analysis encounters is that the efficiency of goal refinement depends on the analysts' subjective knowledge and experience. To improve the efficiency of the requirements elicitation process, engineers need approaches with more systematized analysis techniques. This paper integrates the goal-oriented requirements language i* with concepts from a structured problem analysis notation, problem frames (PF). The PF approach analyzes software design as a contextualized problem which has to respond to constraints imposed by the environment. The proposed approach is illustrated using the meeting scheduler exemplar. Results show that integration of goal and problem analysis enables simultaneous consideration of the designer's subjective intentions and the physical environmental constraints.

  10. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army`s next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment, and includes failure management subnetworks. RAM information and other performance measures are collected which have an impact on design requirements. Design changes are evaluated through ``what if`` questions, sensitivity studies, and battle scenario changes.
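
    A toy sketch of the task-based failure/repair logic described above; the task names, durations, MTBF and MTTR are invented placeholders, not AFAS/FARV data.

```python
import random

# A mission is a chronological sequence of tasks; during each task a
# failure may occur (exponential times to failure), adding a repair
# delay. Monte Carlo replication yields RAM-style mission statistics.
TASKS = [("upload", 0.5), ("travel", 1.0), ("refuel", 0.3),
         ("tactical_move", 0.7), ("return", 1.0)]     # durations [h]
MTBF, MTTR = 20.0, 0.8                                 # hours

def mission_time(rng):
    total = 0.0
    for _, dur in TASKS:
        t = 0.0
        while t < dur:                                 # execute the task
            ttf = rng.expovariate(1.0 / MTBF)          # time to next failure
            if ttf < dur - t:                          # failure mid-task
                t += ttf
                total += ttf + rng.expovariate(1.0 / MTTR)  # work + repair
            else:                                      # task completes
                total += dur - t
                t = dur
    return total

rng = random.Random(1)
times = [mission_time(rng) for _ in range(10_000)]
print(f"mean mission time: {sum(times) / len(times):.2f} h "
      f"(ideal {sum(d for _, d in TASKS):.2f} h)")
```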

  11. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  12. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.
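
    For orientation, the deterministic mixing stress can be formalized with the standard triple decomposition used in average-passage formulations (shown here as background; it is not necessarily the exact form adopted by this program):

```latex
% Triple decomposition of a velocity component at a fixed point:
\[ u_i \;=\; \bar{u}_i + \tilde{u}_i + u_i'' \]
% \bar{u}_i : time (passage) average,
% \tilde{u}_i : deterministic, blade-periodic fluctuation,
% u_i'' : random (turbulent) fluctuation.
% Averaging the flow equations then produces, alongside the usual
% Reynolds stress, a deterministic mixing stress
\[ \tau_{ij}^{\mathrm{det}} \;=\; -\rho\,\overline{\tilde{u}_i\,\tilde{u}_j}, \qquad
   \tau_{ij}^{\mathrm{turb}} \;=\; -\rho\,\overline{u_i''\,u_j''} \]
% The measurements described above aim to characterize the
% deterministic term so it can be modeled in steady CFD solvers.
```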

  13. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  14. Radiation Belt and Plasma Model Requirements

    Science.gov (United States)

    Barth, Janet L.

    2005-01-01

    Contents include the following: Radiation belt and plasma model environment. Environment hazards for systems and humans. Need for new models. How models are used. Model requirements. How can the space weather community help?

  15. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables...

  16. Requirements model for an e-Health awareness portal

    Science.gov (United States)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Nawi, Mohd Nasrun M.

    2016-08-01

    Requirements engineering is at the heart and foundation of the software engineering process. Poor quality requirements inevitably lead to poor quality software solutions, and poor requirement modeling is tantamount to designing a poor quality product. Quality-assured requirements development therefore goes hand in hand with usable products, giving the software product the quality it demands. In the light of the foregoing, the requirements for an e-Ebola Awareness Portal were modeled with good attention given to these software engineering concerns. The requirements for the e-Health Awareness Portal are modeled as a contribution to the fight against Ebola and help in the fulfillment of the United Nations' Millennium Development Goal No. 6. In this study requirements were modeled using the UML 2.0 modeling technique.

  17. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

    Full Text Available A survey is given of some crucial concepts in chemical process modeling. These are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles, and the fundamental structures of cause-and-effect relationships. As an example, it is shown how the concept of reaction invariance may simplify homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given against choosing the wrong mathematical structure for a model.
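
    The reaction-invariance idea in this abstract can be made concrete with standard stirred-reactor notation (chosen here purely for illustration):

```latex
% Homogeneous reactor balance for the concentration vector c(t),
% with stoichiometric matrix N and kinetic rate vector r(c):
\[ \frac{dc}{dt} \;=\; \frac{q}{V}\,\bigl(c_{\mathrm{in}} - c\bigr) + N\,r(c) \]
% Choose P whose rows span the left null space of N, i.e. P N = 0.
% The reaction invariants w = P c then obey
\[ \frac{dw}{dt} \;=\; \frac{q}{V}\,\bigl(w_{\mathrm{in}} - w\bigr) \]
% which is kinetics-free: residence-time parameters can be estimated
% from w with the reaction in situ, uncorrelated with the kinetic
% parameters inside r(c).
```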

  18. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  19. Chemonucleolysis technique. New oblique approach requires no measurements.

    Science.gov (United States)

    Romy, M

    1986-01-01

    The author describes a new technique for intradiscal therapy that eliminates the need for measurements. The new technique for entering the lumbar disk for discolysis from the oblique approach is described as simple, accurate and safe.

  20. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodeling techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of the metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even while the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
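
    A minimal sketch of such a mean-and-variance based stop rule during sequential sampling, with a Gaussian process standing in for the kriging model; the tolerances and the infill rule below are illustrative assumptions, not the authors' exact criterion.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                  # the expensive "true" response
    return np.sin(3 * x) + 0.5 * x

grid = np.linspace(0, 2, 200).reshape(-1, 1)
X = np.array([[0.0], [1.0], [2.0]])        # initial design
prev_mean = None

for it in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-8)
    gp.fit(X, f(X).ravel())
    mu, sd = gp.predict(grid, return_std=True)
    avg_var = np.mean(sd ** 2)             # integrated predictive variance
    drift = np.inf if prev_mean is None else np.mean(np.abs(mu - prev_mean))
    if drift < 1e-3 and avg_var < 1e-4:    # mean settled AND variance small
        print(f"stopping after adding {it} points")
        break
    prev_mean = mu
    x_new = grid[np.argmax(sd)]            # maximum-entropy-style infill
    X = np.vstack([X, [x_new]])

print(f"final integrated predictive variance: {avg_var:.2e}")
```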

  1. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time-consuming nature of the empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales as a power law, and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single-valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations, and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
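
    A small sketch of the sampling side of this approach, assuming scipy's Latin Hypercube implementation; the closed-form "amplitude" below is an invented stand-in for both the inspection simulation and the MARS surrogate.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube design over three inspection parameters, followed by
# a probability-of-detection style summary of the sampled responses.
def amplitude(p):
    angle, depth, probe = p[:, 0], p[:, 1], p[:, 2]
    return np.cos(angle) * np.exp(-2.0 * depth) * (0.8 + 0.4 * probe)

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=256)                       # 256 points in [0,1)^3
lo = np.array([0.0, 0.0, 0.0])
hi = np.array([np.pi / 6, 1.0, 1.0])               # illustrative ranges
pts = qmc.scale(unit, lo, hi)                      # map to parameter space

amp = amplitude(pts)
threshold = 0.2                                    # detection threshold
pod = np.mean(amp > threshold)
print(f"estimated probability of detection: {pod:.2f}")
```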

  2. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, embedded...

  3. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  4. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  5. Modeling Requirements for Cohort and Register IT.

    Science.gov (United States)

    Stäubert, Sebastian; Weber, Ulrike; Michalik, Claudia; Dress, Jochen; Ngouongo, Sylvie; Stausberg, Jürgen; Winter, Alfred

    2016-01-01

    The project KoRegIT (funded by TMF e.V.) aimed to develop a generic catalog of requirements for research networks like cohort studies and registers (KoReg). The catalog supports such research networks in building up and managing their organizational and IT infrastructure. The goal is to make transparent the complex relationships between requirements, which are described in use cases from a given text catalog; by analyzing and modeling the requirements, a better understanding and optimization of the catalog are intended. There are two subgoals: a) to investigate one cohort study and two registers and to model the current state of their IT infrastructure; b) to analyze the current state models and to find simplifications within the generic catalog. Processing the generic catalog was performed by means of text extraction, conceptualization and concept mapping. Then methods of enterprise architecture planning (EAP) are used to model the extracted information. To work on objective a), questionnaires were developed by utilizing the model. They are used for semi-structured interviews, whose results are evaluated via qualitative content analysis. Afterwards the current state was modeled. Objective b) was addressed by model analysis. A given generic text catalog of requirements was transferred into a model. As a result of objective a), current state models of one existing cohort study and two registers were created and analyzed. An optimized model called the KoReg-reference-model is the result of objective b). It is possible to use methods of EAP to model requirements. This enables a better overview of the partly connected requirements by means of visualization. The model-based approach also enables the analysis and comparison of the empirical data from the current state models. Information managers could reduce the effort of planning the IT infrastructure by utilizing the KoReg-reference-model. Modeling the current state and the generation of reports from the model, which could be used as

  6. An Extended Analysis of Requirements Traceability Model

    Institute of Scientific and Technical Information of China (English)

    Jiang Dandong(蒋丹东); Zhang Shensheng; Chen Lu

    2004-01-01

    A new extended meta-model of traceability is presented. Then, a formalized fine-grained model of traceability is described. Some major issues about this model, including trace units, requirements and relations within the model, are further analyzed. Finally, a case study from a key project of the 863 Program is given.

  7. Long-term dynamics simulation: Modeling requirements

    Energy Technology Data Exchange (ETDEWEB)

    Morched, A.S.; Kar, P.K.; Rogers, G.J.; Morison, G.K. (Ontario Hydro, Toronto, ON (Canada))

    1989-12-01

    This report details the required performance and modelling capabilities of a computer program intended for the study of the long-term dynamics of power systems. Following a general introduction which outlines the need for long-term dynamic studies, the modelling requirements for the conduct of such studies are discussed in detail. Particular emphasis is placed on models for system elements not normally modelled in power system stability programs, which will have a significant impact in the long-term time frame of minutes to hours following the initiating disturbance. The report concludes with a discussion of the special computational and programming requirements for a long-term stability program. 43 refs., 36 figs.

  8. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems, using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The aim of the authors' research is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions from their socially oriented life for the purposes of building rich socio-technical systems. They take a holistic approach to eliciting, analyzing, and modelling socially-oriented requirements by combining a particular form of ethnographic technique, cultural probes, with Agent-Oriented Software Engineering notations to model these requirements. This paper focuses on examining the value of maintaining...

  9. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on the functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two-round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling; if a certified tools list is one day available, any tool that does not meet the essential criteria would be excluded. Recommended requirements are more advanced requirements that may be met by tools offering a superior product or that are only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, we found a level of agreement high enough for the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support by end users. This list could also guide regulators in order to identify requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
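
    A miniature version of the surrogate-data test harness described above, with a deliberately trivial one-parameter "building model" invented for illustration; the three printed quantities mirror the paper's three figures of merit.

```python
import numpy as np

# The simulation itself generates "utility bills" from known true
# inputs, a calibration routine tunes a candidate model to those bills,
# and the three figures of merit are computed against known truth.
def model(ua, months=np.arange(12)):
    # monthly energy use for a building with overall loss coefficient ua
    return ua * (18.0 + 10.0 * np.cos(2 * np.pi * months / 12))

true_ua = 250.0
bills = model(true_ua)                          # synthetic utility data

# toy "calibration technique": brute-force search on the one parameter
grid = np.linspace(100, 400, 601)
errs = [np.sqrt(np.mean((model(ua) - bills) ** 2)) for ua in grid]
cal_ua = grid[int(np.argmin(errs))]

retrofit = 0.8                                  # retrofit scales UA by 0.8
true_savings = model(true_ua).sum() - model(true_ua * retrofit).sum()
pred_savings = model(cal_ua).sum() - model(cal_ua * retrofit).sum()

print(f"1) savings prediction error: {abs(pred_savings - true_savings):.1f}")
print(f"2) parameter recovery error: {abs(cal_ua - true_ua):.1f}")
print(f"3) utility-bill fit RMSE:    {min(errs):.3f}")
```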

  11. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  12. Information technology - Security techniques - Information security management systems - Requirements

    CERN Document Server

    International Organization for Standardization. Geneva

    2005-01-01

    ISO/IEC 27001:2005 covers all types of organizations (e.g. commercial enterprises, government agencies, not-for-profit organizations). ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within the context of the organization's overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof. ISO/IEC 27001:2005 is designed to ensure the selection of adequate and proportionate security controls that protect information assets and give confidence to interested parties. ISO/IEC 27001:2005 is intended to be suitable for several different types of use, including the following: use within organizations to formulate security requirements and objectives; use within organizations as a way to ensure that security risks are cost effectively managed; use within organizations to ensure compliance...

  13. Using data mining techniques for building fusion models

    Science.gov (United States)

    Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.

    2003-03-01

    Over the past decade many techniques have been developed which attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well provided that one has a model or a valid representation of activity. In reality, however, this is usually not the case. Models that do exist were in many cases hand-crafted, required many man-hours to develop, and are very brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, the theory and techniques we have developed, as well as the results of a set of experiments.

  14. Customer requirements based ERP customization using AHP technique

    NARCIS (Netherlands)

    Parthasarathy, S.; Daneva, Maya

    2014-01-01

    Purpose– Customization is a difficult task for many organizations implementing enterprise resource planning (ERP) systems. The purpose of this paper is to develop a new framework based on customers’ requirements to examine the ERP customization choices for the enterprise. The analytical hierarchy pr

  16. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-10-01

    Full Text Available The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object-oriented development. The domain model is a significant artifact that models the problem domain and visually represents real-world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories, which are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of domain models; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness, and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e., extra concepts, associations, and attributes in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list. There is no statistically significant difference between the two techniques in the case of correctness.

  18. A Requirements Analysis Model Based on QFD

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi-wei; Nelson K.H.Tang

    2004-01-01

    The enterprise resource planning (ERP) system has emerged to offer an integrated IT solution, and more and more enterprises are adopting this system and regarding it as an important innovation. However, there is already evidence of high failure risks in ERP project implementation; one major reason is poor analysis of the requirements for system implementation. In this paper, the importance of requirements analysis for ERP project implementation is highlighted, and a requirements analysis model applying quality function deployment (QFD) is presented, which supports the requirements analysis for ERP projects.
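
    The central QFD step the abstract refers to can be sketched as a relationship-matrix calculation: customer requirement weights are propagated through a 9/3/1 relationship matrix to rank technical characteristics of the ERP implementation. The requirements, weights, and matrix values below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    requirements = ["fast order entry", "audit trail", "multi-currency"]
    weights = np.array([5, 3, 4])              # customer importance ratings (1-5)

    characteristics = ["UI redesign", "logging module", "finance module"]
    # Rows: requirements; columns: characteristics; 9 = strong, 3 = medium, 1 = weak.
    relationship = np.array([[9, 1, 0],
                             [0, 9, 3],
                             [0, 1, 9]])

    scores = weights @ relationship            # technical priority scores
    for name, score in sorted(zip(characteristics, scores), key=lambda t: -t[1]):
        print(f"{name}: {score}")
    ```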

  19. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas. There is a need to find out the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies due to their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameters discussed in this review, to enhance the findings on the causes and effects of flooding.

  20. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  1. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which was reached based on the IETF reference model and the authors' proposal. Then the proposed information flows for the benchmark services are presented, with new or updated flows identified. Finally, a brief description of implementation techniques is given.

  2. 40 CFR 141.403 - Treatment technique requirements for ground water systems.

    Science.gov (United States)

    2010-07-01

    ....403 Treatment technique requirements for ground water systems. (a) Ground water systems with significant deficiencies or source water fecal contamination. (1) The treatment technique requirements of this... requirements of this section. (3) When a significant deficiency is identified at a Subpart H public...

  3. Aerobic Requirements for and Heart Rate Responses to Variations in Rope Jumping Techniques.

    Science.gov (United States)

    Solis, Ken; And Others

    1988-01-01

    Highly skilled ropejumpers can maintain their exercise intensity by varying their jumping technique. Research on heart rate responses and aerobic requirements of different jumping techniques is discussed. Methodology and data are reported. (Author/JL)

  4. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is the fact that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
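
    The AARJ sampler itself is well beyond a few lines, but the end product the abstract describes (posterior probabilities over candidate aerosol models, usable for averaging) can be imitated with a much cruder BIC approximation. The four polynomial cross-section models and all data below are synthetic stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    wl = np.linspace(350.0, 700.0, 40)               # wavelength grid (nm)
    truth = 1.0 - 8e-4 * (wl - 500.0)                # 'true' linear dependence
    y = truth + rng.normal(0.0, 0.02, wl.size)       # simulated retrieval data

    bics = []
    for degree in range(4):                          # four candidate models
        coeffs = np.polyfit(wl, y, degree)
        sigma2 = np.var(y - np.polyval(coeffs, wl))  # MLE residual variance
        loglik = -0.5 * y.size * (np.log(2 * np.pi * sigma2) + 1.0)
        bics.append(-2.0 * loglik + (degree + 1) * np.log(y.size))

    bics = np.array(bics)
    post = np.exp(-0.5 * (bics - bics.min()))        # approximate P(model | data)
    post /= post.sum()
    print(dict(enumerate(np.round(post, 3))))        # weights for model averaging
    ```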

  5. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling. First Interim Report, prepared for the US Army European Research Office, Edison House. Includes 'Sedimentation Analysis Sheets' and 'Guidelines for the Use of Sedimentation Analysis Sheets in the Field', prepared for the US Army Engineer Waterways Experiment Station; the remainder of the record is fragments of a field data form (material types, toe protection status, bank position).

  6. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in Medical Visualization allow the analysis of anatomical structures with the use of 3D models reconstructed from a stack of intensity-based images acquired through different techniques, with the Computerized Tomography (CT) modality being one of the most common. A general medical volume graphics application usually includes an exploration task which is sometimes preceded by an analysis process where the anatomical structures of interest are first identified. ...

  7. User Requirements and Domain Model Engineering

    NARCIS (Netherlands)

    Specht, Marcus; Glahn, Christian

    2006-01-01

    Specht, M., & Glahn, C. (2006). User requirements and domain model engineering. Presentation at International Workshop in Learning Networks for Lifelong Competence Development. March, 30-31, 2006. Sofia, Bulgaria: TENCompetence Conference. Retrieved June 30th, 2006, from http://dspace.learningnetwor

  9. Modeling requirements for in situ vitrification

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  10. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forestry and landscape is a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation lead to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and material of a large group of plants is a heavy burden even for high-performance computers, and they often overwhelm the numerical calculation power and graphic rendering power. Thus, in order to accelerate the rendering speed of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e. leaves, flowers, fruits and internodes. Our approach is a simplification process applied to all sparse organs at the same time, i.e., Level of Detail (LOD) and multi-resolution models for plants. We explain here the principle and construction of plant simplification. They are used to construct LOD and multi-resolution models of sparse organs and branches of big trees. These approaches benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally, we prove the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.

  11. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained auto...

  12. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

    Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  13. Understanding requirements via natural language information modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K.; Becker, S.D.

    1993-07-01

    Information system requirements that are expressed as simple English sentences provide a clear understanding of what is needed between system specifiers, administrators, users, and developers of information systems. The approach used to develop the requirements is the Natural-language Information Analysis Methodology (NIAM). NIAM allows the processes, events, and business rules to be modeled using natural language. The natural language presentation enables the people who deal with the business issues that are to be supported by the information system to describe exactly the system requirements that designers and developers will implement. Computer prattle is completely eliminated from the requirements discussion. An example is presented that is based upon a section of a DOE Order involving nuclear materials management. Where possible, the section is analyzed to specify the process(es) to be done, the event(s) that start the process, and the business rules that are to be followed during the process. Examples, including constraints, are developed. The presentation steps through the modeling process and shows where the section of the DOE Order needs clarification, extensions or interpretations that could provide a more complete and accurate specification.

  14. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    Full Text Available The literature lacks approaches for substantiating technical requirements for the vision systems (VS) of industrial robotic complexes (IRC). Therefore, the objective of this work is to develop a technique that allows the requirements for the main quality indicators of a VS, functioning as a part of an IRC, to be substantiated. The proposed technique uses a model representation of the VS, which, as a part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To solve the stated problem, the target function of a designed IRC is defined as the dependence of the IRC efficiency indicator on the VS quality indicators. The paper proposes to use, as the indicator of IRC efficiency, the probability of producing no defective products during manufacturing. Based on the functions the VS performs as a part of the IRC information system, the accepted indicators of VS quality are as follows: the probability of proper recognition of objects in the IRC working area, and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified values of permissible error. The specific values of these errors depend on the orientation errors of the working bodies of the manipulators that are a part of the IRC. The paper presents mathematical expressions that determine the functional dependence of the probability of producing no defective products on the VS quality indicators and the probability of failures of the IRC technological equipment. The proposed technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in IRC VS development and, in particular, in the development of VS algorithms and software.
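
    A hedged sketch of the kind of functional dependence described: assuming, purely for illustration, that the VS indicators and equipment reliability act as independent factors, a target efficiency can be converted into a bound on a single indicator. All probability values below are invented.

    ```python
    # Assumed factorization (illustrative only): P(no defective product) is the
    # product of the VS quality indicators and the equipment reliability.
    p_recognition = 0.999   # proper recognition of objects in the work area
    p_linear      = 0.998   # linear coordinates within permissible error
    p_angular     = 0.998   # angular coordinates within permissible error
    p_equipment   = 0.995   # no technological-equipment failure per part

    p_no_fault = p_recognition * p_linear * p_angular * p_equipment
    print(f"P(no defective product) = {p_no_fault:.4f}")

    # Inverse use: a required IRC efficiency bounds the recognition probability.
    required = 0.99
    p_recognition_min = required / (p_linear * p_angular * p_equipment)
    print(f"required P(recognition) >= {p_recognition_min:.4f}")
    ```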

  15. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously, the stations in Goddard earth models have been derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.
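
    The combination step can be sketched as a weighted least-squares solve of the normal equations, x = (A'WA)^-1 A'Wb. The two simulated observation sets below, with different noise levels standing in for dynamical and geometrical data, are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x_true = np.array([2.0, -1.0, 0.5])            # 'station' parameters

    A1 = rng.normal(size=(30, 3))                  # noisier 'dynamical' data
    b1 = A1 @ x_true + rng.normal(0.0, 0.10, 30)
    A2 = rng.normal(size=(20, 3))                  # precise 'geometrical' data
    b2 = A2 @ x_true + rng.normal(0.0, 0.02, 20)

    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    w = np.concatenate([np.full(30, 1 / 0.10**2),  # weight = 1 / variance
                        np.full(20, 1 / 0.02**2)])

    N = A.T @ (w[:, None] * A)                     # normal matrix A'WA
    u = A.T @ (w * b)                              # right-hand side A'Wb
    x_hat = np.linalg.solve(N, u)
    print(x_hat)                                   # close to x_true
    ```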

  16. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  17. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
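
    A minimal surrogate-modeling sketch in the spirit of this record, assuming a toy one-dimensional 'simulation' and a scikit-learn Gaussian-process surrogate in place of the actual RISMC codes: a handful of expensive runs train the surrogate, after which further evaluations are essentially free.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(x):
        """Stand-in for a long-running thermal-hydraulic simulation."""
        return np.sin(3.0 * x) + 0.5 * x

    X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)   # few costly runs
    y_train = expensive_simulation(X_train).ravel()

    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                         normalize_y=True).fit(X_train, y_train)

    X_query = np.linspace(0.0, 2.0, 200).reshape(-1, 1)  # cheap evaluations
    y_pred, y_std = surrogate.predict(X_query, return_std=True)
    print(float(np.max(np.abs(y_pred - expensive_simulation(X_query).ravel()))))
    ```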

  18. Domain-driven specification techniques simplify the analysis of requirements for the KAON factory central control system

    Energy Technology Data Exchange (ETDEWEB)

    Inwood, C. (Inwood Real-Time Systems Associates, Kinburn, ON (Canada)); Ludgate, G.A.; Dohan, D.A.; Osberg, E.A.; Koscielniak, S. (British Columbia Univ., Vancouver (Canada). TRIUMF Facility)

    1990-08-01

    Domain-driven modelling, outlined in this paper, has been successfully applied to the analysis, specification and design of the KAON Factory central control system (KF-CCS). This advanced object-oriented technique is especially suited to the development of complex systems. Early in the project, four very natural domains were identified which simplified the analysis of requirements. (orig.).

  20. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
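
    The flavor of software-assisted qualitative analysis described here can be sketched for a simple predator-prey system: locate an interior equilibrium numerically and classify its stability from the Jacobian eigenvalues. The Rosenzweig-MacArthur form and all parameter values below are illustrative; they are not the paper's ratio-dependent model.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    r, K, a, h, e, m = 1.0, 10.0, 1.0, 0.5, 0.6, 0.4   # illustrative parameters

    def rhs(z):
        n, p = z
        f = a * n / (1.0 + a * h * n)                  # type-II functional response
        return [r * n * (1.0 - n / K) - f * p, e * f * p - m * p]

    eq = fsolve(rhs, [2.0, 2.0])                       # interior equilibrium

    def jacobian(z, eps=1e-6):
        J = np.zeros((2, 2))
        for j in range(2):
            dz = np.zeros(2); dz[j] = eps
            J[:, j] = (np.array(rhs(z + dz)) - np.array(rhs(z - dz))) / (2 * eps)
        return J

    eigs = np.linalg.eigvals(jacobian(eq))
    print(eq, eigs, "stable" if np.all(eigs.real < 0) else "unstable")
    ```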

  1. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems, using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The aim of the authors' research is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions from people's socially oriented lives for the purposes of building rich socio-technical systems. Goals kept at a high level of abstraction, ambiguous and open for conversations through the modelling process, add richness to goal models and communicate quality attributes of the interaction being modelled to the design phase, where this ambiguity is regarded as a resource for design.

  2. Requirements for next-generation global flood inundation models

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Smith, A.; Sampson, C. C.

    2016-12-01

    In this paper we review the current status of global hydrodynamic models for flood inundation prediction and highlight recent successes and current limitations. Building on this analysis we then go on to consider what is required to develop the next generation of such schemes and show that to achieve this a number of fundamental science problems will need to be overcome. New data sets and new types of analysis will be required, and we show that these will only partially be met by currently planned satellite missions and data collection initiatives. A particular example is the quality of available global Digital Elevation data. The current best data set for flood modelling, SRTM, is only available at a relatively modest 30m resolution, contains pixel-to-pixel noise of 6m and is corrupted by surface artefacts. Creative processing techniques have sought to address these issues with some success, but fundamentally the quality of the available global terrain data limits flood modelling and needs to be overcome. Similar arguments can be made for many other elements of global hydrodynamic models including their bathymetry data, boundary conditions, flood defence information and model validation data. We therefore systematically review each component of global flood models and document whether planned new technology will solve current limitations and, if not, what exactly will be required to do so.

  3. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include the lack of stability of reduced order models (for the two-sided weighting case) and the lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction for discrete time systems is proposed. The proposed technique guarantees stable reduced order models even for the case when two-sided weightings are present. An efficient technique for computing the frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
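
    The unweighted machinery underneath the proposal (discrete-time Gramians, Hankel singular values, square-root balancing, truncation) can be sketched with SciPy; the frequency weights and the error bounds that are the paper's actual contribution are not reproduced here, and the random system is purely illustrative.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

    rng = np.random.default_rng(3)
    n, r = 6, 2                                     # full and reduced orders
    A = rng.normal(size=(n, n))
    A *= 0.9 / max(abs(np.linalg.eigvals(A)))       # make the system stable
    B = rng.normal(size=(n, 1)); C = rng.normal(size=(1, n))

    P = solve_discrete_lyapunov(A, B @ B.T)         # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)       # observability Gramian

    Lp = cholesky(P, lower=True); Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                       # s = Hankel singular values
    T = Lp @ Vt.T @ np.diag(s ** -0.5)              # balancing transformation
    Ti = np.diag(s ** -0.5) @ U.T @ Lq.T

    Ab = (Ti @ A @ T)[:r, :r]                       # truncated balanced realization
    Bb = (Ti @ B)[:r]; Cb = (C @ T)[:, :r]
    print("Hankel singular values:", np.round(s, 4))
    ```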

  4. Integrated modelling requires mass collaboration (Invited)

    Science.gov (United States)

    Moore, R. V.

    2009-12-01

    The need for sustainable solutions to the world’s problems is self-evident; the challenge is to anticipate where, in the environment, economy or society, the proposed solution will have negative consequences. If we failed to realise that the switch to biofuels would have the seemingly obvious result of reduced food production, how much harder will it be to predict the likely impact of policies whose impacts may be more subtle? It has been clear for a long time that models and data will be important tools for assessing the impact of events and the measures for their mitigation. They are an effective way of encapsulating knowledge of a process and using it for prediction. However, most models represent a single process or a small group of processes. The sustainability challenges that face us now require not just the prediction of a single process but the prediction of how many interacting processes will respond in given circumstances. These processes will not be confined to a single discipline but will often straddle many. For example, the question, “What will be the impact on river water quality of the medical plans for managing a ‘flu pandemic and could they cause a further health hazard?” spans medical planning, the absorption of drugs by the body, the spread of disease, the hydraulic and chemical processes in sewers and sewage treatment works and river water quality. This question nicely reflects the present state of the art. We have models of the processes, and standards such as the Open Modelling Interface (the OpenMI) allow them to be linked together and to datasets. We can therefore answer the question but with the important proviso that we thought to ask it. The next and greater challenge is to deal with the open question, “What are the implications of the medical plans for managing a ‘flu pandemic?”. This implies a system that can make connections that may well not have occurred to us and then evaluate their probable impact. The final touch will be to

  5. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (the variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book gives a qualitative summary of the various reported modeling techniques and approaches, and will help researchers and graduate students gain deeper insights into interconnect models in particular and interconnects in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  6. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    Full Text Available System identification is a complex optimization problem which has recently attracted attention in the fields of science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent FIR (finite impulse response) models since the former yield more accurate models of physical plants for real world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents the comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
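
    A minimal instance of the benchmark problem: output-error IIR identification posed as a multimodal optimization and handed to one representative ECT, SciPy's differential evolution. The plant coefficients and search bounds are invented for the sketch.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from scipy.signal import lfilter

    rng = np.random.default_rng(4)
    u = rng.normal(size=2000)                        # excitation input
    b_true, a_true = [0.3, -0.1], [1.0, -0.7, 0.12]  # unknown stable plant H(z)
    d = lfilter(b_true, a_true, u)                   # measured plant output

    def mse(theta):
        b = theta[:2]
        a = np.concatenate(([1.0], theta[2:]))       # monic denominator
        y = lfilter(b, a, u)
        if not np.all(np.isfinite(y)):               # penalize unstable candidates
            return 1e9
        return np.mean((d - y) ** 2)

    bounds = [(-1, 1), (-1, 1), (-2, 2), (-1, 1)]    # b0, b1, a1, a2
    result = differential_evolution(mse, bounds, seed=0, tol=1e-10)
    print(result.x, result.fun)                      # near the true coefficients
    ```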

  7. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  8. Gamified Requirements Engineering: Model and Experimentation

    NARCIS (Netherlands)

    Lombriser, Philipp; Dalpiaz, Fabiano; Lucassen, Garm; Brinkkemper, Sjaak

    2016-01-01

    [Context & Motivation] Engaging stakeholders in requirements engineering (RE) influences the quality of the requirements and ultimately of the system to-be. Unfortunately, stakeholder engagement is often insufficient, leading to too few, low-quality requirements. [Question/problem] We aim to evaluat...

  11. Updates on measurements and modeling techniques for expendable countermeasures

    Science.gov (United States)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  12. Use of surgical techniques in the rat pancreas transplantation model

    Institute of Scientific and Technical Information of China (English)

    Yi Ma; Zhi-Yong Guo

    2008-01-01

    BACKGROUND:Pancreas transplantation is currently considered to be the most reliable and effective treatment for insulin-dependent diabetes mellitus (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years. We investigated the surgical techniques of pancreas transplantation in rats by analysing the difference between cervical segmental pancreas transplantation and abdominal pancreaticoduodenal transplantation. METHODS:Two hundred and forty male adult Wistar rats weighing 200-300 g were used, 120 as donors and 120 as recipients. Sixty cervical segmental pancreas transplants and 60 abdominal pancreaticoduodenal transplants were carried out and vessel anastomoses were made with microsurgical techniques. RESULTS:The time of donor pancreas harvesting in the cervical and abdominal groups was 31±6 and 37.6±3.8 min, respectively, and the lengths of recipient operations were 49.2±5.6 and 60.6±7.8 min. The time for the donor operation was not significantly different (P>0.05), but the recipient operation time in the abdominal group was longer than that in the cervical group (P<0.05). CONCLUSIONS:Both pancreas transplantation methods are stable models for immunological and physiological studies in pancreas transplantation. Since each has its own advantages and disadvantages, the designer can choose the appropriate method according to the requirements of the study.

  13. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
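
    A drastically simplified, purely illustrative version of the additive-flux idea: a 1-D steady diffusion model with an assumed transport coefficient is augmented by a scalar additive diffusivity, which is tuned until the computed profile matches a synthetic 'experimental' one.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    x = np.linspace(0.0, 1.0, 101); dx = x[1] - x[0]
    S = np.exp(-((x - 0.2) / 0.3) ** 2)              # particle source

    def solve_profile(D):
        """Steady -(D n')' = S with n'(0) = 0 and n(1) = 0, uniform D."""
        flux = np.cumsum(S) * dx                     # Gamma(x) = integral of S
        grad = -flux / D                             # n' = -Gamma / D
        return -np.cumsum(grad[::-1])[::-1] * dx     # integrate back from n(1)=0

    n_exp = solve_profile(1.0)                       # 'experimental' profile
    D_model = 0.6                                    # transport model under test

    res = minimize_scalar(
        lambda d_add: np.sum((solve_profile(D_model + d_add) - n_exp) ** 2),
        bounds=(0.0, 2.0), method="bounded")
    print("additional diffusivity needed:", res.x)   # ~0.4 closes the gap
    ```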

  14. From requirements to Java in a snap model-driven requirements engineering in practice

    CERN Document Server

    Smialek, Michal

    2015-01-01

    This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and co

  15. Research into methods, models, and automisation requirements for fermentation, immobilisation, and processing techniques based on fluidisation regimes. Final report; Erforschung von Verfahren, Modellzusammenhaengen und Automatisierungserfordernissen fuer Fermentations-, Immobilisierungs- und Aufarbeitungsprozesse im Fluidisationsregime. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Kuenne, H.J.; Haida, H.; Lakowitz, R.; Behns, W. [FZB Biotechnik GmbH, Berlin (Germany); Ebenau, B. [FZB Biotechnik GmbH, Berlin (Germany)

    1992-05-31

    A gas/solid fluidised bed was used to immobilise cells in a model system consisting of the following components: glass sand as supporting material; Saccharomyces cerevisiae as the biological system; and carboxymethylcellulose or alginate as gelling agent. Two process variants were used for the immobilisation, namely simultaneous or consecutive application of the yeast suspension and polymer layer. The immobilisation products were tested with respect to their biological activity (staining test, fermentometer test) and mechanical stability (stirring test, shaking test, stream tube test). Microscopic examination in conjunction with the staining test revealed that no or only very slight damage occurs during the actual immobilisation and the subsequent hardening process. (orig./SR)

  16. Training Community College faculty in the techniques and skills required for Solar Energy System installation

    Energy Technology Data Exchange (ETDEWEB)

    Leo, R.J.

    1980-05-01

    A project to train a specified number of community college, vocational/technical faculty in the techniques and skills required to install solar energy systems is described. The planning that led to the contract, the development and conduct of the training workshops, and the outcomes are detailed. An overall evaluation of the project and recommendations for the future are included. (MHR)

  17. Modeling uncertainty in requirements engineering decision support

    Science.gov (United States)

    Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.

    2005-01-01

    One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements and to deal with the combination of such information from multiple stakeholders.

  19. A formal model for integrity protection based on DTE technique

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2006-01-01

    In order to provide integrity protection for a secure operating system that satisfies the structured protection class requirements, a DTE-based integrity protection formal model is proposed, after the implications and structures of the integrity policy have been analyzed in detail. This model consists of basic rules for configuring DTE and a state transition model, which specify how the domains and types are set, and how the security invariants obtained from the initial configuration are maintained in the process of system transition, respectively. In this model, ten invariants are introduced; in particular, some new invariants dealing with information flow are proposed, and their relations with the corresponding invariants described in the literature are also discussed. Thirteen transition rules with well-formed atomicity are presented in an operational manner. The basic security theorems corresponding to these invariants and transition rules are proved. The rationale for proposing the invariants is further annotated by analyzing the differences between this model and the ones described in the literature. Finally, future work is outlined; in particular, it is pointed out that it is possible to use this model to analyze SE-Linux security.
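
    For readers unfamiliar with DTE, the sketch below is a toy illustration (not the paper's formal model) of its two ingredients: an access matrix over (domain, type) pairs and domain transitions through authorized entry points. All domains, types, and rules are invented.

    ```python
    ACCESS = {                                  # (domain, type) -> allowed modes
        ("user_d",  "user_t"):   {"read", "write"},
        ("user_d",  "bin_t"):    {"read", "exec"},
        ("admin_d", "config_t"): {"read", "write"},
        ("user_d",  "config_t"): {"read"},      # integrity: no write-up from user_d
    }
    TRANSITIONS = {("user_d", "login_t"): "admin_d"}   # entry-point transitions

    def check(domain: str, obj_type: str, mode: str) -> bool:
        """Reference-monitor-style check of a single access."""
        return mode in ACCESS.get((domain, obj_type), set())

    def transition(domain: str, entry_type: str) -> str:
        """Change domain only via an authorized entry point."""
        return TRANSITIONS.get((domain, entry_type), domain)

    assert check("user_d", "config_t", "read")
    assert not check("user_d", "config_t", "write")    # invariant preserved
    assert transition("user_d", "login_t") == "admin_d"
    ```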

  20. Application of a systematic finite-element model modification technique to dynamic analysis of structures

    Science.gov (United States)

    Robinson, J. C.

    1982-01-01

    A systematic finite-element model modification technique has been applied to two small problems and a model of the main wing box of a research drone aircraft. The procedure determines the sensitivity of the eigenvalues and eigenvector components to specific structural changes, calculates the required changes, and modifies the finite-element model. Good results were obtained where large stiffness modifications were required to satisfy large eigenvalue changes. Sensitivity matrix conditioning problems required the development of techniques to ensure the existence of a solution and accelerate its convergence. A method is proposed to assist the analyst in selecting stiffness parameters for modification.
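
    The sensitivity step can be sketched directly: for the generalized eigenproblem K phi = lambda M phi with mass-normalized modes, d(lambda)/dp = phi' (dK/dp - lambda dM/dp) phi. The 3-DOF spring-mass chain below is an invented stand-in for the finite-element model.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def stiffness(k):                      # chain of three springs, fixed base
        k1, k2, k3 = k
        return np.array([[k1 + k2, -k2, 0.0],
                         [-k2, k2 + k3, -k3],
                         [0.0, -k3, k3]])

    k = np.array([100.0, 80.0, 60.0])
    M = np.diag([1.0, 1.0, 0.5])
    lam, Phi = eigh(stiffness(k), M)       # modes satisfy Phi' M Phi = I

    dK = stiffness(k + [1.0, 0, 0]) - stiffness(k)   # dK/dk1 (K is linear in k)
    sens = Phi[:, 0] @ dK @ Phi[:, 0]      # d(lambda_1)/dk1; dM/dk1 = 0 here

    # Check against a direct finite difference of the first eigenvalue.
    lam2, _ = eigh(stiffness(k + [1e-4, 0, 0]), M)
    print(sens, (lam2[0] - lam[0]) / 1e-4)
    ```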

  1. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum

  2. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
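
    Of the four method types, the area metric is the most compact to illustrate: the validation discrepancy is the area between the model's predicted CDF and the empirical CDF of the experimental observations. All data below are synthetic.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    rng = np.random.default_rng(5)
    model_cdf = stats.norm(loc=10.0, scale=1.0).cdf    # model prediction
    observations = rng.normal(10.4, 1.2, 25)           # experimental outcomes

    obs = np.sort(observations)
    grid = np.linspace(obs[0] - 4.0, obs[-1] + 4.0, 4001)
    emp_cdf = np.searchsorted(obs, grid, side="right") / obs.size

    area = trapezoid(np.abs(model_cdf(grid) - emp_cdf), grid)
    print("area validation metric:", round(float(area), 3))
    ```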

  3. SMV model-based safety analysis of software requirements

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Seong, Poong Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)], E-mail: phseong@kaist.ac.kr

    2009-02-15

    Fault tree analysis (FTA) is one of the most frequently applied safety analysis techniques when developing safety-critical industrial systems such as software-based emergency shutdown systems of nuclear power plants, and it has been used for safety analysis of software requirements in the nuclear industry. However, the conventional method for safety analysis of software requirements has several problems in terms of correctness and efficiency; the fault tree generated from natural language specifications may contain flaws or errors, while the manual work of safety verification is very labor-intensive and time-consuming. In this paper, we propose a new approach to resolve the problems of the conventional method; we generate a fault tree from a symbolic model verifier (SMV) model, not from natural language specifications, and verify safety properties automatically, not manually, by the SMV model checker. To demonstrate the feasibility of this approach, we applied it to shutdown system 2 (SDS2) of the Wolsong nuclear power plant (NPP). In spite of subtle ambiguities present in the approach, the results of this case study demonstrate its overall feasibility and effectiveness.

  4. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  5. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of the practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization and simulation of those models, using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge of decision taking i...

  6. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is usually a time-consuming process to set up, in real time, a 3D digital surface model (DSM) of an object with a complex surface. On the basis of the architectural survey project "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way: on the project site, applying digital close-range photogrammetry and CAD techniques to establish the DSM for simulating ancient architectures with complex surfaces. The method has proved very effective in practice.

  7. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  8. Diet formulation techniques and lysine requirements of 1- to 22-day-old broilers

    Directory of Open Access Journals (Sweden)

    JC Siqueira

    2013-06-01

    Two experiments were carried out to compare two techniques (amino acid supplementation and dilution) for formulating experimental diets for pre-starter (1 to 8 days) and starter (8 to 22 days) broiler chicks and to estimate digestible lysine requirements using the dose-response method. In each experiment, 1,200 male Cobb 500 chickens were randomly distributed according to a 5x2 factorial arrangement (lysine level x formulation technique) with six replicates of 20 birds each. For the supplemented diet, a basal diet was formulated to meet the nutritional requirements, then L-lysine HCl was added to achieve digestible lysine levels of 0.975, 1.082, 1.189, 1.296 and 1.403% in the pre-starter diets and 0.840, 0.932, 1.024, 1.116 and 1.208% in the starter diets. For the diluted diet, a diet high in crude protein (CP) and relatively low in lysine was formulated, to which a protein-free diet was added until lysine levels were similar to those described above for the supplemented diet. The results suggest that the dilution technique favored the performance potential and better met lysine requirements compared with the supplementation technique. Lysine levels required for optimal feed conversion ratio of broilers during the pre-starter and starter phases were estimated at 1.361 and 1.187%, which are equivalent to lysine intakes of 0.340 and 0.797 g/day, respectively.

  9. REQUIREMENT OF FLUIDITY OF HIGH WATER CONTENT MATERIALS FOR THE GATEWAY-SIDE BACKFILLING TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    Qi Taiyue; Ma Nianjie

    1996-01-01

    Through analyzing the effects of water consumption, diameter of solid particles, and flow velocity on the fluidity of high water content material slurry, the relationship among the fluidity, the isotropy of the slurry, and the pumping facilities applied in gateway-side backfilling has been established. The requirement on fluidity of high water content material for the design of the gateway-side backfilling technique is put forward in the paper.

  10. DECISION MAKING MODELING OF CONCRETE REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Suhartono Irawan

    2001-01-01

    This paper presents the results of an experimental comparison between predicted and actual concrete strength. The scope of the evaluation is the optimisation of the cement content for different concrete grades as a result of bringing the target mean value of test cubes closer to the required characteristic strength value by reducing the standard deviation. Abstract in Bahasa Indonesia. Keywords: concrete mix design, acceptance control, optimisation, cement content.

  11. A MODEL FOR ALIGNING SOFTWARE PROJECTS REQUIREMENTS WITH PROJECT TEAM MEMBERS REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Robert Hans

    2013-02-01

    The fast-paced, dynamic environment within which information and communication technology (ICT) projects are run, as well as ICT professionals' constantly changing requirements, presents a challenge for project managers in terms of aligning projects' requirements with project team members' requirements. This research paper posits that if projects' requirements are properly aligned with team members' requirements, then this will result in a balanced decision approach. Moreover, such an alignment will result in the realization of employees' needs as well as meeting the project's needs. This paper presents a Project's requirements and project Team members' requirements (PrTr) alignment model and argues that a balanced decision which meets both the software project's requirements and the team members' requirements can be achieved through the application of the PrTr alignment model.

  12. Supporting requirements model evolution throughout the system life-cycle

    OpenAIRE

    Ernst, Neil; Mylopoulos, John; Yu, Yijun; Ngyuen, Tien T.

    2008-01-01

    Requirements models are essential not just during system implementation, but also to manage system changes post-implementation. Such models should be supported by a requirements model management framework that allows users to create, manage and evolve models of domains, requirements, code and other design-time artifacts along with traceability links between their elements. We propose a comprehensive framework which delineates the operations and elements necessary, and then describe a tool imp...

  13. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.

  14. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and 'smart' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five-year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  15. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its application in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we have pointed out the working of various vehicle make and model recognition techniques and compared these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.

  16. Extending enterprise architecture modelling with business goals and requirements

    NARCIS (Netherlands)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; Sinderen, van Marten

    2011-01-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques…

  17. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also … parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index…

  18. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also … parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits…

  19. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  20. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results include the first Ω(lg n) query time lower bound for linear-space static data structures. We also present a technique for range reporting problems in the pointer machine and the I/O-model; with this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  1. Mixing Formal and Informal Model Elements for Tracing Requirements

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Ladenberger, Lukas

    2011-01-01

    Tracing between informal requirements and formal models is challenging. A method for such tracing should permit to deal efficiently with changes to both the requirements and the model. A particular challenge is posed by the persisting interplay of formal and informal elements. In this paper, we present a system for traceability with a state-based formal method that supports refinement. We do not require all specification elements to be modelled formally and support incremental incorporation of new specification elements into the formal model. Refinement is used to deal with larger amounts of requirements…

  2. Evaluation of the Protein Requirement in Chinese Young Adults Using the Indicator Amino Acid Oxidation Technique

    Institute of Scientific and Technical Information of China (English)

    LI Min; ZHANG Yu Hui; WANG Zhi Ling; GOU Ling Yan; LI Wei Dong; TIAN Yuan; HU Yi Chun; WANG Rui; PIAO Jian Hua; YANG Xiao Guang

    2013-01-01

    Objective To accurately calculate the protein requirements in Chinese young adults using the indicator amino acid oxidation technique. Methods Nine women and ten men received a restricted daily level of protein intake (0.75, 0.82, 0.89, 0.97, and 1.05 g/kg), along with L-[1-13C]-leucine. Subjects' protein requirement was determined by a biphasic linear regression crossover analysis of F13CO2 data. In doing so, a breakpoint at the minimal rate of appearance of 13CO2 expiration specific to each level of dietary protein was identified. This trial was registered with the Chinese clinical trial registry as ChiCTR-ONC-11001407. Results The Estimated Average Requirement (EAR) and the Recommended Nutrient Intake (RNI) of protein for healthy Chinese young adults were determined to be 0.87 and 0.98 g/(kg·d), respectively, based on the indicator amino acid oxidation technique. Conclusion The EAR and RNI of mixed protein are 5% and 16% lower, respectively, than the currently proposed EAR and RNI (0.92 and 1.16 g/(kg·d)), as determined by the nitrogen balance method. The respective EAR and RNI recommendations of 0.87 and 0.98 g/(kg·d) of mixed protein are estimated to be reasonable and suitable for Chinese young adults.
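
    The biphasic (broken-line) regression used here can be sketched as follows: fit two segments meeting at a candidate breakpoint and pick the breakpoint that minimizes the residual sum of squares. The numbers below are invented for illustration; only the method mirrors the abstract.

```python
import numpy as np

# Invented dose-response data: protein intake (g/kg) vs. F13CO2 signal.
x = np.array([0.75, 0.82, 0.89, 0.97, 1.05])
y = np.array([9.8, 8.1, 6.6, 5.9, 5.8])

def sse_for_break(bp):
    """Two-segment fit: sloped line below bp, plateau (mean) above."""
    lo, hi = x <= bp, x > bp
    if lo.sum() < 2 or hi.sum() < 1:
        return np.inf                        # degenerate split
    a, b = np.polyfit(x[lo], y[lo], 1)       # descending segment
    plateau = y[hi].mean()                   # flat segment
    resid = np.concatenate([y[lo] - (a * x[lo] + b), y[hi] - plateau])
    return float(resid @ resid)

candidates = np.linspace(x.min() + 0.01, x.max() - 0.01, 200)
best = min(candidates, key=sse_for_break)
print(f"estimated breakpoint (requirement): {best:.2f} g/kg")
```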

  3. Using cognitive modeling for requirements engineering in anesthesiology

    NARCIS (Netherlands)

    Pott, C; le Feber, J

    2005-01-01

    Cognitive modeling is a complexity reducing method to describe significant cognitive processes under a specified research focus. Here, a cognitive process model for decision making in anesthesiology is presented and applied in requirements engineering. Three decision making situations of

  4. Airway emergencies presenting to the paediatric emergency department requiring advanced management techniques.

    Science.gov (United States)

    Simma, Leopold; Cincotta, Domenic; Sabato, Stefan; Long, Elliot

    2017-09-01

    Airway emergencies presenting to the emergency department (ED) are usually managed with conventional equipment and techniques. The patient group managed urgently in the operating room (OR) has not been described. This study aims to describe a case series of children presenting to the ED with airway emergencies managed urgently in the OR, particularly the anaesthetic equipment and techniques used and the airway findings. A retrospective cohort study was undertaken at The Royal Children's Hospital, Melbourne, Australia. All patients presenting to the ED between 1 January 2012 and 30 July 2015 (42 months) with an airway emergency who were subsequently managed in the OR were included. Patient characteristics, anaesthetic equipment and technique and airway findings were recorded. Twenty-two airway emergencies in 21 patients were included over the study period, on average one every 2 months. Median age was 18 months and 43% were male. Inhalational induction was used in 77.3%, combined inhalational and intravenous induction in 9.1%, and intravenous induction alone in 13.6%. The most commonly used inhalational induction agent was sevoflurane, and the most commonly used intravenous induction agents were ketamine and propofol. Ten airway emergencies did not require intubation: seven for removal of inhaled foreign body, two with progressive tracheal stenosis requiring emergent dilatation, and one examination under anaesthesia to rule out inhaled foreign body. Of the 12 airway emergencies that required immediate intubation, direct laryngoscopy was used in 9 and fibre-optic intubating bronchoscopy in 3. For intubations performed by direct laryngoscopy, one was difficult (Cormack and Lehane grade 3). First-pass success was 83.3%. Adverse events occurred in 3/22 (13.6%) cases. Advanced airway techniques, including inhalational induction and intubation via fibre-optic intubating bronchoscope, are rarely but predictably required in the management of patients presenting to the ED

  5. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry-based equivalence…

  6. Prediction of survival with alternative modeling techniques using pseudo values

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)

    2014-01-01

    Background: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in alternative modeling techniques…

  7. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  8. Application of Krylov Reduction Technique for a Machine Tool Multibody Modelling

    Directory of Open Access Journals (Sweden)

    M. Sulitka

    2014-02-01

    Quick calculation of machine tool dynamic response represents one of the major requirements for machine tool virtual modelling and virtual machining, aiming at simulating the machining process performance, quality, and precision of a workpiece. Enhanced time effectiveness in machine tool dynamic simulations may be achieved by employing model order reduction (MOR) techniques on the full finite element (FE) models. The paper provides a case study aimed at comparison of the Krylov subspace-based and mode truncation techniques. The application of both reduction techniques for creating a machine tool multibody model is evaluated. The Krylov subspace reduction technique shows high quality in terms of the dynamic properties of the reduced multibody model and very low time demands at the same time.
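
    For intuition, the following is a minimal numpy sketch of Krylov-subspace reduction by projection: an Arnoldi basis of span{b, Ab, A²b, …} is built and the system matrices are projected onto it (this plain variant matches leading Markov parameters; production MOR codes typically use shifted or rational Krylov bases). The random test system is a stand-in, not a machine-tool FE model.

```python
import numpy as np

def arnoldi(A, b, k):
    """Orthonormal basis V of the Krylov subspace span{b, Ab, ..., A^(k-1)b}."""
    n = len(b)
    V = np.zeros((n, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt orthogonalization
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
n, k = 200, 10
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable-ish test system
b = rng.standard_normal(n)

V = arnoldi(A, b, k)
A_r, b_r = V.T @ A @ V, V.T @ b        # reduced k x k system
print(A_r.shape)                        # (10, 10)
```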

  9. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model

    Energy Technology Data Exchange (ETDEWEB)

    D. E. Shropshire; W. H. West

    2005-11-01

    The purpose of this Software Requirements Specification (SRS) is to define the top-level requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). This simulation model is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies.

  10. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Even with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow requirements…

  11. Tornado wind-loading requirements based on risk assessment techniques (For specific reactor safety Class 1 coolant system features)

    Science.gov (United States)

    Deobald, Theodore L.; Coles, Garill A.; Smith, Gary L.

    1992-01-01

    Regulations require that nuclear power plants be protected from tornado winds. If struck by a tornado, a plant must be capable of safely shutting down and removing decay heat. Probabilistic techniques are used to show that risk to the public from the U.S. Department of Energy SP-100 reactor is acceptable without tornado-hardening parts of the secondary system. Relaxed requirements for design wind loadings will result in significant cost savings. To demonstrate an acceptable level of risk, this document examines tornado-initiated accidents. The two tornado-initiated accidents examined in detail are loss of cooling resulting in core damage and loss of secondary system boundary integrity leading to sodium release. Loss of core cooling is analyzed using fault/event tree models. Loss of secondary system boundary integrity is analyzed by comparing the consequences to acceptance criteria for the release of radioactive material or alkali metal aerosol.

  12. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third, many researchers use terrestrial images through close-range photogrammetry with DSMs and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques fall into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). The paper then gives the conclusions of this study, together with a short justification and analysis and present trends in 3D city modeling. It thus provides an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3…

  13. Amino acid requirement studies in Oreochromis niloticus by application of principles of the diet dilution technique.

    Science.gov (United States)

    Liebert, F

    2009-12-01

    Data analysis utilized four growth experiments with mixed diets limiting in lysine, in threonine, and in methionine, respectively. All-male juvenile Oreochromis niloticus [12 g average body weight (BW) at start, four replicate tanks per diet, 56-day experimental period] provided the database for application of an exponential N-utilization model. Observed amino acid efficiency data were utilized for modelling of amino acid requirements depending on the level of daily protein deposition. According to the observed average dietary amino acid efficiency of the amino acids under study, 16.3 g/kg of lysine, 8.3 g/kg of threonine and 7.3 g/kg of methionine were established as the required feed contents for 187 mg daily protein deposition (50 g BW, feed intake at 3% of BW). Further modelling by use of graded dietary amino acid efficiency yielded strong evidence for the significance of this dietary factor of influence. The current data analysis has led to the conclusion that the applied non-linear modelling of amino acid requirements is an advantageous approach because of its quantitative reflection of graded dietary amino acid efficiency corresponding to protein deposition data. The procedure has the potential to contribute to alternate approaches for improved reliability of recommended quantitative amino acid supply in fish nutrition.

  14. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising…
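
    The pseudo-value construction itself is compact: for n subjects and a Kaplan-Meier estimate S(t), the i-th pseudo-observation at time t is n·S(t) − (n−1)·S₋ᵢ(t), where S₋ᵢ is the estimate with subject i left out; the resulting per-subject values can then be fed to any regression or machine-learning technique. A self-contained Python sketch with invented data:

```python
import numpy as np

def km_surv(time, event, t):
    """Kaplan-Meier survival probability at time t."""
    s = 1.0
    for d in np.sort(np.unique(time[event == 1])):
        if d > t:
            break
        at_risk = np.sum(time >= d)
        deaths = np.sum((time == d) & (event == 1))
        s *= 1.0 - deaths / at_risk
    return s

def pseudo_values(time, event, t):
    """Jackknife pseudo-observations of survival at time t."""
    n = len(time)
    s_full = km_surv(time, event, t)
    idx = np.arange(n)
    return np.array([n * s_full
                     - (n - 1) * km_surv(time[idx != i], event[idx != i], t)
                     for i in range(n)])

rng = np.random.default_rng(1)
time = rng.exponential(60, size=50)            # invented survival times
event = (rng.random(50) < 0.8).astype(int)     # 1 = death, 0 = censored

pv = pseudo_values(time, event, t=60.0)
print(pv[:5])   # per-subject outcomes usable in GLM/SVM/NNET etc.
```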

  15. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  16. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and is often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive, and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy-to-implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search, but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
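
    As a sketch of this kind of workflow, the snippet below minimizes a model-observation misfit cost over two biogeochemical-style parameters with SciPy's differential evolution (an evolutionary technique in the same family as those compared here). The toy "model" and observations are invented; a real application would wrap a 3-dimensional simulation instead.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Invented stand-in for an expensive model run: two parameters
# (e.g., growth and mortality rates) -> simulated observations.
def run_model(params, forcing):
    growth, mortality = params
    return growth * np.log1p(forcing) - mortality * forcing

forcing = np.linspace(0.1, 5.0, 40)
true_obs = run_model([1.8, 0.3], forcing)
obs = true_obs + np.random.default_rng(2).normal(0, 0.05, forcing.size)

def cost(params):
    """Model-observation misfit (RMSE) to be minimized."""
    return float(np.sqrt(np.mean((run_model(params, forcing) - obs) ** 2)))

result = differential_evolution(cost, bounds=[(0.1, 5.0), (0.01, 2.0)],
                                maxiter=50, seed=0, tol=1e-6)
print(result.x, result.fun)   # recovered parameters, final misfit
```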

  17. Process Model for Defining Space Sensing and Situational Awareness Requirements

    Science.gov (United States)

    2006-04-01

    A process model for defining systems for space sensing and space situational awareness is presented. The paper concentrates on eight steps for determining the requirements, including: decision maker needs, system requirements, exploitation methods and vulnerabilities, critical capabilities, and identification of attack scenarios. Utilization of the USAF anti-tamper (AT) implementation process as a process-model departure point for space sensing and situational awareness (SSSA)… is presented. The AT implementation process model, as an…

  18. Laser solder repair technique for nerve anastomosis: temperatures required for optimal tensile strength

    Science.gov (United States)

    McNally-Heintzelman, Karen M.; Dawes, Judith M.; Lauto, Antonio; Parker, Anthony E.; Owen, Earl R.; Piper, James A.

    1998-01-01

    Laser-assisted repair of nerves is often unsatisfactory and has a high failure rate. Two disadvantages of laser-assisted procedures are the low initial strength of the resulting anastomosis and thermal damage of tissue by laser heating. Temporary or permanent stay sutures are used, and fluid solders have been proposed to increase the strength of the repair. These techniques, however, have their own disadvantages, including foreign body reaction and difficulty of application. To address these problems, solid protein solder strips have been developed for use in conjunction with a diode laser for nerve anastomosis. The protein helps to supplement the bond, especially in the acute healing phase up to five days post-operative. Indocyanine green dye is added to the protein solder to absorb a laser wavelength (approximately 800 nm) that is poorly absorbed by water and other bodily tissues. This reduces the collateral thermal damage typically associated with other laser techniques. An investigation of the feasibility of the laser-solder repair technique in terms of required laser irradiance, tensile strength of the repair, and solder and tissue temperature is reported here. The tensile strength of repaired nerves rose steadily with laser irradiance, reaching a maximum of 105 ± 10 N·cm⁻² at 12.7 W·cm⁻². When higher laser irradiances were used, the tensile strength of the resulting bonds dropped. Histopathological analysis of the laser-soldered nerves, conducted immediately after surgery, showed the solder to have adhered well to the perineurial membrane, with minimal damage to the inner axons of the nerve. The maximum temperature reached at the solder surface and at the solder/nerve interface, measured using a non-contact fiber-optic radiometer and a thermocouple respectively, also rose steadily with laser irradiance. At 12.7 W·cm⁻², the temperatures reached at the surface and at the interface were 85 ± 4 and 68 ± 4 °C, respectively.

  19. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    … In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread…

  20. Matrix eigenvalue model: Feynman graph technique for all genera

    Energy Technology Data Exchange (ETDEWEB)

    Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation); Eynard, Bertrand [SPhT, CEA, Saclay (France)

    2006-12-15

    We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with an arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves).

  1. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  2. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.

  3. Requirements for Logical Models for Value-Added Tax Legislation

    DEFF Research Database (Denmark)

    Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis

    …-specific needs. Currently, these difficulties are handled in most major ERP systems by customising and localising the native code of the ERP systems for each specific country and industry. We propose an alternative that uses logical modeling of VAT legislation. The potential benefit is to eventually transform such a model automatically into programs that essentially will replace customisation and localisation by configuration, by changing parameters in the model. In particular, we: (1) identify a number of requirements for such modeling, including requirements for the underlying logic; (2) model salient parts…

  4. Some meta-modeling and optimization techniques for helicopter pre-sizing.

    OpenAIRE

    Tremolet, A.; Basset, P.M.

    2012-01-01

    Optimization and meta-models are key elements of modern engineering techniques. Multidisciplinary Design Optimization (MDO) allows solving strongly coupled physical problems aiming at global system optimization. For these multidisciplinary optimizations, meta-models can be required as surrogates for complex and computationally costly codes. Meta-modeling is also used for catching general trends and underlying relationships between parameters within a database. The application of these…

  5. Using informatics-enabled quality improvement techniques to meet health record documentation requirements in radiology reports.

    Science.gov (United States)

    Prevedello, Luciano M; Farkas, Cameron; Dufault, Allen; Damiano, Maria; Doubilet, Peter; Khorasani, Ramin

    2013-08-01

    Medicare requires documented teaching physician involvement (attestation) in trainee-generated radiology reports. Automated attestation statement insertion in reports expedites the process but does not comply with requirements for active attestation. We evaluated an informatics-enabled quality improvement (QI) intervention to improve health record documentation requirements for active attestation. Institutional review board approval was not needed for this QI project performed in a 776-bed tertiary/quaternary teaching hospital. The intervention consisted of (1) a policy requiring staff radiologists to actively attest to trainee-generated reports by personally activating a "macro" in the reporting system and (2) a semiautomated process to detect reports missing attestation; radiologists received daily e-mail reminders until the attestation statement was inserted. A random sample of 600 of 123,561 trainee-generated radiology reports created 17 months after the intervention (May 2011) was manually reviewed to determine attestation policy adherence. The number of attestation statements added in response to reminders throughout the entire study period was also evaluated. Trend analysis of the number of report addenda containing solely the attestation statement (a proxy for missing initial attestation) was performed. Of 600 reports, 594 (99%) contained the attestation statement. Monthly attestations in response to e-mail notifications decreased from 585 to 227 by the sixth month, a 2.6-fold reduction (P < .01). No significant trend was observed the following year, indicating a sustained effect. Informatics-enabled QI techniques resulted in 99% adherence to our teaching physician attestation policy with sustained results. Similar approaches may help improve adherence to other mandated performance measures in radiology reports.

  6. Digital Avionics Information System (DAIS): Training Requirements Analysis Model (TRAMOD).

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    The training requirements analysis model (TRAMOD) described in this report represents an important portion of the larger effort called the Digital Avionics Information System (DAIS) Life Cycle Cost (LCC) Study. TRAMOD is the second of three models that comprise an LCC impact modeling system for use in the early stages of system development. As…

  7. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    Science.gov (United States)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
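
    To illustrate the comparison, here is a short sketch fitting the same 1-D test response two ways: a least-squares quadratic polynomial and a kriging-style Gaussian-process interpolator (via scikit-learn). The test function is invented for the example and is deliberately multimodal, the case where the paper notes quadratic models struggle.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Invented multimodal test response sampled at a few design points.
f = lambda x: np.sin(3 * x) + 0.3 * x**2
x_train = np.linspace(0, 4, 8)
y_train = f(x_train)
x_test = np.linspace(0, 4, 100)

# 1) Quadratic polynomial via least squares.
poly_coef = np.polyfit(x_train, y_train, deg=2)
y_poly = np.polyval(poly_coef, x_test)

# 2) Kriging-style interpolation (GP with RBF kernel, no noise term).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(x_train.reshape(-1, 1), y_train)
y_gp = gp.predict(x_test.reshape(-1, 1))

for name, pred in [("quadratic", y_poly), ("kriging", y_gp)]:
    rmse = np.sqrt(np.mean((pred - f(x_test)) ** 2))
    print(f"{name:10s} RMSE: {rmse:.3f}")   # GP tracks the local extrema
```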

  8. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    Science.gov (United States)

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  9. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    Science.gov (United States)

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  10. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Even with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach.

  11. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills in using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models of other vertebrates. I learned ZBrush and 3D scanning and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  12. A finite element parametric modeling technique of aircraft wing structures

    Institute of Scientific and Technical Information of China (English)

    Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu

    2013-01-01

    A finite element parametric modeling method for aircraft wing structures is proposed in this paper to address the time-consuming nature of finite element analysis pre-processing. The main research is positioned during the preliminary design phase of aircraft structures. A knowledge-driven system for fast finite element modeling is built. Based on this method, employing a template parametric technique, knowledge including design methods, rules, and expert experience in the process of modeling is encapsulated, and a finite element model is established automatically, which greatly improves the speed, accuracy, and standardization degree of modeling. The skeleton model, geometric mesh model, and finite element model, including finite element mesh and property data, are established on the basis of parametric description and automatic update. The outcomes of the research show that the method settles a series of problems of parameter association and model update in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.

  13. Double-wire sternal closure technique in bovine animal models for total artificial heart implant.

    Science.gov (United States)

    Karimov, Jamshid H; Sunagawa, Gengo; Golding, Leonard A R; Moazami, Nader; Fukamachi, Kiyotaka

    2015-08-01

    In vivo preclinical testing of mechanical circulatory devices requires large animal models that provide reliable physiological and hemodynamic conditions by which to test the device and investigate design and development strategies. Large bovine species are commonly used for mechanical circulatory support device research. The animals used for chronic in vivo support require high-quality care and excellent surgical techniques as well as advanced methods of postoperative care. These techniques are constantly being updated and new methods are emerging. We report results of our double steel-wire closure technique in large bovine models used for Cleveland Clinic's continuous-flow total artificial heart development program. This is the first report of double-wire sternal fixation used in large bovine models.

  14. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
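
    The test loop described here can be prototyped in a few lines: use a simple surrogate "building model", generate synthetic utility data from known "true" parameters, run a calibration technique against that data, and score the three figures of merit. Everything below (the linear energy model, the parameter values, the retrofit) is a deliberately simplified stand-in for a real simulation engine, not the paper's procedure verbatim.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in "simulation engine": monthly energy from two parameters.
def energy(params, hdd):
    ua, base = params                  # heat-loss coefficient, base load
    return ua * hdd + base

hdd = np.array([900, 700, 500, 200, 50, 10, 5, 20, 150, 400, 650, 850])
true_params = np.array([0.9, 300.0])
bills = energy(true_params, hdd)       # synthetic surrogate utility data

# Candidate calibration technique: least-squares fit from a poor guess.
fit = minimize(lambda p: np.sum((energy(p, hdd) - bills) ** 2),
               x0=[0.5, 100.0], method="Nelder-Mead")

# Figures of merit: (2) closure on true parameters, (3) goodness of fit,
# (1) accuracy of a predicted retrofit saving (here: 20% lower UA).
savings_true = energy(true_params, hdd).sum() - energy([0.72, 300.0], hdd).sum()
savings_est = energy(fit.x, hdd).sum() - energy([0.8 * fit.x[0], fit.x[1]], hdd).sum()
print(fit.x, fit.fun, savings_true, savings_est)
```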

  15. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.

  16. GENERAL REQUIREMENTS FOR SIMULATION MODELS IN WASTE MANAGEMENT

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Ian; Kossik, Rick; Voss, Charlie

    2003-02-27

    Most waste management activities are decided upon and carried out in a public or semi-public arena, typically involving the waste management organization, one or more regulators, and often other stakeholders and members of the public. In these environments, simulation modeling can be a powerful tool in reaching a consensus on the best path forward, but only if the models that are developed are understood and accepted by all of the parties involved. These requirements for understanding and acceptance of the models constrain the appropriate software and model development procedures that are employed. This paper discusses requirements for both simulation software and for the models that are developed using the software. Requirements for the software include transparency, accessibility, flexibility, extensibility, quality assurance, ability to do discrete and/or continuous simulation, and efficiency. Requirements for the models that are developed include traceability, transparency, credibility/validity, and quality control. The paper discusses these requirements with specific reference to the requirements for performance assessment models that are used for predicting the long-term safety of waste disposal facilities, such as the proposed Yucca Mountain repository.

  17. Requirements engineering for cross-sectional information chain models.

    Science.gov (United States)

    Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O

    2012-01-01

    Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed.

  18. "Open Access" Requires Clarification: Medical Journal Publication Models Evolve.

    Science.gov (United States)

    Lubowitz, James H; Brand, Jefferson C; Rossi, Michael J; Provencher, Matthew T

    2017-03-01

    While Arthroscopy journal is a traditional subscription model journal, our companion journal Arthroscopy Techniques is "open access." We used to believe open access simply meant online and free of charge. However, while open-access journals are free to readers, in 2017 authors must make a greater sacrifice in the form of an article-processing charge (APC). Again, while this does not apply to Arthroscopy, the APC will apply to Arthroscopy Techniques.

  19. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...

  20. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...

  1. Inferring Requirement Goals from Model Implementing in UML

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    UML is used widely in many software developmentprocesses.However,it does not make explicit requirement goals.Here is a method tending to establish the semantic relationship between requirements goals and UML models.Before the method is introduced,some relevant concepts are described

  2. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.

  3. How Many Model Evaluations Are Required To Predict The AEP Of A Wind Power Plant?

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Natarajan, Anand;

    2015-01-01

    Wind farm flow models have advanced considerably with the use of large eddy simulations (LES) and Reynolds averaged Navier-Stokes (RANS) computations. The main limitation of these techniques is their high computational time requirements; which makes their use for wind farm annual energy production...

  4. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    The actual power requirement of an active loudspeaker during playback of music has not received much attention in the literature. This is probably because no single and simple solution exists and because a complete system knowledge from input voltage to output sound pressure level is required....... There are however many advantages that could be harvested from such knowledge like size, cost and efficiency improvements. In this paper a recently proposed power requirement model for active loudspeakers is experimentally validated and the model is expanded to include the closed and vented type enclosures...

  5. Towards a Formalized Ontology-Based Requirements Model

    Institute of Scientific and Technical Information of China (English)

    JIANG Dan-dong; ZHANG Shen-sheng; WANG Ying-lin

    2005-01-01

    The goal of this paper is to take a further step towards an ontological approach for representing requirements information. The motivation for ontologies was discussed. The definitions of ontology and requirements ontology were given. Then, it presented a collection of informal terms, including four subject areas. It also discussed the formalization process of ontology. The underlying meta-ontology was determined, and the formalized requirements ontology was analyzed. This formal ontology is built to serve as a basis for requirements model. Finally, the implementation of software system was given.

  6. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  7. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium, balance between vegetation and climate, and non-equilibrium, water added through irrigation. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as an irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence with a frequency of 24.6 hours. In contrast, the drip irrigation required only 0.6 mm every 45.6 hours or 46% of that simulated by the spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use, when soil salinity is not important and 66% in saline lands.

  8. Innovative Product Design Based on Customer Requirement Weight Calculation Model

    Institute of Scientific and Technical Information of China (English)

    Chen-Guang Guo; Yong-Xian Liu; Shou-Ming Hou; Wei Wang

    2010-01-01

    In the processes of product innovation and design, it is important for the designers to find and capture customer's focus through customer requirement weight calculation and ranking. Based on the fuzzy set theory and Euclidean space distance, this paper puts forward a method for customer requirement weight calculation called Euclidean space distances weighting ranking method. This method is used in the fuzzy analytic hierarchy process that satisfies the additive consistent fuzzy matrix. A model for the weight calculation steps is constructed;meanwhile, a product innovation design module on the basis of the customer requirement weight calculation model is developed. Finally, combined with the instance of titanium sponge production, the customer requirement weight calculation model is validated. By the innovation design module, the structure of the titanium sponge reactor has been improved and made innovative.

  9. An analytical framework for data stream mining techniques based on challenges and requirements

    CERN Document Server

    Kholghi, Mahnoosh

    2011-01-01

    A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Real-time surveillance systems, telecommunication systems, sensor networks and other dynamic environments are such examples. The imminent need for turning such data into useful information and knowledge augments the development of systems, algorithms and frameworks that address streaming challenges. The storage, querying and mining of such data sets are highly computationally challenging tasks. Mining data streams is concerned with extracting knowledge structures represented in models and patterns in non stopping streams of information. Generally, two main challenges are designing fast mining methods for data streams and need to promptly detect changing concepts and data distribution because of highly dynamic nature of data streams. The goal of this article is to analyze and classify the application of diverse data mining techniques in different challenges of data stream mining. In this...

  10. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model of this paper considers four-stage cycle productivity and the productivity is assumed to be a linear function of fifty four improvement techniques. The proposed model of this paper is implemented for a real-world case study of manufacturing plant. The resulted problem is formulated as a mixed integer programming which can be solved for optimality using traditional methods. The preliminary results of the implementation of the proposed model of this paper indicate that the productivity can be improved through a change on equipments and it can be easily applied for both manufacturing and service industries.

  11. Predicting the academic success of architecture students by pre-enrolment requirement: using machine-learning techniques

    Directory of Open Access Journals (Sweden)

    Ralph Olusola Aluko

    2016-12-01

    Full Text Available In recent years, there has been an increase in the number of applicants seeking admission into architecture programmes. As expected, prior academic performance (also referred to as pre-enrolment requirement is a major factor considered during the process of selecting applicants. In the present study, machine learning models were used to predict academic success of architecture students based on information provided in prior academic performance. Two modeling techniques, namely K-nearest neighbour (k-NN and linear discriminant analysis were applied in the study. It was found that K-nearest neighbour (k-NN outperforms the linear discriminant analysis model in terms of accuracy. In addition, grades obtained in mathematics (at ordinary level examinations had a significant impact on the academic success of undergraduate architecture students. This paper makes a modest contribution to the ongoing discussion on the relationship between prior academic performance and academic success of undergraduate students by evaluating this proposition. One of the issues that emerges from these findings is that prior academic performance can be used as a predictor of academic success in undergraduate architecture programmes. Overall, the developed k-NN model can serve as a valuable tool during the process of selecting new intakes into undergraduate architecture programmes in Nigeria.

  12. Concerning the Feasibility of Example-driven Modelling Techniques

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2008-01-01

    We report on a series of experiments concerning the feasibility of example driven \\ud modelling. The main aim was to establish experimentally within an academic \\ud environment; the relationship between error and task complexity using a) Traditional \\ud spreadsheet modelling, b) example driven techniques. We report on the experimental \\ud design, sampling, research methods and the tasks set for both control and treatment \\ud groups. Analysis of the completed tasks allows comparison of several...

  13. Advanced Phase noise modeling techniques of nonlinear microwave devices

    OpenAIRE

    Prigent, M.; J. C. Nallatamby; R. Quere

    2004-01-01

    In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques in non linear microwave devices must be supported by a proven combination of the following : - Electrical modeling of low-frequency noise of semiconductor devices, oriented to circuit CAD . The local noise sources will be either cyclostationary noise sources or quasistationary noise sources. - Theoretic...

  14. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  15. Evaluation of Foreign Exchange Risk Capital Requirement Models

    Directory of Open Access Journals (Sweden)

    Ricardo S. Maia Clemente

    2005-12-01

    Full Text Available This paper examines capital requirement for financial institutions in order to cover market risk stemming from exposure to foreign currencies. The models examined belong to two groups according to the approach involved: standardized and internal models. In the first group, we study the Basel model and the model adopted by the Brazilian legislation. In the second group, we consider the models based on the concept of value at risk (VaR. We analyze the single and the double-window historical model, the exponential smoothing model (EWMA and a hybrid approach that combines features of both models. The results suggest that the Basel model is inadequate to the Brazilian market, exhibiting a large number of exceptions. The model of the Brazilian legislation has no exceptions, though generating higher capital requirements than other internal models based on VaR. In general, VaR-based models perform better and result in less capital allocation than the standardized approach model applied in Brazil.

  16. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  17. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  18. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications with each item assigned to measure a particular level. Recently used taxonomies of the cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in current literature about existence of predefined cognitive levels. The aim of this article is to investigate can structural equation modeling techniques confirm existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626. Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items as well as qualitative techniques might be more useful for the purpose of the cognitive levels validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items and so improvements are needed in describing the cognitive skills measured by items.

  19. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  20. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Full Text Available Enterprise modelling techniques support business process (reengineering by capturing existing processes and based on perceived outputs, support the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools on the other hand are used extensively for policy analysis and modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a Manufacturing Enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.

  1. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of Business Process Model and Notation (BPMN is to provide easily understandable graphical representation of business process. Thus BPMN is widely used and applied in various areas one of them being a business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulate requirements for business process and resource models in enabling their use for business process simulation.

  2. Predicting Flu Season Requirements: An Undergraduate Modeling Project

    Science.gov (United States)

    Kramlich, Gary R., II; Braunstein Fierson, Janet L.; Wright, J. Adam

    2010-01-01

    This project was designed to be used in a freshman calculus class whose students had already been introduced to logistic functions and basic data modeling techniques. It need not be limited to such an audience, however; it has also been implemented in a topics in mathematics class for college upperclassmen. Originally intended to be presented in…

  3. System Design Description and Requirements for Modeling the Off-Gas Systems for Fuel Recycling Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Daryl R. Haefner; Jack D. Law; Troy J. Tranter

    2010-08-01

    This document provides descriptions of the off-gases evolved during spent nuclear fuel processing and the systems used to capture the gases of concern. Two reprocessing techniques are discussed, namely aqueous separations and electrochemical (pyrochemical) processing. The unit operations associated with each process are described in enough detail so that computer models to mimic their behavior can be developed. The document also lists the general requirements for the desired computer models.

  4. Separable Watermarking Technique Using the Biological Color Model

    Directory of Open Access Journals (Sweden)

    David Nino

    2009-01-01

    Full Text Available Problem statement: The issue of having robust and fragile watermarking is still main focus for various researchers worldwide. Performance of a watermarking technique depends on how complex as well as how feasible to implement. These issues are tested using various kinds of attacks including geometry and transformation. Watermarking techniques in color images are more challenging than gray images in terms of complexity and information handling. In this study, we focused on implementation of watermarking technique in color images using the biological model. Approach: We proposed a novel method for watermarking using spatial and the Discrete Cosine Transform (DCT domains. The proposed method deled with colored images in the biological color model, the Hue, Saturation and Intensity (HSI. Technique was implemented and used against various colored images including the standard ones such as pepper image. The experiments were done using various attacks such as cropping, transformation and geometry. Results: The method robustness showed high accuracy in retrieval data and technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.

  5. Q-DPM: An Efficient Model-Free Dynamic Power Management Technique

    CERN Document Server

    Li, Min; Yao, Richard; Yan, Xiaolang

    2011-01-01

    When applying Dynamic Power Management (DPM) technique to pervasively deployed embedded systems, the technique needs to be very efficient so that it is feasible to implement the technique on low end processor and tight-budget memory. Furthermore, it should have the capability to track time varying behavior rapidly because the time varying is an inherent characteristic of real world system. Existing methods, which are usually model-based, may not satisfy the aforementioned requirements. In this paper, we propose a model-free DPM technique based on Q-Learning. Q-DPM is much more efficient because it removes the overhead of parameter estimator and mode-switch controller. Furthermore, its policy optimization is performed via consecutive online trialing, which also leads to very rapid response to time varying behavior.

  6. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft s power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn s internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  7. A modeling technique for STOVL ejector and volume dynamics

    Science.gov (United States)

    Drummond, C. K.; Barankiewicz, W. S.

    1990-01-01

    New models for thrust augmenting ejector performance prediction and feeder duct dynamic analysis are presented and applied to a proposed Short Take Off and Vertical Landing (STOVL) aircraft configuration. Central to the analysis is the nontraditional treatment of the time-dependent volume integrals in the otherwise conventional control-volume approach. In the case of the thrust augmenting ejector, the analysis required a new relationship for transfer of kinetic energy from the primary flow to the secondary flow. Extraction of the required empirical corrections from current steady-state experimental data is discussed; a possible approach for modeling insight through Computational Fluid Dynamics (CFD) is presented.

  8. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexitiesis the process of obtaining a system design from its global requirement specification. This paper deals withsuch transformation process and suggests an approach to derive the behavior of a given systemcomponents, in the form of distributed Finite State Machines, from the global system requirements, in theform of an augmented UML Activity Diagrams notation. The process of the suggested approach issummarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model, the definition of the target Design Meta-Model and the definition of the rules to govern thetransformation during the derivation process. The derivation process transforms the global systemrequirements described as UML diagram activities (extended with collaborations to system rolesbehaviors represented as UML finite state machines. The approach is implemented using AtlasTransformation Language (ATL.

  9. Experimental technique of calibration of symmetrical air pollution models

    Indian Academy of Sciences (India)

    P Kumar

    2005-10-01

    Based on the inherent property of symmetry of air pollution models,a Symmetrical Air Pollution Model Index (SAPMI)has been developed to calibrate the accuracy of predictions made by such models,where the initial quantity of release at the source is not known.For exact prediction the value of SAPMI should be equal to 1.If the predicted values are overestimating then SAPMI is > 1and if it is underestimating then SAPMI is < 1.Specific design for the layout of receptors has been suggested as a requirement for the calibration experiments.SAPMI is applicable for all variations of symmetrical air pollution dispersion models.

  10. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes tinder the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.

  11. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft high Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA s Space Exploration program.

  12. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary among kayaking literature, with inconsistencies not conducive for the advancement of biomechanics applied service or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, therefore were suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data should be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  13. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    Science.gov (United States)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  14. A Hybrid Parallel Execution Model for Logic Based Requirement Specifications (Invited Paper

    Directory of Open Access Journals (Sweden)

    Jeffrey J. P. Tsai

    1999-05-01

    Full Text Available It is well known that undiscovered errors in a requirements specification is extremely expensive to be fixed when discovered in the software maintenance phase. Errors in the requirement phase can be reduced through the validation and verification of the requirements specification. Many logic-based requirements specification languages have been developed to achieve these goals. However, the execution and reasoning of a logic-based requirements specification can be very slow. An effective way to improve their performance is to execute and reason the logic-based requirements specification in parallel. In this paper, we present a hybrid model to facilitate the parallel execution of a logic-based requirements specification language. A logic-based specification is first applied by a data dependency analysis technique which can find all the mode combinations that exist within a specification clause. This mode information is used to support a novel hybrid parallel execution model, which combines both top-down and bottom-up evaluation strategies. This new execution model can find the failure in the deepest node of the search tree at the early stage of the evaluation, thus this new execution model can reduce the total number of nodes searched in the tree, the total processes needed to be generated, and the total communication channels needed in the search process. A simulator has been implemented to analyze the execution behavior of the new model. Experiments show significant improvement based on several criteria.

  15. New techniques required to understand the by-stander effect in situ.

    Science.gov (United States)

    Britten, Richard

    2008-03-01

    The by-stander effect has been known for nearly a century under various names, of which the abscopal effect is probably the most well known. More recently the by-stander effect has received a lot of attention, and various models have been developed to assess the relative importance of the bystander effect in radiation treatment. It is clear that irradiated cells release factors that lead to alterations in the physiology of adjacent irradiated cells, both via inter-cellular junctions and through systemic factors. Most studies that have sought to identify the systemic factors and the cellular mechanisms that are responsible for the bystander effect have by necessity used in vitro systems. The purpose of this presentation is to alert the audience to the various techniques that are available to study the proteomic changes related to the bystander effect in situ. We shall pay attention to the use of MALDI-imaging to track spatial proteomic changes in tissue that have been exposed to microbeams.

  16. Formal Requirements Modeling for Reactive Systems with Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon

    This dissertation presents the contributions of seven publications all concerned with the application of Coloured Petri Nets (CPN) to requirements modeling for reactive systems. The publications are introduced along with relevant background material and related work, and their contributions...... interface composed of recognizable artifacts and activities. The presentation of the three publications related to Use Cases is followed by a the presentation of a publication formalizing some of the guidelines applied for structuring the CPN requirements models|namely the guidelines that make it possible...... activity. The traces are automatically recorded during execution of the model. The second publication presents a formally specified framework for automating a large part of the tasks related to integrating Problem Frames with CPN. The framework is specified in VDM++, and allows the modeler to automatically...

  17. One technique for refining the global Earth gravity models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of the theoretical and experimental research on the technique for refining the global Earth geopotential models such as EGM2008 in the continental regions are presented. The discussed technique is based on the high-resolution satellite data for the Earth's surface topography which enables the allowance for the fine structure of the Earth's gravitational field without the additional gravimetry data. The experimental studies are conducted by the example of the new GGMplus global gravity model of the Earth with a resolution about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with the corrections for the topograohy calculated from the SRTM data. The GGMplus and EGM2008 models are compared with the regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining the global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in the regions with relatively high elevation difference.

  18. Modelling of pulverized coal boilers: review and validation of on-line simulation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Diez, L.I.; Cortes, C.; Campo, A. [University of Zaragoza, Zaragoza (Spain). Centro de Investigacion de Recursos y Consumos Energeticos (CIRCE)

    2005-07-01

    Thermal modelling of large pulverized fuel utility boilers has reached a very remarkable development, through the application of CFD techniques and other advanced mathematical methods. However, due to the computational requirements, on-line monitoring and simulation tools still rely on lumped models and semiempirical approaches, which are often strongly simplified and not well connected with sound theoretical basis. This paper reviews on-line modelling techniques, aiming at the improvement of their capabilities, by means of the revision and modification of conventional lumped models and the integration of off-line CFD predictions. The paper illustrates the coherence of monitoring calculations as well as the validation of the on-line thermal simulator, starting from real operation data from a case-study unit. The outcome is that it is possible to significantly improve the accuracy of on-line calculations provided by conventional models, taking into account the singularities of large combustion systems and coupling offline CFD predictions for selected scenarios.

  19. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters along with the stochastic nature of this technology leads to a complex process control, which requires a work focused in process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling in Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of process parameters. Combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  20. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.

    Science.gov (United States)

    Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana

    2012-05-15

    Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For ill-posed water quality model the differences between the results were much wider; and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. number of iterations required to generate the probability

  1. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    Science.gov (United States)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  2. Single High Fidelity Geometric Data Sets for LCM - Model Requirements

    Science.gov (United States)

    2006-11-01

    material name (example, an HY80 steel ) plus additional material requirements (heat treatment, etc.) Creation of a more detailed description of the data...57 Figure 2.22. Typical Stress-Strain Curve for Steel (adapted from Ref 59) .............................. 60 Figure...structures are steel , aluminum and composites. The structural components that make up a global FEA model drive the fidelity of the model. For example

  3. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed on off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A sequence of nested corresponding robustly positive invariant set, which is either ellipsoidal or polyhedral set, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. If otherwise, a feedback gain is variable and determined by a linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results showed that the proposed interpolation techniques significantly improve control performance of off-line robust model predictive control without much sacrificing on-line computational performance.

  4. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.

  5. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    OpenAIRE

    N.RATHIKA; Dr.A.Senthil kumar; A.ANUSUYA

    2014-01-01

    This paper intends to the modeling of polyphase synchronous generator and minimization of power losses using Particle swarm optimization (PSO) technique with a constriction factor. Usage of Polyphase synchronous generator mainly leads to the total power circulation in the system which can be distributed in all phases. Another advantage of polyphase system is the fault at one winding does not lead to the system shutdown. The Process optimization is the chastisement of adjusting a process so as...

  6. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  7. Equivalence and differences between structural equation modeling and state-space modeling techniques

    NARCIS (Netherlands)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, E.L.; Dolan, C.V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and

  8. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  9. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  11. Models of protein and amino acid requirements for cattle

    Directory of Open Access Journals (Sweden)

    Luis Orlindo Tedeschi

    2015-03-01

    Protein supply and requirements of ruminants have been studied for more than a century. These studies led to the accumulation of a large body of scientific information about the digestion and metabolism of protein by ruminants, as well as the characterization of dietary protein in order to maximize animal performance. During the 1980s and 1990s, when computers became more accessible and powerful, scientists began to conceptualize and develop mathematical nutrition models, and to program them into computers to assist with ration balancing and formulation for domesticated ruminants, specifically dairy and beef cattle. The most commonly known nutrition models developed during this period were those of the National Research Council (NRC) in the United States, the Agricultural Research Council (ARC) in the United Kingdom, the Institut National de la Recherche Agronomique (INRA) in France, and the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. Others were derivative works from these models with different degrees of modification in the supply or requirement calculations, and in the modeling nature (e.g., static or dynamic, mechanistic or deterministic). Circa the 1990s, most models adopted the metabolizable protein (MP) system over the crude protein (CP) and digestible CP systems to estimate the supply of MP, and the factorial system to calculate the MP required by the animal. The MP system included two portions of protein (i.e., the rumen-undegraded dietary CP (RUP) and the contribution of microbial CP (MCP)) as the main sources of MP for the animal. Some models would explicitly account for the impact of dry matter intake (DMI) on the MP required for maintenance (MPm) (e.g., the Cornell Net Carbohydrate and Protein System (CNCPS) and the Dutch DVE/OEB system), while others would simply account for scurf, urinary, metabolic fecal, and endogenous contributions independently of DMI. All models included milk yield and its components in estimating MP required for lactation

  12. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
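
    The report does not reproduce its methods, so as a concrete illustration of one global, interaction-aware technique of the kind mentioned above, the sketch below estimates first-order Sobol sensitivity indices for a toy nonlinear model; the model, sample sizes, and estimator choice are illustrative assumptions, not Sandia's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy nonlinear model with input interactions (the Ishigami function),
    # standing in for a cognitive/behavioral simulation output.
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # swap column i in from B
    # Saltelli-style estimator of the first-order Sobol index S_i
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"S_{i + 1} ~ {S_i:.3f}")
```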

  13. Modeling requirements for in situ vitrification. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  14. Required experimental accuracy to select between supersymmetrical models

    Indian Academy of Sciences (India)

    David Grellscheid

    2004-03-01

    We will present a method to decide a priori whether various supersymmetrical scenarios can be distinguished based on sparticle mass data alone. For each model, a scan over all free SUSY breaking parameters reveals the extent of that model's physically allowed region of sparticle-mass-space. Based on the geometrical configuration of these regions in mass-space, it is possible to obtain an estimate of the required accuracy of future sparticle mass measurements to distinguish between the models. We will illustrate this algorithm with an example. This talk is based on work done in collaboration with B C Allanach (LAPTH, Annecy) and F Quevedo (DAMTP, Cambridge).

  15. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
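
    As an illustration of the two simplest schemes above, the sketch below applies a Simple Multi-model Average and a Weighted Average Method to synthetic hydrographs from three hypothetical models; the least-squares weighting and all data are illustrative assumptions, not the paper's DMIP setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical event: observed flow and three uncalibrated model hydrographs.
t = np.arange(200)
obs = 50 + 30 * np.exp(-((t - 80) / 25.0) ** 2)
sims = np.stack([s * obs + b + rng.normal(0, 3, t.size)
                 for s, b in [(0.8, 5.0), (1.1, -4.0), (0.95, 8.0)]])

sma = sims.mean(axis=0)                    # Simple Multi-model Average

# Weighted Average Method: least-squares weights fit on a training window.
train = slice(0, 120)
w, *_ = np.linalg.lstsq(sims[:, train].T, obs[train], rcond=None)
wam = w @ sims

for name, pred in [("best single", sims[2]), ("SMA", sma), ("WAM", wam)]:
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    print(f"{name:12s} RMSE = {rmse:.2f}")
```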

  16. A comparative assessment of efficient uncertainty analysis techniques for environmental fate and transport models: application to the FACT model

    Science.gov (United States)

    Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.

    2005-06-01

    This work presents a comparative assessment of efficient uncertainty modeling techniques, including Stochastic Response Surface Method (SRSM) and High Dimensional Model Representation (HDMR). This assessment considers improvement achieved with respect to conventional techniques of modeling uncertainty (Monte Carlo). Given that traditional methods for characterizing uncertainty are very computationally demanding, when they are applied in conjunction with complex environmental fate and transport models, this study aims to assess how accurately these efficient (and hence viable) techniques for uncertainty propagation can capture complex model output uncertainty. As a part of this effort, the efficacy of HDMR, which has primarily been used in the past as a model reduction tool, is also demonstrated for uncertainty analysis. The application chosen to highlight the accuracy of these new techniques is the steady state analysis of the groundwater flow in the Savannah River Site General Separations Area (GSA) using the subsurface Flow And Contaminant Transport (FACT) code. Uncertain inputs included three-dimensional hydraulic conductivity fields, and a two-dimensional recharge rate field. The output variables under consideration were the simulated stream baseflows and hydraulic head values. Results show that the uncertainty analysis outcomes obtained using SRSM and HDMR are practically indistinguishable from those obtained using the conventional Monte Carlo method, while requiring orders of magnitude fewer model simulations.
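
    A minimal sketch of the idea behind response-surface uncertainty propagation, under stated assumptions: a cheap quadratic surrogate is fitted to a handful of runs of a stand-in "expensive" model and then sampled in its place. The model, inputs, and polynomial form are invented for illustration and are not the SRSM or HDMR formulations, nor the FACT code.

```python
import numpy as np

rng = np.random.default_rng(2)

def fate_model(k, r):
    # Stand-in for an expensive flow/transport code: an output (say, a
    # baseflow) as a nonlinear function of log-conductivity k and recharge r.
    return np.exp(0.5 * k) * (1.0 + 0.3 * r) + 0.1 * k * r

def basis(k, r):
    # Quadratic response-surface basis in the two uncertain inputs.
    return np.column_stack([np.ones_like(k), k, r, k**2, r**2, k * r])

# Brute-force Monte Carlo reference: many "expensive" runs.
k_mc, r_mc = rng.normal(0, 1, 50_000), rng.normal(0, 1, 50_000)
y_mc = fate_model(k_mc, r_mc)

# Response surface fitted from only 50 model runs.
k_s, r_s = rng.normal(0, 1, 50), rng.normal(0, 1, 50)
coef, *_ = np.linalg.lstsq(basis(k_s, r_s), fate_model(k_s, r_s), rcond=None)

# Propagate the input uncertainty through the cheap surrogate instead.
y_sur = basis(k_mc, r_mc) @ coef
print(f"MC        mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}")
print(f"surrogate mean = {y_sur.mean():.3f}, std = {y_sur.std():.3f}")
```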

  17. An Investigation into Energy Requirements and Conservation Techniques for Sustainable Buildings

    Science.gov (United States)

    Robitaille, Jad

    Traditionally, societies used to design their built environment in a way that was in line with the climate and the geographical location in which they evolved, thereby supporting sustainable lifestyles (e.g., thick walls with small windows in cold climates). With the industrial revolution and the heavy use of and reliance on cheap fossil fuels, it can be argued that the built environment has become more focused on aesthetics and cost savings than on true sustainability. This, in turn, has led to energy intensive practices associated with the construction of homes, buildings, cities and megalopolises. Environmental concerns about the future have pushed people, entities and industries to search for ways to decrease humanity's energy dependence and/or to supply the demand in ways that are deemed sustainable. Efforts to address this concern with respect to the built environment were translated into 'green buildings', sustainable building technologies and high performance buildings that can be rated and/or licensed by selected certifying bodies with varying metrics of building construction and performance. The growing number of such systems has brought real concerns: do certified sustainable buildings really achieve the level of sustainability (i.e. performance) they were intended to? For the purpose of this study, buildings' energy consumption is analysed, as it is one of the main drivers of greenhouse gas emissions. Heating and cooling in the residential and commercial/institutional sectors combined account for approximately a fifth of the secondary energy use in Canada. For this reason, this research aims at evaluating the main rating systems in Canada based on the efficacy of their certification methodology and on the weighting and comparison of energy requirements under each scheme. It has been proven through numerous studies that major energy savings can be achieved by focusing primarily on building designs

  18. Thermodynamic models for bounding pressurant mass requirements of cryogenic tanks

    Science.gov (United States)

    Vandresar, Neil T.; Haberbusch, Mark S.

    1994-01-01

    Thermodynamic models have been formulated to predict lower and upper bounds for the mass of pressurant gas required to pressurize a cryogenic tank and then expel liquid from the tank. Limiting conditions are based on either thermal equilibrium or zero energy exchange between the pressurant gas and initial tank contents. The models are independent of gravity level and allow specification of autogenous or non-condensable pressurants. Partial liquid fill levels may be specified for initial and final conditions. Model predictions are shown to successfully bound results from limited normal-gravity tests with condensable and non-condensable pressurant gases. Representative maximum collapse factor maps are presented for liquid hydrogen to show the effects of initial and final fill level on the range of pressurant gas requirements. Maximum collapse factors occur for partial expulsions with large final liquid fill fractions.
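
    A minimal sketch of the bounding idea, assuming ideal-gas behavior (the paper's actual model equations are not given in the abstract): zero energy exchange leaves the pressurant at its inlet temperature, while thermal equilibrium cools it toward the tank contents, bounding the required mass from below and above.

```python
# Ideal-gas bounds (my assumptions, not the paper's equations) on the
# pressurant mass needed to expel liquid volume V at constant tank
# pressure P.  Zero energy exchange keeps the gas at its inlet
# temperature T_in (lower bound); thermal equilibrium cools it to the
# tank contents' temperature T_tank (upper bound, "vapor collapse").
R = 8.314        # J/(mol K)
M_HE = 4.0e-3    # kg/mol, helium pressurant

def pressurant_mass(P, V, T):
    return P * V * M_HE / (R * T)

P, V = 300e3, 2.0            # Pa, m^3 of liquid expelled (illustrative)
T_in, T_tank = 300.0, 20.0   # K, warm inlet gas vs liquid-hydrogen tank
m_lo = pressurant_mass(P, V, T_in)
m_hi = pressurant_mass(P, V, T_tank)
print(f"lower bound {m_lo:.2f} kg, upper bound {m_hi:.2f} kg, "
      f"collapse factor <= {m_hi / m_lo:.1f}")
```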

  19. Model Waveform Accuracy Requirements for the $\\chi^2$ Discriminator

    CERN Document Server

    Lindblom, Lee

    2016-01-01

    This paper derives accuracy standards for model gravitational waveforms required to ensure proper use of the $\chi^2$ discriminator test in gravitational wave (GW) data analysis. These standards are different from previously established requirements for detection and waveform parameter measurement based on signal-to-noise optimization. We present convenient formulae both for evaluating and interpreting the contribution of model errors to measured $\chi^2$ values. Motivated by these formulae, we also present an enhanced, complexified variant of the standard $\chi^2$ statistic used in GW searches. While our results are not directly relevant to current searches (which use the $\chi^2$ test only to veto signal candidates with extremely high $\chi^2$ values), they could be useful in future GW searches and as figures of merit for model gravitational waveforms.

  20. A commuting generation model requiring only aggregated data

    CERN Document Server

    Lenormand, Maxime; Gargiulo, Floriana

    2011-01-01

    We recently proposed, in (Gargiulo et al., 2011), an innovative stochastic model with only one parameter to calibrate. It reproduces the complete network by an iterative process that stochastically chooses, for each commuter living in a municipality of a region, a workplace in the region. The choice is made considering the job offer in each municipality of the region and the distance to all possible destinations. The model is quite effective if the region is sufficiently autonomous in terms of job offers. However, calibrating the model or verifying this autonomy requires data or expertise which are not necessarily available; moreover, the region may in fact not be autonomous. In the present work, we overcome these limitations by extending the commuters' job-search geographical base beyond the region and by changing the form of the deterrence function. We also found a law to calibrate the improved model which does not require data.
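
    A sketch of the iterative, stochastic assignment described above, with a single deterrence parameter; the exponential deterrence form, the region, and all numbers are illustrative assumptions, not the paper's calibrated law.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical region: job offers per municipality and a distance matrix (km).
jobs = np.array([1200.0, 300.0, 4500.0, 800.0, 150.0])
dist = rng.uniform(2, 60, (5, 5))
np.fill_diagonal(dist, 1.0)

beta = 0.05  # single deterrence parameter to calibrate

def choose_workplace(home, jobs_left):
    # Destination probability ~ remaining job offers x distance deterrence.
    p = jobs_left * np.exp(-beta * dist[home])
    return rng.choice(jobs_left.size, p=p / p.sum())

commuters = [0] * 500 + [2] * 900 + [4] * 100   # municipality of residence
jobs_left = jobs.copy()
flows = np.zeros((5, 5), dtype=int)
for home in commuters:
    work = choose_workplace(home, jobs_left)
    jobs_left[work] -= 1                         # the job offer is consumed
    flows[home, work] += 1
print(flows)
```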

  1. A New Mathematical Modeling Technique for Pull Production Control Systems

    Directory of Open Access Journals (Sweden)

    O. Srikanth

    2013-12-01

    The Kanban Control System is widely used to control the release of parts in multistage manufacturing systems operating under a pull production control system. Most of the work on the Kanban Control System deals with multi-product manufacturing systems. In this paper, we propose a regression modeling technique for a multistage manufacturing system that coordinates the release of parts into each stage of the system with the arrival of customer demands for final products. We also compare two variants of the Kanban Control System model, combining mathematical and Simulink models for the production coordination of parts in an assembly manufacturing system. In both variants, the production of a new subassembly is authorized only when an assembly kanban is available; assembly kanbans become available when finished product is consumed. A simulation environment for the product-line system is generated with the proposed model, and the mathematical model is implemented against the simulation model in MATLAB. Both the simulation and model outputs provide an in-depth analysis of each resulting control system for modeling a product-line system.

  2. Evolution of Modelling Techniques for Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Mikit Kanakia

    2014-07-01

    Service-oriented architecture (SOA) is a software design and architecture pattern based on independent pieces of software providing functionality as services to other applications. The benefit of SOA in the IT infrastructure is that it allows parallel use and data exchange between programs which act as services to the enterprise. The Unified Modelling Language (UML) is a standardized general-purpose modelling language in the field of software engineering; it includes a set of graphic notation techniques to create visual models of object-oriented software systems. We want to make UML available for SOA as well. SoaML (Service oriented architecture Modelling Language) is an open source specification project from the Object Management Group (OMG), describing a UML profile and meta-model for the modelling and design of services within a service-oriented architecture. BPMN was also extended for SOA, but with a few pitfalls. There is a need for a modelling framework dedicated to SOA. Michael Bell authored such a framework, called the Service Oriented Modelling Framework (SOMF), which is dedicated to SOA.

  3. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  4. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  5. Mathematical Modeling of Programmatic Requirements for Yaws Eradication

    Science.gov (United States)

    Mitjà, Oriol; Fitzpatrick, Christopher; Asiedu, Kingsley; Solomon, Anthony W.; Mabey, David C.W.; Funk, Sebastian

    2017-01-01

    Yaws is targeted for eradication by 2020. The mainstay of the eradication strategy is mass treatment followed by case finding. Modeling has been used to inform programmatic requirements for other neglected tropical diseases and could provide insights into yaws eradication. We developed a model of yaws transmission varying the coverage and number of rounds of treatment. The estimated number of cases arising from an index case (basic reproduction number [R0]) ranged from 1.08 to 3.32. To have 80% probability of achieving eradication, 8 rounds of treatment with 80% coverage were required at low estimates of R0 (1.45). This requirement increased to 95% at high estimates of R0 (2.47). Extending the treatment interval to 12 months increased requirements at all estimates of R0. At high estimates of R0 with 12 monthly rounds of treatment, no combination of variables achieved eradication. Models should be used to guide the scale-up of yaws eradication. PMID:27983500
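
    The abstract does not specify the transmission model, so the sketch below is only a crude stochastic stand-in: discrete transmission generations with saturation, interrupted by mass-treatment rounds at a given coverage, used to estimate an eradication probability as a function of R0. The generation structure, population size, and all rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def eradicated(R0, coverage, rounds, N=10_000, I0=500, gens_between=2):
    # A treatment round cures a fraction `coverage` of everyone, then a
    # few discrete transmission generations pass before the next round.
    I = I0
    for _ in range(rounds):
        I = rng.binomial(I, 1.0 - coverage)            # mass treatment
        for _ in range(gens_between):                  # ongoing transmission
            I = min(rng.poisson(R0 * I * (1 - I / N)), N)
        if I == 0:
            return True
    return I == 0

for R0 in (1.45, 2.47):
    p = np.mean([eradicated(R0, coverage=0.80, rounds=8) for _ in range(500)])
    print(f"R0 = {R0}: P(eradication) ~ {p:.2f}")
```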

  6. Cloud Package Selection for Academic Requirements using Multi Criteria Decision Making based Modified Ant Colony Optimization Technique

    Directory of Open Access Journals (Sweden)

    C.Madhumathi

    2016-04-01

    Quality of Service (QoS) and user satisfaction are two of the major requirements considered by current cloud service providers. In order to incorporate these qualities into the cloud resource selection framework, users' requirements must be clearly known. This paper presents an effective cloud package allocation technique that utilizes user logs and fuzzy user inputs to identify the user requirements and perform optimal allocations. Since cloud packages are predefined and do not correspond directly to user requirements, optimal package allocation is the only option. This process is carried out by Ant Colony Optimization (ACO). Due to the metaheuristic nature of ACO, the results obtained from this selection technique were found to be optimal, and they were obtained quickly even with a large number of agents (ants). Experiments show that ACO provides optimal and fast allocations.
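
    A minimal pheromone-reinforcement sketch of ACO-style package selection; the paper's modified-ACO details, fuzzy inputs, and log mining are not reproduced, and the packages, fitness function, and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical packages: (CPU cores, RAM GB, storage GB, price/month).
pkgs = np.array([[2, 4, 100, 20], [4, 8, 250, 45],
                 [8, 16, 500, 90], [16, 32, 1000, 160]], dtype=float)
need = np.array([6.0, 12.0, 300.0])    # invented user requirement

def fitness(i):
    shortfall = np.clip(need - pkgs[i, :3], 0, None).sum()   # unmet need
    return 1.0 / (1.0 + shortfall + 0.05 * pkgs[i, 3])       # adequate + cheap

tau = np.ones(len(pkgs))               # pheromone level per package
rho, n_ants = 0.3, 20                  # evaporation rate, colony size
for _ in range(30):
    choices = rng.choice(len(pkgs), size=n_ants, p=tau / tau.sum())
    tau *= (1.0 - rho)                 # pheromone evaporation
    for i in choices:
        tau[i] += fitness(i)           # deposit proportional to quality
best = int(np.argmax(tau))
print("selected package:", best, pkgs[best])
```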

  7. System identification and model reduction using modulating function techniques

    Science.gov (United States)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended into MIMO transfer function system identification problems. The impact of the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.

  8. A technique using a nonlinear helicopter model for determining trims and derivatives

    Science.gov (United States)

    Ostroff, A. J.; Downing, D. R.; Rood, W. J.

    1976-01-01

    A technique is described for determining the trims and quasi-static derivatives of a flight vehicle for use in a linear perturbation model; both the coupled and uncoupled forms of the linear perturbation model are included. Since this technique requires a nonlinear vehicle model, detailed equations with constants and nonlinear functions for the CH-47B tandem rotor helicopter are presented. Tables of trims and derivatives are included for airspeeds between -40 and 160 knots and rates of descent between ±1.016 m/sec (±200 ft/min). As a verification, the calculated and referenced values of comparable trims, derivatives, and linear model poles are shown to have acceptable agreement.
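
    A sketch of the general procedure, not the CH-47B equations: trim a toy nonlinear model by solving xdot = 0, then extract the quasi-static derivatives (the A and B matrices of the linear perturbation model) by central differences about the trim point.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy nonlinear model standing in for the helicopter equations:
# state x = [u, w, q], control d; xdot = f(x, d).
def f(x, d):
    u, w, q = x
    return np.array([-0.02 * u + 0.1 * w - 0.49 * q + 0.3 * d,
                     -0.30 * w + 0.9 * u * q - 1.2 * d,
                      0.05 * w - 0.8 * q + 2.0 * d])

d0 = 0.1
x_trim = fsolve(lambda x: f(x, d0), np.zeros(3))   # trim: xdot = 0

# Quasi-static derivatives by central differences about the trim point.
eps = 1e-6
A = np.column_stack([(f(x_trim + eps * e, d0) - f(x_trim - eps * e, d0))
                     / (2 * eps) for e in np.eye(3)])
B = (f(x_trim, d0 + eps) - f(x_trim, d0 - eps)) / (2 * eps)
print("trim:", x_trim, "\nA =\n", A, "\nB =", B)
```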

  9. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data the requirements associated with data have begun to dominate and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition the use of these data requirements during system's development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  10. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    Science.gov (United States)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  11. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts z > 0.5. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  12. A case study in modeling company policy documents as a source of requirements

    Energy Technology Data Exchange (ETDEWEB)

    CRUMPTON,KATHLEEN MARIE; GONZALES,REGINA M.; TRAUTH,SHARON L.

    2000-04-11

    This paper describes an approach that was developed to produce structured models that graphically reflect the requirements contained within a text document. The document used in this research is a draft policy document governing business in a research and development environment. In this paper, the authors present a basic understanding of why this approach is needed, the techniques developed, lessons learned during modeling and analysis, and recommendations for future investigation. The modeling method applied on the policy document was developed as an extension to entity relationship (ER) diagrams, which built in some structural information typically associated with object-oriented techniques. This approach afforded some structure as an analysis tool, while remaining flexible enough to be used with the text document. It provided a visual representation that allowed further analysis and layering of the model to be done.

  13. Crop Yield Forecasted Model Based on Time Series Techniques

    Institute of Scientific and Technical Information of China (English)

    Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming

    2012-01-01

    Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is affected by the advancement of science and technology, was defined in this paper. Based on this new concept of crop yield, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing and micro tendency rectification, and the average forecasting error was 1.24%. A turning point may occur in the trend line of yield change; in that case, an inflexion model was used to solve the problem of the yield turning point.
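
    A minimal sketch of trend-based yield forecasting with a dynamically chosen window; the data are synthetic, the trend is assumed linear, and the paper's "dynamic n-choosing and micro tendency rectification" details are not given in the abstract, so this is only an interpretation.

```python
import numpy as np

# Hypothetical provincial grain yields (t/ha), 1990-2005.
years = np.arange(1990, 2006)
yields = 3.0 + 0.06 * (years - 1990) + 0.1 * np.sin(years * 1.7)

def forecast_next(yrs, ys, n):
    # Fit a linear trend to the last n observations, extrapolate one year.
    c = np.polyfit(yrs[-n:], ys[-n:], 1)
    return np.polyval(c, yrs[-1] + 1)

# "Dynamic n-choosing": pick the window length that best predicted the
# most recent known year, then use it for the true forecast.
errs = {n: abs(forecast_next(years[:-1], yields[:-1], n) - yields[-1])
        for n in range(3, 10)}
best_n = min(errs, key=errs.get)
pred = forecast_next(years, yields, best_n)
print(f"window = {best_n}, 2006 forecast = {pred:.2f} t/ha")
```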

  14. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of the research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  15. Modeling and Analyzing Terrain Data Acquired by Modern Mapping Techniques

    Science.gov (United States)

    2009-09-22

    enhanced by new terrain mapping technologies such as laser altimetry (LIDAR), ground-based laser scanning and Real Time Kinematic GPS (RTK-GPS) that ... developed and implemented an approach that has the following features: it is modular, so that a user can use different models for each of the modules ... support some way of connecting separate modules together to form pipelines; however, this requires manual intervention. While a typical GIS can manage

  16. An improved calibration technique for wind tunnel model attitude sensors

    Science.gov (United States)

    Tripp, John S.; Wong, Douglas T.; Finley, Tom D.; Tcheng, Ping

    1993-01-01

    Aerodynamic wind tunnel tests at NASA Langley Research Center (LaRC) require accurate measurement of model attitude. Inertial accelerometer packages have been the primary sensor used to measure model attitude to an accuracy of +/- 0.01 deg as required for aerodynamic research. The calibration parameters of the accelerometer package are currently obtained from a seven-point tumble test using a simplified empirical approximation. The inaccuracy due to the approximation exceeds the accuracy requirement as the misalignment angle between the package axis and the model body axis increases beyond 1.4 deg. This paper presents the exact solution derived from the coordinate transformation to eliminate inaccuracy caused by the approximation. In addition, a new calibration procedure is developed in which the data taken from the seven-point tumble test is fit to the exact solution by means of a least-squares estimation procedure. Validation tests indicate that the new calibration procedure provides +/- 0.005-deg accuracy over large package misalignments, which is not possible with the current procedure.
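
    A sketch of the fitting step, under an assumed sensor model a(theta) = s*sin(theta + phi) + b (the paper's exact coordinate-transformation solution is not reproduced here): the seven tumble-test points are fit by least squares for sensitivity, misalignment, and bias.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# Seven-point tumble test: known rig pitch angles and a sensor output
# modeled as a(theta) = s * sin(theta + phi) + b, where s = sensitivity,
# phi = package misalignment, b = bias (model form is an assumption).
theta = np.deg2rad(np.array([-90, -60, -30, 0, 30, 60, 90], float))
s_true, phi_true, b_true = 1.002, np.deg2rad(2.5), 0.004
meas = (s_true * np.sin(theta + phi_true) + b_true
        + rng.normal(0, 1e-5, theta.size))

def resid(p):
    s, phi, b = p
    return s * np.sin(theta + phi) + b - meas

fit = least_squares(resid, x0=[1.0, 0.0, 0.0])
s, phi, b = fit.x
print(f"s = {s:.4f}, misalignment = {np.rad2deg(phi):.3f} deg, b = {b:.5f}")
```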

  17. Theoretical modeling techniques and their impact on tumor immunology.

    Science.gov (United States)

    Woelke, Anna Lena; Murgueitio, Manuela S; Preissner, Robert

    2010-01-01

    Currently, cancer is one of the leading causes of death in industrial nations. While conventional cancer treatment usually results in the patient suffering from severe side effects, immunotherapy is a promising alternative. Nevertheless, some questions remain unanswered with regard to using immunotherapy to treat cancer hindering it from being widely established. To help rectify this deficit in knowledge, experimental data, accumulated from a huge number of different studies, can be integrated into theoretical models of the tumor-immune system interaction. Many complex mechanisms in immunology and oncology cannot be measured in experiments, but can be analyzed by mathematical simulations. Using theoretical modeling techniques, general principles of tumor-immune system interactions can be explored and clinical treatment schedules optimized to lower both tumor burden and side effects. In this paper, we aim to explain the main mathematical and computational modeling techniques used in tumor immunology to experimental researchers and clinicians. In addition, we review relevant published work and provide an overview of its impact to the field.
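
    As a concrete example of the kind of model meant here, the sketch below integrates a minimal Kuznetsov-style tumor-effector ODE system; the equations follow a common published form and the parameter values are only illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Kuznetsov-style tumor (T) / effector-cell (E) interaction model,
# with parameters in the range of the published model (illustrative):
#   T' = a*T*(1 - b*T) - n*E*T            logistic growth minus immune kill
#   E' = s + p*E*T/(g + T) - m*E*T - d*E  recruitment, inactivation, death
def rhs(t, y, a=0.18, b=2.0e-9, n=1.1e-7, s=1.3e4,
        p=0.125, g=2.0e7, m=3.4e-10, d=0.041):
    T, E = y
    return [a * T * (1 - b * T) - n * E * T,
            s + p * E * T / (g + T) - m * E * T - d * E]

sol = solve_ivp(rhs, (0.0, 200.0), [1e6, 3e5], rtol=1e-8)
T_end, E_end = sol.y[:, -1]
print(f"day 200: tumor ~ {T_end:.3e} cells, effectors ~ {E_end:.3e}")
```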

  18. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. document topic model (DTM) and word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario that user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving on retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.

  19. A Model for Forecasting Enlisted Student IA Billet Requirements

    Science.gov (United States)

    2016-03-01

    ... were promised and had at least one course failure. Training times: student execution depends on TTT, which includes under-instruction (UI) time and ... (Steven W. Belcher, David L. Reese, and Kletus S. Lawler, CNA, March 2016)

  20. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  1. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  2. Organic agriculture requires process rather than product evaluation of novel breeding techniques

    NARCIS (Netherlands)

    Lammerts Van Bueren, E.; Verhoog, H.; Tiemens-Hulscher, M.; Struik, P.C.; Haring, M.A.

    2007-01-01

    In organic agriculture the use of genetically modified organisms (GMOs) is banned. Recently, two novel breeding techniques have been developed, i.e., cisgenesis and reverse breeding, both of which are based on gene technology but should raise fewer moral concerns among the public. Whether the products

  3. A survey of applications and requirements of unique identification systems and RFID techniques

    NARCIS (Netherlands)

    Ilie-Zudor, Elisabeth; Kemeny, Zsolt; van Blommestein, Fred; Monostori, Laszlo; van der Meulen, Andre

    2011-01-01

    The paper contains an overview of unique identification issues and of the various radio frequency identification techniques that are available now or will become available in the short term. The paper also compares RFID with traditional ID technologies. It shows application possibilities and gives e

  4. Concerning the Feasibility of Example-driven Modelling Techniques

    CERN Document Server

    Thorne, Simon R; Lawson, Z

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using a) traditional spreadsheet modelling; b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups in terms of accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are thoroughly tested for statistical significance using the chi-squared test, Fisher's exact test, Cochran's Q test and McNemar's test.

  5. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  6. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    N.RATHIKA

    2014-07-01

    This paper addresses the modeling of a polyphase synchronous generator and the minimization of its power losses using the Particle Swarm Optimization (PSO) technique with a constriction factor. Use of a polyphase synchronous generator allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to a system shutdown. Process optimization is the discipline of adjusting a process so as to optimize some stipulated set of parameters without violating some constraint. Accurate parameter values can be extracted using PSO and the model can be reformulated accordingly. Modeling and simulation of the machine are carried out. MATLAB/Simulink has been used to implement and validate the result.
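
    A minimal PSO sketch with the Clerc-Kennedy constriction factor, run on a stand-in loss surface; the actual generator loss model and its parameters are not given in the abstract, so everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def losses(x):
    # Stand-in for the generator's power losses as a function of two
    # design/operating parameters (invented quadratic surface).
    return (x[:, 0] - 1.2) ** 2 + 3 * (x[:, 1] + 0.5) ** 2 + 2.0

# Clerc-Kennedy constriction: chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|
c1 = c2 = 2.05
phi = c1 + c2
chi = 2 / abs(2 - phi - np.sqrt(phi ** 2 - 4 * phi))   # ~0.7298

n, dim = 30, 2
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), losses(x)
for _ in range(100):
    g = pbest[np.argmin(pval)]                         # global best
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
    x = x + v
    f = losses(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
print("minimum losses:", pval.min(), "at", pbest[np.argmin(pval)])
```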

  7. A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas;

    2016-01-01

    Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family
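
    A generic projection-based reduction sketch (the paper's multi-model basis construction is not given in the abstract): the system is solved in the span of its lowest modes and compared against the full solve.

```python
import numpy as np

rng = np.random.default_rng(9)

# Solve K x = f in the span of the r lowest modes of K and compare with
# the full solve; the 1-D stiffness matrix is a stand-in for a large
# structure-acoustic operator.
n, r = 300, 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = rng.normal(size=n)

lam, V = np.linalg.eigh(K)                    # eigenvalues in ascending order
x_red = V[:, :r] @ ((V[:, :r].T @ f) / lam[:r])   # reduced solve, lifted back
x_full = np.linalg.solve(K, f)
err = np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full)
print(f"relative error with {r} of {n} modes: {err:.2e}")
```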

  8. Understanding your users a practical guide to user requirements methods, tools, and techniques

    CERN Document Server

    Baxter, Kathy

    2005-01-01

    Today many companies are employing a user-centered design (UCD) process, but for most companies, usability begins and ends with the usability test. Although usability testing is a critical part of an effective user-centered life cycle, it is only one component of the UCD process. This book is focused on the requirements gathering stage, which often receives less attention than usability testing but is equally important. Understanding user requirements is critical to the development of a successful product. Understanding Your Users is an easy to read, easy to implement, how-to guide on

  9. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    Energy Technology Data Exchange (ETDEWEB)

    Barus, R. P. P., E-mail: rismawan.ppb@gmail.com [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung and Centre for Material and Technical Product, Jalan Sangkuriang No. 14 Bandung (Indonesia); Tjokronegoro, H. A.; Leksono, E. [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia); Ismunandar [Chemistry Study, Faculty of Mathematics and Science, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia)

    2014-09-25

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model that describes the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they have limitations over a narrow operation range, while nonlinear models lead to nonlinear control implementations which are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a certain virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O as well as the disturbance input I (current load) are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.
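
    A toy illustration of the nonlinear-cancellation idea, not the stack model itself: a static square-law plant becomes exactly linear in a virtual input v once the real input is driven through the inverse nonlinearity u = sqrt(v/k).

```python
import numpy as np

k = 0.8  # illustrative plant gain

def plant(u):
    # Original nonlinear input-output map: y = k * u^2.
    return k * u ** 2

def virtual_to_real(v):
    # Inverse nonlinearity: choose u so that the v -> y map is linear.
    return np.sqrt(np.maximum(v, 0.0) / k)

v = np.linspace(0, 10, 6)          # a controller works in v-coordinates
y = plant(virtual_to_real(v))
print(np.allclose(y, v))           # True: y = v, an exactly linear map
```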

  10. Improvements of Surgical Technique in Establishment of Rat Orthotopic Pulmonary Transplantation Model Using Cuffs

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In order to establish a simpler and more effective rat orthotopic lung transplantation model, 20 rats were divided into donor and recipient groups. Rat lung transplantation models were established by using an improved cuff technique. All 10 operations were accomplished successfully. The mean operative time for recipients was 45±4 min. The survival time was over 30 days after lung transplantation. X-ray checks were almost normal. There was no significant difference in the blood gas analysis before and after clipping the right hilum (P > 0.05). This method is simpler, more applicable and requires less time.

  11. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires an understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is solved using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankylä (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and melting. The purposeful character of the learning process for gradient-boosting models, their ensemble character, and the use of the combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
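
    A minimal sketch of the gradient-boosting approach using scikit-learn on synthetic daily meteorology; the stations' real data and the paper's feature set are not reproduced, and the target construction is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)

# Synthetic daily meteorology standing in for station data: air
# temperature (C), precipitation (mm), and day of year.
n = 2000
T = rng.normal(-2, 8, n)
P = rng.gamma(1.5, 2.0, n)
doy = rng.integers(1, 366, n).astype(float)

# Invented target: deeper snow on cold, wet, mid-winter days, plus noise.
depth = np.maximum(0.0, 0.5 * P - 2.0 * T
                   + 30 * np.cos(2 * np.pi * (doy - 15) / 365))
depth += rng.normal(0, 2, n)

X = np.column_stack([T, P, doy])
Xtr, Xte, ytr, yte = train_test_split(X, depth, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                  learning_rate=0.05).fit(Xtr, ytr)
print(f"held-out R^2: {model.score(Xte, yte):.2f}")
```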

  12. Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs

    Science.gov (United States)

    Karimi-Fard, M.; Lapene, A.; Pauget, L.

    2012-12-01

    During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible
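
    A sketch of the two-point flux ingredients described above, with illustrative numbers: half-transmissibilities per control volume, their harmonic combination across a face, and a star-delta-style elimination of a small fracture-intersection cell. The formulas follow the cited DFM literature in general form; all values are invented.

```python
import numpy as np

def half_trans(A, k, d):
    # Half-transmissibility of a control volume across a face of area A:
    # permeability k, centroid-to-face distance d.
    return A * k / d

def face_trans(a1, a2):
    # Two-point flux: harmonic combination of the two half-transmissibilities.
    return a1 * a2 / (a1 + a2)

# Matrix cell (low k) connected to a fracture cell (high k, small spacing).
a_m = half_trans(A=1.0, k=1e-15, d=0.5)     # matrix side, k in m^2
a_f = half_trans(A=1.0, k=1e-9, d=5e-4)     # fracture side
print(f"matrix-fracture transmissibility: {face_trans(a_m, a_f):.3e}")

# Connectivity transformation at a fracture intersection: eliminate the
# small junction cell and connect its branches pairwise (star-delta form).
alpha = np.array([2e-12, 3e-12, 4e-12])     # branch half-transmissibilities
S = alpha.sum()
for i in range(3):
    for j in range(i + 1, 3):
        print(f"T[{i}][{j}] = {alpha[i] * alpha[j] / S:.3e}")
```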

  13. Generalization Technique for 2D+SCALE Dhe Data Model

    Science.gov (United States)

    Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised about fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, to a 3D model, but the implementation of a scale dimension faces problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with a scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We strongly believe that some advantages, such as local modification and topological elements (navigation, query and semantic information) in the scale dimension, could be used for future 3D-scale applications.

  14. Quasi-Algorithm Methods and Techniques for Specifying Objective Job/Task Performance Requirements

    Science.gov (United States)

    1978-07-01

    contradiction (Carnap, 1962). Independence is a logical requirement and sometimes difficult to achieve in practical situations. To illustrate the... Carnap, R., Logical foundations of probability (2nd ed.), Chicago: Univ. of Chicago Press, 1962. Chambers, A. N., Development of a taxonomy of human

  15. A Case Base for Requirements Engineering: Problem Categories and Solution Techniques

    NARCIS (Netherlands)

    Sikkel, Klaas; Wieringa, Roel; Engmann, Rolf

    2000-01-01

    We introduce a notion of business problem frames, categorizing the types of IT requirements problems found in organizations, as opposed to Jackson’s problem frames, which describe a problem in terms of the solution to that problem. A survey of students’ projects showed that this is a viable notion. We in

  16. Automata learning algorithms and processes for providing more complete systems requirements specification by scenario generation, CSP-based syntax-oriented model construction, and R2D2C system requirements transformation

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Margaria, Tiziana (Inventor); Rash, James L. (Inventor); Rouff, Christopher A. (Inventor); Steffen, Bernard (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which, in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements-based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation via automata learning. Because the requirements are by nature partial, focusing on the most prominent scenarios, this may support their systematic completion. It may generalize requirement skeletons by extrapolation and may indicate, by way of automatically generated traces, where the requirement specification is too loose and additional information is required.
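 
    As an illustration of the extrapolation step (a simplified stand-in for the patented method, with invented event names), the sketch below builds a prefix-tree automaton from a few scenario traces and flags states where the partial requirements say nothing about what may happen next:
 
```python
# Illustrative sketch only: build a prefix-tree acceptor from requirement
# scenarios (event traces) and flag where the model is "too loose", i.e.
# where no scenario constrains the next event.
def build_prefix_tree(traces):
    tree = {0: {}}   # state -> {event: next_state}, states are integers
    next_state = 1
    for trace in traces:
        state = 0
        for event in trace:
            if event not in tree[state]:
                tree[state][event] = next_state
                tree[next_state] = {}
                next_state += 1
            state = tree[state][event]
    return tree

scenarios = [
    ("request", "grant", "release"),
    ("request", "deny"),
]
automaton = build_prefix_tree(scenarios)
# States with no outgoing edges mark points where the partial requirements
# say nothing; generated traces ending there would prompt the engineer for
# additional information.
dead_ends = [s for s, edges in automaton.items() if not edges]
print(dead_ends)
```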

  17. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    Science.gov (United States)

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to developing high quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from the UML requirements analysis model, so that we can confirm the validity of the input/output data for each page and of the page transitions by directly operating the prototype. We propose a mapping rule in which design information independent of any particular web application framework implementation is defined on the basis of the requirements analysis model, so as to improve traceability from the validated requirements analysis model to the final product. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department.

  18. User requirements for hydrological models with remote sensing input

    Energy Technology Data Exchange (ETDEWEB)

    Kolberg, Sjur

    1997-10-01

    Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data. Even the presently used HBV implementation may benefit from remotely sensed data. However, several improvements can be made to hydrological models to include remotely sensed snow data. Among these the most important are a distributed version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.

  19. Life sciences research in space: The requirement for animal models

    Science.gov (United States)

    Fuller, C. A.; Philips, R. W.; Ballard, R. W.

    1987-01-01

    Use of animals in NASA space programs is reviewed. Animals are needed because life science experimentation frequently requires long-term controlled exposure to environments, statistical validation, invasive instrumentation or biological tissue sampling, tissue destruction, exposure to dangerous or unknown agents, or sacrifice of the subject. The availability and use of human subjects inflight is complicated by the multiple needs and demands upon crew time. Because only living organisms can sense, integrate and respond to the environment around them, the sole use of tissue culture and computer models is insufficient for understanding the influence of the space environment on intact organisms. Equipment for spaceborne experiments with animals is described.

  20. Specification of advanced safety modeling requirements (Rev. 0).

    Energy Technology Data Exchange (ETDEWEB)

    Fanning, T. H.; Tautges, T. J.

    2008-06-30

    The U.S. Department of Energy's Global Nuclear Energy Partnership has led to renewed interest in liquid-metal-cooled fast reactors for the purpose of closing the nuclear fuel cycle and making more efficient use of future repository capacity. However, the U.S. has not designed or constructed a fast reactor in nearly 30 years. Accurate, high-fidelity, whole-plant dynamics safety simulations will play a crucial role by providing confidence that component and system designs will satisfy established design limits and safety margins under a wide variety of operational, design basis, and beyond design basis transient conditions. Current modeling capabilities for fast reactor safety analyses have resulted from several hundred person-years of code development effort supported by experimental validation. The broad spectrum of mechanistic and phenomenological models that have been developed represents an enormous amount of institutional knowledge that needs to be maintained. Complicating this, the existing code architectures for safety modeling evolved from programming practices of the 1970s. This has led to monolithic applications with interdependent data models which require significant knowledge of the complexities of the entire code in order for each component to be maintained. In order to develop an advanced fast reactor safety modeling capability, the limitations of the existing code architecture must be overcome while preserving the capabilities that already exist. To accomplish this, a set of advanced safety modeling requirements is defined, based on modern programming practices, that focuses on modular development within a flexible coupling framework. An approach for integrating the existing capabilities of the SAS4A/SASSYS-1 fast reactor safety analysis code into the SHARP framework is provided in order to preserve existing capabilities while providing a smooth transition to advanced modeling capabilities. In doing this, the advanced fast reactor safety models

  1. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and the stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end result of the gastric suturing and the bladder augmentation was evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 hours (range 3.5-8): 84.5 minutes (range 62-110) for the gastric dissection, 56 minutes (range 28-80) for the gastric suturing, and 170.6 minutes (range 70-200) for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from the gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  2. Requirements for effective academic leadership in Iran: A Nominal Group Technique exercise

    OpenAIRE

    Shoghli Alireza; Brommels Mats; Bikmoradi Ali; Sohrabi Zohreh; Masiello Italo

    2008-01-01

    Abstract Background During the last two decades, medical education in Iran has shifted from elite to mass education, with a considerable increase in number of schools, faculties, and programs. Because of this transformation, it is a good case now to explore academic leadership in a non-western country. The objective of this study was to explore the views on effective academic leadership requirements held by key informants in Iran's medical education system. Methods A nominal group study was c...

  3. Modeling of Testability Requirement Based on Generalized Stochastic Petri Nets

    Institute of Scientific and Technical Information of China (English)

    SU Yong-ding; QIU Jing; LIU Guan-jun; QIAN Yan-ling

    2009-01-01

    Testability design is an effective way to realize fault detection and isolation. An important step is to determine the testability figures of merit (TFOMs). Firstly, factors that influence TFOMs are analyzed, such as the processes of system operation, maintenance and support, and fault detection and isolation. Secondly, a testability requirement analysis model is built based on a generalized stochastic Petri net (GSPN). Then, the system's reachable states are analyzed based on the model, a Markov chain isomorphic with the Petri net is constructed, a state transition matrix is created and the system's steady-state probabilities are obtained. The relationship between steady-state availability and testability parameters can thus be revealed and reasoned about. Finally, an example shows that the proposed method can determine TFOMs, such as the fault detection rate and fault isolation rate, effectively and reasonably.
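 
    The reasoning step from the reachable-state analysis to TFOMs can be sketched as follows (a toy three-state chain with invented rates, not the paper's model): the steady-state vector pi of the isomorphic Markov chain solves pi Q = 0 with sum(pi) = 1, and availability and detection figures are read off from pi.
 
```python
# Hedged sketch: steady state of a CTMC derived from a GSPN reachability
# graph. States: 0 = OK, 1 = fault detected, 2 = fault undetected.
import numpy as np

Q = np.array([
    [-0.010, 0.009, 0.001],   # OK -> detected fault, OK -> undetected fault
    [ 0.500, -0.500, 0.000],  # detected fault repaired
    [ 0.050, 0.000, -0.050],  # undetected fault eventually found
])

# Solve pi Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.zeros(len(Q) + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

steady_state_availability = pi[0]
fault_detection_rate = pi[1] / (pi[1] + pi[2])  # share of faulty time detected
print(steady_state_availability, fault_detection_rate)
```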

  4. Model Penentuan Nilai Target Functional Requirement Berbasis Utilitas

    Directory of Open Access Journals (Sweden)

    Cucuk Nur Rosyidi

    2012-01-01

    In a product design and development process, a designer faces the problem of deciding functional requirement (FR) target values. That decision is made under risk, since it is taken in the early design phase using incomplete information. A utility function can be used to reflect the decision maker's attitude toward risk when making such decisions. In this research, we develop a utility-based model to determine FR target values using a quadratic utility function and information from Quality Function Deployment (QFD). A pencil design is used as a numerical example, with a quadratic utility function for each FR. The model can be applied to balance customer and designer interests in determining FR target values.
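 
    A minimal sketch of the idea, assuming a quadratic utility of the form u(x) = 1 - ((x - ideal)/scale)^2 for each FR and a QFD-derived weight; the pencil-design numbers below are invented placeholders:
 
```python
# Hedged sketch: pick the FR target value that maximizes a weighted sum of
# customer- and designer-side quadratic utilities. All numbers are invented.
from scipy.optimize import minimize_scalar

def quad_utility(x, ideal, scale):
    return 1.0 - ((x - ideal) / scale) ** 2

def total_utility(x):
    customer = quad_utility(x, ideal=0.7, scale=0.3)  # e.g. lead hardness index
    designer = quad_utility(x, ideal=0.5, scale=0.4)  # e.g. manufacturability side
    w = 0.6  # relative importance, notionally taken from QFD
    return w * customer + (1 - w) * designer

res = minimize_scalar(lambda x: -total_utility(x), bounds=(0.0, 1.0), method="bounded")
print("FR target value:", res.x)
```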

  5. CIVA workstation for NDE: mixing of NDE techniques and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Besnard, R. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances; Bayon, G. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Reacteurs Experimentaux; Boutaine, J.L. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants

    1994-12-31

    In order to compare the capabilities of different NDE techniques, or to use complementary inspection methods, the same components are examined with different procedures. It is then very useful to have a single evaluation tool allowing direct comparison of the methods: CIVA is an open system for processing NDE data; it is adapted to a standard work station (UNIX, C, MOTIF) and can read different supports on which the digitized data are stored. It includes a large library of signal and image processing methods accessible and adapted to NDE data (filtering, deconvolution, 2D and 3D spatial correlations...). Different CIVA application examples are described: brazing inspection (neutronography, ultrasonic), tube inspection (eddy current, ultrasonic), aluminium welds examination (UT and radiography). Modelling and experimental results are compared. 16 fig., 7 ref.

  6. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
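 
    A toy receding-horizon sketch of DM with dynamic pricing (not the paper's suspension-analogy model): demand follows an invented linear price response, and at each step the price sequence over a short horizon that maximizes revenue is searched, with only the first price applied.
 
```python
# Hedged MPC sketch: demand d[k+1] = a*d[k] - b*p[k] + c (invented dynamics);
# at each step we enumerate price sequences over a short horizon, keep the
# revenue-maximizing one, and apply only its first move (receding horizon).
import numpy as np
from itertools import product

a, b, c = 0.8, 0.5, 2.0            # placeholder demand dynamics
prices = np.linspace(0.5, 3.0, 6)  # coarse price grid
H = 3                              # prediction horizon

def mpc_price(d0):
    best, best_p0 = -np.inf, prices[0]
    for seq in product(prices, repeat=H):
        d, revenue = d0, 0.0
        for p in seq:
            revenue += p * max(d, 0.0)
            d = a * d - b * p + c
        if revenue > best:
            best, best_p0 = revenue, seq[0]
    return best_p0

d = 5.0
for k in range(5):
    p = mpc_price(d)
    d = a * d - b * p + c
    print(f"step {k}: price {p:.2f}, demand {d:.2f}")
```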

  7. Requirements for high level models supporting design space exploration in model-based systems engineering

    NARCIS (Netherlands)

    Haveman, Steven; Bonnema, Gerrit Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower level models to a high level system model during design space exploration. This complicates the validation of high level system requirements during

  8. A GENERALIZATION OF TRADITIONAL KANO MODEL FOR CUSTOMER REQUIREMENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Renáta Turisová

    2015-07-01

    Purpose: The theory of attractiveness determines the relationship between the technically achieved and the customer-perceived quality of product attributes. The most frequently used approach in the theory of attractiveness is the implementation of Kano's model. Many generalizations of that model exist, taking into consideration various aspects and approaches focused on understanding customer preferences and identifying their priorities for a product. The aim of this article is to outline another possible generalization of Kano's model. Methodology/Approach: The traditional Kano's model captures the nonlinear relationship between the achieved attributes of quality and customer requirements. The individual attributes of quality are divided into three main categories: must-be, one-dimensional and attractive quality, and into two side categories: indifferent and reverse quality. A well-selling product has to contain the must-be attributes. It should contain as many one-dimensional attributes as possible. If there are also supplementary attractive attributes, the attractiveness of the entire product, from the viewpoint of the customer, rises sharply and nonlinearly, which has a direct positive impact on the potential customer's purchasing decision. In this article, we show that the assignment of a product's individual quality attributes to the mentioned categories depends, among other things, on the life-cycle costs of the product, or equivalently on its market price. Findings: In practice, products are often placed into different price categories: lower, middle and upper class. For certain types of products the category is either directly declared by the producer (especially in the automotive industry) or is determined by the customer by means of an assessment of available market prices. To each of these groups of products, different customer expectations can be assigned

  9. Requirements for effective academic leadership in Iran: A Nominal Group Technique exercise

    Directory of Open Access Journals (Sweden)

    Shoghli Alireza

    2008-04-01

    Abstract Background During the last two decades, medical education in Iran has shifted from elite to mass education, with a considerable increase in the number of schools, faculties, and programs. Because of this transformation, it is now a good case for exploring academic leadership in a non-western country. The objective of this study was to explore the views on effective academic leadership requirements held by key informants in Iran's medical education system. Methods A nominal group study was conducted by strategic sampling in which participants were requested to discuss and report on requirements for academic leadership, suggestions and barriers. Written notes from the discussions were transcribed and subjected to content analysis. Results Six themes of effective academic leadership emerged: (1) shared vision, goal, and strategy; (2) teaching and research leadership; (3) fair and efficient management; (4) mutual trust and respect; (5) development and recognition; and (6) transformational leadership. Current Iranian academic leadership suffers from a lack of meritocracy, conservative leaders, politicization, bureaucracy, and belief in misconceptions. Conclusion The structure of the Iranian medical university system is not supportive of effective academic leadership. However, participants' views on effective academic leadership are in line with what is also found in the western literature, that is, if managers could create the premises for supportive and transformational leadership, they could generate mutual trust and respect in academia and increase scientific production.

  10. Groundwater Resources Assessment For Joypurhat District Using Mathematical Modelling Technique

    Directory of Open Access Journals (Sweden)

    Md. Iquebal Hossain

    2015-06-01

    In this study, potential recharge as well as groundwater availability for 5 Upazillas (Akkelpur, Kalai, Joypurhat Sadar, Khetlal and Panchbibi) of Joypurhat district has been estimated using MIKE SHE modelling tools. The main aquifers of the study area are dominated by medium sands, and medium and coarse sands with a little gravel. The top of the aquifers ranges from 15 m to 24 m and the screenable thickness of the aquifers ranges from 33 m to 46 m within the depth range from 57 m to 87 m. Heavy abstraction of groundwater for agricultural, industrial and domestic uses results in excessive lowering of the water table, making shallow and hand tubewells inoperable in the dry season. The upazila-wise potential recharge for the study area was estimated through a mathematical model using MIKE SHE modelling tools in an integrated approach. The required data were collected from the relevant organisations. The potential recharge in the present study varies from 452 mm to 793 mm. The maximum depth to the groundwater table in most places occurs at the end of April. At this time, the groundwater table in most parts of Kalai, Khetlal, Akkelpur and Panchbibi goes below the suction limit, causing hand tubewells (HTWs) and shallow tubewells (STWs) to become partially or fully inoperable.

  11. Ensembles of signal transduction models using Pareto Optimal Ensemble Techniques (POETs).

    Science.gov (United States)

    Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D

    2010-07-01

    Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POETs algorithm identified the 117 unknown model parameters. POETs generated an ensemble of signaling models which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
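 
    A much-simplified sketch of the POETs idea (invented two-parameter objectives standing in for the immunoblot training errors): simulated-annealing moves are accepted according to Pareto rank against the archive of accepted models, so the run accumulates an ensemble rather than a single best fit.
 
```python
# Hedged sketch: SA moves in parameter space accepted by Pareto rank across
# two conflicting objectives, accumulating an ensemble of accepted models.
import math, random

def objectives(p):
    # two conflicting errors of a toy two-parameter model (placeholders)
    return (p[0] - 1.0) ** 2 + p[1] ** 2, (p[1] - 1.0) ** 2 + p[0] ** 2

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(point, archive):
    # number of archive members whose objectives dominate this point
    return sum(dominates(objectives(q), point) for q in archive)

random.seed(1)
p, archive, T = [0.0, 0.0], [], 1.0
for step in range(2000):
    cand = [x + random.gauss(0, 0.1) for x in p]
    dr = pareto_rank(objectives(cand), archive) - pareto_rank(objectives(p), archive)
    if dr <= 0 or random.random() < math.exp(-dr / T):   # Metropolis on rank
        p = cand
        archive.append(list(p))
    T = max(0.01, T * 0.999)   # annealing schedule
print(len(archive), "ensemble members accepted")
```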

  12. Alternative decision modelling techniques for the evaluation of health care technologies: Markov processes versus discrete event simulation.

    Science.gov (United States)

    Karnon, Jonathan

    2003-10-01

    Markov models have traditionally been used to evaluate the cost-effectiveness of competing health care technologies that require the description of patient pathways over extended time horizons. Discrete event simulation (DES) is a more flexible, but more complicated decision modelling technique, that can also be used to model extended time horizons. Through the application of a Markov process and a DES model to an economic evaluation comparing alternative adjuvant therapies for early breast cancer, this paper compares the respective processes and outputs of these alternative modelling techniques. DES displays increased flexibility in two broad areas, though the outputs from the two modelling techniques were similar. These results indicate that the use of DES may be beneficial only when the available data demonstrates particular characteristics.
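 
    For concreteness, a minimal Markov cohort model of the traditional kind (invented three-state transition probabilities and utilities, not the breast cancer application): DES would instead draw individual event times, which is the flexibility referred to above.
 
```python
# Hedged sketch: yearly-cycle Markov cohort model over three health states
# (disease-free, recurrence, dead) with discounted QALY accumulation.
import numpy as np

P = np.array([
    [0.90, 0.07, 0.03],   # disease-free -> recurrence / death
    [0.00, 0.75, 0.25],
    [0.00, 0.00, 1.00],   # dead is absorbing
])
utilities = np.array([0.85, 0.60, 0.0])   # QALY weight per state per cycle

cohort = np.array([1.0, 0.0, 0.0])
qalys = 0.0
for year in range(1, 31):
    cohort = cohort @ P
    qalys += (cohort @ utilities) / (1.03 ** year)  # 3% annual discounting
print(f"discounted QALYs per patient: {qalys:.2f}")
```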

  13. IMPROVED SOFTWARE QUALITY ASSURANCE TECHNIQUES USING SAFE GROWTH MODEL

    Directory of Open Access Journals (Sweden)

    M.Sangeetha

    2010-09-01

    Our lives are governed by large, complex systems with increasingly complex software, and the safety, security, and reliability of these systems has become a major concern. As the software in today's systems grows larger, it has more defects, and these defects adversely affect the safety, security, and reliability of the systems. Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software. Software quality divides into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not. Quality is conformance to product requirements and should be free. This research concerns the role of software quality. Software reliability is an important facet of software quality. It is the probability of failure-free operation of a computer program in a specified environment for a specified time. In software reliability modeling, the parameters of the model are typically estimated from the test data of the corresponding component. However, the widely used point estimators are subject to random variations in the data, resulting in uncertainties in these estimated parameters. This research describes a new approach to the problem of software testing. The approach is based on Bayesian graphical models and presents formal mechanisms for the logical structuring of the software testing problem, the probabilistic and statistical treatment of the uncertainties to be addressed, the test design and analysis process, and the incorporation and implication of test results. Once constructed, the models produced are dynamic representations of the software testing problem. It explains why the common test-and-fix software quality strategy is no longer adequate, and characterizes the properties of the quality strategy.

  14. Using Interior Point Method Optimization Techniques to Improve 2- and 3-Dimensional Models of Earth Structures

    Science.gov (United States)

    Zamora, A.; Gutierrez, A. E.; Velasco, A. A.

    2014-12-01

    2- and 3-dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can highly impact the resulting model. Through the implementation of an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures representing known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and to gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically infeasible), thereby reducing the solution space.
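 
    A hedged sketch of the constrained-inversion step using SciPy's trust-constr interior-point-style optimizer: density contrasts of a few fixed bodies are fitted to observed gravity, with bounds standing in for the geological feasibility constraints. The sensitivity matrix and data are synthetic placeholders.
 
```python
# Hedged sketch: bounded least-squares inversion for density contrasts.
# G is a made-up linear sensitivity matrix (stations x bodies).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
G = rng.random((20, 5))                 # sensitivities of 20 stations to 5 bodies
rho_true = np.array([0.3, -0.2, 0.1, 0.0, 0.25])  # g/cm^3 contrasts (invented)
d_obs = G @ rho_true + rng.normal(0, 0.01, 20)

misfit = lambda rho: np.sum((G @ rho - d_obs) ** 2)
res = minimize(misfit, x0=np.zeros(5), method="trust-constr",
               bounds=[(-0.5, 0.5)] * 5)  # allowable density contrast range
print(res.x.round(3))
```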

  15. Managerial Techniques in Educational Administration.

    Science.gov (United States)

    Lane, John J.

    1983-01-01

    Management techniques developed during the past 20 years assume the rational bureaucratic model, whereas school administration requires contingent techniques. Quality Circles, Theory Z, and the McKinsey 7-S Framework are discussed as techniques to increase school productivity. (MD)

  16. Ecological Footprint Model Using the Support Vector Machine Technique

    Science.gov (United States)

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports in total GDP), and service intensity (measured by the percentage of services in total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method built on the structural risk minimization principle from statistical learning theory, was constructed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance. PMID:22291949
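 
    A minimal sketch of this approach using scikit-learn's SVR (assuming an RBF kernel; the ten training rows are random placeholders, not the 123-nation dataset):
 
```python
# Hedged sketch: support vector regression of per capita EF on the five
# factors named above, with standardized inputs.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.random((10, 5))          # columns: [gdp, urbanization, gini, export%, service%]
y = 1.5 + 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.05, 10)  # fake targets

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)

pred = model.predict(X[:3])
rel_err = np.abs(pred - y[:3]) / y[:3] * 100   # relative error in percent
print(pred.round(3), rel_err.round(2), "%")
```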

  17. MODSARE-V: Validation of Dependability and Safety Critical Software Components with Model Based Requirements

    Science.gov (United States)

    Silveira, Daniel T. de M. M.; Schoofs, Tobias; Alana Salazar, Elena; Rodriguez Rodriguez, Ana Isabel; Devic, Marie-Odile

    2010-08-01

    The wide use of RAMS methods and techniques [1] (e.g. SFMECA, SFTA, HAZOP, HA...) in critical software development has resulted in the specification of new software requirements, design constraints and other issues such as mandatory coding rules. Given the large variety of RAMS requirements and techniques, different types of Verification and Validation (V&V) [14] are spread over the phases of the software engineering process. As a result, the V&V process becomes complex, and the cost and time required for a complete and consistent V&V process increase. By introducing a model-based approach to facilitate the RAMS requirements definition process, the time and effort for V&V may be reduced. MODSARE-V demonstrates the feasibility of this concept through case studies applied to ground and on-board software space projects with critical functions/components. This paper describes the approach adopted in MODSARE-V to realize the concept in a prototype and summarizes the results and conclusions reached after applying the prototype to the case studies.

  18. Nano-scale CMOS analog circuits models and CAD techniques for high-level design

    CERN Document Server

    Pandit, Soumya; Patra, Amit

    2014-01-01

    Reliability concerns and the limitations of process technology can sometimes restrict the innovation process involved in designing nano-scale analog circuits. The success of nano-scale analog circuit design requires repeated experimentation, correct analysis of the device physics, process technology, and adequate use of the knowledge database. Starting with the basics, Nano-Scale CMOS Analog Circuits: Models and CAD Techniques for High-Level Design introduces the essential fundamental concepts for designing analog circuits with optimal performance. This book explains the links between the physic

  19. A Novel Algorithmic Cost Estimation Model Based on Soft Computing Technique

    Directory of Open Access Journals (Sweden)

    Iman Attarzadeh

    2010-01-01

    Problem statement: Software development effort estimation is the process of predicting the most realistic effort required to develop software, based on a number of parameters. It has been one of the biggest challenges in Computer Science for decades, because time and cost estimates at the early stages of software development are the most difficult to obtain and are often the least accurate. Traditional algorithmic techniques such as regression models, Software Life Cycle Management (SLIM), the COCOMO II model and function points require a lengthy estimation process, which is nowadays not acceptable for software developers and companies. Newer soft computing approaches to effort estimation based on non-algorithmic techniques such as Fuzzy Logic (FL) may offer an alternative for solving the problem. This work proposes a new, realistic fuzzy logic model to achieve more accuracy in software effort estimation. The main objective of this research was to investigate the role of the fuzzy logic technique in improving effort estimation accuracy by characterizing the input parameters using two-side Gaussian functions, which give a superior transition from one interval to another. Approach: The methodology adopted in this study was the use of a fuzzy logic approach rather than the classical intervals in COCOMO II. Using advantages of fuzzy logic such as fuzzy sets, input parameters can be specified by distributions of their possible values, and these fuzzy sets are represented by membership functions. In this study, to obtain a smoother transition in the membership functions of the input parameters, the associated linguistic values were represented by two-side Gaussian Membership Functions (2-D GMF) and rules. Results: After analyzing the results attained by applying COCOMO II and the proposed fuzzy-logic-based model to the NASA dataset and to an artificially created dataset, it was found that the proposed model was performing
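 
    The two-side Gaussian membership function itself is easy to state; the sketch below (with illustrative, uncalibrated rating centres) shows how a raw cost-driver score can belong partially to two adjacent linguistic values and yield a membership-weighted effort multiplier:
 
```python
# Hedged sketch of a two-side Gaussian membership function: different sigmas
# on each side of the peak give a smoother transition between adjacent
# linguistic values. Centres and multipliers are placeholders, not
# calibrated COCOMO II values.
import numpy as np

def two_side_gaussian(x, c, sigma_left, sigma_right):
    s = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# membership of a raw complexity score in two overlapping linguistic values
x = 1.12
mu_nominal = two_side_gaussian(x, c=1.00, sigma_left=0.08, sigma_right=0.10)
mu_high = two_side_gaussian(x, c=1.17, sigma_left=0.10, sigma_right=0.12)

# defuzzified effort multiplier as a membership-weighted average
em = (mu_nominal * 1.00 + mu_high * 1.17) / (mu_nominal + mu_high)
print(round(float(em), 3))
```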

  20. Study of Semi-Span Model Testing Techniques

    Science.gov (United States)

    Gatlin, Gregory M.; McGhee, Robert J.

    1996-01-01

    An investigation has been conducted in the NASA Langley 14- by 22-Foot Subsonic Tunnel in order to further the development of semi-span testing capabilities. A twin engine, energy efficient transport (EET) model with a four-element wing in a takeoff configuration was used for this investigation. Initially a full span configuration was tested and force and moment data, wing and fuselage surface pressure data, and fuselage boundary layer measurements were obtained as a baseline data set. The semi-span configurations were then mounted on the wind tunnel floor, and the effects of fuselage standoff height and shape as well as the effects of the tunnel floor boundary layer height were investigated. The effectiveness of tangential blowing at the standoff/floor juncture as an active boundary-layer control technique was also studied. Results indicate that the semi-span configuration was more sensitive to variations in standoff height than to variations in floor boundary layer height. A standoff height equivalent to 30 percent of the fuselage radius resulted in better correlation with full span data than no standoff or the larger standoff configurations investigated. Undercut standoff leading edges or the use of tangential blowing in the standoff/floor juncture improved correlation of semi-span data with full span data in the region of maximum lift coefficient.

  1. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. Rats play a very important role in bone research, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for the evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45+/-8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were established by injecting tumor cells directly into the femur of SD rats with a needle. In the first step of the experiment, 2x10^5 to 1x10^6 UMR106 cells in 50 microl were injected intraosseously into the median or distal part of the femoral shaft, and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants of 8x10^5 tumor cells or more, which is much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942), indicating that ultrasound is a reliable technique for measuring tumors at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and a high lung metastasis frequency. The present rat osteosarcoma model was shown to be feasible: the take rate was high, surgical mortality was

  2. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification
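 
    As a minimal illustration of the calibration being verified (a toy scalar model in place of the HIV and heat models), a random-walk Metropolis sampler with a flat prior:
 
```python
# Hedged sketch: Bayesian calibration of one parameter of y = theta * x
# against synthetic data, via random-walk Metropolis sampling.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 20)
y_obs = 2.5 * x + rng.normal(0, 0.1, x.size)     # synthetic truth: theta = 2.5

def log_post(theta, sigma=0.1):
    resid = y_obs - theta * x
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood, flat prior

chain, theta = [], 1.0
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05)           # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

samples = np.array(chain[1000:])                 # discard burn-in
print(samples.mean().round(3), samples.std().round(3))
```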

  3. LSST camera heat requirements using CFD and thermal seeing modeling

    Science.gov (United States)

    Sebag, Jacques; Vogiatzis, Konstantinos

    2010-07-01

    The LSST camera is located above the LSST primary/tertiary mirror and in front of the secondary mirror, in the shadow of its central obscuration. Due to this position within the optical path, heat released from the camera has a potential impact on seeing degradation that is larger than traditionally estimated for Cassegrain or Nasmyth telescope configurations. This paper presents the results of thermal seeing modeling combined with Computational Fluid Dynamics (CFD) analyses to define the thermal requirements for the LSST camera. Camera power output fluxes are applied to the CFD model as boundary conditions to calculate the steady-state temperature distribution on the camera and in the air inside the enclosure. Using a previously presented post-processing analysis to calculate the optical seeing based on the mechanical turbulence and temperature variations along the optical path, the optical performance resulting from the seeing is determined. The CFD simulations are repeated for different wind speeds and orientations to identify the worst case scenario and generate an estimate of the seeing contribution as a function of the camera-air temperature difference. Finally, after comparing with the corresponding error budget term, a maximum allowable temperature for the camera is selected.

  4. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    Science.gov (United States)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, in order to overcome the shortcomings of using AD or TRIZ alone, and to solve the problems that currently exist in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model and a requirement plan optimization model. Finally, we construct the computer aided innovation model of weapon equipment requirement demonstration and develop the CAI software for equipment requirement demonstration.

  5. Requirements-Driven Deployment: Customizing the Requirements Model for the Host Environment

    NARCIS (Netherlands)

    Ali, Raian; Dalpiaz, Fabiano; Giorgini, Paolo

    2014-01-01

    Deployment is a main development phase which configures a software system to be ready for use in a certain environment. The ultimate goal of deployment is to enable users to achieve their requirements while using the deployed software. However, requirements are not uniform and differ between deployment environments.

  6. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    Science.gov (United States)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2016-12-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  7. An algorithmic technique for a class of queueing models with packet switching applications

    Science.gov (United States)

    Morris, R. J. T.

    The need to analyze the performance of a class of hardware packet switches leads to consideration of a queueing model consisting of a single server queue with input given by a function of a Markov chain. An algorithmic technique is developed to obtain the joint stationary distribution of the bivariate Markov chain which describes the system. Both the infinite and finite capacity cases are considered. The technique is used to study several design issues which arise when a packet switch is subjected to independent streams of bursty input traffic. Guidelines are given which aid in estimating the queueing space required to keep traffic losses acceptably small. It is seen that the commonly employed heuristic of dimensioning queueing space using the tail of the infinite capacity distribution can lead to considerable error when compared with exact results.
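 
    A small numerical sketch of the finite-capacity case (invented slot probabilities, not the paper's algorithm): the bivariate state is (queue length, phase of the bursty source), the one-step transition matrix is assembled directly, and the loss probability is read from the stationary distribution.
 
```python
# Hedged sketch: discrete-time queue with Markov-modulated (on/off bursty)
# arrivals and finite buffer; stationary distribution of the bivariate
# (queue length, phase) chain via the unit eigenvector of P^T.
import numpy as np

K = 20                     # buffer capacity (packets)
arrive = {0: 0.1, 1: 0.9}  # arrival probability per slot in phase 0 / 1
serve = 0.5                # service completion probability per slot
switch = 0.05              # phase switch probability per slot

states = [(n, ph) for n in range(K + 1) for ph in (0, 1)]
idx = {s: i for i, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for (n, ph), i in idx.items():
    for ph2 in (0, 1):
        p_ph = switch if ph2 != ph else 1 - switch
        a = arrive[ph]
        for dn, p_n in ((+1, a * (1 - serve)),            # arrival, no service
                        (-1, serve * (1 - a)),            # service, no arrival
                        (0, 1 - a * (1 - serve) - serve * (1 - a))):
            n2 = min(max(n + dn, 0), K)                   # blocking at capacity
            P[i, idx[(n2, ph2)]] += p_ph * p_n

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))]); pi /= pi.sum()
loss = sum(pi[idx[(K, ph)]] * arrive[ph] for ph in (0, 1))
print(f"approximate loss probability: {loss:.4f}")
```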

  8. A 2nd generation static model of greenhouse energy requirements (horticern) : a comparison with dynamic models

    CERN Document Server

    Jolliet, O; Munday, G L

    1989-01-01

    Optimisation of a greenhouse and its components requires a suitable model permitting precise determination of its energy requirements. Existing static models are simple but lack precision; dynamic models, though more precise, are unsuitable for use over long periods and difficult to handle in practice. A theoretical study and measurements from the CERN trial greenhouse have allowed the development of a new static model named "HORTICERN", which is precise and easy to use for predicting energy consumption and which takes into account the effects of solar energy, wind and radiative loss to the sky. This paper compares the HORTICERN model with the dynamic models of Bot, Takakura, Van Bavel and Gembloux, and demonstrates that its precision is comparable, with differences on average less than 5%; it is independent of the type of greenhouse (e.g. single or double glazing, Hortiplus, etc.) and climate. The HORTICERN method has been developed for PC use and is proving to be a powerful tool for greenhouse optimisation by research work...
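 
    The flavour of such a static model can be sketched as a period energy balance (the actual HORTICERN correlations are not reproduced here; all coefficients below are placeholders): heating demand is transmission loss plus sky radiation loss minus usable solar gain.
 
```python
# Illustrative static energy balance in the spirit of such models; the
# U-values, solar usability factor and sky-loss term are placeholders.
def heating_demand_kwh(area_m2, u_value, t_in, t_out, hours,
                       solar_kwh, solar_usability=0.7, sky_loss_kwh=0.0):
    transmission = u_value * area_m2 * (t_in - t_out) * hours / 1000.0  # kWh
    return max(transmission + sky_loss_kwh - solar_usability * solar_kwh, 0.0)

# single vs double glazing over a cold week (indicative U-values in W/m^2K)
for name, u in (("single", 6.0), ("double", 3.0)):
    q = heating_demand_kwh(area_m2=1000, u_value=u, t_in=18, t_out=2,
                           hours=168, solar_kwh=1200, sky_loss_kwh=300)
    print(name, round(q), "kWh")
```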

  9. A prospective overview of the essential requirements in molecular modeling for nanomedicine design.

    Science.gov (United States)

    Kumar, Pradeep; Khan, Riaz A; Choonara, Yahya E; Pillay, Viness

    2013-05-01

    Nanotechnology has presented many new challenges and opportunities in the area of nanomedicine design. The issues related to nanoconjugation, nanosystem-mediated targeted drug delivery, transitional stability of nanovehicles, the integrity of drug transport, drug-delivery mechanisms and chemical structural design require a pre-estimated and determined course of assumptive actions with property and characteristic estimations for optimal nanomedicine design. Molecular modeling in nanomedicine encompasses these pre-estimations and predictions of pertinent design data via interactive computographic software. Recently, an increasing amount of research has been reported where specialized software is being developed and employed in an attempt to bridge the gap between drug discovery, materials science and biology. This review provides an assimilative and concise incursion into the current and future strategies of molecular-modeling applications in nanomedicine design and aims to describe the utilization of molecular models and theoretical-chemistry computographic techniques for expansive nanomedicine design and development.

  10. Semantic techniques for enabling knowledge reuse in conceptual modelling

    NARCIS (Netherlands)

    Gracia, J.; Liem, J.; Lozano, E.; Corcho, O.; Trna, M.; Gómez-Pérez, A.; Bredeweg, B.

    2010-01-01

    Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich

  11. The use of multiple respiratory inhalers requiring different inhalation techniques has an adverse effect on COPD outcomes

    Directory of Open Access Journals (Sweden)

    Bosnic-Anticevich S

    2016-12-01

    Sinthia Bosnic-Anticevich,1 Henry Chrystyn,2 Richard W Costello,3,4 Myrna B Dolovich,5 Monica J Fletcher,6 Federico Lavorini,7 Roberto Rodríguez-Roisin,8 Dermot Ryan,9,10 Simon Wan Yau Ming,2 David B Price2,11 1Woolcock Institute of Medical Research, School of Medical Sciences, University of Sydney and Sydney Local Health District, Sydney, NSW, Australia; 2Observational and Pragmatic Research Institute Pte Ltd, Singapore; 3RCSI Medicine, Royal College of Surgeons, 4RCSI Education & Research Centre, Beaumont Hospital, Beaumont, Dublin, Ireland; 5Department of Medicine, Respirology, McMaster University, ON, Canada; 6Education for Health, Warwick, UK; 7Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy; 8Respiratory Institute, Hospital Clinic, Universitat de Barcelona, Barcelona, Spain; 9Optimum Patient Care, Cambridge, 10Centre for Population Health Sciences, University of Edinburgh, Edinburgh, 11Academic Primary Care, University of Aberdeen, Aberdeen, UK Background: Patients with COPD may be prescribed multiple inhalers as part of their treatment regimen, which require different inhalation techniques. Previous literature has shown that the effectiveness of inhaled treatment can be adversely affected by incorrect inhaler technique. Prescribing a range of device types could worsen this problem, leading to poorer outcomes in COPD patients, but the impact is not yet known. Aims: To compare clinical outcomes of COPD patients who use devices requiring similar inhalation technique with those who use devices with mixed techniques. Methods: A matched cohort design was used, with 2 years of data from the Optimum Patient Care Research Database. Matching variables were established from a baseline year of follow-up data, and two cohorts were formed: a “similar-devices cohort” and a “mixed-devices cohort”. COPD-related events were recorded during an outcome year of follow-up. The primary outcome measure was an

  12. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    Science.gov (United States)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite TM Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
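 
    A minimal PDE-style void fill (our sketch, not the Harris LiteSite implementation): iterating the discrete Laplace equation over void cells only lets smooth terrain diffuse in from the void boundary, which is exactly why this approach struggles when the boundary neighbourhood is high-frequency.
 
```python
# Hedged sketch: iterative Laplace (harmonic) fill of a rectangular void in
# a synthetic planar DSM; only void cells are updated each iteration.
import numpy as np

def laplace_fill(dsm, void_mask, iters=500):
    filled = dsm.copy()
    filled[void_mask] = dsm[~void_mask].mean()      # crude initial guess
    for _ in range(iters):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled[void_mask] = avg[void_mask]          # update void cells only
    return filled

# synthetic planar terrain with a 10x10 void punched into it
dsm = np.fromfunction(lambda i, j: 100 + 0.5 * i + 0.2 * j, (50, 50))
mask = np.zeros_like(dsm, bool); mask[20:30, 20:30] = True
dsm[mask] = np.nan

out = laplace_fill(dsm, mask)
# residual at the void centre vs. the true planar surface
print(float(np.abs(out[25, 25] - (100 + 0.5 * 25 + 0.2 * 25))))
```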

  13. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “system of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper will discuss the need for defining requirements for components of ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept that applying information modeling to the business processes used to protect and mitigate potential loss of an enterprise was feasible. These activities would be modeled both pre- and post-incident.

  14. PENGEMBANGAN MODEL INTERNALISASI NILAI KARAKTER DALAM PEMBELAJARAN SEJARAH MELALUI MODEL VALUE CLARIFICATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Nunuk Suryani

    2013-07-01

    This research produced a model for the internalization of character values in history learning through the Value Clarification Technique (VCT), as a revitalization of the role of social studies in the formation of national character. In general, the research consisted of three stages: (1) a pre-survey which identified the current condition of character-value learning in history instruction; (2) development of a model based on the findings of the pre-survey, using the Dick and Carey model; and (3) validation of the model. Model development was carried out with limited trials and extensive testing. The findings of this study lead to the conclusion that the VCT model is effective for internalizing character values in history learning and for increasing the role of history learning in the formation of student character. It can be concluded that the VCT model is effective for improving the quality of the processes and products of learning character values in junior secondary social studies, especially in Surakarta. Keywords: internalization, character values, VCT model, history learning, social studies learning

  15. Establishment of C6 brain glioma models through stereotactic technique for laser interstitial thermotherapy research

    Directory of Open Access Journals (Sweden)

    Jian Shi

    2015-01-01

    Conclusion: The rat C6 brain glioma model established in this study is well suited to the study of LITT for glioma. The infrared thermography technique measured temperature conveniently and effectively; it is noninvasive, and the obtained data can be further processed with the software used in LITT research. To measure deep-tissue temperature, combining thermocouples with infrared thermography would give better results.

  16. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    In the last decade, the study of nanoparticle (NP) systems has become a large and interesting research area due to their novel properties and functionalities, which differ from those of the bulk materials, and also due to their potential applications in different fields. It is vital to understand the behaviour and properties of nano-materials when implementing nanotechnology, controlling their behaviour and designing new material systems with superior performance. Physical characterisation of NPs falls into two main categories, property and structure analysis, where the properties of the NPs cannot be studied without knowledge of their size and structure. The direct measurement of the electrical properties of metal NPs presents a key challenge and necessitates the use of innovative experimental techniques. There have been numerous reports of two/four point resistance measurements of NP films and also of the electrical conductivity of NP films using the interdigitated microarray (IDA) electrode. However, microwave techniques such as the open ended coaxial probe (OCP) and the microwave dielectric resonator (DR) are much more accurate and effective for the electrical characterisation of metallic NPs than other traditional techniques, because they are inexpensive, convenient, non-destructive, contactless, hazardless (i.e. at low power) and require no special sample preparation. This research is the first attempt to determine the microwave properties of Pt and Au NP films, which are appealing materials for nano-scale electronics, using the aforementioned microwave techniques. The ease of synthesis, relatively low cost, unique catalytic activities and control over size and shape were the main considerations in choosing Pt and Au NPs for the present study. The initial phase of this research was to implement and validate the aperture admittance model for the OCP measurement through experiments and 3D full wave simulation using the commercially available Ansoft

  17. Compensation technique for Q-limit enforcements in a constant complex Jacobian power flow model

    Energy Technology Data Exchange (ETDEWEB)

    Raju, V.B.; Bijwe, P.R.; Nanda, J. (Dept. of Electrical Engineering, Indian Inst. of Technology, Delhi, New Delhi 110 016 (IN))

    1990-01-01

    This paper presents a simple and efficient compensation technique to deal with bus-type switchings associated with Q-limit enforcement at voltage-controlled (PV) buses in a constant Jacobian power flow model. The Jacobian is expressed in complex-variable form, resulting in reduced storage requirements compared with the real form of representation. The structure of the Jacobian is preserved irrespective of bus-type switchings while Q-limit enforcements are performed at the PV buses. This feature permits efficient implementation of optimal ordering of buses when factorizing the Jacobian matrix. The Jacobian is held constant throughout the load flow solution process. Incremental secondary injections (ISIs) are provided at the respective PV buses to maintain the specified voltages. The required injections are computed from the proposed compensation model. Results indicate that the proposed technique is quite efficient, as the number of iterations required for convergence, irrespective of bus-type switchings, remains the same as in the unadjusted solution case.

  18. A simple technique for combining simplified models and its application to direct stop production

    CERN Document Server

    Barnard, James; French, Sky; White, Martin

    2014-01-01

    The results of many LHC searches for supersymmetric particles are interpreted using simplified models, in which one fixes the masses and couplings of most sparticles then scans over a few remaining masses of interest. We present a new technique for combining multiple simplified models (that requires no additional simulation) thereby demonstrating the utility and limitations of simplified models in general, and suggesting a simple way of improving LHC search strategies. The technique is used to derive limits on the stop mass that are model independent, modulo some reasonably generic assumptions which are quantified precisely. We find that current ATLAS and CMS results exclude stop masses up to 340 GeV for neutralino masses up to 120 GeV, provided that the total branching ratio into channels other than top-neutralino and bottom-chargino is small, and that there is no mass difference smaller than 10 GeV in the mass spectrum. In deriving these limits we place upper bounds on the branching ratios for complete stop...

  19. A modeling technique for active control design studies with application to spacecraft microvibrations.

    Science.gov (United States)

    Aglietti, G S; Gabriel, S B; Langley, R S; Rogers, E

    1997-10-01

    Microvibrations, at frequencies between 1 and 1000 Hz, generated by on-board equipment, can propagate throughout a spacecraft structure and affect the performance of sensitive payloads. To investigate strategies to reduce these dynamic disturbances by means of active control systems, realistic yet simple structural models are necessary to represent the dynamics of the electromechanical system. In this paper a modeling technique which meets this requirement is presented, and the resulting mathematical model is used to develop some initial results on active control strategies. Attention is focused on a mass-loaded panel subjected to point excitation sources, the objective being to minimize the displacement at an arbitrary output location. Piezoelectric patches acting as sensors and actuators are employed. The equations of motion are derived by using Lagrange's equation with vibration mode shapes as the Ritz functions. The number of sensors/actuators and their locations are variable. The set of equations obtained is then transformed into state variables and some initial controller design studies are undertaken. These are based on standard linear-systems optimal control theory, with the resulting controller implemented by a state observer. It is demonstrated that the proposed modeling technique is a feasible, realistic basis for in-depth controller design and evaluation studies.
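
    As a rough illustration of the controller-design step described in this abstract (state-feedback optimal control implemented through a state observer), the following Python fragment designs an LQR gain and a Luenberger observer for a toy two-mode panel model. The system matrices, weights and piezo influence vectors are placeholder assumptions, not the paper's structural model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy two-mode panel model (states: modal coordinates q1, q2 and their rates).
w1, w2, zeta = 2 * np.pi * 40.0, 2 * np.pi * 110.0, 0.02
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag([w1**2, w2**2]), -2 * zeta * np.diag([w1, w2])]])
B = np.array([[0.0], [0.0], [1.0], [0.5]])   # assumed piezo actuator influence
C = np.array([[1.0, 1.0, 0.0, 0.0]])         # assumed piezo sensor influence

# LQR state-feedback gain: minimize the integral of x'Qx + u'Ru.
Q, R = np.eye(4), np.array([[1e-4]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Observer gain by duality (process/measurement weights are placeholders).
W, V = 10.0 * np.eye(4), np.array([[1e-3]])
Pe = solve_continuous_are(A.T, C.T, W, V)
L = np.linalg.solve(V, C @ Pe).T

# Controller: xhat' = A xhat + B u + L (y - C xhat),  u = -K xhat.
print("K =", np.round(K, 3))
print("L =", np.round(L.ravel(), 3))
```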

  20. Comparison of QuadrapolarTM radiofrequency lesions produced by standard versus modified technique: an experimental model

    Directory of Open Access Journals (Sweden)

    Safakish R

    2017-06-01

    Full Text Available Ramin Safakish Allevio Pain Management Clinic, Toronto, ON, Canada Abstract: Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Published statistics on the causes of back pain vary; the estimate that best matches our patient population is that sacroiliac (SI) joint pain is responsible for LBP in 18%–30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablation of the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter. Smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared, and we also compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques. Keywords: lower back pain, radiofrequency ablation, sacroiliac joint, Quadrapolar radiofrequency ablation

  1. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  2. New Developments and Techniques in Structural Equation Modeling

    CERN Document Server

    Marcoulides, George A

    2001-01-01

    Featuring contributions from some of the leading researchers in the field of SEM, most chapters are written by the author(s) who originally proposed the technique and/or contributed substantially to its development. Content highlights include latent varia

  3. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  4. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    water fraction. The model was intended to employ existing formulas for the medium parameters derived by Maxwell, Bruggeman, and Ramu and Rao. However, to calculate the loss due to the induced eddy currents within the medium in the particular case of the oil continuous phase, other mathematical models expressing (equivalent) medium conductivity and permittivity were required and were developed in this work. Although the resonance frequency of the coil decreased with increasing medium conductivity, this variation was not as significant as the variation of the coil impedance. This raised the question of whether coils having the same self-resonance frequency in different media could be constructed, which was worth investigating because it could simplify the mathematical modelling. This was indeed the case, and coils featuring approximately the same resonance frequency in different media were made. In conclusion, the measuring device based on the HFMFT, which was constructed, investigated and described in this work, can be developed into a practical instrument for monitoring the water fraction in multiphase flows. The overall measurement accuracy when using this technique depends on the analytical models expressing the medium parameters and circumscribing the HFMFT itself. When the mathematical modelling of the HFMFT was finalised, it became clear that many other applications of the technique were also possible. Some of these applications that might be of interest, such as a conductivity meter and a three-component ratio meter, are briefly discussed.
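
    As a hedged illustration of the Bruggeman mixing rule mentioned in this abstract, the sketch below numerically solves the symmetric (three-dimensional) Bruggeman equation for the effective permittivity of a two-phase mixture. The permittivity values and the Python/scipy workflow are illustrative assumptions, not the thesis code.

```python
import numpy as np
from scipy.optimize import brentq

def bruggeman_eps(eps1, eps2, f1):
    """Effective permittivity eps_eff satisfying the symmetric Bruggeman rule:
    f1*(eps1-e)/(eps1+2e) + (1-f1)*(eps2-e)/(eps2+2e) = 0."""
    g = lambda e: (f1 * (eps1 - e) / (eps1 + 2 * e)
                   + (1 - f1) * (eps2 - e) / (eps2 + 2 * e))
    lo, hi = min(eps1, eps2), max(eps1, eps2)   # root is bracketed by the phases
    return brentq(g, lo, hi)

# Illustrative values: water eps ~ 80, oil eps ~ 2.2
for wf in (0.1, 0.3, 0.5):
    print(f"water fraction {wf:.1f}: eps_eff = {bruggeman_eps(80.0, 2.2, wf):.2f}")
```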

  5. Modeling gravitational instabilities in self-gravitating protoplanetary disks with adaptive mesh refinement techniques

    CERN Document Server

    Lichtenberg, Tim

    2015-01-01

    The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. Numerical approaches provide a method to tackle the weaknesses of current planet formation models and are an important tool to close gaps in poorly constrained areas. We present a global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code ENZO. With this setup, we explore the impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison, and study the effects of varying the disk radius. Adopting a marginally stable disk profile (Q_init=1), we validate the...
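
    The marginal-stability condition quoted above (Q_init = 1) refers to the Toomre parameter; a minimal sketch of its evaluation for an illustrative disk ring is given below. All numbers are placeholders, not the paper's simulation setup.

```python
import numpy as np

G = 6.674e-8                                   # gravitational constant, cgs units

def toomre_q(c_s, kappa, sigma):
    """Toomre parameter Q = c_s * kappa / (pi * G * sigma) for a gas disk."""
    return c_s * kappa / (np.pi * G * sigma)

# Illustrative ring at ~50 AU around a 1 M_sun star (placeholder values).
r = 50.0 * 1.496e13                            # radius in cm
kappa = np.sqrt(G * 1.989e33 / r**3)           # Keplerian disk: kappa = Omega
print(f"Q = {toomre_q(3.0e4, kappa, 50.0):.2f}")  # Q of order unity: marginal
```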

  6. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  7. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance… While the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger… the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region…
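
    A minimal sketch of the two-stage workflow this abstract describes (a boosted-tree presence/absence model plus a boosted-tree abundance model) is given below, using scikit-learn's gradient boosting on synthetic placeholder features in place of the study's satellite-derived covariates.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))        # stand-ins for NDVI, land cover, etc.
presence = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 500)) > 0.9
abundance = np.where(presence, 10 * X[:, 0] + 5 * X[:, 2], 0.0)

# Stage 1: presence/absence; stage 2: abundance where ticks occur.
clf = GradientBoostingClassifier(learning_rate=0.01, n_estimators=500).fit(X, presence)
reg = GradientBoostingRegressor(learning_rate=0.01, n_estimators=500).fit(
    X[presence], abundance[presence])

grid = rng.uniform(size=(10, 4))      # stand-in for map pixels
expected = clf.predict_proba(grid)[:, 1] * reg.predict(grid)
print(expected.round(2))              # expected abundance per pixel
```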

  8. Closed loop models for analyzing engineering requirements for simulators

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Kleinman, D.

    1980-01-01

    A closed-loop analytic model incorporating a model of the human pilot (namely, the optimal control model) was developed to allow certain simulation design tradeoffs to be evaluated quantitatively. This model was applied to a realistic flight control problem. The resulting model is used to analyze both overall simulation effects and the effects of individual elements. The results show that, compared to an ideal continuous simulation, the discrete simulation can result in significant performance and/or workload penalties.

  9. Small Body GN&C Research Report: A Guidance and Control Technique for Small-Body Proximity Operations with Guaranteed Guidance Resolvability and Required Thruster Silent Time

    Science.gov (United States)

    Carson, John M., III; Ackmese, A. Behcet

    2005-01-01

    The guidance and control (G&C) algorithms for enabling small-body proximity operations are developed using a model predictive control approach along with a convexification of the governing dynamics, control constraints, and trajectory/state constraints. The open-loop guidance is solved ahead of time, or in a resolvable real-time manner, through the use of Pseudo Way-point Generation (PWG), a technique developed in this research. The PWG scheme ensures required thruster silent times during trajectory maneuvers. The feedback control is implemented to track the PWG trajectories in a manner that guarantees resolvability of the open-loop problem, enabling the G&C to be updated in a model-predictive manner. The schemes incorporate gravity models and thruster firing times into discrete dynamics that are solved as an optimal control problem to minimize fuel consumption or thruster energy expenditure. The optimal control problem is cast as an LMI (Linear Matrix Inequality) problem and then solved through semi-definite programming techniques in a computationally efficient manner that provides convergence and constraint guarantees.
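
    The following sketch illustrates, under simplifying assumptions (a discretized double integrator in place of the small-body dynamics, and an arbitrarily chosen silent-time window), how a fuel-minimizing guidance problem with enforced thruster silent times can be posed as a convex program. It is a generic stand-in, not the paper's PWG formulation.

```python
import numpy as np
import cvxpy as cp

N, dt = 40, 10.0                       # steps and step length [s] (assumed)
A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])  # double integrator

x = cp.Variable((6, N + 1))            # stacked position and velocity
u = cp.Variable((3, N))                # commanded thrust acceleration

cons = [x[:, 0] == np.array([1000.0, 500.0, -200.0, 0, 0, 0]),
        x[:, N] == np.zeros(6)]        # reach the target state at the end
for k in range(N):
    cons.append(x[:, k + 1] == A @ x[:, k] + B @ u[:, k])
    cons.append(cp.norm(u[:, k]) <= 0.05)          # thrust magnitude limit
for k in range(10, 15):                # required thruster silent window (assumed)
    cons.append(u[:, k] == 0)

fuel = sum(cp.norm(u[:, k]) for k in range(N))     # convex fuel proxy
prob = cp.Problem(cp.Minimize(fuel), cons)
prob.solve()
print("status:", prob.status, "fuel proxy:", float(fuel.value))
```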

  10. A model identification technique to characterize the low frequency behaviour of surrogate explosive materials

    Science.gov (United States)

    Paripovic, Jelena; Davies, Patricia

    2016-09-01

    The mechanical response of energetic materials, especially those used in improvised explosive devices, is of great interest for improving understanding of how mechanical excitations may lead to improved detection or detonation. The materials are comprised of crystals embedded in a binder. Microstructural modelling can give insight into the interactions between the binder and the crystals, and thus into the mechanisms that may lead to material heating, but such models need to be validated and also require estimates of constituent material properties. Addressing these issues, nonlinear viscoelastic models of the low-frequency behavior of a surrogate material-mass system undergoing base excitation have been constructed, and experimental data have been collected and used to estimate the order of components in the system model and the parameters in the model. The estimation technique is described and examples of its application to both simulated and experimental data are given. From the estimated system model the material properties are extracted. Material properties are estimated for a variety of materials and the effect of aging on the estimated material properties is shown.

  11. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for off-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  12. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles in CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative
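
    As a small illustration of a prescribed steady shear profile of the kind such methods impose at the inflow, a power-law profile is sketched below; the reference height, speed and exponent are assumed example values, not those of the paper.

```python
import numpy as np

def power_law_shear(z, u_ref=8.0, z_ref=90.0, alpha=0.2):
    """Streamwise inflow velocity u(z) = u_ref * (z / z_ref)**alpha."""
    return u_ref * (np.asarray(z) / z_ref) ** alpha

# Sample the prescribed profile at a few heights spanning the rotor.
print(power_law_shear([30.0, 90.0, 150.0]))
```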

  13. Modeling technique to simulate thermally-induced stress in an assembly consisting of components made of different materials

    Energy Technology Data Exchange (ETDEWEB)

    Raab, W.; Weigand, M.; Renneisen, A.; Hoeerl, S. (Darmstadt, Technische Hochschule (West Germany))

    1991-06-01

    A modeling technique based on the stress-freezing procedure of three-dimensional photoelasticity is presented. This technique makes it possible to freeze the state of stress caused by a restraint in thermal expansion. The method simulates the restraint using the glass-transition effect during the stress-freezing procedure, so that the state of stress can be measured quantitatively. Cured epoxy resins were used as model materials. To achieve the necessary difference between the coefficients of thermal expansion, a prestressing method was used. The experimental stress analysis shows that the prestressing method can be applied when different thermal expansion properties associated with multiple components are required. The modeling technique is illustrated by an example in which the stress in the bottom of a light metal-alloy cylinder head is simulated.

  14. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  15. Current practices of diagnostic techniques requiring the use of ophthalmic drugs among KwaZulu-Natal optometrists*

    Directory of Open Access Journals (Sweden)

    K. P. Mashige

    2009-12-01

    Full Text Available In an endeavour to improve the quality of optometric eye care services in South Africa, the scope of practice was expanded to include the use of ocular diagnostic procedures, such as gonioscopy, that require the use of ophthalmic drugs. The purpose of this study was to assess the practices of specific diagnostic techniques (contact tonometry, 78 D/90 D lens fundus examination, binocular indirect ophthalmoscopy and gonioscopy) requiring the use of ophthalmic drugs among optometrists in KwaZulu-Natal (KZN) province. These specific techniques are referred to as diagnostic procedures in this article. A questionnaire containing information on demography and practice of these specific techniques was sent to all 213 KwaZulu-Natal registered optometrists who owned private practices. One hundred and thirty-two completed questionnaires were received, a response rate of 62%. One hundred and seventeen (55%) of the questionnaires were included in the analysis, of which 55% of the respondents were females and 45% were males. Sixty-two optometrists (53%) were certified in diagnostic procedures, but many procedures were not being practiced. These procedures and the percentage of respondents were: contact tonometry (60%), 78 D/90 D lens fundus examination (60%), binocular indirect ophthalmoscopy (84%) and gonioscopy (78%). Also, among these certified respondents (62 optometrists), a significant proportion (60%) disagreed when asked if they were confident and proficient in performing the relevant diagnostic procedures. Many (61%) agreed that lack of incentives discouraged them from routinely performing the procedures. More than half (58%) agreed that chair time was an important factor in deciding whether or not to perform these diagnostic procedures. Of the total respondents (117), 86% agreed that they were confident about the accuracy of their referrals, and less than half (45%) disagreed that diagnostic procedures should be the sole responsibility of ophthalmologists. Less than

  17. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are being used to construct meta-models in the numerical simulation of structures. In addition to network structures and training algorithms, training samples also greatly affect the accuracy of neural network models. In this paper, some existing main sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial-basis neural network models for two benchmarks: an antenna model and an aircraft model. Results show that uniform design, in which both the number of samples and the mean square error of the network models are considered, is the best sampling technique for neural-network-based meta-model building.
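
    The sketch below illustrates the kind of comparison the paper performs: a space-filling design versus plain random selection for training a radial-basis surrogate, here on a toy analytic response rather than the antenna and aircraft benchmarks.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

f = lambda X: np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])   # stand-in simulator
Xtest = np.random.default_rng(1).uniform(size=(2000, 2))

for name, X in [
    ("Latin hypercube (space-filling)", qmc.LatinHypercube(d=2, seed=0).random(60)),
    ("random selection", np.random.default_rng(0).uniform(size=(60, 2))),
]:
    surrogate = RBFInterpolator(X, f(X))        # radial-basis meta-model
    mse = np.mean((surrogate(Xtest) - f(Xtest)) ** 2)
    print(f"{name}: test MSE = {mse:.2e}")
```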

  18. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  19. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  1. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, there has been the spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real cases scenario, especially speaking about Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues like the impossibility of following the theoretical ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  2. Surrogate-based modeling and dimension reduction techniques for multi-scale mechanics problems

    Institute of Scientific and Technical Information of China (English)

    Wei Shyy; Young-Chang Cho; Wenbo Du; Amit Gupta; Chien-Chou Tseng; Ann Marie Sastry

    2011-01-01

    Successful modeling and/or design of engineering systems often requires one to address the impact of multiple “design variables” on the prescribed outcome. There are often multiple, competing objectives based on which we assess the outcome of optimization. Since accurate, high-fidelity models are typically time consuming and computationally expensive, comprehensive evaluations can be conducted only if an efficient framework is available. Furthermore, informed decisions about the model/hardware's overall performance rely on an adequate understanding of the global, not local, sensitivity of the individual design variables on the objectives. The surrogate-based approach, which involves approximating the objectives as continuous functions of design variables from limited data, offers a rational framework to reduce the number of important input variables, i.e., the dimension of a design or modeling space. In this paper, we review the fundamental issues that arise in surrogate-based analysis and optimization, highlighting concepts, methods, techniques, as well as modeling implications for mechanics problems. To aid the discussion of the issues involved, we summarize recent efforts in investigating cryogenic cavitating flows, active flow control based on dielectric barrier discharge concepts, and lithium (Li)-ion batteries. It is also stressed that many multi-scale mechanics problems can naturally benefit from the surrogate approach for “scale bridging.”

  3. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  4. NEW TECHNIQUE FOR OBESITY SURGERY: INTERNAL GASTRIC PLICATION TECHNIQUE USING INTRAGASTRIC SINGLE-PORT (IGS-IGP) IN EXPERIMENTAL MODEL.

    Science.gov (United States)

    Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo

    2017-01-01

    Bariatric surgery is currently the most effective method of ameliorating the comorbidities of morbidly obese patients with BMI over 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, besides the cost of the devices. To report a new technique for internal gastric plication using an intragastric single-port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. The procedure was performed as follows in ten pigs: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single-port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) after the procedure, the residual volume was measured. Sleeve gastrectomy was also performed in a further ten pigs, and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in the ten swine experiments. The mean procedure time was 27±4 min. It produced a mean gastric volume reduction of 51%, compared with a mean of 90% for sleeve gastrectomy in this swine model. The internal gastric plication technique using an intragastric single-port device required few skills to perform, had low operative time, and achieved a good reduction (51%) of gastric volume in an in vitro experimental model.

  5. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, Michailesco's aggregation method and Moore's truncation) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  6. Bioenergy crop models: Descriptions, data requirements and future challenges

    Energy Technology Data Exchange (ETDEWEB)

    Nair, S. Surendran [University of Tennessee, Knoxville (UTK); Kang, Shujiang [ORNL; Zhang, Xuesong [Pacific Northwest National Laboratory (PNNL); Miguez, Fernando [Iowa State University; Izaurralde, Dr. R. Cesar [Pacific Northwest National Laboratory (PNNL); Post, Wilfred M [ORNL; Dietze, Michael [University of Illinois, Urbana-Champaign; Lynd, L. [Dartmouth College; Wullschleger, Stan D [ORNL

    2012-01-01

    Field studies that address the production of lignocellulosic biomass as a source of renewable energy provide critical data for the development of bioenergy crop models. A literature survey revealed that 14 models have been used for simulating bioenergy crops including herbaceous and woody bioenergy crops, and for crassulacean acid metabolism (CAM) crops. These models simulate field-scale production of biomass for switchgrass (ALMANAC, EPIC, and Agro-BGC), miscanthus (MISCANFOR, MISCANMOD, and WIMOVAC), sugarcane (APSIM, AUSCANE, and CANEGRO), and poplar and willow (SECRETS and 3PG). Two models are adaptations of dynamic global vegetation models and simulate biomass yields of miscanthus and sugarcane at regional scales (Agro-IBIS and LPJmL). Although it lacks the complexity of other bioenergy crop models, the environmental productivity index (EPI) is the only model used to estimate biomass production of CAM (Agave and Opuntia) plants. Except for the EPI model, all models include representations of leaf area dynamics, phenology, radiation interception and utilization, biomass production, and partitioning of biomass to roots and shoots. A few models simulate soil water, nutrient, and carbon cycle dynamics, making them especially useful for assessing the environmental consequences (e.g., erosion and nutrient losses) associated with the large-scale deployment of bioenergy crops. The rapid increase in use of models for energy crop simulation is encouraging; however, detailed information on the influence of climate, soils, and crop management practices on biomass production is scarce. Thus considerable work remains regarding the parameterization and validation of process-based models for bioenergy crops; generation and distribution of high-quality field data for model development and validation; and implementation of an integrated framework for efficient, high-resolution simulations of biomass production for use in planning sustainable bioenergy systems.

  7. DEPENDABLE PRIVACY REQUIREMENTS BY AGILE MODELED LAYERED SECURITY ARCHITECTURES – WEB SERVICES CASE STUDY

    Directory of Open Access Journals (Sweden)

    M.Upendra Kumar

    2011-07-01

    Full Text Available Software engineering covers the definition of processes, techniques and models suitable for its environment to guarantee the quality of results. An important design artifact in any software development project is the software architecture, and a primary goal of the architecture is to capture the architectural design decisions, an important part of which consists of architectural design rules. In a Model-Driven Architecture (MDA) context, the design of the system architecture is captured in the models of the system. MDA is a layered approach for modeling the architectural design rules and uses design patterns to improve the quality of the software system. To include security in the software system, security patterns are introduced that offer security at the architectural level. Moreover, agile software development methods are used to build secure systems. There are different methods defined in agile development, such as extreme programming (XP), scrum, feature-driven development (FDD) and test-driven development (TDD). Agile processing includes the phases of agile analysis, agile design and agile testing. These phases are defined in the layers of MDA to provide security at the modeling level, which ensures that security at the system architecture stage will improve the requirements for that system. Agile-modeled layered security architectures increase the dependability of the architecture in terms of privacy requirements. We validate this with a case study of the dependability of the privacy of Web Services security architectures, which helps towards a secure service-oriented security architecture. In this paper the major part is given to modeling architectural design rules using MDA, so that architects and developers are responsible for automatic enforcement on the detailed design, and the rules are easy to understand and use by both of them. This MDA approach is implemented in use of

  8. Study on modeling of vehicle dynamic stability and control technique

    Institute of Scientific and Technical Information of China (English)

    GAO Yun-ting; LI Pan-feng

    2012-01-01

    In order to enhance vehicle driving stability and safety, which has been a focus of research in the vehicle industry, a new control method was investigated. After analysis of tire motion characteristics and the vehicle stress analysis, a tire model based on the extended Pacejka magic formula, combining longitudinal and lateral motion, was developed, and a nonlinear vehicle dynamic stability model with seven degrees of freedom was built. A new model-reference adaptive control scheme was designed, which takes the slip angle and yaw rate of the vehicle body as the output and feedback variables and adjusts the torque of the vehicle body to control vehicle stability. A simulation model was also built in Matlab/Simulink to evaluate this control scheme. It is made up of several mathematical subsystem models, mainly including the tire model module, the yaw moment calculation module, the center-of-mass parameter calculation module, the tire parameter calculation module, and so forth. The severe lane-change simulation results show that this vehicle model and the model-reference adaptive control method perform excellently.
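
    For reference, a minimal sketch of the baseline Pacejka magic formula that the extended tyre model builds on is given below; the B, C, D, E coefficients are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Normalized tyre force versus slip (slip ratio or slip angle):
    F = D * sin(C * arctan(B*s - E*(B*s - arctan(B*s))))."""
    Bs = B * np.asarray(slip)
    return D * np.sin(C * np.arctan(Bs - E * (Bs - np.arctan(Bs))))

slips = np.linspace(-0.3, 0.3, 7)
print(np.round(magic_formula(slips), 3))   # force peaks near |slip| ~ 0.1
```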

  9. Variational Data Assimilation Technique in Mathematical Modeling of Ocean Dynamics

    Science.gov (United States)

    Agoshkov, V. I.; Zalesny, V. B.

    2012-03-01

    Problems of the variational data assimilation for the primitive equation ocean model constructed at the Institute of Numerical Mathematics, Russian Academy of Sciences are considered. The model has a flexible computational structure and consists of two parts: a forward prognostic model, and its adjoint analog. The numerical algorithm for the forward and adjoint models is constructed based on the method of multicomponent splitting. The method includes splitting with respect to physical processes and space coordinates. Numerical experiments are performed with the use of the Indian Ocean and the World Ocean as examples. These numerical examples support the theoretical conclusions and demonstrate the rationality of the approach using an ocean dynamics model with an observed data assimilation procedure.
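
    For reference, the cost functional minimized in this style of strong-constraint variational (4D-Var) assimilation has the generic form below; the notation is the standard one and is not claimed to match the INM model's exact formulation.

```latex
% Strong-constraint 4D-Var cost functional (generic form):
J(x_0) = \tfrac{1}{2}(x_0 - x_b)^{\mathrm T} B^{-1}(x_0 - x_b)
       + \tfrac{1}{2}\sum_{k=0}^{K} \bigl(H_k(x_k) - y_k\bigr)^{\mathrm T}
         R_k^{-1}\bigl(H_k(x_k) - y_k\bigr),
\qquad x_{k+1} = M_k(x_k),
%
% with the gradient supplied by the adjoint model:
\nabla_{x_0} J = B^{-1}(x_0 - x_b)
              + \sum_{k=0}^{K} \mathbf{M}_{0,k}^{\mathrm T}\,\mathbf{H}_k^{\mathrm T}
                R_k^{-1}\bigl(H_k(x_k) - y_k\bigr)
```

    Here x_b is the background state, B and R_k are the background and observation error covariances, H_k and M_k the observation operator and forward model, and the boldface symbols their tangent-linear/adjoint counterparts.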

  10. Advancing In Situ Modeling of ICMEs: New Techniques for New Observations

    CERN Document Server

    Mulligan, Tamitha; Lynch, Benjamin J

    2012-01-01

    It is generally known that multi-spacecraft observations of interplanetary coronal mass ejections (ICMEs) reveal their three-dimensional structure more clearly than observations made by a single spacecraft. The launch of the STEREO twin observatories in October 2006 has greatly increased the number of multipoint studies of ICMEs in the literature, but this field is still in its infancy. To date, most studies continue to rely on flux rope models based on single-track observations through a vast, multi-faceted structure, which oversimplifies the problem and often hinders interpretation of the large-scale geometry, especially for cases in which one spacecraft observes a flux rope while another does not. In order to tackle these complex problems, new modeling techniques are required. We describe these new techniques and analyze two ICMEs observed at the twin STEREO spacecraft on 22-23 May 2007, when the spacecraft were separated by ~8 degrees. We find a combination of non-force-free flux rope multi-spacecr...

  11. Achieving a System Operational Availability Requirement (ASOAR) Model

    Science.gov (United States)

    1992-07-01

    ASOAR requires only system- and end-item-level input data, not Line Replaceable Unit (LRU) input data. ASOAR usage provides concepts for major logistics...the Corps/Theater ADP Service Center II (CTASC II) to a system operational availability goal. The CTASC II system configuration had many redundant types

  12. Wave Propagation in Fluids Models and Numerical Techniques

    CERN Document Server

    Guinot, Vincent

    2007-01-01

    This book presents the physical principles of wave propagation in fluid mechanics and hydraulics. The mathematical techniques that allow the behavior of the waves to be analyzed are presented, along with existing numerical methods for the simulation of wave propagation. Particular attention is paid to discontinuous flows, such as steep fronts and shock waves, and their mathematical treatment. A number of practical examples are taken from various areas fluid mechanics and hydraulics, such as contaminant transport, the motion of immiscible hydrocarbons in aquifers, river flow, pipe transients an

  13. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tessellation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified… that is extremely sensitive to structural changes in the system. This quantity, which is derived from the edge-length distribution function of the Voronoi polygons, displays a dramatic change at the solid-liquid transition. This is found to be more useful for locating the transition than either the defect density…
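
    A minimal sketch of the quantity described here (the edge-length distribution of a Voronoi tessellation of disk centres) is given below, using scipy.spatial on a toy random configuration; the workflow is an illustrative assumption, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2)
centres = rng.uniform(0.0, 1.0, size=(400, 2))   # toy disk-centre configuration
vor = Voronoi(centres)

lengths = []
for a, b in vor.ridge_vertices:
    if a == -1 or b == -1:            # skip unbounded ridges at the boundary
        continue
    lengths.append(np.linalg.norm(vor.vertices[a] - vor.vertices[b]))
lengths = np.array(lengths)

# A sharply peaked edge-length distribution signals crystalline order;
# a broad one signals the fluid phase.
print(f"mean edge length {lengths.mean():.4f}, std {lengths.std():.4f}")
```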

  14. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.
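
    A hedged sketch of the Monte Carlo end-use approach is given below: per-household end-use parameters are drawn from assumed probability distributions (all values illustrative, not those calibrated to the San Ramon data) and aggregated into a neighborhood demand estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000                                           # simulated households

persons   = rng.poisson(2.6, N) + 1                # household size
shower_l  = rng.normal(60, 15, N).clip(10)         # litres/person/day
toilet_l  = rng.normal(30, 8, N).clip(5)
washer_l  = rng.normal(40, 12, N).clip(5)
outdoor_l = rng.lognormal(mean=4.5, sigma=0.6, size=N)   # skewed outdoor use

daily = persons * (shower_l + toilet_l + washer_l) + outdoor_l
print(f"median household use: {np.median(daily):.0f} L/day")
print(f"90th percentile:      {np.percentile(daily, 90):.0f} L/day")
```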

  15. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    Directory of Open Access Journals (Sweden)

    Pang Jon Fea

    2009-10-01

    Full Text Available This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image.

  16. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. The system repeats this process for every delay value in the standard delay file that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file containing the super generics for every instance of every logic gate in the logic model.

  17. Topographic gravity modeling for global Bouguer maps to degree 2160: Validation of spectral and spatial domain forward modeling techniques at the 10 microGal level

    Science.gov (United States)

    Hirt, Christian; Reußner, Elisabeth; Rexer, Moritz; Kuhn, Michael

    2016-09-01

    Over the past years, spectral techniques have become a standard to model Earth's global gravity field to 10 km scales, with the EGM2008 geopotential model being a prominent example. For some geophysical applications of EGM2008, particularly Bouguer gravity computation with spectral techniques, a topographic potential model of adequate resolution is required. However, current topographic potential models have not yet been successfully validated to degree 2160, and notable discrepancies between spectral modeling and Newtonian (numerical) integration well beyond the 10 mGal level have been reported. Here we accurately compute and validate gravity implied by a degree 2160 model of Earth's topographic masses. Our experiments are based on two key strategies, both of which require advanced computational resources. First, we construct a spectrally complete model of the gravity field which is generated by the degree 2160 Earth topography model. This involves expansion of the topographic potential to the 15th integer power of the topography and modeling of short-scale gravity signals to the ultrahigh degree of 21,600, translating into unprecedented fine scales of 1 km. Second, we apply Newtonian integration in the space domain with high spatial resolution to reduce discretization errors. Our numerical study demonstrates excellent agreement (8 μGal RMS) between gravity from both forward modeling techniques and provides insight into the convergence process associated with spectral modeling of gravity signals at very short scales (a few km). As a key conclusion, our work successfully validates the spectral domain forward modeling technique for degree 2160 topography and increases the confidence in new high-resolution global Bouguer gravity maps.

  18. Liquid propellant analogy technique in dynamic modeling of launch vehicle

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The coupling effects among the lateral, longitudinal and torsional modes of a launch vehicle cannot be taken into account in traditional dynamic analysis using a lateral beam model and a longitudinal spring-mass model individually. To deal with the problem, propellant analogy methods based on the beam model are proposed, and the coupled mass matrix of the liquid propellant is constructed through additional mass in the present study. An integrated model of the launch vehicle for free vibration analysis is then established, by which research on the interactions between the longitudinal and lateral modes, and between the longitudinal and torsional modes, of the launch vehicle can be carried out. Numerical examples for tandem tanks validate the present method and its necessity.

  20. Evaluation of dynamical models: Dissipative synchronization and other techniques

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A. B.

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams—which in turn is much greater than, say, that of correlation dimension—but at a much lower computational cost.

  1. Advanced modeling techniques in application to plasma pulse treatment

    Science.gov (United States)

    Pashchenko, A. F.; Pashchenko, F. F.

    2016-06-01

    Different approaches are considered for the simulation of the plasma pulse treatment process. The assumption of significant nonlinearity in the processes involved in the treatment of oil wells has been confirmed. The method of functional transformations and fuzzy logic methods are suggested for the construction of a mathematical model. It is shown that models based on fuzzy logic are able to provide satisfactory accuracy in the simulation and prediction of the nonlinear processes observed.

  2. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady state actuator disk approximation. By transforming the unsteady rotor problem into a steady state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated, but the computational cost of the overset approach is too large. The overset or chimera method allows the blades loads to be computed from first-principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with efficiencies and limitations of each method.

  3. Model-based sub-Nyquist sampling and reconstruction technique for ultra-wideband (UWB) radar

    Science.gov (United States)

    Nguyen, Lam; Tran, Trac D.

    2010-04-01

    The Army Research Lab has recently developed an ultra-wideband (UWB) synthetic aperture radar (SAR). The radar has been employed to support proof-of-concept demonstration for several concealed target detection programs. The radar transmits and receives short impulses to achieve a wide bandwidth from 300 MHz to 3000 MHz. Since the radar directly digitizes the wide-bandwidth receive signals, the challenge is how to employ relatively slow and inexpensive analog-to-digital (A/D) converters to sample the signals at a rate greater than the minimum Nyquist rate. ARL has developed a sampling technique that allows us to employ inexpensive A/D converters (ADC) to digitize the wide-bandwidth signals. However, this technique still has a major drawback: the longer time required to complete a data acquisition cycle. This in turn translates to lower average power and lower effective pulse repetition frequency (PRF). Compressed Sensing (CS) theory offers a new approach to data acquisition. Within the CS framework, we can reconstruct certain signals or images from far fewer samples than traditional sampling methods require, provided that the signals are sparse in certain domains. However, while the CS framework offers the data compression feature, it still does not address the above-mentioned drawback, namely that the data acquisition must be operated in equivalent time, since many global measurements (obtained from global random projections) are required, as depicted by the sensing matrix Φ in the CS framework. In this paper, we propose a new technique that allows the sub-Nyquist sampling and the reconstruction of the wide-bandwidth data. In this technique, each wide-bandwidth radar data record is modeled as a superposition of many backscatter signals from reflective point targets. The technique is based on direct sparse recovery using a special dictionary containing many time-delayed versions of the transmitted probing signal. We demonstrate via simulated as well as
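
    A minimal sketch of the named recovery idea, under simplifying assumptions (a toy Gaussian pulse, circular delays, orthogonal matching pursuit as the sparse solver); none of this is ARL's implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_delays, k_sparse = 256, 256, 3

        # Transmitted probing pulse (illustrative short impulse).
        t = np.arange(n)
        pulse = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)

        # Dictionary: each column is a time-delayed copy of the pulse.
        D = np.stack([np.roll(pulse, d) for d in range(n_delays)], axis=1)
        D /= np.linalg.norm(D, axis=0)

        # Scene: a few reflective point targets -> sparse coefficient vector.
        coef = np.zeros(n_delays)
        coef[[40, 90, 170]] = [1.0, 0.6, 0.8]
        y = D @ coef + 0.01 * rng.standard_normal(n)   # measured radar record

        def omp(D, y, k):
            """Orthogonal matching pursuit: greedy direct sparse recovery."""
            resid, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(D.T @ resid))))
                x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                resid = y - D[:, support] @ x
            return support, x

        support, amps = omp(D, y, k_sparse)
        print(sorted(support))   # recovered target delays: [40, 90, 170]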

  4. [In-depth interviews and the Kano model to determine user requirements in a burns unit].

    Science.gov (United States)

    González-Revaldería, J; Holguín-Holgado, P; Lumbreras-Marín, E; Núñez-López, G

    To determine the healthcare requirements of patients in a Burns Unit, using qualitative techniques, such as in-depth personal interviews and Kano's methodology. Qualitative methodology using in-depth personal interviews (12 patients), Kano's conceptual model, and the SERVQHOS questionnaire (24 patients). All patients had been hospitalised in the last 12 months in the Burns Unit. Using Kano's methodology, service attributes were grouped by affinity diagrams, and classified as follows: must-be, attractive (unexpected, great satisfaction), and one-dimensional (linked to the degree of functionality of the service). The outcomes were compared with those obtained with the SERVQHOS questionnaire. From the analysis of in-depth interviews, 11 requirements were obtained, referring to hotel aspects, information, need for closer staff relationship, and organisational aspects. The attributes classified as must-be were free television and automatic TV disconnection at midnight. Those classified as attractive were: individual room for more privacy, information about dressing change times in order to avoid anxiety, and additional staff for in-patients. The results were complementary to those obtained with the SERVQHOS questionnaire. In-depth personal interviews provide extra knowledge about patient requirements, complementing the information obtained with questionnaires. With this methodology, a more active patient participation is achieved and the companion's opinion is also taken into account. Copyright © 2016 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  5. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  6. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
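
    The Kalman-filter mitigation mentioned above can be sketched, for a single co-located point, as a precision-weighted update; the values, variances, and function names below are illustrative stand-ins, not the paper's method in full.

        def fuse(model, obs, var_model, var_obs):
            """Precision-weighted (Kalman-style) fusion of a model prediction
            and an observation at a co-located point."""
            gain = var_model / (var_model + var_obs)   # trust the less uncertain source
            fused = model + gain * (obs - model)
            var_fused = (1.0 - gain) * var_model
            return fused, var_fused

        model_o3, obs_o3 = 62.0, 74.0   # ppb, illustrative ozone values
        fused, var = fuse(model_o3, obs_o3, var_model=9.0, var_obs=4.0)
        print(f"fused ozone {fused:.1f} ppb (variance {var:.2f})")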

  7. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  8. High resolution weather data for urban hydrological modelling and impact assessment, ICT requirements and future challenges

    Science.gov (United States)

    ten Veldhuis, Marie-claire; van Riemsdijk, Birna

    2013-04-01

    The presentation will highlight ICT-related requirements and limitations in high resolution urban hydrological modelling and analysis. Further ICT challenges arise in the provision of high resolution radar data for diverging information needs, as well as in combination with other data sources in the urban environment. Different types of information are required for such diverse activities as operational flood protection, traffic management, large event organisation, business planning in shopping districts and restaurants, and timing of family activities. These different information needs may require different configurations and data processing for radars and other data sources. An ICT challenge is to develop techniques for deciding how to automatically respond to these diverging information needs (e.g., through (semi-)automated negotiation). Diverse activities also provide a wide variety of information resources that can supplement traditional networks of weather sensors, such as rain sensors on cars and social media. Another ICT challenge is how to combine data from these different sources for answering a particular information need. Examples will be presented of solutions that are currently being explored.

  9. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    Science.gov (United States)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused towards carrying out a pragmatic qualitative analysis of various models and approaches of requirements negotiations (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers; the first tier refers to the weighted scoring model, while the second tier focuses on development of SWOT matrices on the basis of the findings of the weighted scoring model for selecting an appropriate requirements negotiation model. Finally the results are simulated with the help of statistical pie charts. On the basis of the simulated results of prevalent models and approaches of negotiations, a unified approach for requirements negotiations and stakeholder collaborations is proposed, where the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters like MBTI, opportunity analysis etc.
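
    The first tier's weighted scoring step amounts to a weighted sum of criterion scores. A toy sketch follows; the criteria, weights, and candidate negotiation models are hypothetical, not the paper's data.

        # Tier-1 weighted scoring of candidate requirements-negotiation models.
        # Criterion names, weights, and scores are illustrative placeholders.
        weights = {"stakeholder_coverage": 0.4, "tool_support": 0.2,
                   "conflict_resolution": 0.3, "learning_curve": 0.1}
        scores = {                        # each model scored 1-5 per criterion
            "Win-Win":    {"stakeholder_coverage": 5, "tool_support": 4,
                           "conflict_resolution": 4, "learning_curve": 3},
            "EasyWinWin": {"stakeholder_coverage": 4, "tool_support": 5,
                           "conflict_resolution": 3, "learning_curve": 4},
        }
        for name, s in scores.items():
            total = sum(weights[c] * s[c] for c in weights)
            print(f"{name:12s} weighted score = {total:.2f}")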

  10. Analytical Micromechanics Modeling Technique Developed for Ceramic Matrix Composites Analysis

    Science.gov (United States)

    Min, James B.

    2005-01-01

    Ceramic matrix composites (CMCs) promise many advantages for next-generation aerospace propulsion systems. Specifically, carbon-reinforced silicon carbide (C/SiC) CMCs enable higher operational temperatures and provide potential component weight savings by virtue of their high specific strength. These attributes may provide systemwide benefits. Higher operating temperatures lessen or eliminate the need for cooling, thereby reducing both fuel consumption and the complex hardware and plumbing required for heat management. This, in turn, lowers system weight, size, and complexity, while improving efficiency, reliability, and service life, resulting in overall lower operating costs.

  11. Requirements and Problems in Parallel Model Development at DWD

    Directory of Open Access Journals (Sweden)

    Ulrich Schättler

    2000-01-01

    Full Text Available Nearly 30 years after introducing the first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintainability. Also the organization and administration of the work done by developers from different teams and institutions is more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and the effects on the development are discussed.

  12. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power Electronics,"…

  13. Application of Yamaguchi's technique for the Rescorla-Wagner model.

    Science.gov (United States)

    Yamaguchi, Makoto

    2007-12-01

    Yamaguchi in 2006 solved for the first time a problem concerning a 1972 mathematical model of classical conditioning by Rescorla and Wagner. That derivation is not an isolated contribution. Here it is shown that the same line of derivation can be successfully applied to another experimental situation involving more stimuli.

  14. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  15. A review of propeller modelling techniques based on Euler methods

    NARCIS (Netherlands)

    Zondervan, G.J.D.

    1998-01-01

    Future generation civil aircraft will be powered by new, highly efficient propeller propulsion systems. New, advanced design tools like Euler methods will be needed in the design process of these aircraft. This report describes the application of Euler methods to the modelling of flowfields generate

  16. A comparison of positive reinforcement training techniques in owl and squirrel monkeys: time required to train to reliability.

    Science.gov (United States)

    Rogge, Jessica; Sherenco, Katrina; Malling, Rachel; Thiele, Erica; Lambeth, Susan; Schapiro, Steve; Williams, Lawrence

    2013-01-01

    Positive reinforcement training (PRT) techniques enhance the psychological well-being of nonhuman primates by increasing the animal's control over his or her environment and desensitizing the animal to stressful stimuli. However, the literature on PRT in neotropical primates is limited. Here PRT data from owl monkeys and squirrel monkeys are presented, including the length of time to train subjects to target, present hand, and present foot, important responses that can be used to aid in health inspection and treatment. A high percentage of the squirrel and owl monkeys were successfully trained on target and present hand. Present foot, a less natural response, was harder to train and maintain. Although squirrel monkeys did learn to target significantly faster than owl monkeys, the two genera did not differ in time to train on subsequent behavior. These data demonstrate that although owl monkeys may require slightly more time to acclimate to a PRT program, it is still possible to establish a PRT program with neotropical primates, and once animals have been introduced to the program, they can learn new responses in relatively few short sessions.

  17. Validation Techniques of the Intern Models for Credit Risk

    Directory of Open Access Journals (Sweden)

    Nicolae Dardac

    2006-09-01

    Provided the development of complex methodologies of risk measurement and management, on a large scale, by credit institutions, the simple and static rules of the first accord have become less and less relevant during the last years. Thus, the need to set up an own-funds adequacy framework which is much more risk sensitive and provides incentives to credit institutions concerning the improvement of risk measurement and management systems was met by approval of the Basel II Accord, which will, therefore, lead to the strengthening of financial stability. The revision of the Accord was mainly focused on the increase of risk analysis and internal measurement, and the changes made to their estimation allow banks to create their own methodological framework to calculate capital requirements (also considering each credit institution's risk appetite).

  18. Modelling and Simulation for Requirements Engineering and Options Analysis

    Science.gov (United States)

    2010-05-01

    externalize their mental model of the assumed solution for critique and correction by others, and whether or not this would assist in ensuring that

  1. Thermal Modeling and Feedback Requirements for LIFE Neutronic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Seifried, J E

    2009-07-15

    An initial study is performed to determine how temperature considerations affect LIFE neutronic simulations. Among other figures of merit, the isotopic mass accumulation, thermal power, tritium breeding, and criticality are analyzed. Possible fidelities of thermal modeling and degrees of coupling are explored. Lessons learned from switching and modifying nuclear datasets are communicated.

  2. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. The correlation analysis between GPS S4 and ionosonde drift velocity (hmF2 and foF2) data has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low latitude regions.
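
    A minimal sketch of the NN-PSO idea, assuming a tiny feed-forward network whose weights are searched by global-best PSO; the synthetic data, network size, and PSO constants are illustrative, not the paper's configuration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy stand-ins for (foF2, hmF2) inputs and measured S4 targets.
        X = rng.uniform(0.0, 1.0, size=(200, 2))
        y = 0.3 + 0.4 * np.sin(3 * X[:, 0]) * X[:, 1]   # synthetic S4 index

        def forward(w, X):
            """Tiny 2-4-1 feed-forward net; w is a flat parameter vector."""
            W1, b1 = w[:8].reshape(2, 4), w[8:12]
            W2, b2 = w[12:16].reshape(4, 1), w[16]
            h = np.tanh(X @ W1 + b1)
            return (h @ W2).ravel() + b2

        def mse(w):
            return np.mean((forward(w, X) - y) ** 2)

        # Plain global-best PSO over the 17 network weights.
        n_particles, dim = 30, 17
        pos = rng.normal(0, 1, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(300):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            f = np.array([mse(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()
        print("trained MSE:", mse(gbest))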

  3. Detecting feature interactions in Web services with model checking techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As a platform-independent software system, a Web service is designed to offer interoperability among diverse and heterogeneous applications. With the introduction of service composition in Web service creation, various message interactions among the atomic services result in a problem resembling the feature interaction problem in the telecommunication area. This article defines the problem as feature interaction in Web services and proposes a model checking-based detection method. In the method, the Web service description is translated into Promela, the input language of the model checker Simple Promela Interpreter (SPIN), and the specific properties, expressed as linear temporal logic (LTL) formulas, are formulated according to our classification of feature interaction. Then, SPIN is used to check these specific properties to detect feature interaction in Web services.

  4. A Memory Insensitive Technique for Large Model Simplification

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Silva, C

    2001-08-07

    In this paper we propose three simple, but significant improvements to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [20] which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of computer systems. The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the requirement of having enough memory to hold (even) the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses. Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the "representative vertex" of a grid cell to an optimal position inside the cell.

  5. A Model-Following Technique for Insensitive Aircraft Control Systems.

    Science.gov (United States)

    1981-01-01

    Harvey and Pope [13] and Vinkler [30] compared several different methods in their works, while Shenkar [26] and Ashkenazi [2] extended the most promising... Following for Insensitive Control works, let us consider the simple, first-order system used by Shenkar [26]. The plant is described by ẋ = -(1 + Δr)x + u... representative of the methods of Vinkler, Ashkenazi, and Shenkar), and Model Following for Insensitive Control (MFIC). For the LQR design, we assume that our

  6. Mathematical Formulation Requirements and Specifications for the Process Models

    Energy Technology Data Exchange (ETDEWEB)

    Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

    2010-11-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments

  7. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Full Text Available Objective Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2 followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: (1) Syringe irrigation; (2) Syringe irrigation with use of an apical file; (3) Syringe irrigation with added 30 s of passive ultrasonic irrigation (PUI); or (4) Syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure residual Ca(OH)2 left in the root. Results No method completely removed all residual Ca(OH)2. The addition of 30 s PUI with or without apical file use removed Ca(OH)2 significantly better than irrigation alone. Conclusions This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.
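
    The quantification step reduces to fitting a standard curve from the calibration roots and inverting it for the treated roots. A sketch with made-up numbers (not the study's data):

        import numpy as np

        # Standard curve: known Ca(OH)2 masses (mg) vs. titrant volume (mL).
        # Values are illustrative, not the study's measurements.
        known_mg = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
        titre_ml = np.array([0.1, 1.2,  2.4,  4.9,  9.8])

        slope, intercept = np.polyfit(titre_ml, known_mg, 1)   # linear fit

        def residual_mg(measured_ml):
            """Convert a titration volume into residual Ca(OH)2 mass."""
            return slope * measured_ml + intercept

        print(f"{residual_mg(3.1):.1f} mg Ca(OH)2 left after irrigation")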

  8. High-resolution global tomography: A full-wave technique for forward and inverse modeling

    Science.gov (United States)

    Nissen-Meyer, Tarje; Sigloch, Karin; Fournier, Alexandre

    2010-05-01

    In recent years, seismology has greatly benefitted from significant progress in digital data collection and processing, accurate numerical methods for wave propagation, and high-performance computing to explore crucial scales of interest in both data and model spaces. We will present a full-wave technique to address the seismic forward and inverse problem at the global scale, with a specific focus on diffracted waves in the lowermost mantle: Our 2D spectral-element method tackles 3D wave propagation through spherically symmetric background models down to seismic frequencies of 1 Hz and delivers the wavefields necessary to construct sensitivity kernels. This specific approach distinguishes itself from the adjoint method in that it requires no knowledge about data structure or observables at the time of forward modeling by means of storing entire reference space-time wavefields. To obtain a direct view of the interconnection between surface displacements and earth structure, we examine the time-dependent sensitivity of the seismic signal to 3D model perturbations. Being highly sensitive to such parameters as epicentral distance, earthquake radiation pattern, depth, frequency, receiver components and time windows, this effort suggests criteria for data selection to optimally illuminate a specific region within the earth. As shown with core-diffracted P-waves, we measure and model our observables (e.g. traveltimes, amplitudes) in multiple-frequency passbands, thereby increasing robustness of the inverse problem and path coverage. This allows us to selectively draw only upon frequency bands with high signal-to-noise ratio. We discuss the selection and usability of data for such a Pdiff tomographic setting, coverage maps and target regions. We also touch upon the validity of a 1D reference model and quantify the applicability range of the first-order Born approximation.

  9. Computable General Equilibrium Techniques for Carbon Tax Modeling

    Directory of Open Access Journals (Sweden)

    Al-Amin

    2009-01-01

    Full Text Available Problem statement: In the absence of proper environmental models, environmental pollution is now a serious problem in many developing countries, particularly in Malaysia. Empirical studies worldwide reveal that imposition of a carbon tax significantly decreases carbon emissions and does not dramatically reduce economic growth. To our knowledge there has not been any research done to simulate the economic impact of emission control policies in Malaysia. Approach: Therefore this study developed an environmental computable general equilibrium model for Malaysia and investigated carbon tax policy responses in the economy, applying exogenously different degrees of carbon tax into the model. Three simulations were carried out using a Malaysian social accounting matrix. Results: The carbon tax policy illustrated that a 1.21% reduction of carbon emission reduced the nominal GDP by 0.82% and exports by 2.08%; a 2.34% reduction of carbon emission reduced the nominal GDP by 1.90% and exports by 3.97%; and a 3.40% reduction of carbon emission reduced the nominal GDP by 3.17% and exports by 5.71%. Conclusion/Recommendations: Imposition of successively higher carbon taxes increased government revenue from the baseline by 26.67, 53.07 and 79.28% respectively. However, fixed capital investment increased in scenario 1a by 0.43% and decreased in scenarios 1b and 1c by 0.26 and 1.79% respectively from the baseline. According to our policy findings, policy makers should consider the first (scenario 1a) carbon tax policy. This policy achieves reasonably good environmental impacts without losing the investment, fixed capital investment, investment share of nominal GDP and government revenue.

  10. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  11. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection

    Directory of Open Access Journals (Sweden)

    Declan T. Delaney

    2016-12-01

    Full Text Available No single network solution for Internet of Things (IoT networks can provide the required level of Quality of Service (QoS for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.
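
    The selection step of such a framework reduces to evaluating each candidate solution's performance model on the current environment and taking the best prediction. A minimal sketch with hypothetical solution names and linear stand-in models:

        # Each candidate network solution carries a performance model
        # f(environment) -> predicted metric. Names and models are
        # illustrative placeholders, not the article's trained models.

        def model_rpl(env):      return 0.9 - 0.02 * env["node_count"] / 10
        def model_flooding(env): return 0.8 - 0.05 * env["node_count"] / 10

        solutions = {"RPL": model_rpl, "Flooding": model_flooding}

        def select(env, metric_models):
            """Pick the solution whose model predicts the best delivery ratio."""
            return max(metric_models, key=lambda s: metric_models[s](env))

        env = {"node_count": 50}
        print("chosen solution:", select(env, solutions))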

  12. Modelling of Evaporator in Waste Heat Recovery System using Finite Volume Method and Fuzzy Technique

    Directory of Open Access Journals (Sweden)

    Jahedul Islam Chowdhury

    2015-12-01

    Full Text Available The evaporator is an important component in the Organic Rankine Cycle (ORC)-based Waste Heat Recovery (WHR) system since the effective heat transfer of this device reflects on the efficiency of the system. When the WHR system operates under supercritical conditions, the heat transfer mechanism in the evaporator is unpredictable due to the change of thermo-physical properties of the fluid with temperature. Although the conventional finite volume model can successfully capture those changes in the evaporator of the WHR process, the computation time for this method is high. To reduce the computation time, this paper develops a new fuzzy-based evaporator model and compares its performance with the finite volume method. The results show that the fuzzy technique can be applied to predict the output of the supercritical evaporator in the waste heat recovery system and can significantly reduce the required computation time. The proposed model, therefore, has the potential to be used in real-time control applications.
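
    A minimal sketch of the fuzzy idea, assuming a zero-order Sugeno system with triangular memberships; the rule base, membership ranges, and consequents are illustrative, not the paper's model.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c] and peak b."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def fuzzy_outlet_temp(heat_kw):
            """Zero-order Sugeno sketch: map evaporator heat input (0-150 kW
            assumed) to outlet temperature via three illustrative rules."""
            mu = np.array([tri(heat_kw, -50, 0, 50),     # LOW
                           tri(heat_kw, 20, 50, 80),     # MEDIUM
                           tri(heat_kw, 50, 100, 150)])  # HIGH
            consequents = np.array([90.0, 130.0, 180.0])  # deg C per rule
            return float(np.sum(mu * consequents) / np.sum(mu))

        for q in (10, 45, 85):
            print(q, "kW ->", round(fuzzy_outlet_temp(q), 1), "C")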

  13. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    Science.gov (United States)

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  14. Rotator cuff repair: a review of surgical techniques, animal models, and new technologies under development.

    Science.gov (United States)

    Deprés-Tremblay, Gabrielle; Chevrier, Anik; Snow, Martyn; Hurtig, Mark B; Rodeo, Scott; Buschmann, Michael D

    2016-12-01

    Rotator cuff tears are the most common musculoskeletal injury occurring in the shoulder. Current surgical repair fails to heal in 20% to 95% of patients, depending on age, size of the tear, smoking, time of repair, tendon quality, muscle quality, healing response, and surgical treatments. These problems are worsened by the limited healing potential of injured tendons attributed to the presence of degenerative changes and relatively poor vascularity of the cuff tendons. Development of new techniques to treat rotator cuff tears requires testing in animal models to assess safety and efficacy before clinical testing. Hence, it is important to evaluate appropriate animal models for rotator cuff research with degeneration of tendons, muscular atrophy, and fatty infiltration similar to humans. This report reviews current clinical treatments and preclinical approaches for rotator cuff tear repair. The review will focus on current clinical surgical treatments, new repair strategies under clinical and preclinical development, and will also describe different animal models available for rotator cuff research. These findings and future directions for rotator cuff tear repair will be discussed. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  15. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    Science.gov (United States)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  16. Techniques and Technology to Revise Content Delivery and Model Critical Thinking in the Neuroscience Classroom.

    Science.gov (United States)

    Illig, Kurt R

    2015-01-01

    Undergraduate neuroscience courses typically involve highly interdisciplinary material, and it is often necessary to use class time to review how principles of chemistry, math and biology apply to neuroscience. Lecturing and Socratic discussion can work well to deliver information to students, but these techniques can lead students to feel more like spectators than participants in a class, and do not actively engage students in the critical analysis and application of experimental evidence. If one goal of undergraduate neuroscience education is to foster critical thinking skills, then the classroom should be a place where students and instructors can work together to develop them. Students learn how to think critically by directly engaging with course material, and by discussing evidence with their peers, but taking classroom time for these activities requires that an instructor find a way to provide course materials outside of class. Using technology as an on-demand provider of course materials can give instructors the freedom to restructure classroom time, allowing students to work together in small groups and to have discussions that foster critical thinking, and allowing the instructor to model these skills. In this paper, I provide a rationale for reducing the use of traditional lectures in favor of more student-centered activities, I present several methods that can be used to deliver course materials outside of class and discuss their use, and I provide a few examples of how these techniques and technologies can help improve learning outcomes.

  17. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
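
    A minimal sketch of the two-stage idea, with cheap stand-in log-likelihoods in place of the coarse- and fine-scale flow simulations; the screening stage runs the fine model only for proposals the coarse model accepts.

        import numpy as np

        rng = np.random.default_rng(2)

        def fine_loglike(theta):      # expensive resolved simulation (stand-in)
            return -0.5 * (theta - 1.0) ** 2 / 0.3 ** 2

        def coarse_loglike(theta):    # cheap upscaled simulation (stand-in, biased)
            return -0.5 * (theta - 0.9) ** 2 / 0.35 ** 2

        def two_stage_mcmc(n, sigma=0.5):
            """Two-stage Metropolis-Hastings: coarse screening, fine correction."""
            theta = 0.0
            lc, lf = coarse_loglike(theta), fine_loglike(theta)
            chain, fine_calls = [], 0
            for _ in range(n):
                prop = theta + sigma * rng.standard_normal()
                lc_p = coarse_loglike(prop)
                # Stage 1: accept/reject with the coarse model only.
                if np.log(rng.random()) < lc_p - lc:
                    fine_calls += 1
                    lf_p = fine_loglike(prop)
                    # Stage 2: fine-model correction preserves the target law.
                    if np.log(rng.random()) < (lf_p - lf) - (lc_p - lc):
                        theta, lc, lf = prop, lc_p, lf_p
                chain.append(theta)
            return np.array(chain), fine_calls

        chain, calls = two_stage_mcmc(5000)
        print(f"posterior mean {chain.mean():.2f}, fine-model calls {calls}/5000")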

  18. Validation Techniques of the Intern Models for Credit Risk

    Directory of Open Access Journals (Sweden)

    Bogdan Moinescu

    2006-11-01

    Full Text Available The new own-funds adequacy framework, officially named “International Convergence of Capital Measurement and Capital Standards”, describes the most important benchmark framework for micro-prudential supervision at the moment. The publication of the final text in June 2004, after five years of deliberations, represents the result of multiple analyses and comments provided by all interested parties: banking supervision authorities, associations and credit institutions. Provided the development of complex methodologies of risk measurement and management, on a large scale, by credit institutions, the simple and static rules of the first accord have become less and less relevant during the last years. Thus, the need to set up an own-funds adequacy framework which is much more risk sensitive and provides incentives to credit institutions concerning the improvement of risk measurement and management systems was met by approval of the Basel II Accord, which will, therefore, lead to the strengthening of financial stability. The revision of the Accord was mainly focused on the increase of risk analysis and internal measurement, and the changes made to their estimation allow banks to create their own methodological framework to calculate capital requirements (also considering each credit institution's risk appetite).

  19. Modeling Crop Water Requirement at Regional Scales in the Context of Integrated Hydrology

    Science.gov (United States)

    Dogrul, E. C.; Kadir, T.; Brush, C. F.; Chung, F. I.

    2009-12-01

    In developed watersheds, the stresses on surface and subsurface water resources are generally created by groundwater pumping and stream flow diversions to satisfy agricultural and urban water requirements. The application of pumping and diversion to meet these requirements also affects the surface and subsurface water system through recharge of the aquifer and surface runoff back into the streams. The agricultural crop water requirement is a function of climate, soil and land surface physical properties as well as land use management practices which are spatially distributed and evolve in time. In almost all modeling studies pumping and diversions are specified as predefined stresses and are not included in the simulation as an integral and dynamic component of the hydrologic cycle that depend on other hydrologic components. To address this issue, California Department of Water Resources has been developing a new root zone module that can either be used as a stand-alone modeling tool or can be linked to other stream and aquifer modeling tools. The tool, named Integrated Water Flow Model Demand Calculator (IDC), computes crop water requirements under user-specified climatic, land-use and irrigation management settings at regional scales, and routes the precipitation and irrigation water through the root zone using physically-based methods. In calculating the crop water requirement, IDC uses an irrigation-scheduling type approach where irrigation is triggered when the soil moisture falls below a user-specified level. Water demands for managed wetlands, urban areas, and agricultural crops including rice, can either be computed by IDC or specified by the user depending on the requirements and available data for the modeling project. For areas covered with native vegetation water demand is not computed and only precipitation is routed through the root zone. Many irrigational practices such as irrigation for leaching, re-use of irrigation return flow, flooding and
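
    The irrigation-scheduling idea reduces to a soil-moisture bucket with a refill trigger. A sketch with illustrative parameter values, not IDC's actual formulation:

        # Minimal soil-moisture bucket: irrigation is triggered when moisture
        # falls below a management threshold. All values are illustrative.
        field_capacity, threshold, root_zone_mm = 1.0, 0.5, 150.0
        et_mm, precip_mm = [4, 5, 6, 5, 4, 6, 7], [0, 0, 12, 0, 0, 0, 3]

        moisture = 80.0                              # initial storage, mm
        for day, (et, p) in enumerate(zip(et_mm, precip_mm), 1):
            moisture = min(moisture + p - et, field_capacity * root_zone_mm)
            irrigation = 0.0
            if moisture < threshold * root_zone_mm:  # trigger refill
                irrigation = field_capacity * root_zone_mm - moisture
                moisture += irrigation
            print(f"day {day}: moisture {moisture:5.1f} mm, "
                  f"irrigation {irrigation:4.1f} mm")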

  1. Comparative Studies of Clustering Techniques for Real-Time Dynamic Model Reduction

    CERN Document Server

    Hogan, Emilie; Halappanavar, Mahantesh; Huang, Zhenyu; Lin, Guang; Lu, Shuai; Wang, Shaobu

    2015-01-01

    Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis would not be adequate to capture power system dynamic behaviors, especially the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. Various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...
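
    As an illustration of measurement-based grouping (k-means here, a simple stand-in for the graph and evolutionary clustering studied in the paper), synthetic phasor-like signals are clustered into coherent generator groups:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)

        # Synthetic phasor-like signals: 9 generators in 3 coherent groups.
        t = np.linspace(0.0, 5.0, 500)
        base = [np.sin(2 * np.pi * f * t) for f in (0.4, 0.7, 1.1)]
        signals = np.array([base[g] + 0.05 * rng.standard_normal(t.size)
                            for g in (0, 0, 0, 1, 1, 1, 2, 2, 2)])

        # Cluster generators by waveform similarity; one representative per
        # cluster then stands in for its group in the reduced model.
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(signals)
        print("coherent groups:", labels)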

  2. Modelling and Design of a Microstrip Band-Pass Filter Using Space Mapping Techniques

    CERN Document Server

    Tavakoli, Saeed; Mohanna, Shahram

    2010-01-01

    Determination of design parameters based on electromagnetic simulations of microwave circuits is an iterative and often time-consuming procedure. Space mapping is a powerful technique to optimize such complex models by efficiently substituting accurate but expensive electromagnetic models, fine models, with fast and approximate models, coarse models. In this paper, we apply two space-mapping techniques, explicit space mapping as well as implicit and response residual space mapping, to a case study application, a microstrip band-pass filter. First, we model the case study application and optimize its design parameters using the explicit space mapping modelling approach. Then, we use the implicit and response residual space mapping approach to optimize the filter's design parameters. Finally, the performance of each design method is evaluated. It is shown that the use of the above-mentioned techniques leads to achieving satisfactory design solutions with a minimum number of computationally expensive fine model eval...
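
    A minimal sketch of the space-mapping loop on a one-dimensional toy problem: monotone stand-in responses replace the fine EM and coarse circuit models, and each iteration performs one fine evaluation, a parameter extraction, and an aggressive space-mapping update. This is an illustration of the general technique, not the paper's explicit or implicit variants.

        import numpy as np
        from scipy.optimize import brentq

        # Stand-ins: monotone responses of an expensive "fine" EM model and
        # a cheap, misaligned "coarse" circuit model of a filter parameter.
        fine   = lambda x: np.tanh(x - 1.3)     # true response
        coarse = lambda z: np.tanh(z - 1.0)     # fast surrogate

        spec = 0.0                              # design goal: response == spec
        z_star = brentq(lambda z: coarse(z) - spec, -5, 5)  # coarse optimum

        x = z_star                              # initial fine design
        for it in range(4):
            r = fine(x)                                    # expensive evaluation
            p = brentq(lambda z: coarse(z) - r, -5, 5)     # parameter extraction
            x = x + (z_star - p)                           # space-mapping update
            print(f"iter {it}: x = {x:.4f}, |fine - spec| = {abs(fine(x) - spec):.2e}")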

  3. Reducing the impact of a desalination plant using stochastic modeling and optimization techniques

    Science.gov (United States)

    Alcolea, Andres; Renard, Philippe; Mariethoz, Gregoire; Bertone, François

    2009-02-01

    Water is critical for economic growth in coastal areas. In this context, desalination has become an increasingly important technology over the last five decades. It often has environmental side effects, especially when the input water is pumped directly from the sea via intake pipelines. However, it is generally more efficient and cheaper to desalt brackish groundwater from beach wells than to desalt seawater. Natural attenuation is also gained and hazards due to anthropogenic pollution of seawater are reduced. In order to minimize allocation and operational costs and impacts on groundwater resources, an optimum pumping network is required. Optimization techniques are often applied to this end. Because of aquifer heterogeneity, designing the optimum pumping network demands reliable characterizations of aquifer parameters. An optimum pumping network in a coastal aquifer in Oman, where a desalination plant currently pumps brackish groundwater at a rate of 1200 m3/h for a freshwater production of 504 m3/h (insufficient to satisfy the growing demand in the area), was designed using stochastic inverse modeling together with optimization techniques. A Monte Carlo analysis of 200 simulations of transmissivity and storage coefficient fields, conditioned to the response to stresses of tidal fluctuation and three long-term pumping tests, was performed. These simulations are physically plausible and fit the available data well. Simulated transmissivity fields are used to design the optimum pumping configuration required to increase the current pumping rate to 9000 m3/h, for a freshwater production of 3346 m3/h (more than six times larger than the existing one). For this task, new pumping wells need to be sited and their pumping rates defined. These unknowns are determined by a genetic algorithm that minimizes a function accounting for: (1) drilling, operational and maintenance costs, (2) target discharge and minimum drawdown (i.e., minimum aquifer
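
    A minimal sketch of the genetic-algorithm step, assuming candidate solutions are vectors of pumping rates and a stand-in cost with a discharge-target penalty; all coefficients are illustrative, not the study's cost function.

        import numpy as np

        rng = np.random.default_rng(4)

        n_wells, target_total = 6, 9000.0          # m3/h, target from the abstract
        drill_cost, op_cost = 1.0, 0.002           # illustrative cost coefficients

        def fitness(rates):
            """Cost = drilling (wells actually used) + operation + penalty for
            missing the target discharge; a stand-in for the paper's function."""
            used = rates > 50.0
            penalty = abs(rates.sum() - target_total)
            return drill_cost * used.sum() + op_cost * rates.sum() + 0.01 * penalty

        pop = rng.uniform(0, 2500, size=(40, n_wells))    # candidate rate vectors
        for gen in range(200):
            f = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(f)][:20]             # truncation selection
            cross = parents[rng.integers(0, 20, (40, n_wells)),
                            np.arange(n_wells)]           # uniform crossover
            pop = np.clip(cross + rng.normal(0, 50, (40, n_wells)), 0, 2500)
            pop[0] = parents[0]                           # elitism
        print("best rates (m3/h):", np.round(pop[0], 0),
              "cost:", round(float(fitness(pop[0])), 2))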

  4. Requirements for Logical Models for Value-Added Tax Legislation

    DEFF Research Database (Denmark)

    Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis

    Enterprise resource planning (ERP) systems are ubiquitous in commercial enterprises of all sizes and invariably need to account for the notion of value-added tax (VAT). The legal and technical difficulties in handling VAT are exacerbated by spanning a broad and chaotic spectrum of intricate country-specific needs. Currently, these difficulties are handled in most major ERP systems by customising and localising the native code of the ERP systems for each specific country and industry. We propose an alternative that uses logical modeling of VAT legislation. The potential benefit is to eventually transform...

  5. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    Full Text Available A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images from Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by a non-hydrostatic meteorological model ARPS for daily weather prediction and their results compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results.

    Key words: Meteorology and atmospheric dynamics (mesoscale meteorology) – General (new fields)
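
    A minimal sketch of nonlinear (analogue) forecasting via time-delay embedding, run on a synthetic stand-in for the cloud-cover series; the embedding dimension, delay, and neighbour count are illustrative.

        import numpy as np

        def embed(series, dim=4, tau=2):
            """Time-delay embedding: reconstruct the attractor's phase space."""
            rows = len(series) - (dim - 1) * tau
            return np.array([series[i:i + dim * tau:tau] for i in range(rows)])

        def forecast(series, steps=6, dim=4, tau=2, k=5):
            """Nonlinear (analogue) prediction: average the futures of the k
            nearest neighbours of the current state in embedded space."""
            s = list(series)
            for _ in range(steps):
                E = embed(np.array(s), dim, tau)
                dists = np.linalg.norm(E[:-1] - E[-1], axis=1)
                nn = np.argsort(dists)[:k]
                s.append(np.mean([s[i + (dim - 1) * tau + 1] for i in nn]))
            return s[-steps:]

        t = np.arange(2000)
        x = np.sin(0.07 * t) * np.sin(0.011 * t)   # stand-in for cloud cover data
        print(np.round(forecast(x), 3))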

  6. Using a business model approach and marketing techniques for recruitment to clinical trials.

    Science.gov (United States)

    McDonald, Alison M; Treweek, Shaun; Shakur, Haleema; Free, Caroline; Knight, Rosemary; Speed, Chris; Campbell, Marion K

    2011-03-11

    Randomised controlled trials (RCTs) are generally regarded as the gold standard for evaluating health care interventions. The level of uncertainty around a trial's estimate of effect is, however, frequently linked to how successful the trial has been in recruiting and retaining participants. As recruitment is often slower or more difficult than expected, with many trials failing to reach their target sample size within the timescale and funding originally envisaged, the results are often less reliable than they could have been. The high number of trials that require an extension to the recruitment period in order to reach the required sample size potentially delays the introduction of more effective therapies into routine clinical practice. Moreover, it may result in less research being undertaken as resources are redirected to extending existing trials rather than funding additional studies. Poor recruitment to publicly-funded RCTs has been much debated but there remains remarkably little clear evidence as to why many trials fail to recruit well, which recruitment methods work, in which populations and settings and for what type of intervention. One proposed solution to improving recruitment and retention is to adopt methodology from the business world to inform and structure trial management techniques. We review what is known about interventions to improve recruitment to trials. We describe a proposed business approach to trials and discuss the implementation of using a business model, using insights gained from three case studies.

  7. Using a business model approach and marketing techniques for recruitment to clinical trials

    Directory of Open Access Journals (Sweden)

    Knight Rosemary

    2011-03-01

    Full Text Available Randomised controlled trials (RCTs) are generally regarded as the gold standard for evaluating health care interventions. The level of uncertainty around a trial's estimate of effect is, however, frequently linked to how successful the trial has been in recruiting and retaining participants. As recruitment is often slower or more difficult than expected, with many trials failing to reach their target sample size within the timescale and funding originally envisaged, the results are often less reliable than they could have been. The high number of trials that require an extension to the recruitment period in order to reach the required sample size potentially delays the introduction of more effective therapies into routine clinical practice. Moreover, it may result in less research being undertaken as resources are redirected to extending existing trials rather than funding additional studies. Poor recruitment to publicly-funded RCTs has been much debated but there remains remarkably little clear evidence as to why many trials fail to recruit well, which recruitment methods work, in which populations and settings and for what type of intervention. One proposed solution to improving recruitment and retention is to adopt methodology from the business world to inform and structure trial management techniques. We review what is known about interventions to improve recruitment to trials. We describe a proposed business approach to trials and discuss the implementation of using a business model, using insights gained from three case studies.

  8. Multivariate moment closure techniques for stochastic kinetic models

    Science.gov (United States)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2015-09-01

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, the interplay between the nonlinearities and the stochastic dynamics becomes much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
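
    The closure step is easy to see in a univariate toy version (the paper's closures are multivariate, and the rate constants below are invented): for a birth/pair-annihilation process, the equation for the mean couples to the second moment, the second-moment equation couples to the third, and the Gaussian closure E[(X-m)^3] = 0 truncates the hierarchy. A minimal sketch:

    ```python
    # Univariate Gaussian moment closure for: 0 -> X (rate k1), X + X -> 0 (rate k2).
    #   dm1/dt = k1 - k2*(m2 - m1)
    #   dm2/dt = k1*(2*m1 + 1) - 2*k2*(m3 - 2*m2 + m1)
    # Gaussian closure: E[(X - m1)^3] = 0  =>  m3 = 3*m1*m2 - 2*m1**3
    from scipy.integrate import solve_ivp

    k1, k2 = 10.0, 0.1  # invented rate constants

    def closed_moments(t, y):
        m1, m2 = y
        m3 = 3.0 * m1 * m2 - 2.0 * m1**3          # closure replaces the third moment
        dm1 = k1 - k2 * (m2 - m1)
        dm2 = k1 * (2.0 * m1 + 1.0) - 2.0 * k2 * (m3 - 2.0 * m2 + m1)
        return [dm1, dm2]

    sol = solve_ivp(closed_moments, (0.0, 5.0), [0.0, 0.0])
    m1, m2 = sol.y[:, -1]
    print(f"mean = {m1:.3f}, variance = {m2 - m1**2:.3f}")
    ```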

  9. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, the interplay between the nonlinearities and the stochastic dynamics becomes much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  10. A MODEL FOR OVERLAPPING TRIGRAM TECHNIQUE FOR TELUGU SCRIPT

    Directory of Open Access Journals (Sweden)

    B. Vishnu Vardhan

    2007-09-01

    Full Text Available N-grams are consecutive overlapping N-character sequences formed from an input stream. N-grams are used as alternatives to word-based retrieval in a number of systems. In this paper we propose a model applicable to the categorization of Telugu documents. Telugu, the official language of the state of Andhra Pradesh, is derived from the ancient Brahmi script, and Brahmi-based scripts are noted for complex conjunct formations. The canonical structure is described as ((C)C)CV: every character evolves from a set of basic syllables known as vowels and consonants, where the consonant-vowel (CV) core is the basic unit, optionally preceded by one or two consonants. A huge set of characters that pair a phonetic value with an equivalent character shape is derived from this canonical structure. Words formed from this set have evolved into a large corpus, and stringent grammar rules on word formation are part of it. Word combinations in which the last character of the first word and the first character of the successive word merge into a single word must also be addressed. Keeping these complexities in view, we propose a trigram-based system that provides a reasonable alternative to a word-based system in achieving document categorization for the Telugu language.
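
    A minimal sketch of the overlapping-trigram idea follows; it runs on any Unicode text, Telugu script included. The cosine-similarity categorization rule and the toy corpora are illustrative assumptions, not the paper's exact pipeline.

    ```python
    # Overlapping character trigrams and a simple profile-similarity categorizer.
    from collections import Counter
    import math

    def trigrams(text, n=3):
        """Return the consecutive overlapping n-character sequences of a stream."""
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    def cosine(p, q):
        num = sum(p[g] * q[g] for g in set(p) & set(q))
        den = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
        return num / den if den else 0.0

    corpora = {"sports": "the team won the match", "weather": "rain is expected tomorrow"}  # toy corpora
    profiles = {cat: Counter(trigrams(text)) for cat, text in corpora.items()}

    def categorize(document):
        doc = Counter(trigrams(document))
        return max(profiles, key=lambda cat: cosine(doc, profiles[cat]))

    print(categorize("the match was won"))  # -> sports
    ```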

  11. Multivariate moment closure techniques for stochastic kinetic models.

    Science.gov (United States)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D W; Stumpf, Michael P H

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, the interplay between the nonlinearities and the stochastic dynamics becomes much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  12. Energy iteration model research of DCM Buck converter with multilevel pulse train technique

    Science.gov (United States)

    Qin, Ming; Li, Xiang

    2017-08-01

    Since the essence of a switching converter is the transfer of energy, the energy iteration model of the Multilevel Pulse Train (MPT) technique is studied in this paper. The energy iteration model of a DCM Buck converter with the MPT technique reflects the control law and the excellent transient performance of the MPT technique. The iterative relation of energy transfer in a switching converter is discussed. The structure and operating principle of a DCM Buck converter with the MPT technique are introduced, and the energy iteration model of this converter is set up. The energy tracks of the MPT-controlled Buck converter and the PT converter are examined and compared, showing that the ratio of steady-state control pulses meets the expectation for the MPT technique and that the MPT-controlled switching converter has a much lower output voltage ripple than the PT converter.
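
    The flavour of an energy iteration can be conveyed with a deliberately simple, hypothetical two-level toy: stored output energy is updated once per switching cycle, E[n+1] = E[n] + E_in(pulse) - E_out, and the control law picks a high- or low-energy pulse by comparing the stored energy with a reference. All values are invented; the paper's multilevel model is richer.

    ```python
    # Toy energy-iteration loop for pulse-train style control (illustrative only).
    # Each switching cycle: E[n+1] = E[n] + E_in(chosen pulse) - E_out(load).
    E = 0.0                    # energy stored in the output stage (arbitrary units)
    E_ref = 10.0               # regulation target
    E_high, E_low = 1.2, 0.4   # energy injected by a high- or low-power pulse (invented)
    E_load = 0.8               # energy drawn by the load per cycle (invented)

    pulses = []
    for n in range(200):
        pulse = "high" if E < E_ref else "low"   # control law: compare with reference
        E += (E_high if pulse == "high" else E_low) - E_load
        pulses.append(pulse)

    tail = pulses[-100:]
    print("steady-state high-pulse ratio:", tail.count("high") / len(tail))
    ```

    With these toy numbers the load draws the average of the two pulse energies, so the steady-state high-pulse ratio settles near 0.5, illustrating how the pulse ratio tracks the load.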

  13. Studies on Models, Patterns and Requirements of Digestible Amino Acids for Layers by Nitrogen Metabolism

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Nitrogen (N) metabolism experiments were conducted to estimate separately the amino acid requirements of 43-48-week-old layers for maintenance and for protein accretion, and to establish models for estimating digestible amino acid requirements. The regression relationship of nitrogen retention vs. amino acid intake was estimated for each amino acid by feeding semi-synthetic diets, each specifically deficient in one amino acid, at N intake rates of 0.91, 0.52, 0.15 and 0.007 g·kg⁻¹ body weight (W^0.75) per day. From the regression coefficients, it was calculated that, for the accretion of 1 g protein, the dietary digestible amino acid requirements were (mg): Thr 63.1, Val 100.4, Met 39.9, Ile 88.6, Leu 114.3, Phe 63.2, Lys 87.0, His 20.5, Arg 87.9, Trp 21.4, Met+Cys 77.6, and Phe+Tyr 114.3. Daily amino acid requirements for N equilibrium were estimated to be (mg·kg⁻¹ W^0.75 per day): Thr 50.6, Val 74.7, Met 30.3, Ile 66.7, Leu 81.4, Phe 44.8, Lys 60.5, His 14.7, Arg 73.9, Trp 17.3, Met+Cys 58.6, and Phe+Tyr 83.9. The dietary digestible amino acid patterns for protein accretion and N equilibrium were also proposed, and models for estimating digestible amino acid requirements at different production levels were developed.
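
    The regression logic described above is easy to reproduce: regress nitrogen retention on amino acid intake, read the requirement per unit of protein (N) accretion off the slope, and take the intake at zero retention as the requirement for N equilibrium. The numbers below are invented stand-ins for the study's data.

    ```python
    # Linear regression of nitrogen retention on amino acid intake (toy data).
    import numpy as np

    intake = np.array([20.0, 45.0, 80.0, 120.0])        # mg amino acid / kg W^0.75 / d (invented)
    retention = np.array([-0.30, -0.05, 0.30, 0.70])    # g N / kg W^0.75 / d (invented)

    slope, intercept = np.polyfit(intake, retention, 1)

    # Intake at which retention is zero ~ daily requirement for N equilibrium.
    requirement_equilibrium = -intercept / slope
    print(f"requirement for N equilibrium: {requirement_equilibrium:.1f} mg/kg W^0.75/d")

    # Reciprocal of the slope ~ amino acid needed per unit of N retained.
    print(f"amino acid per g N retained: {1.0 / slope:.1f} mg")
    ```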

  14. Lattice Boltzmann flow simulations with applications of reduced order modeling techniques

    KAUST Repository

    Brown, Donald

    2014-01-01

    With the recent interest in shale gas, an understanding of the flow mechanisms at the pore scale and beyond is necessary, which has attracted a lot of interest from both industry and academia. One of the suggested algorithms to help understand flow in such reservoirs is the Lattice Boltzmann Method (LBM). The primary advantage of LBM is its ability to approximate complicated geometries with simple algorithmic modifications. In this work, we use LBM to simulate the flow in a porous medium. More specifically, we use LBM to simulate a Brinkman type flow. The Brinkman law allows us to integrate fast free-flow and slow-flow porous regions. However, due to the many scales involved and complex heterogeneities of the rock microstructure, the simulation times can be long, even with the speed advantage of using an explicit time stepping method. The problem is two-fold: the computational grid must be able to resolve all scales, and the calculation requires a steady-state solution, implying a large number of timesteps. To help reduce the computational complexity and total simulation times, we use model reduction techniques to reduce the dimension of the system. In this approach, we are able to describe the dynamics of the flow by using a lower dimensional subspace. In this work, we utilize the Proper Orthogonal Decomposition (POD) technique to compute the dominant modes of the flow and project the solution onto them (a lower dimensional subspace) to arrive at an approximation of the full system at a lowered computational cost. We present a few proof-of-concept examples of the flow field and the corresponding reduced model flow field.
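
    The POD step itself is compact: collect solution snapshots as columns of a matrix, take an SVD, keep the leading left singular vectors as the reduced basis, and project onto it. The sketch below substitutes a synthetic travelling-wave field for actual LBM output.

    ```python
    # Proper Orthogonal Decomposition of a snapshot matrix (synthetic data).
    import numpy as np

    x = np.linspace(0.0, 2.0 * np.pi, 200)
    snapshots = np.column_stack([np.sin(x - 0.1 * k) + 0.3 * np.sin(3 * (x - 0.05 * k))
                                 for k in range(50)])   # stand-in for LBM flow snapshots

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1  # modes for 99.9% energy
    basis = U[:, :r]

    # Project a new field onto the dominant modes and reconstruct.
    field = snapshots[:, -1]
    approx = basis @ (basis.T @ field)
    print(f"{r} modes, relative error {np.linalg.norm(field - approx) / np.linalg.norm(field):.2e}")
    ```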

  15. Developing a Teaching Model Using an Online Collaboration Approach for a Digital Technique Practical Work

    Science.gov (United States)

    Muchlas

    2015-01-01

    This research aims to produce a teaching model and its supporting instruments using a collaboration approach for digital technique practical work attended by higher education students. The model is found to be flexible and relatively low cost. Through this research, the feasibility and learning impact of the model will be determined. The model…

  16. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  17. A novel method to estimate model uncertainty using machine learning techniques

    NARCIS (Netherlands)

    Solomatine, D.P.; Lal Shrestha, D.

    2009-01-01

    A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the

  18. INTELLIGENT CAR STYLING TECHNIQUE AND SYSTEM BASED ON A NEW AERODYNAMIC-THEORETICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A car styling technique based on a new theoretical model of automotive aerodynamics is introduced, which is proved to be feasible and effective by wind tunnel tests. The development of a multi-module software system from this technique, including modules for knowledge processing, referential styling and ANN aesthetic evaluation, capable of assisting car styling work in an intelligent way, is also presented and discussed.

  19. Optimization of corrosion control for lead in drinking water using computational modeling techniques

    Science.gov (United States)

    Computational modeling techniques have been used to very good effect in the UK in the optimization of corrosion control for lead in drinking water. A “proof-of-concept” project with three US/CA case studies sought to demonstrate that such techniques could work equally well in the...

  20. Comparison of impression techniques for a two-implant 15-degree divergent model.

    Science.gov (United States)

    Carr, A B

    1992-01-01

    To consistently provide passively fitting implant superstructures, an understanding of the accuracy and precision of all phases of fabrication and connection is required. The initial phase of fabrication, i.e., impression making and cast forming, was investigated in an earlier report for a mandibular five-implant model. The current study evaluates the accuracy of working casts produced from impressions using two different transfer copings in a 15-degree divergent two-implant posterior mandibular model. While the indirect method is less cumbersome to use, it was found to be less accurate in the prior study. The purpose of this study was to see if the direct method is more precise for this clinical situation. A transfer was deemed effective in producing experimental casts if distances between specified points on the cast agreed with the corresponding distances on the master cast. The absolute value of the difference in distances between experimental and master casts was compared for the two techniques (two-sample t-tests). No significant differences were noted (P > .05), and the power of the tests ranged from 0.70 to 0.96 against the one-sided hypothesis that the direct method had a smaller mean absolute difference in distance than the indirect method. This suggests no clear advantage in using the direct method in similar clinical situations. Comparison of these findings to other impression accuracy studies is made.

  1. A Modeling Technique and Representation of Failure in the Analysis of Triaxial Braided Carbon Fiber Composites

    Science.gov (United States)

    Littell, Justin D.; Binienda, Wieslaw K.; Goldberg, Robert K.; Roberts, Gary D.

    2008-01-01

    Quasi-static tests have been performed on triaxially braided carbon fiber composite materials with large unit cell sizes. The effects of different fibers and matrix materials on the failure mode were investigated. Simulations of the tests have been performed using the transient dynamic finite element code, LS-DYNA. However, the wide range of failure modes observed for the triaxial braided carbon fiber composites during tests could not be simulated using composite material models currently available within LS-DYNA. A macroscopic approach has been developed that provides better simulation of the material response in these materials. This approach uses full-field optical measurement techniques to measure local failures during quasi-static testing. Information from these experiments is then used along with the current material models available in LS-DYNA to simulate the influence of the braided architecture on the failure process. This method uses two-dimensional shell elements with integration points through the thickness of the elements to represent the different layers of braid along with a new analytical method for the import of material stiffness and failure data directly. The present method is being used to examine the effect of material properties on the failure process. The experimental approaches used to obtain the required data will be described, and preliminary results of the numerical analysis will be presented.

  2. Simulation of Moving Loads in Elastic Multibody Systems With Parametric Model Reduction Techniques

    Directory of Open Access Journals (Sweden)

    Fischer Michael

    2014-08-01

    Full Text Available In elastic multibody systems, one considers large nonlinear rigid body motion and small elastic deformations. In a rising number of applications, e.g., automotive engineering and turning and milling processes, the position of acting forces on the elastic body varies. The necessary model order reduction to enable efficient simulations requires the determination of ansatz functions, which depend on the moving force position. For a large number of possible interaction points, the size of the reduced system would increase drastically in the classical Component Mode Synthesis framework. If many nodes are potentially loaded, or the contact area is not known a priori and only a small number of nodes is loaded simultaneously, the system is described in this contribution with the parameter-dependent force position. This enables the application of parametric model order reduction methods. Here, two techniques based on matrix interpolation are described which transform individually reduced systems and allow the interpolation of the reduced system matrices to determine reduced systems for any force position. The online-offline decomposition and the description of the force distribution onto the reduced elastic body are presented in this contribution. The proposed framework enables the efficient simulation of elastic multibody systems with moving loads because it solely depends on the size of the reduced system. Results in the frequency and time domains for the simulation of a thin-walled cylinder with a moving load illustrate the applicability of the proposed method.
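
    A minimal sketch of the interpolation idea follows. Real matrix-interpolation pMOR first transforms the individually reduced systems into a common coordinate system; for brevity the sketch assumes one shared projection basis and simply blends reduced matrices obtained for two force positions (a static toy problem, with all sizes and values invented).

    ```python
    # Minimal sketch of interpolating reduced system matrices between two
    # precomputed force positions. Real matrix-interpolation pMOR maps the
    # individually reduced systems to a common basis first; here we assume
    # (for brevity) that the same projection basis V serves both positions.
    import numpy as np

    n, r = 50, 6
    rng = np.random.default_rng(0)
    K = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # shared reduced basis (assumption)

    def reduced_system(load_node):
        b = np.zeros(n); b[load_node] = 1.0            # unit force at the given node
        return V.T @ K @ V, V.T @ b

    (K0, b0), (K1, b1) = reduced_system(10), reduced_system(40)

    w = 0.25                                           # force position 25% of the way from p0 to p1
    K_w = (1 - w) * K0 + w * K1
    b_w = (1 - w) * b0 + w * b1
    u_reduced = np.linalg.solve(K_w, b_w)              # reduced static response at interpolated position
    print(V @ u_reduced)                               # lifted back to full coordinates
    ```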

  3. Modeling seismic wave propagation across the European plate: structural models and numerical techniques, state-of-the-art and prospects

    Science.gov (United States)

    Morelli, Andrea; Danecek, Peter; Molinari, Irene; Postpischl, Luca; Schivardi, Renata; Serretti, Paola; Tondi, Maria Rosaria

    2010-05-01

    beneath the Alpine mobile belt, and fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We validate this new model through comparison of recorded seismograms with simulations based on numerical codes (SPECFEM3D). To ease and increase model usage, we also propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages, and provide tools for manipulating and visualising models, described in this standard format, in Google Earth and GEON IDV. In the next decade seismologists will be able to reap new possibilities offered by exciting progress in general computing power and algorithmic development in computational seismology. Structural models, still based on classical approaches and modeling just a few parameters in each seismogram, will benefit from emerging techniques - such as full waveform fitting and fully nonlinear inversion - that are now just showing their potential. This will require extensive availability of supercomputing resources to earth scientists in Europe, as a tool to match the planned new massive data flow. We need to make sure that the whole apparatus, needed to fully exploit new data, will be widely accessible. To maximize this development, for instance to enable us to promptly model ground shaking after a major earthquake, we will also need a better coordination framework that will enable us to share and amalgamate the abundant local information on earth structure - most often available but difficult to retrieve, merge and use. Comprehensive knowledge of earth structure and of best practices to model wave propagation can by all means be considered an enabling technology for further geophysical progress.
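
    The JSON exchange idea is easy to picture: a tomographic model becomes a few named arrays plus metadata that any high-level language can parse. The field names below are hypothetical, not the schema actually proposed by the authors.

    ```python
    # Hypothetical JSON serialization of a tomographic earth model (field names invented).
    import json

    model = {
        "name": "example-european-vs-model",
        "parameter": "vs",            # shear-wave velocity
        "units": "km/s",
        "grid": {"lat": [35.0, 60.0, 0.5], "lon": [-10.0, 30.0, 0.5], "depth_km": [0, 660, 20]},
        "values": [[4.1, 4.2], [4.3, 4.4]],   # nested arrays, one block per depth slice
    }
    print(json.dumps(model, indent=2))
    ```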

  4. Forecasting Model of Coal Requirement Quantity Based on Grey System Theory

    Institute of Scientific and Technical Information of China (English)

    孙继湖

    2001-01-01

    Generally used methods of forecasting coal requirement quantity include the analogy method, the outside-push method and the cause-effect analysis method. However, the precision of forecasting results obtained with these methods is low. This paper uses grey system theory and sets up the grey forecasting model GM(1,3) for coal requirement quantity. The forecasting result for the Chinese coal requirement quantity coincides with the actual values, which shows that the model is reliable. Finally, the model is used to forecast the Chinese coal requirement quantity for the coming ten years.
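
    For readers unfamiliar with grey models, the sketch below implements GM(1,1), the simplest member of the family; the paper's GM(1,3) adds two related input series, but the accumulate-fit-extrapolate mechanics are the same. The data values are invented.

    ```python
    # Grey forecasting model GM(1,1): accumulate, fit (a, b) by least squares, extrapolate.
    import numpy as np

    x0 = np.array([102.0, 108.0, 117.0, 123.0, 131.0])  # observed series (invented)
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values

    # Fit x0[k] = -a*z1[k] + b for k = 1..n-1.
    B = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)

    def forecast(k):  # k = 0 reproduces x0[0]; k >= len(x0) extrapolates
        if k == 0:
            return x0[0]
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1_hat - x1_prev

    print([round(forecast(k), 1) for k in range(len(x0) + 3)])
    ```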

  5. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques and then examines the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique for generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. This research work combines bond graphs for model representation with genetic programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied to generating the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function are analyzed with the conventional and the Bond Graph method, and an approach to model order reduction is then presented. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared.
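
    For contrast with the mixed method described above, a standard model order reduction baseline (balanced truncation) fits in a few lines; the random stable test system below merely stands in for the 11th-order filter.

    ```python
    # Balanced truncation of a stable LTI system -- a standard model order
    # reduction baseline, not the paper's mixed stability-equation method.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    rng = np.random.default_rng(1)
    n, r = 11, 4                                   # full and reduced order
    A = rng.standard_normal((n, n))
    A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift to make A stable
    B = rng.standard_normal((n, 1)); C = rng.standard_normal((1, n))

    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian

    R = cholesky(Wc, lower=True); L = cholesky(Wo, lower=True)
    U, s, Vt = svd(L.T @ R)                        # Hankel singular values in s
    T = R @ Vt[:r].T / np.sqrt(s[:r])              # balancing projection matrices
    Ti = (U[:, :r] / np.sqrt(s[:r])).T @ L.T

    Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T         # reduced-order model
    print("Hankel singular values:", np.round(s, 4))
    ```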

  6. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Shahid Ali

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques and then examines the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique for generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. This research work combines bond graphs for model representation with genetic programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied to generating the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function are analyzed with the conventional and the Bond Graph method, and an approach to model order reduction is then presented. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared.

  7. Geomatic techniques for the generation of building information models towards their introduction in Integrated Management Systems

    OpenAIRE

    Diaz Vilariño, Lucia

    2015-01-01

    This research project proposes the use of geomatic techniques to reconstruct, in a highly automated way, semantic building models that might be subjected to energy analysis. Other non-destructive techniques such as infrared thermography are explored to obtain descriptive attributes for enriching the models. The building stock is considered an important contributor to global energy consumption, and buildings' energy efficiency has become a priority strategy in European energy policy. Bu...

  8. Digital Avionics Information System (DAIS): Training Requirements Analysis Model Users Guide. Final Report.

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    This user's guide describes the functions, logical operations and subroutines, input data requirements, and available outputs of the Training Requirements Analysis Model (TRAMOD), a computerized analytical life cycle cost modeling system for use in the early stages of system design. Operable in a stand-alone mode, TRAMOD can be used for the…

  9. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
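
    To make the flavour of such difference-equation models concrete, here is a deliberately minimal single-stage sketch in the spirit of Knipling's classic sterile-release argument: a fertile-mating fraction N/(N + cS) multiplies an otherwise logistic update. The paper's model (several life stages, F1 sterility, partially sterile males) is substantially richer, and all parameter values below are invented.

    ```python
    # Toy sterile insect technique model: logistic growth times fertile-mating fraction.
    lam, K, c, S = 3.0, 10000.0, 0.8, 6000.0  # growth rate, capacity, competitiveness, releases

    def step(N):
        fertile_fraction = N / (N + c * S)     # chance a female mates with a wild (fertile) male
        return lam * N * fertile_fraction * max(0.0, 1.0 - N / K)

    N = 4000.0
    for generation in range(25):
        N = step(N)
    print(f"population after 25 generations: {N:.1f}")  # suppression if releases are large enough
    ```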

  10. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    Science.gov (United States)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  11. THE IMPROVEMENT OF THE COMPUTATIONAL PERFORMANCE OF THE ZONAL MODEL POMA USING PARALLEL TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Yao Yu

    2014-01-01

    Full Text Available The zonal modeling approach is a new simplified computational method used to predict the temperature distribution and energy use in multi-zone buildings and the thermal behavior of indoor airflow. Although this approach is known to use fewer computing resources than CFD models, the computational time is still an issue, especially when buildings are characterized by complicated geometry and indoor layouts of furnishings. Therefore, applying a new computing technique to current zonal models in order to reduce the computational time is a promising way to further improve model performance and promote the wide application of zonal models. Parallel computing techniques provide a way to accomplish these purposes. Unlike the serial computations that are commonly used in current zonal models, these parallel techniques decompose the serial program into several discrete instructions which can be executed simultaneously on different processors/threads. As a result, the computational time of the parallelized program can be significantly reduced compared to that of the traditional serial program. In this article, a parallel computing technique, Open Multi-Processing (OpenMP), is applied to the zonal model Pressurized zOnal Model with Air diffuser (POMA) in order to improve the model's computational performance, including the reduction of computational time and the investigation of the model's scalability.

  12. BOLD-based Techniques for Quantifying Brain Hemodynamic and Metabolic Properties – Theoretical Models and Experimental Approaches

    Science.gov (United States)

    Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang

    2012-01-01

    Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as the pathophysiology of neurological disorders. It can also be of great importance for evaluation of hypoxia within tumors of the brain and other organs. A fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened a possibility to use this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting the MRI signal to brain structure and functioning and designing experimental techniques allowing MR measurements of salient features of the theoretical models. In our review we discuss several such theoretical models and experimental methods for quantifying brain hemodynamic and metabolic properties. Our review aims mostly at methods for measuring the oxygen extraction fraction, OEF, based on measuring blood oxygenation level. Combining measurement of OEF with measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail the magnetic properties of blood – magnetic susceptibility, MR relaxation and theoretical models of the intravascular contribution to the MR signal under different experimental conditions. Then, we describe a “through-space” effect – the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on the MR signal formation. Further we describe several experimental techniques taking advantage of these theoretical models. Some of these techniques – MR susceptometry and T2-based quantification of OEF – utilize the intravascular MR signal. Another technique – qBOLD – evaluates OEF by making use of through-space effects. In this review we targeted both scientists just entering the MR field and more experienced MR researchers.
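
    As one concrete example of the kind of theoretical model the review covers: in the static dephasing regime description, the characteristic frequency of extravascular dephasing scales linearly with OEF, so measuring the reversible relaxation rate R2' together with the deoxygenated blood volume fraction lambda isolates OEF. The relations below are quoted from memory of that standard formulation and should be checked against the original papers:

    ```latex
    \delta\omega = \frac{4}{3}\pi\,\gamma\,\Delta\chi_0\,\mathrm{Hct}\cdot\mathrm{OEF}\cdot B_0,
    \qquad
    R_2' = \lambda\,\delta\omega
    \quad\Longrightarrow\quad
    \mathrm{OEF} = \frac{R_2'}{\lambda\,\frac{4}{3}\pi\,\gamma\,\Delta\chi_0\,\mathrm{Hct}\,B_0}
    ```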

  13. Research on the Propagation Models and Defense Techniques of Internet Worms

    Institute of Scientific and Technical Information of China (English)

    Tian-Yun Huang

    2008-01-01

    Internet worms are harmful to network security, and they have become a research hotspot in recent years. A thorough survey of the propagation models and defense techniques of Internet worms is made in this paper. We first give a strict definition and discuss the working mechanism. We then analyze and compare some representative worm propagation models proposed in recent years, such as the K-M model, the two-factor model, the worm-anti-worm model (WAW), the firewall-based model, the quarantine-based model and the hybrid benign worm-based model. Some typical defense techniques such as virtual honeypots, active worm prevention and agent-oriented worm defense are also discussed. The future direction of worm defense systems is pointed out.
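
    The K-M (Kermack-McKendrick) model mentioned above is the classical susceptible-infected-removed system; the two-factor and quarantine-based variants modify the same equations with time-varying infection and removal rates. A minimal sketch with invented parameters:

    ```python
    # Classical Kermack-McKendrick (SIR) worm propagation model (toy parameters).
    from scipy.integrate import solve_ivp

    beta, gamma, N = 0.8 / 1e5, 0.05, 1e5   # contact rate, removal rate, host count

    def sir(t, y):
        S, I, R = y
        return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

    sol = solve_ivp(sir, (0, 100), [N - 10, 10, 0], max_step=0.5)
    print(f"peak infected hosts: {sol.y[1].max():.0f}")
    ```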

  14. Design of Current Controller for Two Quadrant DC Motor Drive by Using Model Order Reduction Technique

    CERN Document Server

    Ramesh, K; Nirmalkumar, A; Gurusamy, G

    2010-01-01

    In this paper, the design of a current controller for a two-quadrant DC motor drive is proposed with the help of a model order reduction technique. The calculation of the current controller gain, with some approximations in the conventional design process, is replaced by the proposed model order reduction method. The model order reduction technique proposed in this paper gives a better controller gain value for the DC motor drive. The proposed model order reduction method is a mixed method, where the numerator polynomial of the reduced order model is obtained by using the stability equation method and the denominator polynomial is obtained by using an approximation technique presented in this paper. The designed controller's responses were simulated with the help of MATLAB to show the validity of the proposed method.

  15. Internet enabled modelling of extended manufacturing enterprises using the process based techniques

    OpenAIRE

    Cheng, K; Popov, Y

    2004-01-01

    The paper presents the preliminary results of an ongoing research project on Internet enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...

  16. Maternal, Infant Characteristics, Breastfeeding Techniques, and Initiation: Structural Equation Modeling Approaches

    OpenAIRE

    2015-01-01

    Objectives: The aim of this study was to examine the relationships among maternal and infant characteristics, breastfeeding techniques, and exclusive breastfeeding initiation in different modes of birth using structural equation modeling approaches. Methods: We examined a hypothetical model based on integrating concepts of a breastfeeding decision-making model, a breastfeeding initiation model, and a social cognitive theory among 952 mother-infant dyads. The LATCH breastfeeding assessment tool ...

  17. Meta-Model and UML Profile for Requirements Management of Software and Embedded Systems

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2011-01-01

    Full Text Available Software and embedded system companies today encounter problems related to requirements management tool integration, incorrect tool usage, and lack of traceability. This is due to tools having no clear meta-model and semantics with which to communicate requirements between different stakeholders. This paper presents a comprehensive meta-model for requirements management. The focus is on the software and embedded system domains. The goal is to define generic requirements management domain concepts and abstract interfaces between requirements management and system development. This leads to a portable requirements management meta-model which can be adapted to various system modeling languages. The created meta-model is prototyped by translating it into a UML profile. The profile is imported into a UML tool which is used for rapid evaluation of meta-model concepts in practice. The developed profile is associated with a proof-of-concept report generator tool that automatically produces up-to-date documentation from the models in the form of web pages. The profile is then used to create an example model of an embedded system requirement specification, which is built with the profile.

  18. A Dry Membrane Protection Technique to Allow Surface Acoustic Wave Biosensor Measurements of Biological Model Membrane Approaches

    Directory of Open Access Journals (Sweden)

    Marius Enachescu

    2013-09-01

    Full Text Available Model membrane approaches have attracted much attention in biomedical sciences to investigate and simulate biological processes. The application of model membrane systems for biosensor measurements is partly restricted by the fact that the integrity of membranes critically depends on the maintenance of an aqueous surrounding, while various biosensors require a preconditioning of dry sensors. This is for example true for the well-established surface acoustic wave (SAW biosensor SAM®5 blue. Here, a simple drying procedure of sensor-supported model membranes is introduced using the protective disaccharide trehalose. Highly reproducible model membranes were prepared by the Langmuir-Blodgett technique, transferred to SAW sensors and supplemented with a trehalose solution. Membrane rehydration after dry incorporation into the SAW device becomes immediately evident by phase changes. Reconstituted model membranes maintain their full functionality, as indicated by biotin/avidin binding experiments. Atomic force microscopy confirmed the morphological invariability of dried and rehydrated membranes. Approximating to more physiological recognition phenomena, the site-directed immobilization of the integrin VLA-4 into the reconstituted model membrane and subsequent VCAM-1 ligand binding with nanomolar affinity were illustrated. This simple drying procedure is a novel way to combine the model membrane generation by Langmuir-Blodgett technique with SAW biosensor measurements, which extends the applicability of SAM®5 blue in biomedical sciences.

  19. Hybrid supply chain model for material requirement planning under financial constraints: A case study

    Science.gov (United States)

    Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero

    2014-10-01

    Supply chain models (SCM) are potentially capable of integrating different aspects of decision support for enterprise management tasks. The aim of this paper is to propose a hybrid mathematical programming model for the optimization of production requirements resource planning. The preliminary model was conceived bottom-up from a real industrial case and is oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimization brought good results in solving the objective function.

  20. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    Science.gov (United States)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in the field of technology and computer science, conventional methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3-D models of real world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. We here address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool, designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, octree structures and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
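
    Two of the comparison primitives mentioned (cloud-to-cloud nearest-neighbour distances and the Hausdorff distance) can be sketched directly with SciPy's spatial tools; octrees and cloud-to-mesh distances need more machinery than shown. The clouds below are synthetic stand-ins for the TLS and photogrammetric data.

    ```python
    # Cloud-to-cloud distances and Hausdorff distance between two point clouds.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.spatial.distance import directed_hausdorff

    rng = np.random.default_rng(2)
    tls = rng.uniform(size=(10000, 3))                        # dense TLS reference cloud (synthetic)
    pm = tls[::20] + rng.normal(scale=0.002, size=(500, 3))   # sparser, noisier photogrammetric cloud

    # Cloud-to-cloud: for each PM point, distance to its nearest TLS point.
    d_c2c, _ = cKDTree(tls).query(pm)
    print(f"mean cloud-to-cloud distance: {d_c2c.mean():.4f}")

    # Symmetric Hausdorff distance (worst-case deviation in either direction).
    h = max(directed_hausdorff(pm, tls)[0], directed_hausdorff(tls, pm)[0])
    print(f"Hausdorff distance: {h:.4f}")
    ```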

  1. Social Learning Network Analysis Model to Identify Learning Patterns Using Ontology Clustering Techniques and Meaningful Learning

    Science.gov (United States)

    Firdausiah Mansur, Andi Besse; Yusof, Norazah

    2013-01-01

    Clustering on Social Learning Networks has not yet been explored widely, especially when the network focuses on e-learning systems. Conventional methods are not really suitable for e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…

  2. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  3. Models and techniques for evaluating the effectiveness of aircraft computing systems

    Science.gov (United States)

    Meyer, J. F.

    1978-01-01

    Progress in the development of system models and techniques for the formulation and evaluation of aircraft computer system effectiveness is reported. Topics covered include: analysis of functional dependence; a prototype software package, METAPHOR, developed to aid the evaluation of performability; and a comprehensive performability modeling and evaluation exercise involving the SIFT computer.

  4. Applying Modern Techniques and Carrying Out English Extracurricular Activities: On the Model United Nations Activity

    Institute of Scientific and Technical Information of China (English)

    Xu Xiaoyu; Wang Jian

    2004-01-01

    This paper is an introduction to the extracurricular activity of the Model United Nations in Northwestern Polytechnical University (NPU), focusing on the application of modern techniques in the activity and the pedagogical theories applied in it. Interview and questionnaire research reveals the influence of the Model United Nations.

  5. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  6. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated or multilateral) wells were presented. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation was presented.

  7. Accuracy and reproducibility of dental replica models reconstructed by different rapid prototyping techniques

    NARCIS (Netherlands)

    Hazeveld, Aletta; Huddleston Slater, James J. R.; Ren, Yijin

    INTRODUCTION: Rapid prototyping is a fast-developing technique that might play a significant role in the eventual replacement of plaster dental models. The aim of this study was to investigate the accuracy and reproducibility of physical dental models reconstructed from digital data by several rapid

  8. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, Petru [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States) and Electrical and Computer Engineering Department, Florida A and M University, Tallahassee, FL 32310 (United States)]. E-mail: pandrei@eng.fsu.edu; Oniciuc, Liviu [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States); Stancu, Alexandru [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania); Stoleriu, Laurentiu [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania)]

    2007-09-15

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g., coercive force, susceptibilities at various points, remanence) or other magnetization curves. This system of equations can be either over- or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented.
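
    The identification loop can be sketched generically: collect residual equations between model predictions and known loop quantities and minimize the squared residual with a conjugate gradient routine. Below, a toy two-parameter saturation curve stands in for a real hysteresis model; identifying Preisach or Jiles-Atherton parameters follows the same pattern with a more elaborate forward model.

    ```python
    # Parameter identification by conjugate gradient: fit a model to known loop points.
    import numpy as np
    from scipy.optimize import minimize

    H = np.array([-50.0, -20.0, -5.0, 5.0, 20.0, 50.0])         # applied field samples (invented)
    M_meas = np.array([-0.96, -0.76, -0.24, 0.24, 0.76, 0.96])  # "measured" magnetization (invented)

    def model(params, h):
        ms, a = params
        return ms * np.tanh(h / a)      # toy anhysteretic curve, stand-in for a real model

    def objective(params):
        return np.sum((model(params, H) - M_meas) ** 2)

    result = minimize(objective, x0=[0.5, 10.0], method="CG")   # conjugate gradient solver
    print("identified parameters:", result.x)
    ```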

  9. Modeling Growth Trend and Forecasting Techniques for Vehicular Population in India

    Directory of Open Access Journals (Sweden)

    Kartikeya Jha

    2013-06-01

    Full Text Available Forecasting and estimating growth in the vehicular population is a sine qua non of any major transportation engineering development; it requires capturing the past trend and using it to predict the future trend based on qualified assumptions, simulations and models created using explanatory variables. This work attempts to review the approaches in vogue and investigate a more contemporary approach, Time Series (TS) Analysis. Three fundamentally different methods were explored, and results from each of these analyses were collated to check their respective levels of accuracy in predicting the vehicular population for the same target year. Within the scope of this study, results obtained from TS Analysis were found to be considerably more accurate than those from Trend Line Analysis and observably better than those from Econometric Analysis. To reinforce these observations and the inferences drawn, a second set of analyses was done on more recent input using AADT data from PeMS, California. Inter alia, this was carried out to check for any statistical improvement observed when doing TS analysis with rich and accurate data. With all the data sets used and locations analyzed for forecasting, the Time Series analysis technique was invariably found to be a potent tool for forecasting.
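
    A compact way to contrast Trend Line Analysis with a simple time-series treatment: fit a linear trend, then model the residuals with an AR(1) term fitted by least squares, so the forecast carries the most recent deviation forward instead of discarding it. The registration figures below are invented.

    ```python
    # Trend-line forecast vs. a simple time-series (trend + AR(1) residual) forecast.
    import numpy as np

    vehicles = np.array([4.1, 4.5, 5.0, 5.4, 6.1, 6.5, 7.3, 7.8, 8.7, 9.3])  # invented, millions
    t = np.arange(len(vehicles))

    slope, intercept = np.polyfit(t, vehicles, 1)
    resid = vehicles - (slope * t + intercept)

    phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)  # AR(1) coefficient

    horizon, last = 5, resid[-1]
    for h in range(1, horizon + 1):
        t_future = len(vehicles) - 1 + h
        trend_only = slope * t_future + intercept
        print(f"t+{h}: trend-only {trend_only:.2f}, "
              f"trend+AR(1) {trend_only + (phi ** h) * last:.2f}")
    ```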

  10. Modeling and Simulation of Microgrid Connected Renewable Energy Resources with Svpwm Technique

    Directory of Open Access Journals (Sweden)

    Govinda Chukka

    2014-01-01

    Full Text Available The increasing strain on the global energy supply has resulted in greater interest in renewable energy resources. This presents a significant opportunity for distributed power generation (DG) systems using renewable energy resources, including wind turbines, photovoltaic (PV) generators, small hydro systems, and fuel cells. However, these DG units produce a wide range of voltages due to the fluctuation of energy resources and impose stringent requirements on inverter topologies and controls. Usually, a boost-type dc-dc converter is added in the DG units to step up the dc voltage. This kind of topology, although simple, may not be able to provide enough dc voltage gain when the input is very low, even with an extreme duty cycle. Also, large duty cycle operation may result in serious reverse-recovery problems and increase the ratings of switching devices. Furthermore, the added converter may deteriorate system efficiency and increase system size, weight, and cost. This paper deals with the modeling and simulation of a microgrid connected to renewable energy resources such as a photovoltaic panel and a fuel cell. The inverter circuit is controlled by the space vector pulse width modulation (SVPWM) technique.

  11. Non-linear control logics for vibrations suppression: a comparison between model-based and non-model-based techniques

    Science.gov (United States)

    Ripamonti, Francesco; Orsini, Lorenzo; Resta, Ferruccio

    2015-04-01

    Non-linear behavior is present in many mechanical system operating conditions. In these cases, a common engineering practice is to linearize the equation of motion around a particular operating point and to design a linear controller. The main disadvantage is that the stability properties and validity of the controller are local. In order to improve controller performance, non-linear control techniques represent a very attractive solution for many smart structures. The aim of this paper is to compare non-linear model-based and non-model-based control techniques. In particular, the model-based sliding mode control (SMC) technique is considered because of its easy implementation and the strong robustness of the controller even under heavy model uncertainties. Among the non-model-based control techniques, fuzzy control (FC), which allows the controller to be designed according to if-then rules, has been considered. It defines the controller without a system reference model, offering many advantages such as intrinsic robustness. These techniques have been tested on a non-linear pendulum system.
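
    To make the model-based half concrete, here is a minimal sliding mode regulator for a pendulum, a standard textbook construction rather than the authors' controller: the sliding surface s = e_dot + c*e drives a switching law u = u_eq - k*sign(s). Plant parameters and gains are arbitrary.

    ```python
    # Minimal sliding mode control of a pendulum: theta_ddot = -(g/l) sin(theta) + u.
    import numpy as np

    g_l, c, k, dt = 9.81, 2.0, 5.0, 1e-3          # plant and controller constants (arbitrary)
    theta, omega, theta_ref = 1.0, 0.0, 0.0       # initial state and setpoint

    for i in range(int(5.0 / dt)):
        e, e_dot = theta - theta_ref, omega
        s = e_dot + c * e                          # sliding surface
        u_eq = g_l * np.sin(theta) - c * e_dot     # equivalent control (cancels model dynamics)
        u = u_eq - k * np.sign(s)                  # switching term provides robustness
        omega += (-g_l * np.sin(theta) + u) * dt   # explicit Euler integration
        theta += omega * dt

    print(f"theta after 5 s: {theta:.4f}")         # converges near the setpoint (with chattering)
    ```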

  12. Model-Based Requirements Analysis for Reactive Systems with UML Sequence Diagrams and Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon; Lassen, Kristian Bisgaard

    2008-01-01

    In this paper, we describe a formal foundation for a specialized approach to automatically checking traces against real-time requirements. The traces are obtained from simulation of Coloured Petri Net (CPN) models of reactive systems. The real-time requirements are expressed in terms of a derivative of UML 2.0 high-level Sequence Diagrams. The automated requirement checking is part of a bigger tool framework in which VDM++ is applied to automatically generate initial CPN models based on Problem Diagrams. These models are manually enhanced to provide behavioral descriptions of the environment...

  13. Model requirements for decision support under uncertainty in data scarce dynamic deltas

    NARCIS (Netherlands)

    Haasnoot, Marjolijn; van Deursen, W.P.A.; Kwakkel, J. H.; Middelkoop, H.

    2016-01-01

    There is a long tradition of model-based decision support in water management. The consideration of deep uncertainty, however, changes the requirements imposed on models. In the face of deep uncertainty, models are used to explore many uncertainties and the decision space across multiple outcomes.

  14. Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.

    2003-01-01

    In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...

  15. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    Science.gov (United States)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

    In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performances. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be the most similar. As one statistical technique could not be found to outperform the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied. They showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses, and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainties, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
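
    The ensemble step itself is straightforward once several fitted models are available: average their predicted habitat-suitability probabilities and map the between-model spread as an uncertainty layer. The sketch below uses generic scikit-learn classifiers on synthetic data as stand-ins for the study's redundancy analysis, maximum entropy and random forest models.

    ```python
    # Ensemble habitat-suitability prediction: average probabilities from several models.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    # Synthetic stand-in for environmental layers (backscatter, depth, ...) and presence labels.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)

    models = [RandomForestClassifier(random_state=0).fit(X, y),
              LogisticRegression(max_iter=1000).fit(X, y),
              KNeighborsClassifier().fit(X, y)]

    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])
    ensemble_mean = probs.mean(axis=0)     # ensemble habitat-suitability estimate
    ensemble_spread = probs.std(axis=0)    # between-model disagreement as an uncertainty layer
    print(ensemble_mean[:5], ensemble_spread[:5])
    ```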

  16. Modeling and verifying SoS performance requirements of C4ISR systems

    Institute of Scientific and Technical Information of China (English)

    Yudong Qi; Zhixue Wang; Qingchao Dong; Hongyue He

    2015-01-01

    System-of-systems (SoS) engineering involves a complex process of refining high-level SoS requirements into more detailed systems requirements and assessing the extent to which the performances of to-be systems may possibly satisfy SoS capability objectives. The key issue is how to model such requirements to automate the process of analysis and assessment. This paper suggests a meta-model that defines both functional and non-functional features of SoS requirements for command and control, communication, computer, intelligence, surveillance and reconnaissance (C4ISR) systems. A domain-specific modeling language is defined by extending the unified modeling language (UML) constructs of class and association with fuzzy theory in order to model the fuzzy concepts of performance requirements. An efficiency evaluation function is introduced, based on Bézier curves, to predict the effectiveness of systems. An algorithm is presented to transform domain models in fuzzy UML into a requirements ontology in description logic (DL) so that requirements verification can be automated with a popular DL reasoner such as Pellet.

  17. An experimental comparison of modelling techniques for speaker recognition under limited data condition

    Indian Academy of Sciences (India)

    H S Jayanna; S R Mahadeva Prasanna

    2009-10-01

Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data conditions. The present work gives an experimental evaluation of modelling techniques such as Crisp Vector Quantization (CVQ), Fuzzy Vector Quantization (FVQ), Self-Organizing Map (SOM), Learning Vector Quantization (LVQ), and Gaussian Mixture Model (GMM) classifiers. An experimental evaluation of the most widely used Gaussian Mixture Model–Universal Background Model (GMM–UBM) is also made. The experimental knowledge is then used to select a subset of classifiers for obtaining combined classifiers. It is proposed that the combined LVQ and GMM–UBM classifier provides relatively better performance than all the individual classifiers as well as the other combined classifiers.
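
    As a rough illustration of GMM-based speaker modelling (not the authors' exact setup, and with synthetic data in place of real speech features), the sketch below fits one mixture per speaker and scores a test utterance by its average log-likelihood; scikit-learn is assumed to be available.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)

      # Toy 13-dimensional "MFCC" features for two enrolled speakers.
      train = {"spk1": rng.normal(0.0, 1.0, size=(500, 13)),
               "spk2": rng.normal(0.5, 1.2, size=(500, 13))}

      # Under limited data, few components and diagonal covariances help.
      models = {spk: GaussianMixture(n_components=4, covariance_type="diag",
                                     random_state=0).fit(feats)
                for spk, feats in train.items()}

      test = rng.normal(0.5, 1.2, size=(200, 13))   # utterance from spk2
      scores = {spk: gmm.score(test) for spk, gmm in models.items()}
      print(max(scores, key=scores.get))            # expected: spk2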

  18. A quantitative comparison of the TERA modeling and DFT magnetic resonance image reconstruction techniques.

    Science.gov (United States)

    Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M

    1991-05-01

The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 X 256), "truncated" (256 X 192), and "severely truncated" (256 X 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined and reasons for considering two-dimensional modeling are discussed. Analyses of both medical and phantom data reconstructions are presented.

  19. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.

  20. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

The aim is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside…
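
    A minimal sketch of the GLUE procedure named above, assuming a generic simulate(params) model and using Nash-Sutcliffe efficiency as the informal likelihood; the threshold and likelihood choice are illustrative, not those of the paper.

      import numpy as np

      def glue(simulate, obs, prior_sampler, n_samples=10000, threshold=0.5):
          """Generalized Likelihood Uncertainty Estimation (informal sketch).
          Returns behavioural parameter sets and normalized likelihood weights."""
          params, weights = [], []
          for _ in range(n_samples):
              theta = prior_sampler()
              sim = simulate(theta)
              # Nash-Sutcliffe efficiency as an informal likelihood measure.
              nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
              if nse > threshold:               # keep "behavioural" sets only
                  params.append(theta)
                  weights.append(nse)
          weights = np.array(weights)
          return np.array(params), weights / weights.sum()

    Uncertainty bounds then follow as weighted percentiles of the behavioural simulations.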

  1. Message Structures: a modelling technique for information systems analysis and design

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2011-01-01

Despite the increasing maturity of model-driven software development (MDD), some research challenges remain open in the field of information systems (IS). For instance, there is a need to improve modelling techniques so that they cover several development stages in an integrated way and facilitate the transition from analysis to design. This paper presents Message Structures, a technique for the specification of communicative interactions between the IS and organisational actors. This technique can be used both in the analysis stage and in the design stage. During analysis, it allows abstracting from the technology that will support the IS and complementing business process diagramming techniques with the specification of the communicational needs of the organisation. During design, Message Structures serves two purposes: (i) it allows a specification of the IS memory (e.g. a UML class diagram) to be derived systematically, and (ii) it supports reasoning about the user interface design using abstract patterns. Thi...

  2. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Shengmao Lin

    2017-01-01

Full Text Available In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, the computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortices and tended to overestimate the fluid wall shear stress, compared to FSI. In contrast, only minimal differences in wall stresses and deformation were observed between the FSI and CSS models. Furthermore, it was found that the accuracy of the CSS prediction depends on the pressure profile applied to the aneurysm sac: a large pressure drop across the AAA usually led to the underestimation of wall stresses and thus of the AAA rupture risk. Moreover, assuming isotropic rather than anisotropic AAA wall properties aggravated the differences between the simplified models and the FSI approach. The present work demonstrated the importance of modeling techniques for predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of an appropriate modeling technique, with significant clinical implications.

  3. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques.

    Science.gov (United States)

    Lin, Shengmao; Han, Xinwei; Bi, Yonghua; Ju, Siyeong; Gu, Linxia

    2017-01-01

In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, the computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortices and tended to overestimate the fluid wall shear stress, compared to FSI. In contrast, only minimal differences in wall stresses and deformation were observed between the FSI and CSS models. Furthermore, it was found that the accuracy of the CSS prediction depends on the pressure profile applied to the aneurysm sac: a large pressure drop across the AAA usually led to the underestimation of wall stresses and thus of the AAA rupture risk. Moreover, assuming isotropic rather than anisotropic AAA wall properties aggravated the differences between the simplified models and the FSI approach. The present work demonstrated the importance of modeling techniques for predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of an appropriate modeling technique, with significant clinical implications.

  4. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques

    Science.gov (United States)

    Lin, Shengmao; Han, Xinwei; Bi, Yonghua; Ju, Siyeong

    2017-01-01

In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, the computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortices and tended to overestimate the fluid wall shear stress, compared to FSI. In contrast, only minimal differences in wall stresses and deformation were observed between the FSI and CSS models. Furthermore, it was found that the accuracy of the CSS prediction depends on the pressure profile applied to the aneurysm sac: a large pressure drop across the AAA usually led to the underestimation of wall stresses and thus of the AAA rupture risk. Moreover, assuming isotropic rather than anisotropic AAA wall properties aggravated the differences between the simplified models and the FSI approach. The present work demonstrated the importance of modeling techniques for predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of an appropriate modeling technique, with significant clinical implications. PMID:28321413

  5. The potential of cell sheet technique on the development of hepatocellular carcinoma in rat models

    Science.gov (United States)

    Sakaguchi, Katsuhisa; Abumaree, Mohammed; Mohd Zin, Nur Khatijah; Shimizu, Tatsuya

    2017-01-01

Background Hepatocellular carcinoma (HCC) is considered the third leading cause of cancer death worldwide, with the majority of patients diagnosed in the late stages. Currently, there is no effective therapy. The selection of an animal model that mimics human cancer is essential for the identification of prognostic/predictive markers and candidate genes underlying cancer induction, and for the examination of factors that may influence the response of cancers to therapeutic agents and regimens. In this study, we developed an HCC nude rat model using the cell sheet technique and examined the effect of human stromal cells (SCs) on the development of the HCC model and on different liver parameters such as albumin and urea. Methods The transplanted cell sheets for the HCC rat models were fabricated using thermo-responsive culture dishes. The effect of human umbilical cord mesenchymal stromal cells (UC-MSCs) and human bone marrow mesenchymal stromal cells (BM-MSCs) on the developed tumour was tested. Furthermore, tumour development and liver parameters were studied. Additionally, an angiogenesis assay was performed using Matrigel. Results HepG2 cells required five days to form a complete cell sheet, while HepG2 cells co-cultured with UC-MSCs or BM-MSCs took only three days. The tumour developed within 4 weeks after transplantation of the HCC sheet onto the liver of nude rats. Both UC-MSCs and BM-MSCs improved the secretion of liver parameters by increasing the secretion of albumin and urea. Comparatively, the UC-MSCs were more effective than BM-MSCs, but unlike BM-MSCs, UC-MSCs prevented liver tumour formation and the tube formation of HCC. Conclusions Since this is a novel study to induce liver tumour in rats using a hepatocellular carcinoma sheet and stromal cells, the data obtained suggest that the cell sheet is a fast and easy technique to develop HCC models and that UC-MSCs have therapeutic potential for liver diseases. Additionally, the data procured indicate that stromal cells enhanced

  6. Requirements Evolution Processes Modeling

    Institute of Scientific and Technical Information of China (English)

    张国生

    2012-01-01

Requirements tasks, requirements activities, requirements engineering processes and the requirements engineering process system are formally defined. Requirements tasks are measured with information entropy, while requirements activities, requirements engineering processes and the requirements engineering process system are measured with joint entropy. From the point of view of requirements engineering processes, the microcosmic evolution of iteration and feedback in the requirements engineering process is modeled with condition-event nets. From the point of view of systems engineering, the macro evolution of the whole software requirements engineering process system is modeled with dissipative structure theory.
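
    The abstract does not spell out the entropy formulation; as a small, generic illustration, the Shannon entropy of a distribution over the possible outcomes of a requirements task can be computed as follows.

      import math

      def shannon_entropy(probs):
          """H = -sum(p * log2 p) over outcomes with p > 0, in bits."""
          return -sum(p * math.log2(p) for p in probs if p > 0)

      # Hypothetical task with four possible interpretations of a requirement.
      print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))   # 1.75 bits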

  7. An effectiveness-NTU technique for characterising a finned tubes PCM system using a CFD model

    OpenAIRE

    Tay, N. H. Steven; Belusko, M.; Castell, Albert; Cabeza, Luisa F.; Bruno, F.

    2014-01-01

    Numerical modelling is commonly used to design, analyse and optimise tube-in-tank phase change thermal energy storage systems with fins. A new simplified two dimensional mathematical model, based on the effectiveness-number of transfer units technique, has been developed to characterise tube-in-tank phase change material systems, with radial round fins. The model applies an empirically derived P factor which defines the proportion of the heat flow which is parallel and isothermal....
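
    For orientation, the basic effectiveness-NTU relation for an exchanger in which one stream stays at a constant temperature (a common approximation while the PCM melts or solidifies) is eps = 1 - exp(-NTU); the paper's empirically derived P factor is specific to its geometry and is not reproduced here.

      import math

      def effectiveness(UA, m_dot, cp):
          """Effectiveness for one isothermal stream (Cr = 0):
          eps = 1 - exp(-NTU), with NTU = UA / (m_dot * cp)."""
          ntu = UA / (m_dot * cp)
          return 1.0 - math.exp(-ntu)

      # Hypothetical values: UA in W/K, water at 0.05 kg/s.
      print(effectiveness(UA=150.0, m_dot=0.05, cp=4180.0))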

  8. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Aithal, Sreeramana

    2016-01-01

When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse it in all dimensions. For this purpose, various analysing techniques/frameworks are used. This paper is a discussion on how to use an innovative analysing framework called the ABCD model on a given business model, business strategy, operational concept/idea or business system. Based on four constructs Advantages,...

  9. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

Full Text Available This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of the photovoltaic model, where the method has proved its ability to obtain the model parameters. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
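
    A minimal sketch of the standard firefly update (attractiveness decaying with distance plus a random step); the local-search coupling and the actual PV objective are specific to the paper, so a toy objective stands in below.

      import numpy as np

      rng = np.random.default_rng(1)

      def firefly_step(pop, fitness, beta0=1.0, gamma=1.0, alpha=0.2):
          """One iteration of the standard firefly algorithm (minimization):
          each firefly moves toward every brighter (lower-fitness) one."""
          new_pop = pop.copy()
          for i in range(len(pop)):
              for j in range(len(pop)):
                  if fitness[j] < fitness[i]:
                      r2 = np.sum((pop[i] - pop[j]) ** 2)
                      beta = beta0 * np.exp(-gamma * r2)    # attractiveness
                      new_pop[i] += (beta * (pop[j] - pop[i])
                                     + alpha * (rng.random(pop.shape[1]) - 0.5))
          return new_pop

      def objective(x):                 # stand-in for the PV fitting error
          return np.sum(x ** 2)

      pop = rng.uniform(-2, 2, size=(15, 3))
      for _ in range(50):
          fit = np.array([objective(x) for x in pop])
          pop = firefly_step(pop, fit)
      print(pop[np.argmin([objective(x) for x in pop])])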

  10. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    OpenAIRE

    Hussein Rappel; Aghil Yousefi-Koma; Jalil Jamali; Ako Bahari

    2014-01-01

This paper presents a numerical model of Lamb wave propagation in a homogeneous steel plate using the elastodynamic finite integration technique (EFIT), as well as its validation with analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate the waves for adequate transmission capab...

  11. AN ACCURACY ASSESSMENT OF AUTOMATED PHOTOGRAMMETRIC TECHNIQUES FOR 3D MODELING OF COMPLEX INTERIORS

    OpenAIRE

    Georgantas, A.; M. Brédif; Pierrot-Desseilligny, M.

    2012-01-01

This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning but also in terms of cost in money, acquisition and computational time. For this purpose we chose a modern building's stairway as a test site. APERO/MICMAC (©IGN), which is an Open Source photogrammetric softwar...

  12. Performance Requirements Modeling and Assessment for Active Power Ancillary Services

    DEFF Research Database (Denmark)

    Bondy, Daniel Esteban Morales; Thavlov, Anders; Tougaard, Janus Bundsgaard Mosbæk

    2017-01-01

New sources of ancillary services are expected in the power system. For large and conventional generation units the dynamic response is well understood and detailed individual measurement is feasible, which factors into the straightforward performance requirements applied today. For secure power system operation, a reliable service delivery is required, yet it may not be appropriate to apply conventional performance requirements to new technologies and methods. The service performance requirements and assessment methods therefore need to be generalized and standardized in order to include future ancillary service sources. This paper develops a modeling method for ancillary services performance requirements, including performance and verification indices. The use of the modeling method and the indices is exemplified in two case studies.

  13. Antenna pointing system for satellite tracking based on Kalman filtering and model predictive control techniques

    Science.gov (United States)

    Souza, André L. G.; Ishihara, João Y.; Ferreira, Henrique C.; Borges, Renato A.; Borges, Geovany A.

    2016-12-01

The present work proposes a new approach for an antenna pointing system for satellite tracking. Such a system uses the received signal to estimate the beam pointing deviation and then adjusts the antenna pointing. The present work has two contributions. First, the estimation is performed by a Kalman-filter-based conical scan technique. This technique uses the Kalman filter, avoiding the batch estimator, and applies a mathematical manipulation that avoids linearization approximations. Second, a control technique based on model predictive control, together with an explicit state-feedback solution, is obtained in order to reduce the computational burden. Numerical examples illustrate the results.
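
    As a generic illustration of the estimation half of such a system (not the authors' conical-scan formulation), one predict/update cycle of a linear Kalman filter for a pointing-deviation state looks as follows.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of a linear Kalman filter."""
          x = F @ x                                  # predict state
          P = F @ P @ F.T + Q                        # predict covariance
          S = H @ P @ H.T + R                        # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
          x = x + K @ (z - H @ x)                    # update state
          P = (np.eye(len(x)) - K @ H) @ P           # update covariance
          return x, P

      # Toy 1-D pointing deviation with a noisy direct measurement.
      F = np.array([[1.0]]); H = np.array([[1.0]])
      Q = np.array([[1e-4]]); R = np.array([[1e-2]])
      x, P = np.array([0.0]), np.array([[1.0]])
      for z in (0.12, 0.10, 0.11):
          x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
      print(x)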

  14. Validation of a COMSOL Multiphysics based soil model using imaging techniques

    Science.gov (United States)

    Hayes, Robert; Newill, Paul; Podd, Frank; Dorn, Oliver; York, Trevor; Grieve, Bruce

    2010-05-01

    In the face of climate change the ability to rapidly identify new plant varieties that will be tolerant to drought, and other stresses, is going to be key to breeding the food crops of tomorrow. Currently, above soil features (phenotypes) are monitored in industrial greenhouses and field trials during seed breeding programmes so as to provide an indication of which plants have the most likely preferential genetics to thrive in the future global environments. These indicators of 'plant vigour' are often based on loosely related features which may be straightforward to examine, such as an additional ear of corn on a maize plant, but which are labour intensive and often lacking in direct linkage to the required crop features. A new visualisation tool is being developed for seed breeders, providing on-line data for each individual plant in a screening programme indicating how efficiently each plant utilises the water and nutrients available in the surrounding soil. It will be used as an in-field tool for early detection of desirable genetic traits with the aim of increased efficiency in identification and delivery of tomorrow's drought tolerant food crops. Visualisation takes the form of Electrical Impedance Tomography (EIT), a non-destructive and non-intrusive imaging technique. The measurement space is typical of medical and industrial process monitoring i.e. on a small spatial scale as opposed to that of typical geophysical applications. EIT measurements are obtained for an individual plant thus allowing water and nutrient absorption levels for an individual specimen to be inferred from the resistance distribution image obtained. In addition to traditional soft-field image reconstruction techniques the inverse problem is solved using mathematical models for the mobility of water and solutes in soil. The University of Manchester/Syngenta LCT2 (Low Cost Tomography 2) instrument has been integrated into crop growth studies under highly controlled soil, nutrient and

  15. Fraud Risk Modelling: Requirements Elicitation in the Case of Telecom Services

    DEFF Research Database (Denmark)

    Yesuf, Ahmed; Wolos, Lars Peter; Rannenberg, Kai

    2017-01-01

In this paper, we highlight the important requirements for a usable and context-aware fraud risk modelling approach for Telecom services. To do so, we have conducted two workshops with experts from a Telecom provider and experts from multi-disciplinary areas. In order to show and document the requirements, we

  16. The Nuremberg Code subverts human health and safety by requiring animal modeling

    OpenAIRE

    Greek Ray; Pippus Annalea; Hansen Lawrence A

    2012-01-01

Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive

  17. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) model and a complex-valued autoregressive moving average (CARMA) model using a complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique to MRI k-space reconstruction results in images with improved resolution.
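
    For intuition, the coefficients of a complex-valued AR model can also be recovered by ordinary least squares on complex data, as sketched below; this is a baseline stand-in, not the CVNN method of the paper.

      import numpy as np

      def fit_complex_ar(x, order):
          """Least-squares estimate of a_1..a_p in
          x[n] = a_1 x[n-1] + ... + a_p x[n-p] + e[n]."""
          x = np.asarray(x)
          A = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
          b = x[order:]
          coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
          return coeffs

      # Simulate a first-order complex AR process and recover a_1.
      rng = np.random.default_rng(2)
      a1 = 0.8 * np.exp(1j * 0.3)
      x = np.zeros(2000, dtype=complex)
      for n in range(1, len(x)):
          x[n] = a1 * x[n - 1] + 0.1 * (rng.normal() + 1j * rng.normal())
      print(fit_complex_ar(x, order=1))   # close to a1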

  18. Virtual Community Life Cycle: a Model to Develop Systems with Fluid Requirements

    OpenAIRE

    El Morr, Christo; Maret, Pierre de; Rioux, Marcia; Dinca-Panaitescu, Mihaela; Subercaze, Julien

    2011-01-01

This paper reports the results of an investigation into the life cycle model needed to develop information systems for groups of people with fluid requirements. For this purpose, we developed a modified spiral model and applied it to the analysis, design and implementation of a virtual community for a group of researchers and organizations that collaborated in a research project and had changing system requirements. The virtual knowledge community was dedicated to supporting mobilization and dissemi...

  19. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact that they form a class … into a linear model selection and estimation problem. To this end, we employ three automatic modelling devices. One of them is White’s QuickNet, but we also consider Autometrics, which is well known to time series econometricians, and the Marginal Bridge Estimator, which is better known to statisticians. The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four Scandinavian ones…

  20. Creating the finite element models of car seats with passive head restraints to meet the requirements of passive safety

    Directory of Open Access Journals (Sweden)

    D. Yu. Solopov

    2014-01-01

Full Text Available Solving the problem of creating car seats using modern CAE software based on finite elements can significantly increase the efficiency of the design process. The design process is complicated by the fact that, at present, no available techniques are focused on this sort of task. This article shows the features of creating finite element models (FEM) of car seats at three levels of complexity. It assesses the passive safety ensured by the developed seat models with passive head restraints according to the requirements of UNECE Regulation No 25, and the accuracy of the calculation results compared with those of full-scale experiments. This work is part of a developed technique which allows effective development of car seat designs with both passive and active head restraints, meeting the requirements of passive safety. The calculations and experiments established that, under assessment by the UNECE Regulation No 25 technique, the "rough" FEMs (the 1st and 2nd levels) can be considered rational, both in terms of the effort spent on their creation and task solution and in terms of the errors of the results, and it is expedient to use them for preliminary and repeated calculations. Detailed models (the 3rd level) provide the greatest accuracy (the relative error is 10% for accelerations and 11% for displacements), while, in comparison with the calculations, the relative error for the head restraint model decreases by 5% for accelerations and 9% for displacements. The materials presented in the article are used both in research activities and in training students at the Chair of Wheel Vehicles of the Scientific and Educational Complex "Special Mechanical Engineering" of Bauman Moscow State Technical University.

  1. A STUDY ON THE REFLECTION OF DISASTER PREVENTION SAFETY REQUIREMENT THROUGH USE OF THE IMPROVED QFD TECHNIQUES

    Directory of Open Access Journals (Sweden)

    YOUNG-MIN KIM

    2015-09-01

Full Text Available Railways passing through underwater tunnels have been growing increasingly common at home and abroad, both to reduce the construction cost of railway systems and for aesthetic improvement. However, design concepts and technical maturity with regard to underwater tunnels in Korea still lag behind, and local laws, standards and basic design guidelines for design safety are not yet available. In this study, a design guideline and related regulations reflecting the domestic environment were developed by referring to advanced European and American standards and design guidelines. Based on this outcome, safety requirements for disaster prevention were identified, and a study to secure safety in terms of disaster prevention was conducted by incorporating the outcome into the design. With the safety requirements identified, a procedure was proposed for incorporating them into tunnel design and, moreover, an improved QFD incorporating safety factors was proposed instead of the existing QFD implemented by the common traditional approach. Disaster prevention safety requirements developed through this methodology were then incorporated. The methodology to identify disaster prevention safety requirements and incorporate them into the design is expected to contribute to the development of design guidelines by providing fundamental information when applied to underwater tunnel design.

  2. Data Mining Techniques for Identifying Students at Risk of Failing a Computer Proficiency Test Required for Graduation

    Science.gov (United States)

    Tsai, Chih-Fong; Tsai, Ching-Tzu; Hung, Chia-Sheng; Hwang, Po-Sen

    2011-01-01

    Enabling undergraduate students to develop basic computing skills is an important issue in higher education. As a result, some universities have developed computer proficiency tests, which aim to assess students' computer literacy. Generally, students are required to pass such tests in order to prove that they have a certain level of computer…

  3. A formal analysis of the Shlaer-Mellor method: towards a toolkit for formal and informal requirements specification techniques

    NARCIS (Netherlands)

    Wieringa, R.J.; Saake, G.

    1996-01-01

In this paper, we define a number of tools that we think belong to the core of any toolkit for requirements engineers. The tools are conceptual and hence need precise definitions that lay down as exactly as possible what their meaning and possible use is. We argue that this definition can best

  4. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is very difficult and attractive. A model of the geometric error of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation, identifying the geometric error parameters, is developed. The simulation of cutting a workpiece to verify the model based on multi-body systems is also considered.
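
    As a generic illustration of the multi-body idea (not the paper's specific error model), small geometric errors can be injected as homogeneous transforms between bodies of the kinematic chain, and their effect read off at the tool point; the chain and dimensions below are hypothetical.

      import numpy as np

      def translation(dx, dy, dz):
          T = np.eye(4)
          T[:3, 3] = (dx, dy, dz)
          return T

      def small_rotation(ax, ay, az):
          """Homogeneous transform for small angular errors (radians)."""
          T = np.eye(4)
          T[:3, :3] = [[1, -az, ay], [az, 1, -ax], [-ay, ax, 1]]
          return T

      # Ideal chain: bed -> X slide -> spindle (lengths in mm).
      ideal = translation(200, 0, 0) @ translation(0, 0, 150)
      # Same chain with a small angular error of the X slide about Y.
      actual = (translation(200, 0, 0) @ small_rotation(0, 1e-4, 0)
                @ translation(0, 0, 150))

      tool = np.array([0.0, 0.0, 0.0, 1.0])
      print((actual - ideal) @ tool)    # position error at the tool point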

  5. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    Full Text Available The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  6. Modelling the potential spatial distribution of mosquito species using three different techniques.

    Science.gov (United States)

    Cianci, Daniela; Hartemink, Nienke; Ibáñez-Justicia, Adolfo

    2015-02-27

    Models for the spatial distribution of vector species are important tools in the assessment of the risk of establishment and subsequent spread of vector-borne diseases. The aims of this study are to define the environmental conditions suitable for several mosquito species through species distribution modelling techniques, and to compare the results produced with the different techniques. Three different modelling techniques, i.e., non-linear discriminant analysis, random forest and generalised linear model, were used to investigate the environmental suitability in the Netherlands for three indigenous mosquito species (Culiseta annulata, Anopheles claviger and Ochlerotatus punctor). Results obtained with the three statistical models were compared with regard to: (i) environmental suitability maps, (ii) environmental variables associated with occurrence, (iii) model evaluation. The models indicated that precipitation, temperature and population density were associated with the occurrence of Cs. annulata and An. claviger, whereas land surface temperature and vegetation indices were associated with the presence of Oc. punctor. The maps produced with the three different modelling techniques showed consistent spatial patterns for each species, but differences in the ranges of the predictions. Non-linear discriminant analysis had lower predictions than other methods. The model with the best classification skills for all the species was the random forest model, with specificity values ranging from 0.89 to 0.91, and sensitivity values ranging from 0.64 to 0.95. We mapped the environmental suitability for three mosquito species with three different modelling techniques. For each species, the maps showed consistent spatial patterns, but the level of predicted environmental suitability differed; NLDA gave lower predicted probabilities of presence than the other two methods. The variables selected as important in the models were in agreement with the existing knowledge about
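
    A condensed sketch of the random forest variant, assuming a table of environmental covariates with presence/absence labels; scikit-learn is assumed, the data below are synthetic, and variable names are hypothetical.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)

      # Toy covariates: temperature, precipitation, population density.
      X = rng.normal(size=(1000, 3))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      rf = RandomForestClassifier(n_estimators=500, random_state=0)
      rf.fit(X_tr, y_tr)

      suitability = rf.predict_proba(X_te)[:, 1]    # P(presence) per site
      print("AUC:", roc_auc_score(y_te, suitability))
      print("importances:", rf.feature_importances_)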

  7. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation Modelling

    Science.gov (United States)

    2009-10-01

Beattie-Bridgeman, virial expansion: these equations are suitable for moderate pressures and are usually based on either empirical constants... (Defence R&D Canada, Contract Report CR 2010-013, October 2009.)

  8. Estimation of Actual Evapotranspiration Using an Agro-Hydrological Model and Remote Sensing Techniques

    Directory of Open Access Journals (Sweden)

    mostafa yaghoobzadeh

    2017-02-01

Full Text Available Introduction: Accurate estimation of evapotranspiration plays an important role in the quantification of the water balance at watershed, plain and regional scales. Moreover, it is important for managing water resources, such as water allocation and irrigation management, and for evaluating the effects of land-use change on water yields. Different methods are available for ET estimation, including Bowen ratio energy balance systems, eddy correlation systems and weighing lysimeters. Water balance techniques offer powerful alternatives for measuring ET and other surface energy fluxes. In spite of the elegance, high accuracy and theoretical attractions of these techniques for measuring ET, their practical use over large areas might be limited. They can be very expensive for practical applications at regional scales under heterogeneous terrains composed of different agro-ecosystems. Satellite measurements are an appropriate approach to overcome the aforementioned limitations. The feasibility of using remotely sensed crop parameters in combination with agro-hydrological models has been investigated in recent studies. The aim of the present study was to determine evapotranspiration by two methods, remote sensing and the soil, water, atmosphere and plant (SWAP) model, for wheat fields located in the Neishabour plain. The output of SWAP has been validated by means of soil water content measurements. Furthermore, the actual evapotranspiration estimated by SWAP has been considered as the "reference" in the comparison with the SEBAL energy balance model. Materials and Methods: The Surface Energy Balance Algorithm for Land (SEBAL) was used to estimate actual ET fluxes from MODIS satellite images. SEBAL is a one-layer energy balance model that estimates latent heat flux and other energy balance components without information on soil, crop, and management practices. The near-surface energy balance equation can be approximated as Rn = G + H + λET, where Rn is net radiation (W/m2), G
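
    The bookkeeping implied by that equation is simple: SEBAL obtains the latent heat flux as the residual of the surface energy balance and converts it to an instantaneous ET rate. The sketch below uses standard constants and hypothetical midday inputs.

      LAMBDA = 2.45e6    # latent heat of vaporization, J/kg (approx.)

      def latent_heat_flux(rn, g, h):
          """LE = Rn - G - H, all fluxes in W/m^2."""
          return rn - g - h

      def et_mm_per_hour(le):
          """1 kg of water per m^2 equals 1 mm of depth, so kg/s/m^2 -> mm/s."""
          return le / LAMBDA * 3600.0

      # Hypothetical midday values over a wheat field (W/m^2).
      le = latent_heat_flux(rn=550.0, g=80.0, h=170.0)
      print(le, "W/m^2 ->", round(et_mm_per_hour(le), 3), "mm/h")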

  9. Propagation Techniques and Agronomic Requirements for the Cultivation of Barbados Aloe (Aloe vera (L.) Burm. F.)—A Review

    Science.gov (United States)

    Cristiano, Giuseppe; Murillo-Amador, Bernardo; De Lucia, Barbara

    2016-01-01

    Barbados aloe (Aloe vera (L.) Burm. F.) has traditionally been used for healing in natural medicine. However, aloe is now attracting great interest in the global market due to its bioactive chemicals which are extracted from the leaves and used in industrial preparations for pharmaceutical, cosmetic, and food products. Aloe originated from tropical and sub-tropical Africa, but it is also now cultivated in warm climatic areas of Asia, Europe, and America. In this review, the most important factors affecting aloe production are described. We focus on propagation techniques, sustainable agronomic practices and efficient post harvesting and processing systems. PMID:27721816

  10. PROPAGATION TECHNIQUES AND AGRONOMIC REQUIREMENTS FOR THE CULTIVATION OF BARBADOS ALOE (ALOE VERA (L.) BURM. F.) - A REVIEW.

    Directory of Open Access Journals (Sweden)

    Barbara De Lucia

    2016-09-01

Full Text Available Barbados aloe (Aloe vera (L.) Burm. f.) has traditionally been used for healing in natural medicine. However, aloe is now attracting great interest in the global market due to its bioactive chemicals which are extracted from the leaves and used in industrial preparations for pharmaceutical, cosmetic and food products. Aloe originated from tropical and sub-tropical Africa, but it is also now cultivated in warm climatic areas of Asia, Europe and America. In this review, the most important factors affecting aloe production are described. We focus on propagation techniques, sustainable agronomic practices and efficient post harvesting and processing systems.

  11. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class … problem. To this end we employ three automatic modelling devices. One of them is White’s QuickNet, but we also consider Autometrics, well known to time series econometricians, and the Marginal Bridge Estimator, better known to statisticians and microeconometricians. The performance of these three model selectors is compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series of the G7 countries and the four Scandinavian ones, and focus on forecasting…

  12. A Hybrid Model for the Mid-Long Term Runoff Forecasting by Evolutionary Computation Techniques

    Institute of Scientific and Technical Information of China (English)

    Zou Xiu-fen; Kang Li-shan; Cae Hong-qing; Wu Zhi-jian

    2003-01-01

The mid-long term hydrology forecasting is one of the most challenging problems in hydrological studies. This paper proposes an efficient dynamical system prediction model using evolutionary computation techniques. The new model overcomes some disadvantages of conventional hydrology forecasting models. The observed data are divided into two parts: the slow "smooth and steady" data, and the fast "coarse and fluctuating" data. Under the divide-and-conquer strategy, the behavior of the smooth data is modeled by ordinary differential equations based on evolutionary modeling, and that of the coarse data is modeled using the gray correlative forecasting method. The model is verified on the test data of the mid-long term hydrology forecast in the northeast region of China. The experimental results show that the model is superior to the gray system prediction model (GSPM).
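
    A compact sketch of the grey forecasting step used for the "coarse" component, following the textbook GM(1,1) construction (not necessarily the authors' exact variant); the series below is invented.

      import numpy as np

      def gm11_forecast(x0, n_ahead=1):
          """Grey GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series
          x1 = cumsum(x0), forecast x1, then de-accumulate."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)
          z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
          B = np.column_stack([-z1, np.ones(len(z1))])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
          return x0_hat[len(x0):]                    # forecasts beyond data

      # Toy annual runoff series (hypothetical units).
      print(gm11_forecast([2.87, 3.28, 3.34, 3.62, 3.79], n_ahead=2))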

  13. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  14. Structural model requirements to describe microbial inactivation during a mild heat treatment.

    Science.gov (United States)

    Geeraerd, A H; Herremans, C H; Van Impe, J F

    2000-09-10

    The classical concept of D and z values, established for sterilisation processes, is unable to deal with the typical non-loglinear behaviour of survivor curves occurring during the mild heat treatment of sous vide or cook-chill food products. Structural model requirements are formulated, eliminating immediately some candidate model types. Promising modelling approaches are thoroughly analysed and, if applicable, adapted to the specific needs: two models developed by Casolari (1988), the inactivation model of Sapru et al. (1992), the model of Whiting (1993), the Baranyi and Roberts growth model (1994), the model of Chiruta et al. (1997), the model of Daughtry et al. (1997) and the model of Xiong et al. (1999). A range of experimental data of Bacillus cereus, Yersinia enterocolitica, Escherichia coli O157:H7, Listeria monocytogenes and Lactobacillus sake are used to illustrate the different models' performances. Moreover, a novel modelling approach is developed, fulfilling all formulated structural model requirements, and based on a careful analysis of literature knowledge of the shoulder and tailing phenomenon. Although a thorough insight in the occurrence of shoulders and tails is still lacking from a biochemical point of view, this newly developed model incorporates the possibility of a straightforward interpretation within this framework.
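
    The abstract does not restate the equations; for orientation, the sketch below implements the widely cited shoulder/log-linear/tail survival form associated with this line of work (parameters: initial and residual populations, maximum rate kmax, shoulder length sl), with invented parameter values.

      import numpy as np

      def survivors(t, n0, nres, kmax, sl):
          """Shoulder/log-linear/tail survival curve: N(t) falls from n0,
          after a shoulder of length sl, at rate kmax, towards nres."""
          shoulder = (np.exp(kmax * sl)
                      / (1.0 + (np.exp(kmax * sl) - 1.0) * np.exp(-kmax * t)))
          return (n0 - nres) * np.exp(-kmax * t) * shoulder + nres

      t = np.linspace(0, 30, 7)     # minutes, hypothetical mild heating
      print(np.log10(survivors(t, n0=1e7, nres=1e2, kmax=0.5, sl=5.0)))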

  15. Hybrid models for hydrological forecasting: integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following top

  16. Hybrid models for hydrological forecasting: Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following top

  17. Hybrid models for hydrological forecasting: integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  18. Hybrid models for hydrological forecasting: Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  19. Machine learning techniques for astrophysical modelling and photometric redshift estimation of quasars in optical sky surveys

    CERN Document Server

    Kumar, N Daniel

    2008-01-01

    Machine learning techniques are utilised in several areas of astrophysical research today. This dissertation addresses the application of ML techniques to two classes of problems in astrophysics, namely, the analysis of individual astronomical phenomena over time and the automated, simultaneous analysis of thousands of objects in large optical sky surveys. Specifically investigated are (1) techniques to approximate the precise orbits of the satellites of Jupiter and Saturn given Earth-based observations as well as (2) techniques to quickly estimate the distances of quasars observed in the Sloan Digital Sky Survey. Learning methods considered include genetic algorithms, particle swarm optimisation, artificial neural networks, and radial basis function networks. The first part of this dissertation demonstrates that GAs and PSOs can both be efficiently used to model functions that are highly non-linear in several dimensions. It is subsequently demonstrated in the second part that ANNs and RBFNs can be used as ef...

  20. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.

  1. Comparative analysis of system identification techniques for nonlinear modeling of the neuron-microelectrode junction.

    Science.gov (United States)

    Khan, Saad Ahmad; Thakore, Vaibhav; Behal, Aman; Bölöni, Ladislau; Hickman, James J

    2013-03-01

    Applications of non-invasive neuroelectronic interfacing in the fields of whole-cell biosensing, biological computation and neural prosthetic devices depend critically on an efficient decoding and processing of information retrieved from a neuron-electrode junction. This necessitates development of mathematical models of the neuron-electrode interface that realistically represent the extracellular signals recorded at the neuroelectronic junction without being computationally expensive. Extracellular signals recorded using planar microelectrode or field effect transistor arrays have, until now, primarily been represented using linear equivalent circuit models that fail to reproduce the correct amplitude and shape of the signals recorded at the neuron-microelectrode interface. In this paper, to explore viable alternatives for a computationally inexpensive and efficient modeling of the neuron-electrode junction, input-output data from the neuron-electrode junction is modeled using a parametric Wiener model and a Nonlinear Auto-Regressive network with eXogenous input trained using a dynamic Neural Network model (NARX-NN model). Results corresponding to a validation dataset from these models are then employed to compare and contrast the computational complexity and efficiency of the aforementioned modeling techniques with the Lee-Schetzen technique of cross-correlation for estimating a nonlinear dynamic model of the neuroelectronic junction.

  2. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

Full Text Available This paper proposes a methodology for modeling photovoltaic (PV) systems, considering their behavior in both direct and reverse operating modes and under mismatching conditions. The proposed methodology is based on the inflection points technique, with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model allows the energetic performance of a PV system to be evaluated, exhibiting short simulation times in large PV systems. In addition, this methodology allows the condition of modules affected by partial shading to be estimated, since it is possible to know the power dissipated due to their operation in the second quadrant.

  3. STATISTICAL INFERENCES FOR VARYING-COEFFICIENT MODELS BASED ON LOCALLY WEIGHTED REGRESSION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    梅长林; 张文修; 梁怡

    2001-01-01

Some fundamental issues on statistical inference relating to varying-coefficient regression models are addressed and studied. An exact testing procedure is proposed for checking the goodness of fit of a varying-coefficient model fitted by the locally weighted regression technique versus an ordinary linear regression model. Also, an appropriate statistic for testing variation of the model parameters over the locations where the observations are collected is constructed, and a formal testing approach, which is essential to exploring spatial non-stationarity in geographical science, is suggested.
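
    As an illustration of the locally weighted regression technique underlying such fits, location-dependent coefficients can be estimated by kernel-weighted least squares; the Gaussian kernel and bandwidth below are illustrative choices, not the paper's.

      import numpy as np

      def local_coefficients(u0, u, X, y, bandwidth=0.5):
          """Estimate beta(u0) in y = X @ beta(u) + e by least squares,
          weighting observations by their distance |u - u0|."""
          w = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)   # Gaussian kernel
          sw = np.sqrt(w)
          beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
          return beta

      rng = np.random.default_rng(4)
      u = rng.uniform(0, 1, 300)              # location of each observation
      X = np.column_stack([np.ones(300), rng.normal(size=300)])
      beta_true = np.column_stack([np.sin(2 * np.pi * u), 1 + u])
      y = np.sum(X * beta_true, axis=1) + rng.normal(scale=0.1, size=300)

      print(local_coefficients(0.25, u, X, y))   # approx [1.0, 1.25]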

  4. Discrete Analytic Domains: a New Technique for Groundwater Flow Modeling in Layered, Anisotropic, and Heterogeneous Aquifer Systems

    Science.gov (United States)

    Fitts, C. R.

    2004-12-01

    A new technique for modeling groundwater flow with analytic solutions is presented. It allows modeling of layered aquifer systems with complex heterogeneity and anisotropy. As with previous AEM techniques, flow in each layer is modeled with two-dimensional analytic solutions, there is high accuracy and resolution, the domain is not discretized into grid blocks or elements, and the modeled area can easily be altered and expanded in the midst of the modeling process. This method differs from previous Analytic Element Method (AEM) techniques by allowing general anisotropy conditions in the aquifer. The flow field is broken into discrete polygonal domains, each with its own definition of isotropic or anisotropic aquifer parameters. An advantage of this approach is that the anisotropy orientation and ratio can differ from one domain to another, a capability not possible with the "infinite domain" of previous AEM formulations. With this approach, the potential and discharge vector functions at a point are the sum of contributions from elements within or on the boundary of the domain containing the point. Unlike previous AEM schemes, elements beyond the domain boundary don't contribute to these functions. Once a solution is in hand, less computation is required to evaluate heads and discharges because fewer elements contribute to the equations. This computational efficiency could prove a significant advantage in large regional models and in solute transport models that use a flow model's velocity field. This technique allows multiple aquifer layers to be stacked vertically, and it has the novel ability to have more layers in the area of interest than in distant areas. This feature can save significant computation and input effort by concentrating layering detail only where it is needed and warranted by data. The boundary conditions at line element boundaries are approximated, including the continuity of flow and head across heterogeneity boundaries. By using high

  5. Capital Regulation, Liquidity Requirements and Taxation in a Dynamic Model of Banking

    NARCIS (Netherlands)

    Di Nicolo, G.; Gamba, A.; Lucchetta, M.

    2011-01-01

This paper formulates a dynamic model of a bank exposed to both credit and liquidity risk, which can resolve financial distress in three costly forms: fire sales, bond issuance and equity issuance. We use the model to analyze the impact of capital regulation, liquidity requirements and taxation on ba

  6. Capital Regulation, Liquidity Requirements and Taxation in a Dynamic Model of Banking

    NARCIS (Netherlands)

    Di Nicolo, G.; Gamba, A.; Lucchetta, M.

    2011-01-01

    This paper formulates a dynamic model of a bank exposed to both credit and liquidity risk, which can resolve financial distress in three costly forms: fire sales, bond issuance and equity issuance. We use the model to analyze the impact of capital regulation, liquidity requirements and taxation on b

  7. Animal models in bariatric surgery--a review of the surgical techniques and postsurgical physiology.

    Science.gov (United States)

    Rao, Raghavendra S; Rao, Venkatesh; Kini, Subhash

    2010-09-01

    Bariatric surgery is considered the most effective current treatment for morbid obesity. Since the first publication of an article by Kremen, Linner, and Nelson, many experiments have been performed using animal models. The initial experiments used only malabsorptive procedures like intestinal bypass which have largely been abandoned now. These experimental models have been used to assess feasibility and safety as well as to refine techniques particular to each procedure. We will discuss the surgical techniques and the postsurgical physiology of the four major current bariatric procedures (namely, Roux-en-Y gastric bypass, gastric banding, sleeve gastrectomy, and biliopancreatic diversion). We have also reviewed the anatomy and physiology of animal models. We have reviewed the literature and presented it such that it would be a reference to an investigator interested in animal experiments in bariatric surgery. Experimental animal models are further divided into two categories: large mammals that include dogs, cats, rabbits, and pig and small mammals that include rats and mice.

  7. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    Science.gov (United States)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remain separate from engineering design tools. With the Europa Clipper project, efforts are being made to change the requirements development approach from a separate activity to one intimately embedded in the formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in the architecture definition.
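
    The core idea of generating requirement text from model constraints can be sketched in a few lines. The Constraint and Component classes below are illustrative stand-ins, not the Europa Clipper tooling: a constraint attached to an architectural element is rendered as an English "shall" statement.

        from dataclasses import dataclass, field

        @dataclass
        class Constraint:
            prop: str          # constrained property, e.g. "mass"
            relation: str      # "<=", ">=", or "=="
            value: float
            unit: str

        @dataclass
        class Component:
            name: str
            constraints: list = field(default_factory=list)

        # How each machine-readable relation reads in requirement prose.
        RELATION_TEXT = {
            "<=": "shall not exceed",
            ">=": "shall be at least",
            "==": "shall equal",
        }

        def shall_statement(component: Component, c: Constraint) -> str:
            return (f"The {c.prop} of the {component.name} "
                    f"{RELATION_TEXT[c.relation]} {c.value} {c.unit}.")

        array = Component("solar array", [Constraint("mass", "<=", 120.0, "kg")])
        for c in array.constraints:
            print(shall_statement(array, c))
        # -> The mass of the solar array shall not exceed 120.0 kg.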

  8. Workshop on Functional Requirements for the Modeling of Fate and Transport of Waterborne CBRN Materials

    Energy Technology Data Exchange (ETDEWEB)

    Giles, GE

    2005-02-03

    The purpose of this Workshop on "Functional Requirements for the Modeling of Fate and Transport of Waterborne CBRN Materials" was to solicit functional requirements for tools that help Incident Managers plan for and deal with the consequences of industrial or terrorist releases of materials into the nation's waterways and public water utilities. Twenty representatives attended and several made presentations. Several hours of discussion elicited a set of requirements, which were summarized in a form on which the attendees voted for their highest-priority requirements. These votes were used to determine the prioritized requirements reported in this paper, which can be used to direct future developments.

  9. Modeling and comparative study of various detection techniques for FMCW LIDAR using optisystem

    Science.gov (United States)

    Elghandour, Ahmed H.; Ren, Chen D.

    2013-09-01

    In this paper we investigated different detection techniques, specifically direct detection, coherent heterodyne detection and coherent homodyne detection, for an FMCW LIDAR system using the Optisystem package. A model for the target, the propagation channel and the various detection techniques was developed in Optisystem, and a comparative study of the detection techniques for FMCW LIDAR systems was then carried out analytically and by simulation using the developed model. The performance of direct detection, heterodyne detection and homodyne detection for the FMCW LIDAR system was calculated and simulated, and the simulated performance was verified against results from a MATLAB simulation. The results show that direct detection responds to the intensity of the received electromagnetic signal and has the advantage of low system complexity over the other detection architectures, at the expense of thermal noise being the dominant noise source and relatively poor sensitivity. Much higher detection sensitivity can be achieved using coherent optical mixing, as performed in heterodyne and homodyne detection.
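
    The sensitivity gap between the two receiver families can be illustrated with a back-of-envelope SNR calculation. This is a hedged sketch with assumed parameter values (responsivity, bandwidth, load resistance, LO power), not numbers from the paper: direct detection is limited by thermal noise, while a strong local oscillator pushes coherent detection to the shot-noise limit.

        import math

        q  = 1.602e-19       # electron charge [C]
        kB = 1.381e-23       # Boltzmann constant [J/K]

        def snr_direct(P_s, R=0.9, B=100e6, T=290.0, R_L=50.0):
            """Direct detection: thermal (Johnson) noise usually dominates."""
            i_sig  = R * P_s                         # photocurrent [A]
            var_th = 4 * kB * T * B / R_L            # thermal noise variance [A^2]
            var_sh = 2 * q * i_sig * B               # signal shot noise [A^2]
            return i_sig ** 2 / (var_th + var_sh)

        def snr_heterodyne(P_s, P_lo=1e-3, R=0.9, B=100e6):
            """Coherent detection: a strong LO lifts the beat signal above
            thermal noise, leaving LO shot noise as the limit."""
            i_beat = 2 * R * math.sqrt(P_s * P_lo)   # beat amplitude [A]
            var_sh = 2 * q * R * P_lo * B            # LO shot noise [A^2]
            return (i_beat ** 2 / 2) / var_sh        # -> R * P_s / (q * B)

        P_s = 1e-9                                   # 1 nW received echo (assumed)
        print("direct:     %.1f dB" % (10 * math.log10(snr_direct(P_s))))
        print("heterodyne: %.1f dB" % (10 * math.log10(snr_heterodyne(P_s))))

    With these assumed values, the direct receiver sits tens of dB below the coherent one for the same echo power, which is the qualitative conclusion the abstract reports.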

  10. Influence of rub-in technique on required application time and hand coverage in hygienic hand disinfection.

    Science.gov (United States)

    Kampf, Günter; Reichel, Mirja; Feil, Yvonne; Eggerstedt, Sven; Kaulfers, Paul-Michael

    2008-10-29

    Recent data indicate that full efficacy of a hand rub preparation for hygienic hand disinfection can be achieved within 15 seconds (s). However, the efficacy test used for the European Norm (EN) 1500 samples only the fingertips. Therefore, we investigated hand coverage using sixteen different application variations. The hand rub was supplemented with a fluorescent dye, and hands were assessed under UV light by a blind test, before and after application. Fifteen non-healthcare workers served as subjects for each application variation, apart from one test performed with a group of twenty healthcare workers. All tests apart from the reference procedure were performed using 3 mL of hand rub. The EN 1500 reference procedure, which consists of 6 specific rub-in steps performed twice with an aliquot of 3 mL each time, served as a control. In one part of this study, each of the six steps was performed from one to five times before proceeding to the next step. In another part of the study, the entire sequence of six steps was performed from one to five times. Finally, all subjects were instructed to cover both hands completely, irrespective of any specific steps ("responsible application"). Each rub-in technique was evaluated for untreated skin areas. The reference procedure lasted on average 75 s and resulted in 53% of subjects with at least one untreated area on the hands. Five repetitions of the rub-in steps lasted on average 37 s, with 67% of subjects having incompletely treated hands. One repetition lasted on average 17 s, and all subjects had at least one untreated area. Repeating the sequence of steps lasted longer but did not yield a better result. "Responsible application" was quite fast, lasting 25 s among non-healthcare worker subjects and 28 s among healthcare workers, and comparably effective, with 53% and 55% of hands being incompletely treated. New techniques were as fast and effective as "responsible application". Large untreated areas were found only

  11. Influence of rub-in technique on required application time and hand coverage in hygienic hand disinfection

    Directory of Open Access Journals (Sweden)

    Feil Yvonne

    2008-10-01

    Full Text Available Abstract Background Recent data indicate that full efficacy of a hand rub preparation for hygienic hand disinfection can be achieved within 15 seconds (s). However, the efficacy test used for the European Norm (EN) 1500 samples only the fingertips. Therefore, we investigated hand coverage using sixteen different application variations. The hand rub was supplemented with a fluorescent dye, and hands were assessed under UV light by a blind test, before and after application. Fifteen non-healthcare workers served as subjects for each application variation, apart from one test performed with a group of twenty healthcare workers. All tests apart from the reference procedure were performed using 3 mL of hand rub. The EN 1500 reference procedure, which consists of 6 specific rub-in steps performed twice with an aliquot of 3 mL each time, served as a control. In one part of this study, each of the six steps was performed from one to five times before proceeding to the next step. In another part of the study, the entire sequence of six steps was performed from one to five times. Finally, all subjects were instructed to cover both hands completely, irrespective of any specific steps ("responsible application"). Each rub-in technique was evaluated for untreated skin areas. Results The reference procedure lasted on average 75 s and resulted in 53% of subjects with at least one untreated area on the hands. Five repetitions of the rub-in steps lasted on average 37 s, with 67% of subjects having incompletely treated hands. One repetition lasted on average 17 s, and all subjects had at least one untreated area. Repeating the sequence of steps lasted longer but did not yield a better result. "Responsible application" was quite fast, lasting 25 s among non-healthcare worker subjects and 28 s among healthcare workers, and comparably effective, with 53% and 55% of hands being incompletely treated. New techniques were as fast and effective as "responsible application".

  12. Wastewater treatment models in teaching and training: the mismatch between education and requirements for jobs.

    Science.gov (United States)

    Hug, Thomas; Benedetti, Lorenzo; Hall, Eric R; Johnson, Bruce R; Morgenroth, Eberhard; Nopens, Ingmar; Rieger, Leiv; Shaw, Andrew; Vanrolleghem, Peter A

    2009-01-01

    As mathematical modeling of wastewater treatment plants has become more common in research and consultancy, a mismatch between education and the requirements for model-related jobs has developed. There seems to be a shortage of skilled people, in terms of both quantity and quality. In order to address this problem, this paper provides a framework to outline different types of model-related jobs, assess the required skills for these jobs and characterize the different types of education that modelers obtain "in school" as well as "on the job". It is important to consider that the education of modelers does not happen mainly in university courses and that the variety of model-related jobs goes far beyond use for process design by consulting companies. To resolve the mismatch, the current connection between the requirements for different jobs and the various types of education has to be assessed for different geographical regions and professional environments. This allows the evaluation and improvement of important educational paths, considering quality assurance and future developments. Moreover, conclusions from a workshop involving practitioners and academics from North America and Europe are presented. The participants stressed the importance of non-technical skills and recommended strengthening the role of realistic modeling experience in university training. However, this paper suggests that all providers of modeling education and support (not only universities, but also software suppliers, professional associations and companies performing modeling tasks) are called to assess and strengthen their role in the training and support of professional modelers.

  13. Investigation of the Stability of POD-Galerkin Techniques for Reduced Order Model Development

    Science.gov (United States)

    2016-01-09

    This report investigates approaches to mitigate the stability issues encountered in developing a reduced order model (ROM) for combustion response to specified excitations using the Euler equations. The study includes comparisons of CFD solutions (Case A at x/L = 0.5) and cases with multiple frequencies in the forcing function.
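
    As background for the stability question, the following minimal sketch shows the generic POD-Galerkin recipe on a toy linear system (a generic illustration, not the report's combustion model): POD modes come from an SVD of a snapshot matrix, the full-order operator is projected onto them, and the eigenvalues of the reduced operator reveal whether stability was preserved.

        import numpy as np

        def pod_basis(snapshots, r):
            """Leading r POD modes: left singular vectors of the snapshot matrix."""
            U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
            return U[:, :r]

        def galerkin_rom(A, Phi):
            """Project the full-order dynamics dx/dt = A x onto the POD subspace."""
            return Phi.T @ A @ Phi

        # Toy full-order system: stable linear dynamics with weak random coupling.
        rng = np.random.default_rng(0)
        n = 200
        A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))

        # Collect snapshots with an explicit-Euler march.
        M = np.eye(n) + 0.01 * A
        X = np.empty((n, 100))
        x = rng.standard_normal(n)
        for k in range(100):
            X[:, k] = x
            x = M @ x

        Phi = pod_basis(X, r=4)
        Ar = galerkin_rom(A, Phi)
        # Stability is *not* automatically inherited by the ROM: a positive real
        # part here is exactly the kind of issue the report sets out to mitigate.
        print("max Re(eig) of ROM:", np.linalg.eigvals(Ar).real.max())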

  14. Stakeholder approach, Stakeholders mental model: A visualization test with cognitive mapping technique

    Directory of Open Access Journals (Sweden)

    Garoui Nassreddine

    2012-04-01

    Full Text Available The aim of this paper is to determine the mental models of actors in the firm with respect to the stakeholder approach to corporate governance. Cognitive maps are used to visualize these mental models, showing the actors' ways of thinking about and conceptualizing the stakeholder approach. The paper takes a corporate governance perspective, discusses the stakeholder model, and applies a cognitive mapping technique.
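
    A cognitive map is commonly represented as a signed, directed graph of concepts: nodes are ideas, weighted edges are perceived influences. The mini-map below is entirely hypothetical (concepts and weights invented for illustration, not taken from the paper) and computes a standard cognitive-mapping indicator, concept centrality.

        # Perceived influences of one (hypothetical) stakeholder.
        influence = {
            ("board oversight", "firm value"): +0.6,
            ("employee voice", "firm value"): +0.4,
            ("short-termism", "firm value"): -0.5,
            ("board oversight", "short-termism"): -0.3,
        }

        def centrality(concept):
            """In- plus out-degree by absolute weight: how salient a concept
            is in this actor's thinking."""
            return sum(abs(w) for (a, b), w in influence.items()
                       if concept in (a, b))

        for c in ("firm value", "board oversight", "short-termism", "employee voice"):
            print(f"{c}: {centrality(c):.1f}")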

  15. Spatio–temporal rain attenuation model for application to fade mitigation techniques

    OpenAIRE

    2004-01-01

    We present a new stochastic-dynamic model useful for the planning and design of gigahertz satellite communications using fade mitigation techniques. It is a generalization of the Maseng–Bakken model and targets dual-site, dual-frequency rain-attenuated satellite links. The outcome is a consistent and comprehensive model capable of yielding theoretical descriptions of: 1) the long-term power spectral density of rain attenuation; 2) rain fade slope; 3) the rain frequency scaling factor; 4) site diversity; a...
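
    For orientation, the single-site Maseng–Bakken model that this work generalizes can be simulated in a few lines: ln A(t) follows a first-order (Ornstein–Uhlenbeck) Gauss–Markov process, so attenuation is lognormal with an exponentially decaying autocorrelation. The sketch below uses illustrative parameter values, not values from the paper.

        import numpy as np

        def maseng_bakken(m=0.0, sigma=1.0, beta=2e-4, dt=1.0, n=86400, seed=0):
            """Simulate ln A as an Ornstein-Uhlenbeck process; returns A(t) in dB.

            m, sigma : mean and std of ln A (illustrative values)
            beta     : dynamics parameter [1/s], sets the de-correlation time
            """
            rng = np.random.default_rng(seed)
            x = np.empty(n)
            x[0] = m
            c = np.exp(-beta * dt)                  # one-step autocorrelation
            s = sigma * np.sqrt(1.0 - c * c)        # exact OU step std
            for k in range(1, n):
                x[k] = m + c * (x[k - 1] - m) + s * rng.standard_normal()
            return np.exp(x)

        A = maseng_bakken()                         # one day at 1 s resolution
        fade_slope = np.diff(A)                     # dB/s, one of the statistics
                                                    # the generalized model describes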

  16. Financial-Economic Time Series Modeling and Prediction Techniques – Review

    OpenAIRE

    2014-01-01

    Financial-economic time series are distinguished from other time series because they contain a portion of uncertainty. Because of this, statistical theory and methods play an important role in their analysis. Moreover, the external influence of various parameters on the values in a time series makes them non-linear, which in turn suggests the employment of more complex techniques for their modeling. To cope with this challenging problem many researchers and scientists have developed various models a...

  17. Model-based review of Doppler global velocimetry techniques with laser frequency modulation

    Science.gov (United States)

    Fischer, Andreas

    2017-06-01

    Optical measurements of flow velocity fields are of crucial importance to understand the behavior of complex flow. One flow field measurement technique is Doppler global velocimetry (DGV). A large variety of different DGV approaches exist, e.g., applying different kinds of laser frequency modulation. In order to investigate the measurement capabilities especially of the newer DGV approaches with laser frequency modulation, a model-based review of all DGV measurement principles is performed. The DGV principles can be categorized by the respective number of required time steps. The systematic review of all DGV principles reveals drawbacks and benefits of the different measurement approaches with respect to the temporal resolution, the spatial resolution and the measurement range. Furthermore, the Cramér-Rao bound for photon shot noise is calculated and discussed, which represents a fundamental limit of the achievable measurement uncertainty. As a result, all DGV techniques provide similar minimal uncertainty limits. With N_photons as the number of scattered photons, the minimal standard deviation of the flow velocity is about 106 m/s / √(N_photons), calculated for a perpendicular arrangement of the illumination and observation directions and a laser wavelength of 895 nm. As a further result, the signal processing efficiencies are determined with a Monte-Carlo simulation. Except for the newest correlation-based DGV method, the signal processing algorithms are already optimal or near the optimum. Finally, the different DGV approaches are compared regarding errors due to temporal variations of the scattered light intensity and the flow velocity. The influence of a linear variation of the scattered light intensity can be reduced by maximizing the number of time steps, because more time steps acquire more information for the correction of this systematic effect. However, more time steps can result in a flow velocity measurement with a lower temporal resolution
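
    The quoted limit is easy to evaluate numerically. A minimal sketch, taking the paper's 106 m/s constant for the stated geometry and wavelength as given:

        import math

        def sigma_v_min(n_photons, c0=106.0):
            """Shot-noise-limited velocity standard deviation [m/s].

            c0 is the 106 m/s constant quoted above (895 nm, perpendicular
            illumination/observation); the 1/sqrt(N) scaling is the Cramér-Rao
            behavior common to all the reviewed DGV techniques.
            """
            return c0 / math.sqrt(n_photons)

        for N in (1e4, 1e6, 1e8):
            print(f"N_photons = {N:.0e}: sigma_v >= {sigma_v_min(N):.3f} m/s")
        # About one million scattered photons are needed for ~0.1 m/s resolution.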

  18. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Science.gov (United States)

    2012-01-01

    Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234