WorldWideScience

Sample records for modeling technique capable

  1. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 1: Concepts and methodology

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    A comprehensive data-driven modeling experiment is presented in a two-part paper. In this first part, an extensive data-driven modeling experiment is proposed. The most important concerns regarding the way data-driven modeling (DDM) techniques and data were handled, compared, and evaluated, and the basis on which findings and conclusions were drawn, are discussed. A concise review of key articles that presented comparisons among various DDM techniques is also given. Six DDM techniques, namely neural networks, genetic programming, evolutionary polynomial regression, support vector machines, M5 model trees, and K-nearest neighbors, are proposed and explained. Multiple linear regression and naïve models are also suggested as baselines for comparison with the various techniques. Five datasets from Canada and Europe, representing evapotranspiration, upper- and lower-layer soil moisture content, and the rainfall-runoff process, are described and proposed, in the second paper, for the modeling experiment. Twelve different realizations (groups) are created from each dataset by a procedure involving random sampling. Each group contains three subsets: training, cross-validation, and testing. Each modeling technique is to be applied to each of the 12 groups of each dataset, so that both the prediction accuracy and the uncertainty of the modeling techniques can be evaluated. The description of the datasets, the implementation of the modeling techniques, the results and analysis, and the findings of the modeling experiment are deferred to the second part of this paper.
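
    The splitting procedure is the load-bearing part of this design: repeating the train/cross-validation/test split twelve times per dataset is what allows prediction uncertainty, not just accuracy, to be estimated. A minimal sketch of such a procedure follows, assuming each dataset is an array of records; the split fractions and the function name are illustrative, not from the paper.

```python
import numpy as np

def make_realizations(data, n_groups=12, frac=(0.5, 0.25, 0.25), seed=0):
    """Create n_groups random realizations of one dataset, each split into
    training / cross-validation / testing subsets (fractions are assumed)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    n_train, n_cv = int(frac[0] * n), int(frac[1] * n)
    groups = []
    for _ in range(n_groups):
        idx = rng.permutation(n)  # random sampling without replacement
        groups.append({
            "train": data[idx[:n_train]],
            "cv": data[idx[n_train:n_train + n_cv]],
            "test": data[idx[n_train + n_cv:]],
        })
    return groups

realizations = make_realizations(np.arange(1000))  # stand-in dataset
```

    Each modeling technique is then fitted once per realization, and the spread of test-set errors across the twelve realizations provides the uncertainty estimate.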

  2. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology – Part 2: Application

    Directory of Open Access Journals (Sweden)

    D. P. Solomatine

    2009-11-01

    In this second part of the two-part paper, the data-driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were generated from each dataset by randomly sampling without replacement from the original dataset. Artificial neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distributions of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. The results of the experiment show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies, unlike the two nonlinear soil moisture case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance can be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice; if the kernel is appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP can be more successful than other modeling techniques. K-nn is also successful in linear situations.

  3. Activity-based resource capability modeling

    Institute of Scientific and Technical Information of China (English)

    CHENG Shao-wu; XU Xiao-fei; WANG Gang; SUN Xue-dong

    2008-01-01

    To analyse and optimize an enterprise process in a wide scope, an activity-based method of modeling resource capabilities is presented. It models resource capabilities by means of the same structure as an activity; that is, resource capabilities are defined by input objects, actions, and output objects. A set of activity-based resource capability modeling rules and matching rules between an activity and a resource is introduced. This method can be used to describe not only the capability of manufacturing tools, but also the capability of persons, applications, etc. It unifies the methods for modeling the capabilities of all kinds of resources in an enterprise and supports the optimization of the resource allocation of a process.
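
    The abstract does not give the formal matching rules, so the following Python sketch only illustrates the stated structure: a capability described, like an activity, by input objects, actions, and output objects, plus one plausible activity-resource matching rule. All names are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """A resource capability described with the same structure as an activity."""
    inputs: set[str]   # types of input objects the resource can accept
    actions: set[str]  # actions the resource can perform
    outputs: set[str]  # types of output objects the resource can produce

@dataclass
class Activity:
    inputs: set[str]
    action: str
    outputs: set[str]

def matches(activity: Activity, capability: Capability) -> bool:
    """One plausible matching rule (the paper's actual rules may differ):
    the resource covers the activity's action, inputs, and outputs."""
    return (activity.action in capability.actions
            and activity.inputs <= capability.inputs
            and activity.outputs <= capability.outputs)

# Example: a milling machine matched against a machining activity
mill = Capability({"blank"}, {"mill"}, {"machined part"})
step = Activity({"blank"}, "mill", {"machined part"})
assert matches(step, mill)
```

    Because persons and software applications can be described with the same three-part structure, the same matching rule serves for every resource type, which is the unification the abstract claims.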

  4. Modeling of Network Identification Capability.

    Science.gov (United States)

    1986-07-01

    scalar moment is assumed to follow a Poisson distribution, as suggested by Lomnitz (1966). The cumulative number of events occurring per year at or... "Spectral Ratios from Point Sources in Plane-Layered Earth Models," BSSA, 60, pp. 1937-1987; Lomnitz, C. (1966), "Statistical Prediction of Earthquakes"... "Moment-Magnitude Relations in Theory and Practice," J. Geophys. Res., 89(B7), pp. 6229-6235; Lomnitz, C. (1966), "Statistical Prediction of Earthquakes"

  5. Business Models for Cost Sharing & Capability Sustainment

    Science.gov (United States)

    2012-08-18

    business models need to adapt in a continuous process in most cases, notably the major platforms and technologies featured in this research. Demil and... capability or availability. The business model, as seen by Demil and Lecocq (2010), delivers dynamic consistency by ensuring that profitability and... business of projects. Cambridge, UK: Cambridge University Press. Demil, B., & Lecocq, X. (2010). Business model evolution: In search of dynamic

  6. Capability maturity models for offshore organisational management.

    Science.gov (United States)

    Strutt, J E; Sharp, J V; Terry, E; Miles, R

    2006-12-01

    The goal-setting regime imposed by the UK safety regulator has important implications for an organisation's ability to manage health and safety related risks. Existing approaches to safety assurance based on risk analysis and formal safety assessments are increasingly considered unlikely to create the step-change improvement in safety to which the offshore industry aspires, and alternative approaches are being considered. One approach, which addresses the important issue of organisational behaviour and which can be applied at a very early stage of design, is the capability maturity model (CMM). The paper describes the development of a design safety capability maturity model, outlining the key processes considered necessary for achieving safety, the definition of maturity levels, and scoring methods. The paper discusses how the CMM relates to regulatory mechanisms and risk-based decision making, together with the potential for applying the CMM to environmental risk management.

  7. Capabilities and accessibility: a model for progress

    Directory of Open Access Journals (Sweden)

    Nick Tyler

    2011-11-01

    Accessibility is seen to be a core issue which relates directly to quality of life: if a person cannot reach and use a facility, then they cannot take advantage of the benefits that the facility is seeking to provide. In some cases this is about being able to take part in an activity for enjoyment, but in others it is a question of the exercise of human rights – access to healthcare, education, voting and other citizens' rights. This paper argues that such an equitable accessibility approach requires understanding of the relationships between the capabilities that a person has and the capabilities required of them by society in order to achieve the accessibility they seek. The Capabilities Model, which has been developed at UCL, is an attempt to understand this relationship, and the paper sets out an approach to quantifying the capabilities in a way that allows designers and implementers of environmental construction and operation to take a more robust approach to their decisions about providing accessibility.

  8. Numerical modeling capabilities to predict repository performance

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within industry and universities. The first listing covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  9. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. The study then empirically tested the model using survey data collected from 145 firms. Three factors emerged from the factor analysis, namely IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  10. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing (ASC) Program's Capability Computing System Governance Model. The objectives of the Governance Model are to ensure that capability system resources are allocated on a priority-driven basis according to Program requirements, and to utilize the ASC Capability Systems for the large capability jobs for which they were designed and procured.

  11. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis, namely the Communicative Event Diagram, Message Structures, and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated with communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  12. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based both on the authors' experience and on their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
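
    A toy illustration of how a PCMM assessment might be tabulated: the six elements are quoted from the abstract, while the 0-3 level coding, the example scores, and the minimum-across-elements summary rule are assumptions made for this sketch, not part of the report.

```python
# Toy PCMM assessment table; element names are from the abstract, scores
# and the summary convention are illustrative only.
ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def summarize(scores: dict[str, int]) -> None:
    """Print a maturity profile; summarizing by the weakest element is one
    common convention (assumed here, not prescribed by the report)."""
    for name in ELEMENTS:
        level = scores[name]
        assert 0 <= level <= 3, "four maturity levels: 0..3"
        print(f"{name:<55s} level {level}")
    print("overall (minimum across elements):", min(scores.values()))

summarize(dict(zip(ELEMENTS, [2, 1, 3, 2, 1, 1])))
```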

  13. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented

  14. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Abdel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.

    2015-11-01

    This report collects the efforts performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  15. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    Science.gov (United States)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M&S environments and infrastructure.

  16. Capabilities and accuracy of energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-11-01

    Energy modelling can be used in a number of different ways to fulfill different needs, including certification within building regulations or green building rating tools. Energy modelling can also be used to try to predict what the energy...

  17. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation, such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  18. Towards a national cybersecurity capability development model

    CSIR Research Space (South Africa)

    Jacobs, Pierre C

    2017-06-01

    on organisational strategy (D. Cooper; S. Dhiri & J. Root, 2012). There are many industry-standard operating models, such as the TM Forum's Enhanced Telecom Operations Map (eTOM) Business Process Framework (Cisco Systems, 2009). Other industry-standard operating..., the eTOM Business Process Framework is chosen as the operating model for the E-CMIRC. The eTOM framework is a complete framework addressing marketing and sales; strategy, infrastructure and product; as well as operations and enterprise management...

  19. Computable general equilibrium model fiscal year 2014 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory

    2016-05-11

    This report provides an overview of the development of the NISAC computable general equilibrium (CGE) economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities, making it possible to answer a broader set of questions than the previous economic analysis capability allowed. In particular, CGE modeling captures how the different sectors of the economy (for example, households, businesses, and government) interact to allocate resources in an economy, and this approach captures those interactions when it is used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  20. A Thermo-Optic Propagation Modeling Capability.

    Energy Technology Data Exchange (ETDEWEB)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame, and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient-index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data were collected to validate the structural, thermal, and optical modeling methods.
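
    For context, the ray-path equation being integrated here is the standard gradient-index relation d/ds(n dr/ds) = grad n. Below is a minimal fixed-step integrator of that relation in Python; this is not the report's finite-element implementation, and the thermally perturbed index field is an arbitrary stand-in.

```python
import numpy as np

def trace_ray(r0, d0, n_func, grad_n, ds=1e-3, steps=2000):
    """Integrate d/ds (n dr/ds) = grad n with a simple Euler scheme.
    r0: start point, d0: unit direction; n_func/grad_n define the medium."""
    r = np.asarray(r0, float)
    T = n_func(r) * np.asarray(d0, float)   # optical ray vector T = n * dr/ds
    path = [r.copy()]
    for _ in range(steps):
        r = r + ds * T / n_func(r)          # dr/ds = T / n
        T = T + ds * grad_n(r)              # dT/ds = grad n
        path.append(r.copy())
    return np.array(path)

# Stand-in thermally perturbed medium: n = n0 + a*y (linear vertical gradient)
n0, a = 1.5, 0.01
ray = trace_ray([0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                n_func=lambda r: n0 + a * r[1],
                grad_n=lambda r: np.array([0.0, a, 0.0]))
print(ray[-1])  # ray bends toward the higher-index region
```

    In the report's method, n_func and grad_n would come from finite-element interpolation within each element rather than from a closed-form expression.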

  1. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment in which highly parallel complex systems can be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
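
    A toy illustration (not from the paper) of the firing rule that underlies data flow models: a node fires when all of its input tokens are available, which is why hardware and software elements can be described uniformly as token-consuming nodes. Graph, node, and edge names below are invented for the sketch.

```python
# Toy data flow graph: a node fires when every one of its input edges
# holds a token; firing consumes the inputs conceptually and produces
# a token on the output edge.
graph = {
    "add": {"inputs": ["a", "b"], "output": "sum", "op": lambda x, y: x + y},
    "mul": {"inputs": ["sum", "c"], "output": "prod", "op": lambda x, y: x * y},
}
tokens = {"a": 2, "b": 3, "c": 4}           # initial tokens on the input edges

fired = True
while fired:                                # repeatedly fire any enabled node
    fired = False
    for name, node in graph.items():
        ins = node["inputs"]
        if node["output"] not in tokens and all(i in tokens for i in ins):
            tokens[node["output"]] = node["op"](*(tokens[i] for i in ins))
            fired = True

print(tokens["prod"])                       # (2 + 3) * 4 = 20
```

    Because execution order is driven purely by token availability, independent nodes may fire in parallel, which is the property that makes data flow attractive for modeling highly parallel systems.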

  2. Cyber Capability Development Centre (CCDC): Proposed Governance Model

    Science.gov (United States)

    2013-12-01

    Canada. Contract Report DRDC-RDDC-2014-C170, December 2013. Cyber Capability Development Centre (CCDC): Proposed governance model. Douglas... Figure 1: CCDC organization and infrastructure

  3. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

    High-resolution simulations using nonhydrostatic models like SUNTANS are crucial for understanding multiscale processes that are unresolved, and... DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS. Oliver B. Fringer, 473 Via Ortega, Room 187, Dept. of Civil and Environmental Engineering, Stanford University

  4. Business Models for Cost Sharing and Capability Sustainment

    Science.gov (United States)

    2012-04-30

    business models need to adapt in a continuous process in most cases, notably the major platforms and technologies featured in this research. Demil and... capability or availability. The business model, as seen by Demil and Lecocq (2010), delivers dynamic consistency by ensuring that profitability and... Cambridge, UK: Cambridge University Press. Demil, B., & Lecocq, X. (2010). Business model evolution: In search of dynamic consistency. Long Range

  5. Integrated simulation and modeling capability for alternate magnetic fusion concepts

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, B. I.; Hooper, E.B.; Jarboe, T. R.; LoDestro, L. L.; Pearlstein, L. D.; Prager, S. C.; Sarff, J. S.

    1998-11-03

    This document summarizes a strategic study addressing the development of a comprehensive modeling and simulation capability for magnetic fusion experiments, with particular emphasis on devices that are alternatives to the mainline tokamak. A code development project in this area supports two defined strategic thrust areas in the Magnetic Fusion Energy Program: (1) comprehensive simulation and modeling of magnetic fusion experiments, and (2) development, operation, and modeling of magnetic fusion alternate-concept experiments.

  6. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    Science.gov (United States)

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews, and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  7. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL

    1994-01-01

    and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer mounted on the dolphin's pen. The echoes were filtered by a bank of continuous constant-Q digital filters...

  8. Experiences with the Capability Maturity Model in a research environment

    NARCIS (Netherlands)

    Velden, van der M.J.; Vreke, J.; Wal, van der B.; Symons, A.

    1996-01-01

    The project described here was aimed at evaluating the Capability Maturity Model (CMM) in the context of a research organization. Part of the evaluation was a standard CMM assessment. It was found that CMM could be applied to a research organization, although its five maturity levels were considered

  9. Capable of Suicide: A Functional Model of the Acquired Capability Component of the Interpersonal-Psychological Theory of Suicide

    Science.gov (United States)

    Smith, Phillip N.; Cukrowicz, Kelly C.

    2010-01-01

    A functional model of the acquired capability for suicide, a component of Joiner's (2005) Interpersonal-Psychological Theory of Suicide, is presented. The model integrates the points discussed by…

  10. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  11. A Systems Engineering Capability Maturity Model, Version 1.1,

    Science.gov (United States)

    1995-11-01

    Process areas covered include PA 17: Provide Ongoing Skills and Knowledge, and PA 18: Coordinate with Suppliers... The Systems Engineering Capability Maturity Model (SE-CMM) was developed as a response to industry requests for assistance in coordinating and publishing a model that would foster improvement

  12. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  13. Simulation modeling on the growth of firm's safety management capability

    Institute of Scientific and Technical Information of China (English)

    LIU Tie-zhong; LI Zhi-xiang

    2008-01-01

    To address deficiencies in safety management measures, a simulation model of a firm's safety management capability (FSMC) was established based on organizational learning theory. The system dynamics (SD) method was used, including a level-and-rate system, variable equations, and a system structure flow diagram. The simulation model was verified in two respects: first, the model's sensitivity to variables was tested using the gross amount of safety investment and the proportion of safety investment; second, variable dependencies were checked using the correlated variables of FSMC and organizational learning. The feasibility of the simulation model is verified through these processes.
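
    The abstract names the system dynamics building blocks (a level, rates, variable equations) but gives no equations. As a generic illustration of how such a level-and-rate model is simulated, here is a hypothetical stock-and-flow sketch; every variable name and parameter value below is an assumption for the sketch, not taken from the paper.

```python
# Hypothetical system dynamics sketch: FSMC as a level (stock) that grows
# through an organizational-learning inflow driven by safety investment
# and shrinks through a capability-erosion outflow.
dt, T = 0.25, 40.0            # time step and horizon (arbitrary units)
fsmc = 10.0                   # level: firm's safety management capability
investment = 5.0              # safety investment per period (assumed)
learning_eff = 0.08           # learning effectiveness (assumed parameter)
decay = 0.02                  # capability erosion fraction (assumed)

t = 0.0
while t < T:
    learning_rate = learning_eff * investment * (1 + 0.01 * fsmc)  # inflow rate
    erosion_rate = decay * fsmc                                    # outflow rate
    fsmc += dt * (learning_rate - erosion_rate)  # Euler integration of the level
    t += dt

print(f"FSMC after {T} periods: {fsmc:.1f}")
```

    The sensitivity tests described in the abstract correspond to rerunning such a simulation while varying the investment parameters and observing the response of the level.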

  14. Capability maturity models in engineering companies: case study analysis

    Directory of Open Access Journals (Sweden)

    Titov Sergei

    2016-01-01

    In the conditions of the current economic downturn, engineering companies in Russia and worldwide are searching for new approaches and frameworks to improve their strategic position, increase the efficiency of their internal business processes, and enhance the quality of their final products. Capability maturity models are well-known tools used by many foreign engineering companies to assess the productivity of processes, to elaborate programs of business process improvement, and to prioritize efforts to optimize whole-company performance. The impact of capability maturity model implementation on cost and time is documented and analyzed in the existing research. However, the potential of maturity models as tools of quality management is less well known. This article attempts to analyze the impact of CMM implementation on quality issues. The research is based on a case study methodology and investigates a real-life situation in a Russian engineering company.

  15. Aeroheating Mapping to Thermal Model for Autonomous Aerobraking Capability

    Science.gov (United States)

    Amundsen, Ruth M.

    2010-01-01

    Thermal modeling has been performed to evaluate the potential for autonomous aerobraking of a spacecraft in the atmosphere of a planet. As part of this modeling, the aeroheating flux during aerobraking must be applied to the spacecraft solar arrays to evaluate their thermal response. On the Mars Reconnaissance Orbiter (MRO) mission, this was done via two separate thermal models and an extensive suite of mapping scripts. That method has been revised, and the thermal analysis of an aerobraking pass can now be accomplished via a single thermal model, using a new capability in the Thermal Desktop software. This capability, Boundary Condition Mapper, can input heating flux files that vary with time, with position on the solar array, and with the skin temperature. A recently added feature of the Boundary Condition Mapper is that the module can also utilize files that describe the variation of aeroheating over the surface with atmospheric density (rather than time); this is the format of the MRO aeroheating files. This capability has allowed a major streamlining of the MRO thermal process, simplifying the procedure for importing new aeroheating files and trajectory information. The new process, as well as the quantified time savings, is described.

  16. Development of a fourth generation predictive capability maturity model.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J.; Trucano, Timothy Guy

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are (1) the communication of computational simulation capability, accurately and transparently, and (2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  17. Capacity and capability enhancements of FBG sensor system by utilizing intensity and WDM detection technique

    Science.gov (United States)

    Yeh, Chien-Hung; Zhuang, Yuan-Hong; Tsai, Ning; Chow, Chi-Wai

    2017-03-01

    In this paper, we demonstrate a multipoint fiber Bragg grating (FBG)-based sensor system using an intensity and wavelength-division-multiplexing (I-WDM) technique to enhance the sensing capacity and capability. In the proposed multipoint sensor system, a three-output-port optical splitter with output ratios of 50%, 35%, and 15% connects each intensity-coded FBG sensor for simultaneous strain and temperature sensing. The different output ratios of the connected ports produce different intensity codings for the I-WDM application. Nine FBGs with different Bragg wavelengths are employed for the demonstration. The proposed FBG sensor system not only senses strain and temperature simultaneously, but also increases the capacity and capability.

  18. Off-Gas Adsorption Model Capabilities and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, Kevin L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, Amy K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ladshaw, Austin [Georgia Inst. of Technology, Atlanta, GA (United States); Yiacoumi, Sotira [Georgia Inst. of Technology, Atlanta, GA (United States); Tsouris, Costas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-03-01

    Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to transition more effectively from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well at capturing the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool, as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real-world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single-species isotherms, such as those for Kr and Xe. Since isotherm data for each gas is currently available at only a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently available.

  19. Climbing the ladder: capability maturity model integration level 3

    Science.gov (United States)

    Day, Bryce; Lutteroth, Christof

    2011-02-01

    This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a Capability Maturity Model Integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Various challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies, and the integration of the Rational Unified Process development framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  20. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina

    2012-09-01

    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  1. Object-Oriented MDAO Tool with Aeroservoelastic Model Tuning Capability

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley; Lung, Shun-fat

    2008-01-01

    An object-oriented multi-disciplinary analysis and optimization (MDAO) tool has been developed at the NASA Dryden Flight Research Center to automate the design and analysis process and to leverage existing commercial as well as in-house codes, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. Once the structural analysis discipline is finalized and integrated completely into the MDAO process, other disciplines such as aerodynamics and flight controls will be integrated as well. Simple and efficient model tuning capabilities, formulated as an optimization problem, have been successfully integrated with the MDAO tool. Better synchronization of all phases of experimental testing (ground and flight), analytical model updating, high-fidelity simulation for model validation, and integrated design may reduce uncertainties in the aeroservoelastic model and increase flight safety.

  2. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques, together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  3. EASEWASTE-life cycle modeling capabilities for waste management technologies

    DEFF Research Database (Denmark)

    Bhander, Gurbakhash Singh; Christensen, Thomas Højlund; Hauschild, Michael Zwicky

    2010-01-01

    Background, Aims and Scope: The management of municipal solid waste and the associated environmental impacts are subjects of growing attention in industrialized countries. The EU has recently strongly emphasized the role of LCA in its waste and resource strategies. The development of sustainable solid waste management systems applying a life-cycle perspective requires readily understandable tools for modelling the life cycle impacts of waste management systems. The aim of this paper is to demonstrate the structure, functionalities and LCA modelling capabilities of the PC-based, life-cycle-oriented waste management model EASEWASTE, developed at the Technical University of Denmark specifically to meet the needs of the waste system developer, with the objective of evaluating the environmental performance of the various elements of existing or proposed solid waste management systems. Materials...

  4. Challenges in Developing Strategic Capabilities in SEP Event Modeling

    Science.gov (United States)

    Luhmann, J. G.; S., L.; Krauss-Varban, D.; Li, G.; Odstrcil, D.; Riley, P.; Owens, M.; Sokolov, I.; Manchester, W.; Kota, J.

    2008-05-01

    Realistic major SEP event modeling lags behind current efforts in CME/ICME modeling at this point in time. While on the surface the implementation of such models might seem straightforward, in practice there are many aspects of their construction that make progress slow. Part of the difficulty stems from the complex physics of the problem, some of which remains controversial, and part is simply related to the logistics of coupling the various concepts of acceleration and transport to the increasingly realistic MHD models of CMEs and ICMEs. Before a Strategic Capability in SEP event modeling can begin to be realized, the latter challenge must be addressed. Several groups, including CISM, CSEM and the LWS TR&T Focus Science Team on the subject have been grappling with this ostensibly more tractable part of the problem. This presentation is an effort to collect and communicate the challenges faced in the course of applying MHD results in various approaches. One goal is to suggest what MHD model improvements and products would be most useful toward this goal. Another is to highlight the realities of compromises that must necessarily be made in the SEP event models regardless of the perfection of the MHD descriptions of CMEs and ICMEs.

  5. Assessment of Modeling Capability for Reproducing Storm Impacts on TEC

    Science.gov (United States)

    Shim, J. S.; Kuznetsova, M. M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Foerster, M.; Foster, B.; Fuller-Rowell, T. J.; Huba, J. D.; Goncharenko, L. P.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.

    2014-12-01

    During geomagnetic storms, the energy transfer from the solar wind to the magnetosphere-ionosphere system adversely affects communication and navigation systems. Quantifying storm impacts on TEC (Total Electron Content) and assessing modeling capability for reproducing those impacts are important for specifying and forecasting space weather. In order to quantify storm impacts on TEC, we considered several parameters: TEC changes compared to quiet time (the day before the storm), TEC differences between 24-hour intervals, and the maximum increase/decrease during the storm. We investigated the spatial and temporal variations of the parameters during the 2006 AGU storm event (14-15 Dec. 2006) using ground-based GPS TEC measurements in eight selected 5-degree longitude sectors. The latitudinal variations were also studied in two of the eight longitude sectors where data coverage is relatively better. We obtained modeled TEC from various ionosphere/thermosphere (IT) models. The parameters from the models were compared with each other and with the observed values. We quantified the performance of the models in reproducing the TEC variations during the storm using skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
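
    The abstract does not define its skill scores; one common convention is skill measured relative to a reference prediction, such as quiet-time values. A minimal sketch under that assumption follows; the TEC numbers are hypothetical.

```python
import numpy as np

def rmse(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def skill_score(model_tec, obs_tec, ref_tec):
    """Skill relative to a reference prediction (assumed convention):
    1 = perfect, 0 = no better than the reference, negative = worse."""
    return 1.0 - rmse(model_tec, obs_tec) / rmse(ref_tec, obs_tec)

# Hypothetical hourly TEC (TECU) for one longitude sector during the storm
obs = [20, 28, 35, 35, 35, 22]
model = [18, 25, 31, 32, 37, 24]
quiet = [20, 21, 22, 22, 21, 20]   # reference: the day before the storm
print(f"skill = {skill_score(model, obs, quiet):.2f}")
```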

  6. Finite Element Modeling, Simulation, Tools, and Capabilities at Superform

    Science.gov (United States)

    Raman, Hari; Barnes, A. J.

    2010-06-01

    Over the past thirty years Superform has been a pioneer in the superplastic forming (SPF) arena, having developed a keen understanding of the process and a range of unique forming techniques to meet varying market needs. Superform's high-profile list of customers includes Boeing, Airbus, Aston Martin, Ford, and Rolls Royce. One of the more recent additions to Superform's technical know-how is finite element modeling and simulation. Finite element modeling is a powerful numerical technique which, when applied to SPF, provides a host of benefits, including accurate prediction of strain levels in a part, of the presence of wrinkles, and of pressure cycles optimized for time and part thickness. This paper outlines a brief history of finite element modeling applied to SPF and then reviews some of the modeling tools and techniques that Superform have applied, and continue to apply, to successfully superplastically form complex-shaped parts. The advantages of employing modeling at the design stage are discussed and illustrated with real-world examples.

  7. Coarse-grained DNA model capable of simulating ribose flexibility

    CERN Document Server

    Kovaleva, Natalya A; Mazo, Mikhail A; Zubova, Elena A

    2014-01-01

    We propose a "sugar" coarse-grained (CG) DNA model capable of simulating both biologically significant B- and A-DNA forms. The number of degrees of freedom is reduced to six grains per nucleotide. We show that this is the minimal number sufficient for this purpose. The key features of the sugar CG DNA model are: (1) simulation of sugar repuckering between C2'-endo and C3'-endo by the use of one non-harmonic potential and one three-particle potential, (2) explicit representation of sodium counterions and (3) implicit solvent approach. Effects of solvation and of partial charge screening at small distances are taken into account through the shape of potentials of interactions between charged particles. We obtain parameters of the sugar CG DNA model from the all-atom AMBER model. The suggested model allows adequate simulation of the transitions between A- and B-DNA forms, as well as of large deformations of long DNA molecules, for example, in binding with proteins. Small modifications of the model can provide th...

  8. Distributed generation capabilities of the national energy modeling system

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina Hamachi; Edwards, Jennifer L.; Marnay, Chris

    2003-01-01

    This report describes Berkeley Lab's exploration of how the National Energy Modeling System (NEMS) models distributed generation (DG) and presents possible approaches for improving how DG is modeled. The on-site electric generation capability has been available since the AEO2000 version of NEMS. Berkeley Lab has previously completed research on distributed energy resources (DER) adoption at individual sites and has developed a DER Customer Adoption Model called DER-CAM. Given interest in this area, Berkeley Lab set out to understand how NEMS models small-scale on-site generation, to assess how adequately DG is treated in NEMS, and to propose improvements or alternatives. The goal is to determine how well NEMS models the factors influencing DG adoption and to consider alternatives to the current approach. Most small-scale DG adoption takes place in the residential and commercial modules of NEMS. Investment in DG ultimately offsets purchases of electricity, which also eliminates the losses associated with transmission and distribution (T&D). If the DG technology chosen is photovoltaics (PV), NEMS assumes renewable energy consumption replaces the energy input to electric generators. If the DG technology is fuel-consuming, consumption of fuel in the electric utility sector is replaced by residential or commercial fuel consumption. The waste heat generated by thermal technologies can be used to offset water heating and space heating energy uses, but there is no thermally activated cooling capability. This study consists of a review of model documentation and a paper by EIA staff, a series of sensitivity runs performed by Berkeley Lab that exercise selected DG parameters in the AEO2002 version of NEMS, and a scoping effort covering possible enhancements and alternatives to NEMS's current DG capabilities. In general, the treatment of DG in NEMS is rudimentary. The penetration of DG is determined by an economic cash-flow analysis that determines adoption based on
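
    The abstract is truncated before it describes the cash-flow calculation, so the following is only a generic illustration of how cash-flow-based adoption screens are commonly built (a simple-payback test), not NEMS's actual algorithm; all numbers are hypothetical.

```python
def payback_years(capital_cost, annual_savings_by_year):
    """Return the first year in which cumulative savings repay the installed
    cost, or None if the horizon is too short (generic sketch, not NEMS)."""
    cumulative = 0.0
    for year, savings in enumerate(annual_savings_by_year, start=1):
        cumulative += savings  # avoided electricity purchases, incl. T&D losses
        if cumulative >= capital_cost:
            return year
    return None

# Hypothetical rooftop PV: $12,000 installed, ~$1,500/yr avoided purchases
print(payback_years(12000.0, [1500.0] * 15))  # -> 8
```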

  9. Research on regional capability constructive models of clean development mechanism

    Institute of Scientific and Technical Information of China (English)

    Chen Leishan; Liu Qingqiang; Geng Jie; Lu Genfa

    2009-01-01

    Global climate change has been identified as the first of the top ten environmental problems in the world. As climate change will have serious effects on social and economic development and on people's everyday lives, many countries and governments are making untiring efforts to combat it. As one of the important mechanisms for reducing greenhouse gas (GHG) emissions under the Kyoto Protocol, the Clean Development Mechanism (CDM) has not only provided a chance for developed countries to fulfill their GHG emission reduction obligations, but also provided an opportunity for developing countries to combat climate change within a sustainable development framework. The dual objectives of meeting developed countries' GHG emission reduction obligations and supporting developing countries' sustainable development are to be achieved under the CDM. As a country with responsibility, China has been positively developing CDM projects and promoting energy saving and emissions reduction during the three years since the Kyoto Protocol came into force, and its CDM project development has been in the front rank worldwide. However, given the vast territory of China, notable differences occur among regions. In order to promote CDM development in China, it is necessary to carry out regional CDM capability construction in accordance with the actual conditions in different regions. Based on an analysis of developed CDM projects and the current status of CDM development in China, problems in China's CDM development are pointed out in this paper, including inefficiency in small and medium-sized CDM project development, over-centralization of the CDM development scope, and, especially, differentiated provincial CDM project development capability. Furthermore, reasons for these problems are analyzed in terms of the leading factors, including policy orientation, information asymmetry, and weak CDM capability. In order to promote CDM project development in China, a new CDM capability construction model is put forward.

  10. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have led to efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, from 44 electronics and IT companies. We then predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
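
    A minimal sketch of the GA-over-SVR idea using scikit-learn on synthetic data: the genetic algorithm searches the SVR hyperparameters (C, epsilon, gamma) by cross-validated fitness. The abstract does not specify the chromosome encoding, operators, or indicator set, so every detail below (population size, mutation scale, parameter ranges, synthetic data) is an assumption.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Synthetic stand-in for the financial/patent indicators and performance target
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=120)

def fitness(params):
    """Cross-validated R^2 of an SVR with the candidate hyperparameters."""
    C, epsilon, gamma = params
    model = SVR(C=C, epsilon=epsilon, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

def mutate(params):
    # Log-scale perturbation keeps all three parameters positive
    return tuple(p * np.exp(rng.normal(scale=0.3)) for p in params)

# Tiny genetic algorithm: keep the best half, refill with mutated copies
pop = [(10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-3, 0), 10 ** rng.uniform(-3, 1))
       for _ in range(12)]
for generation in range(10):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:6] + [mutate(p) for p in pop[:6]]

best = max(pop, key=fitness)
print("best (C, epsilon, gamma):", best, "CV R^2:", round(fitness(best), 3))
```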

  11. Architectural capability analysis using a model-checking technique

    Directory of Open Access Journals (Sweden)

    Darío José Delgado-Quintero

    2017-01-01

    This paper describes a mathematical approach, based on a model-checking technique, for analyzing capabilities in enterprise architectures built using the DoDAF and TOGAF architecture frameworks. The basis of the approach is the validation of requirements related to enterprise capabilities using operational or business architectural artifacts associated with the dynamic behavior of processes. It is shown how the approach can be used to verify, quantitatively, whether the operational models in an enterprise architecture can satisfy the enterprise capabilities. To this end, a case study related to a capability integration problem is used.

  12. ISO 9000 and/or Systems Engineering Capability Maturity Model?

    Science.gov (United States)

    Gholston, Sampson E.

    2002-01-01

    For businesses and organizations to remain competitive today, they must have processes and systems in place that will allow them first to identify customer needs and then to develop products/processes that will meet or exceed the customers' needs and expectations. Customer needs, once identified, are normally stated as requirements. Designers can then develop products/processes that will meet these requirements. Several functions, such as quality management and systems engineering management, are used to assist product development teams in the development process. Both functions exist in all organizations, and both have a similar objective, which is to ensure that developed processes will meet customer requirements. Are efforts in these organizations being duplicated? Are both functions needed by organizations? What are the similarities and differences between these functions? ISO 9000 is an international standard for goods and services. It sets broad requirements for the assurance of quality and for management's involvement. It requires organizations to document their processes and to follow these documented processes. ISO 9000 gives customers assurance that suppliers have control of the process for product development. Systems engineering can broadly be defined as a discipline that seeks to ensure that all requirements for a system are satisfied throughout the life of the system by preserving their interrelationship. The key activities of systems engineering include requirements analysis, functional analysis/allocation, design synthesis and verification, and system analysis and control. The systems engineering process, when followed properly, will lead to higher-quality products, lower-cost products, and shorter development cycles. The Systems Engineering Capability Maturity Model (SE-CMM) allows companies to measure their systems engineering capability and continuously improve those capabilities. ISO 9000 and SE-CMM seem to have a similar objective, which

  14. Visualization of Available Power Transfer Capability in a Transmission System Using Morphological Techniques

    Directory of Open Access Journals (Sweden)

    S. U. Prabha

    2009-01-01

    Full Text Available A morphological decimation technique has been proposed and implemented to analyze the available power transfer capability in a transmission power network. The method creates a graphical image of the power network with the thickness of the lines proportional to their respective rated megavolt-ampere (MVA) capacity. Based on an ac load flow solution, another image is created to represent the power flow in megawatts (MW) between the buses. A proper scaling procedure is discussed for the construction of the graphical images. The novelty of this research lies in the application of mathematical morphological techniques for decimating the created images. The image created for the MW flows on the power lines is decimated into categories that are grouped into different colors for better visualization. The multi-color image is superimposed on the input image, which was created for the MVA capacity of the network. The proposed method has been tested on an IEEE test system. The results from the present approach can help the planner and operator in a power station to get a better visualization of the power network. This is the first time this kind of multi-color visualization has been presented, and it can be used to find the optimal path for power transfer from one bus to another.
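
    Setting aside the morphological decimation itself, the color-grouping step the abstract describes amounts to binning each line's MW loading against its MVA rating. A minimal sketch, with invented flows, ratings, and category boundaries:

    ```python
    import numpy as np

    # Hypothetical per-line MW flows from an AC load-flow solution and
    # the corresponding rated MVA capacities.
    mw_flow = np.array([12.0, 48.0, 73.0, 95.0, 31.0, 66.0])
    mva_rating = np.array([50.0, 100.0, 100.0, 100.0, 80.0, 90.0])

    loading = mw_flow / mva_rating                 # fraction of rated capacity
    edges = [0.25, 0.50, 0.75, 0.90]               # category boundaries
    colors = np.array(["green", "yellow", "orange", "red", "dark-red"])
    line_color = colors[np.digitize(loading, edges)]
    print(dict(enumerate(line_color)))             # per-line display color
    ```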

  15. A combined technique using SEM and TOPSIS for the commercialization capability of R&D project evaluation

    Directory of Open Access Journals (Sweden)

    Charttirot Karaveg

    2015-07-01

    Full Text Available There is high risk in commercializing R&D-based innovation, especially in the innovation transfer process, which is a concern to many entrepreneurs and researchers. The purpose of this research is to develop criteria for R&D commercialization capability and to propose a combined technique of Structural Equation Modelling (SEM) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) for R&D project evaluation. The research utilized a mixed-method approach. The first phase comprised a study on commercialization criteria development through survey research of 272 successful entrepreneurs and researchers across all industrial sectors in Thailand. The data was collected with a structured questionnaire and analyzed by SEM. The second phase involved SEM-TOPSIS technique development and a case study of 45 R&D projects in research institutes and incubators for technique validation. The research results reveal six criteria for R&D project commercialization capability, arranged in order of significance: marketing, technology, finance, non-financial impact, intellectual property, and human resources. Presenting the holistic criteria in decreasing order of significance reduces the ambiguous subjectivity of fuzzy-expert systems, helps fund R&D effectively, and prevents wasted resources. This study applies SEM to derive the relative weights of the hierarchical criteria, and the TOPSIS approach is employed to rank the performance of the alternatives. An integrated SEM-TOPSIS is proposed for the first time and, applied to the R&D projects, is shown to be effective and feasible for evaluating R&D commercialization capacity.
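
    TOPSIS itself is a standard algorithm: normalize the decision matrix, weight it (in this paper the weights would come from the SEM results), and rank each project by its closeness to the ideal solution. A minimal sketch, in which the six criteria are taken from the abstract but every number is assumed:

    ```python
    import numpy as np

    def topsis(decision_matrix, weights, benefit_mask):
        """Rank alternatives (rows = R&D projects, columns = criteria)."""
        M = np.asarray(decision_matrix, dtype=float)
        V = M / np.linalg.norm(M, axis=0) * weights     # weighted, vector-normalized
        ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - worst, axis=1)
        closeness = d_neg / (d_pos + d_neg)             # 1 = ideal, 0 = anti-ideal
        return np.argsort(closeness)[::-1], closeness

    # Columns: marketing, technology, finance, non-financial impact, IP, HR.
    # Weights stand in for SEM-derived relative weights (illustrative only).
    scores = np.array([[7, 8, 6, 5, 4, 6],
                       [9, 6, 7, 6, 5, 5],
                       [5, 9, 5, 7, 6, 7]])
    weights = np.array([0.28, 0.22, 0.18, 0.13, 0.10, 0.09])
    rank, cc = topsis(scores, weights, benefit_mask=np.ones(6, dtype=bool))
    ```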

  16. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    Energy Technology Data Exchange (ETDEWEB)

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.

  17. Realtime capable first principle based modelling of tokamak turbulent transport

    Science.gov (United States)

    Citrin, Jonathan; Breton, Sarah; Felici, Federico; Imbeaux, Frederic; Redondo, Juan; Aniel, Thierry; Artaud, Jean-Francois; Baiocchi, Benedetta; Bourdelle, Clarisse; Camenen, Yann; Garcia, Jeronimo

    2015-11-01

    Transport in the tokamak core is dominated by turbulence driven by plasma microinstabilities. When calculating turbulent fluxes, maintaining both a first-principle-based model and computational tractability is a strong constraint. We present a pathway to circumvent this constraint by emulating quasilinear gyrokinetic transport code output through a nonlinear regression using multilayer perceptron neural networks. This recovers the original code output, while accelerating the computing time by five orders of magnitude, allowing realtime applications. A proof-of-principle is presented based on the QuaLiKiz quasilinear transport model, using a training set of five input dimensions, relevant for ITG turbulence. The model is implemented in the RAPTOR real-time capable tokamak simulator, and simulates a 300s ITER discharge in 10s. Progress in generalizing the emulation to include 12 input dimensions is presented. This opens up new possibilities for interpretation of present-day experiments, scenario preparation and open-loop optimization, realtime controller design, realtime discharge supervision, and closed-loop trajectory optimization.
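
    The core trick here, regressing expensive quasilinear transport code output onto a fast neural-network surrogate, can be sketched with standard tools. Everything below (the five inputs, the toy flux formula standing in for QuaLiKiz output, the network size) is an assumption for illustration:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)

    # Stand-in for the gyrokinetic training set: five dimensionless inputs
    # (e.g. normalized gradients, temperature ratio, safety factor, shear)
    # mapped to a turbulent flux. Real data would come from batched
    # QuaLiKiz runs over a grid of these inputs.
    X = rng.uniform(-1.0, 1.0, size=(20_000, 5))
    y = np.maximum(X[:, 0] - 0.3, 0.0) * np.exp(X[:, 1])  # toy critical-gradient shape

    surrogate = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     early_stopping=True, max_iter=500, random_state=0),
    )
    surrogate.fit(X, y)

    # After training, evaluation is a handful of matrix products -- cheap
    # enough to call on every control cycle of a real-time simulator.
    flux = surrogate.predict(X[:8])
    ```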

  18. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that, in essence, come down to the method by which features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of creating the ASM from the DSM.

  19. Aviation System Analysis Capability Air Carrier Investment Model-Cargo

    Science.gov (United States)

    Johnson, Jesse; Santmire, Tara

    1999-01-01

    The purpose of the Aviation System Analysis Capability (ASAC) Air Carrier Investment Model-Cargo (ACIMC) is to examine the economic effects of technology investment on the air cargo market, particularly the market for new cargo aircraft. To do so, we have built an econometrically based model designed to operate like the ACIM. Two main drivers account for virtually all of the demand: the growth rate of the Gross Domestic Product (GDP) and changes in the fare yield (a proxy for the price charged, or fare). Differences from the passenger market arise from a combination of the nature of air cargo demand and the peculiarities of the air cargo market. The net effect of these two factors is that sales of new cargo aircraft are much less sensitive than sales of new passenger aircraft to increases in GDP or to changes in the costs of labor, capital, fuel, materials, and energy associated with the production of new cargo aircraft. This, in conjunction with the relatively small size of the cargo aircraft market, means that technology improvements will do relatively little to spur increased sales of new cargo aircraft.

  20. On the Generalization Capabilities of the Ten-Parameter Jiles-Atherton Model

    Directory of Open Access Journals (Sweden)

    Gabriele Maria Lozito

    2015-01-01

    Full Text Available This work proposes an analysis of the generalization capabilities of the modified version of the classic Jiles-Atherton model for magnetic hysteresis. The modified model takes into account dynamic parameterization, as opposed to the classic model, where the parameters are constant. Two different dynamic parameterizations are taken into account: a dependence on the excitation and a dependence on the response. The identification process is performed using a novel nonlinear optimization technique called Continuous Flock-of-Starling Optimization Cube (CFSO3), an algorithm belonging to the class of swarm intelligence. The algorithm exploits parallel architecture and uses a supervised strategy to alternate between exploration and exploitation capabilities. Comparisons between the obtained results are presented at the end of the paper.
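
    For reference, the classic constant-parameter model that the paper modifies can be integrated with a simple explicit sweep. The five parameter values below are illustrative only; in the paper's ten-parameter dynamic variant they would become functions of the excitation or the response.

    ```python
    import numpy as np

    def jiles_atherton(H, Ms=1.6e6, a=1100.0, alpha=1.6e-3, k=400.0, c=0.2):
        """Explicit-Euler sweep of the classic five-parameter J-A model
        along a quasi-static applied-field path H (A/m)."""
        M = np.zeros_like(H)
        Mirr = 0.0
        for i in range(1, len(H)):
            dH = H[i] - H[i - 1]
            delta = 1.0 if dH >= 0 else -1.0          # field direction
            He = H[i] + alpha * M[i - 1]              # effective field
            x = He / a
            # Langevin anhysteretic curve; series expansion avoids 0/0 at x=0
            Man = Ms * (1 / np.tanh(x) - 1 / x) if abs(x) > 1e-4 else Ms * x / 3
            dMirr = (Man - Mirr) / (k * delta - alpha * (Man - Mirr))
            Mirr += dMirr * dH
            M[i] = c * Man + (1 - c) * Mirr           # reversible + irreversible parts
        return M

    # Two sinusoidal periods trace out the major hysteresis loop.
    H = 5000.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 4000))
    M = jiles_atherton(H)
    ```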

  1. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    Science.gov (United States)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered to "identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The engineering roadmap element contains five sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  2. Lidar Remote Sensing of Forests: New Instruments and Modeling Capabilities

    Science.gov (United States)

    Cook, Bruce D.

    2012-01-01

    Lidar instruments provide scientists with the unique opportunity to characterize the 3D structure of forest ecosystems. This information allows us to estimate properties such as wood volume, biomass density, stocking density, canopy cover, and leaf area. Structural information also can be used as drivers for photosynthesis and ecosystem demography models to predict forest growth and carbon sequestration. All lidars use time-of-flight measurements to compute accurate ranging measurements; however, there is a wide range of instruments and data types currently available, and instrument technology continues to advance at a rapid pace. This seminar will present new technologies that are in use and under development at NASA for airborne and space-based missions. Opportunities for instrument and data fusion will also be discussed, as Dr. Cook is the PI for G-LiHT, Goddard's LiDAR, Hyperspectral, and Thermal airborne imager. Lastly, this talk will introduce radiative transfer models that can simulate interactions between laser light and forest canopies. Developing modeling capabilities is important for providing continuity between observations made with different lidars, and for assisting the design of new instruments. Dr. Bruce Cook is a research scientist in NASA's Biospheric Sciences Laboratory at Goddard Space Flight Center, and has more than 25 years of experience conducting research on ecosystem processes, soil biogeochemistry, and the exchange of carbon, water vapor, and energy between the terrestrial biosphere and atmosphere. His research interests include the combined use of lidar, hyperspectral, and thermal data for characterizing ecosystem form and function. He is Deputy Project Scientist for the Landsat Data Continuity Mission (LDCM); Project Manager for NASA's Carbon Monitoring System (CMS) pilot project for local-scale forest biomass; and PI of Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) airborne imager.

  3. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives for achieving operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods for increasing supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
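
    The Six Sigma Z score the abstract invokes is a standard computation: compare a process metric's distribution against its specification limits and express the out-of-spec probability as a benchmark sigma level. A sketch under assumed numbers, treating order cycle time as the visibility metric with a 24-72 hour specification window:

    ```python
    from statistics import NormalDist

    def sigma_level(samples, lsl, usl):
        """Benchmark Z score of a process metric against spec limits,
        as used in Six Sigma process-capability analysis."""
        n = len(samples)
        mu = sum(samples) / n
        sd = (sum((x - mu) ** 2 for x in samples) / (n - 1)) ** 0.5
        # probability of falling outside either specification limit
        p_defect = (NormalDist().cdf(-(usl - mu) / sd)
                    + NormalDist().cdf(-(mu - lsl) / sd))
        return -NormalDist().inv_cdf(p_defect), p_defect

    # Hypothetical visibility metric: order cycle time (hours).
    z, p = sigma_level([38, 45, 51, 40, 62, 55, 47, 49, 58, 44], lsl=24, usl=72)
    print(f"Z = {z:.2f}, expected out-of-spec fraction = {p:.4f}")
    ```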

  4. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves

    2017-05-01

    Full Text Available Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in the economics and strategy literature involves understanding the features that drive business success and a firm's perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms' innovation and dynamic capabilities based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all the capabilities, with the least influence on innovation. This reinforces the notion of operations capabilities as "ordinary capabilities," whereas management, development, and transaction capabilities better explain firms' dynamics and innovation.

  5. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modelling techniques. This article presents research on the differences among business process modelling techniques. For each technique, the definition and the structure are explained. The paper presents a comparative analysis of several popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented at Somerleyton Animal Park. The treatment of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.

  6. Using Genome-scale Models to Predict Biological Capabilities

    DEFF Research Database (Denmark)

    O’Brien, Edward J.; Monk, Jonathan M.; Palsson, Bernhard O.

    2015-01-01

    Constraint-based reconstruction and analysis (COBRA) methods at the genome scale have been under development since the first whole-genome sequences appeared in the mid-1990s. A few years ago, this approach began to demonstrate the ability to predict a range of cellular functions, including cellular growth capabilities on various substrates and the effect of gene knockouts at the genome scale. Thus, much interest has developed in understanding and applying these methods to areas such as metabolic engineering, antibiotic design, and organismal and enzyme evolution. This Primer will get you started.
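
    The workhorse prediction in COBRA methods is flux balance analysis: maximize a growth (biomass) flux subject to steady-state mass balance and flux bounds. A toy three-reaction network, sketched with a generic LP solver rather than a dedicated COBRA package:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: uptake -> A -> B -> growth. Stoichiometric matrix S
    # (rows = metabolites A, B; columns = reactions uptake, conversion, growth).
    S = np.array([
        [1, -1,  0],   # A: produced by uptake, consumed by conversion
        [0,  1, -1],   # B: produced by conversion, consumed by growth
    ])
    bounds = [(0, 10), (0, None), (0, None)]   # substrate uptake capped at 10

    # FBA: maximize the growth flux v[2] subject to steady state S v = 0
    # (linprog minimizes, so negate the objective coefficient).
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)   # optimal flux distribution; growth flux hits the uptake cap
    ```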

  7. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Providing Emergency Telecommunications

    Energy Technology Data Exchange (ETDEWEB)

    Juan D. Deaton

    2008-05-01

    Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms (HAPs) could be fitted with telecommunications equipment and used to support these critical communications missions once the catastrophic event occurs. With the ability to remain continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes a potential emergency communications architecture and presents a method for estimating emergency communications system traffic patterns for a catastrophic event.

  8. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Emergency Telecommunications

    Directory of Open Access Journals (Sweden)

    Juan D. Deaton

    2008-09-01

    Full Text Available Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms (HAPs) could be fitted with telecommunications equipment and used to support these critical communications missions once the catastrophic event occurs. With the ability to remain continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes a potential emergency communications architecture and presents a method for estimating emergency communications system traffic patterns for a catastrophic event.

  9. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Emergency Telecommunications

    Directory of Open Access Journals (Sweden)

    Deaton, Juan D.

    2008-01-01

    Full Text Available Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms (HAPs) could be fitted with telecommunications equipment and used to support these critical communications missions once the catastrophic event occurs. With the ability to remain continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes a potential emergency communications architecture and presents a method for estimating emergency communications system traffic patterns for a catastrophic event.

  10. Geared rotor dynamic methodologies for advancing prognostic modeling capabilities in rotary-wing transmission systems

    Science.gov (United States)

    Stringer, David Blake

    The overarching objective of this research is the development of a robust, rotor dynamic, physics-based model of a helicopter drive train as a foundation for prognostic modeling of rotary-wing transmissions. Rotorcraft rely on the integrity of their drive trains for their airworthiness. Drive trains rely on gear technology for their integrity and function. Gears alter the vibration characteristics of a mechanical system and significantly contribute to noise, component fatigue, and personal discomfort prevalent in rotorcraft. This research effort develops methodologies for generating a rotor dynamic model of a rotary-wing transmission based on first principles, through (i) development of a three-dimensional gear-mesh stiffness model for helical and spur gears and integration of this model in a finite element rotor dynamic model, (ii) linear and nonlinear analyses of a geared system for comparison and validation of the gear-mesh model, (iii) development of a modal synthesis technique for potentially providing model reduction and faster analysis capabilities for geared systems, and (iv) extension of the gear-mesh model to bevel and epicyclic configurations. In addition to model construction and validation, faults indigenous to geared systems are presented and discussed. Two faults are selected for analysis and seeded into the transmission model. Diagnostic vibration parameters are presented and used as damage indicators in the analysis. The fault models produce results consistent with damage experienced during experimental testing. The results of this research demonstrate the robustness of the physics-based approach in simulating multiple normal and abnormal conditions. The advantages of this physics-based approach, when combined with contemporary probabilistic and time-series techniques, provide a useful method for improving health monitoring technologies in mechanical systems.

  11. NGNP Data Management and Analysis System Modeling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Cynthia D. Gentillon

    2009-09-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  12. Capabilities for modelling of conversion processes in LCA

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    2015-01-01

    Life cycle assessment was traditionally used for modelling of product design and optimization. This is also seen in the conventional LCA software, which is optimized for the modelling of single material streams of a homogeneous nature that are assembled into a final product. There has therefore been … be modelled and then integrated into the overall LCA model. This allows for flexible modules which automatically adjust the material flows they handle on the basis of their chemical information, which can be set for multiple input materials at the same time. A case example of this was carried out for a bio…

  13. Capabilities For Modelling Of Conversion Processes In Life Cycle Assessment

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    Life cycle assessment was traditionally used for modelling of product design and optimization. This is also seen in the conventional LCA software, which is optimized for the modelling of single material streams of a homogeneous nature that are assembled into a final product. There has therefore been little focus on the chemical composition of the functional flows, as flows in the models have mainly been tracked on a mass basis, since the emphasis was the function of the product and not its chemical composition. Conversely, in modelling of environmental technologies, such as wastewater … considering how the biochemical parameters change through a process chain. A good example of this is bio-refinery processes, where different residual biomass products are converted through different steps into the final energy product. Here it is necessary to know the stoichiometry of the different products…

  14. On the predictive capabilities of multiphase Darcy flow models

    KAUST Repository

    Icardi, Matteo

    2016-01-09

    Darcy's law is a widely used model and the limit of its validity is fairly well known. When the flow is sufficiently slow and the porosity relatively homogeneous and low, Darcy's law is the homogenized equation arising from the Stokes and Navier-Stokes equations and depends on a single effective parameter (the absolute permeability). However, when the model is extended to multiphase flows, the assumptions are much more restrictive and less realistic. Therefore it is often used in conjunction with empirical models (such as relative permeability and capillary pressure curves), derived usually from phenomenological speculation and experimental data fitting. In this work, we present the results of a Bayesian calibration of a two-phase flow model, using high-fidelity DNS numerical simulation (at the pore scale) in a realistic porous medium. These reference results have been obtained from a Navier-Stokes solver coupled with an explicit interphase-tracking scheme. The Bayesian inversion is performed on a simplified 1D model in Matlab using an adaptive spectral method. Several data sets are generated and considered to assess the validity of this 1D model.
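
    The empirical closures being calibrated are typically parametric curves. A common example (not necessarily the form used in this work) is a Brooks-Corey-type relative permeability model, whose exponents and residual saturations would be natural targets for the Bayesian inversion; all parameter values below are assumptions:

    ```python
    import numpy as np

    def brooks_corey(sw, swr=0.1, snr=0.05, n_w=3.0, n_n=2.0):
        """Brooks-Corey-type relative permeability curves for the wetting
        and non-wetting phases as functions of wetting saturation sw."""
        se = np.clip((sw - swr) / (1.0 - swr - snr), 0.0, 1.0)  # effective saturation
        return se ** n_w, (1.0 - se) ** n_n

    sw = np.linspace(0.0, 1.0, 101)
    krw, krn = brooks_corey(sw)   # curves to compare against pore-scale DNS
    ```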

  15. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. states. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each state. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every state is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each state to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  16. The Capabilities-Complexity Model. CALA Report 108

    Science.gov (United States)

    Oosterhof, Albert; Rohani, Faranak; Sanfilippo, Carol; Stillwell, Peggy; Hawkins, Karen

    2008-01-01

    In assessment, the ability to construct test items that measure a targeted skill is fundamental to validity and alignment. The ability to do the reverse is also important: determining what skill an existing test item measures. This paper presents a model for classifying test items that builds on procedures developed by others, including Bloom…

  17. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  18. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  19. Ultra-fast dynamic imaging: an overview of current techniques, their capabilities and future prospects

    Science.gov (United States)

    Altucci, C.; Velotta, R.; Marangos, J. P.

    2010-06-01

    In this review we attempt to sketch an overview of the various methods currently being used or under development to enable ultra-fast dynamic imaging of matter. We concentrate on those techniques which combine atomic scale spatial resolution and femtosecond or even sub-femtosecond temporal resolution. In part this review was inspired and informed by the material presented at the 'Ultrafast Dynamic Imaging II' workshop held in Ischia, Italy in April 2009, but we also have drawn on a wider background of material especially when discussing the emerging laser-based methods.

  20. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability, because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include the variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
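
    The underlying idea, propagating input uncertainty through a power model to obtain a distribution rather than a single number, can be illustrated with brute-force Monte Carlo; the NASA techniques are computationally faster approximations of the same thing, and the power chain and distributions below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000

    # Hypothetical uncertain inputs for a simple solar-array power chain.
    solar_flux = rng.normal(1367.0, 5.0, N)    # W/m^2, seasonal/attitude spread
    cell_eff   = rng.normal(0.14, 0.005, N)    # cell efficiency incl. degradation
    line_eff   = rng.uniform(0.95, 0.98, N)    # harness/regulator losses
    array_area = 8.0                           # m^2, treated as fixed

    power = solar_flux * array_area * cell_eff * line_eff

    # A capability statement with uncertainty bands instead of one number.
    p5, p50, p95 = np.percentile(power, [5, 50, 95])
    print(f"power capability: {p50:.0f} W (90% band {p5:.0f}-{p95:.0f} W)")
    ```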

  1. Developing a Framework for Prediction of Human Performance Capability Using Ensemble Techniques

    Directory of Open Access Journals (Sweden)

    Gaurav Singh Thakur

    2015-01-01

    Full Text Available The recruitment of new personnel is one of the most essential business processes and affects the quality of human capital within any company. It is essential for companies to ensure the recruitment of the right talent to maintain a competitive edge over others in the market. However, IT companies often face a problem while recruiting new people for their ongoing projects due to the lack of a proper framework that defines criteria for the selection process. In this paper we aim to develop a framework that would allow any project manager to take the right decision in selecting new talent, by correlating performance parameters with the other domain-specific attributes of the candidates. Another important motivation behind this project is to check the validity of the selection procedure often followed by various big companies in both the public and private sectors, which focuses only on academic scores, GPA/grades, and other aspects of candidates' academic backgrounds. We test whether such a decision produces optimal results in the industry, or whether there is a need for a change that offers a more holistic approach to the recruitment of new talent in software companies. The scope of this work extends beyond the IT domain, and a similar procedure can be adopted to develop a recruitment framework in other fields as well. Data-mining techniques provide useful information from historical projects, on the basis of which the hiring manager can make decisions for recruiting a high-quality workforce. This study aims to bridge this hiatus by developing a data-mining framework based on an ensemble-learning technique to refocus the criteria for personnel selection. The results from this research clearly demonstrate that there is a need to refocus the selection criteria on quality objectives.
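
    An ensemble framework of this kind can be sketched with standard tooling. The candidate features, the synthetic labels (deliberately weighted toward experience and interview rating rather than GPA, echoing the paper's point about academic-score-only selection), and the model mix are all assumptions:

    ```python
    import numpy as np
    from sklearn.ensemble import (GradientBoostingClassifier,
                                  RandomForestClassifier, VotingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    # Stand-in candidate features: GPA, test score, years of experience,
    # interview rating; label = later on-the-job performance band.
    X = rng.uniform(0.0, 1.0, size=(1000, 4))
    signal = 0.2 * X[:, 0] + 0.5 * X[:, 2] + 0.3 * X[:, 3]
    y = (signal + rng.normal(0.0, 0.1, 1000) > 0.5).astype(int)

    # Soft-voting ensemble over three heterogeneous base learners.
    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200)),
            ("gb", GradientBoostingClassifier()),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",
    )
    print(cross_val_score(ensemble, X, y, cv=5).mean())
    ```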

  2. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the pro...

  3. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

    Full Text Available A survey is given of some crucial concepts in chemical process modeling. These are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles, and the fundamental structures of cause-and-effect relationships. As an example, it is shown how the concept of reaction invariance may simplify homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given about choosing the wrong mathematical structure for a model.
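
    The reaction-invariance idea can be made concrete: invariants are linear combinations of species concentrations lying in the null space of the stoichiometric matrix, so they are untouched by reaction and evolve through flow and mixing alone, which is what decouples the residence-time estimates from the kinetics. A small sketch with an invented two-reaction network:

    ```python
    import numpy as np
    from scipy.linalg import null_space

    # Stoichiometric matrix N (rows = reactions, columns = species)
    # for the hypothetical network A + B -> C and C -> D.
    N = np.array([
        [-1, -1,  1,  0],
        [ 0,  0, -1,  1],
    ])

    # Reaction invariants: vectors w with N w = 0, so w . c is unchanged
    # by reaction and evolves by flow/mixing alone.
    W = null_space(N)          # orthonormal basis, shape (4, 2)
    print(W.T)

    # Sanity check: a concentration change due to reaction, dc = N^T r,
    # leaves the invariant coordinates W^T c untouched.
    r = np.array([0.7, 0.3])   # arbitrary reaction rates
    dc = N.T @ r
    assert np.allclose(W.T @ dc, 0.0)
    ```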

  4. An Evaluating Model for Enterprise's Innovation Capability Based on BP Neural Network

    Institute of Scientific and Technical Information of China (English)

    HU Wei-qiang; WANG Li-xin

    2007-01-01

    To meet the challenge of the knowledge-based economy in the 21st century, scientifically evaluating innovation capability is important for strengthening the international competitiveness and acquiring long-term competitive advantage of Chinese enterprises. In this article, based on a description of the concept and structure of an enterprise's innovation capability, the evaluation index system for innovation capability is established using the Analytic Hierarchy Process (AHP). Subsequently, an evaluation model based on a Back Propagation (BP) neural network is put forward, which provides theoretical guidance for scientifically evaluating the innovation capability of Chinese enterprises.
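
    The AHP step in constructing such an index system is a standard eigenvector computation over a pairwise comparison matrix. A small sketch with a hypothetical three-criterion comparison:

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """Principal-eigenvector weights from an AHP pairwise comparison
        matrix, plus Saaty's consistency ratio."""
        A = np.asarray(pairwise, dtype=float)
        n = A.shape[0]
        vals, vecs = np.linalg.eig(A)
        i = np.argmax(vals.real)
        w = np.abs(vecs[:, i].real)
        w /= w.sum()                                   # normalized weights
        ci = (vals[i].real - n) / (n - 1)              # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # Saaty's random index
        return w, ci / ri

    # Hypothetical criteria: R&D input, innovation process, innovation output.
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    w, cr = ahp_weights(A)   # CR < 0.1 indicates acceptable consistency
    ```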

  5. Technique development for modulus, microcracking, hermeticity, and coating evaluation capability characterization of SiC/SiC tubes

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Xunxiang [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Ang, Caen K. [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Singh, Gyanender P. [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Katoh, Yutai [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    Driven by the need to enlarge the safety margins of nuclear fission reactors in accident scenarios, research and development of accident-tolerant fuel has become an important topic in the nuclear engineering and materials community. A continuous-fiber SiC/SiC composite is under consideration as a replacement for traditional zirconium alloy cladding owing to its high-temperature stability, chemical inertness, and exceptional irradiation resistance. An important task is the development of characterization techniques for SiC/SiC cladding, since traditional work using rectangular bars or disks cannot directly provide useful information on the properties of SiC/SiC composite tubes for fuel cladding applications. At Oak Ridge National Laboratory, experimental capabilities are under development to characterize the modulus, microcracking, and hermeticity of as-fabricated, as-irradiated SiC/SiC composite tubes. Resonant ultrasound spectroscopy has been validated as a promising technique to evaluate the elastic properties of SiC/SiC composite tubes and microcracking within the material. A similar technique, impulse excitation, is efficient in determining the basic mechanical properties of SiC bars prepared by chemical vapor deposition; it also has potential for application in studying the mechanical properties of SiC/SiC composite tubes. Complete evaluation of the quality of the developed coatings, a major mitigation strategy against gas permeation and hydrothermal corrosion, requires the deployment of various experimental techniques, such as scratch indentation, tensile pulling-off tests, and scanning electron microscopy. In addition, a comprehensive permeation test station is being established to assess the hermeticity of SiC/SiC composite tubes and to determine the H/D/He permeability of SiC/SiC composites. This report summarizes the current status of the development of these experimental capabilities.

  6. Estimating Heat and Mass Transfer Processes in Green Roof Systems: Current Modeling Capabilities and Limitations (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Tabares Velasco, P. C.

    2011-04-01

    This presentation discusses estimating heat and mass transfer processes in green roof systems: current modeling capabilities and limitations. Green roofs are 'specialized roofing systems that support vegetation growth on rooftops.'

  7. Contractor Development Models for Promoting Sustainable Building – a case for developing management capabilities of contractors

    CSIR Research Space (South Africa)

    Dlungwana, Wilkin S

    2004-11-01

    Full Text Available practices and thereby grow and prosper. Furthermore, the authors argue that improvement, adequate resourcing and the implementation of models and programmes that embrace effective contractor development can greatly enhance the management capabilities...

  8. A Stochastic Model for the Landing Dispersion of Hazard Detection and Avoidance Capable Flight Systems

    Science.gov (United States)

    Witte, L.

    2014-06-01

    To support landing site assessments for HDA-capable flight systems, and to facilitate trade studies between the potential HDA architectures and the probability of safe landing they yield, a stochastic landing dispersion model has been developed.

  9. Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards

    Energy Technology Data Exchange (ETDEWEB)

    Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J

    2007-11-26

    This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end users' computers, Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL, and state-of-the-science high-resolution urban models and event reconstruction capabilities.
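
    For orientation, the "simple stand-alone, local-scale plume modeling tools" category traditionally reduces to the steady-state Gaussian plume equation. The sketch below is generic textbook material, not NARAC/IMAAC code, and the dispersion-coefficient power laws and all numbers are assumptions:

    ```python
    import numpy as np

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Steady-state Gaussian plume concentration (g/m^3) with ground
        reflection. Q: source rate (g/s); u: wind speed (m/s); H: release
        height (m); y, z: crosswind and vertical receptor coordinates (m)."""
        lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
        vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                    + np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))
        return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Crude power-law dispersion coefficients for a single stability class
    # (illustrative exponents, not a validated scheme).
    x = 1000.0                                   # m downwind of the source
    sigma_y, sigma_z = 0.08 * x ** 0.9, 0.06 * x ** 0.85
    c = gaussian_plume(Q=10.0, u=4.0, y=0.0, z=1.5, H=50.0,
                       sigma_y=sigma_y, sigma_z=sigma_z)
    print(f"{c:.2e} g/m^3")
    ```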

  10. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modelled in a formal language. The timed automaton framework is such a formal language. It is suitable for modelling many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, and embedded systems.

  11. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  12. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  13. Non-Destructive Evaluation for Corrosion Monitoring in Concrete: A Review and Capability of Acoustic Emission Technique

    Directory of Open Access Journals (Sweden)

    Ahmad Zaki

    2015-08-01

    Full Text Available Corrosion of reinforced concrete (RC structures has been one of the major causes of structural failure. Early detection of the corrosion process could help limit the location and the extent of necessary repairs or replacement, as well as reduce the cost associated with rehabilitation work. Non-destructive testing (NDT methods have been found to be useful for in-situ evaluation of steel corrosion in RC, where the effect of steel corrosion and the integrity of the concrete structure can be assessed effectively. A complementary study of NDT methods for the investigation of corrosion is presented here. In this paper, acoustic emission (AE effectively detects the corrosion of concrete structures at an early stage. The capability of the AE technique to detect corrosion occurring in real-time makes it a strong candidate for serving as an efficient NDT method, giving it an advantage over other NDT methods.

  14. Non-Destructive Evaluation for Corrosion Monitoring in Concrete: A Review and Capability of Acoustic Emission Technique

    Science.gov (United States)

    Zaki, Ahmad; Chai, Hwa Kian; Aggelis, Dimitrios G.; Alver, Ninel

    2015-01-01

    Corrosion of reinforced concrete (RC) structures has been one of the major causes of structural failure. Early detection of the corrosion process could help limit the location and the extent of necessary repairs or replacement, as well as reduce the cost associated with rehabilitation work. Non-destructive testing (NDT) methods have been found to be useful for in-situ evaluation of steel corrosion in RC, where the effect of steel corrosion and the integrity of the concrete structure can be assessed effectively. A complementary study of NDT methods for the investigation of corrosion is presented here. In this paper, acoustic emission (AE) is shown to detect the corrosion of concrete structures effectively at an early stage. The capability of the AE technique to detect corrosion occurring in real time makes it a strong candidate for serving as an efficient NDT method, giving it an advantage over other NDT methods. PMID:26251904

  15. Characterization of Bond Strength of U-Mo Fuel Plates Using the Laser Shockwave Technique: Capabilities and Preliminary Results

    Energy Technology Data Exchange (ETDEWEB)

    J. A. Smith; D. L. Cottle; B. H. Rabin

    2013-09-01

    This report summarizes work conducted to date on the implementation of new laser-based capabilities for characterization of bond strength in nuclear fuel plates, and presents preliminary results obtained from fresh fuel studies on as-fabricated monolithic fuel consisting of uranium-10 wt.% molybdenum alloys clad in 6061 aluminum by hot isostatic pressing. Characterization involves application of two complementary experimental methods, laser-shock testing and laser-ultrasonic imaging, collectively referred to as the Laser Shockwave Technique (LST), that allow the integrity, physical properties, and interfacial bond strength of fuel plates to be evaluated. Example characterization results are provided, including measurement of layer thicknesses, elastic properties of the constituents, and the location and nature of generated debonds (including kissing bonds). LST provides spatially localized, non-contacting measurements with minimal specimen preparation, and is ideally suited for applications involving radioactive materials, including irradiated materials. The theoretical principles and experimental approaches employed in characterizing nuclear fuel plates are described, and preliminary bond strength measurement results are discussed, with emphasis on demonstrating the capabilities and limitations of these methods. These preliminary results demonstrate the ability to distinguish bond strength variations between different fuel plates. Although additional development work is necessary to validate and qualify the test methods, these results suggest LST is viable as a method to meet fuel qualification requirements to demonstrate acceptable bonding integrity.

  16. Review of Hydrologic Models for Evaluating Use of Remote Sensing Capabilities

    Science.gov (United States)

    Peck, E. L.; Mcquivey, R. S.; Keefer, T.; Johnson, E. R.; Erekson, J. L.

    1982-01-01

    Hydrologic models most commonly used by federal agencies for hydrologic forecasting are reviewed. Six catchment models and one snow accumulation and ablation model are reviewed. Information on the structure, parameters, states, and required inputs is presented in schematic diagrams and in tables. The primary and secondary roles of parameters and state variables with respect to their function in the models are identified. The information will be used to evaluate the usefulness of remote sensing capabilities in the operational use of hydrologic models.

  17. Improvements on the ice cloud modeling capabilities of the Community Radiative Transfer Model

    Science.gov (United States)

    Yi, Bingqi; Yang, Ping; Liu, Quanhua; Delst, Paul; Boukabara, Sid-Ahmed; Weng, Fuzhong

    2016-11-01

    Noticeable improvements on the ice cloud modeling capabilities of the Community Radiative Transfer Model (CRTM) are reported, which are based on the most recent advances in understanding ice cloud microphysical (particularly, ice particle habit/shape characteristics) and optical properties. The new CRTM ice cloud model is derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 ice cloud habit model, which represents ice particles as severely roughened hexagonal ice column aggregates with a gamma size distribution. The single-scattering properties of the new ice particle model are derived from a state-of-the-art ice optical property library and are constructed as look-up tables for rapid CRTM computations. Various sensitivity studies concerning instrument-specific applications and simulations are performed to validate CRTM against satellite observations. In particular, radiances in a spectral region covering the infrared wavelengths are simulated. Comparisons of brightness temperatures between CRTM simulations and observations (from MODIS, the Atmospheric Infrared Sounder, and the Advanced Microwave Sounding Unit) show that the new ice cloud optical property look-up table substantially enhances the performance of the CRTM under ice cloud conditions.
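
    The look-up-table mechanics the abstract relies on can be sketched generically: pre-tabulate single-scattering properties over the table axes offline, then interpolate at run time inside the radiative transfer solver. The grid, the stand-in property values, and the axis choices below are invented:

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Invented table axes: ice effective diameter (um) and wavelength (um).
    d_eff = np.linspace(10.0, 180.0, 18)
    wavelength = np.linspace(3.0, 15.0, 25)

    # Stand-in single-scattering albedo values on the grid; real tables
    # come from offline scattering computations for the aggregate habit.
    D, W = np.meshgrid(d_eff, wavelength, indexing="ij")
    ssa_table = 0.5 + 0.4 * np.exp(-W / 10.0) * (1.0 - np.exp(-D / 60.0))

    ssa = RegularGridInterpolator((d_eff, wavelength), ssa_table)
    print(ssa([[60.0, 11.0]]))   # fast run-time lookup inside the RT solver
    ```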

  18. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA's national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material's structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument can be made for the need for Exascale computing architectures, if a legitimate predictive capability is to be developed.

  19. A Study on Evaluation Model of IT Industry's Innovation Capability Based Variable Weight Theory

    Institute of Scientific and Technical Information of China (English)

    LI Zi-biao; WANG lei; HU Bao-min

    2006-01-01

    In this paper, the IT industry's innovation capability is taken to be the innovation output capability resulting from the complex transformation of industry inputs within the industry system. In this complex process, R&D personnel input and R&D expense input are not substitutable for one another, and for evaluating innovation capability, innovation input and innovation output are likewise not substitutable. On this basis, an evaluation model using a sustaining strength index is put forward. Considering both the input scale and the output contribution of the IT industry's innovation system, the model reflects the non-substitutability of each evaluation aspect. The measurement result reflects not only the industry's innovation capability but also its degree of support to the economy. Finally, data on China's IT industry from 1994 to 2004 are provided for an empirical study.

  20. Exploring a capability-demand interaction model for inclusive design evaluation

    OpenAIRE

    Persad, Umesh

    2012-01-01

    Designers are required to evaluate their designs against the needs and capabilities of their target user groups in order to achieve successful, inclusive products. This dissertation presents exploratory research into the specific problem of supporting analytical design evaluation for Inclusive Design. The analytical evaluation process involves evaluating products with user data rather than testing with actual users. The work focuses on the exploration of a capability-demand model of product i...

  1. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  2. Joint intelligence operations centers (JIOC) business process model & capabilities evaluation methodology

    OpenAIRE

    Schacher, Gordon; Irvine, Nelson; Hoyt, Roger

    2012-01-01

    A JIOC Business Process Model has been developed for use in evaluating JIOC capabilities. The model is described and depicted through OV5 and organization swim-lane diagrams. Individual intelligence activities diagrams are included. A JIOC evaluation methodology is described.

  3. Dynamic capabilities, Marketing Capability and Organizational Performance

    Directory of Open Access Journals (Sweden)

    Adriana Roseli Wünsch Takahashi

    2017-01-01

    The goal of this study is to investigate the influence of dynamic capabilities on organizational performance and the mediating role of marketing capabilities in this relationship, in the context of private higher education institutions (HEIs) in Brazil. As the research method, we carried out a survey of 316 HEIs, and data analysis was operationalized with the technique of structural equation modeling. The results indicate that dynamic capabilities influence organizational performance only when mediated by marketing capability. Marketing capability plays an important role in the survival, growth, and renewal of educational service offerings for private-sector HEIs, and consequently in organizational performance. It is also demonstrated that the mediated relationship is more intense for HEIs with up to 3,000 students, and that other organizational profile variables, such as the number of courses, the constitution, the type of institution, and the type of education, do not significantly alter the results.
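
    The study uses structural equation modeling; as a simplified, hedged illustration of the underlying mediation logic only (not the authors' SEM pipeline), the sketch below estimates the indirect effect a*b of a hypothetical dynamic-capabilities measure on performance through a marketing-capability mediator, using ordinary least squares. All variable names and coefficients are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 316                                   # sample size matching the study
        dyn_cap = rng.normal(size=n)              # dynamic capabilities (exogenous)
        mkt_cap = 0.7 * dyn_cap + rng.normal(scale=0.5, size=n)        # mediator
        perf = 0.6 * mkt_cap + 0.05 * dyn_cap + rng.normal(scale=0.5, size=n)

        def slopes(y, *xs):
            """Least-squares slope coefficients of y on xs (intercept included)."""
            X = np.column_stack((np.ones(len(y)),) + xs)
            return np.linalg.lstsq(X, y, rcond=None)[0][1:]

        a = slopes(mkt_cap, dyn_cap)[0]           # path a: capabilities -> marketing
        b, c_prime = slopes(perf, mkt_cap, dyn_cap)   # path b and direct path c'
        print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_prime:.3f}")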

  4. Real-time capable first principle based modelling of tokamak turbulent transport

    CERN Document Server

    Breton, S; Felici, F; Imbeaux, F; Aniel, T; Artaud, J F; Baiocchi, B; Bourdelle, C; Camenen, Y; Garcia, J

    2015-01-01

    A real-time capable core turbulence tokamak transport model is developed. This model is constructed from regularized nonlinear regression of quasilinear gyrokinetic transport code output. The regression is performed with a multilayer perceptron neural network. The transport code input for the neural network training set consists of five dimensions and is limited to adiabatic electrons. The neural network model successfully reproduces the transport fluxes predicted by the original quasilinear model, while gaining five orders of magnitude in computation time. The model is implemented in a real-time capable tokamak simulator and simulates a 300 s ITER discharge in 10 s. This proof of principle for regression-based transport models anticipates a significant widening of input-space dimensionality and physics realism for future training sets, aiming to provide unprecedented computational speed coupled with first-principles-based physics for real-time control and integrated modelling applications.
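
    A hedged sketch of the general idea: training a multilayer perceptron to emulate an expensive code over a 5-dimensional input space. The toy flux function, network size, and sample count below are assumptions, not the paper's actual gyrokinetic setup.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.uniform(0.0, 1.0, size=(5000, 5))   # 5-D input space, as in the paper

        # Stand-in for the expensive quasilinear transport code (assumed form):
        # a smooth nonlinear flux with a threshold, evaluated offline once.
        y = np.maximum(0.0, X[:, 0] - 0.3) ** 1.5 * np.exp(X[:, 1]) + 0.1 * X[:, 2]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        net.fit(X_tr, y_tr)
        print("R^2 on held-out samples:", net.score(X_te, y_te))
        # Evaluating net.predict is orders of magnitude faster than re-running
        # the transport code, which is what enables real-time simulation.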

  5. Examination of the capability of the laser-induced breakdown spectroscopy (LIBS) technique as the emerging laser-based analytical tool for analyzing trace elements in coal

    Science.gov (United States)

    Idris, N.; Ramli, M.; Mahidin; Hedwig, R.; Lie, Z. S.; Kurniawan, K. H.

    2014-09-01

    Due to its advantages over conventional analytical tools, the laser-induced breakdown spectroscopy (LIBS) technique is becoming an emerging analytical tool and is expected to be a future star of analytical chemistry. The technique is based on the use of optical emission from a laser-induced plasma for spectrochemical analysis of the constituents and content of the sampled object. The capability of the technique is examined through the analysis of trace elements in a coal sample. Coal is a difficult sample to analyze due to its complex chemical composition and physical properties. Coal inherently contains trace elements, including heavy metals; thus its mining, beneficiation, and utilization pose hazards to the environment and to human beings. The LIBS apparatus used was composed of a laser system (Nd-YAG: Quanta Ray; LAB SERIES; 1,064 nm; 500 mJ; 8 ns) and an optical detector (McPherson model 2061; 1,000 mm focal length; f/8.6 Czerny-Turner) equipped with an Andor I*Star intensified CCD of 1024×256 pixels. The emitted laser was focused onto the coal sample with a focusing lens of +250 mm. The plasma emission was collected by a fiber optic and sent to the spectrograph. The coal samples were taken from the Province of Aceh. As a result, several trace elements, including heavy metals (As, Mn, Pb), could clearly be observed, demonstrating the potential of the LIBS technique for analyzing trace elements in coal.

  6. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper also discusses the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets from actual buildings.
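
    A minimal sketch of the three figures of merit, assuming arrays of surrogate data are already available from the simulation program. The metric definitions below (relative savings error, relative parameter error, CV(RMSE)-style bill fit) are illustrative interpretations, not the paper's exact formulas.

        import numpy as np

        def figures_of_merit(true_savings, pred_savings,
                             true_params, calib_params,
                             utility_bills, model_bills):
            """Three figures of merit for testing a calibration technique
            against simulator-generated surrogate data."""
            savings_err = abs(pred_savings - true_savings) / abs(true_savings)
            param_err = (np.linalg.norm(calib_params - true_params)
                         / np.linalg.norm(true_params))
            # CV(RMSE)-style goodness of fit to the monthly utility bills
            cvrmse = (np.sqrt(np.mean((model_bills - utility_bills) ** 2))
                      / np.mean(utility_bills))
            return savings_err, param_err, cvrmse

        # Hypothetical values: 12 monthly bills plus a retrofit savings estimate.
        bills_true = np.array([900, 850, 700, 500, 400, 350,
                               380, 420, 500, 650, 800, 880.0])
        bills_model = bills_true * (1 + np.random.default_rng(2).normal(0, 0.05, 12))
        print(figures_of_merit(1200.0, 1100.0,
                               np.array([0.5, 2.0]), np.array([0.55, 1.8]),
                               bills_true, bills_model))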

  7. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  8. Modelling force deployment from army intelligence using the transportation system capability (TRANSCAP) model : a standardized approach.

    Energy Technology Data Exchange (ETDEWEB)

    Burke, J. F., Jr.; Love, R. J.; Macal, C. M.; Decision and Information Sciences

    2004-07-01

    Argonne National Laboratory (Argonne) developed the transportation system capability (TRANSCAP) model to simulate the deployment of forces from Army bases, in collaboration with and under the sponsorship of the Military Transportation Management Command Transportation Engineering Agency (MTMCTEA). TRANSCAP's design separates its pre- and post-processing modules (developed in Java) from its simulation module (developed in MODSIM III). This paper describes TRANSCAP's modelling approach, emphasizing Argonne's highly detailed, object-oriented, multilanguage software design principles. Fundamental to these design principles is TRANSCAP's implementation of an improved method for standardizing the transmission of simulated data to output analysis tools and the implementation of three Army deployment/redeployment community standards, all of which are in the final phases of community acceptance. The first is the extensive hierarchy and object representation for transport simulations (EXHORT), which is a reusable, object-oriented deployment simulation source code framework of classes. The second and third are algorithms for rail deployment operations at a military base.

  9. Co-firing biomass and coal-progress in CFD modelling capabilities

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Yin, Chungen

    2005-01-01

    This paper discusses the development of user-defined FLUENT™ sub-models to improve the modelling capabilities in the area of large biomass particle motion and conversion. Focus is put on a model that includes the influence of particle size and shape on the reactivity by resolving intra-particle gradients. The advanced reaction model predicts moisture and volatiles release characteristics that differ significantly from those found from a 0-dimensional model, partly due to the processes occurring in parallel rather than sequentially. This is demonstrated for a test case that illustrates single particle conversion patterns. The improved model will impact the simulation capabilities of biomass-fired boilers in the areas of thermal conditions, NOx formation and particle deposition behaviour.

  10. Evaluation Model for Capability of Enterprise Agent Coalition Based on Information Fusion and Attribute Reduction

    Institute of Scientific and Technical Information of China (English)

    Dongjun Liu; Li Li; Jiayang Wang

    2016-01-01

    To address the evaluation of the capability of enterprise agent coalitions, an evaluation model based on information fusion and the entropy weighting method is presented. An attribute reduction method is utilized to reduce the capability indicators according to rough set theory, so that a new indicator system can be determined. Attribute reduction also reduces the workload and removes redundant information when there are too many indicators or the indicators are strongly correlated; the research complexity is thereby reduced and efficiency improved. The entropy weighting method is used to determine the weights of the remaining indicators, and the importance of the indicators is analyzed. An information fusion model based on the nearest neighbor method is developed and used to evaluate the capability of multiple agent coalitions, and is compared with a cloud evaluation model and the D-S evidence method. Simulation results are reasonable and clearly discriminate among coalitions, verifying the effectiveness and feasibility of the model. The information fusion model can provide more scientific and rational decision support for choosing the best agent coalition, and offers innovative steps for evaluating the capability of agent coalitions.
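
    A minimal sketch of the entropy weighting step, assuming an already-reduced indicator matrix; the matrix values and indicator semantics are illustrative assumptions.

        import numpy as np

        def entropy_weights(M):
            """Entropy weighting over a decision matrix M (rows = coalitions,
            columns = capability indicators, larger values assumed better)."""
            P = M / M.sum(axis=0)                 # column-normalized proportions
            k = 1.0 / np.log(M.shape[0])
            # Convention: 0 * log(0) = 0 for zero proportions.
            logP = np.log(np.where(P > 0, P, 1.0))
            E = -k * (P * logP).sum(axis=0)       # entropy per indicator
            d = 1.0 - E                           # degree of diversification
            return d / d.sum()

        # Hypothetical indicator matrix for three agent coalitions and four
        # (already reduced) capability indicators.
        M = np.array([[0.8, 0.6, 0.9, 0.5],
                      [0.7, 0.9, 0.4, 0.6],
                      [0.9, 0.5, 0.7, 0.8]])
        w = entropy_weights(M)
        print(w, M @ w)   # weights and weighted capability scores per coalition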

  11. Capability-based Access Control Delegation Model on the Federated IoT Network

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Mahalle, Parikshit N.; Prasad, Neeli R.

    2012-01-01

    Flexibility is an important property for a general access control system, and especially in the Internet of Things (IoT); it can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for systems without resource constraints, such as web-based systems, and are not very suitable for a highly pervasive system such as the IoT. To this end, this paper presents an access delegation method with security considerations based on the Capability-based Context Aware Access Control (CCAAC) model, intended for federated machine-to-machine communication or IoT networks. The main idea of our proposed model is that access delegation is realized by means of a capability propagation mechanism, incorporating context information as well as secure capability propagation under federated IoT environments. By using...

  12. Capability Model for Case-Based Reasoning in Collaborative Commerce Environment

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Collaborative commerce (c-commerce) has become an innovative business paradigm that helps companies achieve high operational performance through inter-organizational collaboration. This paper presents an effective case-based reasoning (CBR) capability model for solution selection in c-commerce applications, as CBR is widely used in knowledge management and electronic commerce. Based on the case-based competence model suggested by Smyth and McKenna, a directed graph was used to represent the collaborative reasoning history of CBR systems, from which information on reasoning process ability was extracted. An experiment was carried out on a travel dataset. By integrating case-based competence and reasoning process ability, the capability measure better reflects the real ability of CBR systems. The results show that the proposed method can effectively evaluate the capability of CBR systems and enhance the performance of collaborative case-based reasoning in a c-commerce environment.

  13. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    Science.gov (United States)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate, and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities covering a wide range of issues and metrics related to safety, capacity, and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts, and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, the modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  14. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies through their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly concerning floodplain elevation differences and vertical roughness in grid cells, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open-channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings on the causes and effects of flooding.

  15. Capability of particle inspection on patterned EUV mask using model EBEYE M

    Science.gov (United States)

    Naka, Masato; Yoshikawa, Ryoji; Yamaguchi, Shinji; Hirano, Takashi; Itoh, Masamitsu; Terao, Kenji; Hatakeyama, Masahiro; Watanabe, Kenji; Sobukawa, Hiroshi; Murakami, Takeshi; Tsukamoto, Kiwamu; Hayashi, Takehide; Tajima, Ryo; Kimura, Norio; Hayashi, Naoya

    2014-09-01

    According to the roadmap shown in the ITRS [1], the EUV mask requirement for defect inspection is to detect defect sizes of sub-20 nm in the near future. EB (Electron Beam) inspection with high resolution is one of the promising candidates to meet such severe defect inspection requirements. However, conventional EB inspection using the SEM method has the problem of low throughput. Therefore, we have developed an EB inspection tool, named Model EBEYE M. The tool uses the PEM (Projection Electron Microscope) technique and an image acquisition technique with a TDI (Time Delay Integration) sensor while moving the stage continuously to achieve high throughput [2]. In our previous study, we showed the performance of the tool applied to the half pitch (hp) 2X nm node in a production phase for particle inspection on an EUV blank. In that study, a sensitivity of 20 nm with a capture rate of 100% and a throughput of 1 hour per 100 mm square were achieved, which was higher than the conventional optical inspection tool for EUV mask inspection [3]-[5]. Such particle inspection is called for not only on the EUV blank but also on the patterned EUV mask. It is required after defect repair and final cleaning for EUV mask fabrication. Moreover, it is useful as a particle monitoring tool between a certain number of exposures during wafer fabrication, because an EUV pellicle is not yet ready. However, since the patterned EUV mask consists of a 3D structure, inspection is more difficult than on the EUV blank. In this paper, we evaluated particle inspection on the patterned EUV mask using the tool that had been applied to the EUV blank. Moreover, the capability of particle inspection on the patterned EUV mask for the hp 2X nm node, whose target sensitivity is 25 nm, was confirmed. As a result, the inspection and SEM review results of the patterned EUV masks revealed that the sensitivity for the hp 100 nm Line/Space (LS) was 25 nm and that for the hp 140-160 nm Contact Hole

  16. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  17. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which was reached based on the IETF reference model and the authors' proposal. The proposed information flows for the benchmark services are then presented, with new or updated flows identified. Finally, a brief description of implementation techniques is given.

  18. Supply Chain Modeling: Downstream Risk Assessment Methodology (DRAM) Demonstration of Capability

    Science.gov (United States)

    2015-04-01

    This work was done for Defense Logistics Agency Strategic Materials (DLA SM) to provide the capability to analyze supply chains of strategic and critical materials. The remainder of the indexed record consists of reference fragments, including Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (2013) and Kouvelis, Panos, Lingxiu Dong, Onur Boyabatli, and Rong Li, Handbook of Integrated Risk Management in Global Supply Chains (Hoboken, NJ).

  19. University-Industry Research Collaboration: A Model to Assess University Capability

    Science.gov (United States)

    Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia

    2011-01-01

    Scholars and policy makers recognize that collaboration between industry and public research institutions is a necessity for innovation and national economic development. This work presents an econometric model that expresses a university's capability for collaboration with industry as a function of size, location, and research quality. The…

  20. Using a Capability Maturity Model to Build on the Generational Approach to Student Engagement Practices

    Science.gov (United States)

    Nelson, K.; Clarke, J.; Stoodley, I.; Creagh, T.

    2015-01-01

    The generational approach to conceptualising first-year student learning behaviour has made a useful contribution to understanding student engagement. It has an explicit focus on student behaviour and we suggest that a Capability Maturity Model interpretation may provide a complementary extension of that understanding as it builds on the…

  1. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Few studies have investigated the influence of "information capital," through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationships among the performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT's role in, and benefits for, corporate performance.

  2. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao

    2016-01-01

    How might externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? This study proposes and tests a mediation model of how firms' internal technological diversification and R&D, as two distinctive microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms' technological innovation performance, based on the resource-based view and the dynamic capability view. Using a sample of listed Chinese licensee firms, we find that firms must broadly explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently to transform and reconfigure firms' technological resources.

  3. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    Science.gov (United States)

    Day, B.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R.; Malhotra, S.; Sadaqathullah, S.; Schmidt, G.; Bailey, B.

    2015-01-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. This presentation will provide an overview of LMMP, Vesta Trek, and Mars Trek, demonstrate their uses and capabilities, highlight new features, and preview coming enhancements.

  4. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao

    2016-01-01

    How might externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? This study proposes and tests a mediation model of how firms' internal technological diversification and R&D, as two distinctive microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms' technological innovation performance, based on the resource-based view and the dynamic capability view. Using a sample of listed Chinese licensee firms, we find that firms must broadly explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently to transform and reconfigure firms' technological resources.

  5. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  6. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems across a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation, all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
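
    The AARJ sampler itself is beyond a short sketch, but the model-averaging end of the idea can be illustrated as below, with posterior model probabilities approximated from BIC under equal model priors (an assumption; the paper samples these probabilities directly with reversible jump MCMC). All numbers are synthetic.

        import numpy as np

        def posterior_model_probs(log_likelihoods, n_params, n_data):
            """Approximate posterior model probabilities from BIC, assuming
            equal prior probability for each candidate aerosol model."""
            bic = (-2 * np.asarray(log_likelihoods)
                   + np.asarray(n_params) * np.log(n_data))
            w = np.exp(-0.5 * (bic - bic.min()))
            return w / w.sum()

        # Hypothetical fits of four aerosol cross-section models to n=100 points.
        probs = posterior_model_probs(
            log_likelihoods=[-120.3, -118.9, -118.5, -118.4],
            n_params=[1, 2, 3, 4], n_data=100)
        preds = np.array([1.02, 0.98, 0.97, 0.95])   # each model's retrieved value
        print(probs, float(probs @ preds))           # model-averaged estimate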

  7. INTELLIGENT CAR STYLING TECHNIQUE AND SYSTEM BASED ON A NEW AERODYNAMIC-THEORETICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A car styling technique based on a new theoretical model of automotive aerodynamics is introduced, which is proved feasible and effective by wind tunnel tests. The development of a multi-module software system based on this technique, including modules for knowledge processing, referential styling, and ANN-based aesthetic evaluation, capable of assisting car styling work in an intelligent way, is also presented and discussed.

  8. Konsep Tingkat Kematangan Penerapan Internet Protokol versi 6 (Capability Maturity Model for IPv6 Implementation)

    Directory of Open Access Journals (Sweden)

    Riza Azmi

    2015-03-01

    The Internet Protocol (IP) is the world standard for internet addressing, and the number of addresses is limited. Globally, IP allocation is governed by the Internet Assigned Numbers Authority (IANA) and delegated through the regional authorities of each continent. IP comes in two versions, IPv4 and IPv6; the IPv4 allocation was declared exhausted at the IANA level in April 2011. Consequently, IP usage is being directed toward IPv6. To see how mature an organization is with respect to IPv6 implementation, this study attempts to build a maturity model for IPv6 adoption. The basic concept of this model is taken from the Capability Maturity Model Integration (CMMI), with several additions: the IPv6 migration roadmap in Indonesia, the Requests for Comments (RFCs) related to IPv6, and several best practices for IPv6 implementation. With these concepts, this study produces a Capability Maturity Model for IPv6 Implementation.

  9. Earth Observation and Geospatial techniques for Soil Salinity and Land Capability Assessment over Sundarban Bay of Bengal Coast, India

    Science.gov (United States)

    Das, Sumanta; Choudhury, Malini Roy; Das, Subhasish; Nagarajan, M.

    2016-12-01

    To guarantee food security and job creation for farmers ranging from small-scale to commercial, unproductive farms in South 24 PGS, West Bengal, need a land reform program through which they are restructured and evaluated for agricultural productivity. This study established a potential role of remote sensing and GIS for the identification and mapping of salinity zones and the spatial planning of agricultural land over the Basanti and Gosaba Islands (808.314 sq. km) of the South 24 PGS district of West Bengal. The primary data, i.e., soil pH, Electrical Conductivity (EC), and Sodium Absorption Ratio (SAR), were obtained from soil samples at various GCP (Ground Control Point) locations, collected at 50 m intervals by handheld GPS from 0-100 cm depths. The secondary information was acquired from remotely sensed satellite data (LANDSAT ETM+) on different time scales and a digital elevation model. The collected field samples were tested in the laboratory and validated with remote-sensing-based digital indices analysis over the temporal satellite data to assess potential changes due to over-salinization. Soil physical properties such as texture, structure, depth, and drainage condition are stored as attributes in a geographical soil database and linked with the soil map units. The thematic maps are integrated with the climatic and terrain conditions of the area to produce land capability maps for paddy. Finally, a weighted overlay analysis was performed to assign weights according to the importance of the parameters taken into account for saline area identification and mapping, segregating higher, moderate, and lower salinity zones over the study area.
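
    A minimal sketch of a weighted overlay of the kind described, assuming each raster is first rescaled to a common suitability scale; the layer weights, grid size, and synthetic fields below are illustrative assumptions, not the study's values.

        import numpy as np

        def rescale(a):
            """Rescale a raster to the 0-1 range."""
            return (a - a.min()) / (a.max() - a.min())

        rng = np.random.default_rng(3)
        shape = (200, 200)                          # toy grid over the study area
        ec = rescale(rng.gamma(2.0, 1.0, shape))    # electrical conductivity
        sar = rescale(rng.gamma(2.0, 1.0, shape))   # sodium absorption ratio
        ph = rescale(rng.normal(7.5, 0.5, shape))   # soil pH

        # Combine layers with importance weights (assumed, summing to 1).
        weights = {"ec": 0.5, "sar": 0.3, "ph": 0.2}
        salinity = weights["ec"] * ec + weights["sar"] * sar + weights["ph"] * ph

        # Segregate higher / moderate / lower salinity zones by tercile thresholds.
        zones = np.digitize(salinity, np.quantile(salinity, [1 / 3, 2 / 3]))
        print(np.bincount(zones.ravel()))           # cell counts per zone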

  10. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Herrnstein, Aaron R. [Univ. of California, Davis, CA (United States)]

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No
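
    A hedged sketch of one common ingredient of such AMR schemes: a gradient-based criterion for flagging cells to refine near sharp features such as an injection-site plume. The field, the grid-index spacing convention, and the threshold below are illustrative assumptions, not the dissertation's criterion.

        import numpy as np

        def flag_for_refinement(tracer, threshold):
            """Mark cells whose local gradient magnitude (computed per grid
            index) exceeds the threshold; flagged cells would be refined."""
            gy, gx = np.gradient(tracer)
            return np.hypot(gx, gy) > threshold

        x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
        co2 = np.exp(-50 * (x**2 + y**2))   # sharp plume around an injection site
        flags = flag_for_refinement(co2, threshold=0.1)
        print(flags.sum(), "of", flags.size, "cells flagged for refinement")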

  11. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling: First Interim Report. Prepared for the US Army European Research Office, Edison House. Includes sedimentation analysis sheets and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station. The remainder of the indexed record is OCR residue from a field data table covering material types, toe protection status, and bank positions.

  12. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in medical visualization allow the analysis of anatomical structures with the use of 3D models reconstructed from a stack of intensity-based images acquired through different techniques, with computerized tomography (CT) being one of the most common. A general medical volume graphics application usually includes an exploration task, which is sometimes preceded by an analysis process where the anatomical structures of interest are first identified. ...

  13. Capabilities of the ATHENA computer code for modeling the SP-100 space reactor concept

    Science.gov (United States)

    Fletcher, C. D.

    1985-09-01

    The capability to perform thermal-hydraulic analyses of an SP-100 space reactor was demonstrated using the ATHENA computer code. The preliminary General Electric SP-100 design was modeled using Athena. The model simulates the fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of this design. Two ATHENA demonstration calculations were performed simulating accident scenarios. A mask for the SP-100 model and an interface with the Nuclear Plant Analyzer (NPA) were developed, allowing a graphic display of the calculated results on the NPA.

  14. New Modelling Capabilities in Commercial Software for High-Gain Antennas

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Lumholt, Michael; Meincke, Peter

    2012-01-01

    This paper presents an overview of selected new modelling algorithms and capabilities in commercial software tools developed by TICRA. A major new area is the design and analysis of printed reflectarrays, where a fully integrated design environment is under development, allowing fast and accurate characterization of the reflectarray element, an initial phase-only synthesis, followed by a full optimization procedure taking into account the near field from the feed and the finite extent of the array. Another interesting new modelling capability is made available through the DIATOOL software, which is a new type of EM software tool aimed at extending the ways engineers can use antenna measurements in the antenna design process. The tool allows reconstruction of currents and near fields on a 3D surface conformal to the antenna, by using the measured antenna field as input. The currents on the antenna...

  15. Mass transport and direction dependent battery modeling for accurate on-line power capability prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wiegman, H.L.N. [General Electric Corporate Research and Development, Schenectady, NY (United States)

    2000-07-01

    Some recent advances in battery modeling are discussed with reference to on-line impedance estimates and power performance predictions for aqueous-solution, porous-electrode cell structures. The objective was to determine which methods accurately estimate a battery's internal state and power capability while operating in a charge-sustaining hybrid electric vehicle (HEV) over a wide range of driving conditions. Enhancements to the Randles-Ershler equivalent electrical model of common cells with lead-acid, nickel-cadmium, and nickel-metal hydride chemistries are described. The study also investigated which impedances are sensitive to boundary-layer charge concentrations and mass transport limitations. Non-linear impedances were shown to significantly affect the battery's ability to process power. The main advantage of on-line estimation of a battery's impedance state and power capability is that the battery can be optimally sized for any application. refs., tabs., figs., append.
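
    The enhanced Randles-Ershler model with mass-transport terms is not reproduced here; as a hedged illustration of how an on-line impedance estimate feeds a power capability prediction, the sketch below uses a simple Thevenin-type cell model with assumed voltage and current limits.

        def discharge_power_capability(v_oc, r_int, v_min, i_max):
            """Peak discharge power for a simple Thevenin-type cell model:
            terminal voltage v = v_oc - i * r_int, limited by a minimum
            terminal voltage and a maximum current."""
            i_lim = min((v_oc - v_min) / r_int, i_max)  # current at voltage floor
            v_term = v_oc - i_lim * r_int
            return v_term * i_lim

        # Hypothetical NiMH module: 14.4 V open circuit, 20 mOhm estimated
        # on line, 10.8 V floor, 200 A current limit -> about 1.9 kW.
        print(discharge_power_capability(v_oc=14.4, r_int=0.020,
                                         v_min=10.8, i_max=200.0))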

  16. A NOVEL MULTI-VALUED BAM MODEL WITH IMPROVED ERROR-CORRECTING CAPABILITY

    Institute of Scientific and Technical Information of China (English)

    Zhang Daoqiang; Chen Songcan

    2003-01-01

    A Hyperbolic Tangent multi-valued Bi-directional Associative Memory (HTBAM) model is proposed in this letter. Two general energy functions are defined to prove the stability of one class of multi-valued Bi-directional Associative Memories (BAMs), with HTBAM being a special case. Simulation results show that HTBAM has a competitive storage capacity and much more error-correcting capability than other multi-valued BAMs.
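
    The letter's exact HTBAM formulation is not given in this record; the sketch below shows a generic bidirectional associative memory recall loop with a hyperbolic tangent activation over bipolar pattern pairs, with the steepness constant and patterns as assumptions.

        import numpy as np

        # Store bipolar pattern pairs in W via Hebbian outer products.
        X = np.array([[1, -1, 1, -1, 1], [1, 1, -1, -1, 1]], dtype=float)
        Y = np.array([[1, -1, 1], [-1, 1, 1]], dtype=float)
        W = X.T @ Y

        def recall(x, beta=2.0, steps=20):
            """Bidirectional recall with tanh activation, followed by a hard
            sign readout; beta controls the steepness of the activation."""
            for _ in range(steps):
                y = np.tanh(beta * (x @ W))
                x = np.tanh(beta * (y @ W.T))
            return np.sign(x), np.sign(y)

        # Probe with a corrupted copy of the first stored pattern (last bit
        # flipped); recall should converge back to (X[0], Y[0]).
        probe = np.array([1, -1, 1, -1, -1], dtype=float)
        print(recall(probe))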

  17. Earth Observation and Geospatial techniques for Soil Salinity and Land Capability Assessment over Sundarban Bay of Bengal Coast, India

    Directory of Open Access Journals (Sweden)

    Das Sumanta

    2016-12-01

    To guarantee food security and job creation for farmers ranging from small-scale to commercial, unproductive farms in South 24 PGS, West Bengal, need a land reform program through which they are restructured and evaluated for agricultural productivity. This study established a potential role of remote sensing and GIS for the identification and mapping of salinity zones and the spatial planning of agricultural land over the Basanti and Gosaba Islands (808.314 sq. km) of the South 24 PGS district of West Bengal. The primary data, i.e., soil pH, Electrical Conductivity (EC), and Sodium Absorption Ratio (SAR), were obtained from soil samples at various GCP (Ground Control Point) locations, collected at 50 m intervals by handheld GPS from 0-100 cm depths. The secondary information was acquired from remotely sensed satellite data (LANDSAT ETM+) on different time scales and a digital elevation model. The collected field samples were tested in the laboratory and validated with remote-sensing-based digital indices analysis over the temporal satellite data to assess potential changes due to over-salinization. Soil physical properties such as texture, structure, depth, and drainage condition are stored as attributes in a geographical soil database and linked with the soil map units. The thematic maps are integrated with the climatic and terrain conditions of the area to produce land capability maps for paddy. Finally, a weighted overlay analysis was performed to assign weights according to the importance of the parameters taken into account for saline area identification and mapping, segregating higher, moderate, and lower salinity zones over the study area.

  18. Spatial Preference Modelling for equitable infrastructure provision: an application of Sen's Capability Approach

    Science.gov (United States)

    Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin

    2014-01-01

    To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, 'the spatial distribution patterns of priority targeting for allocation' (SPT) and, second, 'the effect of new distribution patterns after location-allocation' (ELA). The Moran's I index of the initial map, and its comparison with six patterns for SPT as well as ELA, consistently indicates that SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSSs to address spatial equity. This study thus proposes a new formal method for SDSSs, with specific attention to resource location-allocation, to address spatial equity.
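
    A minimal sketch of the global Moran's I statistic underlying the spatial equity interpretation, on a hypothetical six-zone corridor; the contiguity structure and mobility values are illustrative assumptions, not study data.

        import numpy as np

        def morans_i(x, W):
            """Global Moran's I for values x under spatial weights W."""
            z = x - x.mean()
            return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

        # Hypothetical corridor of six zones with chain contiguity.
        n = 6
        W = np.zeros((n, n))
        for i in range(n - 1):
            W[i, i + 1] = W[i + 1, i] = 1.0
        mobility = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])  # opportunity index
        print(morans_i(mobility, W))   # positive: similar values cluster in space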

  19. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forestry and landscape is a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation lead to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and materials of a large group of plants are a big burden even for high-performance computers, and they often overwhelm the available numerical calculation and graphics rendering power. Thus, in order to accelerate the rendering of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e., leaves, flowers, fruits, and inter-nodes. Our approach is a simplification process applied to all sparse organs at the same time, i.e., Level of Detail (LOD) and multi-resolution models for plants. We explain the principles and construction of plant simplification, used to build LOD and multi-resolution models of sparse organs and branches of big trees. These approaches benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally, we demonstrate the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.
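
    A hedged sketch of the simplest form of distance-driven LOD selection such a renderer might perform per frame; the thresholds and the billboard fallback are illustrative assumptions, not the paper's construction.

        # Each plant (or organ cluster) keeps several precomputed resolutions;
        # the renderer picks one from the camera distance each frame.
        LOD_THRESHOLDS = [20.0, 60.0, 150.0]   # metres: full, medium, coarse

        def select_lod(distance):
            """Return the LOD index (0 = full geometry) for a camera distance."""
            for level, limit in enumerate(LOD_THRESHOLDS):
                if distance < limit:
                    return level
            return len(LOD_THRESHOLDS)          # beyond last threshold: billboard

        for d in (5.0, 45.0, 100.0, 400.0):
            print(d, "->", select_lod(d))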

  20. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    Science.gov (United States)

    2015-01-01

    ...several years earlier, and was echoed in reports on the state of U.S. climate modeling (NRC 1998, NRC 2001, Rood et al. 2000). Leads from ... infrastructure developers emerged, whose members exchanged ideas through a series of international meetings focused on coupling techniques (e.g. ...). A detailed discussion of techniques is available in documents such as Craig (2014). The different approaches encountered to date can be

  1. Evaluation of remote-sensing-based rainfall products through predictive capability in hydrological runoff modelling

    DEFF Research Database (Denmark)

    Stisen, Simon; Sandholt, Inge

    2010-01-01

    The emergence of regional and global satellite-based rainfall products with high spatial and temporal resolution has opened up new large-scale hydrological applications in data-sparse or ungauged catchments. In particular, distributed hydrological models can benefit from the good spatial coverage and distributed nature of satellite-based rainfall estimates (SRFE). In this study, five SRFEs with a temporal resolution of 24 h and spatial resolutions between 8 and 27 km have been evaluated through their predictive capability in a distributed hydrological model of the Senegal River basin in West Africa. The main

  2. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between the grammar and the trained automaton is minimized.
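
    A toy sketch of the objective being minimized: the Kullback-Leibler distance between two distributions over a shared finite set of strings. Real PCFG-to-automaton training operates over infinite string sets; the probabilities below are assumptions.

        import math

        def kl_divergence(p, q):
            """KL(p || q) over a shared finite support; assumes q > 0
            wherever p > 0."""
            return sum(p[s] * math.log(p[s] / q[s]) for s in p if p[s] > 0)

        # Hypothetical string probabilities from a context-free grammar (p)
        # and from a candidate finite automaton (q) on the same small support.
        p = {"ab": 0.5, "aabb": 0.25, "aaabbb": 0.25}
        q = {"ab": 0.6, "aabb": 0.25, "aaabbb": 0.15}
        print(kl_divergence(p, q))   # quantity minimized when training q on p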

  3. A new formal model for privilege control with supporting POSIX capability mechanism

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2005-01-01

    To enforce the least privilege principle in an operating system, process privileges must be effectively controlled, but this is very difficult because a process changes over time. In this paper, based on an analysis of how process privileges are generated and how they work, a hierarchy implementing the least privilege principle with three layers, i.e., an administration layer, a functionality control layer, and a performance layer, is proposed. It is clearly demonstrated that bounding a privilege's working scope is a critical part of controlling privilege, but this is only mentioned implicitly, and not supported, in the POSIX capability mechanism. Based on an analysis of existing privilege control mechanisms, both an improved capability inheritance formula and a new complete formal model for controlling processes, integrating RBAC, DTE, and the POSIX capability mechanism, are introduced. The new invariants in the model show that this privilege control mechanism differs from those of RBAC, DTE, and POSIX; it generalizes the subdomain control mechanism and makes it dynamic.
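
    For reference, the classic draft-POSIX.1e exec-time transformation (not the paper's improved inheritance formula, which is not reproduced in this record) can be sketched as bitmask operations; the capability bit positions are illustrative.

        # Capability sets as bitmasks (bit positions are illustrative).
        CAP_CHOWN, CAP_KILL, CAP_NET_ADMIN = 1 << 0, 1 << 1, 1 << 2

        def exec_transform(p_inh, p_perm, f_perm, f_inh, f_eff):
            """Draft-POSIX.1e capability transformation at exec time:
                pI' = pI
                pP' = fP | (fI & pI)
                pE' = fE & pP'
            (fE is treated as a full set here; in Linux it is a single bit.)"""
            new_inh = p_inh
            new_perm = f_perm | (f_inh & p_inh)
            new_eff = f_eff & new_perm
            return new_inh, new_perm, new_eff

        # A process holding CAP_KILL in its inheritable set executes a file
        # whose inheritable set allows CAP_KILL and whose permitted set adds
        # CAP_CHOWN; the result bounds the privilege's working scope.
        print(exec_transform(p_inh=CAP_KILL, p_perm=CAP_KILL,
                             f_perm=CAP_CHOWN, f_inh=CAP_KILL,
                             f_eff=CAP_CHOWN | CAP_KILL))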

  4. Gaming Technique in Formation of Motor-Coordinational and Psychomotor Capabilities of 5-6 Year-old Children, Going in for Tennis

    OpenAIRE

    Ervand P. Gasparyan

    2012-01-01

    Applying a gaming technique during the training of 5-6-year-old tennis players, at the age when motor coordination and psychomotor capabilities are formed, increased the indexes of all the examined motor coordinations, while both preserving the natural age-related character of motor coordination changes and fundamentally improving this process.

  5. Gaming Technique in Formation of Motor-Coordinational and Psychomotor Capabilities of 5-6 Year-old Children, Going in for Tennis

    Directory of Open Access Journals (Sweden)

    Ervand P. Gasparyan

    2012-06-01

    Applying a gaming technique during the training of 5-6-year-old tennis players, at the age when motor coordination and psychomotor capabilities are formed, increased the indexes of all the examined motor coordinations, while both preserving the natural age-related character of motor coordination changes and fundamentally improving this process.

  6. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results are presented for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated, or multilateral) wells. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation is presented.

  7. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model representing the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve operational performance and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment, and includes failure management subnetworks. RAM information and other performance measures that have an impact on design requirements are collected. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
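
    A minimal sketch of the idea of folding failure and repair data into a task-based mission simulation, here as a simple Monte Carlo with exponential failure and repair times; the task list, durations, and rates are illustrative assumptions, not the AFAS/FARV study data.

        import random

        random.seed(4)
        TASKS = [("upload", 1.0), ("travel_to_AFAS", 0.5), ("resupply", 2.0),
                 ("tactical_move", 0.7), ("return", 0.5)]  # (name, duration hrs)
        MTBF, MTTR = 40.0, 1.5   # mean time between failures / to repair, hrs

        def mission_time():
            """One mission realization: tasks run in sequence; a failure
            during a task (memoryless, rate 1/MTBF) adds a repair delay."""
            t = 0.0
            for _, duration in TASKS:
                t += duration
                if random.expovariate(1.0 / MTBF) < duration:  # failure occurs
                    t += random.expovariate(1.0 / MTTR)        # add repair time
            return t

        runs = [mission_time() for _ in range(10000)]
        print(sum(runs) / len(runs))   # mean mission duration including repairs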

  8. A Cycle Model of Co-evolution between Emerging Technology and Firm’s Capabilities Based on Case Study

    Institute of Scientific and Technical Information of China (English)

    Wang; Min; Li; Limiao; Yin; Lu

    2011-01-01

    This study explores the mechanism of co-evolution between emerging technology and firm capabilities. Our research focus is how a firm's capabilities affect the evolution of emerging technology through strategy. Based on theoretical analysis and a case study, this paper builds a theoretical framework: firm capability is classified into static capability and dynamic capability, and the evolution of emerging technology is summarized by a cycle model. Further, strategy is treated as a mediating variable. The conclusion is that static capability affects the evolution of emerging technology through strategy implementation, and dynamic capability affects the evolution through strategy change. In both situations, organizational learning is a key capability for the evolution of emerging technology.

  9. Management Innovation Capabilities

    DEFF Research Database (Denmark)

    Harder, Mie

    Management innovation is the implementation of a new management practice, process, technique or structure that significantly alters the way the work of management is performed. This paper presents a typology categorizing management innovation along two dimensions: radicalness and complexity. Then, the paper introduces the concept of management innovation capabilities, which refers to the ability of a firm to purposefully create, extend and modify its managerial resource base to address rapidly changing environments. Drawing upon the behavioral theory of the firm and the dynamic capabilities framework, the paper proposes a model of the foundations of management innovation. Propositions and implications for future research are discussed.

  10. Dark Current and Multipacting Capabilities in OPAL: Model Benchmarks and Applications

    CERN Document Server

    Wang, C; Yin, Z G; Zhang, T J

    2012-01-01

    Dark current and multiple electron impacts (multipacting), as observed for example in radio frequency (RF) structures of accelerators, are usually harmful to the equipment and the beam quality. These effects need to be suppressed to guarantee efficient and stable operation. Large scale simulations can be used to understand their causes and to develop strategies to suppress these phenomena. We extend OPAL, a parallel framework for charged particle optics in accelerator structures and beam lines, with the necessary physics models to efficiently and precisely simulate multipacting phenomena. We added a Fowler-Nordheim field emission model, two secondary electron emission models, developed by Furman-Pivi and Vaughan respectively, as well as efficient 3D boundary geometry handling capabilities. The models and their implementation are carefully benchmarked against a non-stationary multipacting theory for the classic parallel plate geometry. A dedicated parallel plate experiment is sketched.
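
    For orientation, the elementary Fowler-Nordheim relation underlying the field emission model reads J = (A F^2 / phi) exp(-B phi^(3/2) / F). A minimal sketch using the standard first and second FN constants; field enhancement and barrier-correction factors, which a production code such as OPAL would include, are omitted here:

        import math

        # Elementary Fowler-Nordheim current density (no barrier corrections).
        A_FN = 1.541434e-6    # first FN constant [A eV V^-2]
        B_FN = 6.830890e9     # second FN constant [eV^-1.5 V m^-1]

        def fn_current_density(field_v_per_m: float, work_function_ev: float) -> float:
            """Field-emitted current density in A/m^2 for surface field F and
            work function phi, in the elementary FN form."""
            return (A_FN * field_v_per_m**2 / work_function_ev) * \
                math.exp(-B_FN * work_function_ev**1.5 / field_v_per_m)

        # Example: copper-like work function of 4.5 eV at a 5 GV/m local field.
        print(f"J = {fn_current_density(5e9, 4.5):.3e} A/m^2")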

  11. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    Science.gov (United States)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to accurately represent land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated

  12. Verification of the predictive capabilities of the 4C code cryogenic circuit model

    Science.gov (United States)

    Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi

    2014-01-01

    The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented, a-causal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including data from the supercritical He loop HELIOS at CEA Grenoble. However, all this analysis work was done after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To that end, a set of ad hoc experimental scenarios was designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C were then performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  13. Model for the extension of the processing and memory capabilities of Java Card smartcards

    Directory of Open Access Journals (Sweden)

    Susana María Ramírez Brey

    2014-01-01

    Full Text Available Smartcards have distinctive features such as portability, owing to their reduced size, and low cost, which allow them to be used on a large scale. Alongside these characteristics come limitations in hardware resources, related fundamentally to memory and processing capabilities. These and other limitations of Java Card technology are significant constraints for smartcard application developers. In this work, a smartcard application development model based on Java Card technology is presented that extends the memory and processing capabilities of the card by making use of the host computer's hardware resources, while guaranteeing the safe environment that is characteristic of this type of device. The proposed development model provides a mechanism for storing application data off-card, and for executing computationally expensive algorithms off-card when their runtime or complexity makes this more feasible. This new model is intended to significantly broaden the applications and use of smartcards in connected and controlled environments such as companies and institutions.

  14. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  15. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously the stations in Goddard earth models have been derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.

  16. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    Science.gov (United States)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
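
    The contrast between the two approximation types is easy to reproduce. The sketch below uses one independent variable, a least-squares quadratic via numpy, and scikit-learn's Gaussian process regressor standing in for a kriging interpolator; the multimodal test function is a made-up placeholder:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def f(x):                       # multimodal test response (illustrative)
            return np.sin(3 * x) + 0.5 * x

        x_train = np.linspace(0.0, 4.0, 12)
        y_train = f(x_train)

        # Quadratic polynomial by least squares: cheap, but a single curve
        # cannot follow multiple local extrema.
        poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=2)

        # Kriging-type interpolator: flexible, but costlier to build and tune.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        gp.fit(x_train.reshape(-1, 1), y_train)

        x_test = np.linspace(0.0, 4.0, 200)
        err_poly = np.max(np.abs(poly(x_test) - f(x_test)))
        err_gp = np.max(np.abs(gp.predict(x_test.reshape(-1, 1)) - f(x_test)))
        print(f"max |error| quadratic: {err_poly:.3f}, GP/kriging: {err_gp:.3f}")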

  17. Adaptive Planning: Understanding Organizational Workload to Capability/Capacity through Modeling and Simulation

    Science.gov (United States)

    Hase, Chris

    2010-01-01

    In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with an objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months) [2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP) directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload, associated with developing and staffing JSCP [4] directed contingency plans, against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload against capability/capacity. Gaining an understanding of the relationship between plan complexity, number of plans, planning processes, and number of planners and the time required for plan development provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight into key decisions on where best to allocate scarce resources in an effort to meet DoD planning objectives.

  18. Expand the Modeling Capabilities of DOE's EnergyPlus Building Energy Simulation Program

    Energy Technology Data Exchange (ETDEWEB)

    Don Shirey

    2008-02-28

    EnergyPlus(TM) is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making of zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. Version 1.0 of EnergyPlus was released in April 2001, followed by semiannual updated versions over the ensuing seven-year period. This report summarizes work performed by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC) to expand the modeling capabilities of EnergyPlus. The project tasks involved implementing, testing, and documenting the following new features or enhancements of existing features: (1) A model for packaged terminal heat pumps; (2) A model for gas engine-driven heat pumps with waste heat recovery; (3) Proper modeling of window screens; (4) Integrating and streamlining EnergyPlus air flow modeling capabilities; (5) Comfort-based controls for cooling and heating systems; and (6) An improved model for microturbine power generation with heat recovery. UCF/FSEC located existing mathematical models or generated new models for these features and incorporated them into EnergyPlus. The existing or new models were (re)written using the Fortran 90/95 programming language and were integrated within EnergyPlus in accordance with the EnergyPlus Programming Standard and Module Developer's Guide. Each model/feature was thoroughly tested and identified errors were repaired. Upon completion of each model implementation, the existing EnergyPlus documentation (e.g., Input Output Reference and Engineering Document) was updated with information describing the new or enhanced feature. Reference data sets were generated for several of the features to aid program users in selecting proper

  19. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    Science.gov (United States)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damages if structural strengthening measures were not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.

  20. Towards enhancing Sandia's capabilities in multiscale materials modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Aidun, John Bahram; Fang, Huei Eliot; Barbour, John Charles; Westrich, Henry Roger; Chen, Er-Ping

    2004-01-01

    We report our conclusions in support of the FY 2003 Science and Technology Milestone ST03-3.5. The goal of the milestone was to develop a research plan for expanding Sandia's capabilities in materials modeling and simulation. From inquiries and discussion with technical staff during FY 2003 we conclude that it is premature to formulate the envisioned coordinated research plan. The more appropriate goal is to develop a set of computational tools for making scale transitions and accumulate experience with applying these tools to real test cases so as to enable us to attack each new problem with higher confidence of success.

  1. Initiative-taking, Improvisational Capability and Business Model Innovation in Emerging Market

    DEFF Research Database (Denmark)

    Cao, Yangfeng

    Business model innovation plays a very important role in developing competitive advantage when multinational small and medium-sized enterprises (SMEs) from developed countries enter emerging markets, because of the large contextual distances or gaps between the emerging and developed economies. Many prior studies have shown that foreign subsidiaries play an important role in shaping the overall strategy of the parent company. However, little is known about how a subsidiary specifically facilitates business model innovation (BMI) in emerging markets. Adopting the method of comparative ... innovation in emerging markets. We find that high initiative-taking and strong improvisational capability can accelerate business model innovation. Our research contributes to the literature on international and strategic entrepreneurship.

  2. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hamlin, Nathaniel D., E-mail: nh322@cornell.edu [438 Rhodes Hall, Cornell University, Ithaca, NY, 14853 (United States); Seyler, Charles E., E-mail: ces7@cornell.edu [Cornell University, Ithaca, NY, 14853 (United States)

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  3. A Scalable Model for the Performance Evaluation of ROADMs with Generic Switching Capabilities

    Directory of Open Access Journals (Sweden)

    Athanasios S Tsokanos

    2010-10-01

    Full Text Available In order to evaluate the performance of Reconfigurable Optical Add/Drop Multiplexers (ROADMs) consisting of a single large switch in circuit-switched Wavelength-Division Multiplexing (WDM) networks, a theoretical Queuing Network Model (QNM) is developed, which consists of two M/M/c/c loss systems, each of which is analyzed in isolation. An overall analytical blocking probability of a ROADM is obtained. This model can also be used for the performance optimization of ROADMs with a single switch capable of switching all or a partial number of the wavelengths being used. It is demonstrated how the proposed model can be used for the performance evaluation of a ROADM for different numbers of wavelengths inside the switch, under various traffic intensity conditions, producing an exact blocking probability solution. The accuracy of the analytical results is validated by simulation.
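
    Each M/M/c/c loss system in such a decomposition yields the Erlang B blocking probability, so the basic building block of the analysis can be sketched in a few lines; the coupling of the two systems is not reproduced here, and the traffic figures are illustrative:

        def erlang_b(traffic_erlangs: float, servers: int) -> float:
            """Blocking probability of an M/M/c/c loss system via the stable
            Erlang B recursion B(0) = 1, B(c) = a*B(c-1) / (c + a*B(c-1))."""
            b = 1.0
            for c in range(1, servers + 1):
                b = traffic_erlangs * b / (c + traffic_erlangs * b)
            return b

        # Example: a switch able to handle 8 of the wavelengths in use,
        # offered 5 Erlangs of connection traffic (illustrative numbers).
        print(f"blocking probability: {erlang_b(5.0, 8):.4f}")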

  4. Implementing a Nuclear Power Plant Model for Evaluating Load-Following Capability on a Small Grid

    Science.gov (United States)

    Arda, Samet Egemen

    A pressurized water reactor (PWR) nuclear power plant (NPP) model is introduced into the Positive Sequence Load Flow (PSLF) software by General Electric in order to evaluate the load-following capability of NPPs. The nuclear steam supply system (NSSS) consists of a reactor core, hot and cold legs, plenums, and a U-tube steam generator. The physical systems listed above are represented by mathematical models utilizing a state-variable lumped-parameter approach. A steady-state control program for the reactor, and simple turbine and governor models, are also developed. The adequacy of the isolated reactor core, the isolated steam generator, and the complete PWR models is tested in Matlab/Simulink, and dynamic responses are compared with the test results obtained from the H. B. Robinson NPP. Test results illustrate that the developed models represent the dynamic features of the real physical systems and are capable of predicting responses due to small perturbations of external reactivity and steam valve opening. Subsequently, the NSSS representation is incorporated into PSLF and coupled with built-in excitation system and generator models. Different simulation cases are run for a sudden loss of generation in a small power system which includes hydroelectric and natural gas power plants besides the developed PWR NPP. The conclusion is that the NPP can respond to a disturbance in the power system without exceeding any design and safety limits if appropriate operational conditions, such as achieving the NPP turbine control by adjusting the speed of the steam valve, are met. In other words, the NPP can participate in the control of system frequency and improve the overall power system performance.
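
    As a flavor of the state-variable, lumped-parameter style of model described above (a generic illustration, not the paper's NSSS representation), a point-kinetics core with one effective delayed-neutron group and a simple fuel-temperature feedback can be integrated directly; all parameter values are typical illustrative numbers:

        # One-delayed-group point kinetics with fuel temperature feedback.
        beta, lam, LAMBDA = 0.0065, 0.08, 2e-5   # delayed fraction, precursor decay [1/s], generation time [s]
        alpha_f = -2e-5                          # fuel temperature reactivity coefficient [dk/k per K]
        h_fuel = 0.1                             # lumped fuel heat-up constant [K per unit excess power per s]

        def simulate(rho_ext, t_end=50.0, dt=1e-3):
            """Relative power after a constant external reactivity insertion."""
            n, c, dT = 1.0, beta / (lam * LAMBDA), 0.0   # equilibrium initial state
            for _ in range(int(t_end / dt)):
                rho = rho_ext + alpha_f * dT             # net reactivity with feedback
                dn = ((rho - beta) / LAMBDA * n + lam * c) * dt
                dc = (beta / LAMBDA * n - lam * c) * dt
                ddT = h_fuel * (n - 1.0) * dt            # temperature rises with excess power
                n, c, dT = n + dn, c + dc, dT + ddT
            return n

        # Response to a small external reactivity step of +10 pcm (1e-4 dk/k).
        print("relative power after 50 s:", simulate(1e-4))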

  5. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    Science.gov (United States)

    Iacobucci, Joseph V.

    problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example, it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of systems architecture studies of different sizes. This was necessary since a system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. The SEAD capability model outputs prompted the discovery of a new issue, the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes, the use
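
    The exhaustive enumeration at the heart of such capability-based analysis can be pictured with a toy sketch: assign each required task to one of the candidate systems, score every complete assignment, and keep the best. The task names, system names, and effectiveness scores below are hypothetical, and the multiplicative roll-up metric is only one of many possible choices; the actual framework scales this pattern to thousands of tasks:

        from itertools import product

        # Hypothetical capability model: candidate systems per task with
        # illustrative effectiveness scores for each (task, system) pairing.
        TASKS = {
            "detect_radar": {"uav_a": 0.8, "jet_b": 0.6},
            "jam_radar":    {"jet_b": 0.7, "escort_c": 0.9},
            "strike_site":  {"jet_b": 0.9, "uav_a": 0.4},
        }

        task_names = list(TASKS)
        choices = [list(TASKS[t].items()) for t in task_names]

        best_score, best_assignment = -1.0, None
        for combo in product(*choices):            # every task-to-system assignment
            score = 1.0
            for _system, effectiveness in combo:
                score *= effectiveness             # simple multiplicative roll-up
            if score > best_score:
                best_score = score
                best_assignment = {t: s for t, (s, _) in zip(task_names, combo)}

        print(best_assignment, round(best_score, 3))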

  6. CIVA workstation for NDE: mixing of NDE techniques and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Besnard, R. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances]; Bayon, G. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Reacteurs Experimentaux]; Boutaine, J.L. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants]

    1994-12-31

    In order to compare the capabilities of different NDE techniques, or to use complementary inspection methods, the same components are examined with different procedures. It is then very useful to have a single evaluation tool allowing direct comparison of the methods. CIVA is an open system for processing NDE data; it is adapted to a standard workstation (UNIX, C, MOTIF) and can read the different media on which the digitized data are stored. It includes a large library of signal and image processing methods accessible and adapted to NDE data (filtering, deconvolution, 2D and 3D spatial correlations...). Different CIVA application examples are described: brazing inspection (neutronography, ultrasonic), tube inspection (eddy current, ultrasonic), and aluminium weld examination (UT and radiography). Modelling and experimental results are compared. 16 fig., 7 ref.

  7. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time consuming nature of empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales to a power law and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
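
    The sampling-plus-surrogate idea is straightforward to sketch. Below, a Latin Hypercube design from scipy samples a hypothetical four-parameter inspection response, and a Gaussian process surrogate stands in for the Multivariate Adaptive Regression Splines used in the paper; the response function itself is a made-up placeholder:

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor

        def inspection_response(x):
            """Placeholder for the simulated scalar NDE response (e.g. an amplitude)."""
            return np.sin(x[:, 0]) * np.exp(-x[:, 1]) + 0.3 * x[:, 2] - 0.1 * x[:, 3] ** 2

        # Latin Hypercube design over 4 inspection parameters in [0, 1]^4,
        # sampling far fewer points than a full factorial grid would need.
        sampler = qmc.LatinHypercube(d=4, seed=0)
        x_train = sampler.random(n=60)
        y_train = inspection_response(x_train)

        surrogate = GaussianProcessRegressor().fit(x_train, y_train)

        # Check surrogate accuracy on an independent random test set.
        x_test = np.random.default_rng(1).random((500, 4))
        err = np.abs(surrogate.predict(x_test) - inspection_response(x_test))
        print(f"mean |error|: {err.mean():.4f}, max |error|: {err.max():.4f}")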

  8. Water structure-forming capabilities are temperature shifted for different models.

    Science.gov (United States)

    Shevchuk, Roman; Prada-Gracia, Diego; Rao, Francesco

    2012-06-28

    A large number of water models exist for molecular simulations. They differ in the ability to reproduce specific features of real water instead of others, like the correct temperature for the density maximum or the diffusion coefficient. Past analysis mostly concentrated on ensemble quantities, while few data were reported on the different microscopic behavior. Here, we compare seven widely used classical water models (SPC, SPC/E, TIP3P, TIP4P, TIP4P-Ew, TIP4P/2005, and TIP5P) in terms of their local structure-forming capabilities through hydrogen bonds for temperatures ranging from 210 to 350 K by the introduction of a set of order parameters taking into account the configuration of up to the second solvation shell. We found that all models share the same structural pattern up to a temperature shift. When this shift is applied, all models overlap onto a master curve. Interestingly, increased stabilization of fully coordinated structures extending to at least two solvation shells is found for models that are able to reproduce the correct position of the density maximum. Our results provide a self-consistent atomic-level structural comparison protocol, which can be of help in elucidating the influence of different water models on protein structure and dynamics.

  9. Programming with models: modularity and abstraction provide powerful capabilities for systems biology.

    Science.gov (United States)

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2009-03-06

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enables libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.
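
    The flavor of such programmable modularity can be conveyed with a small sketch, in Python rather than the authors' infrastructure: a generic mass-action conversion module is defined once, instantiated with different species and rate constants, and the instances are composed into a single ODE right-hand side.

        # A reusable "module": generic mass-action reaction a -> b with rate k.
        def conversion(a, b, k):
            """Return a function adding this reaction's fluxes to a rate vector."""
            def contribute(state, rates):
                flux = k * state[a]
                rates[a] -= flux
                rates[b] += flux
            return contribute

        def compose(*modules):
            """Combine independently specified modules into one ODE right-hand side."""
            def rhs(state):
                rates = {s: 0.0 for s in state}
                for m in modules:
                    m(state, rates)
                return rates
            return rhs

        # Instantiate the same generic module in two different contexts.
        model = compose(conversion("A", "B", k=0.5), conversion("B", "C", k=0.1))
        state = {"A": 1.0, "B": 0.0, "C": 0.0}
        for _ in range(1000):                       # crude forward-Euler integration
            rates = model(state)
            state = {s: state[s] + 0.01 * rates[s] for s in state}
        print({s: round(v, 3) for s, v in state.items()})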

  10. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960's, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method-of-Moments, Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant sized antennas. There are several reasons for this including the systematic updating and extension of its capabilities, extensive user-oriented documentation and accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC world wide. 23 refs., 10 figs.

  11. MESSOC capabilities and results. [Model for Estimating Space Station Operations Costs]

    Science.gov (United States)

    Shishko, Robert

    1990-01-01

    MESSOC (Model for Estimating Space Station Operations Costs) is the result of a multi-year effort by NASA to understand and model the mature operations cost of Space Station Freedom. This paper focuses on MESSOC's ability to contribute to life-cycle cost analyses through its logistics equations and databases. Together, these afford MESSOC the capability to project not only annual logistics costs for a variety of Space Station scenarios, but critical non-cost logistics results such as annual Station maintenance crewhours, upweight/downweight, and on-orbit sparing availability as well. MESSOC results using current logistics databases and baseline scenario have already shown important implications for on-orbit maintenance approaches, space transportation systems, and international operations cost sharing.

  12. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  13. The Aviation System Analysis Capability Air Carrier Cost-Benefit Model

    Science.gov (United States)

    Gaier, Eric M.; Edlich, Alexander; Santmire, Tara S.; Wingrove, Earl R., III

    1999-01-01

    To meet its objective of assisting the U.S. aviation industry with the technological challenges of the future, NASA must identify research areas that have the greatest potential for improving the operation of the air transportation system. Therefore, NASA is developing the ability to evaluate the potential impact of various advanced technologies. By thoroughly understanding the economic impact of advanced aviation technologies and by evaluating how the new technologies will be used in the integrated aviation system, NASA aims to balance its aeronautical research program and help speed the introduction of high-leverage technologies. To meet these objectives, NASA is building the Aviation System Analysis Capability (ASAC). NASA envisions ASAC primarily as a process for understanding and evaluating the impact of advanced aviation technologies on the U.S. economy. ASAC consists of a diverse collection of models and databases used by analysts and other individuals from the public and private sectors brought together to work on issues of common interest to organizations in the aviation community. ASAC also will be a resource available to the aviation community to analyze; inform; and assist scientists, engineers, analysts, and program managers in their daily work. The ASAC differs from previous NASA modeling efforts in that the economic behavior of buyers and sellers in the air transportation and aviation industries is central to its conception. Commercial air carriers, in particular, are an important stakeholder in this community. Therefore, to fully evaluate the implications of advanced aviation technologies, ASAC requires a flexible financial analysis tool that credibly links the technology of flight with the financial performance of commercial air carriers. By linking technical and financial information, NASA ensures that its technology programs will continue to benefit the user community. In addition, the analysis tool must be capable of being incorporated into the

  14. A Relationship Framework for Building Information Modeling (BIM) Capability in Quantity Surveying Practice and Project Performance

    Directory of Open Access Journals (Sweden)

    Wong, P. F.

    2015-12-01

    Full Text Available The construction industry has suffered from poor project performance, and it is crucial to find solutions to improve this issue. Quantity surveyors (QSs) play a key role in managing project cost. However, their method of performing tasks is tedious, to the point of affecting project performance. Building information modeling (BIM) application is attracting attention in the construction industry as a means to improve project performance. However, adoption is low among QSs due to the limited study of BIM's capabilities in their profession. This research aims to identify the BIM capabilities in quantity surveying practices and to examine their relationship with project performance by developing a relationship framework. Data were collected through a questionnaire survey and interviews in Malaysia. Questionnaire results revealed that several BIM capabilities were significantly correlated with project performance, and they were validated through interviews. The relationship framework will guide QSs to focus on the identified BIM capabilities for better project outcomes.

  15. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    Science.gov (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national datasets, models and uncertainty analysis into cloud computing environments to explore and benchmark current predictive capability for national-scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, and on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets, as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogenous or quasi-homogenous functional behaviour in terms of process controls on N and P cycling
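
    The export coefficient approach at the core of this adaptation is arithmetically simple, which is part of what makes it suited to cloud-scaled national runs: the nutrient load from a catchment is the sum, over land-use (and other) sources, of an export coefficient times the extent of that source. A minimal sketch, with hypothetical coefficients:

        # Export coefficient model: load L = sum_i E_i * A_i, where E_i is the
        # export coefficient of source i and A_i its extent in the catchment.
        # Coefficients below are hypothetical illustration values [kg N / ha / yr].
        EXPORT_COEFF_N = {"arable": 20.0, "grassland": 8.0, "woodland": 2.0, "urban": 5.0}

        def nitrogen_load(areas_ha: dict[str, float]) -> float:
            """Annual N flux [kg/yr] from a catchment described by land-use areas."""
            return sum(EXPORT_COEFF_N[land_use] * area
                       for land_use, area in areas_ha.items())

        catchment = {"arable": 1200.0, "grassland": 800.0, "woodland": 500.0, "urban": 150.0}
        print(f"annual N export: {nitrogen_load(catchment):.0f} kg/yr")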

  16. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and show dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  17. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and show dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  18. Predictability of the geospace variations and measuring the capability to model the state of the system

    Science.gov (United States)

    Pulkkinen, A.

    2012-12-01

    Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue playing an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are needed also for space weather purposes. Predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As we have introduced more localized predictions of the geospace state, one central question is how predictable these local quantities are. This complex question can be addressed by rigorously measuring the model performance against the observed data. The space sciences community has made great advances on this topic over the past few years and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere and geospace models, respectively. These efforts will help establish benchmarks and thus provide means to measure progress in the field, analogous to the monitoring of improvement in lower atmospheric weather predictions carried out rigorously since the 1980s. In this paper we will discuss some of the latest advancements in predicting local geospace parameters and give an overview of some of the community efforts to rigorously measure model performances. We will also briefly discuss some of the future opportunities for advancing geospace modeling capability. These will include further development in data assimilation and ensemble

  19. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

    Full Text Available This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of the photovoltaic model, where the method has proved its ability to obtain them. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
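
    For orientation, the standard firefly update that such a hybrid builds on moves each firefly toward every brighter one with a distance-attenuated attractiveness plus a random perturbation. A minimal minimization sketch on a generic test function follows; the paper's local-search coupling is omitted, and brightness is evaluated once per sweep for simplicity:

        import numpy as np

        def firefly_minimize(f, dim=2, n=20, iters=200,
                             beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
            """Minimal standard firefly algorithm (no local-search refinement)."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5.0, 5.0, (n, dim))        # firefly positions
            for _ in range(iters):
                cost = np.array([f(xi) for xi in x])    # brightness of each firefly
                for i in range(n):
                    for j in range(n):
                        if cost[j] < cost[i]:           # j is brighter: move i toward j
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                alpha *= 0.98                           # cool the random walk
            cost = np.array([f(xi) for xi in x])
            return x[np.argmin(cost)], cost.min()

        best_x, best_f = firefly_minimize(lambda v: np.sum(v ** 2))
        print("best point:", np.round(best_x, 4), "value:", best_f)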

  20. Steganographic Technique Capable of Withstanding RQP Analysis

    Institute of Scientific and Technical Information of China (English)

    王朔中; 张新鹏; 张开文

    2002-01-01

    A new steganographic approach for 24-bit color images that can resist the RQP (raw quick pairs) steganalysis is described. The technique is based on modification of color triplets such that the existing color palette is not excessively expanded, or is even reduced. In this way, the numbers of unique colors and of pairs of close colors in the image do not rise significantly. This invalidates the RQP analysis. Experimental results are presented to support the argument.

  1. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI Supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access

  2. Application of Capability Maturity Model Integration to Innovation Management for Software and Service Companies

    Institute of Scientific and Technical Information of China (English)

    LI Jing

    2008-01-01

    To examine innovation management in small project-based firms, such as software engineering companies, which are service firms that conduct projects for their clients, capability maturity model integration (CMMI) is introduced. It is a process improvement approach that provides organizations with the essential elements of effective processes. Taking ABC Software Company as an example, the performance before and after the introduction of CMMI in the firm was compared. The results indicated that after two years of application, productivity increased 92% and the ability to detect errors improved 26.45%, while the rate of faults and the cost of software development dropped 12.45% and 77.55%, respectively. To conclude, small project-based firms benefit greatly if they incorporate CMMI into their innovation management process, particularly R&D firms, since the implementation of CMMI leads them to a promising future with higher efficiency and better effects.

  3. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Full Text Available Building Information Modeling (BIM has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still disseminated and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. Solutions that are proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help in implementation are identified and can be undertaken according to the previous maturity evaluation of an organization. The result of this research consists of a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  4. Capabilities of stochastic rainfall models as data providers for urban hydrology

    Science.gov (United States)

    Haberlandt, Uwe

    2017-04-01

    For the planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for (a) sizing of drainage networks for urban flood protection and (b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised for evaluation of the model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany, and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, with good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the flood protection and pollution reduction tasks, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G
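
    As a flavor of what the parametric family of such generators does (a generic alternating-renewal sketch, not the specific model of Haberlandt et al., 2008), wet and dry spell durations and wet-spell intensities can be drawn from fitted distributions to produce a continuous 5-minute series; all parameter values are hypothetical:

        import random

        random.seed(42)

        # Hypothetical fitted parameters: mean dry/wet spell lengths [h] and
        # mean wet-spell intensity [mm/h]; exponential distributions throughout.
        MEAN_DRY_H, MEAN_WET_H, MEAN_INTENSITY = 30.0, 3.0, 1.5
        DT_H = 5.0 / 60.0                      # 5-minute time step

        def generate_series(total_hours: float) -> list[float]:
            """Continuous synthetic rainfall series [mm per 5-min step]."""
            series, t, wet = [], 0.0, False
            while t < total_hours:
                spell = random.expovariate(1.0 / (MEAN_WET_H if wet else MEAN_DRY_H))
                intensity = random.expovariate(1.0 / MEAN_INTENSITY) if wet else 0.0
                steps = max(1, round(spell / DT_H))
                series.extend([intensity * DT_H] * steps)
                t += steps * DT_H
                wet = not wet
            return series

        rain = generate_series(24.0 * 365)     # one year of 5-minute rainfall
        print(f"annual total: {sum(rain):.0f} mm, wet fraction: "
              f"{sum(1 for r in rain if r > 0) / len(rain):.2%}")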

  5. Application of voltage oriented control technique in a fully renewable, wind powered, autonomous system with storage capabilities

    Science.gov (United States)

    Kondylis, Georgios P.; Vokas, Georgios A.; Anastasiadis, Anestis G.; Konstantinopoulos, Stavros A.

    2017-02-01

    The main purpose of this paper is to examine the technological feasibility of a small autonomous network, with electricity storage capability, which is completely electrified by wind energy. The excess energy produced, with respect to the load requirements, is sent to the batteries for storage. When the energy produced by the wind generator is not sufficient, load's energy requirement is covered by the battery system, ensuring, however, that voltage, frequency and other system characteristics are within the proper boundaries. For the purpose of this study, a Voltage Oriented Control system has been developed in order to monitor the autonomous operation and perform the energy management of the network. This system manages the power flows between the load and the storage system by properly controlling the Pulse Width Modulation pulses in the converter, thus ensuring power flows are adequate and frequency remains under control. The experimental results clearly indicate that a stand-alone wind energy system based on battery energy storage system is feasible and reliable. This paves the way for fully renewable and zero emission energy schemes.

  6. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J. [and others]

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study were used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.
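
    The scaling argument can be made concrete with a few lines of arithmetic: at depth z the prototype overburden stress is ρgz, and a 1/N-scale model spun at N g reproduces it exactly. The sketch below illustrates this; the scale factor, density and depth are assumed values, not Sandia's test parameters.

        # Centrifuge scaling sketch: a 1/N geometric model spun at N g
        # reproduces prototype self-weight stresses (sigma = rho * g * z).
        RHO = 1800.0       # soil bulk density, kg/m^3 (assumed)
        G = 9.81           # m/s^2
        N = 30             # geometric scale factor (assumed)

        depth_proto = 6.0                  # prototype burial depth, m
        depth_model = depth_proto / N      # model depth, m
        g_model = N * G                    # centrifuge acceleration

        sigma_proto = RHO * G * depth_proto
        sigma_model = RHO * g_model * depth_model
        print(f"prototype stress: {sigma_proto / 1e3:.1f} kPa, "
              f"model stress: {sigma_model / 1e3:.1f} kPa")
        # Other quantities scale too, e.g. explosive energy scales as 1/N^3.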

  7. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Directory of Open Access Journals (Sweden)

    S. Galelli

    2013-02-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts, comparatively with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.

  8. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparatively well to the best of the benchmarks (i.e. M5) in both the watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
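
    A minimal sketch of the approach using scikit-learn's ExtraTreesRegressor is given below; the synthetic lagged rainfall/flow inputs stand in for the real catchment data, and the variable names are illustrative.

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        rng = np.random.default_rng(0)

        # Synthetic stand-in for a streamflow dataset: lagged rainfall and
        # flow as inputs, next-day flow as target (illustrative only).
        n = 2000
        X = rng.random((n, 4))            # rain_t, rain_t-1, flow_t, flow_t-1
        y = 0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.standard_normal(n)

        model = ExtraTreesRegressor(n_estimators=200, random_state=0)
        model.fit(X[:1500], y[:1500])
        print("test R^2:", round(model.score(X[1500:], y[1500:]), 3))

        # Variable ranking used for ex-post physical interpretation:
        for name, imp in zip(["rain_t", "rain_t-1", "flow_t", "flow_t-1"],
                             model.feature_importances_):
            print(f"{name}: {imp:.3f}")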

  9. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book will give a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students with deeper insights into interconnect models in particular and interconnect in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  10. A JOINT VENTURE MODEL FOR ASSESSMENT OF PARTNER CAPABILITIES: THE CASE OF ESKOM ENTERPRISES AND THE AFRICAN POWER SECTOR

    Directory of Open Access Journals (Sweden)

    Y.V. Soni

    2012-01-01

    This article investigates the concept of joint ventures in the international energy sector and develops a joint venture model as a business development and assessment tool. The joint venture model presents a systematic method that relies on modern business intelligence to assess a potential business venture, using a balanced scorecard technique to screen potential partners based on their technological and financial core capabilities. The model can be used by business development managers to harness the potential of joint ventures to create economic growth and sustainable business expansion. Furthermore, partnerships with local companies can help to mitigate economic and political risk, and facilitate buy-in from the national governments that are normally the primary stakeholders in energy sector ventures (directly or indirectly). The particular case of Eskom Enterprises (Pty) Ltd, a wholly owned subsidiary of Eskom, is highlighted.

  11. A Hidden Markov Model method, capable of predicting and discriminating β-barrel outer membrane proteins

    Directory of Open Access Journals (Sweden)

    Hamodrakas Stavros J

    2004-03-01

    Background: Integral membrane proteins constitute about 20-30% of all proteins in the fully sequenced genomes. They come in two structural classes, the α-helical and the β-barrel membrane proteins, demonstrating different physicochemical characteristics, structure and localization. While transmembrane segment prediction for the α-helical integral membrane proteins appears to be an easy task nowadays, the same is much more difficult for the β-barrel membrane proteins. We developed a method, based on a Hidden Markov Model, capable of predicting the transmembrane β-strands of the outer membrane proteins of gram-negative bacteria, and discriminating those from water-soluble proteins in large datasets. The model is trained in a discriminative manner, aiming at maximizing the probability of correct predictions rather than the likelihood of the sequences. Results: The training has been performed on a non-redundant database of 14 outer membrane proteins with structures known at atomic resolution; it has been tested with a jackknife procedure, yielding a per-residue accuracy of 84.2% and a correlation coefficient of 0.72, whereas for the self-consistency test the per-residue accuracy was 88.1% and the correlation coefficient 0.824. The total number of correctly predicted topologies is 10 out of 14 in the self-consistency test, and 9 out of 14 in the jackknife. Furthermore, the model is capable of discriminating outer membrane from water-soluble proteins in large-scale applications, with a success rate of 88.8% and 89.2% for the correct classification of outer membrane and water-soluble proteins respectively, the highest rates obtained in the literature. That test has been performed independently on a set of known outer membrane proteins with low sequence identity with each other and also with the proteins of the training set. Conclusion: Based on the above, we developed a strategy that enabled us to screen the entire proteome of E. coli for
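
    The decoding step of such a model can be illustrated with a toy two-state HMM (transmembrane strand vs. loop) over a reduced hydrophobic/polar alphabet, decoded with the Viterbi algorithm. This is only a sketch of the machinery; the states, alphabet and probabilities are assumptions, not the paper's trained model.

        import numpy as np

        # Toy two-state HMM decoded with Viterbi (log-space).
        states = ["strand", "loop"]
        start = np.log([0.5, 0.5])
        trans = np.log([[0.8, 0.2],    # strand -> strand/loop
                        [0.3, 0.7]])   # loop   -> strand/loop
        emit = {"H": np.log([0.7, 0.3]),   # hydrophobic residue
                "P": np.log([0.3, 0.7])}   # polar residue

        def viterbi(seq):
            V = [start + emit[seq[0]]]
            back = []
            for ch in seq[1:]:
                scores = V[-1][:, None] + trans    # prev state x next state
                back.append(scores.argmax(axis=0))
                V.append(scores.max(axis=0) + emit[ch])
            path = [int(V[-1].argmax())]
            for bp in reversed(back):              # trace best path backwards
                path.append(int(bp[path[-1]]))
            return [states[s] for s in reversed(path)]

        print(viterbi("HHHPPHHHPPPP"))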

  12. Capability of Spaceborne Hyperspectral EnMAP Mission for Mapping Fractional Cover for Soil Erosion Modeling

    Directory of Open Access Journals (Sweden)

    Sarah Malec

    2015-09-01

    Soil erosion can be linked to the relative fractional cover of photosynthetic-active vegetation (PV), non-photosynthetic-active vegetation (NPV) and bare soil (BS), which can be integrated into erosion models as the cover-management C-factor. This study investigates the capability of EnMAP imagery to map fractional cover in a region near San Jose, Costa Rica, characterized by spatially extensive coffee plantations and grazing in mountainous terrain. Simulated EnMAP imagery is based on airborne hyperspectral HyMap data. Fractional cover estimates are derived in an automated fashion by extracting image endmembers to be used with a Multiple Endmember Spectral Mixture Analysis approach. The C-factor is calculated based on the fractional cover estimates determined independently for EnMAP and HyMap. Results demonstrate that with EnMAP imagery it is possible to extract quality endmember classes with important spectral features related to PV, NPV and soil, and to estimate relative cover fractions. This spectral information is critical to separate BS and NPV, which can greatly impact the C-factor derivation. From a regional perspective, we can use EnMAP to provide good fractional cover estimates that can be integrated into soil erosion modeling.
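
    The core inversion behind such fractional cover mapping is linear spectral unmixing; a minimal sketch with non-negative least squares is shown below. Full MESMA iterates over multiple candidate endmembers per class, and the four-band endmember spectra here are invented for illustration.

        import numpy as np
        from scipy.optimize import nnls

        # Illustrative endmember spectra (rows: bands; columns: PV, NPV, BS).
        E = np.array([[0.05, 0.20, 0.30],
                      [0.08, 0.25, 0.32],
                      [0.45, 0.30, 0.35],
                      [0.50, 0.40, 0.38]])

        # A mixed pixel: 60% PV, 10% NPV, 30% BS plus a little noise.
        pixel = 0.6 * E[:, 0] + 0.1 * E[:, 1] + 0.3 * E[:, 2] + 0.005

        f, _ = nnls(E, pixel)      # non-negative fractions
        f = f / f.sum()            # enforce sum-to-one (approximately)
        print({cls: round(val, 2) for cls, val in zip(["PV", "NPV", "BS"], f)})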

  13. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    Science.gov (United States)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is currently widely used in 3-dimensional (3D) modeling of physical reality with its developing visualization tools. The modeling of large and complicated phenomena is a challenging problem in terms of the computer graphics currently in use. However, it is possible to visualize such phenomena in 3D by using computer systems. 3D models are used in developing computer games, military training, urban planning, tourism, etc. The use of 3D models for planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes. However, the requirements of the models vary depending on the type and scope of the application. While high-level visualization, where photorealistic visualization techniques are widely used, is required for touristic and recreational purposes, an abstract visualization of the physical reality is generally sufficient for the communication of thematic information. The visual variables, which are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information. These kinds of 3D city models are called abstract models. Standardization of the technologies used for 3D modeling is now available through the use of CityGML. CityGML implements several novel concepts to support interoperability, consistency and functionality. For example, it supports different Levels-of-Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented in different LoD simultaneously, enabling the analysis and visualization of the same object with regard to different degrees of resolution. Furthermore, two CityGML data sets

  14. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation and uncertainty quantification, which are described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application's intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner of performing such an assessment. Ideally, all stakeholders should be represented and should contribute to an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below, and the resulting assessments for an example project are given.

  15. Selective pressurized liquid extraction technique capable of analyzing dioxins, furans, and PCBs in clams and crab tissue.

    Science.gov (United States)

    Subedi, Bikram; Aguilar, Lissette; Williams, E Spencer; Brooks, Bryan W; Usenko, Sascha

    2014-04-01

    A selective pressurized liquid extraction (SPLE) technique was developed for the analysis of polychlorodibenzo-p-dioxins and polychlorodibenzofurans (PCDD/Fs) and dioxin-like polychlorobiphenyls (dl-PCBs) in clam and crab tissue. The SPLE incorporated multiple cleanup adsorbents (alumina, florisil, silica gel, celite, and carbopack) within the extraction cell. Tissue extracts were analyzed by high-resolution gas chromatography coupled with electron capture negative ionization mass spectrometry. Mean recovery (n = 3) and percent relative standard deviation for PCDD/Fs and dl-PCBs in clams and crabs were 89 ± 2.3% and 85 ± 4.0%, respectively. The SPLE method was applied to clams and crabs collected from the San Jacinto River Waste Pits, a Superfund site in Houston, TX. The dl-PCB concentrations in clams and crabs ranged from 50 to 2,450 and 5 to 800 ng/g ww, respectively. Sample preparation time and solvents were reduced by 92% and 65%, respectively, compared to USEPA method 1613.

  16. Assessment of the capability of remote sensing and GIS techniques for monitoring reclamation success in coal mine degraded lands.

    Science.gov (United States)

    Karan, Shivesh Kishore; Samadder, Sukha Ranjan; Maiti, Subodh Kumar

    2016-11-01

    The objective of the present study is to monitor reclamation activity in mining areas. Monitoring of these reclaimed sites in the vicinity of mining areas and on closed Over Burden (OB) dumps is critical for improving the overall environmental condition, especially in developing countries where the areas around the mines are densely populated. The present study evaluated the reclamation success in the Block II area of Jharia coal field, India, using Landsat satellite images for the years 2000 and 2015. Four image processing methods (support vector machine, ratio vegetation index, enhanced vegetation index, and normalized difference vegetation index) were used to quantify the change in vegetation cover between the years 2000 and 2015. The study also evaluated the relationship between vegetation health and moisture content of the study area using remote sensing techniques. Statistical linear regression analysis revealed that the Normalized Difference Vegetation Index (NDVI) coupled with the Normalized Difference Moisture Index (NDMI) is the best method for vegetation monitoring in the study area when compared to other indices. A strong linear relationship (r² > 0.86) was found between NDVI and NDMI. An increase of 21%, from 213.88 ha in 2000 to 258.9 ha in 2015, was observed in the vegetation cover of the reclaimed sites for an open cast mine, indicating satisfactory reclamation activity. NDVI results indicated that vegetation health also improved over the years.
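
    Both indices are simple band ratios: NDVI = (NIR - Red)/(NIR + Red) and NDMI = (NIR - SWIR1)/(NIR + SWIR1). A minimal sketch, with illustrative reflectance values standing in for Landsat bands, is given below.

        import numpy as np

        def ndvi(nir, red):
            """NDVI = (NIR - Red) / (NIR + Red)."""
            return (nir - red) / (nir + red + 1e-10)

        def ndmi(nir, swir1):
            """NDMI = (NIR - SWIR1) / (NIR + SWIR1)."""
            return (nir - swir1) / (nir + swir1 + 1e-10)

        # Illustrative reflectance arrays standing in for Landsat bands.
        nir = np.array([0.45, 0.30, 0.12])
        red = np.array([0.08, 0.12, 0.10])
        swir = np.array([0.20, 0.18, 0.15])
        print("NDVI:", ndvi(nir, red).round(2),
              "NDMI:", ndmi(nir, swir).round(2))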

  17. The Analysis of Information Exchange Capability for Battlefield Networks Using M&S Techniques of the NetSPIN

    Science.gov (United States)

    2013-06-01

    ... introduction of new systems such as a UAV in a corps and a video conferencing system in a division. Particularly, the battlefield network is composed of simulated models ... the end-to-end delay of simulated operation messages and the

  18. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    in the wellbore); and (3) accurate approaches to account for the effects of reservoir heterogeneity and for the optimization of nonconventional well deployment. An overview of our progress in each of these main areas is as follows. A general purpose object-oriented research simulator (GPRS) was developed under this project. The GPRS code is managed using modern software management techniques and has been deployed to many companies and research institutions. The simulator includes general black-oil and compositional modeling modules. The formulation is general in that it allows for the selection of a wide variety of primary and secondary variables and accommodates varying degrees of solution implicitness. Specifically, we developed and implemented an IMPSAT procedure (implicit in pressure and saturation, explicit in all other variables) for compositional modeling as well as an adaptive implicit procedure. Both of these capabilities allow for efficiency gains through selective implicitness. The code treats cell connections through a general connection list, which allows it to accommodate both structured and unstructured grids. The GPRS code was written to be easily extendable so new modeling techniques can be readily incorporated. Along these lines, we developed a new dual porosity module compatible with the GPRS framework, as well as a new discrete fracture model applicable for fractured or faulted reservoirs. Both of these methods display substantial advantages over previous implementations. Further, we assessed the performance of different preconditioners in an attempt to improve the efficiency of the linear solver. As a result of this investigation, substantial improvements in solver performance were achieved.

  19. Spatio–temporal rain attenuation model for application to fade mitigation techniques

    OpenAIRE

    2004-01-01

    We present a new stochastic-dynamic model useful for the planning and design of gigahertz satellite communications using fade mitigation techniques. It is a generalization of the Maseng–Bakken model and targets dual-site, dual-frequency rain-attenuated satellite links. The outcome is a consistent and comprehensive model capable of yielding theoretical descriptions of: 1) the long-term power spectral density of rain attenuation; 2) rain fade slope; 3) the rain frequency scaling factor; 4) site diversity; a...
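
    The Maseng–Bakken family of models treats rain attenuation as lognormal with first-order (Ornstein-Uhlenbeck) dynamics on the underlying Gaussian process. The single-site sketch below illustrates that idea; the lognormal and dynamics parameters are assumed, not fitted link statistics.

        import numpy as np

        rng = np.random.default_rng(1)

        # Single-site synthesis: log-attenuation follows an Ornstein-
        # Uhlenbeck process, so A(t) is lognormal with temporal dynamics.
        m, sigma = -1.0, 1.2       # lognormal parameters of attenuation, dB
        beta = 2e-4                # dynamics parameter, 1/s
        dt, n = 1.0, 3600 * 6      # 1 s steps, 6 hours

        x = np.zeros(n)
        for k in range(n - 1):
            x[k + 1] = (x[k] - beta * x[k] * dt
                        + np.sqrt(2 * beta * dt) * rng.standard_normal())

        A = np.exp(m + sigma * x)  # attenuation time series, dB
        print(f"mean {A.mean():.2f} dB, max {A.max():.2f} dB")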

  20. Immune Modulating Capability of Two Exopolysaccharide-Producing Bifidobacterium Strains in a Wistar Rat Model

    Directory of Open Access Journals (Sweden)

    Nuria Salazar

    2014-01-01

    Fermented dairy products are the usual carriers for the delivery of probiotics to humans, Bifidobacterium and Lactobacillus being the most frequently used bacteria. In this work, the strains Bifidobacterium animalis subsp. lactis IPLA R1 and Bifidobacterium longum IPLA E44 were tested for their capability to modulate the immune response and insulin-dependent glucose homeostasis in male Wistar rats fed a standard diet. Three intervention groups were fed daily for 24 days with 10% skimmed milk, or with 10⁹ cfu of the corresponding strain suspended in the same vehicle. A significant increase of the suppressor-regulatory TGF-β cytokine occurred with both strains in comparison with a control (no intervention) group of rats; the highest levels were reached in rats fed IPLA R1. This strain presented an immune-protective profile, as it was able to reduce the production of the proinflammatory IL-6. Moreover, phosphorylated Akt kinase decreased in the gastrocnemius muscle of rats fed the strain IPLA R1, without affecting glucose, insulin, and the HOMA index in blood, or the levels of Glut-4 located in the membrane of muscle and adipose tissue cells. Therefore, the strain B. animalis subsp. lactis IPLA R1 is a probiotic candidate to be tested in mild-grade inflammation animal models.

  1. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria); Engel, Dominik [Salzburg Univ. (Austria); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2015-02-17

    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among the participating parties. Therefore, heterogeneous systems have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy; in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase of privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, if feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
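
    The balancing idea can be sketched numerically: map a design parameter to a privacy impact and an operational utility, then maximize a weighted trade-off. The functional forms, the metering-granularity parameter and the weight below are illustrative assumptions, not the authors' model.

        import numpy as np

        # Hypothetical smart-metering trade-off: finer granularity g
        # (readings/hour) raises operational utility but also privacy cost.
        g = np.linspace(0.25, 60, 500)
        operational = 1 - np.exp(-g / 6)     # diminishing returns
        privacy_cost = (g / 60) ** 0.7       # grows with resolution

        w = 0.5                              # stakeholder weight (assumed)
        objective = w * operational - (1 - w) * privacy_cost
        best = g[np.argmax(objective)]
        print(f"optimal granularity: {best:.1f} readings/hour")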

  2. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum

  3. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
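
    Of the four method families, the area metric is the easiest to sketch: it integrates the absolute difference between the model's predicted CDF and the empirical CDF of the validation data. The distributions below are invented for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Hypothetical validation data and model prediction distribution.
        data = rng.normal(10.5, 1.2, size=30)
        model = stats.norm(loc=10.0, scale=1.0)

        xs = np.linspace(data.min() - 4, data.max() + 4, 2000)
        emp_cdf = np.searchsorted(np.sort(data), xs, side="right") / data.size
        area = np.trapz(np.abs(model.cdf(xs) - emp_cdf), xs)
        print(f"area metric: {area:.3f} (same units as the response)")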

  4. Model of a neural network inertial satellite navigation system capable of estimating the earth's gravitational field gradient

    Science.gov (United States)

    Devyatisil'nyi, A. S.

    2016-09-01

    A model for recognizing inertial and satellite data on an object's motion that are delivered by a set of distributed onboard sensors (newtonmeters, gyros, satellite receivers) has been described. Specifically, the model is capable of estimating the parameters of the gravitational field.

  5. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of the practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization and simulation of those models, using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge with decision taking i...

  6. Study of Semi-Span Model Testing Techniques

    Science.gov (United States)

    Gatlin, Gregory M.; McGhee, Robert J.

    1996-01-01

    An investigation has been conducted in the NASA Langley 14- by 22-Foot Subsonic Tunnel in order to further the development of semi-span testing capabilities. A twin engine, energy efficient transport (EET) model with a four-element wing in a takeoff configuration was used for this investigation. Initially a full span configuration was tested and force and moment data, wing and fuselage surface pressure data, and fuselage boundary layer measurements were obtained as a baseline data set. The semi-span configurations were then mounted on the wind tunnel floor, and the effects of fuselage standoff height and shape as well as the effects of the tunnel floor boundary layer height were investigated. The effectiveness of tangential blowing at the standoff/floor juncture as an active boundary-layer control technique was also studied. Results indicate that the semi-span configuration was more sensitive to variations in standoff height than to variations in floor boundary layer height. A standoff height equivalent to 30 percent of the fuselage radius resulted in better correlation with full span data than no standoff or the larger standoff configurations investigated. Undercut standoff leading edges or the use of tangential blowing in the standoff/ floor juncture improved correlation of semi-span data with full span data in the region of maximum lift coefficient.

  7. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is usually a time-consuming process to set up, in real time, a 3D digital surface model (DSM) of an object with a complex surface. On the basis of the architectural survey project of "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way, that is, applying digital close-range photogrammetry and CAD techniques on the project site to establish the DSM for simulating ancient architectures with complex surfaces. The method has been proved very effective in practice.

  8. Capability Paternalism

    NARCIS (Netherlands)

    Claassen, R.J.G.

    2014-01-01

    A capability approach prescribes paternalist government actions to the extent that it requires the promotion of specific functionings, instead of the corresponding capabilities. Capability theorists have argued that their theories do not have much of these paternalist implications, since promoting c

  9. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  10. Multiscale Modeling of Nano-scale Phenomena: Towards a Multiphysics Simulation Capability for Design and Optimization of Sensor Systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; McElfresh, M; Lee, C; Balhorn, R; White, D

    2003-12-01

    In this white paper, a road map is presented to establish a multiphysics simulation capability for the design and optimization of sensor systems that incorporate nanomaterials and technologies. The Engineering Directorate's solid/fluid mechanics and electromagnetic computer codes will play an important role in both multiscale modeling and integration of required physics issues to achieve a baseline simulation capability. Molecular dynamics simulations, performed primarily in the BBRP, CMS and PAT directorates, will provide information for the construction of multiscale models. All of the theoretical developments will require closely coupled experimental work to develop material models and validate simulations. The plan is synergistic and complementary with the Laboratory's emerging core competency of multiscale modeling. The first application of the multiphysics computer code is the simulation of a "simple" biological system (protein recognition utilizing synthesized ligands) that has a broad range of applications including detection of biological threats, presymptomatic detection of illnesses, and drug therapy. While the overall goal is to establish a simulation capability, the near-term work is mainly focused on (1) multiscale modeling, i.e., the development of "continuum" representations of nanostructures based on information from molecular dynamics simulations, and (2) experiments for model development and validation. A list of LDRD ER proposals and ongoing projects that could be coordinated to achieve these near-term objectives and demonstrate the feasibility and utility of a multiphysics simulation capability is given.

  11. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.

  12. Re-framing Inclusive Education Through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    Directory of Open Access Journals (Sweden)

    Maryam Dalkilic

    2016-09-01

    Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational inclusion with Amartya Sen's (1985) Capability Approach. This integrated framework engages children, educators, and families in principled practices that acknowledge differences, rather than deficits, and enable attention to enhancing the capabilities of children with disabilities in inclusive educational environments. Implications include the development of policy that clarifies the process required to negotiate capabilities and valued functionings and the types of resources required to permit children, educators, and families to create relationally inclusive environments.

  13. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of the metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique becomes more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, so it can be used as a stop criterion for sequential sampling.
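
    A minimal sketch of the idea with a Gaussian process (kriging) surrogate is shown below: instead of cross-validating, the predicted mean and variance are averaged over the domain and monitored as sampling proceeds. The test function, kernel and stopping use are assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(5)

        def f(x):
            return np.sin(3 * x) + 0.5 * x   # true response (illustrative)

        X = rng.uniform(0, 3, size=(8, 1))   # samples gathered so far
        y = f(X).ravel()

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                      alpha=1e-8).fit(X, y)

        # Average the kriging mean/variance over the domain; sampling could
        # stop once the average predicted variance falls below a threshold.
        grid = np.linspace(0, 3, 400).reshape(-1, 1)
        mean, std = gp.predict(grid, return_std=True)
        print(f"avg predicted variance: {np.mean(std ** 2):.4f}")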

  14. Q-DPM: An Efficient Model-Free Dynamic Power Management Technique

    CERN Document Server

    Li, Min; Yao, Richard; Yan, Xiaolang

    2011-01-01

    When applying the Dynamic Power Management (DPM) technique to pervasively deployed embedded systems, the technique needs to be very efficient so that it is feasible to implement it on low-end processors with tight memory budgets. Furthermore, it should have the capability to track time-varying behavior rapidly, because such variation is an inherent characteristic of real-world systems. Existing methods, which are usually model-based, may not satisfy the aforementioned requirements. In this paper, we propose a model-free DPM technique based on Q-learning. Q-DPM is much more efficient because it removes the overhead of the parameter estimator and the mode-switch controller. Furthermore, its policy optimization is performed via consecutive online trials, which also leads to very rapid response to time-varying behavior.
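
    A minimal model-free sketch in the spirit of Q-DPM is shown below: tabular Q-learning over (power mode, queue length) states, choosing between sleep and active modes to trade power against latency. The costs, arrival rate and state space are illustrative assumptions, not the paper's setup.

        import random

        MODES = ["sleep", "active"]
        ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
        Q = {}

        def q(state, action):
            """Look up (and lazily initialise) a Q-value."""
            return Q.setdefault(state, {m: 0.0 for m in MODES})[action]

        def choose(state):
            if random.random() < EPS:          # epsilon-greedy exploration
                return random.choice(MODES)
            return max(MODES, key=lambda a: q(state, a))

        random.seed(0)
        state = ("sleep", 0)
        for _ in range(20000):
            action = choose(state)
            arrivals = 1 if random.random() < 0.3 else 0     # request arrivals
            served = 1 if action == "active" and state[1] > 0 else 0
            queue = min(state[1] + arrivals - served, 5)
            cost = (2.0 if action == "active" else 0.1) + 0.5 * queue
            nxt = (action, queue)
            best_next = max(q(nxt, a) for a in MODES)
            old = q(state, action)
            Q[state][action] = old + ALPHA * (-cost + GAMMA * best_next - old)
            state = nxt

        print({s: max(acts, key=acts.get) for s, acts in sorted(Q.items())})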

  15. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its application in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we have described the workings of various vehicle make and model recognition techniques and compared these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.
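
    The classification stage of such a system can be sketched with a k-NN classifier over per-image feature vectors; here random vectors stand in for corner-strength features, and the class names are invented.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(7)

        # Each frontal-view image is reduced to a feature vector (random
        # stand-ins for corner-strength features, three invented classes).
        n_per_class, dim = 40, 32
        classes = ["civic", "corolla", "golf"]
        X = np.vstack([rng.normal(i, 1.0, size=(n_per_class, dim))
                       for i in range(3)])
        y = np.repeat(classes, n_per_class)

        clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        query = rng.normal(1, 1.0, size=(1, dim))   # unseen image features
        print("predicted model:", clf.predict(query)[0])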

  16. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  17. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  18. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
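
    The oldest of these methods, static (Guyan) condensation, is compact enough to sketch: partition the stiffness matrix into master and slave degrees of freedom and eliminate the slaves. The 4-DOF spring chain below is an invented example.

        import numpy as np

        # Guyan (static) condensation: K_red = Kmm - Kms @ inv(Kss) @ Ksm.
        k = 1000.0
        K = k * np.array([[ 2, -1,  0,  0],
                          [-1,  2, -1,  0],
                          [ 0, -1,  2, -1],
                          [ 0,  0, -1,  1]])
        masters, slaves = [0, 3], [1, 2]

        Kmm = K[np.ix_(masters, masters)]
        Kms = K[np.ix_(masters, slaves)]
        Ksm = K[np.ix_(slaves, masters)]
        Kss = K[np.ix_(slaves, slaves)]

        K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
        print(K_red)   # 2x2 model: exact static response at the masters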

  19. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures... We also present a new technique for range reporting problems in the pointer machine and the I/O-model. With this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  20. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, supporting the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.
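
    The continuous simulation side of such a methodology reduces to stock-and-flow equations integrated over time. The sketch below shows a toy work-in-progress stock of the kind iThink executes; the rates, capacity and demand pulse are invented, not the case study's values.

        # Stock-and-flow sketch: a WIP stock fed by an order rate, drained
        # by a capacity-limited production rate (Euler integration).
        dt, horizon = 0.25, 60.0             # time step and horizon, days
        wip, completed = 0.0, 0.0
        capacity = 12.0                      # max production, bearings/day

        for k in range(int(horizon / dt)):
            t = k * dt
            order_rate = 10.0 + (5.0 if 20 <= t < 30 else 0.0)  # demand pulse
            production_rate = min(capacity, wip / 2.0)          # ~2-day cycle
            wip += (order_rate - production_rate) * dt          # stock update
            completed += production_rate * dt

        print(f"final WIP: {wip:.1f} units, total produced: {completed:.0f}")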

  1. IAC - INTEGRATED ANALYSIS CAPABILITY

    Science.gov (United States)

    Frisch, H. P.

    1994-01-01

    Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 3) System dynamics - The DISCOS simulation program which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an

  2. Top-level modeling of an als system utilizing object-oriented techniques

    Science.gov (United States)

    Rodriguez, L. F.; Kang, S.; Ting, K. C.

    The possible configuration of an Advanced Life Support (ALS) System capable of supporting human life for long-term space missions continues to evolve as researchers investigate potential technologies and configurations. To facilitate the decision process, the development of acceptable, flexible, and dynamic mathematical computer modeling tools capable of system-level analysis is desirable. Object-oriented techniques have been adopted to develop a dynamic top-level model of an ALS system. This approach has several advantages; among these, object-oriented abstractions of systems are inherently modular in architecture. Thus, models can initially be somewhat simplistic, while allowing for adjustments and improvements. In addition, by coding the model in Java, the model can be implemented via the World Wide Web, greatly encouraging the utilization of the model. Systems analysis is further enabled with the utilization of a readily available backend database containing information supporting the model. The subsystem models of the ALS system model include Crew, Biomass Production, Waste Processing and Resource Recovery, Food Processing and Nutrition, and the Interconnecting Space. Each subsystem model and an overall model have been developed. Presented here is the procedure utilized to develop the modeling tool, the vision of the modeling tool, and the current focus for each of the subsystem models.
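
    The modularity argument can be illustrated with a skeletal object-oriented decomposition (shown here in Python rather than the Java used by the authors); the interfaces and per-day mass flows are rough illustrative values, not the cited model's data.

        class Subsystem:
            def __init__(self, name):
                self.name = name

            def step(self, flows):
                """Advance one simulated day, mutating the shared flows."""
                raise NotImplementedError

        class Crew(Subsystem):
            def step(self, flows):
                flows["o2"] -= 0.84 * 4    # ~0.84 kg O2/person-day, 4 crew
                flows["co2"] += 1.0 * 4

        class BiomassProduction(Subsystem):
            def step(self, flows):
                uptake = min(flows["co2"], 3.0)   # daily CO2 uptake capacity
                flows["co2"] -= uptake
                flows["o2"] += uptake * 32 / 44   # stoichiometric O2 release

        flows = {"o2": 50.0, "co2": 2.0}
        subsystems = [Crew("crew"), BiomassProduction("biomass")]
        for day in range(10):
            for sub in subsystems:    # modularity: subsystem models swap freely
                sub.step(flows)
        print({k: round(v, 1) for k, v in flows.items()})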

  3. Modelling of pulverized coal boilers: review and validation of on-line simulation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Diez, L.I.; Cortes, C.; Campo, A. [University of Zaragoza, Zaragoza (Spain). Centro de Investigacion de Recursos y Consumos Energeticos (CIRCE)

    2005-07-01

    Thermal modelling of large pulverized fuel utility boilers has reached a very remarkable development, through the application of CFD techniques and other advanced mathematical methods. However, due to the computational requirements, on-line monitoring and simulation tools still rely on lumped models and semiempirical approaches, which are often strongly simplified and not well connected with sound theoretical basis. This paper reviews on-line modelling techniques, aiming at the improvement of their capabilities, by means of the revision and modification of conventional lumped models and the integration of off-line CFD predictions. The paper illustrates the coherence of monitoring calculations as well as the validation of the on-line thermal simulator, starting from real operation data from a case-study unit. The outcome is that it is possible to significantly improve the accuracy of on-line calculations provided by conventional models, taking into account the singularities of large combustion systems and coupling offline CFD predictions for selected scenarios.

  4. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters along with the stochastic nature of this technology leads to a complex process control, which requires a work focused in process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling in Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of process parameters. Combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.
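
    The ANOVA/regression half of the approach amounts to fitting a response surface for the kerf descriptors in the four process parameters. A minimal sketch with a quadratic polynomial regression is given below; the synthetic trials and coefficients are invented stand-ins for milling experiments.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(11)

        # Columns: pressure (MPa), abrasive flow (g/s), stand-off (mm),
        # feed rate (mm/min). Synthetic trials for illustration only.
        X = rng.uniform([100, 1, 2, 500], [300, 10, 6, 4000], size=(60, 4))
        depth = (0.002 * X[:, 0] + 0.03 * X[:, 1] - 0.01 * X[:, 2]
                 - 1e-4 * X[:, 3] + rng.normal(0, 0.02, 60))

        model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        model.fit(X, depth)
        print("R^2 on training trials:", round(model.score(X, depth), 3))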

  5. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    Science.gov (United States)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moire, holographic, and heterodyne interferometry techniques were also looked at. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanner and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  6. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at 0.5 [...]. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.
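
    The simplest of the candidate models is Kogelnik's coupled-wave approximation, which for an unslanted volume transmission grating at Bragg incidence gives a diffraction efficiency of η = sin²(πΔn·d / (λ·cosθ_B)). The sketch below evaluates it for assumed grating parameters; rigorous coupled-wave analysis would be needed away from this regime.

        import numpy as np

        # Kogelnik two-wave estimate of VPH diffraction efficiency at Bragg
        # incidence. Grating values are assumed, not BigBOSS design values.
        wavelength = 600e-9        # m
        dn = 0.05                  # index modulation amplitude
        thickness = 4e-6           # m
        theta_b = np.deg2rad(15)   # Bragg angle inside the medium

        eta = np.sin(np.pi * dn * thickness
                     / (wavelength * np.cos(theta_b))) ** 2
        print(f"predicted first-order efficiency: {eta:.2%}")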

  7. HYDROïD humanoid robot head with perception and emotion capabilities :Modeling, Design and Experimental Results

    Directory of Open Access Journals (Sweden)

    Samer eAlfayad

    2016-04-01

    In the framework of the HYDROïD humanoid robot project, this paper describes the modeling and design of an electrically actuated head mechanism. Perception and emotion capabilities are considered in the design process. Since the HYDROïD humanoid robot is hydraulically actuated, the choice of electrical actuation for the head mechanism addressed in this paper is justified. Considering perception and emotion capabilities leads to a total number of 15 degrees of freedom for the head mechanism, which are split across four main sub-mechanisms: the neck, the mouth, the eyes and the eyebrows. Biological data and the kinematic performance of the human head are taken as inputs to the design process. A new solution of uncoupled eyes is developed to possibly address the master-slave process that links the human eyes, as well as vergence capabilities. Each sub-system is modeled in order to obtain its equations of motion, frequency response and transfer function. The neck pitch rotation is given as a study example. Then, the head mechanism performance is presented through a comparison between model and experimental results, validating the hardware capabilities. Finally, the head mechanism is integrated on the HYDROïD upper body. An object tracking experiment coupled with emotional expressions is carried out to validate the synchronization of the eye rotations with the body motions.
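
    As an illustration of the kind of sub-system model described, the sketch below builds a second-order transfer function G(s) = K/(Js² + bs) for a neck-pitch joint and evaluates its frequency response; the inertia, damping and gain values are assumptions, not HYDROïD's identified parameters.

        import numpy as np
        from scipy import signal

        # Second-order neck-pitch joint: J*theta'' + b*theta' = K*u,
        # i.e. G(s) = K / (J s^2 + b s). Parameter values are illustrative.
        J, b, K = 0.02, 0.15, 0.8
        G = signal.TransferFunction([K], [J, b, 0])

        w = np.logspace(-1, 3, 200)
        w, mag, phase = signal.bode(G, w)
        print(f"gain at 1 rad/s: {mag[w.searchsorted(1.0)]:.1f} dB")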

  8. Capability ethics

    NARCIS (Netherlands)

    I.A.M. Robeyns (Ingrid)

    2012-01-01

    textabstractThe capability approach is one of the most recent additions to the landscape of normative theories in ethics and political philosophy. Yet in its present stage of development, the capability approach is not a full-blown normative theory, in contrast to utilitarianism, deontological

  11. Production of a New Model for Evaluation of Iran Ecological Capabilities in Order to Establish Services and Civil Development Application

    Directory of Open Access Journals (Sweden)

    J. Nouri

    2004-01-01

    This study aims to design a new model for service and civil development applications, to be used in evaluating Iran's ecological capability. For this purpose, in the first step the frequency of sustainable and unsustainable ecological factors in Iran was determined. In the next step, the Delphi method, itself a branch of phase-theory methods, was used. The priorities of effective ecological factors and the frequency value of each factor were determined by completing 750 questionnaires for the desired branches (the Delphi group). Questionnaire data were analyzed using SPSS 11.0. After it was designed, the model was introduced into a geographical information system using the Arcinfo program. A model sensitivity analysis was performed to determine how sensitive the favourable responses were to specific changes in the objective function, using the simplex method in the Lingo software. This model is applied in ecological capability evaluation during the analysis of ecological resources in the area under examination, after the preparation of environmental unit maps; indeed, the environmental unit map is considered the base map for ecological capability evaluation in this study. To assess the capabilities of the new method, the ecological capability of District 22, Municipality of Tehran, was evaluated as a case study, and a service and civil development application map was prepared using the Arc-view GIS 3.2a program. According to the new method, the scores given to environmental units vary from zero to sixty-five. There are restricting factors, such as environmental units along the river path, fold passages and hilly areas, which hinder these units from receiving service and civil development applications.

  12. Designing Customer Agility Model Based on Dynamic Capabilities: Effect of IT Competence, Entrepreneurial Alertness and Market Acuity

    Directory of Open Access Journals (Sweden)

    Seyed Hamid Khodadad Hoseini

    2012-06-01

    Full Text Available Today organizations face great environmental turbulence, and high levels of turbulence can paralyze a firm's operations. In such environments, the outputs of organizational processes depend on the firm's ability to manage change and on its flexibility. Since one of the important changes in turbulent environments is the shifting of customers' needs and preferences, and since the modern marketing approach centers on customers' needs, customer agility has been identified as a vital competency for the survival of organizations. Given its critical role in turbulent competitive environments, the concept has attracted many management researchers in recent years; it is therefore vital for organizations to find out how to achieve this capability, yet very little research has been done in this regard. Because dynamic capabilities are one of the important tools for achieving customer agility, this research proposes a model of the formation of customer agility based on dynamic capabilities and examines it in the electronics industry of Iran. The model includes IT competence, entrepreneurial alertness and market acuity for improving the output of organizational processes, and it was developed from three streams of literature: strategic management work on dynamic capabilities, entrepreneurship, and information technology. Findings from the sample support the research model. In addition, it is concluded that the dynamic capabilities of the organization help shape customer agility, and that customer agility has a positive effect on the quality and efficiency of process outputs.

  14. ACCURACY ANALYSIS OF WRIGHT'S CAPABILITY INDEX "CS" AND MODELLING NON-NORMAL DATA USING STATISTICAL SOFTWARE-A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2015-06-01

    Full Text Available Process capability indices (PCIs) have been widely used to summarize process performance relative to a set of specification limits. Their proper use rests on assumptions that do not always hold, so whether a PCI truly reflects the performance of a process is sometimes questionable. Most PCIs, including Cp, Cpk, Cpm and Cpmk, neglect changes in the shape of the distribution, an important indicator of problems in skewness-prone processes. Wright proposed the process capability index Cs to detect shape changes due to skewness by incorporating a penalty for skewness. In this paper, the effect of skewness on the accuracy of Wright's capability index Cs is studied, and Cs is compared with the Cp, Cpk, Cpm and Cpmk indices when the distribution of the quality characteristic considered (spring force) is slightly skewed. The paper also discusses modelling the non-normal data using statistical software and compares the results with those of other methods.
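
    As a rough illustration of the indices discussed above, the sketch below computes Cp, Cpk, Cpm, Cpmk and a skewness-penalized index in the spirit of Wright's Cs. The Cs form used here (a |mu3|/sigma term added under the radical of Cpm) follows the commonly cited definition and should be checked against Wright's original paper before use; the data and limits are invented for the example.

```python
# Minimal sketch: classical process capability indices plus a
# skewness-penalized index in the spirit of Wright's Cs. The Cs form
# (|mu3|/sigma added under the radical of Cpm) is an assumption to be
# verified against the original definition.
import numpy as np

def capability_indices(x, lsl, usl, target):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    mu3 = ((x - mu) ** 3).mean()              # third central moment
    cp   = (usl - lsl) / (6 * sigma)
    cpk  = min(usl - mu, mu - lsl) / (3 * sigma)
    cpm  = (usl - lsl) / (6 * np.sqrt(sigma**2 + (mu - target)**2))
    cpmk = min(usl - mu, mu - lsl) / (3 * np.sqrt(sigma**2 + (mu - target)**2))
    cs   = min(usl - mu, mu - lsl) / (
        3 * np.sqrt(sigma**2 + (mu - target)**2 + abs(mu3) / sigma))
    return {"Cp": cp, "Cpk": cpk, "Cpm": cpm, "Cpmk": cpmk, "Cs": cs}

# Example: a slightly right-skewed sample standing in for spring-force data
rng = np.random.default_rng(0)
sample = 10 + rng.gamma(shape=4.0, scale=0.25, size=500)
print(capability_indices(sample, lsl=9.0, usl=13.0, target=11.0))
```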

  15. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen

  17. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  18. A Causal Model of the Quality Activities Process: Exploring the Links between Quality Capabilities, Competitiveness and Organizational Performance

    Directory of Open Access Journals (Sweden)

    Cheng-tao Yu

    2014-10-01

    Full Text Available The purpose of this study is to examine the relationships between Total Quality Management (TQM) practices, quality capabilities, competitiveness and firm performance. TQM is conceptualized here as soft and hard practices, and an empirical analysis based on an extensive validation process was applied to refine the construct scales. The sample consists of 423 valid responses, analyzed with Structural Equation Modeling (SEM). The results show that soft TQM practices have a direct, positive and significant relationship with quality capabilities, competitive strategies and organizational performance; an indirect, positive and significant effect on organizational performance through quality capabilities and competitive strategies was also observed. Hypotheses H3b, H4b and H6b are not supported; the rest are in line with the model. In particular, the results indicate that soft TQM practices are the most important resource, with strong effects on organizational performance. These findings may help managers implement TQM practices so as to allocate resources effectively and improve financial performance. Managers should therefore consider that improvement in soft TQM supports the successful implementation of quality capabilities, competitive advantage and organizational performance; effort on the social aspects of TQM activities is particularly key to improving performance.

  19. Improving high-altitude emp modeling capabilities by using a non-equilibrium electron swarm model to monitor conduction electron evolution

    Science.gov (United States)

    Pusateri, Elise Noel

    additional physical parameters such as the O2 electron attachment rate, recombination rate, and mutual neutralization rate. This necessitates tracking the positive and negative ion densities in the swarm model. Adding these parameters, especially electron attachment, is important at lower EMP altitudes where atmospheric density is high. We compare swarm model equilibrium temperatures and times using the HLO and BOLSIG+ coefficients for a uniform electric field of 1 StatV/cm for a range of atmospheric heights. This is done in order to test sensitivity to the swarm parameters used in the swarm model. It is shown that the equilibrium temperature and time are sensitive to the modifications in the collision frequency and ionization rate based on the updated electron interaction cross sections. We validate the swarm model by comparing ionization coefficients and equilibrium drift velocities to experimental results over a wide range of reduced electric field values. The final part of the PhD thesis work includes integrating the swarm model into CHAP-LA. We discuss the physics included in the CHAP-LA EMP model and demonstrate EMP damping behavior caused by the ohmic model at high altitudes. We report on numerical techniques for incorporation of the swarm model into CHAP-LA's Maxwell solver. This includes a discussion of integration techniques for Maxwell's equations in CHAP-LA using the swarm model calculated conduction current. We show improvements on EMP parameter calculations when modeling a high altitude, upward EMP scenario. This provides a novel computational capability that will have an important impact on the atmospheric and EMP research community.

  20. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City" or "Digital City"; they are essentially computerized models of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Three main Geomatics approaches are generally used for generating virtual 3D city models: the first uses conventional techniques such as vector map data, DEMs and aerial images; the second is based on high-resolution satellite images with laser scanning; and the third uses terrestrial images through close-range photogrammetry with DSM and texture mapping. This paper begins with an introduction to the various Geomatics techniques for 3D city modeling, divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods) and one based on data-input techniques (photogrammetry versus laser techniques). After a detailed study, the paper presents its conclusions, a short justification and analysis, and the present trend in 3D city modeling, giving an overview of both the generation of virtual 3D city models using Geomatics techniques and the applications of such models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model; each technique and method has some advantages and some drawbacks. Point-cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  1. Dynamic capabilities

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe; Stenger, Marianne

    2013-01-01

    it was dominated by a lack of systematism, assessment, monitoring, marketing speculations and feasibility calculation. Furthermore, the sphere was dictated by asymmetric supplier-customer relationships and negotiation power leading, among other possible factors, to meager profitability.......The consequences of dynamic capabilities (i.e. innovation performance and profitability) is an under researched area in the growing body of literature on dynamic capabilities and innovation management. This study aims to examine the relationship between dynamic capabilities, innovation performance...... and profitability of small and medium sized manufacturing enterprises operating in volatile environments. A multi-case study design was adopted as research strategy. The findings reveal a positive relationship between dynamic capabilities and innovation performance in the case companies, as we would expect. It was...

  2. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
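
    For readers unfamiliar with pseudo values: the jackknife pseudo-observation for patient i at horizon t is theta_i = n*S(t) - (n-1)*S_{-i}(t), where S is the Kaplan-Meier estimate computed with and without patient i. A minimal sketch follows; the lifelines package and the column names are assumptions, not the authors' code.

```python
# Minimal sketch of jackknife pseudo-values for the survival probability
# at a fixed horizon; lifelines is an assumed toolchain, not the paper's.
import numpy as np
from lifelines import KaplanMeierFitter

def pseudo_values(durations, events, horizon):
    """theta_i = n*S(t) - (n-1)*S_{-i}(t) for each subject i."""
    durations = np.asarray(durations, float)
    events = np.asarray(events, bool)
    n = len(durations)
    s_full = KaplanMeierFitter().fit(durations, events).predict(horizon)
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i              # leave subject i out
        km_i = KaplanMeierFitter().fit(durations[mask], events[mask])
        out[i] = n * s_full - (n - 1) * km_i.predict(horizon)
    return out  # continuous targets usable by any regression technique

# Usage (hypothetical columns): pv = pseudo_values(df["months"], df["died"], 60)
```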

  3. Transfer of ocean modelling capability to two scientists of the National Institute of Oceanography of India

    NARCIS (Netherlands)

    Holthuijsen, L.H.; Booij, N.

    1985-01-01

    Two scientists from the National Institute of Oceanography of India have been trained to use the storm surge model DUCHESS and the wave model DOLPHIN. The results are published separately in two reports. This is the first of them.

  5. Applicability of the capability maturity model for engineer-to-order firms

    NARCIS (Netherlands)

    Veldman, J.; Klingenberg, W.

    2009-01-01

    Most of the well-known management and improvement systems and techniques, such as Lean Production (e.g. Just-In-Time (JIT) pull production, one piece flow) and Six Sigma (reduction in variation) were developed in high volume industries. In order to measure the progress of the implementation of such

  10. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  11. Using data mining techniques for building fusion models

    Science.gov (United States)

    Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.

    2003-03-01

    Over the past decade many techniques have been developed which attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well provided one has a model or a valid representation of activity. In reality, however, this is usually not the case. Models that do exist were in many cases hand-crafted, required many man-hours to develop, and are brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, the theory and techniques we have developed, and the results of a set of experiments.

  12. Modeling of a stair-climbing wheelchair mechanism with high single-step capability.

    Science.gov (United States)

    Lawn, Murray J; Ishimatsu, Takakazu

    2003-09-01

    In the field of providing mobility for the elderly and disabled, dealing with stairs remains largely unresolved. This paper presents the development of a stair-climbing wheelchair mechanism with high single-step capability. The mechanism is based on front and rear wheel clusters connected to the base (chair) via powered linkages so as to permit both autonomous stair ascent and descent in the forward direction, and high single-step functionality, such as for direct entry to and from a van. Primary considerations were inherent stability, a mechanism physically no larger than a standard powered wheelchair, aesthetics, and the use of readily available low-cost components.

  13. Semantic Model of Variability and Capabilities of IoT Applications for Embedded Software Ecosystems

    DEFF Research Database (Denmark)

    Tomlein, Matus; Grønbæk, Kaj

    2016-01-01

    have also identified a need to model their deployment topology and functionality in order to enable their seamless integration into the platform. In this paper, we draw from research in related fields and present a model of IoT applications. It is built using semantic annotations and uses semantic...... conclude that it is suitable for modeling applications in IoT software ecosystems since it is more adaptable and expressive than the alternatives....

  14. Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management

    Science.gov (United States)

    2016-11-16

    Validation tests for the basic cloud model used a cloud simulation on 32 compute nodes to study the performance of test problems under varying load. Project tasks included construction of simple validation tests for the basic cloud model and development of sub-models to represent crises/catastrophe scenarios.

  16. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    . In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  17. Matrix eigenvalue model: Feynman graph technique for all genera

    Energy Technology Data Exchange (ETDEWEB)

    Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation); Eynard, Bertrand [SPhT, CEA, Saclay (France)

    2006-12-15

    We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with an arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves)
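
    For orientation, such an eigenvalue model is conventionally written as a beta-ensemble partition function; the schematic form below is standard, though normalizations vary between papers and may differ from the authors' conventions.

```latex
% Schematic beta-ensemble partition function; conventions vary by paper.
Z_N(\beta) = \int \prod_{i=1}^{N} d\lambda_i \,
             \prod_{i<j} |\lambda_i - \lambda_j|^{2\beta} \,
             e^{-N \sum_{i=1}^{N} V(\lambda_i)},
\qquad
\mathcal{F} = \log Z_N ,
```

    and the diagrammatic technique organizes the free energy F order by order in the 1/N expansion; beta = 1 recovers the Hermitian one-matrix model.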

  18. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.
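
    As a point of reference for the linear POD-Galerkin baseline that the paper extends nonlinearly, a minimal POD step is a truncated SVD of a snapshot matrix followed by projection onto the leading modes; the sketch below is illustrative only, with invented variable names.

```python
# Minimal sketch of the linear POD step that the "nonlinear extension"
# generalizes: truncated SVD of a snapshot matrix, then projection of a
# state vector onto the leading modes. Illustrative only.
import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_dof, n_snapshots) array; returns the leading r modes."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    print(f"r={r} modes capture {energy[r - 1]:.4f} of snapshot energy")
    return u[:, :r]

# Reduce and reconstruct a state vector x:
# phi = pod_basis(X, r=5); a = phi.T @ x; x_approx = phi @ a
```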

  19. Induction generator model in phase coordinates for fault ride-through capability studies of wind turbines

    DEFF Research Database (Denmark)

    Ruano, Luis Alberto Fajardo; Iov, Florin; Medina Reos, J. Aurelio

    2007-01-01

    A phase coordinates induction generator model with time varying electrical parameters as influenced by magnetic saturation and rotor deep bar effects, is presented in this paper. The model exhibits a per-phase formulation, uses standard data sheet for characterization of the electrical parameters...

  20. An Evacuation Emergency Response Model Coupling Atmospheric Release Advisory Capability Output.

    Science.gov (United States)

    1983-01-10

    Concentration contours coupled with the SMI evacuation model were calculated by using the MATHEW and ADPIC codes, which run on a CDC 7600 computer within a matter of minutes after the computer center is notified. These two models are described briefly

  1. Final Report for LDRD Project 05-ERD-050: "Developing a Reactive Chemistry Capability for the NARAC Operational Model (LODI)"

    Energy Technology Data Exchange (ETDEWEB)

    Cameron-Smith, P; Grant, K; Connell, P

    2008-02-11

    In support of the National Security efforts of LLNL, this project addressed the existing imbalance between dispersion and chemical capabilities of LODI (Lagrangian Operational Dispersion Integrator--the NARAC operational dispersion model). We have demonstrated potentially large effects of atmospheric chemistry on the impact of chemical releases (e.g., industrial chemicals and nerve agents). Prior to our work, LODI could only handle chains of first-order losses (exponential decays) that were independent of time and space, limiting NARAC's capability to respond when reactive chemistry is important. We significantly upgraded the chemistry and aerosol capability of LODI to handle (1) arbitrary networks of chemical reactions, (2) mixing and reactions with ambient species, (3) evaporation and condensation of aerosols, and (4) heat liberated from chemical reactions and aerosol condensation (which can cause a cold and dense plume hugging the ground to rise into the atmosphere, then descend to the ground again as droplets). When this is made operational, it will significantly improve NARAC's ability to respond to terrorist attacks and industrial accidents that involve reactive chemistry, including many chemical agents and toxic industrial chemicals (TICS). As a dual-use, the resulting model also has the potential to be a state-of-the-art air-quality model. Chemical releases are the most common type of airborne hazardous release and many operational applications involve such scenarios. The new capability we developed is therefore relevant to the needs of the Department of Energy (DOE), Department of Homeland Security (DHS) and Department of Defense (DoD).

  2. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

    Full Text Available Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, with slopes ranging from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked to false breaking during the first positive inundation, which was not observed for the reference models. None of the models was able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale and appear at fine resolution during inundation; as a consequence, convergence was not always obtained. There is reason to believe that this instability may be a general problem for Boussinesq models in fjords.

  3. Incorporation of NREL Solar Advisor Model Photovoltaic Capabilities with GridLAB-D

    Energy Technology Data Exchange (ETDEWEB)

    Tuffner, Francis K.; Hammerstrom, Janelle L.; Singh, Ruchi

    2012-10-19

    This report provides a summary of the work updating the photovoltaic model inside GridLAB-D. The National Renewable Energy Laboratory Solar Advisor Model (SAM) was utilized as a basis for algorithms and validation of the new implementation. Subsequent testing revealed that the two implementations are nearly identical in both solar impacts and power output levels. This synergized model not only aids the system-level impact studies of GridLAB-D, but also allows more specific details of a particular site to be explored via the SAM software.

  4. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  5. A finite element parametric modeling technique of aircraft wing structures

    Institute of Scientific and Technical Information of China (English)

    Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu

    2013-01-01

    A finite element parametric modeling method for aircraft wing structures is proposed in this paper because finite element analysis pre-processing is time-consuming. The work targets the preliminary design phase of aircraft structures. A knowledge-driven system for fast finite element modeling is built. Based on this method and a template parametric technique, knowledge from the modeling process, including design methods, rules, and expert experience, is encapsulated, and a finite element model is established automatically, which greatly improves the speed, accuracy, and standardization of modeling. The skeleton model, geometric mesh model, and finite element model (including finite element mesh and property data) are established through parametric description and automatic update. The results show that the method settles a series of problems of parameter association and model update in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.

  6. MENS, an Info-Computational Model for (Neuro-)cognitive Systems Capable of Creativity

    Directory of Open Access Journals (Sweden)

    Andrée C. Ehresmann

    2012-09-01

    Full Text Available MENS is a bio-inspired model for higher level cognitive systems; it is an application of the Memory Evolutive Systems developed with Vanbremeersch to model complex multi-scale, multi-agent self-organized systems, such as biological or social systems. Its development resorts to an info-computationalism: first we characterize the properties of the human brain/mind at the origin of higher order cognitive processes up to consciousness and creativity, then we ‘abstract’ them in a MENS mathematical model for natural or artificial cognitive systems. The model, based on a ‘dynamic’ Category Theory incorporating Time, emphasizes the computability problems which are raised.

  7. Model-Based Real Time Assessment of Capability Left for Spacecraft Under Failure Mode Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project is aimed at developing a model based diagnostics system for spacecraft that will allow real time assessment of its state, while it is impacted...

  8. Comparison of Fuzzy AHP Buckley and ANP Models in Forestry Capability Evaluation (Case Study: Behbahan City Fringe

    Directory of Open Access Journals (Sweden)

    V. Rahimi

    2015-12-01

    Full Text Available The Zagros forests are continuously in danger of destruction, so the remaining forests should be managed carefully on the basis of ecological capability evaluation. Land evaluation comprises the prediction or assessment of land quality for a specific land use with regard to production, vulnerability and management requirements. In this research, we studied the ecological capability of the Behbahan city fringe for forestry land use. After the basic studies were completed and the thematic maps of the soil, climate, physiography, vegetation and bedrock criteria were prepared, the fuzzy multi-criteria decision-making methods Fuzzy AHP (Buckley) and ANP were used to standardize the criteria and determine their weights. Finally, the ecological capability model of the region was generated to prioritize forestry land use, and the final evaluation map was prepared in seven classes using the WLC model. The results showed that with the ANP method 55.58% of the area is suitable for forestry land use, which is more consistent with reality, while with the Fuzzy AHP method 95.23% of the area was found suitable. It was concluded that the ANP method shows more flexibility and ability in determining areas suitable for forestry land use in the study area.
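
    The WLC overlay used for the final capability map is simply a weighted sum of standardized criterion layers, with the weights supplied by a method such as fuzzy AHP or ANP. A minimal sketch follows; the layer names, weights and class count are illustrative, not the study's values.

```python
# Minimal weighted linear combination (WLC) overlay; criterion rasters are
# assumed standardized to [0, 1] and weights (e.g., from fuzzy AHP or ANP)
# to sum to 1. Layer names and weights are illustrative placeholders.
import numpy as np

def wlc(layers, weights):
    """layers: dict name -> 2D array; weights: dict name -> float."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * layers[k] for k in layers)

soil, slope, veg = (np.random.rand(100, 100) for _ in range(3))
suitability = wlc({"soil": soil, "slope": slope, "veg": veg},
                  {"soil": 0.5, "slope": 0.3, "veg": 0.2})
classes = np.digitize(suitability, np.linspace(0, 1, 8)[1:-1])  # 7 classes
```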

  9. A model-based, multichannel, real-time capable sawtooth crash detector

    Science.gov (United States)

    van den Brand, H.; de Baar, M. R.; van Berkel, M.; Blanken, T. C.; Felici, F.; Westerhof, E.; Willensdorfer, M.; The ASDEX Upgrade Team; The EUROfusion MST1 Team

    2016-07-01

    Control of the time between sawtooth crashes, necessary for ITER and DEMO, requires real-time detection of the moment of the sawtooth crash. In this paper, estimation of sawtooth crash times is demonstrated using the model-based interacting multiple model (IMM) estimator, based on simplified models for the sawtooth crash. In contrast to previous detectors, this detector uses the spatial extent of the sawtooth crash as the detection characteristic. A model for the sawtooth crash is introduced and used in the IMM algorithm, which is tuned and applied to multiple ECE channels at once. The IMM algorithm is applied to seven datasets from the ASDEX Upgrade tokamak, using five crash models with different mixing radii. All sawtooth crashes that had been identified beforehand by visual inspection of the data are detected by the algorithm. A few additional detections are made, which upon closer inspection are seen to be sawtooth crashes showing a partial reconnection. A closer inspection of the detected normal crashes shows that about 42% are not well fitted by any of the full-reconnection models and show some characteristics of a partial reconnection. In some cases, the measurement falls during the sawtooth crash itself, which also results in an incorrect estimate of the mixing radius. For data provided at a sampling rate of 1 kHz, the run time of the IMM estimator is below 1 ms, thereby fulfilling real-time requirements.
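
    A generic single-step IMM update combines a Markov mode-transition mixing step, one Kalman update per candidate model, and a likelihood-weighted update of the mode probabilities. The scalar sketch below illustrates the estimator class used here; it is not the authors' detector, and all interfaces are invented for the example.

```python
# Generic one-dimensional IMM step: mixing, per-model Kalman update, and
# mode-probability update. A sketch of the estimator class used in the
# paper, not the authors' implementation.
import numpy as np

def imm_step(z, x, P, mu, trans, models, r_meas):
    """z: measurement; x, P, mu: per-model states, variances, mode probs;
    trans[i, j]: Markov probability of switching model i -> j;
    models: list of (f, q) with scalar dynamics f and process noise q."""
    n = len(models)
    c = trans.T @ mu                                  # predicted mode probs
    mix = trans * mu[:, None] / c[None, :]            # mixing weights i -> j
    x0 = mix.T @ x                                    # mixed initial states
    P0 = np.array([np.sum(mix[:, j] * (P + (x - x0[j])**2))
                   for j in range(n)])                # mixed variances
    lik = np.empty(n)
    for j, (f, q) in enumerate(models):
        xp, Pp = f * x0[j], f * P0[j] * f + q         # Kalman predict
        s = Pp + r_meas                               # innovation variance
        k = Pp / s                                    # Kalman gain
        x[j], P[j] = xp + k * (z - xp), (1 - k) * Pp  # Kalman update
        lik[j] = np.exp(-0.5 * (z - xp)**2 / s) / np.sqrt(2 * np.pi * s)
    mu = lik * c
    mu /= mu.sum()                                    # updated mode probs
    return x, P, mu  # a jump of mu toward a "crash" model flags the crash
```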

  10. Community Elder Mistreatment Intervention With Capable Older Adults: Toward a Conceptual Practice Model.

    Science.gov (United States)

    Burnes, David

    2016-02-12

    Community-based elder mistreatment response programs (EMRP), such as adult protective services, that are responsible for directly addressing elder abuse and neglect are under increasing pressure with greater reporting/referrals nationwide. Our knowledge and understanding of effective response interventions represents a major gap in the EM literature. At the center of this gap is a lack of theory or conceptual models to help guide EMRP research and practice. This article develops a conceptual practice model for community-based EMRPs that work directly with cognitively intact EM victims. Anchored by core EMRP values of voluntariness, self-determination, and least restrictive path, the practice model is guided by an overarching postmodern, constructivist, eco-systemic practice paradigm that accepts multiple, individually constructed mistreatment realities and solutions. Harm-reduction, client-centered, and multidisciplinary practice models are described toward a common EMRP goal to reduce the risk of continued mistreatment. Finally, the model focuses on client-practitioner relationship-oriented practice skills such as engagement and therapeutic alliance to elicit individual mistreatment realities and client-centered solutions. The practice model helps fill a conceptual gap in the EM intervention literature and carries implications for EMRP training, research, and practice.

  11. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  12. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
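
    Of the methods compared, Jelinek-Mercer smoothing is the simplest to state: the smoothed bigram probability is a linear interpolation of the maximum-likelihood bigram and unigram estimates. A minimal sketch follows, using a single fixed lambda rather than the bucketed, held-out-estimated lambdas studied in the paper.

```python
# Minimal Jelinek-Mercer (linear interpolation) bigram model with a single
# fixed lambda; the paper estimates bucketed lambdas on held-out data.
from collections import Counter

class InterpolatedBigram:
    def __init__(self, tokens, lam=0.7):
        self.lam = lam
        self.uni = Counter(tokens)                 # unigram counts
        self.bi = Counter(zip(tokens, tokens[1:])) # bigram counts
        self.total = len(tokens)

    def prob(self, history, word):
        """P(word | history) = lam * P_ML(word|history) + (1-lam) * P_ML(word)."""
        p_uni = self.uni[word] / self.total
        h_count = self.uni[history]                # approximate at corpus edge
        p_bi = self.bi[(history, word)] / h_count if h_count else 0.0
        return self.lam * p_bi + (1 - self.lam) * p_uni

corpus = "the cat sat on the mat the cat ran".split()
lm = InterpolatedBigram(corpus)
print(lm.prob("the", "cat"))   # interpolated P(cat | the)
```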

  13. An integrated PHY-MAC analytical model for IEEE 802.15.7 VLC network with MPR capability

    Science.gov (United States)

    Yu, Hai-feng; Chi, Xue-fen; Liu, Jian

    2014-09-01

    Considering that the collision caused by hidden terminal is particularly serious due to the narrow beams of optical devices, the multi-packet reception (MPR) is introduced to mitigate the collisions for IEEE 802.15.7 visible light communication (VLC) system. To explore the impact of MPR on system performance and investigate the interaction between physical (PHY) layer and media access control (MAC) layer, a three dimensional (3D) integrated PHY-MAC analytical model of carrier sense multiple access/collision avoidance (CSMA/CA) is established based on Markov chain theory for VLC system, in which MPR is implemented through the use of orthogonal code sequence. Throughput is derived to evaluate the performance of VLC system with MPR capability under imperfect optical channel. The results can be used for the performance optimization of a VLC system with MPR capability.

  14. A hybrid RANS-LES approach with delayed-DES and wall-modelled LES capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Shur, Mikhail L. [New Technologies and Services, 14, Dobrolyubov Avenue, 197198 St. Petersburg (Russian Federation); Spalart, Philippe R. [Boeing Commercial Airplanes, P.O. Box 3707, Seattle, WA 98124 (United States); Strelets, Mikhail Kh. [New Technologies and Services, 14, Dobrolyubov Avenue, 197198 St. Petersburg (Russian Federation)], E-mail: strelets@mail.rcom.ru; Travin, Andrey K. [New Technologies and Services, 14, Dobrolyubov Avenue, 197198 St. Petersburg (Russian Federation)

    2008-12-15

    A CFD strategy is proposed that combines delayed detached-eddy simulation (DDES) with an improved RANS-LES hybrid model aimed at wall modelling in LES (WMLES). The system ensures a different response depending on whether the simulation does or does not have inflow turbulent content. In the first case, it reduces to WMLES: most of the turbulence is resolved except near the wall. Empirical improvements to this model relative to the pure DES equations provide a great increase in the resolved turbulence activity near the wall and adjust the resolved logarithmic layer to the modelled one, thus resolving the issue of 'log layer mismatch' which is common in DES and other WMLES methods. An essential new element here is a definition of the subgrid length-scale which depends not only on the grid spacings, but also on the wall distance. In the case without inflow turbulent content, the proposed model performs as DDES, i.e., it gives a pure RANS solution for attached flows and a DES-like solution for massively separated flows. The coordination of the two branches is carried out by a blending function. The promise of the model is supported by its satisfactory performance in all three modes it was designed for, namely, in pure WMLES applications (channel flow in a wide Reynolds-number range and flow over a hydrofoil with trailing-edge separation), in a natural DDES application (an airfoil in deep stall), and in a flow where both branches of the model are active in different flow regions (a backward-facing-step flow).
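
    For reference, the DDES branch replaces the RANS length scale by a shielded hybrid one; the form below is the standard one from the DDES literature, not reproduced from this paper, and the wall-distance-dependent subgrid scale described above modifies the grid measure that enters it.

```latex
% Standard DDES length-scale definition (usual notation); the paper's
% WMLES branch modifies the grid measure Delta using the wall distance.
\tilde{d} \;=\; d_w \;-\; f_d \,\max\!\left(0,\; d_w - C_{DES}\,\Delta\right),
```

    where d_w is the wall distance, Delta the grid measure, and f_d the delaying (shielding) function that keeps attached boundary layers in RANS mode (f_d = 0) and releases LES content away from the wall (f_d = 1).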

  15. Forward models for extending the mechanical damage evaluation capability of resonant ultrasound spectroscopy.

    Science.gov (United States)

    Goodlet, B R; Torbet, C J; Biedermann, E J; Jauriqui, L M; Aldrin, J C; Pollock, T M

    2017-02-08

    Finite element (FE) modeling has been coupled with resonant ultrasound spectroscopy (RUS) for nondestructive evaluation (NDE) of high-temperature damage induced by mechanical loading. Forward FE models predict mode-specific changes in resonance frequencies (ΔfR), inform RUS measurements of mode type, and identify diagnostic resonance modes sensitive to individual or multiple concurrent damage mechanisms. The magnitudes of modeled ΔfR correlate very well with the magnitudes of measured ΔfR from RUS, affording quantitative assessments of damage. This approach was employed to study creep damage in a polycrystalline Ni-based superalloy (Mar-M247) at 950°C. After iterative applications of creep strains up to 8.8%, RUS measurements recorded ΔfR that correspond to the accumulation of plastic deformation and cracks in the gauge section of a cylindrical dog-bone specimen. Of the first 50 resonance modes that occur, ranging from 3 to 220 kHz, modes classified as longitudinal bending were most sensitive to creep damage, while transverse bending modes were found to be largely unaffected. Measurement-to-model comparisons of ΔfR show that the deformation experienced by the specimen during creep, specifically uniform elongation of the gauge section, is responsible for a majority of the measured ΔfR until at least 6.1% creep strain. After 8.8% strain, considerable surface cracking along the gauge section of the dog-bone was observed, for which FE models indicate low-frequency longitudinal bending modes are significantly affected. Key differences between historical implementations of RUS for NDE and the FE model-based framework developed herein are discussed, with attention to general implementation of a FE model-based framework for NDE of damage.

  16. Capability approach

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal; Kjeldsen, Christian Christrup

    This textbook is the first comprehensive Danish presentation of the Capability Approach developed by Amartya Sen and Martha Nussbaum. The book contains a presentation and discussion of Sen's and Nussbaum's theoretical platform, with examples drawn from education and education policy, pedagogy, and care....

  17. ENTREPRENEURIAL CAPABILITIES

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard; Nielsen, Thorkild

    2003-01-01

    The aim of this article is to analyse entrepreneurship from an action research perspective. What is entrepreneurship about? Which are the fundamental capabilities and processes of entrepreneurship? To answer these questions the article includes a case study of a Danish entrepreneur and his networks...

  18. Comparative genome-scale modelling of Staphylococcus aureus strains identifies strain-specific metabolic capabilities linked to pathogenicity.

    Science.gov (United States)

    Bosi, Emanuele; Monk, Jonathan M; Aziz, Ramy K; Fondi, Marco; Nizet, Victor; Palsson, Bernhard Ø

    2016-06-28

    Staphylococcus aureus is a preeminent bacterial pathogen capable of colonizing diverse ecological niches within its human host. We describe here the pangenome of S. aureus based on analysis of genome sequences from 64 strains of S. aureus spanning a range of ecological niches, host types, and antibiotic resistance profiles. Based on this set, S. aureus is expected to have an open pangenome composed of 7,411 genes and a core genome composed of 1,441 genes. Metabolism was highly conserved in this core genome; however, differences were identified in amino acid and nucleotide biosynthesis pathways between the strains. Genome-scale models (GEMs) of metabolism were constructed for the 64 strains of S. aureus. These GEMs enabled a systems approach to characterizing the core metabolic and panmetabolic capabilities of the S. aureus species. All models were predicted to be auxotrophic for the vitamins niacin (vitamin B3) and thiamin (vitamin B1), whereas strain-specific auxotrophies were predicted for riboflavin (vitamin B2), guanosine, leucine, methionine, and cysteine, among others. GEMs were used to systematically analyze growth capabilities in more than 300 different growth-supporting environments. The results identified metabolic capabilities linked to pathogenic traits and virulence acquisitions. Such traits can be used to differentiate strains responsible for mild vs. severe infections and preference for hosts (e.g., animals vs. humans). Genome-scale analysis of multiple strains of a species can thus be used to identify metabolic determinants of virulence and increase our understanding of why certain strains of this deadly pathogen have spread rapidly throughout the world.
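
    In practice, growth-environment screens of this kind can be reproduced for a single GEM with a constraint-based toolbox. A minimal sketch using COBRApy follows; the SBML file name and the exchange-reaction identifiers are placeholders, not the study's data.

```python
# Minimal sketch of a carbon-source growth screen with COBRApy; the SBML
# file name and exchange-reaction IDs are hypothetical placeholders.
from cobra.io import read_sbml_model

model = read_sbml_model("s_aureus_strain.xml")         # hypothetical GEM file

def grows_on(model, carbon_exchange, uptake=10.0):
    """Swap the sole carbon source and test for a feasible growth rate."""
    with model:                                        # changes auto-revert
        model.reactions.EX_glc__D_e.lower_bound = 0.0  # shut default carbon
        model.reactions.get_by_id(carbon_exchange).lower_bound = -uptake
        return model.slim_optimize(error_value=0.0) > 1e-6

for ex in ["EX_fru_e", "EX_rib__D_e", "EX_cit_e"]:     # candidate carbons
    print(ex, grows_on(model, ex))
```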

  20. Accounting for anatomical noise in search-capable model observers for planar nuclear imaging.

    Science.gov (United States)

    Sen, Anando; Gifford, Howard C

    2016-01-01

    Model observers intended to predict the diagnostic performance of human observers should account for the effects of both quantum and anatomical noise. We compared the abilities of several visual-search (VS) and scanning Hotelling-type models to account for anatomical noise in a localization receiver operating characteristic (LROC) study involving simulated nuclear medicine images. Our VS observer invoked a two-stage process of search and analysis. The images featured lesions in the prostate and pelvic lymph nodes. Lesion contrast and the geometric resolution and sensitivity of the imaging collimator were the study variables. A set of anthropomorphic mathematical phantoms was imaged with an analytic projector based on eight parallel-hole collimators with different sensitivity and resolution properties. The LROC study was conducted with human observers and the channelized nonprewhitening, channelized Hotelling (CH) and VS model observers. The CH observer was applied in a "background-known-statistically" protocol while the VS observer performed a quasi-background-known-exactly task. Both of these models were applied with and without internal noise in the decision variables. A perceptual search threshold was also tested with the VS observer. The model observers without inefficiencies failed to mimic the average performance trend for the humans. The CH and VS observers with internal noise matched the humans primarily at low collimator sensitivities. With both internal noise and the search threshold, the VS observer attained quantitative agreement with the human observers. Computational efficiency is an important advantage of the VS observer.

  1. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)

    2014-04-01

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), first developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests to examine the properties of the solutions computed by the model. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  2. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...

  3. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model

    Directory of Open Access Journals (Sweden)

    O. Gagliardini

    2013-08-01

    Full Text Available The Fourth IPCC Assessment Report concluded that ice sheet flow models, in their current state, were unable to provide accurate forecasts of the increase in polar ice sheet discharge and the associated contribution to sea level rise. Since then, the glaciological community has undertaken a huge effort to develop and improve a new generation of ice flow models, and as a result a significant number of new ice sheet models have emerged. Among them is the parallel finite-element model Elmer/Ice, based on the open-source multi-physics code Elmer. It was one of the first full-Stokes models used to make projections for the evolution of the whole Greenland ice sheet for the coming two centuries. Originally developed to solve local ice flow problems of high mechanical and physical complexity, Elmer/Ice has today reached the maturity to solve larger-scale problems, earning the status of an ice sheet model. Here, we summarise almost 10 yr of development performed by different groups. Elmer/Ice solves the full-Stokes equations, for isotropic but also anisotropic ice rheology, resolves the grounding line dynamics as a contact problem, and contains various basal friction laws. Derived fields, like the age of the ice, the strain rate or stress, can also be computed. Elmer/Ice includes two recently proposed inverse methods to infer badly known parameters. Elmer is a highly parallelised code thanks to recent developments and the implementation of a block preconditioned solver for the Stokes system. In this paper, all these components are presented in detail, as well as the numerical performance of the Stokes solver and developments planned for the future.
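
    For reference, the full-Stokes problem that Elmer/Ice solves can be stated compactly (our own summary of the standard equations with Glen's flow law; the notation is not quoted from the paper):

    ```latex
    \nabla \cdot \boldsymbol{\sigma} + \rho \mathbf{g} = \mathbf{0}, \qquad
    \nabla \cdot \mathbf{u} = 0, \qquad
    \boldsymbol{\sigma} = 2\eta\,\dot{\boldsymbol{\varepsilon}} - p\,\mathbf{I}, \qquad
    \eta = \tfrac{1}{2}\, A^{-1/n}\, \dot{\varepsilon}_{e}^{(1-n)/n}
    ```

    Here u is the ice velocity, p the pressure, \dot{\varepsilon} the strain-rate tensor with effective value \dot{\varepsilon}_e, A the temperature-dependent rate factor, and n (approximately 3) Glen's exponent.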

  4. HYBRID REASONING MODEL FOR STRENGTHENING THE PROBLEM SOLVING CAPABILITY OF EXPERT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kapil Khandelwal

    2013-11-01

    Full Text Available In this paper, we briefly outline popular case-based reasoning combinations. More specifically, we focus on combinations of case-based reasoning with rule-based reasoning and model-based reasoning. Further, we examine the strengths and weaknesses of the various reasoning models, case-based reasoning (CBR), rule-based reasoning (RBR) and model-based reasoning (MBR), and discuss how they can be combined to form a more robust and better-performing hybrid. In a decision support system addressing the variety of tasks a user performs, a single type of knowledge and reasoning method is often not sufficient. It is often necessary to determine which reasoning method would be the most appropriate for each task, and a combination of different methods has often shown the best results. In this study, CBR was mixed with RBR and MBR approaches to promote synergies and benefits beyond those achievable using CBR or any other individual reasoning approach alone. Each approach has advantages and disadvantages, which prove to be complementary to a large degree. It is therefore well justified to combine these to produce effective hybrid approaches that surpass the disadvantages of each component method. The "KNAPS-CR" model integrates problem solving with learning from experience within an extensive model of different knowledge types. "KNAPS-CR" has a reasoning strategy which first attempts case-based reasoning, then rule-based reasoning, and, finally, model-based reasoning. It learns from each problem-solving session by updating its collection of cases, irrespective of which reasoning method succeeded in solving the problem.
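
    The KNAPS-CR reasoning strategy described above is essentially a prioritized cascade. A minimal sketch of such a cascade follows (our illustration; the reasoner interfaces and the case-library structure are hypothetical, not taken from the paper):

    ```python
    from typing import Callable, Optional

    Solution = Optional[dict]  # hypothetical solution record

    def hybrid_solve(problem: dict,
                     case_based: Callable[[dict], Solution],
                     rule_based: Callable[[dict], Solution],
                     model_based: Callable[[dict], Solution],
                     case_library: list) -> Solution:
        """Try CBR first, then RBR, then MBR; learn from every solved problem."""
        for reasoner in (case_based, rule_based, model_based):
            solution = reasoner(problem)
            if solution is not None:
                # Learning step: retain the new case no matter which method succeeded.
                case_library.append({"problem": problem, "solution": solution})
                return solution
        return None
    ```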

  5. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    Directory of Open Access Journals (Sweden)

    Hussein Rappel

    2014-01-01

    integration technique (EFIT), as well as its validation with analytical results. The Lamb wave method is a long-range inspection technique considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate waves for adequate transmission, capable of properly propagating in the material, interacting with defects/damage, and being received in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), the finite element method (FEM), and the boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically. Results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study the interaction of the fundamental symmetric mode with a surface-breaking defect.
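
    Full elastodynamic FIT for Lamb waves is beyond a short snippet, but the core time-stepping idea can be illustrated in one dimension (a simplification of ours, not the 2D EFIT scheme used in the paper): stress and velocity live on staggered grids and are updated alternately, here with illustrative steel-like material values.

    ```python
    import numpy as np

    E, rho = 200e9, 7850.0          # Young's modulus [Pa], density [kg/m^3]
    c = np.sqrt(E / rho)            # bar wave speed [m/s]
    nx, dx = 400, 1e-3
    dt = 0.9 * dx / c               # CFL-stable time step

    v = np.zeros(nx)                # particle velocities
    s = np.zeros(nx + 1)            # stresses on the staggered grid (free ends)

    for n in range(600):
        s[1:-1] += dt * E / dx * np.diff(v)               # Hooke's law update
        v += dt / (rho * dx) * np.diff(s)                 # momentum update
        v[0] += np.exp(-((n * dt - 5e-6) / 1e-6) ** 2)    # smooth source at left end
    ```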

  6. Environmental Harmony and Evaluation of Advertisement Billboards with Digital Photogrammetry Technique and GIS Capabilities: A Case Study in the City of Ankara.

    Science.gov (United States)

    Aydın, Cevdet C; Nisancı, Recep

    2008-05-19

    Geographical Information Systems (GIS) have been gaining growing interest in Turkey. Many local governments and public agencies have been striving to set up such systems to serve their needs and meet public requirements. Advertising is a pervasive part of daily urban life, presented in various places, on vehicles, in shops, and especially in city centers. In addition, advertisements and notices are one of the main sources of revenue for municipalities, and the advertising sector provides a great level of income today. Advertising is therefore individually very important for local governments and urban management. At the same time, it is very important for urban management that these advertisement signs and billboards be placed in an orderly fashion which is pleasing to the eye. Another point related to this subject is the systematic control mechanism necessary for collecting taxes regularly and keeping records up to date. In this paper, the practical meaning of notices and advertisements, the problem definition and the objectives are first described, and then the legal framework and daily practice are reviewed. Current practice and its problems are discussed. The possibilities of measuring and obtaining the necessary information using digital images and transferring it to spatial databases are studied. Through this study, a modern approach was developed for urban management and municipalities using information technology, as an alternative to current practice. Criteria which provide environmental harmony, such as urban beauty, colour, compatibility and safety, were also evaluated. It was finally concluded that measuring commercial signs and keeping environmental harmony under control for urban beauty can be achieved by the Digital Photogrammetry (DP) technique and GIS capabilities, which were studied in pilot applications in the city center of Ankara.

  7. Environmental Harmony and Evaluation of Advertisement Billboards with Digital Photogrammetry Technique and GIS Capabilities: A Case Study in the City of Ankara

    Directory of Open Access Journals (Sweden)

    Recep Nisancı

    2008-05-01

    Full Text Available Geographical Information Systems (GIS) have been gaining growing interest in Turkey. Many local governments and public agencies have been striving to set up such systems to serve their needs and meet public requirements. Advertising is a pervasive part of daily urban life, presented in various places, on vehicles, in shops, and especially in city centers. In addition, advertisements and notices are one of the main sources of revenue for municipalities, and the advertising sector provides a great level of income today. Advertising is therefore individually very important for local governments and urban management. At the same time, it is very important for urban management that these advertisement signs and billboards be placed in an orderly fashion which is pleasing to the eye. Another point related to this subject is the systematic control mechanism necessary for collecting taxes regularly and keeping records up to date. In this paper, the practical meaning of notices and advertisements, the problem definition and the objectives are first described, and then the legal framework and daily practice are reviewed. Current practice and its problems are discussed. The possibilities of measuring and obtaining the necessary information using digital images and transferring it to spatial databases are studied. Through this study, a modern approach was developed for urban management and municipalities using information technology, as an alternative to current practice. Criteria which provide environmental harmony, such as urban beauty, colour, compatibility and safety, were also evaluated. It was finally concluded that measuring commercial signs and keeping environmental harmony under control for urban beauty can be achieved by the Digital Photogrammetry (DP) technique and GIS capabilities, which were studied in pilot applications in the city center of Ankara.

  8. Realizing joined-up government: Dynamic capabilities and stage models for transformation

    NARCIS (Netherlands)

    Klievink, B.; Janssen, M.

    2009-01-01

    Joining up remains a high priority on the e-government agenda and requires extensive transformation. Stage models are predictable patterns which exist in the growth of organizations and unfold as discrete time periods that result in discontinuity, and they can help e-government development towards joined-up government.

  9. Ship Response Capability Models for Counter-Piracy Patrols in the Gulf of Aden

    Science.gov (United States)

    2011-09-01

    Table-of-contents fragments from the report: 2.1.2 Ripple Propagation Algorithm; 2.1.3 Marsaglia et al. Algorithm; Annex D: Error Calculations for 1D Model. Recoverable abstract text: "... effectively patrolled by a particular asset. One 2D approach uses the analytical work of Marsaglia et al. [10] to generate the probability distribution of ..."

  10. Conceptualising the capabilities and value creation of HRM shared service models

    NARCIS (Netherlands)

    Maatman, Marco; Bondarouk, Tatiana; Looise, Jan C.

    2010-01-01

    Organisations are increasingly establishing HRM Shared Service Models (SSMs) for the delivery of HRM. These SSMs are claimed to maximise the advantages of centralised and decentralised delivery approaches while minimising the drawbacks of both. This article draws on concepts from the

  11. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

    Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  12. The Cyber Defense (CyDef) Model for Assessing Countermeasure Capabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Margot [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeVries, Troy Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Susanna P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-06-01

    Cybersecurity is essential to maintaining operations, and is now a de facto cost of business. Despite this, there is little consensus on how to systematically make decisions about cyber countermeasures investments. Identifying gaps and determining the expected return on investment (ROI) of adding a new cybersecurity countermeasure is frequently a hand-waving exercise at best. Worse, cybersecurity nomenclature is murky and frequently overloaded, which further complicates issues by inhibiting clear communication. This paper presents a series of foundational models and nomenclature for discussing cybersecurity countermeasures, and then introduces the Cyber Defense (CyDef) model, which provides a systematic and intuitive way for decision-makers to effectively communicate with operations and device experts.

  13. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    Science.gov (United States)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the dynamic properties of the structure. Accurate rigid-body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints so that the mass properties, natural frequencies, and mode shapes are matched to the target data while the mass matrix is orthogonalized.
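
    Model updating of this kind reduces to an optimization problem. The sketch below (our illustration on a hypothetical two-degree-of-freedom spring-mass chain, not the MDAO tool itself) tunes stiffness parameters so that the analytical natural frequencies match measured targets:

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.optimize import least_squares

    M = np.diag([1.0, 1.5])                      # known mass matrix [kg]

    def frequencies(k):
        """Natural frequencies [Hz] of a 2-DOF chain with stiffnesses k[0], k[1]."""
        K = np.array([[k[0] + k[1], -k[1]],
                      [-k[1],        k[1]]])
        lam = eigh(K, M, eigvals_only=True)      # generalized eigenvalues = omega^2
        return np.sqrt(lam) / (2 * np.pi)

    f_measured = np.array([2.1, 6.3])            # hypothetical test data [Hz]
    res = least_squares(lambda k: frequencies(k) - f_measured,
                        x0=[1000.0, 1000.0], bounds=(0.0, np.inf))
    print(res.x)                                 # updated stiffness estimates [N/m]
    ```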

  14. Military Hydrology: Report 21, Regulation of Streamflow by Dams and Associated Modeling Capabilities

    Science.gov (United States)

    1992-10-01

    Front-matter fragments from the report: "... Laboratory (EL), and Dr. V. E. LaGarde III, Chief of the Environmental Systems Division (ESD), EL, and under the direct supervision of Mr. M. P. Keown ..." Recoverable abstract text: "... optimize the operation of an interconnected system of reservoirs, hydroelectric power plants, pump canals, pipelines, and river reaches (Martin 1981) ..." Reference fragments: Water Resource Systems, Harvard University Press, Cambridge, Mass.; Martin, Quentin W., 1981, "Surface Water Resources Allocation Model (AL-V), Program ..."

  15. A Capability-Based, Meta-Model Approach to Combatant Ship Design

    Science.gov (United States)

    2011-03-01

    horizontal, parallel to a Compass Course of 090° to keep the trigonometry used in the motion model simple; however, as long as the box is kept a true...make a course correction. Some of these include greater government-contractor collaboration, inventive contracting initiatives, and extensive... course. However, the computations required to simulate this are complex and it is likely they will unduly slow the simulation down, so the more

  16. DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-09-01

    Full Text Available To enable the design of large capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g., 3D stacking and multi-level cell (MLC) design, have been explored. The existing modeling tools, however, cover only a few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM), and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all of these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g., latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a given optimization target, etc. We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers. The latest source code of DESTINY is available from the following git repository: https://bitbucket.org/sparshmittal/destinyv2.

  17. Development of Detonation Modeling Capabilities for Rocket Test Facilities: Hydrogen-Oxygen-Nitrogen Mixtures

    Science.gov (United States)

    Allgood, Daniel C.

    2016-01-01

    The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow-obstacles. Excellent agreement between test data and model predictions was observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.

  18. A hybrid model of QFD, SERVQUAL and KANO to increase bank's capabilities

    Directory of Open Access Journals (Sweden)

    Hasan Rajabi

    2012-10-01

    Full Text Available In the global market, factors such as staying ahead of competitors, extending market share, promoting service quality and identifying customers' needs are important. This paper attempts to identify strategic services in one of the biggest governmental banks in Iran, Melli bank, for gaining competitive merit using the combined Kano and SERVQUAL models, extending operational quality and providing suitable strategies. The primary question of this paper is how to introduce high-quality services in this bank. The proposed model uses a hybrid of three quality-based methods: SERVQUAL, QFD and Kano. The statistical population in this article is all clients and customers of Melli bank who use the bank's services; based on random sampling, 170 customers were selected. The study was held in Semnan, one of the provinces located in the western part of Iran. The research findings show that Melli bank's customers are dissatisfied with the quality of services, and to solve this problem the bank should restructure and give priority to certain service characteristics to achieve better operations. These characteristics include, in order of priority: the possibility of transferring money via point-of-sale terminals, the possibility of creating wireless POS, speeding up banking transactions, giving special merits to customers who use electronic services, eliminating certain bank commissions, solving problems such as system disconnections in the least time, the possibility of receiving foreign exchange from ATMs, and suitable parking in the city.

  19. IAC - INTEGRATED ANALYSIS CAPABILITY

    Science.gov (United States)

    Frisch, H. P.

    1994-01-01

    Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 3) System dynamics - The DISCOS simulation program which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an

  1. A Performance Evaluation for IT/IS Implementation in Organisation: Preliminary New IT/IS Capability Evaluation (NICE Model

    Directory of Open Access Journals (Sweden)

    Hafez Salleh

    2011-12-01

    Full Text Available Most of the traditional IT/IS performance measures are based on productivity and process, and mainly focus on methods of investment appraisal. There is a need for alternative holistic measurement models that enable soft and hard issues to be measured qualitatively. A New IT/IS Capability Evaluation (NICE) framework has been designed to measure the capability of organisations to 'successfully implement IT systems', and it is applicable across industries. The idea is to provide managers with measurement tools that enable them to identify where improvements are required within their organisations and to indicate their readiness prior to IT investment. The NICE framework investigates four organisational key elements: IT, Environment, Process and People, and is composed of six progressive stages of maturity through which a company can develop its IT/IS capabilities. For each maturity stage, the NICE framework describes a set of critical success factors that must be in place for the company to achieve that stage.

  2. Dynamic capabilities

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe; Stenger, Marianne

    2013-01-01

    and profitability of small and medium sized manufacturing enterprises operating in volatile environments. A multi-case study design was adopted as research strategy. The findings reveal a positive relationship between dynamic capabilities and innovation performance in the case companies, as we would expect. It was...... it was dominated by a lack of systematism, assessment, monitoring, marketing speculations and feasibility calculation. Furthermore, the sphere was dictated by asymmetric supplier-customer relationships and negotiation power leading, among other possible factors, to meager profitability....

  3. Bumetanide is not capable of terminating status epilepticus but enhances phenobarbital efficacy in different rat models.

    Science.gov (United States)

    Töllner, Kathrin; Brandt, Claudia; Erker, Thomas; Löscher, Wolfgang

    2015-01-05

    In about 20-40% of patients, status epilepticus (SE) is refractory to standard treatment with benzodiazepines, necessitating second- and third-line treatments that are not always successful, resulting in increased mortality. Rat models of refractory SE are instrumental in studying the changes underlying refractoriness and to develop more effective treatments for this severe medical emergency. Failure of GABAergic inhibition is a likely cause of the development of benzodiazepine resistance during SE. In addition to changes in GABAA receptor expression, trafficking, and function, alterations in Cl(-) homeostasis with increased intraneuronal Cl(-) levels may be involved. Bumetanide, which reduces intraneuronal Cl(-) by inhibiting the Cl(-) intruding Na(+), K(+), Cl(-) cotransporter NKCC1, has been reported to interrupt SE induced by kainate in urethane-anesthetized rats, indicating that this diuretic drug may be an interesting candidate for treatment of refractory SE. In this study, we evaluated the effects of bumetanide in the kainate and lithium-pilocarpine models of SE as well as a model in which SE is induced by sustained electrical stimulation of the basolateral amygdala. Unexpectedly, bumetanide alone was ineffective to terminate SE in both conscious and anesthetized adult rats. However, it potentiated the anticonvulsant effect of low doses of phenobarbital, although this was only seen in part of the animals; higher doses of phenobarbital, particularly in combination with diazepam, were more effective to terminate SE than bumetanide/phenobarbital combinations. These data do not suggest that bumetanide, alone or in combination with phenobarbital, is a valuable option in the treatment of refractory SE in adult patients. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. QMU as an approach to strengthening the predictive capabilities of complex models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne; Boggs, Paul T.; Grace, Matthew D.

    2010-09-01

    Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and relative departure from the classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a sort of check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems, (i.e. the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third

  5. Software applications for providing comprehensive computing capabilities to problems related to mixed models in animal breeding

    Institute of Scientific and Technical Information of China (English)

    Monchai DAUNGJINDA

    2005-01-01

    Recently, several computer packages have been developed to tackle problems related to mixed models in animal breeding. Special software for the estimation of variance components and the prediction of genetic merit is basically needed for genetic evaluation and selection programs. Although there are some packages available on the internet, most of them are commercial or unfriendly to use. The lists of recent software available on the internet are shown in Tab. 1. Most software is free license (mostly for acade...

  6. E-Learning Applications for Urban Modelling and Ogc Standards Using HTML5 Capabilities

    Science.gov (United States)

    Kaden, R.; König, G.; Malchow, C.; Kolbe, T. H.

    2012-07-01

    This article reports on the development of HTML5 based web-content related to urban modelling with special focus on GML and CityGML, allowing participants to access it regardless of the device platform. An essential part of the learning modules are short video lectures, supplemented by exercises and tests during the lecture to improve students' individual progress and success. The evaluation of the tests is used to guide students through the course content, depending on individual knowledge. With this approach, we provide learning applications on a wide range of devices, either mobile or desktop, fulfil the needs of just-in-time knowledge, and increase the emphasis on lifelong learning.

  7. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can easily be applied in both manufacturing and service industries.
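
    A technique-selection model of this kind can be sketched as a small 0-1 program (our illustration with made-up gains, costs and budget, not the paper's actual model): maximize the total productivity gain subject to a budget, with one binary variable per improvement technique.

    ```python
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

    techniques = ["5S", "TPM", "SMED", "kaizen"]   # illustrative subset of techniques
    gain = {"5S": 0.04, "TPM": 0.07, "SMED": 0.05, "kaizen": 0.03}  # made-up gains
    cost = {"5S": 10, "TPM": 40, "SMED": 25, "kaizen": 5}           # made-up costs
    budget = 50

    x = LpVariable.dicts("select", techniques, cat="Binary")
    prob = LpProblem("technique_selection", LpMaximize)
    prob += lpSum(gain[t] * x[t] for t in techniques)            # objective
    prob += lpSum(cost[t] * x[t] for t in techniques) <= budget  # budget constraint
    prob.solve()
    print([t for t in techniques if value(x[t]) > 0.5])          # chosen techniques
    ```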

  8. The Hill equation: a review of its capabilities in pharmacological modelling.

    Science.gov (United States)

    Goutelle, Sylvain; Maurin, Michel; Rougier, Florent; Barbaut, Xavier; Bourguignon, Laurent; Ducher, Michel; Maire, Pascal

    2008-12-01

    The Hill equation was first introduced by A.V. Hill to describe the equilibrium relationship between oxygen tension and the saturation of haemoglobin. In pharmacology, the Hill equation has been extensively used to analyse quantitative drug-receptor relationships. Many pharmacokinetic-pharmacodynamic models have used the Hill equation to describe nonlinear drug dose-response relationships. Although the Hill equation is widely used, its many properties are not all well known. This article aims at reviewing the various properties of the Hill equation. The descriptive aspects of the Hill equation, in particular mathematical and graphical properties, are examined, and related to Hill's original work. The mechanistic aspect of the Hill equation, involving a strong connection with the Guldberg and Waage law of mass action, is also described. Finally, a probabilistic view of the Hill equation is examined. Here, we provide some new calculation results, such as Fisher information and Shannon entropy, and we introduce multivariate probabilistic Hill equations. The main features and potential applications of this probabilistic approach are also discussed. Thus, within the same formalism, the Hill equation has many different properties which can be of great interest for those interested in mathematical modelling in pharmacology and biosciences.
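
    In its common pharmacodynamic form (stated here for reference; the notation is ours, not quoted from the review), the Hill equation relates effect E to concentration C through a baseline effect, a maximal effect, a half-maximal concentration and the Hill coefficient:

    ```latex
    E(C) \;=\; E_{0} \;+\; \frac{E_{\max}\, C^{\gamma}}{C_{50}^{\gamma} + C^{\gamma}}
    ```

    Here E_0 is the baseline effect, E_max the maximal effect, C_50 the concentration producing half of E_max, and the Hill coefficient \gamma controls the steepness (sigmoidicity) of the curve; \gamma = 1 recovers the simple E_max model.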

  9. A refined one-dimensional rotordynamics model with three-dimensional capabilities

    Science.gov (United States)

    Carrera, E.; Filippi, M.

    2016-03-01

    This paper evaluates the vibration characteristics of various rotating structures. The present methodology exploits the one-dimensional Carrera Unified Formulation (1D CUF), which enables one to go beyond the kinematic assumptions of classical beam theories. According to the component-wise (CW) approach, Lagrange-like polynomial expansions (LE) are here adopted to develop the refined displacement theories. The LE elements make it possible to model each structural component of the rotor with an arbitrary degree of accuracy using either different displacement theories or localized mesh refinements. Hamilton's Principle is used to derive the governing equations, which are solved by the Finite Element Method. The CUF one-dimensional theory includes all the effects due to rotation, namely the Coriolis term, spin softening and geometrical stiffening. The numerical simulations have been performed considering a thin ring, discs and bladed-deformable shafts. The effects of the number and the position of the blades on the dynamic stability of the rotor have been evaluated. The results have been compared, when possible, with the 2D and 3D solutions that are available in the literature. CUF models appear very practical to investigate the dynamics of complex rotating structures since they provide 2D and quasi-3D results, while preserving the computational effectiveness of one-dimensional solutions.
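
    The kinematic core of the 1D CUF approach can be stated compactly (our summary of the standard CUF notation, not a formula quoted from the paper): the three-dimensional displacement field is expanded over the cross-section,

    ```latex
    \mathbf{u}(x,y,z) \;=\; F_{\tau}(x,z)\,\mathbf{u}_{\tau}(y), \qquad \tau = 1,\dots,M
    ```

    where F_\tau are the cross-sectional expansion functions (Lagrange polynomials in LE models), \mathbf{u}_\tau(y) are the generalized displacements along the beam axis, and M is the number of expansion terms, so the kinematic richness can be tuned without changing the 1D finite element machinery.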

  10. Concerning the Feasibility of Example-driven Modelling Techniques

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2008-01-01

    We report on a series of experiments concerning the feasibility of example driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using a) traditional spreadsheet modelling, b) example driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several...

  11. Advanced Phase noise modeling techniques of nonlinear microwave devices

    OpenAIRE

    Prigent, M.; J. C. Nallatamby; R. Quere

    2004-01-01

    In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques in nonlinear microwave devices must be supported by a proven combination of the following: - electrical modeling of the low-frequency noise of semiconductor devices, oriented to circuit CAD. The local noise sources will be either cyclostationary noise sources or quasistationary noise sources. - Theoretic...

  12. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  13. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  14. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  15. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  16. BUSINESS MODELS FOR EXTENDING OF 112 EMERGENCY CALL CENTER CAPABILITIES WITH E-CALL FUNCTION INSERTION

    Directory of Open Access Journals (Sweden)

    Pop Dragos Paul

    2010-12-01

    Full Text Available The present article concerns the current status of eCall implementation in Romania and Europe and the proposed business models for implementing the eCall function in Romania. The eCall system is used for reliable transmission, in case of a crash, between the In-Vehicle System and the Public Service Answering Point, via the voice channel of cellular networks and the Public Switched Telephone Network (PSTN). The eCall service can be initiated automatically or manually by the driver. All data presented in this article are part of research carried out by the authors in the sectorial contract "Implementation study regarding the eCall system", with ITS Romania and Electronic Solution as partners and the Romanian Ministry of Communication and Information Technology as beneficiary.

  17. Modeling the capability of penetrating a jammed crowd to eliminate freezing transition

    Science.gov (United States)

    Mohammed Mahmod, Shuaib

    2016-05-01

    The emergence of a frozen state from a jammed state is one of the most interesting phenomena produced when simulating the multidirectional pedestrian flow of high-density crowds. Real-life cases of this phenomenon are not exhaustively treated. Our observations of the Hajj crowd show that the freezing transition does not occur very often; on the contrary, penetrating a jammed crowd is a common occurrence. We believe that the kindness of pedestrians facing others whose walking is blocked is a main factor in eliminating the frozen state as well as in relieving the jammed state. We refine the social force model by incorporating a new social force to enable the simulated pedestrians to mimic the real behavior observed in the Hajj area. Simulations are performed to validate the work qualitatively.
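
    For orientation, a minimal social force update looks like the following (a generic sketch of the Helbing-style model with illustrative parameter values; the additional "yielding" force the paper proposes is not reproduced here):

    ```python
    import numpy as np

    def social_force_step(pos, vel, goals, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3):
        """One explicit Euler step of a basic social force model.

        pos, vel: (N, 2) arrays; goals: (N, 2) target points per pedestrian.
        A [m/s^2] and B [m] set the strength/range of pedestrian repulsion.
        """
        n = len(pos)
        e = goals - pos
        e /= np.linalg.norm(e, axis=1, keepdims=True)      # desired directions
        force = (v0 * e - vel) / tau                       # driving force
        for i in range(n):                                 # pairwise repulsion
            d = pos[i] - np.delete(pos, i, axis=0)
            dist = np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-6)
            force[i] += (A * np.exp(-dist / B) * d / dist).sum(axis=0)
        vel = vel + dt * force
        pos = pos + dt * vel
        return pos, vel
    ```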

  18. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analyses by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but run much faster (microseconds instead of hours/days).
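
    A typical reduced-order workflow of this kind replaces the expensive code with a regression surrogate trained on a handful of runs. Below is a minimal sketch using a Gaussian process (our illustration; the expensive_code function is a hypothetical stand-in for a simulation run):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_code(x):            # stand-in for an hours-long simulation run
        return np.sin(3 * x) + 0.5 * x

    X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)   # a few sampled runs
    y_train = expensive_code(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    X_query = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
    y_mean, y_std = gp.predict(X_query, return_std=True)  # microseconds per query
    ```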

  19. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  20. Separable Watermarking Technique Using the Biological Color Model

    Directory of Open Access Journals (Sweden)

    David Nino

    2009-01-01

    Full Text Available Problem statement: The issue of having robust and fragile watermarking is still a main focus for various researchers worldwide. The performance of a watermarking technique depends on how complex it is as well as how feasible it is to implement. These issues are tested using various kinds of attacks, including geometric and transformation attacks. Watermarking techniques for color images are more challenging than for gray images in terms of complexity and information handling. In this study, we focused on the implementation of a watermarking technique for color images using a biological color model. Approach: We proposed a novel method for watermarking using the spatial and Discrete Cosine Transform (DCT) domains. The proposed method dealt with colored images in the biological color model of Hue, Saturation and Intensity (HSI). The technique was implemented and used against various colored images, including standard ones such as the pepper image. The experiments were done using various attacks such as cropping, transformation and geometry. Results: The method showed high accuracy in data retrieval, and the technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.

  1. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-10-01

    Full Text Available The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object-oriented development. The domain model is a significant artifact that models the problem domain and visually represents real-world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories which are important to business information systems, whereas the noun phrasing technique performs a grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of domain models; however, no empirical evidence exists that evaluates the quality of the resulting domain models constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness and the effort required for its design. The results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations and attributes, in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list. There is no statistically significant difference between the two techniques in the case of correctness.

  2. Using analytic hierarchy process to identify the nurses with high stress-coping capability: model and application.

    Science.gov (United States)

    Pan, Frank F C

    2014-03-01

    Nurses have long been relied upon as the major labor force in hospitals. Given their complicated and highly labor-intensive job requirements, multiple pressures from different sources are inevitable. Success in identifying stresses and coping with them accordingly is important for the job performance of nurses and the service quality of a hospital. The purpose of this research is to identify the determinants of nurses' stress-coping capabilities. A modified Analytic Hierarchy Process (AHP) was adopted. Overall, 105 nurses from several randomly selected hospitals in southern Taiwan were surveyed to generate factors. Ten experienced practitioners were included as the experts in the AHP to produce weights for each criterion. Six nurses from two regional hospitals were then selected to test the model. Four factors were identified at the second level of the hierarchy. The results show that the family factor is the most important, followed by personal attributes. The top three sub-criteria contributing to a nurse's stress-coping capability are children's education, a good career plan, and a healthy family. A practical simulation provided evidence for the usefulness of this model. The study suggests including these key determinants in the practice of human-resource management, restructuring the hospital's organization, and creating an employee-support system as well as a family-friendly working climate. The research provides evidence supporting the usefulness of AHP in identifying the key factors that help stabilize a nursing team.

  3. Using analytic hierarchy process to identify the nurses with high stress-coping capability: model and application.

    Directory of Open Access Journals (Sweden)

    Frank F C Pan

    2014-03-01

    Full Text Available Nurses have long been relied upon as the major labor force in hospitals. Given their complicated and highly labor-intensive job requirements, multiple pressures from different sources are inevitable. Success in identifying stresses and coping with them accordingly is important for the job performance of nurses and the service quality of a hospital. The purpose of this research is to identify the determinants of nurses' stress-coping capabilities. A modified Analytic Hierarchy Process (AHP) was adopted. Overall, 105 nurses from several randomly selected hospitals in southern Taiwan were surveyed to generate factors. Ten experienced practitioners were included as the experts in the AHP to produce weights for each criterion. Six nurses from two regional hospitals were then selected to test the model. Four factors were identified at the second level of the hierarchy. The results show that the family factor is the most important, followed by personal attributes. The top three sub-criteria contributing to a nurse's stress-coping capability are children's education, a good career plan, and a healthy family. A practical simulation provided evidence for the usefulness of this model. The study suggests including these key determinants in the practice of human-resource management, restructuring the hospital's organization, and creating an employee-support system as well as a family-friendly working climate. The research provides evidence supporting the usefulness of AHP in identifying the key factors that help stabilize a nursing team.
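
    The AHP weighting step used in studies like this one can be illustrated as follows (a generic sketch of the standard eigenvector method with a hypothetical pairwise comparison matrix, not the study's data): criterion weights are the normalized principal eigenvector of the comparison matrix, and a consistency ratio checks the judgments.

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons among four factors (Saaty's 1-9 scale).
    A = np.array([[1.0, 3.0, 5.0, 7.0],
                  [1/3, 1.0, 3.0, 5.0],
                  [1/5, 1/3, 1.0, 3.0],
                  [1/7, 1/5, 1/3, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                     # criterion weights

    n = A.shape[0]
    CI = (eigvals[k].real - n) / (n - 1)         # consistency index
    CR = CI / 0.90                               # random index for n = 4 is 0.90
    print(weights, CR)                           # judgments acceptable if CR < 0.1
    ```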

  4. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  5. Capability Maturity Model for Software Engineering Education

    Institute of Scientific and Technical Information of China (English)

    邵栋; 荣国平; 郑滔; 赵志宏

    2007-01-01

    Good software engineering education plays a vital role in improving software development ability. In China, many software engineering education institutions have been established in recent years. Current research on software engineering education mostly focuses on particular courses and bodies of knowledge, and lacks a view of education as an integrated process. How to evaluate institutions' education capability, and how to help them improve it, is a critical issue. This paper describes a model for measuring the maturity of software engineering education processes, which leverages the CMM (Capability Maturity Model) for software processes. We call this model SEEDU-CMM (Capability Maturity Model for Software Engineering Education). SEEDU-CMM is used to evaluate the capability of software engineering education institutions, as well as to guide them in improving their education quality.

  7. A Process and Environment Aware Sierra/SolidMechanics Cohesive Zone Modeling Capability for Polymer/Solid Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Reedy, E. D. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chambers, Robert S. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Hughes, Lindsey Gloe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Kropka, Jamie Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stavig, Mark E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stevens, Mark J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    The performance and reliability of many mechanical and electrical components depend on the integrity of polymer-to-solid interfaces. Such interfaces are found in adhesively bonded joints, encapsulated or underfilled electronic modules, protective coatings, and laminates. The work described herein was aimed at improving Sandia's finite element-based capability to predict interfacial crack growth by 1) using a high fidelity nonlinear viscoelastic material model for the adhesive in fracture simulations, and 2) developing and implementing a novel cohesive zone fracture model that generates a mode-mixity dependent toughness as a natural consequence of its formulation (i.e., generates the observed increase in interfacial toughness with increasing crack-tip interfacial shear). Furthermore, molecular dynamics simulations were used to study fundamental material/interfacial physics so as to develop a fuller understanding of the connection between molecular structure and failure. Also reported are test results that quantify how joint strength and interfacial toughness vary with temperature.

  8. Research on Quantitative Modeling of the Operational Capability of Operational Formations

    Institute of Scientific and Technical Information of China (English)

    周华任; 马亚平; 马元正; 陈国社

    2014-01-01

    Quantifying the operational capability of an operational formation (grouping) is an important aspect of combat simulation. Based on a five-force analysis, the methods for aggregating the operational capability of a formation (grouping) and the factors affecting operations are discussed, and static and dynamic models for quantifying the formation's (grouping's) operational capability are presented.

  9. Modelling and Assessment of the Capabilities of a Supermarket Refrigeration System for the Provision of Regulating Power

    DEFF Research Database (Denmark)

    O'Connell, Niamh; Madsen, Henrik; Pinson, Pierre

    This report presents an analysis of the demand response capabilities of a supermarket refrigeration system, with a particular focus on the suitability of this resource for participation in the regulating power market. An ARMAX model of the system is identified from experimental data, and the model...... is found to have time constants at 10 and 0.12 hours, indicating the potential for the system to provide flexibility in both the long- and short-term. Direct- and indirect-control architectures are employed to simulate the demand response attainable from the refrigeration system. A number of complexities...... are revealed that would complicate the task of devising bids on a conventional power market. These complexities are incurred due to the physical characteristics and constraints of the system as well as the particular characteristics of the control frameworks employed. Simulations considering the provision...

  10. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
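
A minimal sketch of the cumulative-residual check for a simple linear model. Generating null realizations by attaching independent N(0,1) multipliers to the observed residuals is a simplified stand-in for the paper's Gaussian-process simulation; among other things it omits the correction term for estimated regression parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the fitted model is linear in x but the truth is quadratic,
# so the cumulative-residual process should drift away from zero.
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + 0.4 * x**2 + rng.normal(0, 0.5, n)

# Fit the (misspecified) linear model and order residuals by x.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
order = np.argsort(x)
W_obs = np.cumsum(resid[order]) / np.sqrt(n)        # observed process

# Null realizations: perturb each residual with an independent N(0,1)
# multiplier (simplified version of the simulation scheme in the paper).
n_sim = 1000
G = rng.normal(size=(n_sim, n))
W_sim = np.cumsum(G * resid[order], axis=1) / np.sqrt(n)

# Supremum test: where does sup|W_obs| fall among the null suprema?
p_value = np.mean(np.abs(W_sim).max(axis=1) >= np.abs(W_obs).max())
print(f"sup|W| = {np.abs(W_obs).max():.3f}, p ~ {p_value:.3f}")
```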

  11. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft High Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA's Space Exploration program.

  12. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary across the kayaking literature, with inconsistencies that are not conducive to the advancement of applied biomechanics service or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using the key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, and were therefore suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data should be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  13. Nitrate reduction in geologically heterogeneous catchments — A framework for assessing the scale of predictive capability of hydrological models

    Energy Technology Data Exchange (ETDEWEB)

    Refsgaard, Jens Christian, E-mail: jcr@geus.dk [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Auken, Esben [Department of Earth Sciences, Aarhus University (Denmark); Bamberg, Charlotte A. [City of Aarhus (Denmark); Christensen, Britt S.B. [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Clausen, Thomas [DHI, Hørsholm (Denmark); Dalgaard, Esben [Department of Earth Sciences, Aarhus University (Denmark); Effersø, Flemming [SkyTEM Aps, Beder (Denmark); Ernstsen, Vibeke [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Gertz, Flemming [Knowledge Center for Agriculture, Skejby (Denmark); Hansen, Anne Lausten [Department of Geosciences and Natural Resource Management, University of Copenhagen (Denmark); He, Xin [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Jacobsen, Brian H. [Department of Food and Resource Economics, University of Copenhagen (Denmark); Jensen, Karsten Høgh [Department of Geosciences and Natural Resource Management, University of Copenhagen (Denmark); Jørgensen, Flemming; Jørgensen, Lisbeth Flindt [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Koch, Julian [Department of Geosciences and Natural Resource Management, University of Copenhagen (Denmark); Nilsson, Bertel [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Petersen, Christian [City of Odder (Denmark); De Schepper, Guillaume [Université Laval, Québec (Canada); Schamper, Cyril [Department of Earth Sciences, Aarhus University (Denmark); and others

    2014-01-01

    In order to fulfil the requirements of the EU Water Framework Directive nitrate load from agricultural areas to surface water in Denmark needs to be reduced by about 40%. The regulations imposed until now have been uniform, i.e. the same restrictions for all areas independent of the subsurface conditions. Studies have shown that on a national basis about 2/3 of the nitrate leaching from the root zone is reduced naturally, through denitrification, in the subsurface before reaching the streams. Therefore, it is more cost-effective to identify robust areas, where nitrate leaching through the root zone is reduced in the saturated zone before reaching the streams, and vulnerable areas, where no subsurface reduction takes place, and then only impose regulations/restrictions on the vulnerable areas. Distributed hydrological models can make predictions at grid scale, i.e. at much smaller scale than the entire catchment. However, as distributed models often do not include local scale hydrogeological heterogeneities, they are typically not able to make accurate predictions at scales smaller than they are calibrated. We present a framework for assessing nitrate reduction in the subsurface and for assessing at which spatial scales modelling tools have predictive capabilities. A new instrument has been developed for airborne geophysical measurements, Mini-SkyTEM, dedicated to identifying geological structures and heterogeneities with horizontal and lateral resolutions of 30–50 m and 2 m, respectively, in the upper 30 m. The geological heterogeneity and uncertainty are further analysed by use of the geostatistical software TProGS by generating stochastic geological realisations that are soft conditioned against the geophysical data. Finally, the flow paths within the catchment are simulated by use of the MIKE SHE hydrological modelling system for each of the geological models generated by TProGS and the prediction uncertainty is characterised by the variance between the

  14. One technique for refining the global Earth gravity models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted on the example of the new GGMplus global gravity model of the Earth, with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  15. Generalization Technique for the 2D+Scale DHE Data Model

    Science.gov (United States)

    Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised about fulfilling the GIS requirement of retaining detail while minimizing the redundancy of scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, to a 3D model, but the implementation of a scale dimension faces problems due to the limitations and availability of data structures and data models. Various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object, such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with a scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We strongly believe that advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could be used in future 3D-scale applications.

  16. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A corresponding sequence of nested robustly positive invariant sets, either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is determined by a linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
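
A sketch of the on-line interpolation step under stated assumptions: the nested ellipsoidal sets {x : x'P_i x <= 1} and gains K_i below are hypothetical stand-ins for what an off-line design would produce, and the penetration-based interpolation rule is an illustration rather than the paper's exact scheme.

```python
import numpy as np

# Assumed off-line results: nested ellipsoidal invariant sets
# {x : x' P_i x <= 1}, ordered outermost (i = 0) to innermost, with an
# associated state-feedback gain K_i for each (all values hypothetical).
P = [np.diag([0.02, 0.05]), np.diag([0.10, 0.20]), np.diag([0.50, 1.00])]
K = [np.array([[-0.4, -0.6]]),
     np.array([[-0.8, -1.1]]),
     np.array([[-1.5, -2.0]])]

def interpolated_gain(x):
    """Find the smallest invariant set containing x, then interpolate gains.

    Assumes x lies inside the outermost set; the interpolation weight is
    based on how far x has penetrated toward the next inner set.
    """
    levels = [float(x @ Pi @ x) for Pi in P]      # <= 1 means "inside set i"
    i = max(j for j, v in enumerate(levels) if v <= 1.0)
    if i == len(P) - 1:
        return K[i]                               # innermost: fixed gain
    lam = np.clip((1.0 - levels[i]) / (levels[i + 1] - levels[i]), 0.0, 1.0)
    return (1.0 - lam) * K[i] + lam * K[i + 1]

x = np.array([1.5, -0.8])
print("u =", interpolated_gain(x) @ x)
```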

  17. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, in particular the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to better understand the space network. Finally, a small-scale version of the system was modeled, variables were identified, data were gathered, and comparisons were made between actual and theoretical data.
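
A toy linear program in the spirit of such scheduling models, solved with scipy; every number below is hypothetical rather than actual TDRSS data.

```python
from scipy.optimize import linprog

# Toy relay-scheduling LP: three user services compete for contact time
# on two relay antennas. Maximize total weighted service time, expressed
# as minimizing its negative.
c = [-3.0, -2.0, -1.5]                     # value per hour of each service

A_ub = [[1.0, 0.5, 0.0],                   # hours drawn from antenna 1
        [0.0, 0.5, 1.0]]                   # hours drawn from antenna 2
b_ub = [10.0, 10.0]                        # each antenna offers 10 h

bounds = [(0.0, 8.0)] * 3                  # per-service cap of 8 h

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("hours per service:", res.x, "| total value:", -res.fun)
```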

  18. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    OpenAIRE

    N.RATHIKA; Dr.A.Senthil kumar; A.ANUSUYA

    2014-01-01

    This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the Particle Swarm Optimization (PSO) technique with a constriction factor. The main benefit of a polyphase synchronous generator is that the total power circulating in the system can be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as...
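
A minimal sketch of PSO with the Clerc-Kennedy constriction factor named above; the sphere function stands in for the generator power-loss model, which the truncated abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(1)

def losses(x):
    """Stand-in loss surface (sphere function); a generator power-loss
    model would replace this in the application described above."""
    return np.sum(x**2, axis=1)

# Clerc-Kennedy constriction: chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|
c1 = c2 = 2.05
phi = c1 + c2                                           # phi = 4.1
chi = 2.0 / abs(2 - phi - np.sqrt(phi**2 - 4 * phi))    # ~0.7298

n_particles, dim, iters = 30, 5, 200
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), losses(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x = x + v
    f = losses(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best loss found:", pbest_f.min())
```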

  19. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  20. Equivalence and differences between structural equation modeling and state-space modeling techniques

    NARCIS (Netherlands)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, E.L.; Dolan, C.V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and

  2. Modeling resources and capabilities in enterprise architecture: a well-founded ontology-based proposal for ArchiMate

    NARCIS (Netherlands)

    Azevedo, C.L.B.; Iacob, Maria Eugenia; Almeida, J.P.A.; van Sinderen, Marten J.; Ferreira Pires, Luis; Guizzardi, G.

    2015-01-01

    The importance of capabilities and resources for portfolio management and business strategy has been recognized in the management literature. Despite that, little attention has been given to integrate the notions of capabilities and resources in enterprise architecture descriptions. One notable

  3. Modeling resources and capabilities in enterprise architecture: A well-founded ontology-based proposal for ArchiMate

    NARCIS (Netherlands)

    Azevedo, Carlos; Azevedo, Carlos L.B.; Iacob, Maria Eugenia; Andrade Almeida, João; van Sinderen, Marten J.; Ferreira Pires, Luis; Guizzardi, Giancarlo

    2015-01-01

    The importance of capabilities and resources for portfolio management and business strategy has been recognized in the management literature. Despite that, little attention has been given to integrate the notions of capabilities and resources in enterprise architecture descriptions. One notable

  4. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time-consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady-state actuator disk approximation. By transforming the unsteady rotor problem into a steady-state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake among the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with the efficiencies and limitations of each method.

  5. Investigation of model capability in capturing vertical hydrodynamic coastal processes: a case study in the north Adriatic Sea

    Science.gov (United States)

    McKiver, W. J.; Sannino, G.; Braga, F.; Bellafiore, D.

    2016-01-01

    In this work we consider a numerical study of hydrodynamics in the coastal zone using two different models, SHYFEM (shallow water hydrodynamic finite element model) and MITgcm (Massachusetts Institute of Technology general circulation model), to assess their capability to capture the main processes. We focus on the north Adriatic Sea during a strong dense water event that occurred at the beginning of 2012. This serves as an interesting test case to examine both models' strengths and weaknesses, while giving an opportunity to understand how these events affect coastal processes, like upwelling and downwelling, and how they interact with estuarine dynamics. Using the models we examine the impact of setup, surface and lateral boundary treatment, resolution and mixing schemes, as well as assessing the importance of nonhydrostatic dynamics in coastal processes. Both models are able to capture the dense water event, though each displays biases in different regions. The models show large differences in the reproduction of surface patterns, identifying the choice of suitable bulk formulas as a central point for the correct simulation of the thermohaline structure of the coastal zone. Moreover, the different approaches to treating lateral freshwater sources affect the vertical coastal stratification. The results indicate the importance of having high horizontal resolution in the coastal zone, specifically in close proximity to river inputs, in order to reproduce the effect of the complex coastal morphology on the hydrodynamics. A lower resolution offshore is acceptable for the reproduction of the dense water event, even if specific vortical structures are missed. Finally, it is found that nonhydrostatic processes are of little importance for the reproduction of dense water formation in the shelf of the north Adriatic Sea.

  6. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    Full Text Available System identification is a complex optimization problem which has recently attracted attention in science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent finite impulse response (FIR) models, since the former yield more accurate models of physical plants for real-world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solutions of complex optimization problems. They are often designed to meet the requirements of particular problems, because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents a comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
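
A hedged sketch of evolutionary IIR identification using differential evolution, one member of the ECT family the paper surveys (not necessarily one of its exact algorithms); the second-order plant and coefficient bounds are hypothetical.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

# "Unknown" plant: a second-order IIR system excited by white noise.
b_true, a_true = [0.5, -0.3], [1.0, -0.6, 0.25]
u = rng.normal(size=2000)
y = lfilter(b_true, a_true, u)

def mse(theta):
    """Output-error cost for a candidate model b = [b0, b1], a = [1, a1, a2]."""
    b0, b1, a1, a2 = theta
    # Stability guard: triangle conditions for a second-order denominator.
    if abs(a2) >= 1 or abs(a1) >= 1 + a2:
        return 1e6
    y_hat = lfilter([b0, b1], [1.0, a1, a2], u)
    return np.mean((y - y_hat) ** 2)

res = differential_evolution(mse, bounds=[(-1, 1)] * 4, seed=0, tol=1e-10)
print("estimated [b0, b1, a1, a2]:", res.x.round(3))  # -> ~[0.5, -0.3, -0.6, 0.25]
```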

  7. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
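
A small illustration of one inexpensive global technique, standardized regression coefficients, together with the surrogate R^2 that signals when interaction-aware (e.g. variance-based) methods are needed; the three-input "behavior model" is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def behavior_model(x):
    """Stand-in for a behavioral simulation: a nonlinear response with an
    interaction between inputs 0 and 1 (purely illustrative)."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

# Monte Carlo sample of the three uncertain inputs.
n = 5000
X = rng.uniform(-1, 1, (n, 3))
y = behavior_model(X)

# Standardized regression coefficients (SRCs): a cheap global measure
# that works well when the response is close to linear in the inputs.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print("SRCs:", src.round(3))

# A low R^2 of the linear surrogate warns that SRCs miss interactions,
# which is exactly when variance-based (Sobol'-type) methods are needed.
r2 = 1 - np.mean((ys - Xs @ src) ** 2)
print("surrogate R^2:", round(r2, 3))
```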

  8. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
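
A minimal synthetic sketch of two combination schemes in this spirit: a simple multi-model average and a bias-corrected weighted average. The inverse-MSE weights are one common choice, not necessarily the paper's exact WAM estimator, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ensemble: 3 models, a 100-step training window with known
# observations, and a 20-step forecast window.
obs_train = rng.gamma(2.0, 5.0, 100)
preds_train = obs_train + rng.normal([[2.0], [-1.0], [0.5]], 3.0, (3, 100))
preds_fcst = rng.gamma(2.0, 5.0, (3, 20))

# SMA: simple multi-model average.
sma = preds_fcst.mean(axis=0)

# Bias correction: subtract each model's mean training error.
bias = (preds_train - obs_train).mean(axis=1, keepdims=True)
preds_bc = preds_fcst - bias

# Weighted average with weights inversely proportional to training MSE.
mse = ((preds_train - obs_train) ** 2).mean(axis=1)
w = (1.0 / mse) / (1.0 / mse).sum()
wam = w @ preds_bc

print("weights:", w.round(3))
print("SMA vs bias-corrected WAM (first 3 steps):", sma[:3].round(2), wam[:3].round(2))
```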

  9. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
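
The additive-flux idea can be caricatured on a steady 1-D slab where the profile is known in closed form: vary an extra diffusivity until the predicted profile matches the "experimental" one. This toy stands in for the FACETS::Core/DAKOTA machinery described above and ignores the spatial variation of the additional diffusivity the paper reports.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Steady 1-D slab: -d/dx[ D dn/dx ] = S with dn/dx(0) = 0, n(1) = 0.
# For constant D and S the profile is n(x) = S (1 - x^2) / (2 D).
x = np.linspace(0.0, 1.0, 50)
S = 1.0

def profile(D):
    return S * (1.0 - x**2) / (2.0 * D)

# "Experimental" profile from a true diffusivity of 0.8, while the
# transport model under test supplies only D_model = 0.5.
n_exp = profile(0.8)
D_model = 0.5

# Vary the additional diffusivity to minimize the profile mismatch.
cost = lambda D_add: float(np.sum((profile(D_model + D_add) - n_exp) ** 2))
res = minimize_scalar(cost, bounds=(0.0, 2.0), method="bounded")
print(f"additional diffusivity needed: {res.x:.3f}")   # -> ~0.3
```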

  10. A New Mathematical Modeling Technique for Pull Production Control Systems

    Directory of Open Access Journals (Sweden)

    O. Srikanth

    2013-12-01

    Full Text Available The Kanban Control System is widely used to control the release of parts in multistage manufacturing systems operating under a pull production control system. Most work on the Kanban Control System deals with multi-product manufacturing systems. In this paper, we propose a regression modeling technique for a multistage manufacturing system that coordinates the release of parts into each stage of the system with the arrival of customer demands for final products. We also compare two variants of the Kanban Control System model, combining a mathematical model with a Simulink model for the coordination of parts production in an assembly manufacturing system. In both variants, the production of a new subassembly is authorized only when an assembly kanban is available; assembly kanbans become available when finished product is consumed. A simulation environment for the product-line system is generated with the proposed model, and the mathematical model is implemented against the simulation model in MATLAB. Both the simulation and model outputs provide an in-depth analysis of the resulting control system as a model of a product-line system.

  11. Evolution of Modelling Techniques for Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Mikit Kanakia

    2014-07-01

    Full Text Available Service-oriented architecture (SOA) is a software design and architecture pattern based on independent pieces of software providing functionality as services to other applications. The benefit of SOA in the IT infrastructure is to allow parallel use of, and data exchange between, programs which act as services to the enterprise. Unified Modelling Language (UML) is a standardized general-purpose modelling language in the field of software engineering. The UML includes a set of graphic notation techniques to create visual models of object-oriented software systems. We want to make UML available for SOA as well. SoaML (Service oriented architecture Modelling Language) is an open source specification project from the Object Management Group (OMG), describing a UML profile and meta-model for the modelling and design of services within a service-oriented architecture. BPMN was also extended for SOA, but with a few pitfalls. There is a need for a modelling framework dedicated to SOA. Michael Bell authored such a framework, called the Service Oriented Modelling Framework (SOMF), which is dedicated to SOA.

  12. Nitrate reduction in geologically heterogeneous catchments--a framework for assessing the scale of predictive capability of hydrological models.

    Science.gov (United States)

    Refsgaard, Jens Christian; Auken, Esben; Bamberg, Charlotte A; Christensen, Britt S B; Clausen, Thomas; Dalgaard, Esben; Effersø, Flemming; Ernstsen, Vibeke; Gertz, Flemming; Hansen, Anne Lausten; He, Xin; Jacobsen, Brian H; Jensen, Karsten Høgh; Jørgensen, Flemming; Jørgensen, Lisbeth Flindt; Koch, Julian; Nilsson, Bertel; Petersen, Christian; De Schepper, Guillaume; Schamper, Cyril; Sørensen, Kurt I; Therrien, Rene; Thirup, Christian; Viezzoli, Andrea

    2014-01-15

    In order to fulfil the requirements of the EU Water Framework Directive nitrate load from agricultural areas to surface water in Denmark needs to be reduced by about 40%. The regulations imposed until now have been uniform, i.e. the same restrictions for all areas independent of the subsurface conditions. Studies have shown that on a national basis about 2/3 of the nitrate leaching from the root zone is reduced naturally, through denitrification, in the subsurface before reaching the streams. Therefore, it is more cost-effective to identify robust areas, where nitrate leaching through the root zone is reduced in the saturated zone before reaching the streams, and vulnerable areas, where no subsurface reduction takes place, and then only impose regulations/restrictions on the vulnerable areas. Distributed hydrological models can make predictions at grid scale, i.e. at much smaller scale than the entire catchment. However, as distributed models often do not include local scale hydrogeological heterogeneities, they are typically not able to make accurate predictions at scales smaller than they are calibrated. We present a framework for assessing nitrate reduction in the subsurface and for assessing at which spatial scales modelling tools have predictive capabilities. A new instrument has been developed for airborne geophysical measurements, Mini-SkyTEM, dedicated to identifying geological structures and heterogeneities with horizontal and lateral resolutions of 30-50 m and 2 m, respectively, in the upper 30 m. The geological heterogeneity and uncertainty are further analysed by use of the geostatistical software TProGS by generating stochastic geological realisations that are soft conditioned against the geophysical data. Finally, the flow paths within the catchment are simulated by use of the MIKE SHE hydrological modelling system for each of the geological models generated by TProGS and the prediction uncertainty is characterised by the variance between the

  13. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  14. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" on Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. Application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  15. Demonstration of the Recent Additions in Modeling Capabilities for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-03-01

    WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion (WEC) devices. The code uses the MATLAB SimMechanics package to solve the multi-body dynamics and models the wave interactions using hydrodynamic coefficients derived from frequency domain boundary element methods. In this paper, the new modeling features introduced in the latest release of WEC-Sim are presented. The first feature discussed is the conversion of the fluid memory kernel to a state-space approximation that provides significant gains in computational speed. The benefit of the state-space calculation becomes even greater after the hydrodynamic body-to-body coefficients are introduced, as the number of interactions grows rapidly with the number of floating bodies. The final feature discussed is the capability to add Morison elements to provide additional hydrodynamic damping and inertia. This is generally used as a tuning feature, because performance is highly dependent on the chosen coefficients. In this paper, a review of the hydrodynamic theory for each of the features is provided and successful implementation is verified using test cases.
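
For reference, a Morison element contributes the standard inertia-plus-drag load; a short sketch with hypothetical coefficients and geometry (in WEC-Sim, the drag and inertia coefficients are exactly the user-chosen tuning knobs noted above).

```python
import numpy as np

def morison_force(u, u_dot, rho=1025.0, Cd=1.0, Cm=2.0, A=0.5, V=0.1):
    """Morison element load: inertia term plus quadratic drag.

    u, u_dot : relative fluid velocity [m/s] and acceleration [m/s^2]
    rho      : water density [kg/m^3]
    Cd, Cm   : drag and inertia coefficients (the user-tuned knobs)
    A, V     : projected area [m^2] and displaced volume [m^3]
    """
    return rho * Cm * V * u_dot + 0.5 * rho * Cd * A * u * np.abs(u)

# Example: sinusoidal wave kinematics at a single element.
t = np.linspace(0.0, 10.0, 500)
u = 0.8 * np.sin(0.6 * np.pi * t)
u_dot = 0.8 * 0.6 * np.pi * np.cos(0.6 * np.pi * t)
print("peak element force [N]:", round(float(morison_force(u, u_dot).max()), 1))
```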

  16. Capability Maturity Model and Key Process Areas

    Institute of Scientific and Technical Information of China (English)

    吴昆; 余成

    2005-01-01

    This paper studies the five levels of the Capability Maturity Model and the corresponding Key Process Areas, elaborating on their common themes and characteristics. New insights are given into the benefits and tools for organizations determined to comply with the model. Software engineering practices consistent with the model have proved effective, and they help software organizations make the right decisions about how to develop software and regulate their software development practice. By adopting these practices, a software development organization will follow best practice for its deliverables, and customers will be assured of the high-quality product they expect. The model has also been shown to provide good guidance for the management of large software projects and has been recognized, accepted, and implemented by large software companies worldwide.

  17. System identification and model reduction using modulating function techniques

    Science.gov (United States)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are introduced for continuous-time system identification using Fourier-type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact of discrepancies in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its strong noise-rejection properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated, and the AWLS is found to be strongly favored in almost all facets.

  18. Use of surgical techniques in the rat pancreas transplantation model

    Institute of Scientific and Technical Information of China (English)

    Yi Ma; Zhi-Yong Guo

    2008-01-01

    BACKGROUND: Pancreas transplantation is currently considered to be the most reliable and effective treatment for insulin-dependent diabetes mellitus (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years. We investigated the surgical techniques of pancreas transplantation in rats by analysing the difference between cervical segmental pancreas transplantation and abdominal pancreaticoduodenal transplantation. METHODS: Two hundred and forty male adult Wistar rats weighing 200-300 g were used, 120 as donors and 120 as recipients. Sixty cervical segmental pancreas transplants and 60 abdominal pancreaticoduodenal transplants were carried out, and vessel anastomoses were made with microsurgical techniques. RESULTS: The times for donor pancreas harvesting in the cervical and abdominal groups were 31±6 and 37.6±3.8 min, respectively, and the lengths of the recipient operations were 49.2±5.6 and 60.6±7.8 min. The donor operation times were not significantly different (P>0.05), but the recipient operation time in the abdominal group was longer than that in the cervical group (P<0.05). CONCLUSIONS: Both pancreas transplantation methods are stable models for immunological and physiological studies in pancreas transplantation. Since each has its own advantages and disadvantages, the designer can choose the appropriate method according to the requirements of the study.

  19. Crop Yield Forecasted Model Based on Time Series Techniques

    Institute of Scientific and Technical Information of China (English)

    Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming

    2012-01-01

    Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. This paper defines a new concept of crop yield under average climate conditions, a quantity that is affected by the advancement of science and technology. Based on this concept, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using average grain yields of Liaoning Province, China, from 1949 to 2005. The testing combined dynamic n-choosing and micro-tendency rectification, and the average forecasting error was 1.24%. A turning point may occur in the trend line of yield change, in which case an inflexion model was used to handle the turning point.
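
A loose sketch of the windowed trend forecast, with "dynamic n-choosing" read here as scanning the training-window length for the best one-step hindcast; the yield series is synthetic and the study's exact procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic yearly grain yields (t/ha); the study used Liaoning
# provincial averages for 1949-2005.
years = np.arange(1980, 2006)
yields = 3.0 + 0.05 * (years - 1980) + rng.normal(0, 0.2, years.size)

def forecast(n):
    """One-step-ahead forecast from a linear trend fitted to the last n years."""
    p = np.polyfit(years[-n:], yields[-n:], 1)
    return np.polyval(p, years[-1] + 1)

# Pick the window length that best hindcasts the last observed year
# from the n years preceding it.
errors = []
for n in range(5, 20):
    p = np.polyfit(years[-n - 1:-1], yields[-n - 1:-1], 1)
    errors.append((abs(np.polyval(p, years[-1]) - yields[-1]), n))
best_n = min(errors)[1]
print(f"best window n = {best_n}, 2006 forecast = {forecast(best_n):.2f} t/ha")
```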

  20. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of the research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  1. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
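
A small sketch of the kind of computation such dynamical-systems packages automate: locating an equilibrium and judging local stability from Jacobian eigenvalues. The model here is a standard Rosenzweig-MacArthur predator-prey system, not the ratio-dependent model analysed in the paper, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

# Rosenzweig-MacArthur predator-prey model (a standard simple test case).
r, K, a, h, e, m = 1.0, 10.0, 1.0, 0.5, 0.5, 0.75

def f(z):
    x, y = z
    fr = a * x / (1 + a * h * x)          # type II functional response
    return np.array([r * x * (1 - x / K) - fr * y,
                     e * fr * y - m * y])

eq = fsolve(f, [5.0, 2.0])                # interior equilibrium (~[6, 1.6])

def jacobian(z, eps=1e-6):
    """Central-difference Jacobian of f at z."""
    J = np.zeros((2, 2))
    for j in range(2):
        dz = np.zeros(2)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

lams = np.linalg.eigvals(jacobian(eq))
print("equilibrium:", eq.round(3))
print("eigenvalues:", lams.round(4), "locally stable:", bool(np.all(lams.real < 0)))
```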

  2. Theoretical modeling techniques and their impact on tumor immunology.

    Science.gov (United States)

    Woelke, Anna Lena; Murgueitio, Manuela S; Preissner, Robert

    2010-01-01

    Currently, cancer is one of the leading causes of death in industrial nations. While conventional cancer treatment usually results in the patient suffering from severe side effects, immunotherapy is a promising alternative. Nevertheless, some questions remain unanswered with regard to using immunotherapy to treat cancer hindering it from being widely established. To help rectify this deficit in knowledge, experimental data, accumulated from a huge number of different studies, can be integrated into theoretical models of the tumor-immune system interaction. Many complex mechanisms in immunology and oncology cannot be measured in experiments, but can be analyzed by mathematical simulations. Using theoretical modeling techniques, general principles of tumor-immune system interactions can be explored and clinical treatment schedules optimized to lower both tumor burden and side effects. In this paper, we aim to explain the main mathematical and computational modeling techniques used in tumor immunology to experimental researchers and clinicians. In addition, we review relevant published work and provide an overview of its impact to the field.
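
A minimal sketch of the ODE style of tumor-immune model reviewed here, with Kuznetsov-style effector/tumor equations and purely illustrative parameters; real studies fit such parameters to experimental data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Kuznetsov-style effector (E) / tumor (T) model; all parameters are
# purely illustrative, not fitted values.
s, p, g, m, d = 0.1, 1.0, 20.0, 0.02, 0.2   # immune source/stimulation/decay
a, b, n = 0.5, 1e-3, 1.0                    # tumor growth, capacity, kill rate

def rhs(t, y):
    E, T = y
    dE = s + p * E * T / (g + T) - m * E * T - d * E   # stimulation vs exhaustion
    dT = a * T * (1 - b * T) - n * E * T               # logistic growth vs killing
    return [dE, dT]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 100.0], rtol=1e-8)
E_end, T_end = sol.y[:, -1]
print(f"state at t = 200: E = {E_end:.3f}, T = {T_end:.3f}")
# Whether the trajectory settles at dormancy or tumor escape depends
# entirely on these illustrative parameters.
```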

  3. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the geophysical images obtained from three commonly used electrode configurations using an image processing technique in order to assess their capability to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package, commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created: three were used as the input images and the fourth as the output of the combined image. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error, the mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images combined with the maximum operator give the smallest estimated errors. Also, for this combination the displacement of the reconstructed blocks from the true blocks is smallest, and the reconstructed block resistivities are closest to the true blocks, compared with any other combination used. Thus, it is corroborated that combining inverse resistivity models yields more reliable and detailed information about the geologic models than using the individual data sets.

  4. A formal model for integrity protection based on DTE technique

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2006-01-01

    In order to provide integrity protection in a secure operating system satisfying the structured protection class requirements, a DTE-technique-based formal model of integrity protection is proposed, after the implications and structures of the integrity policy have been analyzed in detail. The model consists of basic rules for configuring DTE and a state transition model, which specify, respectively, how domains and types are set and how security invariants obtained from the initial configuration are maintained during system transitions. Ten invariants are introduced in the model; in particular, some new invariants dealing with information flow are proposed, and their relations to corresponding invariants described in the literature are discussed. Thirteen transition rules with well-formed atomicity are presented in an operational manner. The basic security theorems corresponding to these invariants and transition rules are proved. The rationale for the invariants is further explained by analyzing the differences between this model and those described in the literature. Last but not least, future work is outlined; in particular, it is pointed out that the model could be used to analyze SE-Linux security.

  5. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. document topic model (DTM) and word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario that user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving on retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.

  6. Study of the radiation dose reduction capability of a CT reconstruction algorithm: LCD performance assessment using mathematical model observers

    Science.gov (United States)

    Fan, Jiahua; Tseng, Hsin-Wu; Kupinski, Matthew; Cao, Guangzhi; Sainath, Paavana; Hsieh, Jiang

    2013-03-01

    Radiation dose to the patient has become a major concern for Computed Tomography (CT) imaging in clinical practice. Various hardware and algorithm solutions have been designed to reduce dose. Among them, iterative reconstruction (IR) has been widely expected to be an effective dose reduction approach for CT. However, there is no clear understanding of the exact amount of dose saving an IR approach can offer for various clinical applications. Quantitative image quality assessment should be task-based. This work applied mathematical model observers to study the detectability performance of CT scan data reconstructed using an advanced IR approach as well as the conventional filtered back-projection (FBP) approach. The purpose of this work is to establish a practical and robust approach for CT IR detectability image quality evaluation and to assess the dose saving capability of the IR method under study. Low-contrast (LC) objects embedded in head-size and body-size phantoms were imaged multiple times at different dose levels. Independent signal-present and signal-absent pairs were generated for model observer training and testing. Receiver Operating Characteristic (ROC) curves for the location-known-exactly task and localization ROC (LROC) curves for the location-unknown task, along with their corresponding area under the curve (AUC) values, were calculated. Results showed that approximately a threefold dose reduction was achieved using the IR method under study.
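
The AUC computation from independent signal-present and signal-absent score pairs reduces to the Mann-Whitney statistic; the scores below are synthetic stand-ins for model-observer outputs on reconstructed phantom images.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic observer scores for independent signal-absent and
# signal-present images (stand-ins for model-observer outputs on
# FBP- and IR-reconstructed low-contrast phantom scans).
absent = rng.normal(0.0, 1.0, 400)
present = rng.normal(1.2, 1.0, 400)

def auc(pos, neg):
    """AUC = P(present score > absent score), ties counted as 1/2
    (the Mann-Whitney U statistic divided by len(pos) * len(neg))."""
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

# With unit-variance normals separated by d' = 1.2, the theoretical
# value is Phi(1.2 / sqrt(2)) ~ 0.80, so the estimate should land nearby.
print("AUC:", round(auc(present, absent), 3))
```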

  7. Design and modeling of an autonomous multi-link snake robot, capable of 3D-motion

    Directory of Open Access Journals (Sweden)

    Rizkallah Rabel

    2016-01-01

    Full Text Available The paper presents the design of an autonomous, wheelless, mechanical snake robot that was modeled and built at Notre Dame University – Louaize. The robot is capable of 3D motion, with an ability to climb in the z-direction. The snake is made of a series of links, each containing one to three high-torque DC motors and a gearing system. The links are connected to each other through hollow aluminum rods that can be rotated through a 180° span. This allows the snake to move in various environments, including unfriendly and cluttered ones. The front link carries a proximity sensor used to map the environment; the map is sent to a microcontroller which controls and adapts the motion pattern of the snake. The snake can therefore choose to avoid obstacles, or climb over them if their height is within its range. The presented model is made of five links, but this number can be increased since the links' role is repetitive. The novel design is meant to overcome previous limitations by allowing 3D motion through electric actuators at low energy consumption.

  8. ATLAS Event Data Organization and I/O Framework Capabilities in Support of Heterogeneous Data Access and Processing Models

    CERN Document Server

    Malon, David; The ATLAS collaboration

    2016-01-01

    Choices in persistent data models and data organization have significant performance ramifications for data-intensive scientific computing. In experimental high energy physics, organizing file-based event data for efficient per-attribute retrieval may improve the I/O performance of some physics analyses but hamper the performance of processing that requires full-event access. In-file data organization tuned for serial access by a single process may be less suitable for opportunistic sub-file-based processing on distributed computing resources. Unique I/O characteristics of high-performance computing platforms pose additional challenges. The ATLAS experiment at the Large Hadron Collider employs a flexible I/O framework and a suite of tools and techniques for persistent data organization to support an increasingly heterogeneous array of data access and processing models.

  9. Evaluation models for research capabilities and environments

    Institute of Scientific and Technical Information of China (English)

    菊池智子; 荣莉莉; 王众托; 中森羲辉

    2007-01-01

    Knowledge science has been producing results such as knowledge conversion theory, knowledge systematizing methods, and methods for the development of creativity. Recently, knowledge science has been expected to help researchers produce creative theoretical results in important natural sciences. For this purpose, it is necessary to establish a "Ba", that is, an environment or circumstance that supports the development and practice of scientific knowledge creation. The paper proposes a checklist on research capabilities and research environments, based on two knowledge creation models, the i-System and the Triple Helix Model, to design and evaluate "Ba" for technology creation in academia. The authors carried out a questionnaire survey on the research capabilities and environments of graduate students in the research fields of knowledge and information science. This paper analyzes the data with fuzzy correspondence analysis and presents a useful interpretation of the data for supervisors.

  10. Concerning the Feasibility of Example-driven Modelling Techniques

    CERN Document Server

    Thorne, Simon R; Lawson, Z

    2008-01-01

    We report on a series of experiments concerning the feasibility of example driven modelling. The main aim was to establish experimentally within an academic environment: the relationship between error and task complexity using a) Traditional spreadsheet modelling; b) example driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups by comparing accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are thoroughly tested for statistical significance using: the Chi squared test, Fisher's exact test for significance, Cochran's Q test and McNemar's test on difficulty.

  11. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  12. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    N.RATHIKA

    2014-07-01

    Full Text Available This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the particle swarm optimization (PSO) technique with a constriction factor. The use of a polyphase synchronous generator allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to a system shutdown. Process optimization is the discipline of adjusting a process so as to optimize some stipulated set of parameters without violating a constraint. Accurate parameter values can be extracted using PSO and the model can be reformulated accordingly. Modeling and simulation of the machine are carried out, and MATLAB/Simulink has been used to implement and validate the results.

  13. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Barahona, B.; Jonkman, J.; Damiani, R.; Robertson, A.; Hayman, G.

    2014-12-01

    Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
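
    The Craig-Bampton step mentioned above can be written compactly; this is the generic textbook reduction, not SubDyn's implementation, and it assumes symmetric M and K with an invertible interior stiffness partition:

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary, n_modes):
    """Keep boundary DOFs physically; replace interior DOFs by fixed-interface modes."""
    n = M.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    phi_c = -np.linalg.solve(Kii, Kib)        # static constraint modes
    _, phi = eigh(Kii, M[np.ix_(i, i)])       # fixed-interface eigenmodes (ascending)
    phi_m = phi[:, :n_modes]
    nb = len(b)
    T = np.zeros((n, nb + n_modes))           # Craig-Bampton transformation
    T[b, :nb] = np.eye(nb)
    T[np.ix_(i, np.arange(nb))] = phi_c
    T[np.ix_(i, np.arange(nb, nb + n_modes))] = phi_m
    return T.T @ M @ T, T.T @ K @ T           # reduced mass and stiffness

# toy check: 5-DOF spring chain reduced to 1 boundary DOF + 2 interior modes
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Mr, Kr = craig_bampton(M, K, boundary=[0], n_modes=2)
```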

  14. Updates on measurements and modeling techniques for expendable countermeasures

    Science.gov (United States)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  15. A Model of Entrepreneurial Capability Based on a Holistic Review of the Literature from Three Academic Domains

    Science.gov (United States)

    Lewis, Hilary

    2011-01-01

    While there has been a noted variation in the "species" of entrepreneur so that no single list of traits, characteristics or attributes is definitive, it is posited that to be an entrepreneur a certain amount of entrepreneurial capability is required. "Entrepreneurial capability" is a concept developed to place some form of…

  16. Modeling resources and capabilities in enterprise architecture: A well-founded ontology-based proposal for ArchiMate

    NARCIS (Netherlands)

    Azevedo, Carlos L.B.; Iacob, Maria-Eugenia; Almeida, João Paolo A.; Sinderen, van Marten; Ferreira Pires, Luis; Guizzardi, Giancarlo

    2015-01-01

    The importance of capabilities and resources for portfolio management and business strategy has been recognized in the management literature. Despite that, little attention has been given to integrate the notions of capabilities and resources in enterprise architecture descriptions. One notable exce

  17. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  18. TRANSFORMATION OF THE STUDENTS’ INQUIRY CAPABILITY THROUGH MINDMAP EDUCATIVE BY USING GAME OBSERVATION NORMATIVELY (MEGONO LEARNING MODEL

    Directory of Open Access Journals (Sweden)

    Tasiwan Tasiwan

    2016-04-01

    Full Text Available This classroom action research was conducted to analyze the development of students' inquiry abilities in science learning with a learning model of mindmap educative by using game observation normatively (Megono). The study was conducted in three cycles. In each cycle, the students were divided into five groups, each consisting of seven students. Each group was asked to observe and analyze images/photos. After the image observations, they were asked to discuss, write, and compile the information into a concept map. One of the students acted as a representative of the group in a game of observation. Data were obtained through the pre-test, the post-test, and observation by the observers, as well as from photo and video recordings. The results showed that the students' inquiry ability increased by 63.27% at the end of the cycle. At the initial conditions, the ability of the students was low (0.49). After the first cycle, it increased to 0.63 (medium), then to 0.68 (moderate) in the second cycle, and finally to 0.80 (high) in the third cycle. The average increase in every aspect was 68.59%. The highest inquiry capability was achieved in the reasoning aspect, amounting to 89.29 (very high). It is suggested that the observation games be used fairly, with more time adjustment, to obtain higher learning outcomes.

  19. Regional Arctic System Model (RASM): A Tool to Address the U.S. Priorities and Advance Capabilities for Arctic Climate Modeling and Prediction

    Science.gov (United States)

    Maslowski, W.; Roberts, A.; Cassano, J. J.; Gutowski, W. J., Jr.; Nijssen, B.; Osinski, R.; Zeng, X.; Brunke, M.; Duvivier, A.; Hamman, J.; Hossainzadeh, S.; Hughes, M.; Seefeldt, M. W.

    2015-12-01

    The Arctic is undergoing some of the most coordinated rapid climatic changes currently occurring anywhere on Earth, including the retreat of the perennial sea ice cover, which integrates forcing by, exchanges with and feedbacks between atmosphere, ocean and land. While historical reconstructions from Earth System Models (ESMs) are in broad agreement with these changes, the rate of change in ESMs generally remains outpaced by observations. Reasons for that relate to a combination of coarse resolution, inadequate parameterizations, under-represented processes and a limited knowledge of physical interactions. We demonstrate the capability of the Regional Arctic System Model (RASM) in addressing some of the ESM limitations in simulating observed variability and trends in arctic surface climate. RASM is a high resolution, pan-Arctic coupled climate model with the sea ice and ocean model components configured at an eddy-permitting resolution of 1/12° and the atmosphere and land hydrology model components at 50 km resolution, which are all coupled at 20-minute intervals. RASM is an example of a limited-area, process-resolving, fully coupled ESM, which due to the constraints from boundary conditions facilitates detailed comparisons with observational statistics that are not possible with ESMs. The overall goal of RASM is to address key requirements published in the Navy Arctic Roadmap: 2014-2030 and in the Implementation Plan for the National Strategy for the Arctic Region, regarding the need for advanced modeling capabilities for operational forecasting and strategic climate predictions through 2030. The main science objectives of RASM are to advance understanding and model representation of critical physical processes and feedbacks of importance to sea ice thickness and area distribution. RASM results are presented to quantify relative contributions by (i) resolved processes and feedbacks as well as (ii) sensitivity to space dependent sub-grid parameterizations to better

  20. Comparative molecular modeling of Anopheles gambiae CYP6Z1, a mosquito P450 capable of metabolizing DDT.

    Science.gov (United States)

    Chiu, Ting-Lan; Wen, Zhimou; Rupasinghe, Sanjeewa G; Schuler, Mary A

    2008-07-01

    One of the challenges faced in malarial control is the insecticide resistance that has developed in the mosquitoes that are vectors for this disease. Anopheles gambiae, which has been the major mosquito vector of the malaria parasite Plasmodium falciparum in Africa, has over the years developed resistance to insecticides including dieldrin, 1,1-bis(p-chlorophenyl)-2,2,2-trichloroethane (DDT), and pyrethroids. Previous microarray studies using fragments of 230 An. gambiae genes identified five P450 loci, including CYP4C27, CYP4H15, CYP6Z1, CYP6Z2, and CYP12F1, that showed significantly higher expression in the DDT-resistant ZAN/U strain compared with the DDT-susceptible Kisumu strain. To predict whether the CYP6Z1 and CYP6Z2 proteins might potentially metabolize DDT, we generated and compared molecular models of these two proteins with and without DDT docked in their catalytic sites. This comparison indicated that, although the two CYP6Z proteins share high sequence identity, their metabolic profiles are likely to differ dramatically: the larger catalytic site of CYP6Z1 is potentially capable of accommodating DDT, whereas the more constrained catalytic site of CYP6Z2 is not likely to permit DDT metabolism. Heterologous expression of these proteins has corroborated these predictions: only CYP6Z1 is capable of metabolizing DDT. Overlays of the models indicate that slight differences in the backbone of SRS1 and variations of side chains in SRS2 and SRS4 account for the significant differences in their catalytic site volumes and DDT-metabolizing capacities. These data identify CYP6Z1 as an important target for inhibitor design aimed at inactivating insecticide-metabolizing P450s in natural populations of this malarial mosquito.

  1. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in the bone field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for the evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45 ± 8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were created by injecting tumor cells directly into the SD rat femur with an inoculation needle. In the first step of the experiment, 2 × 10^5 to 1 × 10^6 UMR106 cells in 50 microliters were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8 × 10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r = 0.942), making ultrasound a reliable technique for measuring the tumors at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously with time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and a high lung metastasis frequency. The present rat osteosarcoma model was shown to be feasible: the take rate was high, surgical mortality was

  2. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The requirements engineering process starts with the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, which is then used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  3. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and the stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5-8) hours: 84.5 (range 62-110) minutes for the gastric dissection, 56 (range 28-80) minutes for the gastric suturing, and 170.6 (range 70-200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from the gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  4. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a suitable dynamical-system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
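
    The receding-horizon idea reads naturally as a small loop; the toy below uses an assumed linear demand curve and a brute-force price grid rather than the paper's formulation, but it shows the MPC pattern of optimizing over a short horizon and applying only the first move:

```python
import itertools
import numpy as np

a, b = 100.0, 2.0                      # assumed linear demand d = a - b*p
capacity, horizon, steps = 400.0, 3, 6
price_grid = np.linspace(5, 40, 8)

remaining, revenue = capacity, 0.0
for t in range(steps):
    best, best_seq = -np.inf, None
    for seq in itertools.product(price_grid, repeat=horizon):
        prices = np.array(seq)
        demand = np.clip(a - b * prices, 0.0, None)
        if demand.sum() <= remaining and (prices * demand).sum() > best:
            best, best_seq = float((prices * demand).sum()), seq
    if best_seq is None:               # no feasible plan left
        break
    p = best_seq[0]                    # receding horizon: apply the first price only
    sold = min(max(a - b * p, 0.0), remaining)
    remaining -= sold
    revenue += p * sold
print(revenue, remaining)
```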

  5. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    High Frequency Structure Simulator (HFSS), followed by the electrical characterisation of synthesised Pt NP films using the novel miniature fabricated OCP technique. The results obtained from this technique provided the inspiration to synthesise and evaluate the microwave properties of Au NPs. The findings from this technique provided the motivation to characterise both the Pt and Au NP films using the DR technique. Unlike the OCP technique, the DR method is highly sensitive, but the achievable measurement accuracy is limited since this technique does not have the broadband frequency capability of the OCP method. The results obtained from the DR technique show a good agreement with the theoretical prediction. In the last phase of this research, a further validation of the aperture admittance models on different types of OCP (i.e., RG-405 and RG-402 cables and an SMA connector) has been carried out on the developed 3D full wave models using HFSS software, followed by the development of universal models for the aforementioned OCPs based on the same 3D full wave models.

  6. THE TECHNIQUE OF ANALYSIS OF SOFTWARE OF ON-BOARD COMPUTERS OF AIR VESSEL TO ABSENCE OF UNDECLARED CAPABILITIES BY SIGNATURE-HEURISTIC WAY

    Directory of Open Access Journals (Sweden)

    Viktor Ivanovich Petrov

    2017-01-01

    Full Text Available The article considers the issues of data safety in civil aviation aircraft onboard computers. In information security, undeclared capabilities stand for technical equipment or software capabilities that are not mentioned in the documentation. Documentation and test content requirements are imposed during software certification. Documentation requirements include the composition of documents and the content of control items (specification, description and program code, the source code). Test requirements include: static analysis of program code (including monitoring of the compliance of the sources with their load modules); and dynamic analysis of source code (including monitoring of execution routes). Currently, there are no complex measures for checking onboard computer software. There are no rules and regulations that allow controlling the software of foreign-produced aircraft, and actually obtaining the software is difficult. Consequently, the author suggests developing the basics of aviation rules and regulations which allow the programs of CA aircraft onboard computers to be analyzed. If no software source codes are available, two approaches to code analysis are used: structural static and dynamic analysis of the source code; and signature-heuristic analysis of potentially dangerous operations. Static analysis determines the behavior of the program by reading the program code (without running the program), which is represented in assembler language as a disassembly listing. Program tracing is performed by dynamic analysis. The ability to detect undeclared capabilities in aircraft software using an interactive disassembler is considered in this article.

  7. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory]; Pelak, Robert A. [Los Alamos National Laboratory]

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
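
    Because the binomial model is so simple, the whole analysis fits in a few lines. A hedged reconstruction (the uniform Beta(1,1) prior and the 0.95 threshold are this sketch's choices, not necessarily the paper's): with k wins in n trials the posterior is Beta(1+k, 1+n-k), improvement is declared when P(θ > 1/2 | data) exceeds the threshold, and the pre-data "power" at a hypothetical true θ sums binomial probabilities over the declaring outcomes.

```python
from scipy.stats import beta, binom

def confidence_improved(k, n):
    """Posterior P(theta > 1/2) after k wins in n trials, Beta(1,1) prior."""
    return beta(1 + k, 1 + n - k).sf(0.5)

def power(theta_true, n, conf=0.95):
    """Pre-data probability of reaching the confidence level at sample size n."""
    winning_ks = [k for k in range(n + 1) if confidence_improved(k, n) >= conf]
    return sum(binom(n, theta_true).pmf(k) for k in winning_ks)

print(confidence_improved(14, 20))        # ~0.96: strong evidence of improvement
print(power(0.55, 20), power(0.55, 200))  # near theta = 1/2, power grows slowly
```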

  8. Power Capability Investigation Based on Electrothermal Models of Press-pack IGBT Three-Level NPC and ANPC VSCs for Multimegawatt Wind Turbines

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Helle, Lars; Munk-Nielsen, Stig

    2012-01-01

    are addressed in this study for the three-level neutral-point-clamped voltage source converter (3L-NPC-VSC) and 3L Active NPC VSC (3L-ANPC-VSC) with press-pack insulated gate bipolar transistors employed as a grid-side converter. In order to investigate these VSCs' power capabilities under various operating ... conditions with respect to these limiting factors, a power capability generation algorithm based on the converter electrothermal model is developed. Built considering the VSCs' operation principles and physical structure, the model is validated by a 2 MV·A single-phase 3L-ANPC-VSC test setup. The power...

  9. Designing and Validating a Model for Measuring Sustainability of Overall Innovation Capability of Small and Medium-Sized Enterprises

    Directory of Open Access Journals (Sweden)

    Mohd Nizam Ab Rahman

    2015-01-01

    Full Text Available The business environment is currently characterized by intensified competition at both the national and firm levels. Many studies have shown that innovation positively affects firms in enhancing their competitiveness. Innovation is a dynamic process that requires continuous, evolving, and mastered management. Evaluating the sustainability of the overall innovation capability of a business is a major means of determining how well the firm effectively and efficiently manages its innovation process. A psychometrically valid scale for evaluating the sustainability of the overall innovation capability of a firm is still lacking in the current innovation literature. Thus, this study developed a reliable and valid scale for measuring the sustainability of overall innovation capability construct. The unidimensionality, reliability, and several validity components of the developed scale were tested using data collected from 175 small and medium-sized enterprises in Iran. A series of systematic statistical analyses were performed. Results of the reliability measures, exploratory and confirmatory factor analyses, and several components of validity tests strongly supported an eight-dimensional (8D) scale for measuring the sustainability of overall innovation capability construct. The dimensions of the scale were strategic management, supportive culture and structure, resource allocation, communication and networking, knowledge and technology management, idea management, project development, and commercialization capabilities.

  10. Modeling Systematic Operational Capability Based on Network Theories

    Institute of Scientific and Technical Information of China (English)

    郭英然; 王志敏

    2012-01-01

    Based on an analysis of the 'transition' characteristic of informatized systematic operational capability, a networked structural model of the operational system is developed and analyzed. Capability models of the sensing network, the command-and-control network, and the execution network are studied, and an integrated model of systematic operational capability based on network theories is constructed. Combined with a simulation example, the emergence mechanism of the information structural capability of the operational system is discussed, offering reference and support for the integrated study of systematic operational capability.

  11. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  12. The Miniature X-ray Solar Spectrometer (MinXSS) CubeSats: spectrometer characterization techniques, spectrometer capabilities, and solar science objectives

    Science.gov (United States)

    Moore, Christopher S.; Woods, Thomas N.; Caspi, Amir; Mason, James P.

    2016-07-01

    The Miniature X-ray Solar Spectrometer (MinXSS) CubeSats are twin 3U CubeSats. The first of the twins (MinXSS-1) launched in December 2015 to the International Space Station for deployment in mid-2016. Both MinXSS CubeSats utilize a commercial off-the-shelf (COTS) X-ray spectrometer from Amptek to measure the solar irradiance from 0.5 to 30 keV with a nominal 0.15 keV FWHM spectral resolution at 5.9 keV, and a LASP-developed X-ray broadband photometer with similar spectral sensitivity. MinXSS design and development has involved over 40 graduate students supervised by professors and professionals at the University of Colorado at Boulder. The majority of previous solar soft X-ray measurements have been made either at high spectral resolution with a narrow bandpass or with spectrally integrating (broadband) photometers. MinXSS will conduct unique soft X-ray measurements with moderate spectral resolution over a relatively large energy range to study solar active region evolution, solar flares, and the effects of solar soft X-ray emission on Earth's ionosphere. This paper focuses on the X-ray spectrometer instrument characterization techniques involving radioactive X-ray sources and the National Institute of Standards and Technology (NIST) Synchrotron Ultraviolet Radiation Facility (SURF). Spectrometer spectral response, spectral resolution, and response linearity are discussed, as well as future solar science objectives.

  13. Best Practices for Evaluating the Capability of Nondestructive Evaluation (NDE) and Structural Health Monitoring (SHM) Techniques for Damage Characterization (Post-Print)

    Science.gov (United States)

    2016-02-10

    [Abstract not available for this record; only fragments of its reference list survive, citing work on model-assisted probabilistic reliability assessment and the probability of detection for structural health monitoring systems (Review of Progress in QNDE; International Workshop on SHM).]

  14. Test technique development in interference free testing, flow visualization, and remote control model technology at Langley's Unitary Plan wind tunnel

    Science.gov (United States)

    Corlett, W. A.

    1979-01-01

    A metric half-span model is considered as a means of mechanical support for a wind-tunnel model which allows measurement of aerodynamic forces and moments without support interference or model distortion. This technique can be applied to interference-free propulsion models. The vapor screen method of flow visualization at supersonic Mach numbers is discussed. The use of smoke instead of water vapor as a medium to produce the screen is outlined. Vapor screen data are being used in the development of analytical vortex tracking programs. Test results for a remote control model system are evaluated. Detailed control effectiveness and cross-coupling data were obtained with a single run. For the afterbody tail configuration, tested control boundaries at several roll orientations were established utilizing the facility's on-line capability to 'fly' the model in the wind tunnel.

  15. An Enhanced Capability to Model How Compensation Policy Affects U.S. Department of Defense Civil Service Retention and Cost

    Science.gov (United States)

    2016-01-01

    bill for the force, consistently with OPM's actuarial practice. This gives an amount—an accrual charge—sufficient to cover the retirement...capability could also be extended to other occupational areas within DoD, including the cyber workforce; to other pay systems, such as the science

  16. An ontology-based well-founded proposal for modeling resources and capabilities in ArchiMate

    NARCIS (Netherlands)

    Azevedo, Carlos L.B.; Iacob, Maria Eugenia; Andrade Almeida, João; van Sinderen, Marten J.; Ferreira Pires, Luis; Guizzardi, G.; Gasevic, D; Hatala, M.; Motahari Nezhad, H.R.; Reichert, M.U.

    The importance of capabilities and resources for portfolio management and business strategy has been recognized in the management literature and on a recent proposal to extend ArchiMate, which includes these concepts in order to improve ArchiMate’s coverage of portfolio management. This paper

  17. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. They are also used to make predictive estimates of viral loads and T-cell counts and to construct optimal controls for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification
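
    The parameter-selection step can be illustrated with a local sensitivity analysis (this finite-difference/SVD sketch is a generic stand-in, not the dissertation's algorithm; the toy model and tolerances are assumptions):

```python
import numpy as np

def model(p, t):
    """Toy response; p[2] is deliberately near-unidentifiable."""
    return p[0] * np.exp(-p[1] * t) + 1e-6 * p[2]

def sensitivity_matrix(p, t, h=1e-6):
    """Forward-difference Jacobian of the response with respect to p."""
    cols = []
    for j in range(len(p)):
        dp = p.copy()
        dp[j] += h
        cols.append((model(dp, t) - model(p, t)) / h)
    return np.column_stack(cols)

t = np.linspace(0, 5, 50)
p0 = np.array([2.0, 0.7, 3.0])
S = sensitivity_matrix(p0, t)
_, sing, _ = np.linalg.svd(S, full_matrices=False)
print(sing / sing[0])   # last ratio ~1e-6: p[2] barely moves the response,
                        # so fix it before Bayesian calibration of the rest
```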

  18. An inter-comparison exercise on the capabilities of CFD models to predict the short and long term distribution and mixing of hydrogen in a garage

    NARCIS (Netherlands)

    Venetsanos, A.G.; Papanikolaou, E.; Delichatsios, M.; Garcia, J.; Hansen, O.R.; Heitsch, M.; Huser, A.; Jahn, W.; Jordan, T.; Lacome, J.-M.; Ledin, H.S.; Makarov, D.; Middha, P.; Studer, E.; Tchouvelev, A.V.; Teodorczyk, A.; Verbecke, F.; Voort, M.M. van der

    2009-01-01

    The paper presents the results of the CFD inter-comparison exercise SBEP-V3, performed within the activity InsHyde, internal project of the HySafe network of excellence, in the framework of evaluating the capability of various CFD tools and modelling approaches in predicting the short and long term

  19. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
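
    The validation route named above, comparing analytic design sensitivities against an overall finite difference, is easy to picture in miniature; the toy response below is an assumption and nothing here is MSC/NASTRAN:

```python
def tip_deflection(length):
    """Toy structural response ~ L^3 (load, E, I folded into one constant)."""
    return length ** 3 / 3.0

def analytic_sensitivity(length):
    """Hand-derived d(deflection)/d(length), the 'design sensitivity'."""
    return length ** 2

L0, dL = 2.0, 1e-5
fd = (tip_deflection(L0 + dL) - tip_deflection(L0 - dL)) / (2 * dL)
print(analytic_sensitivity(L0), fd)   # central difference agrees to ~1e-10
```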

  20. Semantic techniques for enabling knowledge reuse in conceptual modelling

    NARCIS (Netherlands)

    Gracia, J.; Liem, J.; Lozano, E.; Corcho, O.; Trna, M.; Gómez-Pérez, A.; Bredeweg, B.

    2010-01-01

    Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich

  1. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    Science.gov (United States)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite™ Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
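
    The dispatch logic can be sketched as follows; the roughness metric, the threshold, and the Jacobi-iteration fill are assumptions standing in for the Harris implementation, and the exemplar branch is stubbed:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def boundary_roughness(dsm, void_mask, ring=3):
    """Std. dev. of valid heights in a ring around the void."""
    ring_mask = binary_dilation(void_mask, iterations=ring) & ~void_mask
    return float(np.nanstd(dsm[ring_mask]))

def pde_fill(dsm, void_mask, iters=500):
    """Steady state of the heat equation inside the void (Jacobi iterations)."""
    out = dsm.copy()
    out[void_mask] = np.nanmean(dsm[~void_mask])
    for _ in range(iters):
        smoothed = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                           + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[void_mask] = smoothed[void_mask]   # update void cells only
    return out

def exemplar_fill(dsm, void_mask):
    raise NotImplementedError("copy best-matching valid patches into the void")

def fill_void(dsm, void_mask, threshold=1.5):
    if boundary_roughness(dsm, void_mask) < threshold:
        return pde_fill(dsm, void_mask)    # smooth neighborhood: PDE technique
    return exemplar_fill(dsm, void_mask)   # high-frequency neighborhood: exemplar
```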

  2. Capabilities for innovation

    DEFF Research Database (Denmark)

    Nielsen, Peter; Nielsen, René Nesgaard; Bamberger, Simon Grandjean

    2012-01-01

    is a survey that collected information from 601 firms belonging to the private urban sector in Denmark. The survey was carried out in late 2010. Keywords: dynamic capabilities/innovation/globalization/employee/employer cooperation/Nordic model Acknowledgment: The GOPA study was financed by grant 20080053113....../12-2008-09 from the Foundation for Research of Work Environment, Denmark. The funders played no part in the conduct or reporting of the research....

  3. Capability of the ‘Ball-Berry' model for predicting stomatal conductance and water use efficiency of potato leaves under different irrigation regimes

    DEFF Research Database (Denmark)

    Liu, Fulai; Andersen, Mathias Neumann; Jensen, Christian Richardt

    2009-01-01

    of soil water deficits on gs, a simple equation modifying the slope (m) based on the mean soil water potential (Ψs) in the soil columns was incorporated into the original BB-model. Compared with the original BB-model, the modified BB-model showed better predictability for both gs and WUE of potato leaves......The capability of the ‘Ball-Berry' model (BB-model) in predicting stomatal conductance (gs) and water use efficiency (WUE) of potato (Solanum tuberosum L.) leaves under different irrigation regimes was tested using data from two independent pot experiments in 2004 and 2007. Data obtained from 2004....... The simulation results showed that the modified BB-model better simulated gs for the NI and DI treatments than the original BB-model, whilst the two models performed equally well for predicting gs of the FI and PRD treatments. Although both models had poor predictability for WUE (0.47 
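
    In sketch form, the original relation and the soil-water modification described above look like this; the exponential slope-reduction function, its coefficient beta, and the default m and g0 values are placeholders for the paper's fitted function of mean soil water potential, not the authors' equation:

```python
import math

def ball_berry(An, hs, Cs, m=9.0, g0=0.01):
    """Original Ball-Berry form: gs = g0 + m * An * hs / Cs
    (An: net assimilation, hs: leaf-surface relative humidity, Cs: CO2)."""
    return g0 + m * An * hs / Cs

def ball_berry_modified(An, hs, Cs, psi_s, m=9.0, g0=0.01, beta=0.8):
    """Slope reduced as mean soil water potential psi_s (MPa, <= 0) drops."""
    return g0 + m * math.exp(beta * psi_s) * An * hs / Cs

# at psi_s = -0.5 MPa the effective slope falls to ~67% of the well-watered value
print(ball_berry(12, 0.7, 360), ball_berry_modified(12, 0.7, 360, psi_s=-0.5))
```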

  4. Research on the Construction of a Comprehensive Evaluation Model of Grain Logistics Capability

    Institute of Scientific and Technical Information of China (English)

    洪运华

    2012-01-01

    Grain logistics capability is an important component of the core competence of the grain industry chain or of a grain enterprise, an important means of achieving competitive advantage, and a comprehensive reflection of the overall capability of the grain logistics system. The paper defines the concept of grain logistics capability, introduces indices for its comprehensive evaluation, and constructs a comprehensive evaluation model of grain logistics capability.

  5. Is Technology Good for Us? A Eudaimonic Meta-Model for Evaluating the Contributive Capability of Technologies for a Good Life.

    Science.gov (United States)

    Spence, Edward H

    2011-12-01

    The title refers to the question addressed in this paper, namely, to what degree, if any, technology, including nanotechnologies, in the form of products and processes, is capable of contributing to a good life. To answer that question, the paper develops a meta-normative model whose primary purpose is to determine the essential conditions that any normative theory of the Good Life and Technology (T-GLAT) must adequately address in order to account for, explain, and evaluate the Contributive Capability of Technology for a Good Life (CCT-GL). CCT-GL is understood as the capability of any technological product or process, in its design and/or its use, to contribute in some way, if at all, to the good life of individuals and society at large. In this paper, the all-embracing term "technology" will be used to refer to both the products and the processes of different technologies.

  6. DEVELOPMENT OF A CHARACTER VALUE INTERNALIZATION MODEL IN HISTORY LEARNING THROUGH THE VALUE CLARIFICATION TECHNIQUE MODEL

    Directory of Open Access Journals (Sweden)

    Nunuk Suryani

    2013-07-01

    Full Text Available This research produced a product model for the internalization of character values in history learning through the Value Clarification Technique (VCT), as a revitalization of the role of social studies in the formation of national character. In general, the research consisted of three stages: (1) a pre-survey that identified the current condition of character-value learning in history instruction; (2) development of a model based on the pre-survey findings, using the Dick and Carey model; and (3) validation of the model. The development stage was implemented with limited trials and extensive testing. The findings lead to the conclusion that the VCT model is effective for internalizing character values in history learning and for increasing the role of history learning in the formation of student character. It can be concluded that the VCT model is effective for improving the quality of the processes and products of character-value learning in junior secondary (SMP) social studies, especially in Surakarta. Keywords: internalization, character values, VCT model, history learning, social studies learning

  7. Establishment of C6 brain glioma models through stereotactic technique for laser interstitial thermotherapy research

    Directory of Open Access Journals (Sweden)

    Jian Shi

    2015-01-01

    Conclusion: The rat C6 brain glioma model established in this study is well suited to the study of LITT for glioma. The infrared thermography technique measured temperature conveniently and effectively; it is noninvasive, and the data obtained can be further processed using the software employed in LITT research. To measure deep-tissue temperature, combining thermocouples with infrared thermography would give better results.

  8. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency-weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency-weighted model reduction techniques include the lack of stability of reduced-order models (for the two-sided weighting case) and the absence of frequency response error bounds. A new frequency-weighted technique for balanced model reduction of discrete-time systems is proposed. The proposed technique guarantees stable reduced-order models even when two-sided weightings are present. An efficient technique for computing the frequency-weighted Gramians is also proposed. Results are compared with other existing frequency-weighted model reduction techniques for discrete-time systems. Moreover, the proposed technique yields frequency response error bounds.
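
    The unweighted core that such frequency-weighted schemes build on is ordinary discrete-time balanced truncation; the sketch below shows that core only (it omits the weighting Gramians the paper contributes) and assumes a stable A:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Plain discrete-time balanced truncation to r states; A must be stable."""
    P = solve_discrete_lyapunov(A, B @ B.T)     # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)   # observability Gramian
    L = cholesky(P, lower=True)
    U, s, _ = svd(L.T @ Q @ L)                  # s = squared Hankel singular values
    T = L @ U / s ** 0.25                       # balancing transformation
    Tinv = np.linalg.inv(T)
    Ar = (Tinv @ A @ T)[:r, :r]
    Br = (Tinv @ B)[:r]
    Cr = (C @ T)[:, :r]
    return Ar, Br, Cr, np.sqrt(s)               # Hankel singular values

rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((6, 6)) / np.sqrt(6)   # almost surely stable here
B, C = rng.standard_normal((6, 2)), rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
print(hsv)   # small trailing values justify the truncation
```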

  9. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductiometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  10. New Developments and Techniques in Structural Equation Modeling

    CERN Document Server

    Marcoulides, George A

    2001-01-01

    Featuring contributions from some of the leading researchers in the field of SEM, most chapters are written by the author(s) who originally proposed the technique and/or contributed substantially to its development. Content highlights include latent varia

  11. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  12. Subsurface flow and transport of organic chemicals: an assessment of current modeling capability and priority directions for future research (1987-1995)

    Energy Technology Data Exchange (ETDEWEB)

    Streile, G.P.; Simmons, C.S.

    1986-09-01

    Theoretical and computer modeling capability for assessing the subsurface movement and fate of organic contaminants in groundwater was examined. This study is particularly concerned with energy-related organic compounds that could enter a subsurface environment and move as components of a liquid phase separate from groundwater. The migration of organic chemicals in an aqueous dissolved state is certainly part of this more general scenario; however, modeling of the transport of chemicals in aqueous solution has already been the subject of several reviews, so this study emphasizes the multiphase scenario. The study was initiated to focus on the important physicochemical processes that control the behavior of organic substances in groundwater systems, to evaluate the theory describing these processes, and to search for and evaluate computer codes that implement models correctly conceptualizing the problem situation. This study is not a code inventory, and no effort was made to identify every available code capable of representing a particular process.

  13. State-of-the-art Tools and Techniques for Quantitative Modeling and Analysis of Embedded Systems

    DEFF Research Database (Denmark)

    Bozga, Marius; David, Alexandre; Hartmanns, Arnd;

    2012-01-01

    This paper surveys well-established and recent tools and techniques developed for the design of rigorous embedded systems. We will first survey UPPAAL and MODEST, two tools capable of dealing with both timed and stochastic aspects. Then, we will overview the BIP framework for modular design...

  14. Research on Models and Trends of Opening Service Capability

    Institute of Scientific and Technical Information of China (English)

    陆钢; 杨新章; 李丽; 李蓉蓉

    2011-01-01

    Mobile Internet and cloud computing have created new business models and pose new requirements for opening service capabilities. Differentiated capabilities have become the core competencies with which operators, Internet companies, and IT companies compete for new markets. This article describes the current status of service capability opening, analyzes practice cases of benchmark enterprises in this field, summarizes three modes of capability opening based on the commonalities and particularities of the cases, and finally discusses development trends.

  15. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  16. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
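
    A minimal sketch of the kind of multi-data-type cost function and uninformed random-search baseline the abstract describes, with a toy stand-in for the expensive 3-dimensional simulation; the function names, bounds, and normalization are assumptions, not the authors' code.

        import numpy as np

        rng = np.random.default_rng(0)
        obs = {"chlorophyll": rng.gamma(2.0, 1.0, 50), "nitrate": rng.gamma(3.0, 2.0, 50)}

        def run_model(p):
            # Toy stand-in for an expensive 3-D simulation: two outputs driven by p
            return {"chlorophyll": p[0] * np.ones(50), "nitrate": p[1] * np.ones(50)}

        def cost(p):
            # Each data type is normalized by its variance so units are comparable
            return sum(np.mean((run_model(p)[k] - y) ** 2) / np.var(y)
                       for k, y in obs.items())

        # Uninformed baseline: random search within bounds, ~100 model runs
        best_p, best_c = None, np.inf
        for _ in range(100):
            p = rng.uniform([0.0, 0.0], [10.0, 20.0])
            if cost(p) < best_c:
                best_p, best_c = p, cost(p)
        print(best_p, best_c)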

  17. Validation and comparison of two-phase flow modeling capabilities of CFD, sub channel and system codes by means of post-test calculations of BFBT transient tests

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez, E-mail: victor.sanchez@kit.edu

    2013-10-15

    Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify differences between the code predictions. • All three scales predict results in good agreement with experiment. • Subcooled boiling models are identified as a field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermal-hydraulic simulation tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales – NEPTUNE_CFD as a fine-mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub-channel codes, and TRACE as a system code – are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigation is the well-known and widely used database provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine trip and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvement.

  18. MODFLOW-2000, the U.S. Geological Survey modular ground-water model -- Documentation of the Model-Layer Variable-Direction Horizontal Anisotropy (LVDA) capability of the Hydrogeologic-Unit Flow (HUF) package

    Science.gov (United States)

    Anderman, Evan R.; Kipp, K.L.; Hill, Mary C.; Valstar, Johan; Neupauer, R.M.

    2002-01-01

    This report documents the model-layer variable-direction horizontal anisotropy (LVDA) capability of the Hydrogeologic-Unit Flow (HUF) Package of MODFLOW-2000. The LVDA capability allows the principal directions of horizontal anisotropy to be different than the model-grid row and column directions, and for the directions to vary on a cell-by-cell basis within model layers. The HUF Package calculates effective hydraulic properties for model grid cells based on hydraulic properties of hydrogeologic units with thicknesses defined independently of the model layers. These hydraulic properties include, among other characteristics, hydraulic conductivity and a horizontal anisotropy ratio. Using the LVDA capability, horizontal anisotropy direction is defined for model grid cells within which one or more hydrogeologic units may occur. For each grid cell, the HUF Package calculates the effective horizontal hydraulic conductivity along the primary direction of anisotropy using the hydrogeologic-unit hydraulic conductivities, and calculates the effective horizontal hydraulic conductivity along the orthogonal anisotropy direction using the effective primary direction hydraulic conductivities and horizontal anisotropy ratios. The direction assigned to the model layer effective primary hydraulic conductivity is specified using a new data set defined by the LVDA capability, when active, to calculate coefficients needed to solve the ground-water flow equation. Use of the LVDA capability is illustrated in four simulation examples, which also serve to verify hydraulic heads, advective-travel paths, and sensitivities calculated using the LVDA capability. This version of the LVDA capability defines variable-direction horizontal anisotropy using model layers, not the hydrogeologic units defined by the HUF Package. This difference needs to be taken into account when designing model layers and hydrogeologic units to produce simulations that accurately represent a given field problem. This

  19. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance… While the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger… the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region…
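
    A minimal sketch of the two-part boosted-tree workflow the abstract describes (a presence/absence model plus an abundance model), using scikit-learn on synthetic data; the features and their relationship to ticks are illustrative assumptions.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

        rng = np.random.default_rng(1)
        X = rng.random((500, 4))            # e.g. NDVI, temperature, forest cover, elevation
        present = X[:, 2] + 0.2 * rng.standard_normal(500) > 0.5
        abundance = np.where(present, 40 * X[:, 2] + 5 * rng.random(500), 0.0)

        dist_model = GradientBoostingClassifier().fit(X, present)        # distribution map
        abund_model = GradientBoostingRegressor().fit(X[present], abundance[present])
        print(dist_model.predict_proba(X[:3])[:, 1])   # predicted probability of ticks
        print(abund_model.predict(X[:3]))              # predicted relative abundance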

  20. Verification and Validation Strategy for Implementation of Hybrid Potts-Phase Field Hydride Modeling Capability in MBM

    Energy Technology Data Exchange (ETDEWEB)

    Jason D. Hales; Veena Tikare

    2014-04-01

    The Used Fuel Disposition (UFD) program has initiated a project to develop a hydride formation modeling tool using a hybrid Potts-phase field approach. The Potts model is incorporated in the SPPARKS code from Sandia National Laboratories. The phase field model is provided through MARMOT from Idaho National Laboratory.

  1. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
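
    A crude 1-D illustration of the adaptive-extent idea the abstract describes: a child domain around a local modification grows until the altered response has decayed below a tolerance at its boundaries. The decay profile, tolerance, and growth step are all assumptions for illustration, not ADCIRC++ behavior.

        import numpy as np

        # Toy altered response around a local modification (stand-in for the
        # change in hydrodynamics caused by a design/failure scenario)
        x = np.linspace(-5, 5, 1001)
        response = np.exp(-np.abs(x))

        lo, hi, tol = 480, 520, 1e-2             # initial child extent and tolerance
        while response[lo] > tol or response[hi] > tol:
            lo, hi = max(lo - 10, 0), min(hi + 10, 1000)   # grow until contained
        print("child domain spans x in", (x[lo], x[hi]))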

  2. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for off-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  3. User's instructions for the Guyton circulatory dynamics model using the Univac 1110 batch and demand processing (with graphic capabilities)

    Science.gov (United States)

    Archer, G. T.

    1974-01-01

    The model presents a systems analysis of human circulatory regulation based almost entirely on experimental data and the cumulative present knowledge of the many facets of the circulatory system. The model itself consists of eighteen different major systems that enter into circulatory control. These systems are grouped into sixteen distinct subprograms that are melded together to form the total model. The model develops circulatory and fluid regulation in a simultaneous manner; thus, the effects of hormonal and autonomic control, electrolyte regulation, and excretory dynamics are all important and are all included in the model.

  4. A Hybrid Multi-Criteria Decision Model for Technological Innovation Capability Assessment: Research on Thai Automotive Parts Firms

    Directory of Open Access Journals (Sweden)

    Sumrit Detcharat

    2013-01-01

    Full Text Available The efficient appraisal of the technological innovation capabilities (TICs) of enterprises is an important factor in enhancing competitiveness. This study aims to evaluate and rank TIC evaluation criteria in order to provide a practical insight into systematic analysis, by gathering qualified experts' opinions and combining three multi-criteria decision-making methods. First, the Fuzzy Delphi method is used to screen TIC evaluation criteria from recently published research. Second, the Analytic Hierarchy Process is utilized to compute the relative importance weights. Last, the VIKOR method is used to rank the enterprises on the TIC evaluation criteria. An empirical study of Thai automotive parts firms illustrates the proposed methods. This study found that the interaction between criteria is essential and influences TICs; furthermore, this ranking-based development of TIC assessment is also a key management tool that can facilitate and offer a new mindset for the management of other related industries.
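
    A minimal sketch of the AHP step named in the abstract: criteria weights from a pairwise-comparison matrix via the principal eigenvector, with a consistency check. The judgment matrix is illustrative, and the Fuzzy Delphi and VIKOR steps are omitted.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])       # illustrative expert judgments
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                           # relative importance weights
        CI = (vals.real[k] - len(A)) / (len(A) - 1)
        print("weights:", w, "consistency ratio:", CI / 0.58)  # RI = 0.58 for n = 3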

  5. Multidisciplinary Optimization of Tilt Rotor Blades Using Comprehensive Composite Modeling Technique

    Science.gov (United States)

    Chattopadhyay, Aditi; McCarthy, Thomas R.; Rajadas, John N.

    1997-01-01

    An optimization procedure is developed for addressing the design of composite tilt rotor blades. A comprehensive technique, based on a higher-order laminate theory, is developed for the analysis of the thick composite load-carrying sections, modeled as box beams, in the blade. The theory, which is based on a refined displacement field, is a three-dimensional model which approximates the elasticity solution so that the beam cross-sectional properties are not reduced to one-dimensional beam parameters. Both inplane and out-of-plane warping are included automatically in the formulation. The model can accurately capture the transverse shear stresses through the thickness of each wall while satisfying stress free boundary conditions on the inner and outer surfaces of the beam. The aerodynamic loads on the blade are calculated using the classical blade element momentum theory. Analytical expressions for the lift and drag are obtained based on the blade planform with corrections for the high lift capability of rotor blades. The aerodynamic analysis is coupled with the structural model to formulate the complete coupled equations of motion for aeroelastic analyses. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt rotor aircraft. The objective functions include the figure of merit in hover and the high speed cruise propulsive efficiency. Structural, aerodynamic and aeroelastic stability criteria are imposed as constraints on the problem. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective function problem. The search direction is determined by the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt rotor blade.

  6. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative...

  7. Mobile Test Capabilities

    Data.gov (United States)

    Federal Laboratory Consortium — The Electrical Power Mobile Test capabilities are utilized to conduct electrical power quality testing on aircraft and helicopters. This capability allows that the...

  8. Reform the Training Model to Improve the Post-graduate's Innovative Capability

    Institute of Scientific and Technical Information of China (English)

    王根顺; 曹瑞红

    2009-01-01

    Improving the innovative capability of post-graduates is an inevitable requirement of enhancing the quality of post-graduate education and of implementing the strategy of rejuvenating China through science and technology. The training model plays an important role in fostering post-graduates' innovative capability. Starting from the importance and urgency of cultivating innovative capability, this paper discusses the relationship between the training model and innovative capability and puts forward some suggestions for reforming the training model.

  9. Discrete Analytic Domains: a New Technique for Groundwater Flow Modeling in Layered, Anisotropic, and Heterogeneous Aquifer Systems

    Science.gov (United States)

    Fitts, C. R.

    2004-12-01

    A new technique for modeling groundwater flow with analytic solutions is presented. It allows modeling of layered aquifer systems with complex heterogeneity and anisotropy. As with previous AEM techniques, flow in each layer is modeled with two-dimensional analytic solutions, there is high accuracy and resolution, the domain is not discretized into grid blocks or elements, and the modeled area can easily be altered and expanded in the midst of the modeling process. This method differs from previous Analytic Element Method (AEM) techniques by allowing general anisotropy conditions in the aquifer. The flow field is broken into discrete polygonal domains, each with its own definition of isotropic or anisotropic aquifer parameters. An advantage of this approach is that the anisotropy orientation and ratio can differ from one domain to another, a capability not possible with the "infinite domain" of previous AEM formulations. With this approach, the potential and discharge vector functions at a point are the sum of contributions from elements within or on the boundary of the domain containing the point. Unlike previous AEM schemes, elements beyond the domain boundary don't contribute to these functions. Once a solution is in hand, less computation is required to evaluate heads and discharges because fewer elements contribute to the equations. This computational efficiency could prove a significant advantage in large regional models and in solute transport models that use a flow model's velocity field. This technique allows multiple aquifer layers to be stacked vertically, and it has the novel ability to have more layers in the area of interest than in distant areas. This feature can save significant computation and input effort by concentrating layering detail only where it is needed and warranted by data. The boundary conditions at line element boundaries are approximated, including the continuity of flow and head across heterogeneity boundaries. By using high

  10. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are used to construct meta-models in the numerical simulation of structures. In addition to network structures and training algorithms, training samples also greatly affect the accuracy of neural network models. In this paper, the main existing sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial basis neural network models for two benchmarks: an antenna model and an aircraft model. Results show that uniform design, in which both the number of samples and the mean square error of the network models are considered, is the best sampling technique for neural-network-based meta-model building.
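
    A minimal sketch of how the choice of sampling design affects surrogate accuracy. As assumptions: a Latin hypercube stands in for the uniform design, scikit-learn's KernelRidge with an RBF kernel stands in for the radial basis network, and the "simulation" is an analytic toy function.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.kernel_ridge import KernelRidge

        f = lambda X: np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])   # stand-in simulation
        X_rand = np.random.default_rng(1).random((40, 2))          # random selection
        X_lhs = qmc.LatinHypercube(d=2, seed=1).random(40)         # space-filling design

        X_test = np.random.default_rng(2).random((500, 2))
        for name, X in [("random", X_rand), ("LHS", X_lhs)]:
            m = KernelRidge(kernel="rbf", gamma=10.0, alpha=1e-4).fit(X, f(X))
            print(name, "test MSE:", np.mean((m.predict(X_test) - f(X_test)) ** 2))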

  11. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  12. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  13. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In the architectural survey field, there has been a spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side issues like the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full-frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  14. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, there has been a spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side issues like the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full-frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  15. Re-Framing Inclusive Education through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    Science.gov (United States)

    Dalkilic, Maryam; Vadeboncoeur, Jennifer A.

    2016-01-01

    Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational…

  16. Advancing hydrometeorological prediction capabilities through standards-based cyberinfrastructure development: The community WRF-Hydro modeling system

    Science.gov (United States)

    Gochis, David; Parodi, Antonio; Hooper, Rick; Jha, Shantenu; Zaslavsky, Ilya

    2013-04-01

    The need for improved assessments and predictions of many key environmental variables is driving a multitude of model development efforts in the geosciences. The proliferation of weather and climate impacts research is driving a host of new environmental prediction model development efforts as society seeks to understand how climate does and will impact key societal activities and resources and, in turn, how human activities influence climate and the environment. This surge in model development has highlighted the role of model coupling as a fundamental activity itself and, at times, a significant bottleneck in weather and climate impacts research. This talk explores some of the recent activities and progress that has been made in assessing the attributes of various approaches to the coupling of physics-based process models for hydrometeorology. One example modeling system that is emerging from these efforts is the community 'WRF-Hydro' modeling system which is based on the modeling architecture of the Weather Research and Forecasting (WRF). An overview of the structural components of WRF-Hydro will be presented as will results from several recent applications which include the prediction of flash flooding events in the Rocky Mountain Front Range region of the U.S. and along the Ligurian coastline in the northern Mediterranean. Efficient integration of the coupled modeling system with distributed infrastructure for collecting and sharing hydrometeorological observations is one of core themes of the work. Specifically, we aim to demonstrate how data management infrastructures used in the US and Europe, in particular data sharing technologies developed within the CUAHSI Hydrologic Information System and UNIDATA, can interoperate based on international standards for data discovery and exchange, such as standards developed by the Open Geospatial Consortium and adopted by GEOSS. The data system we envision will help manage WRF-Hydro prediction model data flows, enabling

  17. Experimental technique of calibration of symmetrical air pollution models

    Indian Academy of Sciences (India)

    P Kumar

    2005-10-01

    Based on the inherent property of symmetry of air pollution models, a Symmetrical Air Pollution Model Index (SAPMI) has been developed to calibrate the accuracy of predictions made by such models where the initial quantity of release at the source is not known. For exact prediction the value of SAPMI should be equal to 1. If the predicted values are overestimates then SAPMI is > 1, and if they are underestimates then SAPMI is < 1. A specific design for the layout of receptors has been suggested as a requirement for the calibration experiments. SAPMI is applicable to all variations of symmetrical air pollution dispersion models.
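
    The abstract does not give SAPMI's exact formula; a natural reading, sketched below as an assumption, is a predicted-to-observed ratio aggregated over the symmetric receptor layout, so that 1 means exact, above 1 overestimation, and below 1 underestimation.

        import numpy as np

        def sapmi(predicted, observed):
            # Hypothetical aggregation: ratio of total predicted to total observed
            predicted, observed = np.asarray(predicted), np.asarray(observed)
            return predicted.sum() / observed.sum()

        print(sapmi([1.2, 0.8, 1.1], [1.0, 1.0, 1.0]))   # ~1.03: slight overestimation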

  18. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  19. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  20. Novel anisotropic continuum-discrete damage model capable of representing localized failure of massive structures. Part II: identification from tests under heterogeneous stress field

    CERN Document Server

    Kucerova, A; Ibrahimbegovic, A; Zeman, J; Bittnar, Z

    2009-01-01

    In Part I of this paper we have presented a simple model capable of describing the localized failure of a massive structure. In this part, we discuss the identification of the model parameters from two kinds of experiments: a uniaxial tensile test and a three-point bending test. The former is used only for illustration of material parameter response dependence, and we focus mostly upon the latter, discussing the inverse optimization problem for which the specimen is subjected to a heterogeneous stress field.

  1. Research on a Software Test Capability Maturity Model

    Institute of Scientific and Technical Information of China (English)

    王峰; 谷天阳; 佟金荣

    2011-01-01

    As the most important means of software quality assurance, software testing has been receiving adequate attention in the community. To verify that software satisfies users' requirements and to expose as many of its defects as possible within limited time, staffing and budget, ways of improving the efficiency and quality of testing must be found. Therefore, not only must testing techniques, methods and tools be researched, but stress must also be laid on the management and improvement of the software test process. Drawing on CMM, CMMI, TMMi, GJB 5000-2003 and GJB 5000A-2008, this paper presents a software test capability maturity model suitable for third-party testing organizations, helping them to improve their test processes and helping responsible institutes to select testing organizations.

  2. Evaluating the capability of regional-scale air quality models to capture the vertical distribution of pollutants

    Directory of Open Access Journals (Sweden)

    E. Solazzo

    2013-06-01

    Full Text Available This study is conducted in the framework of the Air Quality Modelling Evaluation International Initiative (AQMEII) and aims at the operational evaluation of an ensemble of 12 regional-scale chemical transport models used to predict air quality over the North American (NA) and European (EU) continents for 2006. The modelled concentrations of ozone and CO, along with the meteorological fields of wind speed (WS) and direction (WD), temperature (T), and relative humidity (RH), are compared against high-quality in-flight measurements collected by instrumented commercial aircraft as part of the Measurements of OZone, water vapour, carbon monoxide and nitrogen oxides by Airbus In-service airCraft (MOZAIC) programme. The evaluation is carried out for five model domains positioned around four major airports in NA (Portland, Philadelphia, Atlanta, and Dallas) and one in Europe (Frankfurt), from the surface to 8.5 km. We compare mean vertical profiles of modelled and measured variables for all airports to compute error and variability statistics, perform analysis of altitudinal error correlation, and examine the seasonal error distribution for ozone, including an estimation of the bias introduced by the lateral boundary conditions (BCs). The results indicate that model performance is highly dependent on the variable, location, season, and height (e.g. surface, planetary boundary layer (PBL) or free troposphere) being analysed. While model performance for T is satisfactory at all sites (correlation coefficient in excess of 0.90 and fractional bias ≤ 0.01 K), WS is not replicated as well within the PBL (exhibiting a positive bias in the first 100 m and also underestimating observed variability), while above 1000 m the model performance improves (correlation coefficient often above 0.9). The WD at NA airports is found to be biased in the PBL, primarily due to an overestimation of westerly winds. RH is modelled well within the PBL, but in the free troposphere large

  3. Study on modeling of vehicle dynamic stability and control technique

    Institute of Scientific and Technical Information of China (English)

    GAO Yun-ting; LI Pan-feng

    2012-01-01

    In order to solve the problem of enhancing vehicle driving stability and safety, which has been a hot research question for scientists and engineers in the vehicle industry, a new control method was investigated. After an analysis of tire motion characteristics and the vehicle stress analysis, a tire model based on the extended Pacejka magic formula, combining longitudinal and lateral motion, was developed, and a nonlinear vehicle dynamic stability model with seven degrees of freedom was built. A new model-reference adaptive control scheme was designed, which takes the slip angle and yaw rate of the vehicle body as the output and feedback variables and adjusts the torque of the vehicle body to control vehicle stability. A simulation model was also built in Matlab/Simulink to evaluate this control scheme. It is made up of many mathematical subsystem models, mainly including the tire model module, the yaw moment calculation module, the center-of-mass parameter calculation module, the tire parameter calculation module, and so forth. The severe lane-change simulation results show that this vehicle model and the model-reference adaptive control method perform excellently.

  4. Development of a model capable of predicting the performance of piston ring-cylinder liner-like tribological interfaces

    DEFF Research Database (Denmark)

    Felter, C.L.; Vølund, A.; Imran, Tajammal

    2010-01-01

    on a measured temperature only; thus, it is not necessary to include the energy equation. Conservation of oil is ensured throughout the domain by considering the amount of oil outside the lubricated interface. A model for hard contact through asperities is also included. Second, a laboratory-scale test rig… The work described in this article addresses the subject from both an experimental and a theoretical perspective. First, a one-dimensional numerical model based on the Reynolds equation is presented. It uses a pressure-density relation for the modelling of cavitation. The viscosity is assumed to depend

  5. Variational Data Assimilation Technique in Mathematical Modeling of Ocean Dynamics

    Science.gov (United States)

    Agoshkov, V. I.; Zalesny, V. B.

    2012-03-01

    Problems of the variational data assimilation for the primitive equation ocean model constructed at the Institute of Numerical Mathematics, Russian Academy of Sciences are considered. The model has a flexible computational structure and consists of two parts: a forward prognostic model, and its adjoint analog. The numerical algorithm for the forward and adjoint models is constructed based on the method of multicomponent splitting. The method includes splitting with respect to physical processes and space coordinates. Numerical experiments are performed with the use of the Indian Ocean and the World Ocean as examples. These numerical examples support the theoretical conclusions and demonstrate the rationality of the approach using an ocean dynamics model with an observed data assimilation procedure.

  6. Wave Propagation in Fluids Models and Numerical Techniques

    CERN Document Server

    Guinot, Vincent

    2007-01-01

    This book presents the physical principles of wave propagation in fluid mechanics and hydraulics. The mathematical techniques that allow the behavior of the waves to be analyzed are presented, along with existing numerical methods for the simulation of wave propagation. Particular attention is paid to discontinuous flows, such as steep fronts and shock waves, and their mathematical treatment. A number of practical examples are taken from various areas fluid mechanics and hydraulics, such as contaminant transport, the motion of immiscible hydrocarbons in aquifers, river flow, pipe transients an

  7. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tessellation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified that is extremely sensitive to structural changes in the system. This quantity, which is derived from the edge-length distribution function of the Voronoi polygons, displays a dramatic change at the solid-liquid transition. This is found to be more useful for locating the transition than either the defect density...
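
    A minimal sketch of the quantity the abstract identifies: the edge-length distribution of the Voronoi tessellation of disk centres. Random points stand in for configurations from an actual NpT simulation.

        import numpy as np
        from scipy.spatial import Voronoi

        pts = np.random.default_rng(3).random((400, 2))   # stand-in disk centres
        vor = Voronoi(pts)
        # Collect lengths of all finite ridges (edges of the Voronoi polygons)
        edges = [np.linalg.norm(vor.vertices[i] - vor.vertices[j])
                 for i, j in vor.ridge_vertices if i >= 0 and j >= 0]
        print("mean edge length:", np.mean(edges))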

  8. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end-use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
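
    A minimal sketch of the Monte Carlo step: end-use parameters drawn from probability distributions and aggregated into household demand. The distributions and their parameters below are illustrative assumptions, not the study's calibrated values.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 1000                                                  # simulated households
        shower_lpd = rng.lognormal(mean=3.7, sigma=0.4, size=N)   # litres/day
        toilet_lpd = rng.normal(loc=70, scale=15, size=N).clip(0)
        washer_lpd = rng.choice([40, 60], size=N, p=[0.35, 0.65]) # HE vs standard washer
        outdoor_lpd = rng.gamma(shape=2.0, scale=100.0, size=N)

        total = shower_lpd + toilet_lpd + washer_lpd + outdoor_lpd
        print("mean use (L/day):", total.mean(), " 90th pct:", np.percentile(total, 90))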

  9. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    Directory of Open Access Journals (Sweden)

    Pang Jon Fea

    2009-10-01

    Full Text Available This paper explains in detail the solutions to the forward and inverse problems faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed, and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problem are important in reconstructing the tomographic image.

  10. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise time and the fall time of that instance. The system repeats this process for every delay value in the standard delay file (310) that corresponds to an instance of a logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.

  11. Modeling the effects of land cover and use on landscape capability for urban ungulate populations: Chapter 11

    Science.gov (United States)

    Underwood, Harold; Kilheffer, Chellby R.; Francis, Robert A.; Millington, James D. A.; Chadwick, Michael A.

    2016-01-01

    Expanding ungulate populations are causing concerns for wildlife professionals and residents in many urban areas worldwide. Nowhere is the phenomenon more apparent than in the eastern US, where urban white-tailed deer (Odocoileus virginianus) populations are increasing. Most habitat suitability models for deer have been developed in rural areas and across large (>1000 km2) spatial extents. Only recently have we begun to understand the factors that contribute to space use by deer over much smaller spatial extents. In this study, we explore the concepts, terminology, methodology and state-of-the-science in wildlife abundance modeling as applied to overabundant deer populations across heterogeneous urban landscapes. We used classified, high-resolution digital orthoimagery to extract landscape characteristics in several urban areas of upstate New York. In addition, we assessed deer abundance and distribution in 1-km2 blocks across each study area from either aerial surveys or ground-based distance sampling. We recorded the number of detections in each block and used binomial mixture models to explore important relationships between abundance and key landscape features. Finally, we cross-validated statistical models of abundance and compared covariate relationships across study sites. Study areas were characterized along a gradient of urbanization based on the proportions of impervious surfaces and natural vegetation which, based on the best-supported models, also distinguished blocks potentially occupied by deer. Models performed better at identifying occurrence of deer and worse at predicting abundance in cross-validation comparisons. We attribute poor predictive performance to differences in deer population trajectories over time. The proportion of impervious surfaces often yielded better predictions of abundance and occurrence than did the proportion of natural vegetation, which we attribute to a lack of certain land cover classes during cold and snowy winters
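
    A minimal sketch of the core of a binomial mixture (N-mixture) likelihood like the one the abstract applies per survey block: latent abundance N ~ Poisson(lambda), repeated counts y ~ Binomial(N, p), marginalized over N. The counts and parameter values are illustrative.

        import numpy as np
        from scipy.stats import poisson, binom

        def block_likelihood(y_counts, lam, p, n_max=200):
            # Marginal likelihood of repeat counts at one block, summing over latent N
            Ns = np.arange(max(y_counts), n_max)
            prior = poisson.pmf(Ns, lam)                          # abundance prior
            like = np.prod([binom.pmf(y, Ns, p) for y in y_counts], axis=0)
            return np.sum(prior * like)

        print(block_likelihood([12, 9, 14], lam=20, p=0.6))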

  12. Liquid propellant analogy technique in dynamic modeling of launch vehicle

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The coupling effects among lateral mode,longitudinal mode and torsional mode of a launch vehicle cannot be taken into account in traditional dynamic analysis using lateral beam model and longitudinal spring-mass model individually.To deal with the problem,propellant analogy methods based on beam model are proposed and coupled mass-matrix of liquid propellant is constructed through additional mass in the present study.Then an integrated model of launch vehicle for free vibration analysis is established,by which research on the interactions between longitudinal and lateral modes,longitudinal and torsional modes of the launch vehicle can be implemented.Numerical examples for tandem tanks validate the present method and its necessity.

  13. Evaluation of dynamical models: dissipative synchronization and other techniques.

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.
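
    A minimal sketch of the validation idea on the Hénon map, one of the abstract's own benchmarks: the candidate model is driven by the measured data through a dissipative coupling term, and the residual one-step error serves as the validation statistic. The coupling gain and parameter values are illustrative assumptions.

        import numpy as np

        def henon(x, y, a=1.4, b=0.3):
            return 1 - a * x**2 + y, b * x

        # "Measured" data from the true system (a = 1.4)
        xs, ys = [0.1], [0.1]
        for _ in range(2000):
            x, y = henon(xs[-1], ys[-1])
            xs.append(x)
            ys.append(y)
        data = np.array(xs)

        def sync_error(a_model, k=0.7):
            xm, ym, err = 0.0, 0.0, []
            for t in range(1000, 2000):
                xm, ym = henon(xm, ym, a=a_model)
                xm += k * (data[t] - xm)          # dissipative coupling to the data
                err.append((data[t + 1] - henon(xm, ym, a=a_model)[0]) ** 2)
            return np.mean(err)

        print(sync_error(1.4), sync_error(1.2))   # valid model syncs; biased one does not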

  14. Evaluation of dynamical models: Dissipative synchronization and other techniques

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A. B.

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams—which in turn is much greater than, say, that of correlation dimension—but at a much lower computational cost.

  15. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    Science.gov (United States)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy of the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, the three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps for 2020 and 2030 were created. The validation findings confirm that integrating the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process have improved the simulation capability of the CA-MC model. This study provides a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
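
    A minimal sketch of the Markov-chain half of a CA-MC model: land-cover transition probabilities estimated from two classified maps. The toy rasters and the three-class scheme are assumptions for illustration.

        import numpy as np

        lc2000 = np.random.default_rng(0).integers(0, 3, (100, 100))   # 3 land-cover classes
        lc2010 = lc2000.copy()
        flip = np.random.default_rng(1).random((100, 100)) < 0.1
        lc2010[flip] = 1                                               # growth toward urban

        T = np.zeros((3, 3))
        for a, b in zip(lc2000.ravel(), lc2010.ravel()):
            T[a, b] += 1
        T /= T.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
        print(T)                               # P(class_j in 2010 | class_i in 2000)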

  16. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires an understanding of the snow cover dynamics on a watershed. In our work, modeling of snow cover depth is based on both available databases of hydro-meteorological observations and easily accessible scientific software, which allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on snow cover and surface meteorological parameters obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of snow cover depth is based on a set of freely distributed, present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that combining modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and of its melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of the combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
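
    A minimal sketch of the best-performing approach named in the abstract, gradient boosting over decision trees, regressing daily snow depth on surface meteorology. The synthetic features and their relation to depth are assumptions standing in for the station records.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        n = 2000
        rng = np.random.default_rng(0)
        X = np.column_stack([rng.normal(-2, 8, n),     # air temperature
                             rng.gamma(1.5, 2.0, n),   # precipitation
                             rng.uniform(0, 1, n)])    # day-of-year, scaled
        depth = np.clip(30 - 3 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 5, n), 0, None)

        X_tr, X_te, y_tr, y_te = train_test_split(X, depth, random_state=0)
        gb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
        print("R^2 on held-out days:", gb.score(X_te, y_te))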

  17. Advanced modeling techniques in application to plasma pulse treatment

    Science.gov (United States)

    Pashchenko, A. F.; Pashchenko, F. F.

    2016-06-01

    Different approaches are considered for the simulation of plasma pulse treatment processes. The assumption of significant non-linearity of the processes involved in the treatment of oil wells has been confirmed. The method of functional transformations and fuzzy logic methods are suggested for the construction of a mathematical model. It is shown that models based on fuzzy logic are able to provide satisfactory accuracy in the simulation and prediction of the observed non-linear processes.

  18. Recent Additions in the Modeling Capabilities of an Open-Source Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-04-20

    WEC-Sim is a midfidelity numerical tool for modeling wave energy conversion devices. The code uses the MATLAB SimMechanics package to solve multibody dynamics and models wave interactions using hydrodynamic coefficients derived from frequency-domain boundary-element methods. This paper presents the new modeling features introduced in the latest release of WEC-Sim. The first feature discussed is the conversion of the fluid memory kernel to a state-space form. This enhancement offers a substantial computational benefit once hydrodynamic body-to-body coefficients are introduced and the number of interactions increases exponentially with each additional body. Additional features include the ability to calculate the wave-excitation forces based on the instantaneous incident wave angle, allowing the device to weathervane, as well as to import a user-defined wave elevation time series. A review of the hydrodynamic theory for each feature is provided and the successful implementation is verified using test cases.
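
    A minimal sketch of why the state-space conversion helps: the radiation memory convolution mu(t) = ∫ K(t-s) v(s) ds is replaced by propagating a small state vector, one cheap update per time step instead of a growing convolution. The (A, B, C) triple is an assumed low-order kernel fit, not WEC-Sim's own values.

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0], [-4.0, -1.0]])   # assumed kernel fit: K(t) = C exp(At) B
        B = np.array([0.0, 1.0])
        C = np.array([2.0, 0.0])

        dt, T = 0.01, 10.0
        t = np.arange(0.0, T, dt)
        v = np.sin(1.5 * t)                         # body velocity history

        # (a) direct convolution with the memory kernel
        K = np.array([C @ expm(A * tk) @ B for tk in t])
        mu_conv = dt * np.convolve(K, v)[: len(t)]

        # (b) equivalent state-space propagation (no convolution, O(1) per step)
        x, mu_ss = np.zeros(2), np.empty(len(t))
        for k, vk in enumerate(v):
            mu_ss[k] = C @ x
            x = x + dt * (A @ x + B * vk)           # forward Euler update
        print(np.max(np.abs(mu_ss - mu_conv)))      # the two agree to discretization error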

  19. A novel model of motor learning capable of developing an optimal movement control law online from scratch.

    Science.gov (United States)

    Shimansky, Yury P; Kang, Tao; He, Jiping

    2004-02-01

    A computational model of a learning system (LS) is described that acquires the knowledge and skill necessary for optimal control of multisegmental limb dynamics (the controlled object, or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of the CO. The system's functional architecture comprises several adaptive components, each of which incorporates a number of mapping functions approximated based on artificial neural nets. Besides the internal model of the CO's dynamics and the adaptive controller that computes the control law, the LS includes a new type of internal model, the minimal cost (IM(mc)) of moving the controlled object between a pair of states. That internal model appears critical for the LS's capacity to develop an optimal movement trajectory. The IM(mc) interacts with the adaptive controller in a cooperative manner. The controller provides an initial approximation of an optimal control action, which is further optimized in real time based on the IM(mc). The IM(mc) in turn provides information for updating the controller. The LS's performance was tested on the task of center-out reaching to eight randomly selected targets with a 2DOF limb model. The LS reached an optimal level of performance in a few tens of trials. It also quickly adapted to movement perturbations produced by two different types of external force field. The results suggest that the proposed design of a self-optimized control system can serve as a basis for the modeling of motor learning that includes the formation and adaptive modification of the plan of a goal-directed movement.

  20. Regional assessments of the Nation's water quality—Improved understanding of stream nutrient sources through enhanced modeling capabilities

    Science.gov (United States)

    Preston, Stephen D.; Alexander, Richard B.; Woodside, Michael D.

    2011-01-01

    The U.S. Geological Survey (USGS) recently completed assessments of stream nutrients in six major regions extending over much of the conterminous United States. SPARROW (SPAtially Referenced Regressions On Watershed attributes) models were developed for each region to explain spatial patterns in monitored stream nutrient loads in relation to human activities and natural resources and processes. The model information, reported by stream reach and catchment, provides contrasting views of the spatial patterns of nutrient source contributions, including those from urban (wastewater effluent and diffuse runoff from developed land), agricultural (farm fertilizers and animal manure), and specific background sources (atmospheric nitrogen deposition, soil phosphorus, forest nitrogen fixation, and channel erosion).

  1. On the predictive capabilities of the shear modified Gurson and the modified Mohr-Coulomb fracture models over a wide range of stress triaxialities and Lode angles

    Science.gov (United States)

    Dunand, Matthieu; Mohr, Dirk

    2011-07-01

    The predictive capabilities of the shear-modified Gurson model [Nielsen and Tvergaard, Eng. Fract. Mech. 77, 2010] and the Modified Mohr-Coulomb (MMC) fracture model [Bai and Wierzbicki, Int. J. Fract. 161, 2010] are evaluated. Both phenomenological fracture models are physics-inspired and take the effect of the first and third stress tensor invariants into account in predicting the onset of ductile fracture. The MMC model is based on the assumption that the initiation of fracture is determined by a critical stress state, while the shear-modified Gurson model assumes void growth as the governing mechanism. Fracture experiments on TRIP-assisted steel sheets covering a wide range of stress states (from shear to equibiaxial tension) are used to calibrate and validate these models. The model accuracy is quantified based on the predictions of the displacement to fracture for experiments which have not been used for calibration. It is found that the MMC model predictions agree well with all experiments (less than 4% error), while less accurate predictions are observed for the shear-modified Gurson model. A comparison of plots of the strain to fracture as a function of the stress triaxiality and the normalized third invariant reveals significant differences between the two models except within the vicinity of stress states that have been used for calibration.
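
    For orientation, one commonly cited closed form of the MMC fracture locus expresses the strain to fracture as a function of stress triaxiality and normalized Lode angle. The sketch below uses that form with invented parameter values; A, n, c1, c2, and c3 are purely illustrative and are not calibrated to the TRIP-assisted steel of the paper.

```python
import numpy as np

def mmc_fracture_strain(eta, theta_bar, A=740.0, n=0.15,
                        c1=0.1, c2=330.0, c3=1.0):
    """Strain to fracture for one common form of the MMC locus.

    eta: stress triaxiality; theta_bar: normalized Lode angle in [-1, 1].
    Parameter values are illustrative, not calibrated to any material.
    """
    f1 = c3 + (np.sqrt(3.0) / (2.0 - np.sqrt(3.0))) * (1.0 - c3) * (
        1.0 / np.cos(theta_bar * np.pi / 6.0) - 1.0)
    f2 = np.sqrt((1.0 + c1 ** 2) / 3.0) * np.cos(theta_bar * np.pi / 6.0) \
        + c1 * (eta + np.sin(theta_bar * np.pi / 6.0) / 3.0)
    return ((A / c2) * f1 * f2) ** (-1.0 / n)

# Evaluate the locus from shear (eta ~ 0) to equibiaxial tension (eta ~ 2/3).
for eta, theta in [(0.0, 0.0), (1.0 / 3.0, 1.0), (2.0 / 3.0, -1.0)]:
    eps_f = mmc_fracture_strain(eta, theta)
    print(f"eta={eta:.3f}, theta_bar={theta:+.1f}: eps_f={eps_f:.3f}")
```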

  2. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
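
    As one concrete reading of the Kalman filter option, a scalar filter can track the model-minus-observation bias at a monitor location and subtract the filtered bias from the model field. The sketch below is synthetic throughout: the noise variances, the +5 ppb bias, and the ozone signal are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Minimal sketch: fuse a biased model with noisy observations by filtering
# the bias with a scalar Kalman filter (all numbers are made up).
rng = np.random.default_rng(0)
truth = 40 + 10 * np.sin(np.linspace(0, 6, 120))     # synthetic ozone (ppb)
model = truth + 5.0 + rng.normal(0, 2, truth.size)   # model with +5 ppb bias
obs = truth + rng.normal(0, 1, truth.size)           # noisy observations

q, r = 0.05, 1.0          # process / observation noise variances (assumed)
b, p = 0.0, 10.0          # initial bias estimate and its variance
fused = np.empty_like(model)
for k in range(truth.size):
    p += q                                    # predict: bias ~ constant
    k_gain = p / (p + r)                      # Kalman gain
    b += k_gain * ((model[k] - obs[k]) - b)   # update with observed bias
    p *= (1 - k_gain)
    fused[k] = model[k] - b                   # bias-corrected (fused) value

print(np.mean(np.abs(model - truth)), np.mean(np.abs(fused - truth)))
```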

  3. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  4. Equivalent modeling of PMSG-based wind power plants considering LVRT capabilities: electromechanical transients in power systems.

    Science.gov (United States)

    Ding, Ming; Zhu, Qianlong

    2016-01-01

    Hardware protection and control action are two kinds of low voltage ride-through (LVRT) technical proposals widely used in permanent magnet synchronous generators (PMSGs). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive powers are taken into account. First, the post-fault DC-link voltage is selected as a concentrated expression of unit parameters, incoming wind, and electrical distance to a fault point, to reflect the transient characteristics of PMSGs. Next, we provide an effective method for calculating the post-fault DC-link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC-link voltages, without any clustering algorithm. Finally, PMSGs of the same group are aggregated into one rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show that the proposed equivalent model can be used to analyze offline electromechanical transients in power systems.
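
    The grouping step can be illustrated without any clustering algorithm, as the abstract emphasizes. In the sketch below, the nominal DC-link voltage and the chopper activation thresholds are invented for illustration and are not the paper's values.

```python
import numpy as np

# Sketch of threshold-based grouping: machines whose estimated post-fault
# DC-link voltage stays near nominal are lumped together; those that trip
# the chopper protection form separate groups. All values illustrative.
udc_nominal = 1100.0                                  # V, assumed nominal
udc_postfault = np.array([1105, 1190, 1340, 1120, 1310, 1450])  # per PMSG

bins = [1.05 * udc_nominal, 1.15 * udc_nominal]       # assumed thresholds
groups = np.digitize(udc_postfault, bins)             # 0: no chopper action

for g in np.unique(groups):
    members = np.where(groups == g)[0]
    print(f"group {g}: PMSGs {members.tolist()} -> one rescaled equivalent "
          f"({members.size} x rated power)")
```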

  5. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  6. How does a cadaver model work for testing ultrasound diagnostic capability for rheumatic-like tendon damage?

    DEFF Research Database (Denmark)

    Janta, Iustina; Morán, Julio; Naredo, Esperanza;

    2016-01-01

    To establish whether a cadaver model can serve as an effective surrogate for the detection of tendon damage characteristic of rheumatoid arthritis (RA). In addition, we evaluated intraobserver and interobserver agreement in the grading of RA-like tendon tears shown by US, as well as the concordan...

  7. An Energy-Equivalent d+/d− Damage Model with Enhanced Microcrack Closure-Reopening Capabilities for Cohesive-Frictional Materials

    Science.gov (United States)

    Cervera, Miguel; Tesei, Claudia

    2017-01-01

    In this paper, an energy-equivalent orthotropic d+/d− damage model for cohesive-frictional materials is formulated. Two essential mechanical features are addressed, the damage-induced anisotropy and the microcrack closure-reopening (MCR) effects, in order to provide an enhancement of the original d+/d− model proposed by Faria et al. 1998, while keeping its high algorithmic efficiency unaltered. First, in order to ensure the symmetry and positive definiteness of the secant operator, the new formulation is developed in an energy-equivalence framework. This proves thermodynamic consistency and allows one to describe a fundamental feature of the orthotropic damage models, i.e., the reduction of the Poisson’s ratio throughout the damage process. Secondly, a “multidirectional” damage procedure is presented to extend the MCR capabilities of the original model. The fundamental aspects of this approach, devised for generic cyclic conditions, lie in maintaining only two scalar damage variables in the constitutive law, while preserving memory of the degradation directionality. The enhanced unilateral capabilities are explored with reference to the problem of a panel subjected to in-plane cyclic shear, with or without vertical pre-compression; depending on the ratio between shear and pre-compression, an absent, a partial or a complete stiffness recovery is simulated with the new multidirectional procedure. PMID:28772793

  9. Runoff simulation using distributed hydrological modeling approach, remote sensing and GIS techniques: A case study from an Indian agricultural watershed

    Science.gov (United States)

    Chowdary, V. M.; Desai, V. R.; Gupta, M.; Jeyaram, A.; Murthy, Y. V. N. K.

    2012-07-01

    Distributed hydrological modeling is capable of simulating distributed watershed processes by dividing a heterogeneous and complex land surface into computational elements such as hydrologic response units (HRUs), grid cells, or sub-watersheds. The present study simulates spatial hydrological processes in the Kansavati watershed in Purulia district of West Bengal, India, an area of diverse geographical features, using a distributed hydrological modelling approach. Overland flow, in terms of direct runoff from storm rainfall, was computed using the USDA Soil Conservation Service (SCS) curve number technique and subsequently served as input to a channel routing model. For channel flow routing, the Muskingum-Cunge flood routing technique was used, specifically to route surface runoff from the different sub-watershed outlet points to the outlet point of the watershed. Model parameters were derived for each grid cell either from remote sensing data or from conventional maps in a GIS environment. For the distributed approach, validation shows a reasonable fit between the simulated and measured data, and the CMR value in all cases is negative, ranging from -0.1 to -0.3. Further, this study investigates the effect of cell size on runoff simulation for grid cell resolutions of 23, 46, 92, 184, 368, 736, and 1472 m. The difference between simulated and observed runoff values increases more prominently as grid size grows beyond 184 m. This model can be used to evaluate future water availability scenarios for an agricultural watershed in eastern India.
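
    The per-cell SCS curve number step follows the standard USDA-SCS relations; the sketch below uses an illustrative curve number and storm depth, not values from the Kansavati study.

```python
# Standard SCS curve number runoff relations applied to one grid cell.
def scs_runoff_mm(rainfall_mm: float, curve_number: float) -> float:
    s = 25400.0 / curve_number - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s                         # initial abstraction
    if rainfall_mm <= ia:
        return 0.0                       # all rainfall abstracted, no runoff
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

print(scs_runoff_mm(75.0, curve_number=82.0))  # ~34 mm for a 75 mm storm
```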

  10. Crude Oil Model Emulsion Characterised by means of Near Infrared Spectroscopy and Multivariate Techniques

    DEFF Research Database (Denmark)

    Kallevik, H.; Hansen, Susanne Brunsgaard; Sæther, Ø.

    2000-01-01

    Water-in-oil emulsions are investigated by means of multivariate analysis of near infrared (NIR) spectroscopic profiles in the range 1100-2250 nm. The oil phase is a paraffin-diluted crude oil from the Norwegian Continental Shelf. The influence of water absorption and light scattering... of the water droplets is shown to be strong. Despite the strong influence of the water phase, the NIR technique is still capable of predicting the composition of the investigated oil phase.

  11. A modeling technique for STOVL ejector and volume dynamics

    Science.gov (United States)

    Drummond, C. K.; Barankiewicz, W. S.

    1990-01-01

    New models for thrust augmenting ejector performance prediction and feeder duct dynamic analysis are presented and applied to a proposed Short Take Off and Vertical Landing (STOVL) aircraft configuration. Central to the analysis is the nontraditional treatment of the time-dependent volume integrals in the otherwise conventional control-volume approach. In the case of the thrust augmenting ejector, the analysis required a new relationship for transfer of kinetic energy from the primary flow to the secondary flow. Extraction of the required empirical corrections from current steady-state experimental data is discussed; a possible approach for modeling insight through Computational Fluid Dynamics (CFD) is presented.

  12. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power Electronics,"…

  14. Application of Yamaguchi's technique for the Rescorla-Wagner model.

    Science.gov (United States)

    Yamaguchi, Makoto

    2007-12-01

    Yamaguchi in 2006 solved for the first time a problem concerning a 1972 mathematical model of classical conditioning by Rescorla and Wagner. That derivation is not an isolated contribution. Here it is shown that the same line of derivation can be successfully applied to another experimental situation involving more stimuli.
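
    For context, the Rescorla-Wagner model updates the associative strength of every stimulus present on a trial in proportion to the shared prediction error. A minimal sketch follows; the learning-rate values and the two-cue compound design are illustrative, not the specific situation analyzed in the paper.

```python
# Rescorla-Wagner update: each cue present on a trial changes its strength V
# in proportion to the prediction error (lambda - sum of V over present cues).
def rw_trial(V, present, lam, alpha=0.3, beta=0.5):
    error = lam - sum(V[c] for c in present)
    for c in present:
        V[c] += alpha * beta * error
    return V

V = {"A": 0.0, "B": 0.0}
for _ in range(50):             # compound conditioning: A+B followed by the US
    rw_trial(V, ["A", "B"], lam=1.0)
print(V)                        # the cues share the asymptote lambda
```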

  15. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  17. A review of propeller modelling techniques based on Euler methods

    NARCIS (Netherlands)

    Zondervan, G.J.D.

    1998-01-01

    Future generation civil aircraft will be powered by new, highly efficient propeller propulsion systems. New, advanced design tools like Euler methods will be needed in the design process of these aircraft. This report describes the application of Euler methods to the modelling of flowfields generate

  18. ATLAS Event Data Organization and I/O Framework Capabilities in Support of Heterogeneous Data Access and Processing Models

    CERN Document Server

    Malon, David; The ATLAS collaboration; van Gemmeren, Peter

    2016-01-01

    Choices in persistent data models and data organization have significant performance ramifications for data-intensive scientific computing. In experimental high energy physics, organizing file-based event data for efficient per-attribute retrieval may improve the I/O performance of some physics analyses but hamper the performance of processing that requires full-event access. In-file data organization tuned for serial access by a single process may be less suitable for opportunistic sub-file-based processing on distributed computing resources. Unique I/O characteristics of high-performance computing platforms pose additional challenges. This paper describes work in the ATLAS experiment at the Large Hadron Collider to provide an I/O framework and tools for persistent data organization to support an increasingly heterogeneous array of data access and processing models.

  19. Verification of muscle fatigue detection capability of unipolar and bipolar lead systems using surface EMG generation model

    OpenAIRE

    堀田, 優; 小浦方, 裕騎; 伊藤, 建一; Hotta, Yu; Kourakata, Yuki; Ito, Kenichi

    2013-01-01

    In this study, we constructed a simulation model to generate a surface EMG during isometric exercise. The surface EMG was detected using both unipolar and bipolar lead systems, and the measurement performance of the two systems was compared. When detecting surface EMGs using the unipolar lead system, low-frequency components were increased to a greater extent than in the bipolar lead system, suggesting that the unipolar lead system is more suitable for the detection of surface EMGs.

  20. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a neural network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. Global Positioning System (GPS) and ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and ionosonde-derived parameters (hmF2 and foF2) has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low-latitude regions.
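
    The NN-PSO idea, with PSO searching the weight space of a small network instead of gradient descent, can be sketched as follows. The data, network size, and swarm hyper-parameters below are stand-ins, not the paper's Darwin dataset or settings.

```python
import numpy as np

# Sketch of NN-PSO: a particle swarm optimises the weights of a tiny
# one-hidden-layer network. Synthetic stand-in data; shapes illustrative.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))                    # e.g. scaled foF2, hmF2
y = 0.3 + 0.4 * np.tanh(X[:, 0] - 0.5 * X[:, 1])    # synthetic S4 target

H = 5                                  # hidden units
dim = 2 * H + H + H + 1                # W1 (2xH) + b1 + W2 (H) + b2

def predict(w, X):
    W1, b1 = w[: 2 * H].reshape(2, H), w[2 * H: 3 * H]
    W2, b2 = w[3 * H: 4 * H], w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    return np.mean((predict(w, X) - y) ** 2)

n_particles, iters = 30, 200
pos = rng.normal(0, 0.5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("MSE of PSO-trained network:", loss(gbest))
```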

  1. Detecting feature interactions in Web services with model checking techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As a platform-independent software system, a Web service is designed to offer interoperability among diverse and heterogeneous applications. With the introduction of service composition in Web service creation, the various message interactions among the atomic services result in a problem resembling the feature interaction problem in telecommunications. This article defines this problem as feature interaction in Web services and proposes a model-checking-based detection method. In the method, the Web service description is translated into Promela, the input language of the model checker SPIN (Simple Promela Interpreter), and specific properties, expressed as linear temporal logic (LTL) formulas, are formulated according to our classification of feature interactions. SPIN is then used to check these properties to detect feature interactions in Web services.

  2. A Memory Insensitive Technique for Large Model Simplification

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Silva, C

    2001-08-07

    In this paper we propose three simple but significant improvements to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [20], which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of computer systems. The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the requirement of having enough memory to hold even the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses. Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the "representative vertex" of a grid cell to an optimal position inside the cell.

  3. A Model-Following Technique for Insensitive Aircraft Control Systems.

    Science.gov (United States)

    1981-01-01

    Harvey and Pope [13] and Vinkler [30] compared several different methods in their works, while Shenkar [26] and Ashkenazi [2] extended the most promising... To see how Model Following for Insensitive Control works, let us consider the simple, first-order system used by Shenkar [26]. The plant is described by ẋ = -(1 + Δr)x + u ... representative of the methods of Vinkler, Ashkenazi, and Shenkar), and Model Following for Insensitive Control (MFIC). For the LQR design, we assume that our...

  4. IMPROVED SOFTWARE QUALITY ASSURANCE TECHNIQUES USING SAFE GROWTH MODEL

    Directory of Open Access Journals (Sweden)

    M.Sangeetha

    2010-09-01

    Our lives are governed by large, complex systems with increasingly complex software, and the safety, security, and reliability of these systems has become a major concern. As the software in today's systems grows larger, it has more defects, and these defects adversely affect the safety, security, and reliability of the systems. Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software. Software quality divides into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not. Quality is conformance to product requirements and should be free. This research concerns the role of software quality. Software reliability is an important facet of software quality: it is the probability of failure-free operation of a computer program in a specified environment for a specified time. In software reliability modeling, the parameters of the model are typically estimated from the test data of the corresponding component. However, the widely used point estimators are subject to random variations in the data, resulting in uncertainties in these estimated parameters. This research describes a new approach to the problem of software testing. The approach is based on Bayesian graphical models and presents formal mechanisms for the logical structuring of the software testing problem, the probabilistic and statistical treatment of the uncertainties to be addressed, the test design and analysis process, and the incorporation and implication of test results. Once constructed, the models produced are dynamic representations of the software testing problem. It explains why the common test-and-fix software quality strategy is no longer adequate, and characterizes the properties of the quality strategy.

  5. Modeling and Analyzing Terrain Data Acquired by Modern Mapping Techniques

    Science.gov (United States)

    2009-09-22

    enhanced by new terrain mapping technologies such as laser altimetry (LIDAR), ground-based laser scanning, and Real Time Kinematic GPS (RTK-GPS) that... developed and implemented an approach that has the following features: it is modular, so that a user can use different models for each of the modules... support some way of connecting separate modules together to form pipelines; however, this requires manual intervention. While a typical GIS can manage

  6. Groundwater Resources Assessment For Joypurhat District Using Mathematical Modelling Technique

    Directory of Open Access Journals (Sweden)

    Md. Iquebal Hossain

    2015-06-01

    In this study, potential recharge as well as groundwater availability for five upazilas (Akkelpur, Kalai, Joypurhat Sadar, Khetlal and Panchbibi) of Joypurhat district has been estimated using MIKE SHE modelling tools. The main aquifers of the study area are dominated by medium sands, and medium and coarse sands with little gravel. The top of the aquifers ranges from 15 m to 24 m, and the screenable thickness of the aquifers ranges from 33 m to 46 m within the depth range of 57 m to 87 m. Heavy abstraction of groundwater for agricultural, industrial and domestic uses results in excessive lowering of the water table, making shallow and hand tubewells inoperable in the dry season. The upazila-wise potential recharge for the study area was estimated through a mathematical model using MIKE SHE modelling tools in an integrated approach. The required data were collected from the relevant organisations. The potential recharge of the present study varies from 452 mm to 793 mm. The maximum depth to the groundwater table in most places occurs at the end of April. At this time, the groundwater table in most parts of Kalai, Khetlal, Akkelpur and Panchbibi goes below the suction limit, leaving hand tubewells (HTWs) and shallow tubewells (STWs) partially or fully inoperable.

  7. An improved calibration technique for wind tunnel model attitude sensors

    Science.gov (United States)

    Tripp, John S.; Wong, Douglas T.; Finley, Tom D.; Tcheng, Ping

    1993-01-01

    Aerodynamic wind tunnel tests at NASA Langley Research Center (LaRC) require accurate measurement of model attitude. Inertial accelerometer packages have been the primary sensor used to measure model attitude to an accuracy of +/- 0.01 deg as required for aerodynamic research. The calibration parameters of the accelerometer package are currently obtained from a seven-point tumble test using a simplified empirical approximation. The inaccuracy due to the approximation exceeds the accuracy requirement as the misalignment angle between the package axis and the model body axis increases beyond 1.4 deg. This paper presents the exact solution derived from the coordinate transformation to eliminate inaccuracy caused by the approximation. In addition, a new calibration procedure is developed in which the data taken from the seven-point tumble test is fit to the exact solution by means of a least-squares estimation procedure. Validation tests indicate that the new calibration procedure provides +/- 0.005-deg accuracy over large package misalignments, which is not possible with the current procedure.
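
    The least-squares idea can be illustrated with a deliberately simplified one-axis stand-in for the accelerometer package model. The sinusoidal measurement model, the parameter set (bias, scale, misalignment), and all numbers below are assumptions for illustration, not the paper's exact solution.

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified one-axis stand-in: fit bias b, scale s, and misalignment m so
# that the modeled gravity component matches the tumble-test readings.
true_b, true_s, true_m = 0.02, 1.01, np.deg2rad(2.0)
pitch = np.deg2rad(np.linspace(-90, 90, 7))           # seven-point tumble
readings = true_s * np.sin(pitch + true_m) + true_b   # synthetic output

def residual(p):
    b, s, m = p
    return s * np.sin(pitch + m) + b - readings

fit = least_squares(residual, x0=[0.0, 1.0, 0.0])
print("estimated bias, scale, misalignment (deg):",
      fit.x[0], fit.x[1], np.rad2deg(fit.x[2]))
```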

  8. Modeling and Control System Design for an Integrated Solar Generation and Energy Storage System with a Ride-Through Capability: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.; Yue, M.; Muljadi, E.

    2012-09-01

    This paper presents a generic approach for PV panel modeling. Data for this modeling can be easily obtained from the manufacturer's datasheet, which provides a convenient way for researchers and engineers to investigate PV integration issues. A two-stage power conversion system (PCS) is adopted for the PV generation system, and a battery energy storage system (BESS) can be connected to the DC link through a bidirectional DC/DC converter. In this way, the BESS can provide ancillary services which may be required in high-penetration PV generation scenarios. In this paper, the fault ride-through (FRT) capability is the specific focus. The integrated BESS and PV generation system, together with the associated control systems, is modeled in PSCAD and Matlab platforms, and the effectiveness of the controller is validated by the simulation results.
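
    A common way to realize such a datasheet-driven PV panel model is the single-diode equation; whether this is the paper's exact approach is not stated here, so the sketch below is a generic single-diode stand-in with invented parameters.

```python
import numpy as np

# Generic single-diode panel model (parameters illustrative, not from the
# paper): I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh.
Iph, I0 = 8.2, 8e-8          # photocurrent, diode saturation current (A)
Rs, Rsh = 0.3, 300.0         # series / shunt resistance (ohm)
n, Ns, Vt = 1.3, 60, 0.0257  # ideality factor, series cells, thermal voltage

def panel_current(V, iters=200, damping=0.3):
    """Solve the implicit diode equation by damped fixed-point iteration."""
    I = Iph
    for _ in range(iters):
        f = Iph - I0 * (np.exp((V + I * Rs) / (n * Ns * Vt)) - 1.0) \
            - (V + I * Rs) / Rsh
        I = (1.0 - damping) * I + damping * f  # damping keeps updates stable
    return I

V = np.linspace(0.0, 38.0, 200)
I = np.array([panel_current(v) for v in V])
P = V * I
print(f"Voc ~ {V[I > 0][-1]:.1f} V, peak power ~ {P.max():.0f} W")
```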

  9. Modeling and Simulation of A Novel Autonomous Underwater Vehicle with Glider and Flapping-Foil Propulsion Capabilities

    Institute of Scientific and Technical Information of China (English)

    TIAN Wen-long; SONG Bao-wei; DU Xiao-xu; MAO Zhao-yong; DING Hao

    2012-01-01

    HAISHEN is a long-range and highly maneuverable AUV with two operating modes: glider mode and flapping-foil propulsion mode. As part of the vehicle development, a three-dimensional mathematical model of the conceptual vehicle was developed on the assumption that HAISHEN has a rigid body with two independently controlled oscillating hydrofoils. A flapping-foil model was developed based on the work of Georgiades et al. (2009). The effect of controllable hydrofoils on the vehicle's stable-motion performance was studied theoretically. Finally, a dynamics simulation of the vehicle in both operating modes is presented. The simulation demonstrates that: (1) in glider mode, owing to the independent control of the pitch angle of each hydrofoil, HAISHEN travels faster and more efficiently and has a smaller turning radius than conventional fixed-wing gliders; (2) in flapping-foil propulsion mode, HAISHEN has high maneuverability, with a turning radius smaller than 15 m and a forward velocity of about 1.8 m/s; (3) the vehicle is stable under all expected operating conditions.

  10. Evaluating Higher Education Institutions through Agency and Resources-Capabilities Theories. A Model for Measuring the Perceived Quality of Service

    Directory of Open Access Journals (Sweden)

    José G. Vargas-Hernández

    2016-12-01

    The objective of this paper is to explain, through agency theory and the theory of resources and capabilities, the assessment process in higher education institutions. The actors involved in decision-making, and the use given to the resources derived from it, repeatedly lead to opportunistic practices that diminish the value placed on evaluation, in addition to reducing teamwork. A model is presented to measure the perception of service quality by students of the Technological Institute of Celaya as part of the quality control system. Based on the theoretical work of several authors on this topic (SERVQUAL and SERVPERF), an instrument adapted to the student area of the institution, called SERQUALITC, is generated. The paper presents the areas or departments to assess, the sample size, the number of items used per dimension, and the Likert scale employed; the validation study of the instrument is mentioned. Finally, a model is presented that poses a global vision of the quality measurement process, including corrective actions that enable continuous improvement of services.

  12. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Objective: Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods: Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2 followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: 1) syringe irrigation; 2) syringe irrigation with use of an apical file; 3) syringe irrigation with an added 30 s of passive ultrasonic irrigation (PUI); or 4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure residual Ca(OH)2 left in the root. Results: No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions: This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without an apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.
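
    The standard-curve logic reduces to a linear calibration fit: known Ca(OH)2 amounts are titrated, a line is fitted, and the line is inverted for unknown residues. The numbers below are synthetic, not the study's titration data.

```python
import numpy as np

# Sketch of the standard-curve step with made-up calibration points.
known_mg = np.array([2.0, 4.0, 6.0, 8.0, 10.0])         # filled amounts
titrant_ml = np.array([0.9, 1.7, 2.6, 3.4, 4.3])        # titration readings

slope, intercept = np.polyfit(titrant_ml, known_mg, 1)  # calibration line
residual_mg = slope * 2.1 + intercept                   # unknown at 2.1 mL
print(f"estimated residual Ca(OH)2: {residual_mg:.2f} mg")
```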

  13. Computable General Equilibrium Techniques for Carbon Tax Modeling

    Directory of Open Access Journals (Sweden)

    Al-Amin

    2009-01-01

    Problem statement: For lack of proper environmental models, environmental pollution is now a serious problem in many developing countries, particularly in Malaysia. Empirical studies worldwide reveal that imposition of a carbon tax significantly decreases carbon emissions and does not dramatically reduce economic growth. To our knowledge, no research has been done to simulate the economic impact of emission control policies in Malaysia. Approach: This study therefore developed an environmental computable general equilibrium (CGE) model for Malaysia and investigated carbon tax policy responses in the economy, applying exogenously different degrees of carbon tax in the model. Three simulations were carried out using a Malaysian social accounting matrix. Results: The carbon tax policy illustrated that a 1.21% reduction of carbon emissions reduced nominal GDP by 0.82% and exports by 2.08%; a 2.34% reduction of carbon emissions reduced nominal GDP by 1.90% and exports by 3.97%; and a 3.40% reduction of carbon emissions reduced nominal GDP by 3.17% and exports by 5.71%. Conclusion/Recommendations: Imposition of successively higher carbon taxes increases government revenue from the baseline by 26.67, 53.07 and 79.28%, respectively. However, fixed capital investment increased in scenario 1a by 0.43% and decreased in scenarios 1b and 1c by 0.26 and 1.79%, respectively, from the baseline. According to our policy findings, policy makers should consider the first (scenario 1a) carbon tax policy. This policy achieves reasonably good environmental impacts without losing investment, fixed capital investment, investment share of nominal GDP, or government revenue.

  14. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  15. Reliability–based economic model predictive control for generalised flow–based networks including actuators’ health–aware capabilities

    Directory of Open Access Journals (Sweden)

    Grosso Juan M.

    2016-09-01

    This paper proposes a reliability-based economic model predictive control (MPC) strategy for the management of generalised flow-based networks, integrating ideas on network service reliability, dynamic safety stock planning, and degradation of equipment health. The proposed strategy is based on a single-layer economic optimisation problem with dynamic constraints, which includes two enhancements with respect to existing approaches. The first enhancement considers chance-constraint programming to compute an optimal inventory replenishment policy based on a desired risk acceptability level, leading to dynamic allocation of safety stocks in flow-based networks to satisfy non-stationary flow demands. The second enhancement computes a smart distribution of the control effort and maximises actuators' availability by estimating their degradation and reliability. The proposed approach is illustrated with an application to water transport networks, using the Barcelona network as the case study.
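
    The chance-constrained safety-stock enhancement has a familiar closed form when demand uncertainty is modeled as Gaussian; the sketch below makes that assumption explicit, and the mean, standard deviation, and risk level are illustrative rather than values from the Barcelona case study.

```python
from scipy.stats import norm

# Chance-constraint sketch (assumed normal demand): choose the safety stock
# so that P(demand <= stock level) >= 1 - eps over the planning step.
mu, sigma = 120.0, 15.0   # forecast demand mean / std (illustrative units)
eps = 0.05                # acceptable risk of a stock-out

safety_stock = norm.ppf(1 - eps) * sigma
replenish_to = mu + safety_stock
print(f"safety stock: {safety_stock:.1f}, target level: {replenish_to:.1f}")
```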

  16. Genome-enabled Modeling of Microbial Biogeochemistry using a Trait-based Approach. Does Increasing Metabolic Complexity Increase Predictive Capabilities?

    Science.gov (United States)

    King, E.; Karaoz, U.; Molins, S.; Bouskill, N.; Anantharaman, K.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.; Brodie, E.

    2015-12-01

    The biogeochemical functioning of ecosystems is shaped in part by genomic information stored in the subsurface microbiome. Cultivation-independent approaches allow us to extract this information through reconstruction of thousands of genomes from a microbial community. Analysis of these genomes, in turn, gives an indication of the organisms present and their functional roles. However, metagenomic analyses can currently deliver thousands of different genomes that range in abundance/importance, requiring the identification and assimilation of key physiologies and metabolisms to be represented as traits for successful simulation of subsurface processes. Here we focus on incorporating -omics information into BioCrunch, a genome-informed trait-based model that represents the diversity of microbial functional processes within a reactive transport framework. This approach models the rate of nutrient uptake and the thermodynamics of coupled electron donors and acceptors for a range of microbial metabolisms including heterotrophs and chemolithotrophs. Metabolism of exogenous substrates fuels catabolic and anabolic processes, with the proportion of energy used for cellular maintenance, respiration, biomass development, and enzyme production based upon dynamic intracellular and environmental conditions. This internal resource partitioning represents a trade-off against biomass formation and results in microbial community emergence across a fitness landscape. Biocrunch was used here in simulations that included organisms and metabolic pathways derived from a dataset of ~1200 non-redundant genomes reflecting a microbial community in a floodplain aquifer. Metagenomic data was directly used to parameterize trait values related to growth and to identify trait linkages associated with respiration, fermentation, and key enzymatic functions such as plant polymer degradation. Simulations spanned a range of metabolic complexities and highlight benefits originating from simulations

  17. Ecological Footprint Model Using the Support Vector Machine Technique

    Science.gov (United States)

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports in total GDP), and service intensity (measured by the percentage of services in total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method built on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance. PMID:22291949
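
    The model structure can be sketched with an off-the-shelf SVM regressor over the five factors; the data below are synthetic stand-ins for the 123-nation dataset, and the kernel and regularisation settings are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Five-factor SVM regression sketch with synthetic stand-in data.
rng = np.random.default_rng(0)
X = rng.random((123, 5))   # GDP, urbanization, Gini, export share, services
y = 1.5 + 3.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 0.1, 123)  # "EF"

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:99], y[:99])              # train on most "nations"
pred = model.predict(X[99:])           # predict per capita EF for the rest
print("mean absolute error:", np.mean(np.abs(pred - y[99:])))
```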

  18. Comparative Studies of Clustering Techniques for Real-Time Dynamic Model Reduction

    CERN Document Server

    Hogan, Emilie; Halappanavar, Mahantesh; Huang, Zhenyu; Lin, Guang; Lu, Shuai; Wang, Shaobu

    2015-01-01

    Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis is not adequate to capture power system dynamic behaviors, especially as the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. The various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...
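
    One way to make measurement-driven grouping concrete is spectral clustering on a similarity matrix built from simulated generator signals. This is a generic illustration, not the specific graph or evolutionary clustering implementations compared in the paper, and the signals are synthetic stand-ins for PMU data.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Coherency-grouping sketch: generators whose rotor-speed deviations move
# together are clustered; each group is replaced by one representative.
rng = np.random.default_rng(2)
t = np.linspace(0, 5, 500)
base = [np.sin(2 * np.pi * 0.6 * t), np.sin(2 * np.pi * 1.1 * t + 0.7)]
signals = np.array([base[i % 2] + 0.05 * rng.normal(size=t.size)
                    for i in range(8)])    # 8 generators, 2 coherent groups

affinity = np.abs(np.corrcoef(signals))    # similarity of dynamic response
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("coherency groups:", labels)         # e.g. [0 1 0 1 0 1 0 1]
```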

  19. Modelling and Design of a Microstrip Band-Pass Filter Using Space Mapping Techniques

    CERN Document Server

    Tavakoli, Saeed; Mohanna, Shahram

    2010-01-01

    Determination of design parameters based on electromagnetic simulations of microwave circuits is an iterative and often time-consuming procedure. Space mapping is a powerful technique for optimizing such complex models by efficiently substituting accurate but expensive electromagnetic models (fine models) with fast and approximate models (coarse models). In this paper, we apply two space-mapping techniques, explicit space mapping as well as implicit and response residual space mapping, to a case study application, a microstrip band-pass filter. First, we model the case study application and optimize its design parameters using the explicit space mapping modelling approach. Then, we use the implicit and response residual space mapping approach to optimize the filter's design parameters. Finally, the performance of each design method is evaluated. It is shown that the use of the above-mentioned techniques leads to satisfactory design solutions with a minimum number of computationally expensive fine model eval...
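
    The space-mapping loop itself is compact enough to sketch on a toy problem. Both "models" below are synthetic one-parameter stand-ins for the fine (EM) and coarse (circuit) filter models, and the first-order update shown is the basic aggressive-space-mapping step rather than the paper's specific explicit or implicit variants.

```python
import numpy as np
from scipy.optimize import least_squares

w = np.linspace(0.0, 4.0, 81)                  # frequency-like sweep

def coarse(x):
    return 1.0 / ((w - x) ** 2 + 0.1)          # fast model response

def fine(x):
    return 1.0 / ((w - x - 0.3) ** 2 + 0.1)    # "expensive" shifted response

x_star = 2.0            # coarse-model optimum for the design target
x = x_star              # start the fine design at the coarse optimum
for k in range(4):
    # Parameter extraction: coarse input that reproduces the fine response.
    p = least_squares(lambda z: coarse(z[0]) - fine(x), x0=[x]).x[0]
    x = x + (x_star - p)                       # first-order mapping update
    print(f"iter {k}: extracted p={p:.3f}, new fine design x={x:.3f}")
# x converges to ~1.7, where the fine response matches the coarse target.
```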

  20. Advanced Multivariate Inversion Techniques for High Resolution 3D Geophysical Modeling

    Science.gov (United States)

    2011-09-01

    distributed across surrounding tectonic plates. Though the resulting continent-scale maps possess less detail than local-scale group velocity maps... requirements with high confidence, the Air Force Technical Applications Center needs new and improved capabilities for analyzing regional seismic ... wave magnitude mb ...) seismic events. For seismically active areas, inaccurate models can be corrected using the kriging methodology and, therefore