WorldWideScience

Sample records for modeling technique capable

  1. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distributions of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance can be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if it is appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which covers a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it

  2. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distributions of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance can be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if it is appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which covers a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should
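
The comparison protocol described in these two records can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' code: the dataset, the two stand-in "techniques", and the 80% sample fraction are invented; only the structure (random realizations without replacement, a per-realization error measure, averaging across realizations) follows the abstract.

```python
import random
import statistics

def make_realizations(dataset, n_realizations=12, frac=0.8, seed=0):
    """Generate groups by random sampling without replacement,
    mirroring the 12-realization design described in the abstract."""
    rng = random.Random(seed)
    size = int(len(dataset) * frac)
    return [rng.sample(dataset, size) for _ in range(n_realizations)]

def rmse(pairs):
    """Root-mean-square error over (observed, predicted) pairs."""
    return statistics.fmean((y - p) ** 2 for y, p in pairs) ** 0.5

# Hypothetical noise-free linear data: y = 2x.
data = [(x, 2.0 * x) for x in range(100)]

# Two toy stand-in "modeling techniques": a correct rule and a biased one.
models = {"linear": lambda x: 2.0 * x, "biased": lambda x: 2.0 * x + 1.0}

scores = {}
for name, model in models.items():
    errs = [rmse((y, model(x)) for x, y in r) for r in make_realizations(data)]
    scores[name] = statistics.fmean(errs)  # average overall error across realizations

print(min(scores, key=scores.get))  # prints "linear"
```

Averaging the error over many resampled realizations, rather than scoring one train/test split, is what lets the study compare the robustness of the techniques and not just a single lucky fit.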

  3. Group Capability Model

    Science.gov (United States)

    Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen

    2009-01-01

    The Group Capability Model (GCM) is a software tool that allows an organization, from first-line management to senior executive, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group against its requirements. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method was required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates a Group Capability Index (GCI), where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certificates, GCI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization. 
They now have a common
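
The actual-versus-required comparison behind the GCI can be illustrated with a small sketch. This is a hypothetical reading of the abstract, not the actual GCM tool: the category names, the capping of surpluses at 1.0, and the equal weighting across categories are all assumptions.

```python
def group_capability_index(actual, required):
    """Hypothetical Group Capability Index: mean ratio of met
    requirements, capped at 1.0 per category so a surplus in one
    area cannot mask a shortfall in another."""
    ratios = [min(actual.get(k, 0) / req, 1.0)
              for k, req in required.items() if req > 0]
    return sum(ratios) / len(ratios)

# Invented requirements and staffing for one group:
required = {"engineers": 10, "certified_inspectors": 4, "welding_skill": 6}
actual = {"engineers": 10, "certified_inspectors": 3, "welding_skill": 6}

gci = group_capability_index(actual, required)
print(round(gci, 3))  # < 1 flags a potential gap (here: inspectors)
```

A fully staffed group scores exactly 1, matching the abstract's reading that 1 means "at the appropriate level" and anything less indicates room for improvement.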

  4. Business models and dynamic capabilities

    OpenAIRE

    Teece, DJ

    2017-01-01

    © 2017 The Author. Business models, dynamic capabilities, and strategy are interdependent. The strength of a firm's dynamic capabilities helps shape its proficiency at business model design. Through its effect on organization design, a business model influences the firm's dynamic capabilities and places bounds on the feasibility of particular strategies. While these relationships are understood at a theoretical level, there is a need for future empirical work to flesh out the details. In parti...

  5. TRISTAN I: techniques, capabilities, and accomplishments

    International Nuclear Information System (INIS)

    Talbert, W.L. Jr.

    1977-01-01

    Following a brief description of the TRISTAN facility, the techniques developed for on-line nuclear spectroscopy of short-lived fission products, the studies possible, and the activities studied are presented. All journal publications relating to the development of the facility and the studies carried out using it are referenced, and co-workers are identified

  6. System Code Models and Capabilities

    International Nuclear Information System (INIS)

    Bestion, D.

    2008-01-01

    System thermalhydraulic codes such as RELAP, TRACE, CATHARE or ATHLET are now commonly used for reactor transient simulations. The whole methodology of code development is described, including the derivation of the system of equations, the analysis of experimental data to obtain closure relations, and the validation process. The characteristics of the models are briefly presented, starting with the basic assumptions, the system of equations, and the derivation of closure relationships. Extensive work was devoted during the last three decades to the improvement and validation of these models, which resulted in some homogenisation of the different codes although they were developed separately. The so-called two-fluid model is the common basis of these codes, and it is shown how it can describe both thermal and mechanical nonequilibrium. A review of some important physical models illustrates the main capabilities and limitations of system codes. Attention is drawn to the role of flow regime maps, to the various methods for developing closure laws, and to the role of interfacial area and turbulence in interfacial and wall transfers. More details are given for interfacial friction laws and their relation to drift flux models. Prediction of choked flow and CCFL (counter-current flow limitation) is also addressed. Based on some limitations of the present generation of codes, perspectives for the future are drawn.
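
The drift flux closure mentioned above reduces, in its simplest form, to a one-line relation for void fraction. The sketch below uses the textbook Zuber-Findlay form, not any particular code's correlation; the distribution parameter C0 = 1.2 and drift velocity Vgj = 0.35 m/s are illustrative values, not recommended ones.

```python
def void_fraction(j_g, j_l, c0=1.2, v_gj=0.35):
    """Drift-flux relation: alpha = j_g / (C0*(j_g + j_l) + Vgj),
    where j_g and j_l are gas and liquid superficial velocities (m/s).
    Coefficients here are illustrative, not any system code's values."""
    return j_g / (c0 * (j_g + j_l) + v_gj)

# Invented superficial velocities for a vertical two-phase channel:
print(round(void_fraction(j_g=1.0, j_l=2.0), 3))
```

Relations of this kind are what the abstract refers to when it connects interfacial friction laws to drift flux models: the same slip information can be expressed either as a friction closure or as (C0, Vgj) pairs.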

  7. Towards a national cybersecurity capability development model

    CSIR Research Space (South Africa)

    Jacobs, Pierre C

    2017-06-01

    Full Text Available - the incident management cybersecurity capability - is selected to illustrate the application of the national cybersecurity capability development model. This model was developed as part of previous research, and is called the Embryonic Cyberdefence Monitoring...

  8. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  9. Business Models for Cost Sharing & Capability Sustainment

    Science.gov (United States)

    2012-08-18

    Masanell and Ricart (2010), we can arrive at the working definition of a business model used in this report, namely, that a business model is a...capabilities over a long time frame. In order to identify the key factors in the Harrier RTI success, a SWOT analysis was carried out. The results are shown in...Table 1. Table 1. SWOT Analysis of Harrier Strengths - Small team - UK/BAE controlled - RTI Weaknesses - Small program—little

  10. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used

  11. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brennan T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Witt, Adam M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeNeale, Scott T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pries, Jason L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kao, Shih-Chieh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mobley, Miles H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Kyutae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curd, Shelaine L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tsakiris, Achilleas [Univ. of Tennessee, Knoxville, TN (United States); Mooneyham, Christian [Univ. of Tennessee, Knoxville, TN (United States); Papanicolaou, Thanos [Univ. of Tennessee, Knoxville, TN (United States); Ekici, Kivanc [Univ. of Tennessee, Knoxville, TN (United States); Whisenant, Matthew J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Welch, Tim [US Department of Energy, Washington, DC (United States); Rabon, Daniel [US Department of Energy, Washington, DC (United States)

    2017-08-01

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  12. Stochastic Capability Models for Degrading Satellite Constellations

    National Research Council Canada - National Science Library

    Gulyas, Cole W

    2005-01-01

    This thesis proposes and analyzes a new measure of functional capability for satellite constellations that incorporates the instantaneous availability and mission effectiveness of individual satellites...

  13. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  14. Classifying variability modeling techniques

    NARCIS (Netherlands)

    Sinnema, Marco; Deelstra, Sybren

    Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The

  15. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
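
A PCMM assessment can be recorded as a simple table of levels over the six elements. The sketch below assumes a 0-3 numeric coding of the four maturity levels and a weakest-link summary; both are illustrative conventions, not prescriptions from the report.

```python
# The six contributing elements named in the abstract:
ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def summarize(assessment):
    """Summarize an assessment mapping element -> level (0-3).
    Overall maturity is bounded by the weakest element."""
    levels = [assessment[e] for e in ELEMENTS]
    return {"min": min(levels), "mean": sum(levels) / len(levels)}

# Hypothetical assessment: solid everywhere except UQ/sensitivity analysis.
scores = {e: 2 for e in ELEMENTS}
scores["uncertainty quantification and sensitivity analysis"] = 1
print(summarize(scores))
```

Reporting the minimum alongside the mean keeps one lagging element visible, which matches the spirit of a maturity model: an effort is only as mature as its least-developed element.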

  16. Spent fuel reprocessing system security engineering capability maturity model

    International Nuclear Information System (INIS)

    Liu Yachun; Zou Shuliang; Yang Xiaohua; Ouyang Zigen; Dai Jianyong

    2011-01-01

    In the field of nuclear safety, traditional work places extra emphasis on risk assessment related to technical skills, production operations, and accident consequences through deterministic or probabilistic analysis, on the basis of which risk management and control are implemented. However, high product quality does not necessarily mean good safety quality, which implies a predictable degree of uniformity and dependability suited to specific security needs. In this paper, we apply the Systems Security Engineering Capability Maturity Model (SSE-CMM) to the field of spent fuel reprocessing and establish a spent fuel reprocessing system security engineering capability maturity model (SFR-SSE-CMM). The base practices in the model are collected from nuclear safety engineering practice: they represent the best security implementation activities and reflect the regular, basic work of implementing security engineering in a spent fuel reprocessing plant, while the generic practices reveal the management, measurement, and institutional characteristics of all process activities. The basic principles that should be followed in implementing safety engineering activities are indicated from the 'what' and 'how' aspects. The model provides a standardized framework and evaluation system for the safety engineering of the spent fuel reprocessing system. As a supplement to traditional methods, this new assessment technique, which is repeatable and predictable with respect to cost, procedure, and quality control, can turn security engineering activities into a series of mature, measurable, and standard activities. (author)

  17. Capabilities and accuracy of energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-11-01

    Full Text Available Energy modelling can be used in a number of different ways to fulfill different needs, including certification within building regulations or green building rating tools. Energy modelling can also be used in order to try and predict what the energy...

  18. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multiphysics surrogate models and the construction of surrogate models for high-dimensionality fields.

  19. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore stored for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to the uniaxial and biaxial data sets separately, and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. First, each model was fitted to the biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then the procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.
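
The two accuracy measures used in this record, R² and NRMSE, are standard and easy to state in code. The sketch below assumes range normalization for NRMSE (one common convention; the paper may use another) and invented observed/predicted values.

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def nrmse(observed, predicted):
    """RMSE normalized by the observed range (one common convention)."""
    n = len(observed)
    rmse = (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5
    return rmse / (max(observed) - min(observed))

# Hypothetical observed vs. model-predicted stresses:
obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(obs, pred), 4), round(nrmse(obs, pred), 4))
```

Reporting both is useful in studies like this one: R² rewards capturing the trend, while NRMSE penalizes absolute misfit on a scale-free basis, so a model can look good on one and poor on the other.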

  20. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration’s (NNSA’s) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focuses on forward modeling of PUREX reprocessing facility operating conditions, from fuel storage and chopping to effluent and emission measurements.

  1. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location and time or when important local data are assimilated into the simulation. It enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
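
The EnKF analysis step at the heart of technique (2) can be sketched for a scalar state. This is the textbook stochastic (perturbed-observation) form, not SRNL's implementation; the ensemble values, observation, and error standard deviation are invented for illustration.

```python
import random
import statistics

def enkf_update(ensemble, observation, obs_error_sd, seed=0):
    """One scalar EnKF analysis step (perturbed-observation form):
    nudge each member toward a noisy copy of the observation by the
    Kalman gain estimated from the ensemble's own spread."""
    rng = random.Random(seed)
    var_f = statistics.variance(ensemble)       # forecast (ensemble) variance
    gain = var_f / (var_f + obs_error_sd ** 2)  # Kalman gain
    return [m + gain * (observation + rng.gauss(0.0, obs_error_sd) - m)
            for m in ensemble]

# Invented 20-member forecast ensemble centered away from the observation:
rng = random.Random(1)
forecast = [rng.gauss(5.0, 1.0) for _ in range(20)]
analysis = enkf_update(forecast, observation=6.0, obs_error_sd=0.5)

# The analysis mean sits between the forecast mean and the observation.
print(statistics.fmean(forecast), statistics.fmean(analysis))
```

In an atmospheric transport setting the scalar state becomes a high-dimensional model state, but the logic is the same: members with large spread are pulled harder toward the observations, which is how assimilating local data improves the plume forecast.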

  2. Computable general equilibrium model fiscal year 2014 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory

    2016-05-11

    This report provides an overview of the development of the NISAC CGE economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities, allowing a broader set of questions to be answered than was possible with the previous economic analysis capability. In particular, CGE modeling captures how the different sectors of the economy (for example, households, businesses, and government) interact to allocate resources, and the approach captures these interactions when it is used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  3. A Thermo-Optic Propagation Modeling Capability.

    Energy Technology Data Exchange (ETDEWEB)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame, and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data were collected to validate the structural, thermal, and optical modeling methods.
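
The ray-path integration idea can be illustrated with a deliberately simplified integrator. The sketch below uses a constant index gradient and a paraxial forward-Euler step rather than the FE-interpolated gradient and full ray equation of the report; the step size, gradient, and base index are invented.

```python
def trace_ray(pos, direction, grad_n, n0, step=1e-3, n_steps=1000):
    """Forward-Euler integration of the paraxial ray equation
    d2r/ds2 ~ grad(n)/n0 in a medium with a constant index gradient
    grad_n and nominal index n0 (a gross simplification of the
    FE-based tracing described in the abstract)."""
    x, y = pos
    ux, uy = direction
    gx, gy = grad_n
    for _ in range(n_steps):
        ux += step * gx / n0   # direction bends toward higher index
        uy += step * gy / n0
        x += step * ux         # advance the ray position
        y += step * uy
    return (x, y), (ux, uy)

# Ray launched along +x through a medium whose index increases with +y:
pos, direction = trace_ray((0.0, 0.0), (1.0, 0.0), grad_n=(0.0, 0.1), n0=1.5)
print(pos, direction)  # the ray curves toward the higher-index region
```

In the thermo-optic case the index gradient comes from the temperature field on the FE mesh, so each step would query the element interpolation instead of a constant; the integration loop itself is unchanged.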

  4. Hybrid Modeling Capability for Aircraft Electrical Propulsion Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — PC Krause and Associates is partnering with Purdue University, EleQuant, and GridQuant to create a hybrid modeling capability. The combination of PCKA's extensive...

  5. Semiconductor Modeling Techniques

    CERN Document Server

    Xavier, Marie

    2012-01-01

    This book describes the key theoretical techniques used in semiconductor research to quantitatively calculate and simulate material properties. It presents particular techniques to study novel semiconductor materials, such as 2D heterostructures, quantum wires, quantum dots and nitrogen-containing III-V alloys. The book is aimed primarily at newcomers working in the field of semiconductor physics, to give guidance in theory and experiment. The theoretical techniques for electronic and optoelectronic devices are explained in detail.

  6. Demonstration of a Model Averaging Capability in FRAMES

    Science.gov (United States)

    Meyer, P. D.; Castleton, K. J.

    2009-12-01

    Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
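
The averaging step itself reduces to a weighted sum over model outputs. The sketch below is illustrative, not the FRAMES module: the model names, outputs, and weights are invented, and normalization by the weight total is an assumed convention.

```python
def model_average(outputs, weights):
    """Weighted average of alternative model outputs; weights may come
    from user input or calibration, as the abstract describes."""
    total = sum(weights.values())
    return sum(outputs[m] * w / total for m, w in weights.items())

# Hypothetical dose estimates (mSv) from three alternative transport models:
outputs = {"model_a": 0.12, "model_b": 0.20, "model_c": 0.15}
weights = {"model_a": 0.5, "model_b": 0.3, "model_c": 0.2}

print(model_average(outputs, weights))
```

When the outputs are probability distributions rather than point values (as in the parameter-uncertainty case described above), the same weights are applied to the distributions, e.g. by mixing the Monte Carlo samples of each model in proportion to its weight.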

  7. Capability Maturity Model (CMM) for Software Process Improvements

    Science.gov (United States)

    Ling, Robert Y.

    2000-01-01

    This slide presentation reviews the Avionic Systems Division's implementation of the Capability Maturity Model (CMM) for improvements in the software development process. The presentation reviews the process involved in implementing the model and the benefits of using CMM to improve the software development process.

  8. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL

    1994-01-01

The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer mounted on the dolphin's pen. The echoes were filtered by a bank of continuous constant-Q digital filters...

  9. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

"Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of models...

  10. Experiences with the Capability Maturity Model in a research environment

    NARCIS (Netherlands)

    Velden, van der M.J.; Vreke, J.; Wal, van der B.; Symons, A.

    1996-01-01

    The project described here was aimed at evaluating the Capability Maturity Model (CMM) in the context of a research organization. Part of the evaluation was a standard CMM assessment. It was found that CMM could be applied to a research organization, although its five maturity levels were considered

  11. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.

  12. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.
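The inverse problem described above, inferring facility conditions from a stream of observations, can be illustrated with a toy discrete Bayesian update; the states, observation labels, and likelihoods below are hypothetical, not from the paper:

```python
def bayes_update(prior, likelihood, observations):
    """Posterior over hidden facility states given observed data.

    prior:      dict state -> prior probability
    likelihood: dict state -> dict observation -> P(obs | state)
    """
    post = dict(prior)
    for obs in observations:
        for s in post:
            post[s] *= likelihood[s][obs]          # accumulate evidence
        z = sum(post.values())
        post = {s: p / z for s, p in post.items()}  # renormalize
    return post

# Two hypothetical facility conditions and coarse sensor observations.
states = {"declared": 0.5, "off-normal": 0.5}
lik = {"declared":   {"high_power": 0.8, "low_power": 0.2},
       "off-normal": {"high_power": 0.3, "low_power": 0.7}}
post = bayes_update(states, lik, ["high_power", "high_power", "low_power"])
```

Real facility models replace the likelihood table with physics-based simulation output, but the inference structure is the same.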

  13. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  14. Mac OS X Snow Leopard for Power Users Advanced Capabilities and Techniques

    CERN Document Server

    Granneman, Scott

    2010-01-01

Mac OS X Snow Leopard for Power Users: Advanced Capabilities and Techniques is for Mac OS X users who want to go beyond the obvious, the standard, and the easy. If you want to dig deeper into Mac OS X and maximize your skills and productivity using the world's slickest and most elegant operating system, then this is the book for you. Written by Scott Granneman, an experienced teacher, developer, and consultant, Mac OS X for Power Users helps you push Mac OS X to the max, unveiling advanced techniques and options that you may not have known even existed. Create custom workflows and apps with Automator...

  15. Development of a fourth generation predictive capability maturity model.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J.; Trucano, Timothy Guy

    2013-09-01

The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  16. The interpersonal circumplex as a model of interpersonal capabilities.

    Science.gov (United States)

    Hofsess, Christy D; Tracey, Terence J G

    2005-04-01

    In this study, we sought to challenge the existing conceptualization of interpersonal capabilities as a distinct construct from interpersonal traits by explicitly taking into account the general factor inherent within most models of circumplexes. A sample of 206 college students completed a battery of measures including the Battery of Interpersonal Capabilities (BIC; Paulhus & Martin, 1987). Principal components analysis and the randomization test of hypothesized order relations demonstrated that contrary to previous findings, the BIC adhered to a circular ordering. Joint analysis of the BIC with the Interpersonal Adjective Scale (Wiggins, 1995) using principal components analysis and structural equation modeling demonstrated that the 2 measures represented similar constructs. Furthermore, the general factor in the BIC was not correlated with measures of general self-competence, satisfaction with life, or general pathology.

  17. Hybriding CMMI and requirement engineering maturity and capability models

    OpenAIRE

    Buglione, Luigi; Hauck, Jean Carlo R.; Gresse von Wangenheim, Christiane; Mc Caffery, Fergal

    2012-01-01

Estimation represents one of the most critical processes for any project and it is highly dependent on the quality of requirements elicitation and management. Therefore, the management of requirements should be prioritised in any process improvement program, because the less precise the requirements gathering, analysis and sizing, the greater the error in terms of time and cost estimation. Maturity and Capability Models (MCM) represent a good tool for assessing the status of ...

  18. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

A computer code FAIR has been developed for analysing the performance of water-cooled reactor fuel pins. It is capable of analysing high burnup fuels. This code has recently been used for analysing ten high burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high burnup models are described. The performance of the code FAIR in analysing high burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs

  19. Off-Gas Adsorption Model Capabilities and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, Kevin L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, Amy K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ladshaw, Austin [Georgia Inst. of Technology, Atlanta, GA (United States); Yiacoumi, Sotira [Georgia Inst. of Technology, Atlanta, GA (United States); Tsouris, Costas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-03-01

Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well at capturing the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real-world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still a need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism are particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single-species isotherms, such as those for Kr and Xe. Since isotherm data for each gas is currently available at a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently
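The numerical dispersion that limits OSPREY can be demonstrated with a generic low-order scheme. The sketch below is not OSPREY's actual discretization; it simply advects a sharp breakthrough front with first-order upwinding and shows the front-smearing such schemes introduce:

```python
def upwind_advect(c, courant, steps):
    """First-order upwind advection (velocity > 0, Courant number <= 1).

    Low-order schemes like this smear sharp fronts, the kind of numerical
    dispersion the OSPREY/DGOSPREY comparison describes. c[0] acts as a
    held inflow boundary.
    """
    c = list(c)
    for _ in range(steps):
        prev = list(c)
        for i in range(1, len(c)):
            c[i] = prev[i] - courant * (prev[i] - prev[i - 1])
    return c

# A sharp breakthrough front entering a column; the exact solution would
# remain a perfect step, but the scheme spreads it over many cells.
n, courant, steps = 60, 0.5, 40
profile = upwind_advect([1.0] * 5 + [0.0] * (n - 5), courant, steps)
smeared = [v for v in profile if 0.01 < v < 0.99]
```

Higher-order methods such as discontinuous Galerkin keep the front far sharper at the cost of a more complex formulation.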

  20. Evacuation emergency response model coupling atmospheric release advisory capability output

    International Nuclear Information System (INIS)

    Rosen, L.C.; Lawver, B.S.; Buckley, D.W.; Finn, S.P.; Swenson, J.B.

    1983-01-01

    A Federal Emergency Management Agency (FEMA) sponsored project to develop a coupled set of models between those of the Lawrence Livermore National Laboratory (LLNL) Atmospheric Release Advisory Capability (ARAC) system and candidate evacuation models is discussed herein. This report describes the ARAC system and discusses the rapid computer code developed and the coupling with ARAC output. The computer code is adapted to the use of color graphics as a means to display and convey the dynamics of an emergency evacuation. The model is applied to a specific case of an emergency evacuation of individuals surrounding the Rancho Seco Nuclear Power Plant, located approximately 25 miles southeast of Sacramento, California. The graphics available to the model user for the Rancho Seco example are displayed and noted in detail. Suggestions for future, potential improvements to the emergency evacuation model are presented

  1. Climbing the ladder: capability maturity model integration level 3

    Science.gov (United States)

    Day, Bryce; Lutteroth, Christof

    2011-02-01

This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies and the integration of the Rational Unified Process development process framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  2. Capability to model reactor regulating system in RFSP

    International Nuclear Information System (INIS)

    Chow, H.C.; Rouben, B.; Younis, M.H.; Jenkins, D.A.; Baudouin, A.; Thompson, P.D.

    1995-01-01

    The Reactor Regulating System package extracted from SMOKIN-G2 was linked within RFSP to the spatial kinetics calculation. The objective is to use this new capability in safety analysis to model the actions of RRS in hypothetical events such as in-core LOCA or moderator drain scenarios. This paper describes the RRS modelling in RFSP and its coupling to the neutronics calculations, verification of the RRS control routine functions, sample applications and comparisons to SMOKIN-G2 results for the same transient simulations. (author). 7 refs., 6 figs

  3. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina

    2012-09-01

    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  4. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state-of-the-art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  5. DOE International Collaboration; Seismic Modeling and Simulation Capability Project

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, Lara D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Settgast, Randolph R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-10-12

    The following report describes the development and exercise of a new capability at LLNL to model complete, non-linear, seismic events in 3-dimensions with a fully-coupled soil structure interaction response. This work is specifically suited to nuclear reactor design because this design space is exempt from the Seismic Design requirements of International Building Code (IBC) and the American Society of Civil Engineers (ASCE) [4,2]. Both IBC and ASCE-7 exempt nuclear reactors because they are considered “structures that require special consideration” and their design is governed only by “other regulations”. In the case of nuclear reactors, the regulations are from both the Nuclear Regulatory Commission (NRC) [10] and ASCE 43 [3]. This current framework of design guidance, coupled to this new and evolving capability to provide high fidelity design solutions as presented in this report, enables the growing field of Performance-Based Design (PBD) for nuclear reactors subjected to earthquake ground motions.

  6. Advanced capabilities for materials modelling with Quantum ESPRESSO

    Science.gov (United States)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it can simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  7. Atmospheric disturbance model for aircraft and space capable vehicles

    Science.gov (United States)

    Chimene, Beau C.; Park, Young W.; Bielski, W. P.; Shaughnessy, John D.; Mcminn, John D.

    1992-01-01

    An atmospheric disturbance model (ADM) is developed that considers the requirements of advanced aerospace vehicles and balances algorithmic assumptions with computational constraints. The requirements for an ADM include a realistic power spectrum, inhomogeneity, and the cross-correlation of atmospheric effects. The baseline models examined include the Global Reference Atmospheric Model Perturbation-Modeling Technique, the Dryden Small-Scale Turbulence Description, and the Patchiness Model. The Program to Enhance Random Turbulence (PERT) is developed based on the previous models but includes a revised formulation of large-scale atmospheric disturbance, an inhomogeneous Dryden filter, turbulence statistics, and the cross-correlation between Dryden Turbulence Filters and small-scale thermodynamics. Verification with the Monte Carlo approach demonstrates that the PERT software provides effective simulations of inhomogeneous atmospheric parameters.
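At its core, the Dryden description used in PERT is a shaping filter that colors white noise to a prescribed variance and correlation length. A minimal one-dimensional stand-in (the parameter values are illustrative, not from the paper):

```python
import math
import random

def dryden_like_gust(sigma, L, V, dt, n, seed=0):
    """First-order shaping filter: white noise -> exponentially correlated
    gust velocity with standard deviation sigma.

    A 1-D stand-in for the longitudinal Dryden filter; L is the turbulence
    scale length (m), V the airspeed (m/s), dt the time step (s).
    """
    rng = random.Random(seed)
    a = math.exp(-V * dt / L)            # correlation decay over one step
    b = sigma * math.sqrt(1.0 - a * a)   # keeps stationary variance sigma^2
    v, out = 0.0, []
    for _ in range(n):
        v = a * v + b * rng.gauss(0.0, 1.0)
        out.append(v)
    return out

gust = dryden_like_gust(sigma=1.5, L=200.0, V=50.0, dt=0.05, n=20000)
```

Inhomogeneity, as in PERT, would make `sigma` and `L` functions of altitude rather than constants.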

  8. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
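The genetic-algorithm parameter search can be sketched independently of the SVR itself. The toy below runs a minimal real-valued GA (truncation selection, blend crossover, Gaussian mutation) against a stand-in validation-loss surface whose minimum plays the role of the best (log C, log gamma) pair; the loss function, bounds, and GA settings are all invented for illustration:

```python
import random

def ga_minimize(loss, bounds, pop=30, gens=60, seed=3):
    """Minimal real-valued genetic algorithm for hyperparameter search:
    keep the better half each generation, breed children by blending two
    elites and adding Gaussian mutation, clamp to the search bounds."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=loss)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 + rng.gauss(0.0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            child = [min(max(v, lo), hi) for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = elite + children
    return min(P, key=loss)

# Stand-in validation-loss surface over (log C, log gamma); in the paper's
# setting, loss would be the SVR's held-out prediction error instead.
best = ga_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                   bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Plugging an actual SVR cross-validation error in as `loss` recovers the paper's scheme.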

  9. Frameworks for Assessing the Quality of Modeling and Simulation Capabilities

    Science.gov (United States)

    Rider, W. J.

    2012-12-01

The importance of assuring quality in modeling and simulation has spawned several frameworks for structuring the examination of quality. The format and content of these frameworks provide an emphasis, completeness and flow to assessment activities. I will examine four frameworks that have been developed and describe how they can be improved and applied to a broader set of high consequence applications. Perhaps the first of these frameworks was known as CSAU [Boyack] (code scaling, applicability and uncertainty), used for nuclear reactor safety and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety practice, and the practical structure needed after the Three Mile Island accident. It incorporated the dominant experimental program, the dominant analysis approach, and concerns about the quality of modeling. The USNRC gave it the force of law, which made the nuclear industry take it seriously. After the cessation of nuclear weapons testing, the United States began a program of examining the reliability of these weapons without testing. This program utilizes science including theory, modeling, simulation and experimentation to replace the underground testing. The emphasis on modeling and simulation necessitated attention to the quality of these simulations. Sandia developed the PCMM (predictive capability maturity model) to structure this attention [Oberkampf]. PCMM divides simulation into six core activities to be examined and graded relative to the needs of the modeling activity. NASA [NASA] has built yet another framework in response to the tragedy of the space shuttle accidents. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in another approach. These frameworks are similar, and applied in a similar fashion. The adoption of these frameworks at Sandia and NASA has been slow and arduous because the force of law has not assisted acceptance. All existing frameworks are

  10. Architectural capability analysis using a model-checking technique

    Directory of Open Access Journals (Sweden)

    Darío José Delgado-Quintero

    2017-01-01

This work describes a mathematical approach based on a model-checking technique for analyzing capabilities in enterprise architectures built using the DoDAF and TOGAF architectural frameworks. The basis of this approach is the validation of requirements related to enterprise capabilities using operational or business architectural artifacts associated with the dynamic behavior of processes. It is shown how this approach can be used to verify, quantitatively, whether the operational models in an enterprise architecture can satisfy the enterprise capabilities. To this end, a case study concerning a capability integration problem is used.

  11. To the problem of using information technologies capabilities in the training process for technique of motor actions in wrestling

    Directory of Open Access Journals (Sweden)

    Tupeev Y.V.

    2010-08-01

Generalized data on the capabilities of information technologies in the system of scientific and methodological support of sports training are presented. Directions for increasing the efficiency of teaching junior wrestlers the technique of motor actions are identified. The structure and capabilities of the computer program "Champion" designed for this purpose are presented. The promise of innovative approaches to teaching junior wrestlers the basic technique of motor actions is identified.

  12. Aviation System Analysis Capability Air Carrier Investment Model-Cargo

    Science.gov (United States)

    Johnson, Jesse; Santmire, Tara

    1999-01-01

The purpose of the Aviation System Analysis Capability (ASAC) Air Carrier Investment Model-Cargo (ACIMC) is to examine the economic effects of technology investment on the air cargo market, particularly the market for new cargo aircraft. To do so, we have built an econometrically based model designed to operate like the ACIM. Two main drivers account for virtually all of the demand: the growth rate of the Gross Domestic Product (GDP) and changes in the fare yield (which is a proxy for the price charged, or fare). These differences arise from a combination of the nature of air cargo demand and the peculiarities of the air cargo market. The net effect of these two factors is that sales of new cargo aircraft are much less sensitive to either increases in GDP or changes in the costs of labor, capital, fuel, materials, and energy associated with the production of new cargo aircraft than the sales of new passenger aircraft. This, in conjunction with the relatively small size of the cargo aircraft market, means technology improvements to cargo aircraft will do relatively little to spur increased sales of new cargo aircraft.
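The econometric core of such a demand model, elasticities of cargo demand with respect to GDP and fare yield, corresponds to an ordinary least-squares fit in log-log form. A self-contained sketch on synthetic data (the elasticity values and data points are invented for illustration, not ASAC estimates):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def loglog_ols(gdp, yields, demand):
    """OLS fit of ln(demand) = b0 + b1*ln(GDP) + b2*ln(yield);
    b1 and b2 are the income and price elasticities."""
    X = [[1.0, math.log(g), math.log(y)] for g, y in zip(gdp, yields)]
    z = [math.log(d) for d in demand]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xtz = [sum(r[i] * zi for r, zi in zip(X, z)) for i in range(3)]
    return solve(XtX, Xtz)

# Synthetic series generated with known elasticities 1.8 (GDP) and -0.9 (yield).
gdp = [100, 105, 112, 118, 124, 131]
yld = [0.30, 0.29, 0.27, 0.26, 0.28, 0.25]
dem = [2.0 * g ** 1.8 * y ** -0.9 for g, y in zip(gdp, yld)]
b0, b1, b2 = loglog_ols(gdp, yld, dem)
```

On noise-free synthetic data the fit recovers the generating elasticities exactly, which is a useful sanity check before estimating on real series.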

  13. On the Generalization Capabilities of the Ten-Parameter Jiles-Atherton Model

    Directory of Open Access Journals (Sweden)

    Gabriele Maria Lozito

    2015-01-01

This work proposes an analysis of the generalization capabilities of the modified version of the classic Jiles-Atherton model for magnetic hysteresis. The modified model takes into account the use of dynamic parameterization, as opposed to the classic model where the parameters are constant. Two different dynamic parameterizations are taken into account: a dependence on the excitation and a dependence on the response. The identification process is performed by using a novel nonlinear optimization technique called Continuous Flock-of-Starling Optimization Cube (CFSO3), an algorithm belonging to the class of swarm intelligence. The algorithm exploits parallel architecture and uses a supervised strategy to alternate between exploration and exploitation capabilities. Comparisons between the obtained results are presented at the end of the paper.
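For reference, the classic constant-parameter Jiles-Atherton model that the paper modifies can be integrated with a simple Euler sweep over the applied field. The sketch below uses one common simplified ("bulk") formulation and textbook-style steel parameters, not the paper's identified values:

```python
import math

def langevin(x):
    # L(x) = coth(x) - 1/x, with the small-argument series to avoid 0/0
    return x / 3.0 if abs(x) < 1e-4 else 1.0 / math.tanh(x) - 1.0 / x

def ja_sweep(H_path, Ms=1.6e6, a=1100.0, alpha=1.6e-3, k=400.0, c=0.2):
    """Euler sweep of a simplified Jiles-Atherton formulation:
    dM/dH = (1-c)*(Man-M)/(delta*k - alpha*(Man-M)) + c*dMan/dH,
    with Man(H,M) = Ms*L((H + alpha*M)/a) and delta = sign(dH)."""
    M, out, H_prev = 0.0, [], H_path[0]
    for H in H_path:
        dH = H - H_prev
        if dH != 0.0:
            delta = 1.0 if dH > 0.0 else -1.0
            Man = Ms * langevin((H_prev + alpha * M) / a)
            dMirr = (Man - M) / (delta * k - alpha * (Man - M))
            if (Man - M) * delta < 0.0:  # suppress unphysical susceptibility
                dMirr = 0.0
            Man_next = Ms * langevin((H + alpha * M) / a)
            M += (1.0 - c) * dMirr * dH + c * (Man_next - Man)
        H_prev = H
        out.append(M)
    return out

# Major-loop excitation: 0 -> +5000 -> -5000 -> +5000 A/m in 1 A/m steps.
up = [float(i) for i in range(0, 5001)]
down = [5000.0 - i for i in range(1, 10001)]
back = [-5000.0 + i for i in range(1, 10001)]
M_path = ja_sweep(up + down + back)
rem_down = M_path[len(up) + 4999]              # M at H = 0, descending branch
rem_back = M_path[len(up) + len(down) + 4999]  # M at H = 0, ascending branch
```

The dynamic parameterization studied in the paper replaces the constants `a`, `alpha`, `k`, `c` with functions of the excitation or the response.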

  14. Lidar Remote Sensing of Forests: New Instruments and Modeling Capabilities

    Science.gov (United States)

    Cook, Bruce D.

    2012-01-01

Lidar instruments provide scientists with the unique opportunity to characterize the 3D structure of forest ecosystems. This information allows us to estimate properties such as wood volume, biomass density, stocking density, canopy cover, and leaf area. Structural information also can be used as drivers for photosynthesis and ecosystem demography models to predict forest growth and carbon sequestration. All lidars use time-of-flight measurements to compute accurate ranging measurements; however, there is a wide range of instruments and data types that are currently available, and instrument technology continues to advance at a rapid pace. This seminar will present new technologies that are in use and under development at NASA for airborne and space-based missions. Opportunities for instrument and data fusion will also be discussed, as Dr. Cook is the PI for G-LiHT, Goddard's LiDAR, Hyperspectral, and Thermal airborne imager. Lastly, this talk will introduce radiative transfer models that can simulate interactions between laser light and forest canopies. Developing modeling capabilities is important for providing continuity between observations made with different lidars, and for assisting the design of new instruments. Dr. Bruce Cook is a research scientist in NASA's Biospheric Sciences Laboratory at Goddard Space Flight Center, and has more than 25 years of experience conducting research on ecosystem processes, soil biogeochemistry, and exchange of carbon, water vapor and energy between the terrestrial biosphere and atmosphere. His research interests include the combined use of lidar, hyperspectral, and thermal data for characterizing ecosystem form and function. He is Deputy Project Scientist for the Landsat Data Continuity Mission (LDCM); Project Manager for NASA's Carbon Monitoring System (CMS) pilot project for local-scale forest biomass; and PI of Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) airborne imager.

  15. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
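The paper's specific visibility metric is not reproduced in this record; the Six Sigma Z score it builds on is simply the distance from the process mean to a specification limit, measured in standard deviations. A minimal sketch with hypothetical data:

```python
from statistics import fmean, stdev

def capability_z(samples, usl):
    """Six Sigma Z score against an upper spec limit: (USL - mean) / stdev.
    Higher Z means a more capable (less defect-prone) process."""
    mu, sigma = fmean(samples), stdev(samples)
    return (usl - mu) / sigma

# Hypothetical order-status update lead times (hours); spec: visible within 24 h.
lead_times = [12.0, 14.0, 11.0, 15.0, 13.0, 12.5, 13.5]
z = capability_z(lead_times, usl=24.0)
```

A "Six Sigma" process corresponds to a short-term Z of 6; lower values flag visibility processes that need improvement.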

  16. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves

    2017-05-01

    Full Text Available Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in economic and strategy literature involves understanding the features that drive business success and a firm’s perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms’ innovation and dynamic capabilities perspectives based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all capabilities, with the least influence on innovation. This reinforces the notion of operations capabilities as “ordinary capabilities,” whereas management, development, and transaction capabilities better explain firms’ dynamics and innovation.

  17. Techniques to develop data for hydrogeochemical models

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, C.M.; Holcombe, L.J.; Gancarz, D.H.; Behl, A.E. (Radian Corp., Austin, TX (USA)); Erickson, J.R.; Star, I.; Waddell, R.K. (Geotrans, Inc., Boulder, CO (USA)); Fruchter, J.S. (Battelle Pacific Northwest Lab., Richland, WA (USA))

    1989-12-01

    The utility industry, through its research and development organization, the Electric Power Research Institute (EPRI), is developing the capability to evaluate potential migration of waste constituents from utility disposal sites to the environment. These investigations have developed computer programs to predict leaching, transport, attenuation, and fate of inorganic chemicals. To predict solute transport at a site, the computer programs require data concerning the physical and chemical conditions that affect solute transport at the site. This manual provides a comprehensive view of the data requirements for computer programs that predict the fate of dissolved materials in the subsurface environment and describes techniques to measure or estimate these data. In this manual, basic concepts are described first and individual properties and their associated measurement or estimation techniques are described later. The first three sections review hydrologic and geochemical concepts, discuss data requirements for geohydrochemical computer programs, and describe the types of information the programs produce. The remaining sections define and/or describe the properties of interest for geohydrochemical modeling and summarize available techniques to measure or estimate values for these properties. A glossary of terms associated with geohydrochemical modeling and an index are provided at the end of this manual. 318 refs., 9 figs., 66 tabs.

  18. A model based lean approach to capability management

    CSIR Research Space (South Africa)

    Venter, Jacobus P

    2017-09-01

    Full Text Available for cyberwar and counter-terrorism capabilities, as these are fairly new and rapidly changing environments. It is therefore necessary to employ a Capability Management mechanism that can provide answers in the short term and is able to handle continuous changes... is only included or excluded from the Mission Plan. A further refinement is to indicate the role that the FSC plays in the mission. The following classification is used for this purpose: • c = the FSC can / should command (directly or indirectly, taking...

  19. Using Genome-scale Models to Predict Biological Capabilities

    DEFF Research Database (Denmark)

    O’Brien, Edward J.; Monk, Jonathan M.; Palsson, Bernhard O.

    2015-01-01

    growth capabilities on various substrates and the effect of gene knockouts at the genome scale. Thus, much interest has developed in understanding and applying these methods to areas such as metabolic engineering, antibiotic design, and organismal and enzyme evolution. This Primer will get you started....

  20. Nascap-2k Spacecraft-Plasma Environment Interactions Modeling: New Capabilities and Verification

    National Research Council Canada - National Science Library

    Davis, V. A; Mandell, M. J; Cooke, D. L; Ferguson, D. C

    2007-01-01

    .... Here we examine the accuracy and limitations of two new capabilities of Nascap-2k: modeling of plasma plumes such as generated by electric thrusters and enhanced PIC computational capabilities...

  1. Capabilities and requirements for modelling radionuclide transport in the geosphere

    International Nuclear Information System (INIS)

    Paige, R.W.; Piper, D.

    1989-02-01

    This report gives an overview of geosphere flow and transport models suitable for use by the Department of the Environment in the performance assessment of radioactive waste disposal sites. An outline methodology for geosphere modelling is proposed, consisting of a number of different types of model. A brief description of each of the component models is given, indicating the purpose of the model, the processes being modelled, and the methodologies adopted. Areas requiring development are noted. (author)
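The report's component models are not detailed in this summary; a standard building block of such geosphere radionuclide transport models is the 1-D advection-dispersion equation, whose continuous-source (Ogata-Banks) solution can be sketched as follows (parameter values below are purely illustrative):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Relative concentration c(x,t)/c0 for 1-D advection-dispersion with a
    continuous source at x=0:
      c/c0 = 0.5 * [erfc((x - v t)/(2 sqrt(D t)))
                    + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
    v: pore velocity, D: dispersion coefficient."""
    s = 2.0 * math.sqrt(D * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / s))

# Behind the advective front the plume is near source strength; far ahead it is ~0.
c_near = ogata_banks(x=1.0, t=10.0, v=1.0, D=0.5)
c_far = ogata_banks(x=20.0, t=10.0, v=1.0, D=0.5)
```

Real assessment codes add retardation, radioactive decay, and multi-dimensional heterogeneity, but they reduce to this balance of advection and dispersion in the simplest limit.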

  2. Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts

    Science.gov (United States)

    2016-11-10

    stress components, which can be modeled by a turbulence closure model. In the present study, the standard Smagorinsky LES model is used. The conservation ... is used to solve the pressure Poisson equation. The model is parallelized with Message Passing Interface (MPI). 2.2 Modification to NHWAVE

  3. Communications, Navigation, and Surveillance Models in ACES: Design Implementation and Capabilities

    Science.gov (United States)

    Kubat, Greg; Vandrei, Don; Satapathy, Goutam; Kumar, Anil; Khanna, Manu

    2006-01-01

    Presentation objectives include: a) Overview of the ACES/CNS System Models Design and Integration; b) Configuration Capabilities available for Models and Simulations using ACES with CNS Modeling; c) Descriptions of recently added, Enhanced CNS Simulation Capabilities; and d) General Concepts and Ideas that Utilize CNS Modeling to Enhance Concept Evaluations.

  4. NGNP Data Management and Analysis System Modeling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Cynthia D. Gentillon

    2009-09-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  5. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Emergency Telecommunications

    Directory of Open Access Journals (Sweden)

    Juan D. Deaton

    2008-01-01

    Full Text Available Abstract Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms could be fitted with telecommunications equipment and used to support these critical communications missions once the catastrophic event occurs. With the ability to be continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes potential emergency communications architecture and presents a method for estimating emergency communications systems traffic patterns for a catastrophic event.
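The record says the paper "presents a method for estimating emergency communications systems traffic patterns" without reproducing it. One conventional ingredient for sizing trunked emergency capacity is the Erlang B blocking formula, sketched here as an assumption rather than as the authors' actual method:

```python
def erlang_b(channels: int, offered_erlangs: float) -> float:
    """Probability that an arriving call finds all channels busy,
    computed with the numerically stable Erlang B recursion."""
    b = 1.0
    for n in range(1, channels + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

# E.g., the chance a 911 call is blocked on a 30-trunk HAP payload
# carrying 25 erlangs of post-disaster offered load.
p_block = erlang_b(30, 25.0)
```

Planners would invert this relation: given a surge-traffic estimate for the catastrophic event, find the smallest channel count keeping blocking below a target such as 1%.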

  6. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Providing Emergency Telecommunications

    Energy Technology Data Exchange (ETDEWEB)

    Juan D. Deaton

    2008-05-01

    Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms could be fitted with telecommunications equipment and used to support these critical communications missions once the catastrophic event occurs. With the ability to be continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes potential emergency communications architecture and presents a method for estimating emergency communications systems traffic patterns for a catastrophic event.

  7. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Emergency Telecommunications

    Directory of Open Access Journals (Sweden)

    Juan D. Deaton

    2008-09-01

    Full Text Available Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms could be fitted with telecommunications equipment and used to support these critical communications missions once the catastrophic event occurs. With the ability to be continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes potential emergency communications architecture and presents a method for estimating emergency communications systems traffic patterns for a catastrophic event.

  8. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

    ... wind- and thermohaline-forced isopycnic coordinate model of the North Atlantic. J. Phys. Oceanogr. 22, 1486–1505. Bleck, R., 2002. An oceanic general circulation model framed in hybrid isopycnic-Cartesian coordinates. Ocean Modell. 4, 55–88. Buijsman, M.C., Kanarska, Y., McWilliams, J.C., 2010. ... continental margin. Cont. Shelf Res. 24 (6), 693–720. Nakayama, K. and Imberger, J., 2010. Residual circulation due to internal waves shoaling on a slope ...

  9. Capabilities For Modelling Of Conversion Processes In Life Cycle Assessment

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    Life cycle assessment was traditionally used for the modelling of product design and optimization. This is also seen in conventional LCA software, which is optimized for the modelling of single material streams of a homogeneous nature that are assembled into a final product. There has therefore been...

  10. The Creation and Use of an Analysis Capability Maturity Model (trademark) (ACMM)

    National Research Council Canada - National Science Library

    Covey, R. W; Hixon, D. J

    2005-01-01

    .... Capability Maturity Models (trademark) (CMMs) are being used in several intellectual endeavors, such as software engineering, software acquisition, and systems engineering. This Analysis CMM (ACMM...

  11. On the predictive capabilities of multiphase Darcy flow models

    KAUST Repository

    Icardi, Matteo

    2016-01-09

    Darcy's law is a widely used model and the limit of its validity is fairly well known. When the flow is sufficiently slow and the porosity relatively homogeneous and low, Darcy's law is the homogenized equation arising from the Stokes and Navier-Stokes equations and depends on a single effective parameter (the absolute permeability). However, when the model is extended to multiphase flows, the assumptions are much more restrictive and less realistic. Therefore it is often used in conjunction with empirical models (such as relative permeability and capillary pressure curves), derived usually from phenomenological speculations and experimental data fitting. In this work, we present the results of a Bayesian calibration of a two-phase flow model, using high-fidelity DNS numerical simulation (at the pore scale) in a realistic porous medium. These reference results have been obtained from a Navier-Stokes solver coupled with an explicit interphase-tracking scheme. The Bayesian inversion is performed on a simplified 1D model in Matlab by using an adaptive spectral method. Several data sets are generated and considered to assess the validity of this 1D model.
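Neither the empirical curves nor the calibration setup are reproduced in this record; the single-phase law and its empirical multiphase extension described above can be sketched as follows (the Corey-type exponent is a hypothetical choice, exactly the kind of parameter the paper's Bayesian inversion would calibrate):

```python
def corey_kr(s: float, n: float = 2.0) -> float:
    """Hypothetical Brooks-Corey-type relative permeability: kr = S^n,
    with saturation S clamped to [0, 1]."""
    return max(0.0, min(1.0, s)) ** n

def darcy_phase_flux(k: float, kr: float, mu: float, dpdx: float) -> float:
    """Multiphase Darcy flux for one phase: q = -(k * kr / mu) * dp/dx.
    k: absolute permeability (m^2), mu: viscosity (Pa s), dpdx: pressure gradient (Pa/m)."""
    return -(k * kr / mu) * dpdx

# Single-phase limit (kr = 1): 1 darcy-ish rock, water, 10 kPa/m gradient.
q = darcy_phase_flux(k=1e-12, kr=1.0, mu=1e-3, dpdx=-1e4)  # m/s, positive = downgradient
```

Setting kr = corey_kr(S) at a given saturation recovers the empirical two-phase closure the abstract refers to.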

  12. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC improved the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.
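NCGEM's equations are not reproduced in this summary; the proportionality described above can be illustrated with a toy one-industry Cobb-Douglas economy (all numbers hypothetical, and a real CGE model would also re-equilibrate prices and factor allocations across sectors):

```python
def cobb_douglas(A: float, K: float, L: float, alpha: float = 0.3) -> float:
    """Sector output Y = A * K^alpha * L^(1-alpha), with total factor
    productivity A, capital K, and labor L."""
    return A * K**alpha * L**(1 - alpha)

K, L = 100.0, 200.0
base = cobb_douglas(1.0, K, L)       # baseline simulation
shocked = cobb_douglas(0.8, K, L)    # 20% cut in overall factor productivity
impact = (shocked - base) / base     # fractional change in sector output
```

Because output is linear in A when factor inputs are held fixed, the first-order sector impact equals the productivity cut itself (-20%); the interesting CGE results are the general-equilibrium deviations from this benchmark.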

  13. Lattice Boltzmann model capable of mesoscopic vorticity computation.

    Science.gov (United States)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM), which is of only second-order accuracy in hydrodynamic velocity itself. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation-time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show, with enough degrees of freedom and appropriate modifications, the mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with a single relaxation parameter.
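For orientation, the standard mesoscopic strain-rate relation the abstract builds on (a single-relaxation-time BGK result quoted here for context, not the paper's D3Q27 multiple-relaxation-time construction) reads, in lattice units with \(\Delta t = 1\):

```latex
S_{\alpha\beta} \;\approx\; -\frac{1}{2\,\rho\,c_s^{2}\,\tau}
\sum_{i} f_i^{\mathrm{neq}}\, e_{i\alpha}\, e_{i\beta},
\qquad
f_i^{\mathrm{neq}} = f_i - f_i^{\mathrm{eq}},
```

where \(e_i\) are the discrete velocities, \(c_s\) the lattice sound speed, and \(\tau\) the relaxation time. Only the symmetric part of the velocity gradient appears in these second-order moments; the paper's contribution is choosing higher moments on the D3Q27 lattice so that the antisymmetric part, and hence the vorticity \(\omega = \nabla \times u\), is likewise recoverable from \(f_i^{\mathrm{neq}}\) with second-order accuracy.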

  15. High Altitude Pollution Program Stratospheric Measurement System Laboratory Performance Capability Report Chemical Conversion Techniques.

    Science.gov (United States)

    1980-02-01

    characteristic fluorescence can be measured. Also, the technique could be extended to NO by first converting NO to NO2 as is done in the chemiluminescent ... bimolecular reactions with k7 >> k9, the differential rate equation for the intermediary NO3 is given by d[NO3]/dt = 2k19[NO2][NO3] + 2k20[NO3] ...

  16. Capabilities of optical SIV technique in measurements of flow velocity vector field dynamics

    Science.gov (United States)

    Mikheev, N. I.; Dushin, N. S.; Saushin, I. I.

    2017-11-01

    The main difference between the Smoke Image Velocimetry (SIV) technique and conventional PIV is that SIV uses the higher concentration of tracer particles typical of smoke visualization techniques. Not separate particles but smoke structures with continuous pixel intensity are visible in the recorded images. Owing to better smoke reflectivity, higher spatial and temporal resolution is obtained even when relatively simple equipment (a camera and a laser) is used. It is simple enough to perform SIV measurements of velocity vector field dynamics at frequencies exceeding 15,000 Hz, which offers new opportunities in unsteady flow examination. The paper describes the fundamentals of the SIV technique and gives some new results obtained using this method for measurements that require high spatial and temporal resolution. The latter include frequency spectra of turbulent velocity fluctuations, turbulence dissipation profiles in the boundary layer, and higher-order moments of velocity fluctuations. It has been shown that the SIV technique considerably extends the potential of experimental studies of turbulence and flow structure in high-speed processes.
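SIV, like PIV, ultimately infers displacement by correlating interrogation windows between consecutive frames, only on continuous smoke-intensity patterns rather than on discrete particle images. A minimal sketch of that correlation step on synthetic data (illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))          # stand-in for a smoke-intensity window
# Second frame: the same pattern displaced by (3, 5) pixels between exposures.
frame_b = np.roll(frame_a, (3, 5), axis=(0, 1))

# Circular cross-correlation via FFT; its peak sits at the displacement,
# which divided by the inter-frame time gives the local velocity vector.
corr = np.fft.ifft2(np.fft.fft2(frame_b) * np.conj(np.fft.fft2(frame_a))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
```

Real SIV/PIV processing adds windowing, sub-pixel peak fitting, and outlier validation on top of this core operation.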

  17. Proceedings of the Joint EC, OECD, IAEA Specialists Meeting on NDE Techniques Capability Demonstration and Inspection Qualification

    International Nuclear Information System (INIS)

    Von Estorff, U.; Lemaitre, P.

    1997-01-01

    The 1997 International Specialists Meeting on NDE Techniques Capability Demonstration and Inspection Qualification was intended to provide an international forum for the discussion of recent developments, results and experience with NDE techniques capability demonstration and inspection qualification methods. The meeting provided an opportunity to compare and assess the qualification principles as proposed or applied by the American Performance Demonstration Initiative, the European Network on Inspection Qualification and the IAEA in its proposed guidelines specific to WWERs. The meeting addressed, in terms of state of the art, the capability demonstration of NDE procedures applied to the major nuclear reactor components. Special emphasis was placed on NDE techniques qualification to detect and size flaws in order to assure structural integrity during plant design life or beyond. National positions and experience were presented, showing the typical variety of applications of one or two general principles or methodologies in agreement with national legal and traditional aspects. Experience developed by national qualification bodies and by pilot studies was rich in information concerning the difficulties encountered during the studies. Risk-Based Inspection concepts were explained owing to their relevance to the setting of the ISI objectives and therefore the level of qualification required for each situation considered.

  18. EASEWASTE-life cycle modeling capabilities for waste management technologies

    DEFF Research Database (Denmark)

    Bhander, Gurbakhash Singh; Christensen, Thomas Højlund; Hauschild, Michael Zwicky

    2010-01-01

    Background, Aims and Scope The management of municipal solid waste and the associated environmental impacts are subject of growing attention in industrialized countries. EU has recently strongly emphasized the role of LCA in its waste and resource strategies. The development of sustainable solid...... waste management model EASEWASTE, developed at the Technical University of Denmark specifically to meet the needs of the waste system developer with the objective to evaluate the environmental performance of the various elements of existing or proposed solid waste management systems. Materials...... and quantities as well as for the waste technologies mentioned above. The model calculates environmental impacts and resource consumptions and allows the user to trace all impacts to their source in a waste treatment processes or in a specific waste material fraction. In addition to the traditional impact...

  19. Capabilities for modelling of conversion processes in LCA

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    2015-01-01

    , EASETECH (Clavreul et al., 2014) was developed which integrates a matrix approach for the functional unit which contains the full chemical composition for different material fractions, and also the number of different material fractions present in the overall mass being handled. These chemical substances...... able to set constraints for a possible flow on basis of other flows, and also do return flows for some material streams. We have therefore developed a new editor for the EASETECH software, which allows the user to make specific process modules where the actual chemical conversion processes can...... be modelled and then integrated into the overall LCA model. This allows for flexible modules which automatically will adjust the material flows it is handling on basis of its chemical information, which can be set for multiple input materials at the same time. A case example of this was carried out for a bio...

  20. NASA Air Force Cost Model (NAFCOM): Capabilities and Results

    Science.gov (United States)

    McAfee, Julie; Culver, George; Naderi, Mahmoud

    2011-01-01

    NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs to mission characteristics in order to predict new project costs, and is based on historical NASA and Air Force space projects. It is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component level and estimates development and production costs. It is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a restricted government version and a contractor-releasable version.
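NAFCOM's actual CERs are restricted, but the generic form of a weight-based CER is a power law fit to historical projects. A sketch with hypothetical coefficients (a, b below are invented for illustration, not NAFCOM values):

```python
def cer_cost(weight_kg: float, a: float, b: float) -> float:
    """Weight-based power-law cost estimating relationship:
    cost ($M) = a * W^b, with a and b regressed from historical analogs."""
    return a * weight_kg ** b

# Hypothetical avionics-subsystem CER: a = 1.2, b = 0.55.
dev_cost = cer_cost(120.0, a=1.2, b=0.55)
```

An exponent b < 1 encodes the economy of scale typically seen in historical data: doubling subsystem mass less than doubles its estimated cost.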

  1. Expanding the modeling capabilities of the cognitive environment simulation

    International Nuclear Information System (INIS)

    Roth, E.M.; Mumaw, R.J.; Pople, H.E. Jr.

    1991-01-01

    The Nuclear Regulatory Commission has been conducting a research program to develop more effective tools to model the cognitive activities that underlie intention formation during nuclear power plant (NPP) emergencies. Under this program an artificial intelligence (AI) computer simulation called Cognitive Environment Simulation (CES) has been developed. CES simulates the cognitive activities involved in responding to a NPP accident situation. It is intended to provide an analytic tool for predicting likely human responses, and the kinds of errors that can plausibly arise under different accident conditions to support human reliability analysis. Recently CES was extended to handle a class of interfacing loss of coolant accidents (ISLOCAs). This paper summarizes the results of these exercises and describes follow-on work currently underway

  2. An Analysis of the Space Transportation System Launch Rate Capability Utilizing Q-GERT Simulation Techniques.

    Science.gov (United States)

    1982-12-01

    VAPE was modeled to determine this launch rate and to determine the processing times for an Orbiter at VAPE. This information was then used in the ... year (node 79 and activity ?1). ETs are then selected to be sent to either KSC or VAPE (node 80). This decision is made (using Ur 8) on the basis of ...

  3. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
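The report's calibrated models and UQ machinery are not given in this record; a minimal Monte Carlo sketch of the underlying idea, propagating a distribution over a material parameter through a simple bilinear plasticity model (all values hypothetical), might look like:

```python
import random

def stress(strain: float, E: float, sy: float, H: float) -> float:
    """Bilinear elastic-plastic response (MPa): linear elastic up to the
    yield strain sy/E, then linear hardening with modulus H."""
    ey = sy / E
    return E * strain if strain <= ey else sy + H * (strain - ey)

random.seed(1)
# Treat yield strength as a random variable capturing specimen-to-specimen
# variability (hypothetical Gaussian spread from AM defects).
samples = [stress(0.01, E=200e3, sy=random.gauss(250.0, 15.0), H=2e3)
           for _ in range(1000)]
mean_stress = sum(samples) / len(samples)
```

A calibrated UQ model would replace the assumed Gaussian with a distribution inferred from experiments, so that each realization is a plausible stress-strain curve rather than a single mean response.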

  4. Technique development for modulus, microcracking, hermeticity, and coating evaluation capability characterization of SiC/SiC tubes

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Xunxiang [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Ang, Caen K. [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Singh, Gyanender P. [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Katoh, Yutai [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    Driven by the need to enlarge the safety margins of nuclear fission reactors in accident scenarios, research and development of accident-tolerant fuel has become an important topic in the nuclear engineering and materials community. A continuous-fiber SiC/SiC composite is under consideration as a replacement for traditional zirconium alloy cladding owing to its high-temperature stability, chemical inertness, and exceptional irradiation resistance. An important task is the development of characterization techniques for SiC/SiC cladding, since traditional work using rectangular bars or disks cannot directly provide useful information on the properties of SiC/SiC composite tubes for fuel cladding applications. At Oak Ridge National Laboratory, experimental capabilities are under development to characterize the modulus, microcracking, and hermeticity of as-fabricated, as-irradiated SiC/SiC composite tubes. Resonant ultrasound spectroscopy has been validated as a promising technique to evaluate the elastic properties of SiC/SiC composite tubes and microcracking within the material. A similar technique, impulse excitation, is efficient in determining the basic mechanical properties of SiC bars prepared by chemical vapor deposition; it also has potential for application in studying the mechanical properties of SiC/SiC composite tubes. Complete evaluation of the quality of the developed coatings, a major mitigation strategy against gas permeation and hydrothermal corrosion, requires the deployment of various experimental techniques, such as scratch indentation, tensile pulling-off tests, and scanning electron microscopy. In addition, a comprehensive permeation test station is being established to assess the hermeticity of SiC/SiC composite tubes and to determine the H/D/He permeability of SiC/SiC composites. This report summarizes the current status of the development of these experimental capabilities.

  5. Estimating Heat and Mass Transfer Processes in Green Roof Systems: Current Modeling Capabilities and Limitations (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Tabares Velasco, P. C.

    2011-04-01

    This presentation discusses estimating heat and mass transfer processes in green roof systems: current modeling capabilities and limitations. Green roofs are 'specialized roofing systems that support vegetation growth on rooftops.'

  6. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error introduced by a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction method is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
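A minimal sketch of the variable-capacitance idea: the main capacitance is a piecewise-linear function of voltage, and state of charge follows from integrating C(v) dv up to the terminal voltage. The breakpoint values and the 2.7 V rating below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical piecewise-linear capacitance curve (breakpoints are made up)
V_PTS = np.array([0.0, 1.0, 2.0, 2.7])          # terminal voltage, V
C_PTS = np.array([280.0, 300.0, 330.0, 350.0])  # main capacitance, F

def capacitance(v):
    """Piecewise-linear main capacitance C(v)."""
    return np.interp(v, V_PTS, C_PTS)

def stored_charge(v, n=2001):
    """Q(v) = integral of C(u) du from 0 to v, by trapezoidal quadrature."""
    u = np.linspace(0.0, v, n)
    c = capacitance(u)
    du = u[1] - u[0]
    return du * (c.sum() - 0.5 * (c[0] + c[-1]))

def soc(v, v_rated=2.7):
    """State of charge: stored charge relative to the rated voltage."""
    return stored_charge(v) / stored_charge(v_rated)
```

Because C(v) grows with voltage, state of charge rises faster than v/v_rated near the top of the window, which is exactly the effect a constant-capacitance model misses.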

  7. Guidelines for Applying the Capability Maturity Model Analysis to Connected and Automated Vehicle Deployment

    Science.gov (United States)

    2017-11-23

    The Federal Highway Administration (FHWA) has adapted the Transportation Systems Management and Operations (TSMO) Capability Maturity Model (CMM) to describe the operational maturity of Infrastructure Owner-Operator (IOO) agencies across a range of i...

  8. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, assumes an understanding of graduate level multivariate statistics, including an introduction to SEM.

  9. Security Process Capability Model Based on ISO/IEC 15504 Conformant Enterprise SPICE

    Directory of Open Access Journals (Sweden)

    Mitasiunas Antanas

    2014-07-01

In the context of modern information systems, security has become one of the most critical quality attributes. The purpose of this paper is to address the problem of the quality of information security. The approach taken rests on the main assumption that security is a process-oriented activity; according to this approach, product quality can be achieved by means of process quality, that is, process capability. The SPICE-conformant information security process capability model introduced in the paper builds on the process capability modeling elaborated by the worldwide software engineering community over the last 25 years, namely ISO/IEC 15504, which defines the capability dimension and the requirements for process definition, and the domain-independent integrated Enterprise SPICE model for enterprise-wide assessment and improvement.

  10. Co-firing biomass and coal-progress in CFD modelling capabilities

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Yin, Chungen

    2005-01-01

This paper discusses the development of user-defined FLUENT™ sub-models to improve the modelling capabilities in the area of large biomass particle motion and conversion. Focus is put on a model that includes the influence of particle size and shape on reactivity by resolving intra-particle...

  11. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process while still adequately capturing the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques against actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  12. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  13. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

This paper presents an overview of the vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes that have been developed at Purdue University. The overall predictive framework consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by blown-powder directed energy deposition as well as other additive manufacturing processes. The critical governing equations of each model, and the way the various modules are connected, are illustrated. Illustrative results, along with corresponding experimental validation, demonstrate the capabilities and fidelity of the models. The good correlation with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.
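The modular chain described above can be pictured as data flowing through successive stages, each module consuming the previous module's output. The skeleton below is a hypothetical stand-in: all class names, fields, and scaling relations are invented placeholders to show the vertical integration pattern, not Purdue's actual models.

```python
from dataclasses import dataclass

# Hypothetical module outputs; field names are illustrative stand-ins only.
@dataclass
class PowderStream:
    mass_flux: float          # kg/s delivered into the beam

@dataclass
class MeltPool:
    peak_temp_k: float
    depth_mm: float

@dataclass
class Microstructure:
    grain_size_um: float

def powder_flow_model(feed_rate: float, catchment: float = 0.7) -> PowderStream:
    # Assumed catchment efficiency: only part of the fed powder reaches the pool
    return PowderStream(mass_flux=catchment * feed_rate)

def melt_pool_model(ps: PowderStream, power_w: float) -> MeltPool:
    # Placeholder scaling: hotter pool with more power, cooler with more mass to heat
    t = 1800.0 + 0.5 * power_w / (1.0 + 100.0 * ps.mass_flux)
    return MeltPool(peak_temp_k=t, depth_mm=0.001 * power_w)

def microstructure_model(mp: MeltPool) -> Microstructure:
    # Placeholder: deeper pools cool more slowly, giving coarser grains
    return Microstructure(grain_size_um=10.0 * mp.depth_mm + 1.0)

def predict(feed_rate: float, power_w: float) -> Microstructure:
    """Chain the modules end to end, as in the vertically integrated framework."""
    return microstructure_model(melt_pool_model(powder_flow_model(feed_rate), power_w))
```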

  14. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies are already being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to the new manufacturing methods because of potential advantages such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. Beyond models, advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument can be made for the need for Exascale computing architectures, if a legitimate predictive capability is to be developed.

  15. Non-Destructive Evaluation for Corrosion Monitoring in Concrete: A Review and Capability of Acoustic Emission Technique

    Science.gov (United States)

    Zaki, Ahmad; Chai, Hwa Kian; Aggelis, Dimitrios G.; Alver, Ninel

    2015-01-01

Corrosion of reinforced concrete (RC) structures has been one of the major causes of structural failure. Early detection of the corrosion process could help limit the location and extent of necessary repairs or replacement, as well as reduce the cost associated with rehabilitation work. Non-destructive testing (NDT) methods have been found useful for in-situ evaluation of steel corrosion in RC, where the effect of steel corrosion and the integrity of the concrete structure can be assessed effectively. A complementary study of NDT methods for the investigation of corrosion is presented here. Among these methods, acoustic emission (AE) effectively detects corrosion of concrete structures at an early stage. The capability of the AE technique to detect corrosion in real time makes it a strong candidate for an efficient NDT method, giving it an advantage over other NDT methods. PMID:26251904

  16. An assessment system for the system safety engineering capability maturity model in the case of spent fuel reprocessing

    International Nuclear Information System (INIS)

    Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Bai Xiaofeng

    2012-01-01

Using the system security engineering capability maturity model (SSE-CMM), we can improve processes, evaluate capability, and promote user trust. SSE-CMM is the common method for organizing and implementing security engineering, and it is a mature method for system safety engineering. Combining the capability maturity model (CMM) with total quality management and statistical theory, SSE-CMM turns systems security engineering into a well-defined, mature, measurable, advanced engineering discipline. Lack of domain knowledge, the size of the data, the diversity of evidence, the cumbersomeness of the processes, and the complexity of matching evidence with problems are the main issues that SSE-CMM assessment has to face. To effectively improve the efficiency of assessment under the spent fuel reprocessing system security engineering capability maturity model (SFR-SSE-CMM), in this paper we designed intelligent assessment software based on domain ontology that uses methods such as ontology, evidence theory, the semantic web, intelligent information retrieval, and intelligent auto-matching techniques. This software includes four subsystems, including a domain ontology creation and management system, an evidence auto-collection system, and a problem and evidence matching system. The architecture of the software is divided into five layers: a data layer, an ontology layer, a knowledge layer, a service layer and a presentation layer. (authors)

  17. SIR rumor spreading model considering the effect of difference in nodes’ identification capabilities

    Science.gov (United States)

    Wang, Ya-Qi; Wang, Jing

In this paper, we study the effect of differences in network nodes’ identification capabilities on rumor propagation. A novel susceptible-infected-removed (SIR) model is proposed, based on mean-field theory, to investigate its dynamical behavior on homogeneous networks and inhomogeneous networks, respectively. Theoretical analysis and simulation results demonstrate that when the influence of differences in nodes’ identification capabilities is considered, the critical thresholds increase markedly, while the final rumor sizes are appreciably reduced. We also find that differences in nodes’ identification capabilities prolong the time for rumor propagation to reach a steady state and decrease the number of nodes that ultimately accept the rumor. Additionally, under the influence of differences in nodes’ identification capabilities, the rumor transmission rate on inhomogeneous networks is relatively large compared with homogeneous networks.
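The qualitative claim, that stronger identification capability raises the spreading threshold and shrinks the final rumor size, can be checked with a toy homogeneous mean-field version of such a model. Here an assumed average identification capability mu in [0, 1] simply scales down the effective spreading rate; all rate values are arbitrary illustrative choices, not the paper's parameters.

```python
def final_rumor_size(spread_rate=0.05, stifle_rate=0.2, mu=0.0,
                     avg_degree=6, dt=0.01, steps=20000):
    """Forward-Euler integration of mean-field SIR rumor dynamics on a
    homogeneous network; mu scales down the effective spreading rate."""
    s, i, r = 0.999, 0.001, 0.0   # fractions: ignorant s, spreader i, stifler r
    for _ in range(steps):
        new = (1.0 - mu) * spread_rate * avg_degree * s * i  # new spreaders
        rec = stifle_rate * i                                # spreaders turning stifler
        s -= new * dt
        i += (new - rec) * dt
        r += rec * dt
    return r

# Higher identification capability -> smaller final rumor size
r_none = final_rumor_size(mu=0.0)   # effective R0 = 0.05*6/0.2 = 1.5: outbreak
r_high = final_rumor_size(mu=0.6)   # effective R0 = 0.6: rumor dies out
```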

  18. Advanced techniques in reliability model representation and solution

    Science.gov (United States)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
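As a minimal example of the kind of Markov reliability model tools like SURE solve, consider a two-unit parallel system with per-unit failure rate lambda. Integrating the Kolmogorov forward equations gives the probability of the absorbing failed state, which can be checked against the closed form (1 - e^(-lambda*t))^2. The failure rate and mission time below are arbitrary illustrative values.

```python
def unreliability(lam=1.0e-3, mission_t=100.0, dt=0.01):
    """Two-unit parallel system. States: 0 = both units up, 1 = one up,
    2 = system failed (absorbing). Forward-Euler integration of the
    Kolmogorov equations for the state probabilities."""
    p0, p1, p2 = 1.0, 0.0, 0.0
    for _ in range(int(mission_t / dt)):
        f01 = 2.0 * lam * p0   # either of the two units fails
        f12 = lam * p1         # the remaining unit fails
        p0 -= f01 * dt
        p1 += (f01 - f12) * dt
        p2 += f12 * dt
    return p2

# Closed-form check: P(system failed by t) = (1 - exp(-lam*t))**2
```

Real fault-tolerant architectures add repair transitions and many more states, which is why automated model generation (as in RMG) and parallel solution (as in ASSURE) matter.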

  19. Assessing the LWR codes capability to address SFR BDBAs: Modeling of the ABCOVE tests

    International Nuclear Information System (INIS)

    Garcia, M.; Herranz, L. E.

    2012-01-01

The present paper is aimed at assessing the current capability of LWR codes to model aerosol transport within an SFR containment under BDBA conditions. Through a systematic application of the ASTEC and MELCOR codes to relevant ABCOVE tests, insights have been gained into the drawbacks and capabilities of these computational tools. Hypotheses and approximations have been adopted so that differences in boundary conditions between LWR and SFR containments under BDBA can be accommodated to some extent.

  20. Exploring a capability-demand interaction model for inclusive design evaluation

    OpenAIRE

    Persad, Umesh

    2012-01-01

    Designers are required to evaluate their designs against the needs and capabilities of their target user groups in order to achieve successful, inclusive products. This dissertation presents exploratory research into the specific problem of supporting analytical design evaluation for Inclusive Design. The analytical evaluation process involves evaluating products with user data rather than testing with actual users. The work focuses on the exploration of a capability-demand model of product i...

  1. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  2. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
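For context, the 'goodness of fit' criteria in guidelines such as ASHRAE Guideline 14 are typically expressed as the normalized mean bias error (NMBE) and the coefficient of variation of the RMSE, CV(RMSE), between model predictions and (here, surrogate) utility data. The sketch below uses the plain-n forms of the statistics; published guidelines differ slightly in their degrees-of-freedom corrections, and no threshold values are encoded here. The monthly data are made up.

```python
import math

def nmbe_percent(measured, predicted):
    """Normalized mean bias error, as a percent of the measured mean."""
    n = len(measured)
    mean = sum(measured) / n
    bias = sum(p - m for m, p in zip(measured, predicted))
    return 100.0 * bias / (n * mean)

def cv_rmse_percent(measured, predicted):
    """Coefficient of variation of the RMSE, as a percent of the measured mean."""
    n = len(measured)
    mean = sum(measured) / n
    rmse = math.sqrt(sum((p - m) ** 2 for m, p in zip(measured, predicted)) / n)
    return 100.0 * rmse / mean

# Monthly surrogate "utility" data vs. calibrated-model predictions (invented)
bills = [820.0, 760.0, 690.0, 540.0, 470.0, 430.0]
model = [801.0, 772.0, 705.0, 528.0, 466.0, 445.0]
```

Note that a model can score well on both statistics (figure of merit 3) while still missing the true parameter values (figure of merit 2) and mispredicting retrofit savings (figure of merit 1), which is precisely the gap the surrogate-data method is designed to expose.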

  3. The capability and constraint model of recoverability: An integrated theory of continuity planning.

    Science.gov (United States)

    Lindstedt, David

    2017-01-01

    While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.

  4. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  5. The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology: Capabilities and Applications

    Science.gov (United States)

    Evers, Ken H.; Bachert, Robert F.

    1987-01-01

    The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology has been formulated and applied over a five-year period. It has proven to be a unique, integrated approach utilizing a top-down, structured technique to define and document the system of interest; a knowledge engineering technique to collect and organize system descriptive information; a rapid prototyping technique to perform preliminary system performance analysis; and a sophisticated simulation technique to perform in-depth system performance analysis.

  6. Dynamic capabilities, Marketing Capability and Organizational Performance

    Directory of Open Access Journals (Sweden)

    Adriana Roseli Wünsch Takahashi

    2017-01-01

The goal of this study is to investigate the influence of dynamic capabilities on organizational performance and the role of marketing capability as a mediator in this relationship, in the context of private higher education institutions (HEIs) in Brazil. As the research method, we carried out a survey of 316 HEIs, and data analysis was operationalized with the structural equation modeling technique. The results indicate that dynamic capabilities influence organizational performance only when mediated by marketing capability. Marketing capability plays an important role in the survival, growth and renewal of educational service offerings for HEIs in the private sector, and consequently in organizational performance. It is also demonstrated that the mediated relationship is stronger for HEIs with up to 3,000 students, and that other organizational profile variables, such as number of courses, constitution, type of institution and type of education, do not significantly alter the results.

  7. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

Accurate and fine-grained discovery of diverse Earth observation (EO) sensors ensures a comprehensive response to collaborative observation-required emergency tasks. This discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends existing sensor observation-related metadata standards to enable accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules, namely, ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized in the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on a provided geo-event. An application to flood emergency observation in the Yangtze River Basin in China was conducted, and the results indicate that the sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures minimal unification to represent the observation capabilities of EO sensors. The model functions as a foundation for the efficient discovery of EO sensors. In addition, the definition and development of this EO sensor observation capability metadata model is a helpful step toward extending the Sensor Model Language (SensorML) 2.0 Profile for the description of the observation capabilities of EO sensors.
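A rough sketch of how the five sub-modules might be organized as a plain data model, with a toy fine-grained filter for a flood geo-event. All field names, units, and threshold values are invented for illustration; they are not the paper's SensorML encoding.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ObservationBreadth:
    swath_km: float
    spectral_bands: List[str]

@dataclass
class ObservationDepth:
    spatial_res_m: float

@dataclass
class ObservationFrequency:
    revisit_hours: float

@dataclass
class ObservationQuality:
    works_through_cloud: bool   # e.g. SAR vs. optical

@dataclass
class ObservationData:
    formats: List[str]

@dataclass
class ObservationCapability:
    sensor_id: str
    breadth: ObservationBreadth
    depth: ObservationDepth
    frequency: ObservationFrequency
    quality: ObservationQuality
    data: ObservationData

def suits_flood_task(cap, max_res_m=30.0, max_revisit_h=24.0):
    """Toy fine-grained filter for a flood-observation geo-event."""
    return (cap.depth.spatial_res_m <= max_res_m
            and cap.frequency.revisit_hours <= max_revisit_h
            and cap.quality.works_through_cloud)

sar = ObservationCapability(
    "sensor-A",
    ObservationBreadth(250.0, ["C-band"]),
    ObservationDepth(10.0),
    ObservationFrequency(12.0),
    ObservationQuality(True),
    ObservationData(["GeoTIFF"]),
)
```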

  8. Incorporating numerical modeling into estimates of the detection capability of the IMS infrasound network

    Science.gov (United States)

    Le Pichon, A.; Ceranna, L.; Vergoz, J.

    2012-03-01

To monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a dedicated International Monitoring System (IMS) is being deployed. Recent global scale observations recorded by this network confirm that its detection capability is highly variable in space and time. Previous studies estimated the radiated source energy from remote observations using empirical yield-scaling relations which account for the along-path stratospheric winds. Although the empirical wind correction reduces the variance in the explosive energy versus pressure relationship, strong variability remains in the yield estimate. Today, numerical modeling techniques provide a basis to better understand the role of different factors describing the source and the atmosphere that influence propagation predictions. In this study, the effects of the source frequency and the stratospheric wind speed are simulated. In order to characterize fine-scale atmospheric structures which are excluded from the current atmospheric specifications, model predictions are further enhanced by the addition of perturbation terms. A theoretical attenuation relation is thus developed from massive numerical simulations using the Parabolic Equation method. Compared with previous studies, our approach provides a more realistic physical description of long-range infrasound propagation. We obtain a new relation combining a near-field and a far-field term, which together account for the effects of both geometrical spreading and absorption. In the context of the future verification of the CTBT, the derived attenuation relation quantifies the spatial and temporal variability of the IMS infrasound network performance in higher resolution, and will be helpful for the design, and for prioritizing the maintenance, of any arbitrary infrasound monitoring network.
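The abstract does not reproduce the relation itself. Schematically, a combined near-field/far-field attenuation law of the kind described can be written with placeholder coefficients (all symbols below are generic stand-ins to be fitted against the Parabolic Equation simulations, not the authors' published values):

```latex
A(R, f) \;=\;
\underbrace{\frac{1}{R^{\alpha}}}_{\text{near field: geometrical spreading}}
\;+\;
\underbrace{\frac{\beta(f, v_s)}{R^{\delta}}\, e^{-\kappa(f)\, R}}_{\text{far field: spreading and absorption}}
```

where $R$ is the range, $f$ the source frequency, $v_s$ the stratospheric wind speed, and $\alpha$, $\delta$, $\beta$, $\kappa$ are fitted attenuation coefficients.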

  9. Incorporating numerical modelling into estimates of the detection capability of the IMS infrasound network

    Science.gov (United States)

    Le Pichon, A.; Ceranna, L.

    2011-12-01

    To monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a dedicated International Monitoring System (IMS) is being deployed. Recent global scale observations recorded by this network confirm that its detection capability is highly variable in space and time. Previous studies estimated the radiated source energy from remote observations using empirical yield-scaling relations which account for the along-path stratospheric winds. Although the empirical wind correction reduces the variance in the explosive energy versus pressure relationship, strong variability remains in the yield estimate. Today, numerical modelling techniques provide a basis to better understand the role of different factors describing the source and the atmosphere that influence propagation predictions. In this study, the effects of the source frequency and the stratospheric wind speed are simulated. In order to characterize fine-scale atmospheric structures which are excluded from the current atmospheric specifications, model predictions are further enhanced by the addition of perturbation terms. Thus, a theoretical attenuation relation is developed from massive numerical simulations using the Parabolic Equation method. Compared with previous studies, our approach provides a more realistic physical description of infrasound propagation. We obtain a new relation combining a near-field and far-field term which account for the effects of both geometrical spreading and dissipation on the pressure wave attenuation. By incorporating real ambient infrasound noise at the receivers which significantly limits the ability to detect and identify signals of interest, the minimum detectable source amplitude can be derived in a broad frequency range. Empirical relations between the source spectrum and the yield of explosions are used to infer detection thresholds in tons of TNT equivalent. In the context of the future verification of the CTBT, the obtained attenuation relation quantifies

  10. Workshop on Computational Modelling Techniques in Structural ...

    Indian Academy of Sciences (India)

Resonance – Journal of Science Education, Volume 22, Issue 6, June 2017, pp. 619. Workshop on Computational Modelling Techniques in Structural Biology (Information and Announcements).

  11. A theoretical model for developing core capabilities from an intellectual capital perspective (Part 1)

    Directory of Open Access Journals (Sweden)

    Marius Ungerer

    2005-10-01

One of the basic assumptions associated with the theoretical model described in this article is that an organisation (a system) can acquire capabilities through intentional strategic and operational initiatives. This intentional capability-building process also implies that the organisation intends to use these capabilities in a constructive way to increase the firm's competitive advantage. Summary (translated from the Afrikaans opsomming): One of the basic assumptions associated with the theoretical model described in this article is that an organisation (a system) can acquire capabilities through purposeful strategic and operational initiatives. This intended capability-creation process also presupposes that the enterprise aims to use these capabilities constructively to increase the organisation's competitive advantage.

  12. A theoretical model for developing core capabilities from an intellectual capital perspective (Part 2)

    Directory of Open Access Journals (Sweden)

    Marius Ungerer

    2005-10-01

One of the basic assumptions associated with the theoretical model described in this article is that an organization (a system) can acquire capabilities through intentional strategic and operational initiatives. This intentional capability-building process also implies that the organisation intends to use these capabilities in a constructive way to increase the firm's competitive advantage. Summary (translated from the Afrikaans opsomming): One of the basic assumptions associated with the theoretical model described in this article is that an organisation (a system) can acquire capabilities through purposeful strategic and operational initiatives. This intended capability-creation process also presupposes that the enterprise aims to use these capabilities constructively to increase the organisation's competitive advantage.

  13. Capability-based Access Control Delegation Model on the Federated IoT Network

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Mahalle, Parikshit N.; Prasad, Neeli R.

    2012-01-01

Flexibility is an important property for a general access control system, and especially in the Internet of Things (IoT), where it can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for systems with no resource constraints, such as web-based systems, and are not very suitable for a highly pervasive system such as the IoT. To this end, this paper presents an access delegation method with security considerations based on the Capability-based Context Aware Access Control (CCAAC) model, intended for federated machine-to-machine communication or IoT networks. The main idea of our proposed model is that access delegation is realized by means of a capability propagation mechanism, incorporating context information as well as secure capability propagation under federated IoT environments. By using...

  14. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    Science.gov (United States)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of Steps 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  15. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems across a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
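The model-averaging step described above can be sketched in a few lines: the reversible-jump chain's visit counts per model estimate the posterior model probabilities, and predictions are then combined as a probability-weighted mean. The visit counts and per-model predictions below are hypothetical, for illustration only, not GOMOS retrieval output:

```python
import numpy as np

# Hypothetical visit counts of a reversible-jump chain over four candidate
# aerosol models; relative visit frequencies estimate the posterior
# model probabilities.
visit_counts = np.array([5200, 2900, 1400, 500])
post_prob = visit_counts / visit_counts.sum()

# Hypothetical per-model predictions of aerosol cross-section at three
# wavelengths (rows: models, columns: wavelengths).
predictions = np.array([
    [1.00, 0.80, 0.60],
    [1.10, 0.85, 0.55],
    [0.95, 0.78, 0.62],
    [1.05, 0.82, 0.58],
])

# Bayesian model average: probability-weighted mean over the models.
bma_prediction = post_prob @ predictions

# Spread of per-model predictions around the average quantifies the
# uncertainty contributed by model selection itself.
bma_var = post_prob @ (predictions - bma_prediction) ** 2
```

Because the average carries weight from every model, the uncertainty of selecting any single aerosol model is folded into the final estimate rather than ignored.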

  16. University-Industry Research Collaboration: A Model to Assess University Capability

    Science.gov (United States)

    Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia

    2011-01-01

    Scholars and policy makers recognize that collaboration between industry and public research institutions is a necessity for innovation and national economic development. This work presents an econometric model which expresses a university's capability for collaboration with industry as a function of size, location and research quality. The…

  17. Semantic Model of Variability and Capabilities of IoT Applications for Embedded Software Ecosystems

    DEFF Research Database (Denmark)

    Tomlein, Matus; Grønbæk, Kaj

    2016-01-01

    Applications in embedded open software ecosystems for Internet of Things devices open new challenges regarding how their variability and capabilities should be modeled. In collaboration with an industrial partner, we have recognized that such applications have complex constraints on the context. We...

  18. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  19. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    Science.gov (United States)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

    In this paper, a reliable cellular automata model is presented, oriented to faithfully reproduce deceleration and acceleration according to realistic driver reactions when vehicles with different deceleration capabilities are considered. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of driver behavior, vehicle capabilities and kinetics, while preserving simplicity. In particular, vehicle kinetics is based on uniformly accelerated motion, rather than on impulsive accelerated motion as in most existing CA models. Thus, the proposed model calculates in an analytic way three safety-preserving distances to determine the best action a follower vehicle can take under a worst-case scenario. Besides, the prediction analysis guarantees that, under the proper assumptions, collisions between vehicles cannot happen at any future time. Simulation results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform a smooth approach to slower or stopped vehicles. Moreover, the model is also capable of reproducing most empirical findings, including the backward speed of the downstream front of the traffic jam and the different congested traffic patterns induced by a system with open boundary conditions with an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real-time traffic simulation of large networks.
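For reference, the family of CA models the abstract builds on can be illustrated with the classic Nagel–Schreckenberg update rule. This sketch is the textbook model, not the authors' scheme, which replaces the impulsive acceleration step with uniformly accelerated motion and analytic safety distances:

```python
import numpy as np

def nasch_step(pos, vel, L, vmax=5, p_slow=0.3, rng=None):
    """One parallel update of the classic Nagel-Schreckenberg CA on a
    ring of L cells: accelerate, brake to the gap ahead, randomly slow
    down, then move. Braking to the gap guarantees no collisions."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    n = len(pos)
    gaps = (pos[(np.arange(n) + 1) % n] - pos - 1) % L  # empty cells ahead
    vel = np.minimum(vel + 1, vmax)    # acceleration
    vel = np.minimum(vel, gaps)        # braking (collision avoidance)
    vel = np.maximum(vel - (rng.random(n) < p_slow), 0)  # random slowdown
    return (pos + vel) % L, vel
```

Iterating the step keeps all vehicles on distinct cells; the impulsive velocity changes in the braking and slowdown steps are exactly what the abstract's smooth-approach rules are designed to avoid.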

  20. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao

    2016-01-01

    How do externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? This study proposes and tests a mediation model of how firms' internal technological diversification and R&D, as two distinctive microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms' technological innovation performance, based on the resource-based view and dynamic capability view. Using a sample of listed Chinese licensee firms, we find that firms must broadly explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently to transform and reconfigure firms' technological resources.

  1. Space geodetic techniques for global modeling of ionospheric peak parameters

    Science.gov (United States)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions which transmit signals through the Earth’s upper atmosphere - the ionosphere - makes precise, reliable and near real-time models of the ionospheric parameters ever more crucial. In the last decades space geodetic techniques have become a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Among these systems, the current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others, have found several applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, using various space geodetic techniques and applying a combination procedure for the computation of the global model of electron density. In order to model the ionosphere in 3D, electron density is represented as a function of the maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions of degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are used, but the STEC values are obtained through a simulation procedure using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure; moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/Cosmic data.
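A degree-and-order-15 spherical harmonic expansion of a global field such as NmF2 can be sketched as follows. The coefficients here are random placeholders (the paper estimates them from GNSS data), and the real basis is built from SciPy's complex spherical harmonics in one common convention:

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, n, lon, colat):
    """Real-valued spherical harmonic of degree n, order m
    (lon = azimuth in radians, colat = colatitude in radians)."""
    if m > 0:
        return np.sqrt(2) * sph_harm(m, n, lon, colat).real
    if m < 0:
        return np.sqrt(2) * sph_harm(-m, n, lon, colat).imag
    return sph_harm(0, n, lon, colat).real

# Hypothetical coefficient set for a degree/order-15 expansion; in the
# paper these would come from the least-squares estimation.
nmax = 15
rng = np.random.default_rng(1)
coeffs = {(n, m): rng.normal() for n in range(nmax + 1) for m in range(-n, n + 1)}

def nmf2(lon_deg, lat_deg):
    """Evaluate the expansion at a geographic longitude/latitude."""
    lon = np.radians(lon_deg)
    colat = np.radians(90.0 - lat_deg)
    return sum(c * real_sph_harm(m, n, lon, colat) for (n, m), c in coeffs.items())
```

A degree/order-15 expansion has 256 coefficients per parameter, which is what makes a global least-squares fit to simulated STEC observations tractable.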

  2. Konsep Tingkat Kematangan penerapan Internet Protokol versi 6 (Capability Maturity Model for IPv6 Implementation)

    Directory of Open Access Journals (Sweden)

    Riza Azmi

    2015-03-01

    Full Text Available The Internet Protocol (IP) is the global internet numbering standard, and the supply of addresses is finite. Globally, IP allocation is governed by the Internet Assigned Numbers Authority (IANA) and delegated through the regional authority of each continent. IP exists in two versions, IPv4 and IPv6; the IPv4 pool at the IANA level was declared exhausted in April 2011. Consequently, IP usage is being directed toward IPv6. To assess how mature an organization is with respect to IPv6 implementation, this study develops a maturity model for IPv6 adoption. The basic concept of the model draws on the Capability Maturity Model Integration (CMMI), with several additions: the IPv6 migration roadmap in Indonesia, the Requests for Comments (RFCs) related to IPv6, and several best practices for IPv6 implementation. With this concept, the study produces a Capability Maturity for IPv6 Implementation model.

  3. New Modelling Capabilities in Commercial Software for High-Gain Antennas

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Lumholt, Michael; Meincke, Peter

    2012-01-01

    This paper presents an overview of selected new modelling algorithms and capabilities in commercial software tools developed by TICRA. A major new area is the design and analysis of printed reflectarrays, where a fully integrated design environment is under development, allowing fast and accurate characterization of the reflectarray element, an initial phase-only synthesis, followed by a full optimization procedure taking into account the near-field from the feed and the finite extent of the array. Another interesting new modelling capability is made available through the DIATOOL software, which is a new type of EM software tool aimed at extending the ways engineers can use antenna measurements in the antenna design process. The tool allows reconstruction of currents and near fields on a 3D surface conformal to the antenna, by using the measured antenna field as input. The currents on the antenna...

  4. Applicability of SEI's Capability Maturity Model in Joint Information Technology, Supreme Command Headquarters

    OpenAIRE

    Thongmuang, Jitti.

    1995-01-01

    The Software Engineering Institute's (SEI) Capability Maturity Model (CMM) is analyzed to identify its technological and economic applicability for the Joint Information Technology (JIT), Supreme Command Headquarters, Royal Thai Ministry of Defense. Kurt Lewin's force field theory was used to analyze different dimensions of CMM's applicability for JIT's organizational environment (defined by the stakeholder concept). It suggests that introducing CMM technology into JIT is unwarranted at this ...

  5. Three Quality Journeys - Capability Maturity Model Integration, Baldrige Performance Excellence Program, and ISO 9000 Series

    Science.gov (United States)

    2012-04-26

    o ... management; also demands involvement by upper executives in order to integrate quality into the business.
    o ISO 9004:2000 standard provided method for ... previously used methods.
    o Indicated that ISO 9000:2008 provided roadmap for creating a quality management system that addressed issues specific to this ...
    CMMI – Capability Maturity Model Integration; CMMI-DEV – CMMI for Development; PDCA – Plan-Do-Check-Act; SCAMPI – Standard CMMI Appraisal Method for Process ...

  6. Spatial Preference Modelling for equitable infrastructure provision: an application of Sen's Capability Approach

    Science.gov (United States)

    Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin

    2014-01-01

    To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, 'the spatial distribution patterns of priority targeting for allocation' (SPT) and, second, 'the effect of new distribution patterns after location-allocation' (ELA). The Moran's I index of the initial map and its comparison with six patterns for SPT as well as ELA consistently indicates that the SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSSs to address spatial equity. This study thus proposes a new formal method for SDSSs with specific attention on resource location-allocation to address spatial equity.
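The spatial-equity interpretation above rests on the standard global Moran's I statistic. A minimal sketch, using a toy four-site lattice with adjacent-neighbour weights rather than the study's Yogyakarta data:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: (n / W) * sum_ij w_ij z_i z_j / sum_i z_i^2,
    with z the deviations from the mean and W the sum of all weights."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return len(x) / w.sum() * (w * np.outer(z, z)).sum() / (z ** 2).sum()

# Toy 1-D lattice of four sites; w[i, j] = 1 for adjacent sites.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Monotone values cluster in space, so I comes out positive.
print(morans_i([1.0, 2.0, 3.0, 4.0], w))
```

Positive values indicate spatial clustering of like values, negative values indicate dispersion; in the study, this is what distinguishes equitable from inequitable allocation patterns.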

  7. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Science.gov (United States)

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.

  8. Earth Observation and Geospatial techniques for Soil Salinity and Land Capability Assessment over Sundarban Bay of Bengal Coast, India

    Directory of Open Access Journals (Sweden)

    Das Sumanta

    2016-12-01

    Full Text Available To guarantee food security and job creation for farmers ranging from small-scale to commercial, unproductive farms in South 24 PGS, West Bengal need a land reform program to be restructured and evaluated for agricultural productivity. This study established a potential role of remote sensing and GIS for the identification and mapping of salinity zones and the spatial planning of agricultural land over the Basanti and Gosaba Islands (808.314 sq. km) of the South 24 PGS district of West Bengal. The primary data, i.e. soil pH, Electrical Conductivity (EC) and Sodium Adsorption Ratio (SAR), were obtained from soil samples at various GCP (Ground Control Point) locations, collected at 50 m intervals by handheld GPS from 0–100 cm depths. The secondary information was acquired from remotely sensed satellite data (LANDSAT ETM+) at different time scales and a digital elevation model. The collected field samples were tested in the laboratory and validated against remote-sensing-based digital indices analysis over the temporal satellite data to assess potential changes due to over-salinization. Soil physical properties such as texture, structure, depth and drainage condition are stored as attributes in a geographical soil database and linked with the soil map units. The thematic maps are integrated with the climatic and terrain conditions of the area to produce land capability maps for paddy. Finally, a weighted overlay analysis was performed to assign weights according to the importance of the parameters taken into account for saline area identification and mapping, segregating higher, moderate and lower salinity zones over the study area.
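The weighted overlay step can be sketched as follows: each criterion raster is normalised to 0–1, combined with importance weights, and the resulting suitability score is cut into salinity classes. The weights and the 2×2 toy rasters are hypothetical, not the study's actual layers:

```python
import numpy as np

# Toy criterion rasters (2x2 cells); in the study these would be full
# raster layers of EC, SAR and pH over the islands.
ec  = np.array([[2.0, 8.0], [4.0, 16.0]])    # electrical conductivity
sar = np.array([[5.0, 20.0], [10.0, 30.0]])  # sodium adsorption ratio
ph  = np.array([[7.0, 8.5], [7.5, 9.0]])

def norm01(a):
    """Min-max normalise a raster to the range [0, 1]."""
    return (a - a.min()) / (a.max() - a.min())

# Assumed importance weights (must sum to 1).
weights = {"ec": 0.5, "sar": 0.3, "ph": 0.2}

score = (weights["ec"] * norm01(ec)
         + weights["sar"] * norm01(sar)
         + weights["ph"] * norm01(ph))

# Classify: 0 = lower, 1 = moderate, 2 = higher salinity zone.
zones = np.digitize(score, [1 / 3, 2 / 3])
print(zones)
```

Swapping in measured rasters and expert-derived weights turns the same three lines of arithmetic into the salinity-zone map described above.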

  9. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamics (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  10. Advances in transgenic animal models and techniques.

    Science.gov (United States)

    Ménoret, Séverine; Tesson, Laurent; Remy, Séverine; Usal, Claire; Ouisse, Laure-Hélène; Brusselle, Lucas; Chenouard, Vanessa; Anegon, Ignacio

    2017-10-01

    On May 11th and 12th 2017 was held in Nantes, France, the international meeting "Advances in transgenic animal models and techniques" ( http://www.trm.univ-nantes.fr/ ). This biennial meeting is the fifth one of its kind to be organized by the Transgenic Rats ImmunoPhenomic (TRIP) Nantes facility ( http://www.tgr.nantes.inserm.fr/ ). The meeting was supported by private companies (SONIDEL, Scionics computer innovation, New England Biolabs, MERCK, genOway, Journal Disease Models and Mechanisms) and by public institutions (International Society for Transgenic Technology, University of Nantes, INSERM UMR 1064, SFR François Bonamy, CNRS, Région Pays de la Loire, Biogenouest, TEFOR infrastructure, ITUN, IHU-CESTI and DHU-Oncogeffe and Labex IGO). Around 100 participants, from France but also from different European countries, Japan and USA, attended the meeting.

  11. IMPACT OF CO-CREATION ON INNOVATION CAPABILITY AND FIRM PERFORMANCE: A STRUCTURAL EQUATION MODELING

    Directory of Open Access Journals (Sweden)

    FATEMEH HAMIDI

    Full Text Available ABSTRACT Traditional firms used to design products, evaluate marketing messages and control product distribution channels with no customer interface. With advancements in interaction technologies, however, users can easily make an impact on firms; the interaction between customers and firms is now at a peak compared to the past and is no longer controlled by firms. Customers play the two roles of value creators and consumers simultaneously. We examine the role of co-creation in the influence of innovation capability on firm performance. We develop hypotheses and test them using survey data. The results suggest that the implementation of co-creation partially mediates the effect of process innovation capability. We discuss the implications of these findings for research and practice on the design and implementation of a unique value co-creation model.

  12. Gaming Technique in Formation of Motor-Coordinational and Psychomotor Capabilities of 5-6 Year-old Children, Going in for Tennis

    Directory of Open Access Journals (Sweden)

    Ervand P. Gasparyan

    2012-06-01

    Full Text Available The application of a gaming technique during the training of 5-6-year-old tennis players, when motor coordination and psychomotor capabilities are formed, made it possible to increase the indices of all the examined motor coordinations, and both to preserve the natural age-related character of motor coordination changes and to improve this process fundamentally.

  13. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    Science.gov (United States)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent accurately land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated

  14. Status Report on Modelling and Simulation Capabilities for Nuclear-Renewable Hybrid Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Epiney, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, J. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bragg-Sitton, S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yigitoglu, A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, S. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ganda, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Maronati, G. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    This report summarizes the current status of the modeling and simulation capabilities developed for the economic assessment of Nuclear-Renewable Hybrid Energy Systems (N-R HES). The increasing penetration of variable renewables is altering the profile of the net demand, with which the other generators on the grid have to cope. N-R HES analyses are being conducted to determine the potential feasibility of mitigating the resultant volatility in the net electricity demand by adding industrial processes that utilize either thermal or electrical energy as stabilizing loads. This coordination of energy generators and users is proposed to mitigate the increase in electricity cost and cost volatility through the production of a saleable commodity. Overall, the financial performance of a system that is comprised of peaking units (i.e. gas turbine), baseload supply (i.e. nuclear power plant), and an industrial process (e.g. hydrogen plant) should be optimized under the constraint of satisfying an electricity demand profile with a certain level of variable renewable (wind) penetration. The optimization should entail both the sizing of the components/subsystems that comprise the system and the optimal dispatch strategy (output at any given moment in time from the different subsystems). Some of the capabilities here described have been reported separately in [1, 2, 3]. The purpose of this report is to provide an update on the improvement and extension of those capabilities and to illustrate their integrated application in the economic assessment of N-R HES.

  15. Management Innovation Capabilities

    DEFF Research Database (Denmark)

    Harder, Mie

    Management innovation is the implementation of a new management practice, process, technique or structure that significantly alters the way the work of management is performed. This paper presents a typology categorizing management innovation along two dimensions: radicalness and complexity. Then, the paper introduces the concept of management innovation capabilities, which refers to the ability of a firm to purposefully create, extend and modify its managerial resource base to address rapidly changing environments. Drawing upon the behavioral theory of the firm and the dynamic capabilities framework, the paper proposes a model of the foundations of management innovation. Propositions and implications for future research are discussed.

  16. Evaluation of the Predictive Capabilities of a Phenomenological Combustion Model for Natural Gas SI Engine

    Directory of Open Access Journals (Sweden)

    Toman Rastislav

    2017-12-01

    Full Text Available The current study evaluates the predictive capabilities of a new phenomenological combustion model, available as part of the GT-Suite software package. It comprises two main sub-models: a 0D model of in-cylinder flow and turbulence, and a turbulent SI combustion model. The 0D in-cylinder flow model (EngCylFlow) uses a combined K-k-ε kinetic energy cascade approach to predict the evolution of the in-cylinder charge motion and turbulence, where K and k are the mean and turbulent kinetic energies, and ε is the turbulent dissipation rate. The subsequent turbulent combustion model (EngCylCombSITurb) gives the in-cylinder burn rate, based on the calculation of flame speeds and flame kernel development. This phenomenological approach reduces the overall computational effort significantly compared to 3D-CFD, thus allowing the computation of the full engine operating map and vehicle driving cycles. The model was calibrated using a full-map measurement from a turbocharged natural gas SI engine with swirl intake ports. Sensitivity studies on different calibration methods and laminar flame speed sub-models were conducted. The validation process for both the calibration and sensitivity studies considered the in-cylinder pressure traces and burn rates for several engine operating points, achieving good overall results.

  17. Initiative-taking, Improvisational Capability and Business Model Innovation in Emerging Market

    DEFF Research Database (Denmark)

    Cao, Yangfeng

    Business model innovation plays a very important role in developing competitive advantage when multinational small and medium-sized enterprises (SMEs) from developed countries enter emerging markets, because of the large contextual distances or gaps between the emerging and developed economies. Much prior research has shown that foreign subsidiaries play an important role in shaping the overall strategy of the parent company. However, little is known about how a subsidiary specifically facilitates business model innovation (BMI) in emerging markets. Adopting the method of comparative and longitudinal case study, we tracked the BMI processes of four SMEs from Denmark operating in China. Using the resource-based view (RBV), we develop a theoretical framework which indicates that the initiative-taking and improvisational capability of the subsidiary are the two primary facilitators of business model innovation in emerging markets. We find that high initiative-taking and strong improvisational capability can accelerate business model innovation. Our research contributes to the literature on international and strategic entrepreneurship.

  19. Expanded rock blast modeling capabilities of DMC_BLAST, including buffer blasting

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S. [Sandia National Labs., Albuquerque, NM (United States); Tidman, J.P.; Chung, S.H. [ICI Explosives (Canada)

    1996-12-31

    A discrete element computer program named DMC_BLAST (Distinct Motion Code) has been under development since 1987 for modeling rock blasting. This program employs explicit time integration and uses spherical or cylindrical elements that are represented as circles in 2-D. DMC_BLAST calculations compare favorably with data from actual bench blasts. The blast modeling capabilities of DMC_BLAST have been expanded to include independently dipping geologic layers, top surface, bottom surface and pit floor. The pit can also now be defined using coordinates based on the toe of the bench. A method for modeling decked explosives has been developed which allows accurate treatment of the inert materials (stemming) in the explosive column and approximate treatment of different explosives in the same blasthole. A DMC_BLAST user can specify decking through a specific geologic layer with either inert material or a different explosive. Another new feature of DMC_BLAST is specification of an uplift angle, which is the angle between the normal to the blasthole and a vector defining the direction of explosive loading on particles adjacent to the blasthole. A buffer (choke) blast capability has been added for situations where previously blasted material is adjacent to the free face of the bench, preventing any significant lateral motion during the blast.

  20. Entry into new markets: the development of the business model and dynamic capabilities

    Directory of Open Access Journals (Sweden)

    Victor Wolowski Kenski

    2017-12-01

    Full Text Available This work shows the path through which companies enter new markets or bring new propositions to established ones. It presents the market analysis process, the strategic decisions that determine the company's position in the market, and the changes in configuration required for this new action. It also studies the process of selecting the business model and the conditions for its definition, as well as the adoption and subsequent development of the resources and capabilities required to conquer the new market. The conditions necessary to remain in the market and maintain its position are also presented. These concepts are illustrated through a case study of a business group that takes part in different franchises.

  1. Application of data assimilation to improve the forecasting capability of an atmospheric dispersion model for a radioactive plume

    International Nuclear Information System (INIS)

    Jeong, H.J.; Han, M.H.; Hwang, W.T.; Kim, E.H.

    2008-01-01

    Modeling the atmospheric dispersion of a radioactive plume plays an influential role in assessing the environmental impacts caused by nuclear accidents. This study investigates the performance of data assimilation techniques that combine Gaussian model outputs with measurements to improve forecasting ability. Tracer dispersion experiments were performed to produce field data under an assumed radiological emergency. An adaptive neuro-fuzzy inference system (ANFIS) and a linear regression filter are considered for assimilating the Gaussian model outputs with measurements. ANFIS is trained so that the model outputs become more accurate for the experimental data; the linear regression filter is designed to assimilate measurements in a similar fashion. Judging from the higher correlation coefficients between the measured and assimilated values, ANFIS proved more appropriate than the linear regression filter for improving the forecasting capability of an atmospheric dispersion model in a radiological emergency. This kind of data assimilation method could support a decision-making system when choosing the best available countermeasures for public health from among emergency preparedness alternatives
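    The linear regression filter component lends itself to a compact illustration: fit a linear map from the plume-model output to the measurements and use the mapped values as the assimilated estimate. The data below are synthetic (the study used tracer-experiment observations) and the bias parameters are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
model_out = rng.uniform(1.0, 10.0, size=50)        # Gaussian-plume concentrations
truth = 0.7 * model_out + 1.5                      # assumed systematic model bias
measured = truth + rng.normal(0.0, 0.2, size=50)   # noisy field measurements

# Linear regression filter: least-squares slope/intercept mapping model -> measurement.
A = np.column_stack([model_out, np.ones_like(model_out)])
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
assimilated = A @ coef

rmse_raw = float(np.sqrt(np.mean((model_out - measured) ** 2)))
rmse_fit = float(np.sqrt(np.mean((assimilated - measured) ** 2)))
```

    Because the filter absorbs the systematic bias, the assimilated RMSE drops to roughly the measurement noise level, which is the improvement such filters deliver when the model error is mostly linear.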

  2. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.

  3. Functional capabilities of the breadboard model of SIDRA satellite-borne instrument

    International Nuclear Information System (INIS)

    Dudnik, O.V.; Kurbatov, E.V.; Titov, K.G.; Prieto, M.; Sanchez, S.; Sylwester, J.; Gburek, S.; Podgorski, P.

    2013-01-01

    This paper presents the structure, principles of operation and functional capabilities of the breadboard model of the SIDRA compact satellite-borne instrument. SIDRA is intended for monitoring fluxes of high-energy charged particles under outer-space conditions. We present the reasons for developing a particle spectrometer and list the main objectives to be achieved with the help of this instrument. The paper describes the major specifications of the analog and digital signal processing units of the breadboard model. A specially designed and developed data processing module based on the Actel ProASIC3E A3PE3000 FPGA is presented and compared with the all-in-one digital signal processing board based on the Xilinx Spartan-3 XC3S1500 FPGA.

  4. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic, complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible; miniaturizing world phenomena within the framework of a model in order to simulate them is therefore a reasonable, scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emergent patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify such models with conventional validation methods. It is therefore necessary to find appropriate validation techniques for ABM. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  5. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    Science.gov (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national datasets, models and uncertainty analysis into cloud computing environments to explore and benchmark current predictive capability for national-scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, and on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogeneous or quasi-homogeneous functional behaviour in terms of process controls on N and P cycling
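    The export-coefficient approach underlying the adapted AERC model is simple enough to state in a few lines: annual nutrient flux is the sum over land-use classes of area times an empirically derived export coefficient. The coefficients and areas below are illustrative placeholders, not values from Johnes et al. (2007).

```python
# Export-coefficient nutrient flux: flux [kg/yr] = sum(area_i [ha] * E_i [kg/ha/yr]).
def nutrient_flux(land_use, coefficients):
    return sum(area * coefficients[lu] for lu, area in land_use.items())

export_n = {"arable": 20.0, "grassland": 10.0, "woodland": 2.0, "urban": 5.0}          # kg N/ha/yr
catchment = {"arable": 1200.0, "grassland": 800.0, "woodland": 300.0, "urban": 150.0}  # ha
total_n = nutrient_flux(catchment, export_n)   # 33350.0 kg N/yr for this catchment
```

    Transferring such a model from data-rich to data-poor regions then amounts to reusing regionally calibrated coefficient sets with locally available land-use areas.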

  6. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask, molds made of plaster of Paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold, the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. 
The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  7. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    Science.gov (United States)

    Day, B. H.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R. M.; Malhotra, S.; Sadaqathullah, S.

    2015-12-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. Many of the recent enhancements to LMMP have been specifically in response to the requirements of NASA's proposed Resource Prospector lunar rover, and as such, provide an excellent example of the application of LMMP to mission planning. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. On March 31, 2015, the LMMP team released Vesta Trek (http://vestatrek.jpl.nasa.gov), a web-based application applying LMMP technology to visualizations of the asteroid Vesta. Data gathered from multiple instruments aboard Dawn have been compiled into Vesta Trek's user-friendly set of tools, enabling users to study the asteroid's features. With an initial release on July 1, 2015, Mars Trek replicates the functionality of Vesta Trek for the surface of Mars. While the entire surface of Mars is covered, higher levels of resolution and greater numbers of data products are provided for special areas of interest. Early releases focus on past, current, and future robotic sites of operation. Future releases will add many new data products and analysis tools as Mars Trek has been selected for use in site selection for the Mars 2020 rover and in identifying potential human landing sites on Mars. Other destinations will follow soon. 
The user community is invited to provide suggestions and requests as the development team continues to expand the capabilities of LMMP

  8. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Full Text Available Building Information Modeling (BIM has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still disseminated and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. Solutions that are proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help in implementation are identified and can be undertaken according to the previous maturity evaluation of an organization. The result of this research consists of a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.
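    The model's operating principle (a maturity assessment exposes weaknesses, and each weak critical success factor maps to concrete implementation activities) can be encoded as a simple lookup. The factor names, activities and 1-5 scoring scale below are invented for illustration and are not the study's actual model content.

```python
# Hypothetical mapping from critical success factors to implementation activities.
SUCCESS_FACTOR_ACTIONS = {
    "management_support": ["appoint a BIM champion", "define a BIM execution plan"],
    "staff_training": ["run pilot-project workshops", "fund modelling certification"],
    "data_standards": ["adopt IFC exchange formats", "write object-naming conventions"],
}

def recommend(maturity_scores, threshold=3):
    """Return activities for every success factor scored below threshold (scale 1-5)."""
    return {factor: SUCCESS_FACTOR_ACTIONS[factor]
            for factor, score in maturity_scores.items() if score < threshold}

actions = recommend({"management_support": 4, "staff_training": 2, "data_standards": 1})
# Only the two factors scored below the threshold trigger recommended activities.
```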

  9. Overview of the development of a biosphere modelling capability for UK DoE (HMIP)

    International Nuclear Information System (INIS)

    Nancarrow, D.J.; Ashton, J.; Little, R.H.

    1990-01-01

    A programme of research has been funded, since 1982, by the United Kingdom Department of the Environment (Her Majesty's Inspectorate of Pollution, HMIP), to develop a procedure for post-closure radiological assessment of underground disposal facilities for low and intermediate level radioactive wastes. It is conventional to regard the disposal system as comprising the engineered barriers of the repository, the geological setting which provides natural barriers to migration, and the surface environment or biosphere. The requirement of a biosphere submodel, therefore, is to provide estimates, for given radionuclide inputs, of the dose or probability distribution function of dose to a maximally exposed individual as a function of time. This paper describes the development of the capability for biosphere modelling for HMIP in the context of the development of other assessment procedures. 11 refs., 3 figs., 2 tabs

  10. Management Innovation Capabilities

    DEFF Research Database (Denmark)

    Harder, Mie

    Management innovation is the implementation of a new management practice, process, technique or structure that significantly alters the way the work of management is performed. This paper presents a typology categorizing management innovation along two dimensions: radicalness and complexity. The paper then introduces the concept of management innovation capabilities, which refers to the ability of a firm to purposefully create, extend and modify its managerial resource base to address rapidly changing environments. Drawing upon the behavioral theory of the firm and the dynamic capabilities framework, the paper proposes a model of the foundations of management innovation. Propositions and implications for future research are discussed.

  11. Capabilities of stochastic rainfall models as data providers for urban hydrology

    Science.gov (United States)

    Haberlandt, Uwe

    2017-04-01

    For the planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised to evaluate model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany, and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models: good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the flood-protection and pollution-reduction tasks, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G
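    The parametric model class evaluated here can be caricatured as alternating dry and wet spells with random durations and intensities on a 5-minute grid. The sketch below uses exponential spell lengths and intensities with invented parameter values; it is not the model of Haberlandt et al. (2008), only an illustration of how such generators produce long continuous series.

```python
import numpy as np

# Toy alternating-renewal rainfall generator on a 5-minute grid
# (all parameters illustrative).
def generate_rainfall(n_steps, p_wet=0.05, mean_wet_len=6, mean_intensity=0.4, seed=0):
    rng = np.random.default_rng(seed)
    series = np.zeros(n_steps)
    t = 0
    while t < n_steps:
        if rng.random() < p_wet:                       # start of a wet spell
            wet = max(1, int(rng.exponential(mean_wet_len)))
            n = min(wet, n_steps - t)
            series[t:t + n] = rng.exponential(mean_intensity, size=n)  # mm / 5 min
            t += wet
        else:                                          # dry step
            t += 1
    return series

rain = generate_rainfall(105120)        # one year of 5-minute values
wet_fraction = float((rain > 0).mean())
```

    Synthetic series like this can be generated at arbitrary length, which is exactly what makes stochastic generators attractive when observed records are too short for continuous drainage simulation.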

  12. Development of an operational waterborne weaponized chemical agent transport modeling capability

    International Nuclear Information System (INIS)

    Ward, M.C.; Cragan, J.A.; Mueller, C.

    2009-01-01

    The fate of chemical warfare agents (CWAs) in aqueous environments is not well characterized. Limited physical and kinetic data are available for these chemicals in the open literature, partly due to their inherent lethality. As a result, the development of methods for determining the persistence and extent of impact of waterborne chemical agent releases is a significant challenge. In this study, a hydrolysis model was developed to track the fate of several critical CWAs. VX, sarin, soman, tabun, and cyclosarin modeling capabilities were developed for an instantaneous point-source aqueous release. Hydrolysis products were tracked and the resulting change in pH was calculated for the local dispersive environment. Using these data, instantaneous hydrolysis rates were calculated. This framework was applied to assess the persistence and fate of the CWAs in different turbulent environments. From this hydrolysis model, estimates of the time and extent of lethality from an aqueous release can be made. Refinement of these estimates requires further investigation into the impact of potential catalysts on these chemicals. Enhanced understanding of equivalent acute percutaneous toxicity for solutions requires changes to current testing and estimation methods.(author)
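    The core bookkeeping of such a hydrolysis model can be sketched with first-order kinetics: the agent decays with a rate constant derived from its half-life, and the accumulated acidic product shifts the local pH. The half-life value, the one-mole-of-strong-acid-per-mole-hydrolysed assumption and the neglect of buffering are placeholders, not measured CWA chemistry.

```python
import math

# Illustrative first-order hydrolysis of a dissolved agent (parameters hypothetical).
def hydrolyse(c0_molar, half_life_s, t_s):
    k = math.log(2) / half_life_s        # first-order rate constant [1/s]
    c = c0_molar * math.exp(-k * t_s)    # agent concentration remaining [M]
    acid = c0_molar - c                  # acidic hydrolysis product [M]
    ph = -math.log10(max(acid, 1e-7))    # crude pH estimate, ignoring buffering
    return c, ph

c, ph = hydrolyse(c0_molar=1e-3, half_life_s=3600.0, t_s=7200.0)  # two half-lives
# A quarter of the agent remains; the released acid drives the pH down.
```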

  13. Research Capabilities Directed to all Electric Engineering Teachers, from an Alternative Energy Model

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Ordóñez Navea

    2017-08-01

    Full Text Available The purpose of this work was to contemplate research capabilities directed to all electric engineering teachers from an alternative energy model intro the explanation of a semiconductor in the National Training Program in Electricity. Some authors, such as. Vidal (2016, Atencio (2014 y Camilo (2012 point out to technological applications with semiconductor electrical devices. In this way; a diagnostic phase is presented, held on this field research as a descriptive type about: a how to identify the necessities of alternative energies, and b The research competences in the alternatives energies of researcher from a solar cell model, to boost and innovate the academic praxis and technologic ingenuity. Themselves was applied a survey for a group of 15 teachers in the National Program of Formation in electricity to diagnose the deficiencies in the research area of alternatives energies. The process of data analysis was carried out through descriptive statistic. Later the conclusions are presented the need to generate strategies for stimulate and propose exploration of alternatives energies to the development of research competences directed to the teachers of electrical engineering for develop the research competences in the enforcement of the teachers exercise for the electric engineering, from an alternative energy model and boost the technologic research in the renewal energies field.

  14. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp Vector Quantization ...

  16. Advanced applications of numerical modelling techniques for clay extruder design

    Science.gov (United States)

    Kandasamy, Saravanakumar

    Ceramic materials play a vital role in our day to day life. Recent advances in research, manufacture and processing techniques and production methodologies have broadened the scope of ceramic products such as bricks, pipes and tiles, especially in the construction industry. These are mainly manufactured using an extrusion process in auger extruders. During their long history of application in the ceramic industry, most of the design developments of extruder systems have resulted from expensive laboratory-based experimental work and field-based trial-and-error runs. In spite of these design developments, auger extruders continue to be energy-intensive devices with high operating costs. Limited understanding of the physical processes involved and the cost and time requirements of lab-based experiments have been the major obstacles to further development of auger extruders. An attempt has been made herein to use Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) based numerical modelling techniques to reduce the costs and time associated with research into design improvement by experimental trials. These two techniques, although used widely in other engineering applications, have rarely been applied to auger extruder development. This had been due to a number of reasons, including technical limitations of previously available CFD tools. Modern CFD and FEA software packages have much-enhanced capabilities and allow the modelling of the flow of complex fluids such as clay. This research work presents a methodology using a Herschel-Bulkley fluid-flow-based CFD model to simulate and assess the flow of a clay-water mixture through the extruder and the die of a vacuum de-airing type clay extrusion unit used in ceramic extrusion. The extruder design and operating parameters were varied to study their influence on power consumption and extrusion pressure. The model results were then validated using results from
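    The Herschel-Bulkley law used to represent the clay-water paste is τ = τ₀ + K·γ̇ⁿ: a yield stress τ₀ below which the material does not flow, plus a shear-thinning power-law term (n < 1). The parameter values below are illustrative, not the ones fitted in this work.

```python
# Herschel-Bulkley constitutive law: tau = tau0 + K * gamma_dot**n.
def hb_stress(gamma_dot, tau0=200.0, K=50.0, n=0.4):
    """Shear stress [Pa] at shear rate gamma_dot [1/s] once the material has yielded."""
    return tau0 + K * gamma_dot ** n

# Stress grows sublinearly with shear rate: shear-thinning behaviour (n < 1).
stresses = [hb_stress(g) for g in (0.1, 1.0, 10.0, 100.0)]
```

    In a CFD solver the same law enters through an apparent viscosity τ/γ̇, usually regularised near zero shear rate so the unyielded regions remain numerically tractable.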

  17. The Global Modeling Test Bed - Building a New National Capability for Advancing Operational Global Modeling in the United States.

    Science.gov (United States)

    Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.

    2017-12-01

    NOAA develops, operates, and maintains an operational global modeling capability for weather, sub-seasonal and seasonal prediction for the protection of life and property and fostering the US economy. In order to substantially improve the overall performance and accelerate advancements of the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to this advancement, primarily through the development of physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open, evidence-based governance process for managing the development and evolution of the CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-for-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and the test platform. Additionally, an overview of new opportunities for physics developers to engage in the process, from implementing code for CCPP/IPD compliance to testing their development within an operational-like software environment, will be presented. Insight will also be given as to how development gets elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.

  18. A JOINT VENTURE MODEL FOR ASSESSMENT OF PARTNER CAPABILITIES: THE CASE OF ESKOM ENTERPRISES AND THE AFRICAN POWER SECTOR

    Directory of Open Access Journals (Sweden)

    Y.V. Soni

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This article investigates the concept of joint ventures in the international energy sector and develops a joint venture model, as a business development and assessment tool. The joint venture model presents a systematic method that relies on modern business intelligence to assess a potential business venture by using a balanced score card technique to screen potential partners, based on their technological and financial core capabilities. The model can be used by business development managers to harness the potential of joint ventures to create economic growth and sustainable business expansion. Furthermore, partnerships with local companies can help to mitigate econo-political risk, and facilitate buy-in from the national governments that are normally the primary stakeholders in the energy sector ventures (directly or indirectly. The particular case of Eskom Enterprises (Pty Ltd, a wholly owned subsidiary of Eskom, is highlighted.

  19. The influence of ligament modelling strategies on the predictive capability of finite element models of the human knee joint.

    Science.gov (United States)

    Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico

    2017-12-08

    In finite element (FE) models, knee ligaments can be represented either by a group of one-dimensional springs or by three-dimensional continuum elements based on segmentations. Continuum models approximate the anatomy more closely and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on literature or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint. The effect of literature-based versus specimen-specific optimized material parameters was evaluated. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either using springs or using continuum representations. In the spring representation, collateral ligaments were each modelled with three single-element bundles and cruciate ligaments with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. Using a continuum modelling approach resulted in more accurate contact outcome variables than the spring representation with two (cruciate ligaments) and three (collateral ligaments) single-element bundles. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
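
    A spring-ligament bundle of the kind described above is commonly written as a piecewise force-strain law with a quadratic "toe" region, and the optimized pre-strain enters through the bundle's slack length. The sketch below is a generic illustration of that idea, not the authors' implementation; the stiffness k, toe strain eps_l, and pre-strain values are assumed.

```python
# Generic sketch of a one-dimensional spring-ligament bundle: slack under
# compression, a quadratic "toe" region up to twice the toe strain eps_l,
# then linear. In the study such parameters are optimized against laxity
# tests; here they are invented.

def bundle_strain(length, ref_length, pre_strain=0.0):
    """Strain of a bundle whose slack length is the reference length
    shortened by an optimized pre-strain."""
    slack = ref_length / (1.0 + pre_strain)
    return (length - slack) / slack

def ligament_force(strain, k=40.0, eps_l=0.03):
    """Piecewise force law: zero when slack, quadratic toe, then linear."""
    if strain <= 0.0:
        return 0.0
    if strain <= 2.0 * eps_l:
        return 0.25 * k * strain ** 2 / eps_l
    return k * (strain - eps_l)

# A pre-strained bundle carries force even at its reference length:
print(ligament_force(bundle_strain(1.0, 1.0, pre_strain=0.02)))
```

    The quadratic branch is scaled so the force is continuous where the toe region meets the linear region, matching the usual piecewise formulation.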

  20. Enhancing Interoperability and Capabilities of Earth Science Data using the Observations Data Model 2 (ODM2

    Directory of Open Access Journals (Sweden)

    Leslie Hsu

    2017-02-01

    Full Text Available Earth Science researchers require access to integrated, cross-disciplinary data in order to answer critical research questions. Partially due to these science drivers, it is common for disciplinary data systems to expand from their original scope in order to accommodate collaborative research. The result is multiple disparate databases with overlapping but incompatible data. In order to enable more complete data integration and analysis, the Observations Data Model Version 2 (ODM2) was developed as a general information model, with one of its major goals being to integrate data collected by 'in situ' sensors with those from 'ex situ' analyses of field specimens. Four use cases with different science drivers and disciplines have adopted ODM2 because of benefits to their users. The disciplines behind the four cases are diverse – hydrology, rock geochemistry, soil geochemistry, and biogeochemistry. For each case, we outline the benefits, challenges, and rationale for adopting ODM2. In each case, the decision to implement ODM2 was made to increase interoperability and expand data and metadata capabilities. One of the common benefits was the ability to use the flexible handling and comprehensive description of specimens and data collection sites in ODM2's sampling feature concept. We also summarize best practices for implementing ODM2 based on the experience of these initial adopters. The descriptions here should help other potential adopters of ODM2 implement their own instances or modify ODM2 to suit their needs.

  1. Capability of Spaceborne Hyperspectral EnMAP Mission for Mapping Fractional Cover for Soil Erosion Modeling

    Directory of Open Access Journals (Sweden)

    Sarah Malec

    2015-09-01

    Full Text Available Soil erosion can be linked to the relative fractional cover of photosynthetically active vegetation (PV), non-photosynthetically active vegetation (NPV), and bare soil (BS), which can be integrated into erosion models as the cover-management C-factor. This study investigates the capability of EnMAP imagery to map fractional cover in a region near San Jose, Costa Rica, characterized by spatially extensive coffee plantations and grazing in mountainous terrain. Simulated EnMAP imagery is based on airborne hyperspectral HyMap data. Fractional cover estimates are derived in an automated fashion by extracting image endmembers to be used with a Multiple Endmember Spectral Mixture Analysis approach. The C-factor is calculated based on the fractional cover estimates determined independently for EnMAP and HyMap. Results demonstrate that with EnMAP imagery it is possible to extract quality endmember classes with important spectral features related to PV, NPV, and soil, and to estimate relative cover fractions. This spectral information is critical for separating BS and NPV, which can greatly impact the C-factor derivation. From a regional perspective, EnMAP can provide good fractional cover estimates that can be integrated into soil erosion modeling.
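
    The core of such a spectral-mixture analysis can be sketched as a linear unmixing step: each pixel spectrum is modelled as a combination of endmember spectra, and the fractions are recovered by least squares. The numbers below are synthetic stand-ins, not EnMAP data or the study's endmember library.

```python
import numpy as np

# Linear spectral-mixture sketch with synthetic numbers: a pixel spectrum is
# modelled as a nonnegative combination of PV, NPV and bare-soil endmember
# spectra, and the fractions are recovered by least squares, clipped, and
# renormalized to sum to one.

endmembers = np.array([   # rows: 4 hypothetical bands; columns: PV, NPV, BS
    [0.05, 0.30, 0.25],
    [0.45, 0.35, 0.30],
    [0.30, 0.45, 0.40],
    [0.10, 0.50, 0.55],
])

true_frac = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_frac            # noise-free synthetic pixel

frac, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
frac = np.clip(frac, 0.0, None)
frac = frac / frac.sum()
print(frac)                               # recovers the true fractions
```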

  2. Multi-phase model development to assess RCIC system capabilities under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kirkland, Karen Vierow [Texas A&M Univ., College Station, TX (United States); Ross, Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beeny, Bradley [Texas A&M Univ., College Station, TX (United States); Luthman, Nicholas [Texas A&M Engineering Experiment Station, College Station, TX (United States); Strater, Zachary [Texas A&M Univ., College Station, TX (United States)

    2017-12-23

    The Reactor Core Isolation Cooling (RCIC) System is a safety-related system that provides makeup water for core cooling of some Boiling Water Reactors (BWRs) with a Mark I containment. The RCIC System consists of a steam-driven Terry turbine that powers a centrifugal, multi-stage pump for providing water to the reactor pressure vessel. The Fukushima Dai-ichi accidents demonstrated that the RCIC System can play an important role under accident conditions in removing core decay heat. The unexpectedly sustained, good performance of the RCIC System in the Fukushima reactors demonstrates, first, that its capabilities are not well understood and, second, that the system has high potential for extended core cooling in accident scenarios. Better understanding and analysis tools would allow for more options to cope with a severe accident situation and to reduce its consequences. The objectives of this project were to develop physics-based models of the RCIC System, incorporate them into a multi-phase code, and validate the models. This Final Technical Report details the progress and accomplishments throughout the project duration.

  3. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    Science.gov (United States)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is currently widely used in three-dimensional (3D) modeling of physical reality with its developing visualization tools. Modeling large and complicated phenomena is a challenging problem for the computer graphics currently in use; however, it is possible to visualize such phenomena in 3D by using computer systems. 3D models are used in developing computer games, military training, urban planning, tourism, and more. The use of 3D models for planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes. However, the requirements of the models vary depending on the type and scope of the application. While high-level visualization, where photorealistic visualization techniques are widely used, is required for touristic and recreational purposes, an abstract visualization of the physical reality is generally sufficient for the communication of thematic information. The visual variables that are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information. These kinds of 3D city models are called abstract models. Standardization of the technologies used for 3D modeling is now available through the use of CityGML. CityGML implements several novel concepts to support interoperability, consistency, and functionality. For example, it supports different Levels-of-Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented in different LoD simultaneously, enabling the analysis and visualization of the same object with regard to different degrees of resolution. Furthermore, two CityGML data sets

  4. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties, and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use, commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation, and uncertainty quantification, which are described in detail in the following sections. Here, we introduce the subject matter for general applications, with specifics given for the failure prediction project. The first task that must be completed in the verification and validation procedure is a credibility assessment, to fully understand the requirements and limitations of the current computational simulation capability for the specific intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner of performing such an assessment. Ideally, all stakeholders should be represented and contribute to an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below, and the resulting assessments for an example project are given.

  5. Meeting Capability Goals through Effective Modelling and Experimentation of C4ISTAR Options

    Science.gov (United States)

    2011-06-01

    connection, management and visualisation capability provided by Salamander's MooD® software [13]. MooD has been chosen for this central role as it offers … suggested the use of capability 'bullseye charts' as the visualisation tool, using different colours to indicate the level of capability available at … based information environment (using Salamander's MooD technology), containing all of the visualisations delivered by the project and the linkages

  6. Capability Paternalism

    NARCIS (Netherlands)

    Claassen, R.J.G.

    A capability approach prescribes paternalist government actions to the extent that it requires the promotion of specific functionings instead of the corresponding capabilities. Capability theorists have argued that their theories do not have many of these paternalist implications, since promoting

  7. Petroleum system modeling capabilities for use in oil and gas resource assessments

    Science.gov (United States)

    Higley, Debra K.; Lewan, Michael; Roberts, Laura N.R.; Henry, Mitchell E.

    2006-01-01

    Summary: Petroleum resource assessments are among the most highly visible and frequently cited scientific products of the U.S. Geological Survey. The assessments integrate diverse and extensive information on the geologic, geochemical, and petroleum production histories of provinces and regions of the United States and the World. Petroleum systems modeling incorporates these geoscience data in ways that strengthen the assessment process and results are presented visually and numerically. The purpose of this report is to outline the requirements, advantages, and limitations of one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) petroleum systems modeling that can be applied to the assessment of oil and gas resources. Primary focus is on the application of the Integrated Exploration Systems (IES) PetroMod software because of familiarity with that program as well as the emphasis by the USGS Energy Program on standardizing to one modeling application. The Western Canada Sedimentary Basin (WCSB) is used to demonstrate the use of the PetroMod software. Petroleum systems modeling quantitatively extends the 'total petroleum systems' (TPS) concept (Magoon and Dow, 1994; Magoon and Schmoker, 2000) that is employed in USGS resource assessments. Modeling allows integration of state-of-the-art analysis techniques, and provides the means to test and refine understanding of oil and gas generation, migration, and accumulation. Results of modeling are presented visually, numerically, and statistically, which enhances interpretation of the processes that affect TPSs through time.
Modeling also provides a framework for the input and processing of many kinds of data essential in resource assessment, including (1) petroleum system elements such as reservoir, seal, and source rock intervals; (2) timing of depositional, hiatus, and erosional events and their influences on petroleum systems; (3) incorporation of vertical and lateral distribution and lithologies of

  8. Large wind power plants modeling techniques for power system simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Larose, Christian; Gagnon, Richard; Turmel, Gilbert; Giroux, Pierre; Brochu, Jacques [IREQ Hydro-Quebec Research Institute, Varennes, QC (Canada); McNabb, Danielle; Lefebvre, Daniel [Hydro-Quebec TransEnergie, Montreal, QC (Canada)

    2009-07-01

    This paper presents efficient modeling techniques for the simulation of large wind power plants in the EMT domain using a parallel supercomputer. Using these techniques, large wind power plants can be simulated in detail, with each wind turbine individually represented, as well as the collector and receiving network. The simulation speed of the resulting models is fast enough to perform both EMT and transient stability studies. The techniques are applied to develop a detailed EMT model of a generic wind power plant consisting of 73 1.5-MW doubly-fed induction generator (DFIG) wind turbines. Validation of the modeling techniques is presented using a comparison with a Matlab/SimPowerSystems simulation. To demonstrate the simulation capabilities of these modeling techniques, simulations involving a 120-bus receiving network with two generic wind power plants (146 wind turbines) are performed. The complete system is modeled using the Hypersim simulator and Matlab/SimPowerSystems. The simulations are performed on a 32-processor supercomputer using an EMTP-like solution with a time step of 18.4 µs. The simulation performance is 10 times slower than real time, which is a huge gain in performance compared to traditional tools. The simulation is designed to run continuously, so it never stops, resulting in a capability to perform thousands of tests via automatic testing tools. (orig.)

  9. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  10. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    de Haan, G.; de Haan, G.; van der Veer, Gerrit C.; van Vliet, J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in

  11. Selective pressurized liquid extraction technique capable of analyzing dioxins, furans, and PCBs in clams and crab tissue.

    Science.gov (United States)

    Subedi, Bikram; Aguilar, Lissette; Williams, E Spencer; Brooks, Bryan W; Usenko, Sascha

    2014-04-01

    A selective pressurized liquid extraction (SPLE) technique was developed for the analysis of polychlorodibenzo-p-dioxins and polychlorodibenzofurans (PCDD/Fs) and dioxin-like polychlorobiphenyls (dl-PCBs) in clam and crab tissue. The SPLE incorporated multiple cleanup adsorbents (alumina, florisil, silica gel, celite, and carbopack) within the extraction cell. Tissue extracts were analyzed by high-resolution gas chromatography coupled with electron capture negative ionization mass spectrometry. Mean recovery (n = 3) and percent relative standard deviation for PCDD/Fs and dl-PCBs in clams and crabs were 89 ± 2.3% and 85 ± 4.0%, respectively. The SPLE method was applied to clams and crabs collected from the San Jacinto River Waste Pits, a Superfund site in Houston, TX. The dl-PCB concentrations in clams and crabs ranged from 50 to 2,450 and 5 to 800 ng/g ww, respectively. Sample preparation time and solvent use were reduced by 92% and 65%, respectively, as compared to USEPA Method 1613.
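
    The recovery figures above are simple summary statistics; as a reminder of how such values are computed, the snippet below derives a mean recovery and percent relative standard deviation from hypothetical replicate values (not the study's data).

```python
import statistics

# Mean recovery and %RSD across n = 3 spiked replicates, using invented
# recovery percentages for illustration.

recoveries = [87.0, 89.0, 91.0]  # percent, hypothetical values
mean_rec = statistics.mean(recoveries)
rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
print(f"{mean_rec:.0f} +/- {rsd:.1f} %RSD")
```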

  12. Immune Modulating Capability of Two Exopolysaccharide-Producing Bifidobacterium Strains in a Wistar Rat Model

    Directory of Open Access Journals (Sweden)

    Nuria Salazar

    2014-01-01

    Full Text Available Fermented dairy products are the usual carriers for the delivery of probiotics to humans, Bifidobacterium and Lactobacillus being the most frequently used bacteria. In this work, the strains Bifidobacterium animalis subsp. lactis IPLA R1 and Bifidobacterium longum IPLA E44 were tested for their capability to modulate the immune response and insulin-dependent glucose homeostasis using male Wistar rats fed a standard diet. Three intervention groups were fed daily for 24 days with 10% skimmed milk, or with 10^9 cfu of the corresponding strain suspended in the same vehicle. A significant increase of the suppressor-regulatory TGF-β cytokine occurred with both strains in comparison with a control (no-intervention) group of rats; the highest levels were reached in rats fed IPLA R1. This strain presented an immune protective profile, as it was able to reduce the production of the proinflammatory IL-6. Moreover, phosphorylated Akt kinase decreased in the gastrocnemius muscle of rats fed the strain IPLA R1, without affecting the glucose, insulin, and HOMA index in blood, or levels of Glut-4 located in the membrane of muscle and adipose tissue cells. Therefore, the strain B. animalis subsp. lactis IPLA R1 is a probiotic candidate to be tested in mild-grade inflammation animal models.

  13. Aerodynamic modelling of a Cretaceous bird reveals thermal soaring capabilities during early avian evolution.

    Science.gov (United States)

    Serrano, Francisco José; Chiappe, Luis María

    2017-07-01

    Several flight modes are thought to have evolved during the early evolution of birds. Here, we use a combination of computational modelling and morphofunctional analyses to infer the flight properties of the raven-sized, Early Cretaceous bird Sapeornis chaoyangensis, a likely candidate to have evolved soaring capabilities. Specifically, drawing information from (i) mechanical inferences of the deltopectoral crest of the humerus, (ii) wing shape (i.e. aspect ratio), (iii) estimations of power margin (i.e. difference between power required for flight and available power from muscles), (iv) gliding behaviour (i.e. forward speed and sinking speed), and (v) palaeobiological evidence, we conclude that S. chaoyangensis was a thermal soarer with an ecology similar to that of living South American screamers. Our results indicate that as early as 125 Ma, some birds evolved the morphological and aerodynamic requirements for soaring on continental thermals, a conclusion that highlights the degree of ecological, functional and behavioural diversity that resulted from the first major evolutionary radiation of birds. © 2017 The Author(s).

  14. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria); Engel, Dominik [Salzburg Univ. (Austria); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2015-02-17

    The smart grid changes the way energy is produced and distributed, and both energy and information are exchanged bidirectionally among participating parties. Heterogeneous systems therefore have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed for achieving that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy; in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, where feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
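
    A minimal numeric sketch of such a balancing step, with invented scores, weights, and thresholds (not the paper's model), picks the configuration that maximizes a weighted sum of privacy and operational impact subject to minimum requirements on both:

```python
# Hypothetical balancing sketch: each candidate configuration has a privacy
# score and an operational score; the optimum maximizes their weighted sum
# among the candidates that meet both minimum requirements.

candidates = [
    # (label, privacy_score, operational_score), each on a 0..1 scale
    ("raw 15-min readings", 0.2, 1.0),
    ("hourly aggregation", 0.6, 0.8),
    ("daily aggregation", 0.9, 0.4),
]

def balance(cands, w_priv=0.5, min_priv=0.3, min_oper=0.3):
    """Return the feasible candidate with the best weighted score."""
    feasible = [c for c in cands if c[1] >= min_priv and c[2] >= min_oper]
    return max(feasible, key=lambda c: w_priv * c[1] + (1.0 - w_priv) * c[2])

best = balance(candidates)
print(best[0])  # with equal weights, hourly aggregation wins
```

    Raising the privacy weight shifts the optimum toward stronger aggregation, which is the trade-off the balancing algorithm formalizes.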

  15. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state-estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS), with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
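
    As a flavour of the feedback idea, the sketch below implements a complementary filter, a simpler cousin of the EKF-based AHRS fusion described above: the gyro propagates the angle, and the accelerometer's tilt estimate corrects its drift. Gains and signals are invented; a full EKF would additionally estimate sensor biases.

```python
# Complementary-filter sketch: blend the gyro-propagated angle (smooth but
# drifting) with the accelerometer tilt angle (noisy but drift-free).

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend the gyro-propagated angle with the accelerometer angle."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

pitch = 0.0  # rad; true pitch held at 0.1 rad, gyro reads zero rate
for _ in range(200):
    pitch = complementary_filter(pitch, gyro_rate=0.0, accel_pitch=0.1, dt=0.01)
print(round(pitch, 3))  # converges toward 0.1
```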

  16. A Grid Connected Transformerless Inverter and its Model Predictive Control Strategy with Leakage Current Elimination Capability

    Directory of Open Access Journals (Sweden)

    J. Fallah Ardashir

    2017-06-01

    Full Text Available This paper proposes a new single-phase transformerless photovoltaic (PV) inverter for grid-connected systems. It consists of six power switches, two diodes, one capacitor, and a filter at the output stage. The neutral of the grid is directly connected to the negative terminal of the source, resulting in a constant common-mode voltage and zero leakage current. A Model Predictive Control (MPC) technique is used to modulate the converter to reduce the output current ripple and filter requirements. The main advantages of this inverter are its compact size, low cost, and flexible grounding configuration. For brevity, the operating principle and analysis of the proposed circuit are presented only in outline. Simulation and experimental results of a 200 W prototype are shown at the end to validate the proposed topology and concept. The results obtained clearly verify the performance of the proposed inverter and its practical application for grid-connected PV systems.
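
    The selection principle behind finite-control-set MPC of such converters can be sketched in a few lines: simulate each admissible switch state one step ahead with a simple circuit model, and apply the state that minimizes the predicted tracking error. All component values and output levels below are invented for illustration; this is not the paper's converter model.

```python
# Finite-control-set MPC sketch with an idealized inductor model: at each
# step every admissible output level is simulated one step ahead, and the
# level whose predicted current is closest to the reference is applied.

L, dt, vgrid = 5e-3, 1e-5, 100.0   # inductance [H], step [s], grid voltage [V]
levels = (-350.0, 0.0, 350.0)      # assumed admissible inverter output levels

def predict(i_now, v_out):
    """One-step Euler prediction of the inductor current."""
    return i_now + dt * (v_out - vgrid) / L

def best_level(i_now, i_ref):
    """Pick the output level minimizing the predicted tracking error."""
    return min(levels, key=lambda v: abs(predict(i_now, v) - i_ref))

i = 0.0
for _ in range(500):               # regulate the current toward 5 A
    i = predict(i, best_level(i, i_ref=5.0))
print(round(i, 2))
```

    A cost function with extra terms (switching frequency, common-mode voltage) slots naturally into `best_level`, which is why MPC suits converters with such constraints.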

  17. Ambient temperature modelling with soft computing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); De Felice, Matteo [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); University of Rome ' ' Roma 3' ' , Dipartimento di Informatica e Automazione (DIA), Via della Vasca Navale 79, 00146 Rome (Italy)

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
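
    The hybrid idea, gradient-trained candidates seeding part of a GA population, can be illustrated with a deliberately simplified stand-in: gradient descent on a linear model playing the role of back-propagation on an ANN. All data and hyperparameters below are invented; this is not the paper's implementation.

```python
import random

# Simplified hybrid sketch: gradient descent (standing in for BP) trains a
# candidate solution, which seeds part of the genetic algorithm's initial
# population; the elitist GA then refines the whole population.

random.seed(1)
data = [(x / 10.0, 3.0 * x / 10.0 + 1.0) for x in range(-10, 11)]  # y = 3x + 1

def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def gradient_descent(steps=200, lr=0.05):
    """The 'BP' stage: plain gradient descent from zero initial weights."""
    w = b = 0.0
    for _ in range(steps):
        gw = sum(2.0 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2.0 * (w * x + b - y) for x, y in data) / len(data)
        w, b = w - lr * gw, b - lr * gb
    return w, b

def ga(population, generations=100, sigma=0.1):
    """Elitist GA: keep the better half, refill with mutated copies."""
    for _ in range(generations):
        population.sort(key=lambda ind: mse(*ind))
        parents = population[: len(population) // 2]
        children = [(w + random.gauss(0.0, sigma), b + random.gauss(0.0, sigma))
                    for w, b in parents]
        population = parents + children
    return min(population, key=lambda ind: mse(*ind))

seeded = [gradient_descent()] * 2          # gradient-initialised individuals
randoms = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(8)]
best = ga(seeded + randoms)
print(round(best[0], 2), round(best[1], 2))  # close to the true (3, 1)
```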

  18. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  19. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnels are discussed, with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail, together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  20. Advanced techniques for modeling avian nest survival

    Science.gov (United States)

    Dinsmore, S.J.; White, Gary C.; Knopf, F.L.

    2002-01-01

    Estimation of avian nest survival has traditionally involved simple measures of apparent nest survival or Mayfield constant-nest-survival models. However, these methods do not allow researchers to build models that rigorously assess the importance of a wide range of biological factors that affect nest survival. Models that incorporate greater detail, such as temporal variation in nest survival and covariates representative of individual nests, represent a substantial improvement over traditional estimation methods. In an attempt to improve nest survival estimation procedures, we introduce the nest survival model now available in the program MARK and demonstrate its use on a nesting study of Mountain Plovers (Charadrius montanus Townsend) in Montana, USA. We modeled the daily survival of Mountain Plover nests as a function of the sex of the incubating adult, nest age, year, linear and quadratic time trends, and two weather covariates (maximum daily temperature and daily precipitation) during a six-year study (1995–2000). We found no evidence for yearly differences or an effect of maximum daily temperature on the daily nest survival of Mountain Plovers. Survival rates of nests tended by female and male plovers differed (female rate = 0.33; male rate = 0.49). The estimate of the additive effect for males on nest survival rate was 0.37 (95% confidence limits were 0.03, 0.71) on a logit scale. Daily survival rates of nests increased with nest age; the estimate of daily nest-age change in survival in the best model was 0.06 (95% confidence limits were 0.04, 0.09) on a logit scale. Daily precipitation decreased the probability that the nest would survive to the next day; the estimate of the additive effect of daily precipitation on the nest survival rate was −1.08 (95% confidence limits were −2.12, −0.13) on a logit scale. Our approach to modeling daily nest-survival rates allowed several biological factors of interest to be easily included in nest survival models
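
    The logit-scale effects quoted above combine through the inverse-logit (logistic) transform. The snippet below works one such shift using a hypothetical baseline daily survival rate; the abstract's 0.33 and 0.49 values are period survival rates rather than daily rates, so they are not reproduced here.

```python
import math

# An additive effect on the logit of daily survival shifts a baseline rate
# through the inverse-logit transform. Baseline is hypothetical.

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

base = 0.95     # hypothetical baseline daily survival rate
effect = 0.37   # the reported additive male effect, on the logit scale
shifted = inv_logit(logit(base) + effect)
print(round(shifted, 3))
```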

  1. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out over the past two years concerning different ways of achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low-power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  2. Implementation of linguistic models by holographic technique

    Science.gov (United States)

    Pavlov, Alexander V.; Shevchenko, Yanina Y.

    2004-01-01

    In this paper we consider a linguistic model as an algebraic model and restrict our consideration to semantics only. The concept allows a "natural-like" language to be used by a human teacher to describe for the machine the way of solving a problem, based on human knowledge and experience. Imprecise words such as "big", "very big", "not very big", etc. can be used to represent human knowledge. Technically, the problem is to match the metric scale used by the technical device with the linguistic scale intuitively formed by the person. We develop an algebraic description of a 4-f Fourier-holography setup by using a triangular-norms-based approach. In the model we use the Fourier duality of the t-norms and t-conorms, which is implemented by the 4-f Fourier-holography setup. We demonstrate that the setup is described adequately by De Morgan's law for involution. Fourier duality of the t-norms and t-conorms leads to fuzzy-valued logic. We consider a General Modus Ponens rule implementation to define the semantic operators that are adequate to the setup. We consider scales formed in both the +1 and -1 orders of diffraction. We use representation of linguistic labels by fuzzy numbers to form the scale and discuss the dependence of the scale grading on the holographic recording medium operator. To implement reasoning with a multi-parametric input variable we use a Lorentz function to approximate linguistic labels. We use an example of medical diagnostics as an experimental illustration of reasoning on the linguistic scale.
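
The t-norm/t-conorm duality that the setup is claimed to implement can be checked numerically. A minimal sketch with the standard min/max (Gödel) pair and the strong negation N(a) = 1 − a, which together satisfy De Morgan's law for involution; the optical implementation itself is not modeled here.

```python
def t_norm(a, b):
    """Gödel (minimum) t-norm: fuzzy AND of membership grades in [0, 1]."""
    return min(a, b)

def t_conorm(a, b):
    """Gödel (maximum) t-conorm: fuzzy OR, the De Morgan dual of min."""
    return max(a, b)

def neg(a):
    """Strong negation: an involution, so neg(neg(a)) == a."""
    return 1.0 - a

def de_morgan_holds(a, b, tol=1e-12):
    """Check both De Morgan identities: N(a AND b) = N(a) OR N(b), etc."""
    return (abs(neg(t_norm(a, b)) - t_conorm(neg(a), neg(b))) < tol and
            abs(neg(t_conorm(a, b)) - t_norm(neg(a), neg(b))) < tol)
```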

  3. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in bone research, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45+/-8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were established by injecting tumor cells directly into the SD rat femur with a needle. In the first step of the experiment, 2x10(5) to 1x10(6) UMR106 cells in 50 microl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then stained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8x10(5) tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  4. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits…

  5. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index…

  6. Heavy Ion Fusion Science Virtual National Laboratory 1st Quarter FY08 Milestone Report: Initial Work on Developing Plasma Modeling Capability in WARP for NDCX Experiments

    International Nuclear Information System (INIS)

    Friedman, A.; Cohen, R.H.; Grote, D.P.; Vay, J.-L.

    2007-01-01

    This milestone has been accomplished. The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) has developed and implemented an initial beam-in-plasma implicit modeling capability in Warp; has carried out tests validating the behavior of the models employed; has compared the results of electrostatic and electromagnetic models when applied to beam expansion in an NDCX-I relevant regime; has compared Warp and LSP results on a problem relevant to NDCX-I; has modeled wave excitation by a rigid beam propagating through plasma; and has implemented and begun testing a more advanced implicit method that correctly captures electron drift motion even when timesteps too large to resolve the electron gyro-period are employed. The HIFS-VNL is well on its way toward having a state-of-the-art source-to-target simulation capability that will enable more effective support of ongoing experiments in the NDCX series and allow more confident planning for future ones.

  7. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  8. Re-framing Inclusive Education Through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    Directory of Open Access Journals (Sweden)

    Maryam Dalkilic

    2016-09-01

    Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational inclusion with Amartya Sen's (1985) Capability Approach. This integrated framework engages children, educators, and families in principled practices that acknowledge differences, rather than deficits, and enable attention to enhancing the capabilities of children with disabilities in inclusive educational environments. Implications include the development of policy that clarifies the process required to negotiate capabilities and valued functionings and the types of resources required to permit children, educators, and families to create relationally inclusive environments.

  9. Contact and Impact Dynamic Modeling Capabilities of LS-DYNA for Fluid-Structure Interaction Problems

    Science.gov (United States)

    2010-12-02

    …2003, providing a summary of the major theoretical, experimental and numerical accomplishments in the field. Melis and Khanh Bui (2003) studied the ALE capability to predict splashdown loads on a proposed replacement/upgrade of the hydrazine tanks on the thrust…

  10. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    Surrogate modelling and optimization techniques are intended for engineering design in the case where an expensive physical model is involved. This thesis provides a literature overview of the field of surrogate modelling and optimization. The space mapping technique is one such method… conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three…
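
The core space-mapping loop — optimize a cheap coarse model, then use parameter extraction to map the expensive fine model onto it — can be sketched on a toy 1-D problem. The two models, the bounds, and the fixed identity mapping Jacobian below are illustrative assumptions, not the thesis's test problems; practical algorithms estimate the mapping Jacobian, e.g. with Broyden updates.

```python
def coarse(z):
    """Cheap, monotone surrogate response (toy stand-in)."""
    return z ** 3 + z

def fine(x):
    """'Expensive' model: here secretly a distorted copy of the coarse one,
    so the true mapping between the spaces is affine."""
    return coarse(1.1 * x + 0.05)

def extract(x, lo=-10.0, hi=10.0):
    """Parameter extraction: bisect for the coarse parameter z whose
    response matches the fine response at x (valid since coarse is monotone)."""
    target = fine(x)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if coarse(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def space_mapping(x0, z_star, iters=10):
    """Aggressive space mapping with identity mapping Jacobian: step the
    fine-model design x so the extracted parameter lands on z_star."""
    x = x0
    for _ in range(iters):
        x = x - (extract(x) - z_star)
    return x
```

Here the design goal is the x whose fine response matches the coarse response at z* = 0.5; because the underlying mapping is affine, the iteration contracts rapidly.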

  11. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers have demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters, along with the stochastic nature of this technology, leads to complex process control, which requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling of Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of the process parameters. The combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.
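
Characterizing the kerf by just two parameters implies a shape assumption when reconstructing the full profile. A minimal sketch assuming a Gaussian cross-section; the Gaussian form and the numbers in the usage note are illustrative, not the paper's fitted profile family.

```python
import math

def kerf_depth(y, h_max, fwhm):
    """Kerf depth at lateral offset y from the jet centerline, assuming a
    Gaussian cross-section with maximum cutting depth h_max and full width
    at half maximum fwhm (both of which the process-parameter models supply)."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return h_max * math.exp(-0.5 * (y / sigma) ** 2)
```

By construction, the depth at a lateral offset of fwhm/2 is exactly half the maximum depth; e.g. with a hypothetical h_max = 1.2 mm and fwhm = 0.8 mm, the depth at y = 0.4 mm is 0.6 mm.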

  12. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts of 0.5 and above. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.
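
A common first-pass model for VPH grating performance is Kogelnik's coupled-wave approximation for the first-order efficiency of a thick transmission grating at the Bragg condition. This standard formula is offered as background only; the paper's own comparison covers more rigorous methods, and the sample values below are illustrative.

```python
import math

def kogelnik_efficiency(delta_n, thickness_um, wavelength_um, bragg_angle_deg):
    """First-order diffraction efficiency of a volume phase transmission
    grating at the Bragg condition (Kogelnik approximation, s-polarization).
    delta_n: index-modulation amplitude; angle measured inside the medium."""
    theta = math.radians(bragg_angle_deg)
    nu = math.pi * delta_n * thickness_um / (wavelength_um * math.cos(theta))
    return math.sin(nu) ** 2
```

Efficiency peaks at 1.0 when delta_n x thickness / (wavelength x cos theta) = 1/2, which is the usual starting point for choosing the film thickness and index modulation.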

  13. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Feature extraction involves extracting speaker-specific features from the speech signal at reduced data rate. The extracted features are further combined using modelling techniques to generate speaker models. The speaker models are then tested using the features extracted from the test speech signal. The improvement in ...

  14. A Model-Based Architecture Approach to Ship Design Linking Capability Needs to System Solutions

    Science.gov (United States)

    2012-06-01

    Summers are rainy and warm with frequent typhoons. Fog is very common along the coasts and the water depth is very shallow on average, approximately… India, Iraq, Kuwait, Libya, Oman, Pakistan, Peru, Qatar, Saudi Arabia, Singapore, South Africa, UAE, Venezuela. The threat aircraft used to deploy… system from performing this mission lies in the AAW mission area and sustained independent operation in littoral waters. The fictional capability gap…

  15. A Model for a Single Unmanned Aircraft Systems (UAS) Program Office Managing Joint ISR Capabilities

    Science.gov (United States)

    2017-10-01

    …to get new capability to the field. A single management structure provides a portfolio perspective and enables strategic management. Decisions… strategic management across all of the medium-to-high-altitude UAS portfolio, there will continue to be tension in achieving the joint nature of these… managing the medium-to-high-altitude UAS assets. This would be done by employing agile methodology at the strategic level and by eliminating redundant…

  16. Project AIR FORCE Modeling Capabilities for Support of Combat Operations in Denied Environments

    Science.gov (United States)

    2015-01-01

    …typical MINLP maximization algorithm would approach finding the tallest peak in a mountain range. The solver begins by generally randomly selecting… organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more… illustrate ROBOT's capabilities using two scenarios: a relatively complex, real-world scenario based on prior PAF research and the simpler, notional…

  17. HYDROïD humanoid robot head with perception and emotion capabilities :Modeling, Design and Experimental Results

    Directory of Open Access Journals (Sweden)

    Samer eAlfayad

    2016-04-01

    In the framework of the HYDROïD humanoid robot project, this paper describes the modeling and design of an electrically actuated head mechanism. Perception and emotion capabilities are considered in the design process. Since the HYDROïD humanoid robot is hydraulically actuated, the choice of electrical actuation for the head mechanism addressed in this paper is justified. Considering perception and emotion capabilities leads to a total of 15 degrees of freedom for the head mechanism, which are split across four main sub-mechanisms: the neck, the mouth, the eyes and the eyebrows. Biological data and kinematic performances of the human head are taken as inputs to the design process. A new solution of uncoupled eyes is developed to possibly address the master-slave process that links the human eyes, as well as vergence capabilities. Each sub-system is modeled in order to obtain its equations of motion, frequency responses and transfer functions. The neck pitch rotation is given as a study example. Then, the head mechanism performances are presented through a comparison between model and experimental results, validating the hardware capabilities. Finally, the head mechanism is integrated on the HYDROïD upper body. An object tracking experiment coupled with emotional expressions is carried out to validate the synchronization of the eye rotations with the body motions.

  18. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Soft computing techniques such as Artificial Neural Networks (ANN) have improved predicting capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The indicator of CBR for soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing techniques using artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test have been determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the models that were developed were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, an ANN model with all input parameters reveals better outcomes than the other ANN models.
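
The three goodness-of-fit measures used to rank the models can be computed directly from observed and predicted values; a minimal sketch (the toy numbers in the test are illustrative, not the paper's data):

```python
def fit_metrics(observed, predicted):
    """Return (R2, RE%, MSE): coefficient of determination, mean absolute
    relative error in percent, and mean square error."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    re_pct = 100.0 * sum(abs(o - p) / abs(o)
                         for o, p in zip(observed, predicted)) / n
    mse = ss_res / n
    return r2, re_pct, mse
```

A perfect model gives R2 = 1 with RE% and MSE both zero; competing models are then ranked by higher R2 and lower RE% and MSE, as done in the paper.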

  19. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main Geomatics approaches are used for generating virtual 3-D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, many researchers use terrestrial images, applying close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). Finally, we give the conclusions of this study, together with a short justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3-D city models using Geomatics techniques, and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3-D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for the virtual 3-D city model.
Photo-realistic, Scalable, Geo-referenced virtual 3

  20. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  1. The Airlift Capabilities Estimation Prototype: A Case Study in Model Validation

    Science.gov (United States)

    1993-03-01

    …"Airlift" 18-21 (Spring 1988). 2.4 Hilliard, Michael R., Rajendra S. Solanki, Cheng Liu, Ingrid K. Busch, Glen Harrison, and Ronald D. Kraemer… aircraft. Again, it is assumed that any non-preferred capability will be in cargo classes "bulk" or "passengers" only. … New York: McGraw-Hill Book Company, 1986.

  2. Capability ethics

    NARCIS (Netherlands)

    I.A.M. Robeyns (Ingrid)

    2012-01-01

    textabstractThe capability approach is one of the most recent additions to the landscape of normative theories in ethics and political philosophy. Yet in its present stage of development, the capability approach is not a full-blown normative theory, in contrast to utilitarianism, deontological

  3. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for treatment of short bowel syndrome. The modification to Bianchi's technique is an alternative. The modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report that has studied both techniques, so we carried out the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A) and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). Both groups were compared in relation to operating time, difficulties in technique, cost, intestinal lengthening and anastomosis diameter. There was no statistical difference in the anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technique difficulties were lower in group B (p …). The anastomoses of group B and the intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost and technical issues.

  4. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    Science.gov (United States)

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of the Geographic Information System (GIS) with groundwater modeling and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques that partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface, the hydraulic head gradient, and an estimate of the groundwater budget of the aquifer. GIS is used for spatial database development, integration with remote sensing, and numerical groundwater flow modeling capabilities. The thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software is used as an additional tool to develop supportive data for numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated and used to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response in the water table due to external influencing factors. The developed model provides an effective tool for evaluating better management options for monitoring future groundwater development in the study area.
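
The head-configuration step of such a flow model can be illustrated with the simplest possible case: a 1-D steady-state confined aquifer with fixed heads at both ends, solved by finite differences. This is a didactic stand-in for the 3-D finite-element Feflow model; the grid size and boundary heads are invented.

```python
def steady_heads(h_left, h_right, n_nodes, sweeps=5000):
    """Steady-state heads in a homogeneous 1-D confined aquifer with
    Dirichlet boundaries, via Gauss-Seidel sweeps on d2h/dx2 = 0
    (each interior head relaxes to the average of its neighbours)."""
    h = [h_left] + [(h_left + h_right) / 2.0] * (n_nodes - 2) + [h_right]
    for _ in range(sweeps):
        for i in range(1, n_nodes - 1):
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h
```

For this homogeneous case the converged solution is the exact linear head profile between the two boundary values, which makes it a convenient sanity check before moving to heterogeneous, multi-dimensional grids.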

  5. Evaluation of lesion detection capabilities of anatomically based MAP image reconstruction methods using the computer observer model

    International Nuclear Information System (INIS)

    Kobayashi, Tetsuya; Kudo, Hiroyuki

    2010-01-01

    This study was conducted to evaluate the lesion detection capabilities of anatomically based maximum a posteriori (MAP) image reconstruction methods in emission computed tomography using the computer observer model. In lesion detection tasks, conventional anatomically based MAP reconstruction methods cannot preserve lesions not present in the anatomical image with high contrast and at the same time suppress noise in the background regions. We previously proposed a new anatomically based MAP reconstruction method called the SOS-MAP method, which is based on the spots-on-smooth image model in which the image is modeled by the sum of the smooth background image and the sparse spot image, and showed that the SOS-MAP method can overcome the above-mentioned drawback of conventional anatomically based MAP methods. However, the lesion detection capabilities of the SOS-MAP method remained to be clarified. In the present study, the computer observer model was used to evaluate the lesion detection capabilities of the SOS-MAP method, and it was found that the SOS-MAP method is superior to conventional anatomically based MAP methods for the detection of lesions. (author)

  6. Developing tolled-route demand estimation capabilities for Texas : opportunities for enhancement of existing models.

    Science.gov (United States)

    2014-08-01

    The travel demand models developed and applied by the Transportation Planning and Programming Division : (TPP) of the Texas Department of Transportation (TxDOT) are daily three-step models (i.e., trip generation, trip : distribution, and traffic assi...

  7. Applicability of the capability maturity model for engineer-to-order firms

    NARCIS (Netherlands)

    Veldman, J.; Klingenberg, W.

    2009-01-01

    Most of the well-known management and improvement systems and techniques, such as Lean Production (e.g. Just-In-Time (JIT) pull production, one piece flow) and Six Sigma (reduction in variation) were developed in high volume industries. In order to measure the progress of the implementation of such

  8. Applicability of the capability maturity model for engineer-to-order firms

    NARCIS (Netherlands)

    Veldman, Jasper; Klingenberg, Warse

    2009-01-01

    Most of the well-known management and improvement systems and techniques, such as Lean Production (e.g. Just-In-Time (JIT) pull production, one piece flow) and Six Sigma (reduction in variation) were developed in high volume industries. In order to measure the progress of the implementation of such

  9. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses.

  10. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    International Nuclear Information System (INIS)

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses

  11. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    …In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread…

  12. Application of the numerical modelling techniques to the simulation ...

    African Journals Online (AJOL)

    The aquifer was modelled by the application of Finite Element Method (F.E.M), with appropriate initial and boundary conditions. The matrix solver technique adopted for the F.E.M. was that of the Conjugate Gradient Method. After the steady state calibration and transient verification, the model was used to predict the effect of ...

  13. Fuzzy Control Technique Applied to Modified Mathematical Model ...

    African Journals Online (AJOL)

    In this paper, fuzzy control technique is applied to the modified mathematical model for malaria control presented by the authors in an earlier study. Five Mamdani fuzzy controllers are constructed to control the input (some epidemiological parameters) to the malaria model simulated by 9 fully nonlinear ordinary differential ...
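
A Mamdani controller of the kind described can be sketched end-to-end: fuzzify the input, fire the rules with min-inference, aggregate with max, and defuzzify by centroid. The membership sets and the single infection-to-treatment rule base below are invented for illustration; they are not the paper's five controllers or its epidemiological parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic sets on [0, 1] for one input and one output.
SETS = {"low": (0.0, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}
RULES = [("low", "low"), ("medium", "medium"), ("high", "high")]

def mamdani(x, n=101):
    """Single-input Mamdani inference: min implication, max aggregation,
    centroid defuzzification on an n-point output grid."""
    ys = [i / (n - 1) for i in range(n)]
    agg = [0.0] * n
    for antecedent, consequent in RULES:
        w = tri(x, *SETS[antecedent])  # rule firing strength
        for i, y in enumerate(ys):
            agg[i] = max(agg[i], min(w, tri(y, *SETS[consequent])))
    num = sum(m * y for m, y in zip(agg, ys))
    den = sum(agg)
    return num / den if den else 0.0
```

With this symmetric rule base a mid-range input maps to a mid-range output, while low and high inputs are pulled toward the low and high consequents respectively.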

  14. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
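
    The spatial-interpolation step described above is commonly implemented with schemes such as inverse distance weighting (IDW); a minimal sketch follows, where the sample points and elevations are invented for illustration.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of scattered data."""
    z = []
    for q in xy_query:
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-12):                 # query coincides with a data point
            z.append(z_known[np.argmin(d)])
            continue
        w = 1.0 / d**power                    # closer points weigh more
        z.append(np.sum(w * z_known) / np.sum(w))
    return np.array(z)

pts  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([10.0, 20.0, 30.0, 40.0])    # values on a geological interface
z = idw_interpolate(pts, elev, np.array([[0.5, 0.5]]))
```

    Evaluating such a scheme on the nodes of a planar mesh yields the discrete geometric surface that the record describes; the intersection of several such surfaces then bounds the rock-body volumes.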

  15. Gossiping Capabilities

    DEFF Research Database (Denmark)

    Mogensen, Martin; Frey, Davide; Guerraoui, Rachid

    Gossip-based protocols are now acknowledged as a sound basis to implement collaborative high-bandwidth content dissemination: content location is disseminated through gossip, the actual contents being subsequently pulled. In this paper, we present HEAP, HEterogeneity Aware gossip Protocol, where nodes dynamically adjust their contribution to gossip dissemination according to their capabilities. Using a continuous, itself gossip-based, approximation of relative capabilities, HEAP dynamically leverages the most capable nodes by (a) increasing their fanouts (while decreasing by the same proportion ...), so that nodes cannot declare a high capability in order to augment their perceived quality without contributing accordingly. We evaluate HEAP in the context of a video streaming application on a 236-node PlanetLab testbed. Our results show that HEAP improves the quality of the streaming by 25% over a standard gossip ...

  16. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
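
    The surrogate-data idea can be illustrated with a toy experiment (the building model, parameter values, and grid-search 'calibration' below are all invented, not BPI-2400/ASHRAE-14 procedures): generate synthetic 'utility bills' from known true inputs, calibrate against them, and then score the technique on closure to the truth rather than on fit alone.

```python
import random

def energy_model(insulation, setpoint, weather):
    """Toy building model: monthly energy use (kWh); purely illustrative."""
    return [w * (setpoint - 15.0) / insulation * 100.0 for w in weather]

# 1) pick "true" parameters and generate surrogate utility-bill data
random.seed(1)
weather = [random.uniform(0.5, 1.5) for _ in range(12)]   # monthly weather index
true_params = (2.0, 21.0)                                 # insulation, setpoint
bills = energy_model(*true_params, weather)

# 2) "calibrate" by grid search against the surrogate bills
candidates = [(i / 10, s) for i in range(10, 40) for s in (19.0, 20.0, 21.0, 22.0)]

def fit_err(p):
    pred = energy_model(p[0], p[1], weather)
    return sum((a - b) ** 2 for a, b in zip(pred, bills))

best = min(candidates, key=fit_err)

# 3) score the technique on closure to the truth, not only goodness of fit
closure = (abs(best[0] - true_params[0]), abs(best[1] - true_params[1]))
```

    Because this toy model depends only on the ratio (setpoint - 15)/insulation, off-grid parameter pairs fit the bills almost equally well, which is exactly why figure of merit 2), closure on the true inputs, matters beyond goodness of fit.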

  17. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for updating existing numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical model to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close relationship between the experimental and the numerical models.
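
    The firefly update rule, attraction toward brighter (better-fitting) solutions with an attractiveness that decays with squared distance, plus a shrinking random step, can be sketched as follows. The objective here is a simple sphere function standing in for a model-updating residual, and all parameter values are illustrative:

```python
import numpy as np

def firefly_minimize(f, dim, n=15, iters=100, alpha=0.2, beta0=1.0, gamma=0.1, seed=0):
    """Minimal firefly algorithm: every firefly moves toward each brighter one."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, (n, dim))       # candidate parameter vectors
    light = np.array([f(xi) for xi in x])      # lower objective = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:        # j is brighter, so i moves toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] = x[i] + beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    light[i] = f(x[i])
        alpha *= 0.97                          # damp the random walk over time
    best = int(np.argmin(light))
    return x[best], float(light[best])

# stand-in residual: squared mismatch between "predicted" and "measured" response
residual = lambda p: float(np.sum(p ** 2))
p_best, err = firefly_minimize(residual, dim=2)
```

    In an actual updating run, the residual would compare measured responses (tip deflection, natural frequencies) against the numerical model's predictions for a candidate parameter set.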

  18. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures ... bound of t_u * t_q = Ω(lg^(d-1) n). For ball range searching, we get a lower bound of t_u * t_q = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds ...

  19. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

    Full Text Available Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered, inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, and slopes ranged from 10° to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear at fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.

  20. Model-Based Real Time Assessment of Capability Left for Spacecraft Under Failure Mode, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project is aimed at developing a model based diagnostics system for spacecraft that will allow real time assessment of its state, while it is impacted...

  1. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  2. Comparison of Fuzzy AHP Buckley and ANP Models in Forestry Capability Evaluation (Case Study: Behbahan City Fringe

    Directory of Open Access Journals (Sweden)

    V. Rahimi

    2015-12-01

    Full Text Available The area of Zagros forests is continuously in danger of destruction. Therefore, the remaining forests should be carefully managed based on ecological capability evaluation. In fact, land evaluation includes prediction or assessment of land quality for a special land use with regard to production, vulnerability and management requirements. In this research, we studied the ecological capability of the Behbahan city fringe for forestry land use. After the basic studies were completed and the thematic maps such as soil criteria, climate, physiography, vegetation and bedrock were prepared, the fuzzy multi-criteria decision-making methods of Fuzzy AHP Buckley and ANP were used to standardize and determine the weights of the criteria. Finally, an ecological capability model of the region was generated to prioritize forestry land use and prepare the final evaluation map, using the WLC model, in seven classes. The results showed that in the ANP method, 55.58% of the area is suitable for forestry land use, which is more consistent with reality, while in the Fuzzy AHP method, 95.23% of the area was found suitable. Finally, it was concluded that the ANP method shows more flexibility and ability to determine suitable areas for forestry land use in the study area.
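
    The WLC step of such an evaluation is a weighted sum of standardized criterion layers followed by classification. A minimal sketch, with invented 2x2 raster layers and assumed weights (not the study's values):

```python
import numpy as np

# hypothetical standardized criterion layers, each scaled to [0, 1]
soil    = np.array([[0.9, 0.4], [0.7, 0.1]])
climate = np.array([[0.8, 0.6], [0.5, 0.2]])
slope   = np.array([[0.6, 0.9], [0.4, 0.3]])

# assumed criterion weights (in practice derived via Fuzzy AHP or ANP)
weights = {"soil": 0.5, "climate": 0.3, "slope": 0.2}

# weighted linear combination: cell suitability = sum of weight * criterion
suitability = (weights["soil"] * soil
               + weights["climate"] * climate
               + weights["slope"] * slope)

# classify into seven suitability classes by equal intervals on [0, 1]
edges = np.linspace(0.0, 1.0, 8)[1:-1]          # six interior thresholds
classes = np.digitize(suitability, edges) + 1   # class labels 1..7
```

    The decision-making method (Fuzzy AHP or ANP) only changes the weight vector; the combination and classification steps are the same either way, which is why the two methods can be compared on the resulting maps.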

  3. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, a type that is more powerful and potentially dangerous than any other type of malware. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model and the jump-diffusion infection spread model to portray botnet propagation.
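
    The compartmental part of such a propagation model can be sketched with a standard SEIR system (the maternal-immunity class that makes it MSEIR is omitted here, and the rates are invented rather than calibrated botnet parameters): susceptible hosts become exposed (compromised but dormant), then infectious (actively recruiting), then removed (cleaned or patched).

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of the SEIR compartment equations."""
    n = s + e + i + r
    ds = -beta * s * i / n              # new exposures
    de = beta * s * i / n - sigma * e   # incubation outflow
    di = sigma * e - gamma * i          # activation minus cleanup
    dr = gamma * i                      # cleaned/patched hosts
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# population of 10,000 hosts, one initially infectious bot (illustrative rates)
s, e, i, r = 9999.0, 0.0, 1.0, 0.0
for _ in range(2000):                   # integrate to t = 200 with dt = 0.1
    s, e, i, r = seir_step(s, e, i, r, beta=0.6, sigma=0.3, gamma=0.1, dt=0.1)
```

    The jump-diffusion component the authors add would superimpose random discontinuous bursts (for example, a new propagation vector appearing) on this smooth compartmental flow.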

  4. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    Directory of Open Access Journals (Sweden)

    Hussein Rappel

    2014-01-01

    This paper presents numerical time-domain modeling of Lamb wave propagation using the elastodynamic finite integration technique (EFIT), as well as its validation against analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate the waves for adequate transmission, capable of properly propagating in the material, interfering with defects/damages, and being received in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), finite element method (FEM), and boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically. Results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study fundamental symmetric mode interaction with a surface-breaking defect.

  5. Accurate modeling of a DOI capable small animal PET scanner using GATE

    International Nuclear Information System (INIS)

    Zagni, F.; D'Ambrosio, D.; Spinelli, AE.; Cicoria, G.; Fanti, S.; Marengo, M.

    2013-01-01

    In this work we developed a Monte Carlo (MC) model of the Sedecal Argus pre-clinical PET scanner, using GATE (Geant4 Application for Tomographic Emission). This is a dual-ring scanner which features DOI compensation by means of two layers of detector crystals (LYSO and GSO). The geometry of detectors and sources, pulse readout and selection of coincidence events were modeled with GATE, while a separate code was developed to emulate the processing of digitized data (for example, customized time windows and data flow saturation), perform the final binning of the lines of response and reproduce the data output format of the scanner's acquisition software. Validation of the model was performed by modeling several phantoms used in experimental measurements, in order to compare the results of the simulations. Spatial resolution, sensitivity, scatter fraction, count rates and NECR were tested. Moreover, the NEMA NU-4 phantom was modeled in order to check the image quality yielded by the model. Noise, contrast of cold and hot regions and recovery coefficient were calculated and compared using images of the NEMA phantom acquired with our scanner. The energy spectrum of coincidence events due to the small amount of 176Lu in LYSO crystals, which was suitably included in our model, was also compared with experimental measurements. Spatial resolution, sensitivity and scatter fraction showed agreement within 7%. Comparison of the count-rate curves was satisfactory, with values within the uncertainties over the range of activities practically used in research scans. Analysis of the NEMA phantom images also showed good agreement between simulated and acquired data, within 9% for all the tested parameters. This work shows that basic MC modeling of this kind of system is possible using GATE as a base platform; extension through suitably written customized code allows for an adequate level of accuracy in the results, as confirmed by our careful validation against experimental measurements.

  6. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)

    2014-04-01

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  7. Transient Mathematical Modeling for Liquid Rocket Engine Systems: Methods, Capabilities, and Experience

    Science.gov (United States)

    Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.

    2005-01-01

    The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented, along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented, along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.

  8. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model

    Directory of Open Access Journals (Sweden)

    O. Gagliardini

    2013-08-01

    Full Text Available The Fourth IPCC Assessment Report concluded that ice sheet flow models, in their current state, were unable to provide accurate forecasts of the increase of polar ice sheet discharge and the associated contribution to sea level rise. Since then, the glaciological community has undertaken a huge effort to develop and improve a new generation of ice flow models, and as a result a significant number of new ice sheet models have emerged. Among them is the parallel finite-element model Elmer/Ice, based on the open-source multi-physics code Elmer. It was one of the first full-Stokes models used to make projections for the evolution of the whole Greenland ice sheet for the coming two centuries. Originally developed to solve local ice flow problems of high mechanical and physical complexity, Elmer/Ice has today reached the maturity to solve larger-scale problems, earning the status of an ice sheet model. Here, we summarise almost 10 yr of development performed by different groups. Elmer/Ice solves the full-Stokes equations, for isotropic as well as anisotropic ice rheology, resolves the grounding line dynamics as a contact problem, and contains various basal friction laws. Derived fields, like the age of the ice, the strain rate or stress, can also be computed. Elmer/Ice includes two recently proposed inverse methods to infer poorly known parameters. Elmer is a highly parallelised code thanks to recent developments and the implementation of a block preconditioned solver for the Stokes system. In this paper, all these components are presented in detail, as well as the numerical performance of the Stokes solver and developments planned for the future.

  9. Evaluation of remote-sensing-based rainfall products through predictive capability in hydrological runoff modelling

    DEFF Research Database (Denmark)

    Stisen, Simon; Sandholt, Inge

    2010-01-01

    The emergence of regional and global satellite-based rainfall products with high spatial and temporal resolution has opened up new large-scale hydrological applications in data-sparse or ungauged catchments. Particularly, distributed hydrological models can benefit from the good spatial coverage ... were similar. The results showed that the Climate Prediction Center/Famine Early Warning System (CPC-FEWS) and cold cloud duration (CCD) products, which are partly based on rain gauge data and produced specifically for the African continent, performed better in the modelling context than the global ...

  10. Realizing joined-up government : Dynamic capabilities and stage models for transformation

    NARCIS (Netherlands)

    Klievink, B.; Janssen, M.

    2009-01-01

    Joining up remains a high priority on the e-government agenda and requires extensive transformation. Stage models are predictable patterns which exist in the growth of organizations and unfold as discrete time periods that result in discontinuity and can help e-government development towards

  11. Seasonal Characteristics of Widespread Ozone Pollution in China and India: Current Model Capabilities and Source Attributions

    Science.gov (United States)

    Gao, M.; Song, S.; Beig, G.; Zhang, H.; Hu, J.; Ying, Q.; McElroy, M. B.

    2017-12-01

    Fast urbanization and industrialization in China and India have led to severe ozone pollution, threatening public health in these densely populated countries. We show the spatial and seasonal characteristics of ozone concentrations using nation-wide observations for these two countries in 2013. We used the Weather Research and Forecasting model coupled to chemistry (WRF-Chem) to conduct one-year simulations and to evaluate how current models capture the important photochemical processes, using the exhaustive available datasets for China and India, including surface measurements, ozonesonde data and satellite retrievals. We also employed the factor separation approach to distinguish the contributions of different sectors to ozone during different seasons. The back-trajectory model FLEXPART was applied to investigate the role of transport in highly polluted regions (e.g., the North China Plain, Yangtze River Delta, and Pearl River Delta) during different seasons. Preliminary results indicate that the WRF-Chem model provides a satisfactory representation of the temporal and spatial variations of ozone for both China and India. The factor separation approach offers insights into the relevant sources of ozone for both countries, providing valuable guidance for policy options designed to mitigate the problem.

  12. Joint Command and Control (JC2) capability development utilising a Modelling and Simulation Framework

    CSIR Research Space (South Africa)

    Ramadeen, P

    2010-09-01

    Full Text Available The command, control and information warfare competency of Defence, Peace, Safety and Security (DPSS), an operating unit of the CSIR is using systems modelling and simulation in its Joint Command and Control (JC2) research. JC2 encompasses systems...

  13. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique, in which detailed models of individual systems are processed rather than a single, plant-level model. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  14. Assessing the capability of CORDEX models in simulating onset of rainfall in West Africa

    Science.gov (United States)

    Mounkaila, Moussa S.; Abiodun, Babatunde J.; `Bayo Omotosho, J.

    2015-01-01

    Reliable forecasts of rainfall-onset dates (RODs) are crucial for agricultural planning and food security in West Africa. This study evaluates the ability of nine CORDEX regional climate models (RCMs: ARPEGE, CRCM5, RACMO, RCA35, REMO, RegCM3, PRECIS, CCLM and WRF) in simulating RODs over the region. Four definitions are used to compute RODs, and two observation datasets (GPCP and TRMM) are used in the model evaluation. The evaluation considers how well the RCMs, driven by ERA-Interim reanalysis (ERAIN), simulate the observed mean, standard deviation and inter-annual variability of RODs over West Africa. It also investigates how well the models link RODs with the northward movement of the monsoon system over the region. The model performances are compared to that of the driving reanalysis—ERAIN. Observations show that the mean RODs in West Africa have a zonal distribution, and the dates increase from the Guinea coast northward. ERAIN fails to reproduce the spatial distribution of the RODs as observed. The performance of some RCMs in simulating the RODs depends on the ROD definition used. For instance, ARPEGE, RACMO, PRECIS and CCLM produce a better ROD distribution than that of ERAIN when three of the ROD definitions are used, but give a worse ROD distribution than that of ERAIN when the fourth definition is used. However, regardless of the definition used, CCRM5, RCA35, REMO, RegCM3 and WRF show a remarkable improvement over ERAIN. The study shows that the ability of the RCMs in simulating RODs over West Africa strongly depends on how well the models reproduce the northward movement of the monsoon system and the associated features. The results show that there are some differences in the RODs obtained between the two observation datasets and RCMs, and the differences are magnified by differences in the ROD definitions. However, the study shows that most CORDEX RCMs have remarkable skills in predicting the RODs in West Africa.
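
    A ROD computation of the kind compared in such studies can be sketched as follows; the thresholds used here (20 mm within 3 days, no 7-day dry spell in the following 30 days) represent one common style of definition and are assumptions for illustration, not the four definitions used in this paper.

```python
import numpy as np

def rainfall_onset(daily_mm, wet_total=20.0, window=3, dry_len=7, lookahead=30):
    """Return the index of the first day whose `window`-day rainfall total
    reaches `wet_total` and is not followed by a dry spell of `dry_len`
    consecutive days within the next `lookahead` days; None if no onset."""
    rain = np.asarray(daily_mm, dtype=float)
    for d in range(len(rain) - window):
        if rain[d:d + window].sum() >= wet_total:
            ahead = rain[d + window:d + window + lookahead] > 1.0  # wet-day flag
            dry_run, longest = 0, 0
            for wet in ahead:
                dry_run = 0 if wet else dry_run + 1
                longest = max(longest, dry_run)
            if longest < dry_len:       # no disqualifying dry spell follows
                return d
    return None
```

    The dry-spell check is what distinguishes a true onset from a false start, and varying `wet_total`, `window` and `dry_len` reproduces the kind of definition sensitivity the abstract reports between ROD datasets.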

  15. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  16. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and that the model can be easily applied in both manufacturing and service industries.
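
    The structure of the selection problem is that of a 0-1 program: choose the subset of techniques that maximizes total productivity gain, assumed additive as in the paper's linear model, subject to a resource budget. A brute-force sketch over five invented techniques (the real model covers fifty-four and is solved as a mixed integer program):

```python
from itertools import combinations

# hypothetical techniques: (name, productivity gain, implementation cost)
techniques = [
    ("equipment upgrade",      0.12, 50),
    ("operator training",      0.07, 20),
    ("preventive maintenance", 0.05, 15),
    ("layout redesign",        0.09, 40),
    ("quality circles",        0.04, 10),
]
budget = 70

best_gain, best_set = 0.0, ()
for k in range(len(techniques) + 1):
    for combo in combinations(techniques, k):
        cost = sum(c for _, _, c in combo)
        gain = sum(g for _, g, _ in combo)   # linear additivity, as in the model
        if cost <= budget and gain > best_gain:
            best_gain, best_set = gain, combo

chosen = sorted(name for name, _, _ in best_set)
```

    Brute force is exponential in the number of techniques; with fifty-four candidates an integer-programming solver of the kind used in the paper becomes necessary.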

  17. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  18. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  19. The Cyber Defense (CyDef) Model for Assessing Countermeasure Capabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Margot [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeVries, Troy Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Susanna P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-06-01

    Cybersecurity is essential to maintaining operations, and is now a de facto cost of business. Despite this, there is little consensus on how to systematically make decisions about cyber countermeasures investments. Identifying gaps and determining the expected return on investment (ROI) of adding a new cybersecurity countermeasure is frequently a hand-waving exercise at best. Worse, cybersecurity nomenclature is murky and frequently over-loaded, which further complicates issues by inhibiting clear communication. This paper presents a series of foundational models and nomenclature for discussing cybersecurity countermeasures, and then introduces the Cyber Defense (CyDef) model, which provides a systematic and intuitive way for decision-makers to effectively communicate with operations and device experts.

  20. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed that is capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN at modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN at modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to different dynamics based on physical concepts is better than the soft decomposition obtained using a SOM.
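Finding (c) above favours a simple conceptual recession model for the falling limb. A minimal sketch of such a model is a linear-reservoir recession, Q(t) = Q_peak · exp(−k·t); the peak flow and recession constant below are hypothetical, not values from the Kentucky River study:

```python
import math

def recession_limb(q_peak, k, hours):
    """Linear-reservoir recession curve for the falling limb of a hydrograph.

    q_peak : flow at the start of recession (e.g., m^3/s)
    k      : recession constant per time step (larger k -> faster decay)
    hours  : number of hourly ordinates to generate
    """
    return [q_peak * math.exp(-k * t) for t in range(hours)]

# Hypothetical falling limb: 120 m^3/s peak decaying with k = 0.15 per hour
falling_limb = recession_limb(q_peak=120.0, k=0.15, hours=24)
```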

  1. Capability approach

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal; Kjeldsen, Christian Christrup

    The textbook is the first comprehensive Danish presentation of the Capability Approach developed by Amartya Sen and Martha Nussbaum. The book contains a presentation and discussion of Sen's and Nussbaum's theoretical platform, and includes examples from education and education policy, pedagogy, and care....

  2. Effects of Peer Modelling Technique in Reducing Substance Abuse ...

    African Journals Online (AJOL)

    The study investigated the effects of peer modelling techniques in reducing substance abuse among undergraduates in Nigeria. The participants were one hundred and twenty (120) undergraduate students at the 100 and 400 levels, respectively, divided into two groups: one treatment group and one control group.

  3. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive-level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, so improvements are needed in describing the cognitive skills measured by items.

  4. DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-09-01

    To enable the design of large-capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g., 3D stacking and multi-level cell (MLC) design, have been explored. The existing modeling tools, however, cover only a few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM), and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all of these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing a target (e.g., latency, area or energy-delay product) for a given memory technology, or choosing the suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a given optimization target. We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers. The latest source code of DESTINY is available from the following git repository: https://bitbucket.org/sparshmittal/destinyv2.

  5. A hybrid model of QFD, SERVQUAL and KANO to increase bank's capabilities

    Directory of Open Access Journals (Sweden)

    Hasan Rajabi

    2012-10-01

    In the global market, factors such as gaining precedence over competitors, extending market share, promoting the quality of services and identifying customers' needs are important. This paper attempts to identify strategic services in one of the biggest governmental banks in Iran, called Melli bank, for gaining competitive merit using the Kano and SERVQUAL compound models, and to extend operational quality and provide suitable strategies. The primary question of this paper is how to introduce high-quality services in this bank. The proposed model of this paper uses a hybrid of three quality-based methods: the SERVQUAL, QFD and Kano models. The statistical population in this article is all clients and customers of Melli bank who use this bank's services; based on a random sampling method, 170 customers were selected. The study was held in Semnan, one of the provinces located in the west part of Iran. Research findings show that Melli bank's customers are dissatisfied with the quality of services, and to solve this problem the bank should restructure and place certain special characteristics at the head of its affairs to reach better operation. These characteristics include, in order of priority: the possibility of transferring money by sale terminal, the possibility of creating wireless POS, accelerating bank work, granting special merits to customers who use electronic services, eliminating some bank commissions, resolving problems such as system disconnections in the least time, the possibility of receiving foreign exchange from ATMs, and suitable parking in the city.

  6. Development of Detonation Modeling Capabilities for Rocket Test Facilities: Hydrogen-Oxygen-Nitrogen Mixtures

    Science.gov (United States)

    Allgood, Daniel C.

    2016-01-01

    The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow-obstacles. Excellent agreement between test data and model predictions was observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.

  7. Environmental Harmony and Evaluation of Advertisement Billboards with Digital Photogrammetry Technique and GIS Capabilities: A Case Study in the City of Ankara.

    Science.gov (United States)

    Aydın, Cevdet C.; Nisancı, Recep

    2008-05-19

    Geographical Information Systems (GIS) have been gaining growing interest in Turkey. Many local governments and public agencies have been striving to set up such systems to serve their needs and meet public requirements. Advertisement is a part of daily life in urban areas, presented in various places, on vehicles, in shops and especially in city centers. In addition, advertising and notices are one of the main sources of revenue for municipalities, and the advertising sector generates a substantial level of income today. Advertising is therefore individually important for local governments and urban management. It is equally important for urban management that advertisement signs and billboards be placed in an orderly fashion which is pleasing to the eye. A further point is the systematic control mechanism necessary for collecting taxes regularly and keeping records up to date. In this paper, the practical meaning of notices and advertisements, the problem definition and the objectives are first described, and then legal support and daily practice are reviewed, together with current practice and its problems. The possibilities of measuring and obtaining the necessary information from digital images and transferring it to spatial databases are studied. Through this study, a modern approach was developed for urban management and municipalities using information technology, as an alternative to current practice. Criteria which provide environmental harmony, such as urban beauty, colour, compatibility and safety, were also evaluated. It was finally concluded that measuring commercial signs and keeping environmental harmony under control for urban beauty can be achieved using Digital Photogrammetry (DP) techniques and GIS capabilities, which were demonstrated in pilot applications in the city center of Ankara.

  8. Environmental Harmony and Evaluation of Advertisement Billboards with Digital Photogrammetry Technique and GIS Capabilities: A Case Study in the City of Ankara

    Directory of Open Access Journals (Sweden)

    Recep Nisancı

    2008-05-01

    Geographical Information Systems (GIS) have been gaining growing interest in Turkey. Many local governments and public agencies have been striving to set up such systems to serve their needs and meet public requirements. Advertisement is a part of daily life in urban areas, presented in various places, on vehicles, in shops and especially in city centers. In addition, advertising and notices are one of the main sources of revenue for municipalities, and the advertising sector generates a substantial level of income today. Advertising is therefore individually important for local governments and urban management. It is equally important for urban management that advertisement signs and billboards be placed in an orderly fashion which is pleasing to the eye. A further point is the systematic control mechanism necessary for collecting taxes regularly and keeping records up to date. In this paper, the practical meaning of notices and advertisements, the problem definition and the objectives are first described, and then legal support and daily practice are reviewed, together with current practice and its problems. The possibilities of measuring and obtaining the necessary information from digital images and transferring it to spatial databases are studied. Through this study, a modern approach was developed for urban management and municipalities using information technology, as an alternative to current practice. Criteria which provide environmental harmony, such as urban beauty, colour, compatibility and safety, were also evaluated. It was finally concluded that measuring commercial signs and keeping environmental harmony under control for urban beauty can be achieved using Digital Photogrammetry (DP) techniques and GIS capabilities, which were demonstrated in pilot applications in the city center of Ankara.

  9. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
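A minimal illustration of the surrogate idea described above: replace an expensive simulation with a cheap approximation fitted to a handful of runs. The response function, sample count and polynomial degree below are hypothetical, not taken from the RISMC tools:

```python
import numpy as np

def expensive_simulation(x):
    # Stand-in for an hours-long physics run (hypothetical response surface).
    return np.sin(3 * x) + 0.5 * x

# Fit a cheap polynomial surrogate to a handful of "expensive" runs
x_train = np.linspace(0.0, 2.0, 12)
y_train = expensive_simulation(x_train)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=7))

# The surrogate answers in microseconds what the code answers in hours
x_new = np.linspace(0.0, 2.0, 200)
max_err = float(np.max(np.abs(surrogate(x_new) - expensive_simulation(x_new))))
print(f"max surrogate error on [0, 2]: {max_err:.4f}")
```

In a real RISMC analysis the surrogate would be trained on a design of experiments over the simulation inputs; the one-dimensional toy here only shows the accuracy-versus-cost trade.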

  10. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    Science.gov (United States)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  11. A Performance Evaluation for IT/IS Implementation in Organisation: Preliminary New IT/IS Capability Evaluation (NICE) Model

    Directory of Open Access Journals (Sweden)

    Hafez Salleh

    2011-12-01

    Most traditional IT/IS performance measures are based on productivity and process, and mainly focus on methods of investment appraisal. There is a need for alternative holistic measurement models that enable soft and hard issues to be measured qualitatively. A New IT/IS Capability Evaluation (NICE) framework has been designed to measure the capability of organisations to successfully implement IT systems, and it is applicable across industries. The idea is to provide managers with measurement tools that enable them to identify where improvements are required within their organisations and to indicate their readiness prior to IT investment. The NICE framework investigates four organisational key elements: IT, Environment, Process and People, and is composed of six progressive maturity stages through which a company can develop its IT/IS capabilities. For each maturity stage, the NICE framework describes a set of critical success factors that must be in place for the company to achieve that stage.

  12. Contractor Development Models for Promoting Sustainable Building – a case for developing management capabilities of contractors

    CSIR Research Space (South Africa)

    Dlungwana, Wilkin S

    2004-11-01

    ... effectively. The register, he argues, is used merely to record contractor information in an unstructured manner which does not allow for monitoring of performance and similar support interventions, and therefore has little benefit to the construction clients...

  13. QMU as an approach to strengthening the predictive capabilities of complex models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne.; Boggs, Paul T.; Grace, Matthew D.

    2010-09-01

    Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation (M&S) technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (e.g., the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third...
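The QMU check described above reduces, in its most stripped-down form, to comparing a performance margin against its uncertainty. The sketch below uses hypothetical numbers and is only a caricature of the full process:

```python
def qmu_confidence_ratio(requirement, best_estimate, uncertainty):
    """Simplest form of a QMU comparison.

    margin M = distance from the best estimate to the requirement threshold;
    U = uncertainty in that estimate. A ratio M/U comfortably above 1
    suggests the requirement is met with confidence; a ratio near or below
    1 (or a negative margin) flags a problem. Real QMU analyses are far
    more elaborate; this is an illustrative sketch only.
    """
    margin = best_estimate - requirement
    return margin / uncertainty

# Hypothetical numbers: requirement >= 0.95 reliability, estimated 0.99
# with +/- 0.01 uncertainty -> M/U = 4
ratio = qmu_confidence_ratio(0.95, 0.99, 0.01)
```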

  14. Investigating Integration Capabilities Between IFC and CityGML LOD3 for 3D City Modelling

    Science.gov (United States)

    Floros, G.; Pispidikis, I.; Dimopoulou, E.

    2017-10-01

    Smart-city concepts are being applied to an increasing number of fields. This evolution, however, demands data collection and integration, and major issues arise that need to be tackled. One of the most important challenges is the heterogeneity of the collected data, especially if those data derive from different standards and vary in terms of geometry, topology and semantics. Another key challenge is the efficient analysis and visualization of spatial data, which, due to the complexity of physical reality in the modern world, 2D GIS struggles to cope with. So, in order to facilitate data analysis and enhance the role of smart cities, the third dimension needs to be implemented. Standards such as CityGML and IFC fulfil that necessity, but they present major differences in their schemas that render their integration a challenging task. This paper focuses on addressing those differences, examines up-to-date research work, and investigates an alternative methodology to bridge the gap between those standards. Within this framework, a generic IFC model is generated and converted to a CityGML model, which is validated and evaluated for its geometrical correctness and semantic coherence. General results as well as future research considerations are presented.

  15. Sustainable solar energy capability studies by using S2H model in treating groundwater supply

    Science.gov (United States)

    Musa, S.; Anuar, M. F.; Shahabuddin, M. M.; Ridzuan, M. B.; Radin Mohamed, R. M. S.; Madun, M. A.

    2018-04-01

    Groundwater extracted at the Research Centre for Soft Soil Malaysia (RECESS) contains a number of pollutants that exceed the safe level for consumption. A Solar-Hydro (S2H) model, a practical prototype, has been introduced to treat the groundwater sustainably by a solar energy process (evaporation method). Selected parameters were tested: sulphate, nitrate, chloride, fluoride, pH and dissolved oxygen. The water quality results show that all parameters achieved 100% of the drinking water quality standard issued by the Ministry of Health Malaysia. The evaporation method proved that solar energy can be applied to sustainably treat groundwater quality with up to 90% effectiveness. On the other hand, the quantitative analysis showed that the production of clean water is below 2%, owing to time constraints and design factors. Thus, this model can generate clean, fresh water from groundwater using a simplified design, and it has huge potential to be implemented by local communities at a larger scale with an affordable design.

  16. Validation of foF2 and TEC Modeling During Geomagnetic Disturbed Times: Preliminary Outcomes of International Forum for Space Weather Modeling Capabilities Assessment

    Science.gov (United States)

    Shim, J. S.; Tsagouri, I.; Goncharenko, L. P.; Kuznetsova, M. M.

    2017-12-01

    To address the challenges of assessing space weather modeling capabilities, the CCMC (Community Coordinated Modeling Center) is leading the newly established "International Forum for Space Weather Modeling Capabilities Assessment." This presentation will focus on preliminary outcomes of the International Forum on validation of modeled foF2 and TEC during geomagnetic storms. We investigate the ionospheric response to the March 2013 geomagnetic storm event using ionosonde and GPS TEC observations in the North American and European sectors. To quantify storm impacts on foF2 and TEC, we first quantify the quiet-time variations of foF2 and TEC (e.g., the median and the average of the five quietest days of the 30 days during quiet conditions). It appears that the quiet-time variations of foF2 and TEC are about 10% and 20-30%, respectively. Therefore, to quantify the storm impact, we focus on foF2 and TEC changes during the storm main phase larger than 20% and 50%, respectively, compared to the 30-day median. We find that in the European sector, both the foF2 and TEC responses to the storm are mainly a positive phase, with foF2 increases of up to 100% and TEC increases of 150%. In the North American sector, however, foF2 shows negative effects (up to about a 50% decrease), while TEC shows a positive response (the largest increase is about 200%). To assess the modeling capability of reproducing the changes in foF2 and TEC due to the storm, we use various model simulations obtained from empirical, physics-based, and data assimilation models. The performance of each model depends on the selected metric; a single metric is therefore not enough to evaluate the models' predictive capabilities in capturing the storm impact. The performance of a model also varies with latitude and longitude.
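The storm-impact metric used above (percent change relative to a 30-day quiet-time median) is simple to state in code; the sample values are hypothetical, not data from the March 2013 event:

```python
def storm_change_percent(observed, quiet_median):
    """Percent deviation of an observed value from its 30-day quiet-time
    median, the quantity used to flag storm impacts (changes beyond the
    quiet-time variability, e.g. >20% for foF2 or >50% for TEC)."""
    return 100.0 * (observed - quiet_median) / quiet_median

# Hypothetical TEC sample: 30 TECU observed vs. a 12 TECU quiet median -> +150%
change = storm_change_percent(30.0, 12.0)
```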

  17. E-LEARNING APPLICATIONS FOR URBAN MODELLING AND OGC STANDARDS USING HTML5 CAPABILITIES

    Directory of Open Access Journals (Sweden)

    R. Kaden

    2012-07-01

    This article reports on the development of HTML5-based web content related to urban modelling, with special focus on GML and CityGML, allowing participants to access it regardless of the device platform. An essential part of the learning modules are short video lectures, supplemented by exercises and tests during the lecture to improve students' individual progress and success. The evaluation of the tests is used to guide students through the course content, depending on individual knowledge. With this approach, we provide learning applications on a wide range of devices, either mobile or desktop, fulfil the needs of just-in-time knowledge, and increase the emphasis on lifelong learning.

  18. BUSINESS MODELS FOR EXTENDING OF 112 EMERGENCY CALL CENTER CAPABILITIES WITH E-CALL FUNCTION INSERTION

    Directory of Open Access Journals (Sweden)

    Pop Dragos Paul

    2010-12-01

    The present article concerns the present status of implementation of the eCall service in Romania and Europe, and the proposed business models for eCall function implementation in Romania. The eCall system is used for reliable transmission, in the event of a crash, between the In-Vehicle System and the Public Service Answering Point, via the voice channel of cellular networks and the Public Switched Telephone Network (PSTN). The eCall service can be initiated automatically or manually by the driver. All data presented in this article are part of research conducted by the authors in the sectorial contract for the implementation study of the eCall system, with ITS Romania and Electronic Solution as partners and the Romanian Ministry of Communication and Information Technology as beneficiary.

  19. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
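A bare-bones sketch of the cumulative-residual idea: sum the residuals in covariate order and compare the observed path's peak with peaks of simulated zero-mean realizations. The sketch uses i.i.d. Gaussian noise as a stand-in for the paper's model-based Gaussian processes, so it is illustrative only:

```python
import random

def cumulative_residual_process(covariate, residuals):
    """Cumulative sum of residuals ordered by a covariate: the basic
    stochastic process behind this model-checking technique."""
    order = sorted(range(len(covariate)), key=lambda i: covariate[i])
    path, total = [], 0.0
    for i in order:
        total += residuals[i]
        path.append(total)
    return path

# Under a correctly specified model, residuals have mean zero and the path
# wanders near zero. Compare the observed path's peak with peaks from
# simulated zero-mean realizations to get an objective reference.
rng = random.Random(0)
x = [rng.random() for _ in range(200)]
res = [rng.gauss(0.0, 1.0) for _ in range(200)]
observed_peak = max(abs(v) for v in cumulative_residual_process(x, res))
simulated_peaks = [
    max(abs(v) for v in cumulative_residual_process(
        x, [rng.gauss(0.0, 1.0) for _ in range(200)]))
    for _ in range(100)
]
p_value = sum(p >= observed_peak for p in simulated_peaks) / 100
```

A small p-value here would mean the observed path wanders farther from zero than the simulated null paths, i.e. evidence of misspecification.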

  20. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W

    2001-03-01

    For the development of radioactive material transport packages, the structural safety of a package against the free-drop impact accident should be verified. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows for parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although LS-INGRID seems difficult to use relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID should be used with LS-DYNA. This report presents basic explanations of the structure and commands of LS-INGRID, together with basic and advanced modelling examples, for use in the impact analysis of various packages. New users can easily build complex models, from the modelling itself through to the loading and constraint conditions, by studying the basic examples presented in this report.

  1. A study on the modeling techniques using LS-INGRID

    International Nuclear Information System (INIS)

    Ku, J. H.; Park, S. W.

    2001-03-01

    For the development of radioactive material transport packages, the structural safety of a package against the free-drop impact accident should be verified. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows for parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although LS-INGRID seems difficult to use relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID should be used with LS-DYNA. This report presents basic explanations of the structure and commands of LS-INGRID, together with basic and advanced modelling examples, for use in the impact analysis of various packages. New users can easily build complex models, from the modelling itself through to the loading and constraint conditions, by studying the basic examples presented in this report.

  2. Using analytic hierarchy process to identify the nurses with high stress-coping capability: model and application.

    Science.gov (United States)

    F C Pan, Frank

    2014-03-01

    Nurses have long been relied upon as the major labor force in hospitals. Given the complicated and highly labor-intensive job requirements, multiple pressures from different sources are inevitable. Success in identifying stresses and coping with them accordingly is important for the job performance of nurses and the service quality of a hospital. The purpose of this research is to identify the determinants of nurses' stress-coping capabilities. A modified Analytic Hierarchy Process (AHP) was adopted. Overall, 105 nurses from several randomly selected hospitals in southern Taiwan were surveyed to generate factors. Ten experienced practitioners were included as experts in the AHP to produce the weights of each criterion. Six nurses from two regional hospitals were then selected to test the model. Four factors were identified at the second level of the hierarchy. The results show that the family factor is the most important, followed by personal attributes. The top three sub-criteria contributing to a nurse's stress-coping capability are children's education, a good career plan, and a healthy family. A practical simulation provided evidence for the usefulness of this model. The study suggested including these key determinants in human-resource management practice, restructuring the hospital's organization, and creating an employee-support system as well as a family-friendly working climate. The research provided evidence supporting the usefulness of AHP in identifying the key factors that help stabilize a nursing team.
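The AHP step of turning pairwise comparisons into criterion weights can be sketched as follows. The comparison matrix here is hypothetical, not the one elicited from the ten experts in the study:

```python
import numpy as np

# Hypothetical 4x4 pairwise-comparison matrix for four factors (e.g. family,
# personal attributes, and two others); entries follow the usual Saaty 1-9
# scale, with A[i][j] = importance of factor i relative to factor j and
# A[j][i] = 1 / A[i][j].
A = np.array([
    [1.0, 2.0, 4.0, 5.0],
    [1 / 2, 1.0, 3.0, 4.0],
    [1 / 4, 1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 4, 1 / 2, 1.0],
])

# AHP priority weights = principal eigenvector of A, normalised to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
principal = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Saaty's consistency index CI = (lambda_max - n) / (n - 1);
# small CI means the expert judgements are nearly consistent.
n = A.shape[0]
ci = float((eigvals.real[principal] - n) / (n - 1))
print("weights:", np.round(weights, 3), " CI:", round(ci, 3))
```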

  3. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Garcia, Monica; Morandi, Sonia

    2013-01-01

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that

  4. A Process and Environment Aware Sierra/SolidMechanics Cohesive Zone Modeling Capability for Polymer/Solid Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Reedy, E. D. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chambers, Robert S. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Hughes, Lindsey Gloe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Kropka, Jamie Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stavig, Mark E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stevens, Mark J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    The performance and reliability of many mechanical and electrical components depend on the integrity of polymer-to-solid interfaces. Such interfaces are found in adhesively bonded joints, encapsulated or underfilled electronic modules, protective coatings, and laminates. The work described herein was aimed at improving Sandia's finite-element-based capability to predict interfacial crack growth by 1) using a high-fidelity nonlinear viscoelastic material model for the adhesive in fracture simulations, and 2) developing and implementing a novel cohesive zone fracture model that generates a mode-mixity-dependent toughness as a natural consequence of its formulation (i.e., generates the observed increase in interfacial toughness with increasing crack-tip interfacial shear). Furthermore, molecular dynamics simulations were used to study fundamental material/interfacial physics so as to develop a fuller understanding of the connection between molecular structure and failure. Also reported are test results that quantify how joint strength and interfacial toughness vary with temperature.
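A cohesive zone model ties interfacial traction to the crack-face separation. The sketch below uses the generic bilinear traction-separation law, a textbook form rather than Sandia's mode-mixity-dependent formulation, and the parameter values are illustrative only.

```python
def bilinear_traction(delta, sigma_max, delta0, delta_f):
    """Bilinear traction-separation law: traction rises linearly to the
    peak sigma_max at opening delta0, then softens linearly to zero at
    the final opening delta_f (full debonding)."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return sigma_max * delta / delta0      # elastic loading branch
    if delta < delta_f:
        return sigma_max * (delta_f - delta) / (delta_f - delta0)
    return 0.0                                 # fully debonded

# The area under the traction-separation curve is the work of
# separation, i.e. the interfacial toughness G_c of the law.
sigma_max, delta0, delta_f = 30.0, 0.001, 0.01  # illustrative units
G_c = 0.5 * sigma_max * delta_f
```

In a mode-mixity-dependent model such as the one described above, parameters like sigma_max and delta_f would themselves vary with the local ratio of shear to normal opening.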

  5. Modelling and Assessment of the Capabilities of a Supermarket Refrigeration System for the Provision of Regulating Power

    DEFF Research Database (Denmark)

    O'Connell, Niamh; Madsen, Henrik; Pinson, Pierre

    This report presents an analysis of the demand response capabilities of a supermarket refrigeration system, with a particular focus on the suitability of this resource for participation in the regulating power market. An ARMAX model of the system is identified from experimental data, and the model...... are revealed that would complicate the task of devising bids on a conventional power market. These complexities are incurred due to the physical characteristics and constraints of the system as well as the particular characteristics of the control frameworks employed. Simulations considering the provision...... of the system this behaviour can be simplified. These restrictions result in a loss of optimality, but result in a resource that can be communicated to the market operator in the form of a bid containing a quantity of power for up- or down-regulation and the duration for which the service can be provided....

  6. GENERALIZATION TECHNIQUE FOR 2D+SCALE DHE DATA MODEL

    Directory of Open Access Journals (Sweden)

    H. Karim

    2016-10-01

    Full Text Available Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised about fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, to a 3D model, but the implementation of a scale dimension faces problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object, such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with a scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We strongly believe that advantages such as local modification and topological elements (navigation, query, and semantic information) in the scale dimension could be used for future 3D-scale applications.
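The half-edge idea underlying the DHE structure — directed edges that carry an origin vertex, a twin pointing the opposite way, and a next pointer around the face, so that topology queries become pointer walks — can be sketched minimally as follows. This is a plain half-edge sketch, not the full dual half-edge structure, and the twin links are omitted for brevity.

```python
class HalfEdge:
    """Minimal half-edge record: a directed edge storing its origin
    vertex, its oppositely-directed twin, and the next edge on its face."""
    def __init__(self, origin):
        self.origin = origin
        self.twin = None
        self.next = None

def face_vertices(start):
    """Collect the vertices bounding one face by walking `next` pointers
    until the traversal returns to the starting half-edge."""
    verts, e = [], start
    while True:
        verts.append(e.origin)
        e = e.next
        if e is start:
            return verts

# One triangular face A-B-C (twin links omitted for brevity)
a, b, c = HalfEdge("A"), HalfEdge("B"), HalfEdge("C")
a.next, b.next, c.next = b, c, a
```

The local-modification advantage mentioned in the abstract follows from this representation: splitting an edge or collapsing a face only rewires a handful of pointers, leaving the rest of the model untouched.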

  7. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145, entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'', and is an initial part of a program for estimating runoff from Central Anatolian watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of the SCS model to be applied first to Guvenc basin and then to other basins of Central Anatolia for predicting surface runoff from gaged and ungaged watersheds; and 3) the use of environmental isotope techniques to define the basin components of streamflow in Guvenc basin. 31 refs, figs and tabs
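The SCS (curve number) model mentioned above estimates direct runoff from storm rainfall using a single catchment parameter, the curve number CN. A minimal sketch of the standard formulation, with the usual initial abstraction Ia = 0.2S, is:

```python
def scs_runoff(P, CN):
    """SCS curve-number direct runoff Q (mm) for storm rainfall P (mm):
    S = 25400/CN - 254 is the potential maximum retention (mm),
    Ia = 0.2*S the initial abstraction, and
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    S = 25400.0 / CN - 254.0  # potential maximum retention
    Ia = 0.2 * S              # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)
```

For a fully impervious surface (CN = 100) all rainfall becomes runoff, while lower curve numbers retain more of the storm; a basin-specific modification like the one in this study would typically adjust CN or the Ia/S ratio to local conditions.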

  8. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…
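A minimal example of the state-space side of this comparison is a scalar linear Gaussian model filtered with the Kalman recursions. The model and parameter values below are illustrative, not taken from the article.

```python
import random

def kalman_filter(ys, a, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the state-space model
    x_t = a*x_{t-1} + w_t (Var q),  y_t = x_t + v_t (Var r);
    returns the filtered state estimates E[x_t | y_1..t]."""
    x, p, est = x0, p0, []
    for y in ys:
        x, p = a * x, a * a * p + q          # predict
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p  # update
        est.append(x)
    return est

# Simulate the model, then filter the noisy observations
random.seed(0)
a, q, r = 0.8, 0.25, 1.0
xs, ys, x = [], [], 0.0
for _ in range(2000):
    x = a * x + random.gauss(0, q ** 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0, r ** 0.5))
est = kalman_filter(ys, a, q, r)
```

In SEM terms, the same model corresponds to a latent autoregressive factor with a single noisy indicator; the state-space machinery adds the recursive filtering that SEM software does not perform directly.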

  9. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    OpenAIRE

    Frederico R. Romero; Claudemir Trapp; Michael Muntener; Fabio A. Brito; Louis R. Kavoussi; Thomas W. Jarrett

    2007-01-01

    OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbabl...

  10. A Bayesian Technique for Selecting a Linear Forecasting Model

    OpenAIRE

    Ramona L. Trader

    1983-01-01

    The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...
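One common Bayesian-flavoured approach to the predictor-selection problem described above is to score candidate models with the Bayesian information criterion (BIC), which approximates the log marginal likelihood. This is a generic sketch on synthetic data, not the specific technique of the paper (which also incorporates expert opinion).

```python
import math
import random

def fit_ols(xs, ys):
    """Simple linear regression y = b0 + b1*x; returns (b0, b1, rss)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    rss = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    return b0, b1, rss

def bic(rss, n, k):
    """Bayesian information criterion (lower is better); k counts the
    fitted parameters, here intercept, slope and error variance."""
    return n * math.log(rss / n) + k * math.log(n)

random.seed(1)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]   # informative predictor
x2 = [random.gauss(0, 1) for _ in range(n)]   # uninformative predictor
y = [2.0 * a + random.gauss(0, 1) for a in x1]
bics = {name: bic(fit_ols(x, y)[2], n, 3)
        for name, x in [("x1", x1), ("x2", x2)]}
```

The BIC's penalty term k*log(n) plays the role of the prior's preference for parsimony, so the criterion correctly favours the model built on the informative predictor.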

  11. Fuzzy techniques for subjective workload-score modeling under uncertainties.

    Science.gov (United States)

    Kumar, Mohit; Arndt, Dagmar; Kreuzfeld, Steffi; Thurow, Kerstin; Stoll, Norbert; Stoll, Regina

    2008-12-01

    This paper deals with the development of a computer model to estimate the subjective workload score of individuals by evaluating their heart-rate (HR) signals. The identification of a model to estimate the subjective workload score of individuals under different workload situations is too ambitious a task because different individuals (due to different body conditions, emotional states, age, gender, etc.) show different physiological responses (assessed by evaluating the HR signal) under different workload situations. This is equivalent to saying that the mathematical mappings between physiological parameters and the workload score are uncertain. Our approach to deal with the uncertainties in a workload-modeling problem consists of the following steps: 1) The uncertainties arising due to the individual variations in identifying a common model valid for all the individuals are filtered out using a fuzzy filter; 2) stochastic modeling of the uncertainties (provided by the fuzzy filter) uses finite-mixture models and utilizes this information regarding uncertainties for identifying the structure and initial parameters of a workload model; and 3) finally, the workload model parameters for an individual are identified in an online scenario using machine learning algorithms. The contribution of this paper is to propose, with a mathematical analysis, a fuzzy-based modeling technique that first filters out the uncertainties from the modeling problem, analyzes the uncertainties statistically using finite-mixture modeling, and, finally, utilizes the information about uncertainties for adapting the workload model to an individual's physiological conditions. The approach of this paper, demonstrated with the real-world medical data of 11 subjects, provides a fuzzy-based tool useful for modeling in the presence of uncertainties.
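Step 2 of the approach relies on finite-mixture modeling of the uncertainties. The standard EM algorithm for a two-component one-dimensional Gaussian mixture, sketched below on synthetic data, illustrates the idea; this is the generic algorithm, not the paper's specific workload model.

```python
import math
import random

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture; returns the two
    fitted components as (weight, mean, variance) tuples."""
    mu1, mu2 = min(data), max(data)                 # crude initialisation
    var1 = var2 = (max(data) - min(data)) ** 2 / 4.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = [w * norm_pdf(x, mu1, var1) /
             (w * norm_pdf(x, mu1, var1) + (1 - w) * norm_pdf(x, mu2, var2))
             for x in data]
        # M-step: re-estimate weights, means and variances
        n1 = sum(r)
        n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        var1 = sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1 + 1e-9
        var2 = sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2 + 1e-9
        w = n1 / len(data)
    return (w, mu1, var1), (1 - w, mu2, var2)

random.seed(2)
data = ([random.gauss(0, 1) for _ in range(300)] +
        [random.gauss(6, 1) for _ in range(300)])
comp1, comp2 = em_gmm(data)
```

In the paper's setting, the mixture components would summarise the spread of individual responses delivered by the fuzzy filter, and the fitted components would seed the structure and initial parameters of the workload model.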

  12. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
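The global, variance-based methods mentioned above estimate first-order sensitivity indices S_i = Var(E[Y|X_i]) / Var(Y). A brute-force sketch on a toy additive model with known indices (S1 = 0.2, S2 = 0.8 for independent uniform inputs) is shown below; real implementations use more efficient sampling designs than this double loop.

```python
import random

def first_order_index(f, i, dim=2, n_outer=500, n_inner=100, seed=3):
    """Brute-force estimate of the first-order sensitivity index
    S_i = Var(E[Y | X_i]) / Var(Y), with inputs ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                      # fix X_i at a sampled value
        ys = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(dim)]
            x[i] = xi                          # vary everything except X_i
            ys.append(f(x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    mean_y = sum(all_y) / len(all_y)
    var_y = sum((y - mean_y) ** 2 for y in all_y) / len(all_y)
    var_cond = sum((m - mean_y) ** 2 for m in cond_means) / len(cond_means)
    return var_cond / var_y

f = lambda x: x[0] + 2.0 * x[1]  # toy model: true S1 = 0.2, S2 = 0.8
s1 = first_order_index(f, 0)
s2 = first_order_index(f, 1)
```

For models with interactions, comparing such first-order indices against total-effect indices reveals exactly the interaction structure that, as the abstract notes, purely local methods miss.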

  13. Practical Techniques for Modeling Gas Turbine Engine Performance

    Science.gov (United States)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
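The Brayton-cycle analysis referred to above rests on the ideal-cycle efficiency relation, which depends only on the compressor pressure ratio r and the specific-heat ratio gamma. A minimal sketch:

```python
def brayton_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (air-standard) Brayton-cycle thermal efficiency:
    eta = 1 - r ** (-(gamma - 1) / gamma)."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)
```

Real engine models layer component performance maps, losses and map scaling on top of this ideal relation, but the monotone dependence of efficiency on pressure ratio survives as a useful first check on a simulation.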

  14. New techniques and models for assessing ischemic heart disease risks

    Directory of Open Access Journals (Sweden)

    I.N. Yakovina

    2017-09-01

    Full Text Available The paper focuses on the tasks of creating and implementing a new technique aimed at assessing ischemic heart disease risk. The technique is based on a laboratory-diagnostic complex which includes oxidative, lipid-lipoprotein, inflammatory, and metabolic biochemical parameters; a system of logic-mathematical models used for obtaining numeric risk assessments; and a program module which allows calculation and analysis of the results. We justified our models in the course of our research, which included 172 patients suffering from ischemic heart disease (IHD) combined with coronary atherosclerosis verified by coronary arteriography, and 167 patients who did not have ischemic heart disease. Our research program included demographic and social data, questioning on tobacco and alcohol addiction, questioning about dietary habits, chronic disease case history and medication intake, cardiologic questioning as per Rose, anthropometry, blood pressure measured three times, spirometry, and electrocardiogram recording with decoding as per the Minnesota code. We detected the biochemical parameters of each patient and adjusted our task of creating techniques and models for assessing ischemic heart disease risk on the basis of inflammatory, oxidative, and lipid biological markers. We created a system of logic and mathematical models which is a universal scheme for laboratory parameter processing, allowing for dissimilar data specificity. The system of models is universal, but the diagnostic approach to the applied biochemical parameters is specific. The created program module (calculator) helps a physician to obtain a result on the basis of laboratory research data; the result characterizes the numeric risks of coronary atherosclerosis and ischemic heart disease for a patient. It also allows a visual image of the system of parameters and their deviation from a conditional «standard – pathology» boundary. The complex is implemented into practice by the Scientific

  15. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    Science.gov (United States)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  16. Modeling resources and capabilities in enterprise architecture: A well-founded ontology-based proposal for ArchiMate

    NARCIS (Netherlands)

    Azevedo, Carlos; Azevedo, Carlos L.B.; Iacob, Maria Eugenia; Andrade Almeida, João; van Sinderen, Marten J.; Ferreira Pires, Luis; Guizzardi, Giancarlo

    2015-01-01

    The importance of capabilities and resources for portfolio management and business strategy has been recognized in the management literature. Despite that, little attention has been given to integrate the notions of capabilities and resources in enterprise architecture descriptions. One notable

  17. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  18. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion resulting from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding, through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique
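The particle-in-cell idea used for tracking the molten fuel particles — Lagrangian particles advanced through an Eulerian grid, with particle quantities deposited back onto the cells — can be sketched in one dimension as follows. This is a toy illustration with nearest-cell deposition and prescribed particle velocities, not the EPIC implementation; all values are arbitrary.

```python
def pic_step(positions, velocities, dt, n_cells, length):
    """One step of a toy 1-D particle-in-cell update: advance particle
    positions on a periodic domain, then deposit particle counts to
    cells (nearest-cell, "top hat" deposition)."""
    dx = length / n_cells
    new_pos = [(x + v * dt) % length for x, v in zip(positions, velocities)]
    density = [0] * n_cells
    for x in new_pos:
        density[int(x / dx)] += 1
    return new_pos, density

positions = [0.1, 0.4, 0.45, 0.9]
velocities = [1.0, -0.5, 0.2, 0.3]
positions, density = pic_step(positions, velocities,
                              dt=0.1, n_cells=10, length=1.0)
```

A production PIC scheme would also interpolate grid quantities (here, the channel hydrodynamics) back to the particle positions to update the velocities, closing the particle-grid coupling loop; the deposition step above conserves the particle count by construction.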

  19. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in every-day life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques and thereby the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems faced by them and consequently they would feel science is a part of their lives. In this paper, it has been described how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to the students of secondary and higher secondary levels by using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  20. A Simple Technique For Visualising Three Dimensional Models in Landscape Contexts

    Directory of Open Access Journals (Sweden)

    Stuart Jeffrey

    2001-05-01

    Full Text Available One of the Scottish Early Medieval Sculptured Stones (SEMSS) project's objectives is to generate accurate three-dimensional models of these monuments using a variety of data capture techniques, from photogrammetry to time-of-flight laser measurement. As the landscape context of these monuments is often considered crucial to their understanding, the model's ultimate presentation to the user should include some level of contextual information. In addition, there are a number of presentation issues that must be considered, such as interactivity, the relationship of reconstructed to non-reconstructed sections, lighting, and suitability for presentation over the WWW. This article discusses the problem of presenting three-dimensional models of monumental stones in their landscape contexts. The problem is discussed in general, but special attention is paid to the difficulty of capturing landscape detail, interactivity, reconstructing landscapes, and providing accurate representations of landscapes to the horizon. Comparison is made between 3D modelling packages and Internet-specific presentation formats such as VRML and QTVR. The proposed technique provides some level of interactivity as well as photorealistic landscape representation extended to the horizon, without the need for a complete DEM/DTM, thereby making file sizes manageable and capable of WWW presentation. It also allows the issues outlined to be tackled more efficiently than by using either 3D modelling or QTVR on their own.

  1. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, each with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  2. Demonstration of the Recent Additions in Modeling Capabilities for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-03-01

    WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion (WEC) devices. The code uses the MATLAB SimMechanics package to solve the multi-body dynamics and models the wave interactions using hydrodynamic coefficients derived from frequency domain boundary element methods. In this paper, the new modeling features introduced in the latest release of WEC-Sim will be presented. The first feature discussed is the conversion of the fluid memory kernel to a state-space approximation that provides significant gains in computational speed. The benefit of the state-space calculation becomes even greater after the hydrodynamic body-to-body coefficients are introduced, as the number of interactions increases exponentially with the number of floating bodies. The final feature discussed is the capability to add Morison elements to provide additional hydrodynamic damping and inertia. This is generally used as a tuning feature, because performance is highly dependent on the chosen coefficients. In this paper, a review of the hydrodynamic theory for each of the features is provided and successful implementation is verified using test cases.
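The speed gain from the state-space approximation comes from replacing the fluid-memory convolution integral with a small linear ODE system that is stepped forward in time. The sketch below illustrates the equivalence for the toy kernel k(t) = exp(-t), whose exact state-space realisation is A = -1, B = 1, C = 1; this is an illustrative example, not WEC-Sim's actual radiation kernel or fitting procedure.

```python
import math

def convolution_output(u, dt):
    """Direct evaluation of y(t) = integral of k(t - s) * u(s) ds with
    kernel k(t) = exp(-t), via a left-endpoint Riemann sum (O(n^2))."""
    y = []
    for n in range(len(u)):
        y.append(sum(math.exp(-(n - m) * dt) * u[m] * dt for m in range(n)))
    return y

def state_space_output(u, dt):
    """Same response from the equivalent state-space realisation
    x' = -x + u, y = x, integrated with forward Euler (O(n))."""
    x, y = 0.0, []
    for n in range(len(u)):
        y.append(x)
        x = x + dt * (-x + u[n])
    return y

dt = 0.01
u = [math.sin(0.5 * n * dt) for n in range(500)]  # illustrative forcing
y_conv = convolution_output(u, dt)
y_ss = state_space_output(u, dt)
```

The two outputs agree to the discretisation error, while the state-space form avoids re-summing the whole history at every step, which is exactly where the reported computational gain comes from (and why it compounds when body-to-body interaction kernels multiply).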

  3. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    International Nuclear Information System (INIS)

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-01-01

    The reliability of and confidence in predictions from model simulations are crucial--these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is ''known'' and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods

  4. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Directory of Open Access Journals (Sweden)

    Nikolaos Gkantidis

    Full Text Available To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points on the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D0.05), the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  5. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Science.gov (United States)

    Gkantidis, Nikolaos; Schauseil, Michael; Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points on the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D0.05), the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  6. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique, in order to assess their capabilities to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package, commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created: three were used for the input images, and the fourth served as the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error and mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images produced by the maximum approach give the least estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the least, and the reconstructed resistivities of the blocks are closer to the true blocks, than in any other combination used. Thus, combining inverse resistivity models yields more reliable and detailed information about the geologic models than using individual data sets.
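The first two assessment criteria are simple aggregate error measures. A minimal sketch of how they can be computed for reconstructed block resistivities; the values below are hypothetical, not the study's data:

```python
import numpy as np

def mae(true, est):
    """Mean absolute error between true and reconstructed resistivities."""
    true, est = np.asarray(true, float), np.asarray(est, float)
    return np.mean(np.abs(true - est))

def mpae(true, est):
    """Mean percentage absolute error, in percent of the true values."""
    true, est = np.asarray(true, float), np.asarray(est, float)
    return 100.0 * np.mean(np.abs(true - est) / np.abs(true))

# Hypothetical block resistivities (ohm-m): true model vs. a combined image.
true_res = [100.0, 250.0, 50.0, 400.0]
combined = [92.0, 268.0, 55.0, 371.0]
```

Lower values on both measures indicate that the combined image reproduces the true blocks more faithfully.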

  7. Modelling galaxy formation with multi-scale techniques

    International Nuclear Information System (INIS)

    Hobbs, A.

    2011-01-01

Full text: Galaxy formation and evolution depends on a wide variety of physical processes - star formation, gas cooling, supernova explosions, stellar winds, etc. - that span an enormous range of physical scales. We present a novel technique for modelling such massively multiscale systems. This has two key new elements: Lagrangian re-simulation, and convergent 'sub-grid' physics. The former allows us to home in on interesting simulation regions with very high resolution. The latter allows us to increase resolution for the physics that we can resolve, without unresolved physics spoiling convergence. We illustrate the power of our new approach by showing some new results for star formation in the Milky Way. (author)

  8. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative...... steady atmospheric wind shear profile with and without wind direction changes up through the atmospheric boundary layer. Results show that the main impact on the turbine is captured by the model. Analysis of the wake behind the wind turbine reveals the formation of a skewed wake geometry interacting...

  9. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  10. Biological modelling of pelvic radiotherapy. Potential gains from conformal techniques

    Energy Technology Data Exchange (ETDEWEB)

    Fenwick, J.D

    1999-07-01

Models have been developed which describe the dose and volume dependences of various long-term rectal complications of radiotherapy; assumptions underlying the models are consistent with clinical and experimental descriptions of complication pathogenesis. In particular, rectal bleeding - perhaps the most common complication of modern external beam prostate radiotherapy, and which might be viewed as its principal dose-limiting toxicity - has been modelled as a parallel-type complication. Rectal dose-surface-histograms have been calculated for 79 patients treated, in the course of the Royal Marsden trial of pelvic conformal radiotherapy, for prostate cancer using conformal or conventional techniques; rectal bleeding data are also available for these patients. The maximum-likelihood fit of the parallel bleeding model to the dose-surface-histograms and complication data shows that the complication status of the patients analysed (most of whom received reference point doses of 64 Gy) was significantly dependent on, and almost linearly proportional to, the volume of highly dosed rectal wall: a 1% decrease in the fraction of rectal wall (outlined over an 11 cm rectal length) receiving a dose of 58 Gy or more led to a reduction in the (RTOG) grade 1,2,3 bleeding rate of about 1.1% (95% confidence interval [0.04%, 2.2%]). The parallel model fit to the bleeding data is only marginally biased by uncertainties in the calculated dose-surface-histograms (due to setup errors, rectal wall movement and absolute rectal surface area variability), causing the gradient of the observed volume-response curve to be slightly lower than that which would be seen in the absence of these uncertainties. 
An analysis of published complication data supports these single-centre findings and indicates that the reductions in highly dosed rectal wall volumes obtainable using conformal radiotherapy techniques can be exploited to allow escalation of the dose delivered to the prostate target volume, the

  11. Design and modeling of an autonomous multi-link snake robot, capable of 3D-motion

    Directory of Open Access Journals (Sweden)

    Rizkallah Rabel

    2016-01-01

Full Text Available The paper presents the design of an autonomous, wheeless, mechanical snake robot that was modeled and built at Notre Dame University – Louaize. The robot is also capable of 3D motion with an ability to climb in the z-direction. The snake is made of a series of links, each containing one to three high-torque DC motors and a gearing system. They are connected to each other through hollow aluminum rods that can be rotated through a 180° span. This allows the snake to move in various environments, including unfriendly and cluttered ones. The front link has a proximity sensor used to map the environment. This mapping is sent to a microcontroller which controls and adapts the motion pattern of the snake. The snake can therefore choose to avoid obstacles, or climb over them if their height is within its range. The presented model is made of five links, but this number can be increased, since the links' role is repetitive. The novel design is meant to overcome previous limitations by allowing 3D motion through electric actuators and low energy consumption.

  12. CASL Dakota Capabilities Summary

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Simmons, Chris [Univ. of Texas, Austin, TX (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-10

The Dakota software project serves the mission of Sandia National Laboratories and supports a worldwide user community by delivering state-of-the-art research and robust, usable software for optimization and uncertainty quantification. These capabilities enable advanced exploration and risk-informed prediction with a wide range of computational science and engineering models. Dakota is the verification and validation (V&V) / uncertainty quantification (UQ) software delivery vehicle for CASL, allowing analysts across focus areas to apply these capabilities to myriad nuclear engineering analyses.

  13. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  14. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  15. ATLAS Event Data Organization and I/O Framework Capabilities in Support of Heterogeneous Data Access and Processing Models

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219732; The ATLAS collaboration; Cranshaw, Jack; van Gemmeren, Peter; Nowak, Marcin

    2016-01-01

    Choices in persistent data models and data organization have significant performance ramifications for data-intensive scientific computing. In experimental high energy physics, organizing file-based event data for efficient per-attribute retrieval may improve the I/O performance of some physics analyses but hamper the performance of processing that requires full-event access. In-file data organization tuned for serial access by a single process may be less suitable for opportunistic sub-file-based processing on distributed computing resources. Unique I/O characteristics of high-performance computing platforms pose additional challenges. The ATLAS experiment at the Large Hadron Collider employs a flexible I/O framework and a suite of tools and techniques for persistent data organization to support an increasingly heterogeneous array of data access and processing models.

  16. Effects of nursing intervention models on social adaption capability development in preschool children with malignant tumors: a randomized control trial.

    Science.gov (United States)

    Yu, Lu; Mo, Lin; Tang, Yan; Huang, Xiaoyan; Tan, Juan

    2014-06-01

The objectives of this study are to compare the effects of two nursing intervention models on the ability of preschool children with malignant tumors to socialize and to determine if these interventions improved their social adaption capability (SAC) and quality of life. Inpatient preschool children with malignant tumors admitted to the hospital between December 2009 and March 2012 were recruited and randomized into either the experimental or control groups. The control group received routine nursing care, and the experimental group received family-centered nursing care, including physical, psychological, and social interventions. The Infants-Junior Middle School Student's Social-Life Abilities Scale was used to evaluate SAC development of participants. Participants (n = 240) were recruited and randomized into two groups. After the intervention, the excellent and normal SAC rates were 27.5% and 55% in the experimental group, respectively, compared with 2.5% and 32.5% in the control group (p < 0.05). After the intervention, SAC in the experimental group was improved compared with before the intervention (54.68 ± 10.85 vs. 79.9 ± 22.3, p < 0.05), whereas no significant improvement was observed in the control group (54.70 ± 11.47 vs. 52 ± 15.8, p = 0.38). The family-centered nursing care model that included physical, psychological, and social interventions improved the SAC of children with malignancies compared with children receiving routine nursing care. Establishing a standardized family-school-community-hospital hierarchical multi-management intervention model for children is important to the efficacy of long-term interventions and to the improvement of SAC of children with malignancies. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Barahona, B.; Jonkman, J.; Damiani, R.; Robertson, A.; Hayman, G.

    2014-12-01

Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
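Craig-Bampton reduction keeps a handful of fixed-interface normal modes plus static constraint modes at the interface degrees of freedom. A self-contained numerical sketch on a hypothetical 6-DOF spring-mass chain (an illustration of the method only, not the SubDyn implementation):

```python
import numpy as np

# Hypothetical fixed-free spring-mass chain (unit masses and stiffnesses)
# standing in for a substructure; the last DOF is the interface node.
n, m = 6, 3                          # total DOFs, retained interior modes
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0                      # free (interface) end
M = np.eye(n)

idx_i, idx_b = np.arange(n - 1), np.array([n - 1])
Kii = K[np.ix_(idx_i, idx_i)]
Kib = K[np.ix_(idx_i, idx_b)]

# Static constraint mode: interior deflection for unit interface motion.
Psi = -np.linalg.solve(Kii, Kib)

# Fixed-interface normal modes (interface clamped); Mii = I here, so an
# ordinary symmetric eigenproblem suffices.
_, V = np.linalg.eigh(Kii)
Phi = V[:, :m]

# Craig-Bampton transformation: modal + physical interface coordinates.
T = np.zeros((n, m + 1))
T[:-1, :m] = Phi
T[:-1, m] = Psi[:, 0]
T[-1, m] = 1.0
Kr, Mr = T.T @ K @ T, T.T @ M @ T

# Lowest natural frequency of the reduced vs. the full model.
L = np.linalg.cholesky(Mr)
A = np.linalg.solve(L, np.linalg.solve(L, Kr).T)   # L^-1 Kr L^-T
w_full = np.sqrt(np.linalg.eigvalsh(K).min())      # M = I for the full model
w_red = np.sqrt(np.linalg.eigvalsh(A).min())
f_err = abs(w_red - w_full) / w_full
```

Because the reduction is a Ritz projection, the reduced frequencies bound the full ones from above, and the lowest modes are reproduced closely even with few retained interior modes.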

  18. Modeling and Forecasting Electricity Demand in Azerbaijan Using Cointegration Techniques

    Directory of Open Access Journals (Sweden)

    Fakhri J. Hasanov

    2016-12-01

Full Text Available Policymakers in developing and transitional economies require sound models to: (i) understand the drivers of rapidly growing energy consumption and (ii) produce forecasts of future energy demand. This paper attempts to model electricity demand in Azerbaijan and provide future forecast scenarios—as far as we are aware this is the first such attempt for Azerbaijan using a comprehensive modelling framework. Electricity consumption increased and decreased considerably in Azerbaijan from 1995 to 2013 (the period used for the empirical analysis—it increased on average by about 4% per annum from 1995 to 2006 but decreased by about 4½% per annum from 2006 to 2010 and increased thereafter. It is therefore vital that Azerbaijani planners and policymakers understand what drives electricity demand and be able to forecast how it will grow in order to plan for future power production. However, modeling electricity demand for such a country has many challenges. Azerbaijan is rich in energy resources, consequently GDP is heavily influenced by oil prices; hence, real non-oil GDP is employed as the activity driver in this research (unlike almost all previous aggregate energy demand studies. Moreover, electricity prices are administered rather than market driven. Therefore, different cointegration and error correction techniques are employed to estimate a number of per capita electricity demand models for Azerbaijan, which are used to produce forecast scenarios for up to 2025. The resulting estimated models (in terms of coefficients, etc. and forecasts of electricity demand for Azerbaijan in 2025 prove to be very similar; with the Business as Usual forecast ranging from about 19½ to 21 TWh.
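The cointegration and error-correction approach can be sketched with the two-step Engle-Granger procedure: estimate the long-run relation between the levels, then regress the differenced demand on differenced drivers and the lagged long-run residual. A minimal illustration on synthetic data (not the paper's series or estimates):

```python
import numpy as np

rng = np.random.default_rng(42)

def ols(y, *regressors):
    """Ordinary least squares with intercept; returns the coefficients."""
    X = np.column_stack((np.ones(len(y)),) + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Simulated I(1) activity driver (stand-in for log non-oil GDP) and a
# cointegrated demand series (stand-in for log per capita electricity use).
n = 200
x = np.cumsum(rng.normal(size=n))                # random walk: I(1)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=n)

# Step 1 (Engle-Granger): long-run relation and its residual, the
# error-correction term (ECT).
a, b_lr = ols(y, x)
ect = y - a - b_lr * x

# Step 2: short-run dynamics with the lagged error-correction term.
dy, dx = np.diff(y), np.diff(x)
_, beta_dx, alpha = ols(dy, dx, ect[:-1])
```

A negative adjustment coefficient `alpha` is the hallmark of error correction: deviations from the long-run demand relation are pulled back over time.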

  19. A Model of Entrepreneurial Capability Based on a Holistic Review of the Literature from Three Academic Domains

    Science.gov (United States)

    Lewis, Hilary

    2011-01-01

    While there has been a noted variation in the "species" of entrepreneur so that no single list of traits, characteristics or attributes is definitive, it is posited that to be an entrepreneur a certain amount of entrepreneurial capability is required. "Entrepreneurial capability" is a concept developed to place some form of…

  20. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when the information was found to be generally duplicative of other metrics. While equal weights are applied here, the weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
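A weighted rank tally of this kind can be sketched compactly: compute each metric per model, rank the models per metric, and sum the (weighted) ranks. The sketch below uses equal weights and only a small subset of the listed metrics; the forecasts are hypothetical:

```python
import numpy as np

def rank_models(obs, forecasts, weights=None):
    """Rank competing models by a weighted tally of per-metric ranks.

    A lower summed rank is better. Metrics used here: mean absolute
    error, bias magnitude, RMSE, and (1 - Pearson correlation)."""
    obs = np.asarray(obs, float)
    names = list(forecasts)
    metrics = []
    for name in names:
        f = np.asarray(forecasts[name], float)
        err = f - obs
        r = np.corrcoef(f, obs)[0, 1]
        metrics.append([np.mean(np.abs(err)), abs(err.mean()),
                        np.sqrt(np.mean(err ** 2)), 1.0 - r])
    metrics = np.array(metrics)                       # models x metrics
    ranks = metrics.argsort(axis=0).argsort(axis=0) + 1
    w = np.ones(metrics.shape[1]) if weights is None else np.asarray(weights)
    tally = (ranks * w).sum(axis=1)
    order = tally.argsort()
    return [names[i] for i in order], dict(zip(names, tally))

# Hypothetical observations and three competing forecasts.
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
fcsts = {"good": [1.1, 2.0, 2.9, 4.2, 5.0],
         "biased": [2.0, 3.0, 4.0, 5.0, 6.0],
         "noisy": [0.0, 3.5, 2.0, 5.5, 4.0]}
best, tallies = rank_models(obs, fcsts)
```

Note how the tally rewards balanced skill: the "biased" model wins the correlation metric outright but still ranks behind the "good" model overall.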

  1. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end-result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 hours (range 3.5 to 8): 84.5 minutes (range 62 to 110) for the gastric dissection, 56 minutes (range 28 to 80) for the gastric suturing, and 170.6 minutes (range 70 to 200) for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  2. Mapping the Complexities of Online Dialogue: An Analytical Modeling Technique

    Directory of Open Access Journals (Sweden)

    Robert Newell

    2014-03-01

    Full Text Available The e-Dialogue platform was developed in 2001 to explore the potential of using the Internet for engaging diverse groups of people and multiple perspectives in substantive dialogue on sustainability. The system is online, text-based, and serves as a transdisciplinary space for bringing together researchers, practitioners, policy-makers and community leaders. The Newell-Dale Conversation Modeling Technique (NDCMT was designed for in-depth analysis of e-Dialogue conversations and uses empirical methodology to minimize observer bias during analysis of a conversation transcript. NDCMT elucidates emergent ideas, identifies connections between ideas and themes, and provides a coherent synthesis and deeper understanding of the underlying patterns of online conversations. Continual application and improvement of NDCMT can lead to powerful methodologies for empirically analyzing digital discourse and better capture of innovations produced through such discourse. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs140221

  3. Vector machine techniques for modeling of seismic liquefaction data

    Directory of Open Access Journals (Sweden)

    Pijush Samui

    2014-06-01

Full Text Available This article employs three soft computing techniques, Support Vector Machine (SVM), Least Square Support Vector Machine (LSSVM), and Relevance Vector Machine (RVM), for prediction of liquefaction susceptibility of soil. SVM and LSSVM are based on the structural risk minimization (SRM) principle, which seeks to minimize an upper bound of the generalization error consisting of the sum of the training error and a confidence interval. RVM is a sparse Bayesian kernel machine. SVM, LSSVM and RVM have been used as classification tools. The developed SVM, LSSVM and RVM give equations for prediction of liquefaction susceptibility of soil. A comparative study has been carried out between the developed SVM, LSSVM and RVM models. The results from this article indicate that the developed SVM gives the best performance for prediction of liquefaction susceptibility of soil.
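Of the three techniques, LSSVM is the simplest to sketch from scratch, because its training reduces to a single linear solve rather than a quadratic program. A minimal illustration on made-up features (not the article's models or data; the feature names and values are hypothetical):

```python
import numpy as np

def rbf(A, B, g=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

def lssvm_fit(X, y, gamma=10.0, g=1.0):
    """LSSVM classifier: training is one linear solve for (bias, alphas)."""
    Omega = rbf(X, X, g) * np.outer(y, y)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    return sol[0], sol[1:]                       # bias b, dual weights alpha

def lssvm_predict(X, y, b, alpha, Xnew, g=1.0):
    return np.sign(rbf(Xnew, X, g) @ (alpha * y) + b)

# Hypothetical features: [cyclic stress ratio, corrected SPT blow count];
# +1 = liquefied, -1 = non-liquefied (illustrative values only).
X = np.array([[0.30, 8.0], [0.35, 6.0], [0.28, 10.0], [0.32, 7.0],
              [0.10, 25.0], [0.12, 30.0], [0.08, 28.0], [0.15, 22.0]])
y = np.array([1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0])
Xs = (X - X.mean(0)) / X.std(0)                  # standardize the features
b, alpha = lssvm_fit(Xs, y)
pred = lssvm_predict(Xs, y, b, alpha, Xs)
```

Standardizing the features matters here: the two inputs differ in scale by two orders of magnitude, and an RBF kernel on raw values would be dominated by one of them.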

  4. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore the challenge for any company is to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a suitable dynamical-system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
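The MPC idea for dynamic pricing can be sketched in a few lines: given a demand model, optimize a finite-horizon price sequence for predicted revenue, apply only the first price, observe the new demand, and re-optimize. The linear demand dynamics and all parameter values below are hypothetical, and the optimization is a brute-force grid search rather than the paper's formulation:

```python
import numpy as np
from itertools import product

# Hypothetical linear demand model: demand relaxes toward a base level
# and is depressed by price:  d[t+1] = a*d[t] + (1-a)*d_base - b*p[t]
a, d_base, b = 0.6, 100.0, 2.0
price_grid = np.linspace(5.0, 25.0, 5)        # admissible price levels
H = 3                                         # prediction horizon

def step(d, p):
    return a * d + (1 - a) * d_base - b * p

def mpc_price(d):
    """Brute-force MPC: return the first price of the horizon-H price
    sequence that maximizes predicted revenue sum(p * d)."""
    best_rev, best_p0 = -np.inf, price_grid[0]
    for seq in product(price_grid, repeat=H):
        dd, rev = d, 0.0
        for p in seq:
            rev += p * dd
            dd = step(dd, p)
        if rev > best_rev:
            best_rev, best_p0 = rev, seq[0]
    return best_p0

# Closed loop: apply the first price, observe new demand, re-optimize.
d = 80.0
prices, demands = [], []
for t in range(6):
    p = mpc_price(d)
    prices.append(p)
    demands.append(d)
    d = step(d, p)
```

The receding-horizon structure is what makes this a control-system view of pricing: the price is a feedback of the current demand state, not a fixed schedule.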

  5. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis.

  6. Development of a New Analog Test System Capable of Modeling Tectonic Deformation Incorporating the Effects of Pore Fluid Pressure

    Science.gov (United States)

    Zhang, M.; Nakajima, H.; Takeda, M.; Aung, T. T.

    2005-12-01

Understanding and predicting the tectonic deformation within geologic strata has been a very important research subject in many fields, such as structural geology and petroleum geology. In recent years, such research has also become a fundamental necessity for the assessment of active fault migration, site selection for geological disposal of radioactive nuclear waste, and exploration for methane hydrate. Although analog modeling techniques have played an important role in the elucidation of tectonic deformation mechanisms, traditional approaches have typically used dry materials and ignored the effects of pore fluid pressure. In order for analog models to properly depict the tectonic deformation of the targeted large-prototype system within a small laboratory-scale configuration, physical properties of the models, including geometry, force, and time, must be correctly scaled. Model materials representing brittle rock behavior require an internal friction identical to that of the prototype rock and virtually zero cohesion. Dry granular materials such as sand, glass beads, or steel beads have therefore been preferred, owing also to their availability and ease of handling. Modeling protocols for dry granular materials have been well established, but such model tests cannot account for pore fluid effects. Although the concept of effective stress has long been recognized and the role of pore-fluid pressure in tectonic deformation processes is evident, there have been few analog model studies that consider the effects of pore fluid movement. Some new applications require a thorough understanding of the coupled deformation and fluid flow processes within the strata. Taking the field of waste management as an example, deep geological disposal of radioactive waste has been thought to be an appropriate methodology for the safe isolation of the wastes from the human environment until the toxicity of the wastes decays to non-hazardous levels. For the

  7. Capabilities for innovation

    DEFF Research Database (Denmark)

    Nielsen, Peter; Nielsen, René Nesgaard; Bamberger, Simon Grandjean

    2012-01-01

    and in particular their ability to develop firm-specific innovative capabilities through employee participation and creation of innovative workplaces. In this article, we argue that national institutional conditions can play an enhancing or hampering role in this. Especially the norms and values governing relations...... on some of the important institutional conditions in Danish firms derived from the Nordic model, such as the formal and informal relations of cooperation between employers and employees in firms and their function in building capabilities for innovation. The foundation of the empirical analysis...... is a survey that collected information from 601 firms belonging to the private urban sector in Denmark. The survey was carried out in late 2010. Keywords: dynamic capabilities/innovation/globalization/employee/employer cooperation/Nordic model Acknowledgment: The GOPA study was financed by grant 20080053113...

  8. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predicting the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter \phi(\vec{x}), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin by presenting the physical background of the phase field method and give a detailed thermodynamic derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation, and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling, such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations whose solutions require time-consuming large-scale computations, which often limits the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
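
    The diffuse-interface idea above can be illustrated with a minimal one-dimensional Allen-Cahn (non-conserved order parameter) sketch; the explicit finite-difference scheme, periodic grid, and parameter values below are illustrative assumptions, not taken from the review.

```python
import numpy as np

# Minimal 1D Allen-Cahn sketch: phi = -1 in one phase, +1 in the other,
# separated by a diffuse interface of width of order sqrt(eps).
# dphi/dt = eps * d2phi/dx2 + phi - phi^3  (non-conserved order parameter)

def evolve_allen_cahn(phi, eps=1e-3, dx=1e-2, dt=1e-5, steps=2000):
    phi = phi.copy()
    for _ in range(steps):
        # second-difference Laplacian with periodic boundaries
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        phi += dt * (eps * lap + phi - phi**3)
    return phi

x = np.linspace(0.0, 1.0, 101)
phi0 = np.tanh((x - 0.5) / 0.1)   # initial diffuse interface at x = 0.5
phi = evolve_allen_cahn(phi0)
```

    The order parameter relaxes toward the bulk values of -1 and +1 away from the interface while the interface itself keeps a smooth, finite width, which is the defining feature of the diffuse-interface description.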

  9. Implementation and validation of the extended Hill-type muscle model with robust routing capabilities in LS-DYNA for active human body models.

    Science.gov (United States)

    Kleinbach, Christian; Martynenko, Oleksandr; Promies, Janik; Haeufle, Daniel F B; Fehr, Jörg; Schmitt, Syn

    2017-09-02

    In state-of-the-art finite element active human body models (AHBMs) for car crash analysis in the LS-DYNA software, the material named *MAT_MUSCLE (*MAT_156) is used for active muscle modeling. It has three elements in a parallel configuration, which has several major drawbacks: a restricted approximation of the physical reality, complicated parameterization, and the absence of integrated activation dynamics. This study presents the implementation of an extended four-element Hill-type muscle model with serial damping and an eccentric force-velocity relation, including [Formula: see text] dependent activation dynamics and an internal method for physiological muscle routing. The proposed model was implemented into the general-purpose finite element (FE) simulation software LS-DYNA as a user material for truss elements. This material model is verified and validated with three different sets of mammalian experimental data taken from the literature. It is compared to the *MAT_MUSCLE (*MAT_156) Hill-type muscle model already existing in LS-DYNA, which is currently used in finite element human body models (HBMs). An application example with an arm model extracted from the FE ViVA OpenHBM is given, taking physiological muscle paths into account. The simulation results show better material model accuracy, calculation robustness, and improved muscle routing capability compared to *MAT_156. The FORTRAN source code for the user material subroutine dyn21.f and the muscle parameters for all simulations conducted in the study are given at https://zenodo.org/record/826209 under an open source license. This enables a quick application of the proposed material model in LS-DYNA, especially in AHBMs for applications in automotive safety.

  10. Characterizing the Trade Space Between Capability and Complexity in Next Generation Cloud and Precipitation Observing Systems Using Markov Chain Monte Carlo Techniques

    Science.gov (United States)

    Xu, Z.; Mace, G. G.; Posselt, D. J.

    2017-12-01

    As we begin to contemplate the next generation of atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to that observing system. MCMC algorithms produce a more complete posterior solution space and allow for an objective examination of the information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology EXperiment (IPHEX), the observing systems considered in this study include W- and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, and 31 and 94 GHz brightness temperatures, as well as visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
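
    The role MCMC plays here can be sketched with a toy Metropolis sampler: retrieve one parameter from a synthetic measurement and watch the posterior width respond to the assumed measurement uncertainty. The linear forward model and all numerical values are stand-in assumptions, not the study's radar and radiometer forward models.

```python
import numpy as np

# Toy Metropolis-Hastings retrieval: infer parameter x from a synthetic
# measurement y = F(x) + noise, under a flat prior. Tightening the
# measurement uncertainty sigma narrows the posterior, which is the kind
# of capability-vs-uncertainty response the study explores.

rng = np.random.default_rng(0)

def forward(x):
    return 2.0 * x + 1.0  # stand-in forward model (assumption)

def log_post(x, y_obs, sigma):
    return -0.5 * ((y_obs - forward(x)) / sigma) ** 2

def metropolis(y_obs, sigma, n=20000, step=0.5, x0=0.0):
    chain = np.empty(n)
    x, lp = x0, log_post(x0, y_obs, sigma)
    for i in range(n):
        xp = x + step * rng.normal()          # random-walk proposal
        lpp = log_post(xp, y_obs, sigma)
        if np.log(rng.random()) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        chain[i] = x
    return chain[n // 2:]                     # discard burn-in

truth = 1.5
precise = metropolis(forward(truth) + 0.1 * rng.normal(), sigma=0.1)
coarse = metropolis(forward(truth) + 1.0 * rng.normal(), sigma=1.0)
```

    Comparing the spread of `precise` and `coarse` shows how posterior uncertainty in the retrieved parameter responds directly to the quality of the candidate observing system.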

  11. THE TECHNIQUE OF ANALYSIS OF SOFTWARE OF ON-BOARD COMPUTERS OF AIR VESSEL TO ABSENCE OF UNDECLARED CAPABILITIES BY SIGNATURE-HEURISTIC WAY

    Directory of Open Access Journals (Sweden)

    Viktor Ivanovich Petrov

    2017-01-01

    Full Text Available The article considers the issues of data safety in civil aviation aircraft onboard computers. In information security, undeclared capabilities are hardware or software possibilities that are not mentioned in the documentation. Documentation and test content requirements are imposed during software certification. Documentation requirements cover the composition and content of the controlled documents (the specification, the program description, and the source code). Test requirements include: static analysis of program codes (including monitoring the compliance of the sources with their load modules); and dynamic analysis of source code (including monitoring the execution routes). Currently, there are no complex measures for checking onboard computer software. There are no rules and regulations that allow controlling the software of foreign-produced aircraft, and actually obtaining the software is difficult. Consequently, the author suggests developing the basics of aviation rules and regulations that allow analyzing the programs of CA aircraft onboard computers. If no software source codes are available, two approaches to code analysis are used: structural static and dynamic analysis of the source code; and signature-heuristic analysis of potentially dangerous operations. Static analysis determines the behavior of the program by reading the program code (without running the program), which is represented in assembler language as a disassembly listing. Program tracing is performed by dynamic analysis. The article considers the ability to detect undeclared capabilities in aircraft software using an interactive disassembler.

  12. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory]; Pelak, Robert A. [Los Alamos National Laboratory]

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, {theta}, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for {theta} may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of {theta} and, in particular, how there will always be a region around {theta} = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that {theta} is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
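
    With a conjugate Beta prior (a uniform Beta(1,1) is assumed here for illustration; the paper may use a different prior), the posterior for {theta} and the confidence that an improvement has been observed reduce to a few lines:

```python
from scipy.stats import beta

# Out of n paired experiments, the new code beats the old in k of them.
# With a Beta(1,1) prior, the posterior for theta (probability the new
# code wins a randomly chosen experiment) is Beta(k+1, n-k+1), and the
# confidence that an improvement occurred is P(theta > 1/2).

def improvement_confidence(k, n):
    return 1.0 - beta.cdf(0.5, k + 1, n - k + 1)

c_strong = improvement_confidence(18, 20)  # lopsided wins: high confidence
c_even = improvement_confidence(10, 20)    # even split: confidence stays ~0.5
```

    An even split leaves the confidence pinned near 0.5 regardless of sample size, which is exactly the hard-to-resolve region around {theta} = 1/2 the paper describes.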

  13. Designing and Validating a Model for Measuring Sustainability of Overall Innovation Capability of Small and Medium-Sized Enterprises

    Directory of Open Access Journals (Sweden)

    Mohd Nizam Ab Rahman

    2015-01-01

    Full Text Available The business environment is currently characterized by intensified competition at both the national and firm levels. Many studies have shown that innovation positively affects firms by enhancing their competitiveness. Innovation is a dynamic process that requires continuous, evolving, and well-mastered management. Evaluating the sustainability of the overall innovation capability of a business is a major means of determining how well the firm effectively and efficiently manages its innovation process. A psychometrically valid scale for evaluating the sustainability of the overall innovation capability of a firm is still lacking in the current innovation literature. Thus, this study developed a reliable and valid scale for measuring the sustainability of the overall innovation capability construct. The unidimensionality, reliability, and several validity components of the developed scale were tested using data collected from 175 small and medium-sized enterprises in Iran. A series of systematic statistical analyses were performed. Results of the reliability measures, exploratory and confirmatory factor analyses, and several components of the validity tests strongly supported an eight-dimensional (8D) scale for measuring the sustainability of the overall innovation capability construct. The dimensions of the scale were strategic management, supportive culture and structure, resource allocation, communication and networking, knowledge and technology management, idea management, project development, and commercialization capabilities.

  14. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. 
We employ this simple heat model to illustrate verification

  15. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  16. A discussion of calibration techniques for evaluating binary and categorical predictive models.

    Science.gov (United States)

    Fenlon, Caroline; O'Grady, Luke; Doherty, Michael L; Dunnion, John

    2018-01-01

    Modelling of binary and categorical events is a commonly used tool to simulate epidemiological processes in veterinary research. Logistic and multinomial regression, naïve Bayes, decision trees and support vector machines are popular data mining techniques used to predict the probabilities of events with two or more outcomes. Thorough evaluation of a predictive model is important to validate its ability for use in decision-support or broader simulation modelling. Measures of discrimination, such as sensitivity, specificity and receiver operating characteristics, are commonly used to evaluate how well the model can distinguish between the possible outcomes. However, these discrimination tests cannot confirm that the predicted probabilities are accurate and without bias. This paper describes a range of calibration tests, which typically measure the accuracy of predicted probabilities by comparing them to mean event occurrence rates within groups of similar test records. These include overall goodness-of-fit statistics in the form of the Hosmer-Lemeshow and Brier tests. Visual assessment of prediction accuracy is carried out using plots of calibration and deviance (the difference between the outcome and its predicted probability). The slope and intercept of the calibration plot are compared to the perfect diagonal using the unreliability test. Mean absolute calibration error provides an estimate of the level of predictive error. This paper uses sample predictions from a binary logistic regression model to illustrate the use of calibration techniques. Code is provided to perform the tests in the R statistical programming language. The benefits and disadvantages of each test are described. Discrimination tests are useful for establishing a model's diagnostic abilities, but may not suitably assess the model's usefulness for other predictive applications, such as stochastic simulation. Calibration tests may be more informative than discrimination tests for evaluating
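
    As an illustration of the calibration measures described (in Python rather than the paper's R code), the following sketch computes the Brier score and a Hosmer-Lemeshow-style binned comparison of mean predicted probability against observed event rate; the synthetic data are an assumption for demonstration only.

```python
import numpy as np

# Brier score: mean squared difference between predicted probabilities
# and binary outcomes (lower is better, 0 is perfect).
def brier_score(p, y):
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.mean((p - y) ** 2)

# Hosmer-Lemeshow-style table: mean predicted probability vs observed
# event rate within probability bins; close agreement = good calibration.
def calibration_table(p, y, n_bins=5):
    p, y = np.asarray(p, float), np.asarray(y, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (p >= lo) & (p <= hi) if hi == 1.0 else (p >= lo) & (p < hi)
        if m.any():
            rows.append((p[m].mean(), y[m].mean(), int(m.sum())))
    return rows

# Synthetic, perfectly calibrated predictions: events occur at rate p.
rng = np.random.default_rng(1)
p = rng.uniform(0.0, 1.0, 10000)
y = (rng.uniform(0.0, 1.0, 10000) < p).astype(int)
```

    For these perfectly calibrated predictions the binned observed rates track the predicted means, and the Brier score sits near its irreducible value; a miscalibrated model would show systematic gaps in the table even if its discrimination were good.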

  17. An ontology-based well-founded proposal for modeling resources and capabilities in ArchiMate

    NARCIS (Netherlands)

    Azevedo, Carlos L.B.; Iacob, Maria Eugenia; Andrade Almeida, João; van Sinderen, Marten J.; Ferreira Pires, Luis; Guizzardi, G.; Gasevic, D; Hatala, M.; Motahari Nezhad, H.R.; Reichert, M.U.

    The importance of capabilities and resources for portfolio management and business strategy has been recognized in the management literature and in a recent proposal to extend ArchiMate, which includes these concepts in order to improve ArchiMate's coverage of portfolio management. This paper

  18. Widening the Educational Capabilities of Socio-Economically Disadvantaged Students through a Model of Social and Cultural Capital Development

    Science.gov (United States)

    Hannon, Cliona; Faas, Daniel; O'Sullivan, Katriona

    2017-01-01

    Widening participation programmes aim to increase the progression of students from low socio-economic status (SES) groups to higher education. This research proposes that the human capabilities approach is a good justice-based framework within which to consider the social and cultural capital processes that impact upon the educational capabilities…

  19. A Critical Review of Model-Based Economic Studies of Depression: Modelling Techniques, Model Structure and Data Sources

    OpenAIRE

    Hossein Haji Ali Afzali; Jonathan Karnon; Jodi Gray

    2012-01-01

    Depression is the most common mental health disorder and is recognized as a chronic disease characterized by multiple acute episodes/relapses. Although modelling techniques play an increasingly important role in the economic evaluation of depression interventions, comparatively little attention has been paid to issues around modelling studies with a focus on potential biases. This, however, is important as different modelling approaches, variations in model structure and input parameters may ...

  20. Improving high-altitude emp modeling capabilities by using a non-equilibrium electron swarm model to monitor conduction electron evolution

    Science.gov (United States)

    Pusateri, Elise Noel

    abruptly. The objective of the PhD research is to mitigate this effect by integrating a conduction electron model into CHAP-LA which can calculate the conduction current based on a non-equilibrium electron distribution. We propose to use an electron swarm model to monitor the time evolution of conduction electrons in the EMP environment which is characterized by electric field and pressure. Swarm theory uses various collision frequencies and reaction rates to study how the electron distribution and the resultant transport coefficients change with time, ultimately reaching an equilibrium distribution. Validation of the swarm model we develop is a necessary step for completion of the thesis work. After validation, the swarm model is integrated in the air chemistry model CHAP-LA employs for conduction electron simulations. We test high altitude EMP simulations with the swarm model option in the air chemistry model to show improvements in the computational capability of CHAP-LA. A swarm model has been developed that is based on a previous swarm model developed by Higgins, Longmire and O'Dell 1973, hereinafter HLO. The code used for the swarm model calculation solves a system of coupled differential equations for electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, including the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are recalculated and compared to the previously reported empirical results given by HLO. These swarm parameters are found using BOLSIG+, a two term Boltzmann solver developed by Hagelaar and Pitchford 2005. BOLSIG+ utilizes updated electron scattering cross sections that are defined over an expanded energy range found in the atomic and molecular cross section database published by Phelps in the Phelps Database 2014 on the LXcat website created by Pancheshnyi et al. 2012. 
The swarm model is also updated from the original HLO model by including

  1. River suspended sediment modelling using the CART model: A comparative study of machine learning techniques.

    Science.gov (United States)

    Choubin, Bahram; Darabi, Hamid; Rahmati, Omid; Sajedi-Hosseini, Farzaneh; Kløve, Bjørn

    2018-02-15

    Suspended sediment load (SSL) modelling is an important issue in integrated environmental and water resources management, as sediment affects water quality and aquatic habitats. Although classification and regression tree (CART) algorithms have been applied successfully to ecological and geomorphological modelling, their applicability to SSL estimation in rivers has not yet been investigated. In this study, we evaluated use of a CART model to estimate SSL based on hydro-meteorological data. We also compared the accuracy of the CART model with that of the four most commonly used models for time series modelling of SSL, i.e. adaptive neuro-fuzzy inference system (ANFIS), multi-layer perceptron (MLP) neural network and two kernels of support vector machines (RBF-SVM and P-SVM). The models were calibrated using river discharge, stage, rainfall and monthly SSL data for the Kareh-Sang River gauging station in the Haraz watershed in northern Iran, where sediment transport is a considerable issue. In addition, different combinations of input data with various time lags were explored to estimate SSL. The best input combination was identified through trial and error, percent bias (PBIAS), Taylor diagrams and violin plots for each model. For evaluating the capability of the models, different statistics such as Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE) and percent bias (PBIAS) were used. The results showed that the CART model performed best in predicting SSL (NSE=0.77, KGE=0.8, PBIAS<±15), followed by RBF-SVM (NSE=0.68, KGE=0.72, PBIAS<±15). Thus the CART model can be a helpful tool in basins where hydro-meteorological data are readily available. Copyright © 2017 Elsevier B.V. All rights reserved.
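
    The skill scores used in this comparison are easy to state explicitly. The sketch below implements NSE, KGE, and PBIAS from their standard definitions (one common sign convention is assumed for PBIAS); it is illustrative, not the authors' code.

```python
import numpy as np

# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
# with r = correlation, alpha = std ratio, beta = mean ratio.
def kge(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

# PBIAS = 100 * sum(obs - sim) / sum(obs)  (sign convention varies by author)
def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

    A perfect simulation gives NSE = KGE = 1 and PBIAS = 0; the paper's thresholds (e.g. PBIAS within ±15) are then direct checks on these quantities.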

  2. Adaptive Atmospheric Modeling Key Techniques in Grid Generation, Data Structures, and Numerical Operations with Applications

    CERN Document Server

    Behrens, Jörn

    2006-01-01

    Gives an overview and guidance in the development of adaptive techniques for atmospheric modeling. This book covers paradigms of adaptive techniques, such as error estimation and adaptation criteria. Considering applications, it demonstrates several techniques for discretizing relevant conservation laws from atmospheric modeling.

  3. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include the lack of stability of reduced order models (for the two-sided weighting case) and the absence of frequency response error bounds. A new frequency weighted technique for balanced model reduction of discrete time systems is proposed. The proposed technique guarantees stable reduced order models even when two-sided weightings are present. An efficient technique for computing the frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
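
    For context, the unweighted square-root balanced truncation that such frequency-weighted methods extend can be sketched as follows for a discrete-time system; the example matrices are arbitrary assumptions, and the frequency weights of the proposed technique are not included.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

# Square-root balanced truncation for a stable discrete-time system
# (A, B, C). The Gramians solve the discrete Lyapunov equations
#   A P A^T - P + B B^T = 0   and   A^T Q A - Q + C^T C = 0,
# and the states with the smallest Hankel singular values are discarded.

def balanced_truncation(A, B, C, r):
    P = solve_discrete_lyapunov(A, B @ B.T)      # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)    # observability Gramian
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                    # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt.T[:, :r] @ S                     # balancing transformation
    Ti = S @ U[:, :r].T @ Lo.T                   # its left inverse
    return Ti @ A @ T, Ti @ B, C @ T, s

A = np.array([[0.9, 0.1, 0.0], [0.0, 0.5, 0.1], [0.0, 0.0, 0.1]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

    For stable systems this unweighted form preserves stability and carries the classical error bound of twice the sum of the truncated Hankel singular values; the article's contribution is retaining such guarantees when input and output frequency weights are added.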

  4. Constitutional Model and Rationality in Judicial Decisions from Proportionality Technique

    OpenAIRE

    Feio, Thiago Alves

    2016-01-01

    In current legal systems, the content of constitutions consists of values that serve to limit state action. The department in charge of the control of this system is usually the judiciary. This choice leads to two major problems: the tension between democracy and constitutionalism, and the subjectivity of that control. One of the solutions to subjectivity is the weighting of principles through the proportionality technique, which aims to produce rational decisions. This technique doesn’t elimi...

  5. Capability of the "Ball-Berry" model for predicting stomatal conductance and water use efficiency of potato leaves under different irrigation regimes

    DEFF Research Database (Denmark)

    Liu, Fulai; Andersen, Mathias N.; Jensen, Christian Richardt

    2009-01-01

    The capability of the ‘Ball-Berry' model (BB-model) in predicting stomatal conductance (gs) and water use efficiency (WUE) of potato (Solanum tuberosum L.) leaves under different irrigation regimes was tested using data from two independent pot experiments in 2004 and 2007. Data obtained from 2004...... of soil water deficits on gs, a simple equation modifying the slope (m) based on the mean soil water potential (Ψs) in the soil columns was incorporated into the original BB-model. Compared with the original BB-model, the modified BB-model showed better predictability for both gs and WUE of potato leaves....... The simulation results showed that the modified BB-model better simulated gs for the NI and DI treatments than the original BB-model, whilst the two models performed equally well for predicting gs of the FI and PRD treatments. Although both models had poor predictability for WUE (0.47 
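
    The structure of the BB-model and the soil-water modification of its slope can be sketched as follows; the exponential form of the modifier and every parameter value here are illustrative assumptions, not the calibrated values from these experiments.

```python
import math

# Ball-Berry model: gs = m * An * hs / Cs + g0, where An is net
# assimilation, hs is relative humidity at the leaf surface (0-1),
# Cs is CO2 concentration at the leaf surface, m the slope, g0 the
# residual conductance. All parameter values below are assumptions.

def ball_berry(An, hs, Cs, m=9.0, g0=0.01):
    return m * An * hs / Cs + g0

# Illustrative slope modifier: reduce m exponentially once the mean
# soil water potential psi_s (MPa) drops below a reference psi_ref.
def slope_modifier(psi_s, psi_ref=-0.5, k=2.0):
    return math.exp(-k * max(0.0, psi_ref - psi_s))

def ball_berry_modified(An, hs, Cs, psi_s, m=9.0, g0=0.01):
    return ball_berry(An, hs, Cs, m=m * slope_modifier(psi_s), g0=g0)

gs_wet = ball_berry_modified(15.0, 0.7, 380.0, psi_s=-0.1)  # well-watered
gs_dry = ball_berry_modified(15.0, 0.7, 380.0, psi_s=-1.5)  # soil drying
```

    Scaling the slope with soil water status reproduces the qualitative behavior described above: predicted gs is unchanged under full irrigation but falls under soil water deficit.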

  6. State-of-the-art Tools and Techniques for Quantitative Modeling and Analysis of Embedded Systems

    DEFF Research Database (Denmark)

    Bozga, Marius; David, Alexandre; Hartmanns, Arnd

    2012-01-01

    This paper surveys well-established/recent tools and techniques developed for the design of rigorous embedded systems. We will first survey UPPAAL and MODEST, two tools capable of dealing with both timed and stochastic aspects. Then, we will overview the BIP framework for modular design...

  7. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled through the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different function units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  8. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models which are applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs is discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box, and species-conservation-of-mass models is given. Representative models are discussed with attention given to the assumptions, input data requirements, advantages, disadvantages, and applicability of each.
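
    Of the model classes reviewed, the Gaussian plume model is compact enough to state directly. The sketch below gives the ground-reflected form for a continuous elevated point source; treating the dispersion parameters sigma_y and sigma_z as fixed inputs is a simplifying assumption (real models derive them from stability class and downwind distance).

```python
import math

# Gaussian plume with total reflection at the ground:
# C(y, z) = Q / (2 pi u sy sz) * exp(-y^2 / 2 sy^2)
#           * [exp(-(z-H)^2 / 2 sz^2) + exp(-(z+H)^2 / 2 sz^2)]

def gaussian_plume(Q, u, y, z, H, sig_y, sig_z):
    """Q: emission rate (g/s), u: wind speed (m/s), H: effective stack
    height (m), y/z: crosswind offset and height (m); returns g/m^3."""
    lateral = math.exp(-0.5 * (y / sig_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - H) / sig_z) ** 2) +
                math.exp(-0.5 * ((z + H) / sig_z) ** 2))
    return Q / (2.0 * math.pi * u * sig_y * sig_z) * lateral * vertical

# Ground-level concentration is highest on the plume centerline and
# falls off with crosswind distance.
c_center = gaussian_plume(100.0, 5.0, y=0.0, z=0.0, H=50.0, sig_y=30.0, sig_z=20.0)
c_offset = gaussian_plume(100.0, 5.0, y=60.0, z=0.0, H=50.0, sig_y=30.0, sig_z=20.0)
```

    The image-source term exp(-(z+H)^2 / 2 sz^2) is what models the reflection of the plume at the ground surface, one of the standard assumptions examined in such reviews.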

  9. Modeling technique for the process of liquid film disintegration

    Science.gov (United States)

    Modorskii, V. Ya.; Sipatov, A. M.; Babushkina, A. V.; Kolodyazhny, D. Yu.; Nagorny, V. S.

    2016-10-01

    In the course of numerical experiments, a method for calculating two-phase flows was developed by solving a model problem. The results of the study were compared between two models that describe the processes of two-phase flow and the breakup of a liquid jet into droplets: the VoF model and the QMOM model, the two mathematical models considered for simulating the spray.

  10. Subsurface flow and transport of organic chemicals: an assessment of current modeling capability and priority directions for future research (1987-1995)

    Energy Technology Data Exchange (ETDEWEB)

    Streile, G.P.; Simmons, C.S.

    1986-09-01

    Theoretical and computer modeling capability for assessing the subsurface movement and fate of organic contaminants in groundwater was examined. This study is particularly concerned with energy-related organic compounds that could enter a subsurface environment and move as components of a liquid phase separate from groundwater. The migration of organic chemicals that exist in an aqueous dissolved state is certainly a part of this more general scenario. However, modeling of the transport of chemicals in aqueous solution has already been the subject of several reviews. Hence, this study emphasizes the multiphase scenario. This study was initiated to focus on the important physicochemical processes that control the behavior of organic substances in groundwater systems, to evaluate the theory describing these processes, and to search for and evaluate computer codes that implement models that correctly conceptualize the problem situation. This study is not a code inventory, and no effort was made to identify every available code capable of representing a particular process.

  11. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  12. Improving Performance of LVRT Capability in Single-phase Grid-tied PV Inverters by a Model Predictive Controller

    DEFF Research Database (Denmark)

    Zangeneh Bighash, Esmaeil; Sadeghzadeh, Seyed Mohammad; Ebrahimzadeh, Esmaeil

    2018-01-01

    New interconnection standards for photovoltaic (PV) systems are going to be mandatory in some countries, such that the next generation of PV inverters should support a full range of operating modes, like a power plant, and also support Low-Voltage Ride-Through (LVRT) capability during voltage-sag faults. Since......, these methods have had uncertainties with respect to their contribution in LVRT mode. In PR controllers, a fast dynamic response can be obtained by tuning the gains of the PR controllers for a high bandwidth, but typically the phase margin is decreased. Therefore, the design of PR controllers needs a tradeoff between...

  13. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  14. Application of integrated modeling technique for data services ...

    African Journals Online (AJOL)

    This paper, therefore, describes the application of the integrated simulation technique for deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee specific QoS requirement. The simulation tool drastically cuts the simulation ...

  15. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tessellation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified

  16. (NHIS) using data mining technique as a statistical model

    African Journals Online (AJOL)

    kofi.mereku

    2014-05-23

    Scheme (NHIS) claims in the Awutu-Effutu-Senya District using data mining techniques, with a specific focus on .... transform them into a format that is friendly to data mining algorithms, such as .... many groups to access the data, facilitate updating the data, and improve the efficiency of checking the data for ...

  17. Use of System Dynamics Techniques in the Garrison Health Modelling Tool

    Science.gov (United States)

    2010-11-01

    Joint Health Command (JHC) tasked DSTO to develop techniques for modelling Defence health service delivery both in a Garrison environment in Australia ... Mark Burnett, Kerry Clifford and ... the Garrison Health Modelling Tool, a prototype software package designed to provide decision-support to JHC health officers and managers in a garrison

  18. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  20. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for off-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  1. The commissioning of the advanced radiographic capability laser system: experimental and modeling results at the main laser output

    Science.gov (United States)

    Di Nicola, J. M.; Yang, S. T.; Boley, C. D.; Crane, J. K.; Heebner, J. E.; Spinka, T. M.; Arnold, P.; Barty, C. P. J.; Bowers, M. W.; Budge, T. S.; Christensen, K.; Dawson, J. W.; Erbert, G.; Feigenbaum, E.; Guss, G.; Haefner, C.; Hermann, M. R.; Homoelle, D.; Jarboe, J. A.; Lawson, J. K.; Lowe-Webb, R.; McCandless, K.; McHale, B.; Pelz, L. J.; Pham, P. P.; Prantil, M. A.; Rehak, M. L.; Rever, M. A.; Rushford, M. C.; Sacks, R. A.; Shaw, M.; Smauley, D.; Smith, L. K.; Speck, R.; Tietbohl, G.; Wegner, P. J.; Widmayer, C.

    2015-02-01

    The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is a first-of-a-kind megajoule-class laser with 192 beams capable of delivering over 1.8 MJ and 500 TW of 351 nm light [1], [2]. It has been commissioned and operated since 2009 to support a wide range of missions including the study of inertial confinement fusion, high energy density physics, material science, and laboratory astrophysics. In order to advance our understanding, and to enable short-pulse multi-frame radiographic experiments of the dense cores of cold material, the generation of very hard x-rays above 50 keV is necessary. X-rays with such characteristics can be efficiently generated with high-intensity laser pulses above 10¹⁷ W/cm² [3]. The Advanced Radiographic Capability (ARC) [4], which is currently being commissioned on the NIF, will provide eight adjustable pulses of 1 ps to 50 ps with up to 1.7 kJ each to create x-ray point sources enabling dynamic, multi-frame x-ray backlighting. This paper provides an overview of the ARC system and reports on the laser performance tests conducted with a stretched pulse up to the main laser output, and their comparison with the results of our laser propagation codes.

  2. Numerical modelling of CO2 migration in saline reservoirs using geoelectric and seismic techniques - first results

    Science.gov (United States)

    Hagrey, S. A. Al; Strahser, M. H. P.; Rabbel, W.

    2009-04-01

    The research project "CO2 MoPa" (modelling and parameterisation of CO2 storage in deep saline formations for dimensions and risk analysis) was initiated in 2008 by partners from different disciplines (e.g. geology, hydrogeology, geochemistry, geophysics, geomechanics, hydraulic engineering and law). It deals with the parameterisation of virtual subsurface storage sites to characterise rock properties, with high pressure-temperature experiments to determine in situ hydro-petrophysical and mechanical parameters, and with the modelling of processes related to CCS in deep saline reservoirs. One objective is the estimation of the sensitivity and resolution of reflection seismic and geoelectrical time-lapse measurements in order to determine the underground distribution of CO2. Compared with seismic methods, electrical resistivity tomography (ERT) has lower resolution, but its permanent installation and continuous monitoring can make it an economical alternative or complement. Seismic and borehole ERT applications to quantify changes of intrinsic aquifer properties with time are justified by the velocity and resistivity decreases related to CO2 injection. Our numerical 2D/3D modelling reveals the capability of these techniques to map CO2 plumes and changes as a function of thickness, concentration, receiver/electrode configuration, aspect ratio, and modelling and inversion constraint parameters. Depending on these factors, some configurations are favoured due to their better spatial resolution and lower artefacts. Acknowledgements: This work has been carried out in the framework of the "CO2 MoPa" research project funded by the Federal German Ministry of Education and Research (BMBF) and a consortium of energy companies (E.ON Energy, EnBW AG, RWE Dea AG, Stadtwerke Kiel AG, Vattenfall Europe Technology Research GmbH and Wintershall Holding AG).

  3. Novel techniques give dozers powerful ripping capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Chironis, N.P.

    1986-03-01

    A significant breakthrough in bulldozer ripping technology has been achieved with the aid of a hydraulic auxiliary system. Developed by Caterpillar Tractor Co. for its track-type D9L tractor, the system imparts a series of impulses to the dozer's ripper shank to make it function somewhat like a jack hammer and thus enhance the dozer's ability to fracture rock. Such a system is expected to offer economic advantages over drilling and blasting, especially in hard partings between coal seams. The impact ripper is the latest in a series of auxiliary systems designed by various companies and organizations to enhance a dozer's ripping and pushing operations in difficult overburden. Other recent innovations include: a repetitive-explosive system, developed by Southwest Research Institute (SWRI), San Antonio, Tex., that releases high-pressure gases in the vicinity of the ripper tip (a similar explosive system has been applied successfully to help break up hard soil ahead of a dozer's blades); an oscillating ripper-shank system, developed by Kaelble-Gmeinder, Mosbach, West Germany, that is mounted ahead of the conventional ripper shanks to improve their penetrating ability; and an oscillating-mass system, designed and built by researchers at Mississippi State University, that induces a dozer's blade to vibrate during excavation to increase the work output of the dozer.

  4. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence techniques in the modeling of industrial processes.
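
As a minimal, hypothetical illustration of this "learning from data" idea (a single neuron rather than the industrial-process networks the record describes), the sketch below trains a perceptron to recover the AND relationship purely from examples:

```python
# Minimal perceptron sketch: a single step-activation neuron "learns" the
# AND relationship from data.  Illustrative only -- far simpler than the
# industrial process models discussed above.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single neuron on ((x1, x2), target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # the error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND relationship, presented purely as data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the neuron reproduces a relationship it was never explicitly programmed with, which is the property the abstract highlights for process modeling.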

  5. Models, Web-Based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance

    National Research Council Canada - National Science Library

    Hill, Raymond

    2001-01-01

    ... Laboratory, Logistics Research Division, Logistics Readiness Branch to propose a research agenda entitled, "Models, Web-based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance...

  6. A predictive model for a radioactive contamination in an urban environment and its performance capability to the EMRAS project

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Han, Moon Hee; Jeong, Hyo Joon; Kim, Eun Han

    2008-01-01

    A model, called METRO-K, has been developed for radiological dose assessment due to radioactive contamination in the Korean urban environment. The model has taken part in the Urban Remediation Working Group within the IAEA's EMRAS project, which provides an opportunity to compare the modeling approaches and predictive results of models that describe the behavior of radionuclides in an urban environment. The modeling approaches of METRO-K and the predictive results obtained as a part of the Working Group's activities are presented and discussed. The contribution of contaminated surfaces to absorbed dose rates showed a distinct difference depending on the location of the receptor. (author)

  7. User's instructions for the Guyton circulatory dynamics model using the Univac 1110 batch and demand processing (with graphic capabilities)

    Science.gov (United States)

    Archer, G. T.

    1974-01-01

    The model presents a systems analysis of human circulatory regulation based almost entirely on experimental data and the cumulative present knowledge of the many facets of the circulatory system. The model itself consists of eighteen major systems that enter into circulatory control. These systems are grouped into sixteen distinct subprograms that are melded together to form the total model. The model develops circulatory and fluid regulation in a simultaneous manner. Thus, the effects of hormonal and autonomic control, electrolyte regulation, and excretory dynamics are all important and are all included in the model.

  8. Modelling Data Mining Dynamic Code Attributes with Scheme Definition Technique

    OpenAIRE

    Sipayung, Evasaria M; Fiarni, Cut; Tanudjaja, Randy

    2014-01-01

    Data mining is a technique used in different disciplines to search for significant relationships among variables in large data sets. One of the important steps in data mining is data preparation. In this step, we need to transform complex data with more than one attribute into a representative format for the data mining algorithm. In this study, we concentrated on designing a proposed system to fetch attributes from complex data such as a product ID. Then the proposed system will determine the basic ...

  9. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition, with four additional chapters, presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. Application domains include contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  10. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of the PF coil voltages required to start up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped-parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included.
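
The eigen-expansion idea can be sketched on a toy lumped-parameter system. A circuit network with constant inductances and resistances reduces to dx/dt = A x, which is solved by expanding the state in the eigenvectors of A. The matrix and initial state below are invented for illustration and are not the record's coil/structure equations:

```python
import numpy as np

# Toy eigen-expansion sketch for a lumped-parameter circuit dx/dt = A x.
# The coupled system below is illustrative (eigenvalues -2 and -4).
A = np.array([[-3.0, 1.0],
              [1.0, -3.0]])
x0 = np.array([1.0, 0.0])       # initial loop currents

evals, V = np.linalg.eig(A)     # eigenvalues and eigenvector columns of A
c = np.linalg.solve(V, x0)      # coefficients of x0 in the eigenbasis

def x(t):
    """State at time t from the expansion x(t) = sum_k c_k e^(lambda_k t) v_k."""
    return (V * np.exp(evals * t)) @ c
```

Because every eigenvalue is negative, the expansion reproduces the initial state at t = 0 and decays toward zero, mirroring how eddy currents in passive structures die away.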

  11. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    In the architectural survey field, there has been the spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real cases scenario, especially speaking about Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan’s cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues like the impossibility of following the theoretical ideal methodology when survey such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  13. A Hybrid Multi-Criteria Decision Model for Technological Innovation Capability Assessment: Research on Thai Automotive Parts Firms

    Directory of Open Access Journals (Sweden)

    Sumrit Detcharat

    2013-01-01

    The efficient appraisal of the technological innovation capabilities (TICs) of enterprises is an important factor in enhancing competitiveness. This study aims to evaluate and rank TIC evaluation criteria in order to provide practical insight through systematic analysis, gathering qualified experts' opinions and combining them with three multi-criteria decision-making methods. Firstly, the Fuzzy Delphi method is used to screen TIC evaluation criteria from recently published research. Secondly, the Analytic Hierarchy Process is utilized to compute the relative importance weights. Lastly, the VIKOR method is used to rank the enterprises on the TIC evaluation criteria. An empirical study of Thai automotive parts firms illustrates the proposed methods. This study found that the interaction between criteria is essential and influences TICs; furthermore, the resulting ranking of TIC assessment is also a key management tool that can facilitate and offer a new mindset for management in other related industries.
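
The final VIKOR ranking step can be sketched in a few lines. The firms, scores, and weights below are hypothetical, standing in for the output of the Fuzzy Delphi screening and AHP weighting stages:

```python
# Minimal VIKOR ranking sketch (illustrative data, not the paper's firms).

def vikor(matrix, weights, v=0.5):
    """Rank alternatives by the VIKOR compromise index Q (lower is better).
    matrix[i][j] is the score of alternative i on benefit criterion j."""
    ncrit = len(weights)
    best = [max(row[j] for row in matrix) for j in range(ncrit)]
    worst = [min(row[j] for row in matrix) for j in range(ncrit)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(ncrit)]
        S.append(sum(terms))    # group utility (weighted total regret)
        R.append(max(terms))    # worst individual regret
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    Q = [v * (S[i] - s_best) / (s_worst - s_best)
         + (1 - v) * (R[i] - r_best) / (r_worst - r_best)
         for i in range(len(matrix))]
    return sorted(range(len(matrix)), key=lambda i: Q[i])

# Three hypothetical firms scored on three TIC criteria (higher is better),
# with AHP-style weights summing to 1.
scores = [[0.9, 0.6, 0.8],
          [0.5, 0.9, 0.4],
          [0.2, 0.3, 0.3]]
ranking = vikor(scores, weights=[0.5, 0.3, 0.2])
```

The parameter `v` balances group utility against individual regret; `v = 0.5` is the conventional compromise setting.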

  14. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization, and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  15. Rights, goals, and capabilities

    NARCIS (Netherlands)

    van Hees, M.V.B.P.M

    This article analyses the relationship between rights and capabilities in order to get a better grasp of the kind of consequentialism that the capability theory represents. Capability rights have been defined as rights that have a capability as their object (rights to capabilities). Such a

  16. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. The thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncature, the Michailesco aggregation method and Moore's truncature) with their algorithms and their encoding in the MATRED software, (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of the meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to the classical methods. (J.S.) 69 refs.

  17. Improving 3D spatial queries search: newfangled technique of space filling curves in 3D city modeling

    DEFF Research Database (Denmark)

    Uznir, U.; Anton, François; Suhaibah, A.

    2013-01-01

    , in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city......The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc.. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using....... In this research, we propose an opponent data constellation technique of space-filling curves (space-filling curve) for 3D city model data representation. Unlike previous methods, that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings...
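
The record extends the Hilbert space-filling curve to 3D for indexing city-model data. As a simpler stand-in for the same idea, the sketch below computes a Morton (Z-order) key, another space-filling curve that maps nearby 3D cells to nearby 1D keys (Hilbert preserves locality better, but its 3D encoder is considerably longer):

```python
# Sketch of a 3D space-filling-curve key.  The research uses a 3D Hilbert
# curve; the simpler Morton (Z-order) key below illustrates the same idea
# of turning 3D building-block coordinates into one sortable index.

def morton3d(x, y, z, bits=10):
    """Interleave the low `bits` bits of x, y, z into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)        # x bit -> position 3i
        key |= ((y >> i) & 1) << (3 * i + 1)    # y bit -> position 3i+1
        key |= ((z >> i) & 1) << (3 * i + 2)    # z bit -> position 3i+2
    return key
```

Sorting building blocks by such a key lets a 1D range scan in the database approximate a 3D spatial query, which is the performance effect the abstract measures.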

  18. Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.

    Science.gov (United States)

    Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine

    2017-12-01

    Despite being linked to improved patient outcomes and lower costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised. Currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. A multiparous ewe was used as the model. Surgery was performed under general anesthesia. The ewe was in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. Each surgical step of vaginal hysterectomy performed on the ewe was compared side by side with the corresponding step in a woman. We found that all surgical steps were remarkably similar. The main limitations of this model are cost ($500/procedure), logistics (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for the teaching and training of vaginal hysterectomy.

  19. Filament Breakage Monitoring in Fused Deposition Modeling Using Acoustic Emission Technique

    Directory of Open Access Journals (Sweden)

    Zhensheng Yang

    2018-03-01

    Polymers are used in a wide range of Additive Manufacturing (AM) applications and have been shown to have tremendous potential for producing complex, individually customized parts. In order to improve part quality, it is essential to identify and monitor the process malfunctions of polymer-based AM. The present work endeavored to develop an alternative method for filament breakage identification in the Fused Deposition Modeling (FDM) AM process. The Acoustic Emission (AE) technique was applied because of its capability to detect burst-type and weak signals, even against complex background noise. The mechanism of filament breakage is depicted thoroughly. The relationship between the process parameters and the critical feed rate was obtained. In addition, a framework for filament breakage detection based on the instantaneous skewness and relative similarity of the raw AE waveform is illustrated. Afterwards, we conducted several filament breakage tests to validate its feasibility and effectiveness. Results revealed that breakage could be successfully identified. The achievements of the present work could be further used to develop a comprehensive in situ FDM monitoring system at moderate cost.
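
The skewness-based part of such a detection framework can be sketched as a sliding-window statistic over the AE waveform; the window length and threshold below are illustrative choices, not the paper's values:

```python
# Sketch of skewness-based burst detection: slide a window over the AE
# waveform and flag windows whose sample skewness jumps.  A breakage burst
# adds a one-sided outlier, which drives the skewness away from zero.

def skewness(window):
    """Sample skewness m3 / m2^1.5 of a list of values."""
    n = len(window)
    mean = sum(window) / n
    m2 = sum((v - mean) ** 2 for v in window) / n
    m3 = sum((v - mean) ** 3 for v in window) / n
    return m3 / (m2 ** 1.5) if m2 > 0 else 0.0

def detect_bursts(signal, win=8, threshold=1.0):
    """Return start indices of windows whose |skewness| exceeds threshold."""
    return [i for i in range(len(signal) - win + 1)
            if abs(skewness(signal[i:i + win])) > threshold]
```

A quiet waveform yields no detections, while any window containing a burst-like spike is flagged, which is the behaviour the monitoring framework relies on.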

  20. A comprehensive pipeline for multi-resolution modeling of the mitral valve: Validation, computational efficiency, and predictive capability.

    Science.gov (United States)

    Drach, Andrew; Khalighi, Amir H; Sacks, Michael S

    2018-02-01

    Multiple studies have demonstrated that the pathological geometries unique to each patient can affect the durability of mitral valve (MV) repairs. While computational modeling of the MV is a promising approach to improving surgical outcomes, the complex MV geometry precludes the use of simplified models. Moreover, the lack of complete in vivo geometric information presents significant challenges in the development of patient-specific computational models. There is thus a need to determine the level of detail necessary for predictive MV models. To address this issue, we have developed a novel pipeline for building attribute-rich computational models of the MV with varying fidelity directly from in vitro imaging data. The approach combines high-resolution geometric information from loaded and unloaded states to achieve a high level of anatomic detail, followed by mapping and parametric embedding of tissue attributes to build high-resolution, attribute-rich computational models. Subsequent lower-resolution models were then developed and evaluated by comparing the displacements and surface strains to those extracted from the imaging data. We then identified the critical levels of fidelity for building predictive MV models in the dilated and repaired states. We demonstrated that a model with a feature size of about 5 mm and a mesh size of about 1 mm was sufficient to predict the overall MV shape, stress, and strain distributions with high accuracy. However, we also noted that more detailed models were needed to simulate microstructural events. We conclude that the developed pipeline enables sufficiently complex models for biomechanical simulations of the MV in the normal, dilated, and repaired states. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill

    2013-10-01

    The increased availability of end-use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006–2011 metered data. A two-stage mixed-integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential under varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings among indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
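
The Monte Carlo end-use approach can be sketched as follows; the distributions and magnitudes are invented for illustration and are not the study's calibrated San Ramon parameters:

```python
import random

# Monte Carlo sketch of household end-use simulation: daily indoor use is
# the sum of several end uses, each drawn from an assumed distribution.
# All distributions below are illustrative, not calibrated values.

random.seed(42)  # reproducible sampling

def simulate_household():
    """One Monte Carlo realization of daily indoor water use (litres/day)."""
    showers = max(0.0, random.gauss(mu=120, sigma=30))  # shower volume
    toilet = max(0.0, random.gauss(mu=90, sigma=15))    # toilet flushes
    washer = random.choice([0, 150])                    # washer runs about half of days
    faucets = random.uniform(40, 80)                    # faucet use
    return showers + toilet + washer + faucets

uses = [simulate_household() for _ in range(10000)]
mean_use = sum(uses) / len(uses)
```

Repeating the draw many times yields a distribution of neighborhood demand, against which conservation actions (e.g. a washer rebate that shifts the washer draw downward) can then be compared.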

  2. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions......-passive behaviour of the proposed method comes from the combination of the non intrusive behaviour of the passive methods with a better accuracy of the active methods. The simulation results reveal the good accuracy of the proposed method....

  3. The feasibility of using particle-in-cell technique for modeling liquid crystal devices

    Science.gov (United States)

    Leung, Wing Ching

    1997-12-01

    , which yields a relatively high bound charge density at the location of the defect wall in the medium. The asymmetry in the electric field perturbations associated with the bound charge for non-zero pretilt angles causes the wall to move. The motion is a consequence of a delicate imbalance of electric and elastic torques on the molecules. Since our simulation results agree with known analytical results and the method enables study of dynamical behavior, we have successfully demonstrated the feasibility of using the PIC technique for modeling both the static and dynamical behavior of liquid crystal devices.

  4. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. The system repeats this process for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
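
    The size-reduction idea can be sketched as follows, under our assumption (not necessarily the patent's exact rule) that a "super generic" is a worst-case reduction of the per-instance rise/fall delay values; the input data layout is hypothetical:

```python
def build_super_generics(sdf_entries):
    """Collapse the many per-pin (rise, fall) delay values of each gate
    instance into a single worst-case pair -- one plausible reading of
    the 'super generic' reduction described above."""
    reduced = {}
    for instance, rise, fall in sdf_entries:
        r, f = reduced.get(instance, (0.0, 0.0))
        reduced[instance] = (max(r, rise), max(f, fall))
    return reduced

# hypothetical flattened SDF entries: (instance, rise_ns, fall_ns)
entries = [("u1.and2", 0.12, 0.10), ("u1.and2", 0.15, 0.09), ("u2.inv", 0.05, 0.07)]
sg = build_super_generics(entries)
```

The reduced file then needs only one rise/fall pair per instance instead of one per delay arc.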

  5. Validation of a mathematical model for Bell 427 Helicopter using parameter estimation techniques and flight test data

    Science.gov (United States)

    Crisan, Emil Gabriel

    Certification requirements, optimization and minimum project costs, design of flight control laws and the implementation of flight simulators are among the principal applications of system identification in the aeronautical industry. This document examines the practical application of parameter estimation techniques to the problem of estimating helicopter stability and control derivatives from flight test data provided by Bell Helicopter Textron Canada. The purpose of this work is twofold: a time-domain application of the Output Error method using the Gauss-Newton algorithm and a frequency-domain identification method to obtain the aerodynamic and control derivatives of a helicopter. The adopted model for this study is a fully coupled, 6 degree of freedom (DoF) state space model. The technique used for rotorcraft identification in the time domain was the Maximum Likelihood Estimation method, embodied in a modified version of NASA's Maximum Likelihood Estimator program (MMLE3) obtained from the National Research Council (NRC). The frequency-domain system identification procedure is incorporated in a comprehensive package of user-oriented programs referred to as CIFER®. The coupled, 6 DoF model does not include the high-frequency main rotor modes (flapping, lead-lag, twisting), yet it is capable of modeling rotorcraft dynamics fairly accurately, as the model verification showed. The identification results demonstrate that MMLE3 is a powerful and effective tool for extracting reliable helicopter models from flight test data. The results obtained with the frequency-domain approach demonstrated that CIFER® could achieve good results even on limited data.

  6. Advancing hydrometeorological prediction capabilities through standards-based cyberinfrastructure development: The community WRF-Hydro modeling system

    Science.gov (United States)

    gochis, David; Parodi, Antonio; Hooper, Rick; Jha, Shantenu; Zaslavsky, Ilya

    2013-04-01

    The need for improved assessments and predictions of many key environmental variables is driving a multitude of model development efforts in the geosciences. The proliferation of weather and climate impacts research is driving a host of new environmental prediction model development efforts as society seeks to understand how climate does and will impact key societal activities and resources and, in turn, how human activities influence climate and the environment. This surge in model development has highlighted the role of model coupling as a fundamental activity in itself and, at times, a significant bottleneck in weather and climate impacts research. This talk explores some of the recent activities and progress made in assessing the attributes of various approaches to the coupling of physics-based process models for hydrometeorology. One example modeling system emerging from these efforts is the community 'WRF-Hydro' modeling system, which is based on the modeling architecture of the Weather Research and Forecasting (WRF) model. An overview of the structural components of WRF-Hydro will be presented, as will results from several recent applications, which include the prediction of flash flooding events in the Rocky Mountain Front Range region of the U.S. and along the Ligurian coastline in the northern Mediterranean. Efficient integration of the coupled modeling system with distributed infrastructure for collecting and sharing hydrometeorological observations is one of the core themes of the work. Specifically, we aim to demonstrate how data management infrastructures used in the US and Europe, in particular data sharing technologies developed within the CUAHSI Hydrologic Information System and UNIDATA, can interoperate based on international standards for data discovery and exchange, such as standards developed by the Open Geospatial Consortium and adopted by GEOSS. The data system we envision will help manage WRF-Hydro prediction model data flows, enabling

  7. An experimentally verified model for estimating the distance resolution capability of direct time of flight 3D optical imaging systems

    International Nuclear Information System (INIS)

    Nguyen, K Q K; Fisher, E M D; Walton, A J; Underwood, I

    2013-01-01

    This report introduces a new statistical model for time-resolved photon detection in a generic single-photon-sensitive sensor array. The model is validated by comparing modelled data with experimental data collected on a single-photon avalanche diode sensor array. Data produced by the model are used alongside corresponding experimental data to calculate, for the first time, the effective distance resolution of a pulsed direct time of flight 3D optical imaging system over a range of conditions using four peak-detection algorithms. The relative performance of the algorithms is compared. The model can be used to improve the system design process and inform selection of the optimal peak-detection algorithm. (paper)
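
    The idea of estimating an effective distance resolution from a modelled photon-detection process can be sketched as follows: photon arrivals with Gaussian timing jitter are histogrammed, the peak bin is found, and time is converted to distance via d = c·t/2. Photon counts, jitter, and binning are illustrative assumptions, not the paper's sensor parameters:

```python
import random, statistics

C = 299_792_458.0  # speed of light, m/s

def estimate_distance(true_d=10.0, n_photons=200, jitter=0.5e-9,
                      bin_w=0.1e-9, n_bins=1000, seed=0):
    """One simulated direct time-of-flight measurement with a simple
    peak-detection (argmax) algorithm on the arrival-time histogram."""
    rng = random.Random(seed)
    t_true = 2.0 * true_d / C
    hist = [0] * n_bins
    for _ in range(n_photons):
        b = int(rng.gauss(t_true, jitter) / bin_w)
        if 0 <= b < n_bins:
            hist[b] += 1
    peak = max(range(n_bins), key=hist.__getitem__)   # simplest peak detection
    return C * (peak + 0.5) * bin_w / 2.0             # bin centre back to distance

dists = [estimate_distance(seed=s) for s in range(50)]
mean_d = statistics.fmean(dists)
resolution = statistics.stdev(dists)   # spread over repeated trials = effective resolution
```

Repeating this for different photon counts or peak-detection algorithms reproduces the kind of comparison made in the paper.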

  8. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating spring flood runoff or rain-floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow depth modeling is solved using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankylä (Finland), and Snoqualmie Pass (USA). Statistical modeling of snow depth is based on a set of freely distributed present-day machine learning models: Decision Trees, Adaptive Boosting, Gradient Boosting. It is demonstrated that combining modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow accumulation and of melting. The purposeful character of the learning process in gradient-boosting models, their ensemble character, and the use of combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
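
    Gradient boosting over decision trees, the best-performing method above, can be sketched with regression stumps on a synthetic season/depth curve; the data and hyperparameters are illustrative, not the station records:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump on one feature (minimum SSE)."""
    best = (x[0], r.mean(), r.mean(), np.inf)
    for t in np.unique(x)[:-1]:
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[3]:
            best = (t, left.mean(), right.mean(), sse)
    return best[:3]

def gradient_boost(x, y, n_rounds=150, lr=0.1):
    """Gradient boosting for squared error: each stump fits the residual."""
    base = y.mean()
    pred = np.full_like(y, base)
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return base, lr, stumps

def predict(model, x):
    base, lr, stumps = model
    pred = np.full(x.shape, base)
    for t, lv, rv in stumps:
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred

# synthetic "snow depth vs. fraction of season" curve (illustrative only)
day = np.linspace(0.0, 1.0, 200)
depth = 80.0 * np.sin(np.pi * day) + np.random.default_rng(3).normal(0.0, 2.0, 200)
model = gradient_boost(day, depth)
rmse = float(np.sqrt(np.mean((predict(model, day) - depth) ** 2)))
```

The additive ensemble of shallow trees is what lets the fit follow both the accumulation and melt limbs of the curve.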

  9. Modeling and Control of Multivariable Process Using Intelligent Techniques

    Directory of Open Access Journals (Sweden)

    Subathra Balasubramanian

    2010-10-01

    Full Text Available For nonlinear dynamic systems, first-principles-based modeling and control is difficult to implement. In this study, a fuzzy controller and a recurrent fuzzy controller are developed for a MIMO process. A fuzzy logic controller is a model-free controller designed from knowledge about the process. Two types of rule-based fuzzy models are available: the linguistic (Mamdani) model and the Takagi–Sugeno model. Of these two, the Takagi–Sugeno (TS) model has attracted the most attention. The application of fuzzy controllers is limited to static processes due to their feedforward structure, yet most real-time processes are dynamic and require the history of input/output data. In order to store the past values a memory unit is needed, which is introduced by the recurrent structure. The proposed recurrent fuzzy structure is used to develop a controller for a two-tank heating process. Both controllers are designed and implemented in a real-time environment and their performance is compared.
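
    The Takagi–Sugeno structure mentioned above can be illustrated with a minimal zero-order sketch: ramp memberships on the tracking error and crisp rule consequents, combined by a membership-weighted average. All membership ranges and consequent values are illustrative, not the controller of the study:

```python
def ts_control(error):
    """Zero-order Takagi-Sugeno sketch with two rules on the tracking
    error: 'negative' -> cool (u = -1), 'positive' -> heat (u = +1)."""
    mu_pos = min(1.0, max(0.0, (error + 5.0) / 10.0))   # ramp membership on [-5, 5]
    mu_neg = 1.0 - mu_pos
    u_pos, u_neg = +1.0, -1.0                            # crisp rule consequents
    return (mu_pos * u_pos + mu_neg * u_neg) / (mu_pos + mu_neg)
```

A first-order TS controller would replace the crisp consequents with linear functions of the error; the recurrent variant additionally feeds past outputs back into the rule antecedents.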

  10. Evaluating the capability of regional-scale air quality models to capture the vertical distribution of pollutants

    Directory of Open Access Journals (Sweden)

    E. Solazzo

    2013-06-01

    Full Text Available This study is conducted in the framework of the Air Quality Modelling Evaluation International Initiative (AQMEII) and aims at the operational evaluation of an ensemble of 12 regional-scale chemical transport models used to predict air quality over the North American (NA) and European (EU) continents for 2006. The modelled concentrations of ozone and CO, along with the meteorological fields of wind speed (WS) and direction (WD), temperature (T), and relative humidity (RH), are compared against high-quality in-flight measurements collected by instrumented commercial aircraft as part of the Measurements of OZone, water vapour, carbon monoxide and nitrogen oxides by Airbus In-service airCraft (MOZAIC) programme. The evaluation is carried out for five model domains positioned around four major airports in NA (Portland, Philadelphia, Atlanta, and Dallas) and one in Europe (Frankfurt), from the surface to 8.5 km. We compare mean vertical profiles of modelled and measured variables for all airports to compute error and variability statistics, perform analysis of altitudinal error correlation, and examine the seasonal error distribution for ozone, including an estimation of the bias introduced by the lateral boundary conditions (BCs). The results indicate that model performance is highly dependent on the variable, location, season, and height (e.g. surface, planetary boundary layer (PBL) or free troposphere) being analysed. While model performance for T is satisfactory at all sites (correlation coefficient in excess of 0.90 and fractional bias ≤ 0.01 K), WS is not replicated as well within the PBL (exhibiting a positive bias in the first 100 m and also underestimating observed variability), while above 1000 m the model performance improves (correlation coefficient often above 0.9). The WD at NA airports is found to be biased in the PBL, primarily due to an overestimation of westerly winds. RH is modelled well within the PBL, but in the free troposphere large
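
    The kind of profile-evaluation statistics reported above can be sketched for a single vertical profile. The fractional-bias formula used here is one common convention (sign and normalisation vary between studies), and the profile values are hypothetical:

```python
import numpy as np

def profile_stats(model, obs):
    """Pearson correlation and fractional bias,
    FB = 2*(mean_mod - mean_obs) / (mean_mod + mean_obs)."""
    r = float(np.corrcoef(model, obs)[0, 1])
    fb = float(2.0 * (model.mean() - obs.mean()) / (model.mean() + obs.mean()))
    return r, fb

obs = np.array([280.0, 275.0, 268.0, 260.0, 250.0])    # e.g. T (K) vs. height
mod = obs + np.array([0.5, -0.3, 0.2, 0.1, -0.2])      # hypothetical model profile
r, fb = profile_stats(mod, obs)
```

Computing these per height bin, season, and airport yields the error decompositions discussed in the abstract.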

  11. Mobile Test Capabilities

    Data.gov (United States)

    Federal Laboratory Consortium — The Electrical Power Mobile Test capabilities are utilized to conduct electrical power quality testing on aircraft and helicopters. This capability allows that the...

  12. Analysis of the 314th Contracting Squadrons Contract Management Capability Using the Contract Management Maturity Model (CMMM)

    National Research Council Canada - National Science Library

    Jackson, Jr, Carl J

    2007-01-01

    .... The purpose of this research project is to analyze the 314th Contracting Squadron contracting processes and requirement target areas for improvement efforts by the application of the Contract Management Maturity Model (CMMM...

  13. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
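
    A Kalman-filter treatment of model/observation bias, as in the second technique above, can be sketched as a random-walk bias tracked from model-minus-observation residuals; the noise variances and residual values are illustrative assumptions:

```python
def kalman_bias(residuals, q=0.01, r=1.0):
    """Scalar Kalman filter tracking a slowly varying model bias.
    q: random-walk process variance; r: observation noise variance."""
    x, p = 0.0, 1.0            # bias estimate and its variance
    estimates = []
    for z in residuals:
        p += q                 # predict: bias follows a random walk
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the new residual
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# hypothetical model-minus-observation residuals at one monitor (true bias ~2)
est = kalman_bias([2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0, 1.9])
```

Subtracting the filtered bias from the model field before spatial interpolation gives one simple fused ozone surface.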

  14. Modeling the effects of land cover and use on landscape capability for urban ungulate populations: Chapter 11

    Science.gov (United States)

    Underwood, Harold; Kilheffer, Chellby R.; Francis, Robert A.; Millington, James D. A.; Chadwick, Michael A.

    2016-01-01

    Expanding ungulate populations are causing concerns for wildlife professionals and residents in many urban areas worldwide. Nowhere is the phenomenon more apparent than in the eastern US, where urban white-tailed deer (Odocoileus virginianus) populations are increasing. Most habitat suitability models for deer have been developed in rural areas and across large (>1000 km2) spatial extents. Only recently have we begun to understand the factors that contribute to space use by deer over much smaller spatial extents. In this study, we explore the concepts, terminology, methodology and state-of-the-science in wildlife abundance modeling as applied to overabundant deer populations across heterogeneous urban landscapes. We used classified, high-resolution digital orthoimagery to extract landscape characteristics in several urban areas of upstate New York. In addition, we assessed deer abundance and distribution in 1-km2 blocks across each study area from either aerial surveys or ground-based distance sampling. We recorded the number of detections in each block and used binomial mixture models to explore important relationships between abundance and key landscape features. Finally, we cross-validated statistical models of abundance and compared covariate relationships across study sites. Study areas were characterized along a gradient of urbanization based on the proportions of impervious surfaces and natural vegetation which, based on the best-supported models, also distinguished blocks potentially occupied by deer. Models performed better at identifying occurrence of deer and worse at predicting abundance in cross-validation comparisons. We attribute poor predictive performance to differences in deer population trajectories over time. The proportion of impervious surfaces often yielded better predictions of abundance and occurrence than did the proportion of natural vegetation, which we attribute to a lack of certain land cover classes during cold and snowy winters
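
    The binomial mixture (N-mixture) likelihood behind the abundance models above can be sketched for one survey block with repeated counts, marginalizing over the unobserved abundance N. The counts, the fixed detection probability, and the grid search are illustrative assumptions:

```python
import math

def nmix_loglik(counts, lam, p, n_max=100):
    """Log-likelihood of repeated counts at one site under a binomial
    N-mixture model: N ~ Poisson(lam), each count ~ Binomial(N, p)."""
    lik = 0.0
    for n in range(max(counts), n_max + 1):
        term = math.exp(-lam) * lam ** n / math.factorial(n)   # Poisson(N = n)
        for y in counts:
            term *= math.comb(n, y) * p ** y * (1.0 - p) ** (n - y)
        lik += term
    return math.log(lik)

counts = [4, 6, 5]                        # hypothetical repeat counts in one 1-km2 block
grid = [l / 2.0 for l in range(8, 41)]    # candidate lambda values 4..20
lam_hat = max(grid, key=lambda lam: nmix_loglik(counts, lam, p=0.5))
```

In practice both lambda and p are estimated jointly and lambda is linked to landscape covariates (e.g., impervious surface) via a log-linear model.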

  15. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits...... though extant literature has shown the importance of formal modelling techniques, the impact of utilising these techniques remains relatively unknown. Therefore, this article studies three main areas: (1) the impact of using modelling techniques based on Unified Modelling Language (UML), in which...... ability to reduce the number of product variants. This paper contributes to an increased understanding of what companies can gain from using more formalised modelling techniques in configurator projects, and under what circumstances they should be used....

  16. Biliary System Architecture: Experimental Models and Visualization Techniques

    Czech Academy of Sciences Publication Activity Database

    Sarnová, Lenka; Gregor, Martin

    2017-01-01

    Roč. 66, č. 3 (2017), s. 383-390 ISSN 0862-8408 R&D Projects: GA MŠk(CZ) LQ1604; GA ČR GA15-23858S Institutional support: RVO:68378050 Keywords : Biliary system * Mouse model * Cholestasis * Visualisation * Morphology Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Cell biology Impact factor: 1.461, year: 2016

  17. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  18. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  19. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    DA shows all seven parameters (CO, O3, PM10, SO2, NOx, NO and NO2) gave the most significant variables after stepwise backward mode. PCA identifies the major source of air pollution is due to combustion of fossil fuels in motor vehicles and industrial activities. The ANN model shows a better prediction compared to the ...

  20. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  1. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    Science.gov (United States)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy of the traditional and hybrid models. Various physical, socio-economic, utility, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps of 2020 and 2030 were created. The validation findings confirm that integrating the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process resulted in an improved simulation capability of the CA-MC model. This study provides a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
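
    The Frequency Ratio used to weight growth drivers can be sketched on a hypothetical flattened raster of one driver layer (the class bands and cell counts below are made up for illustration):

```python
from collections import Counter

def frequency_ratio(class_ids, grew):
    """FR for each class of one driver layer: (share of urbanized cells
    in the class) / (share of all cells in the class).
    FR > 1 flags classes prone to urban growth."""
    total = len(class_ids)
    grown = [c for c, g in zip(class_ids, grew) if g]
    all_counts = Counter(class_ids)
    grown_counts = Counter(grown)
    return {c: (grown_counts[c] / len(grown)) / (all_counts[c] / total)
            for c in all_counts}

# hypothetical raster: class 0 = nearest distance-to-road band, 2 = farthest
classes = [0] * 50 + [1] * 30 + [2] * 20
grew = ([True] * 25 + [False] * 25      # band 0: half the cells urbanized
        + [True] * 6 + [False] * 24     # band 1
        + [True] * 1 + [False] * 19)    # band 2: almost no growth
fr = frequency_ratio(classes, grew)
```

Summing the FR values of all driver layers cell by cell yields the growth-suitability surface fed to the CA-MC transition rules.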

  2. Experimental validation of an analytical model for predicting the thermal and hydrodynamic capabilities of flat micro heat pipes

    Energy Technology Data Exchange (ETDEWEB)

    Revellin, Remi; Rulliere, Romuald [Centre de Thermique de Lyon (CETHIL), UMR5008, CNRS-INSA-Univ. Lyon 1, Bat. Sadi Carnot, INSA-Lyon, F-69621 Villeurbanne Cedex (France); Lefevre, Frederic [Centre de Thermique de Lyon (CETHIL), UMR5008, CNRS-INSA-Univ. Lyon 1, Bat. Sadi Carnot, INSA-Lyon, F-69621 Villeurbanne Cedex (France)], E-mail: frederic.lefevre@insa-lyon.fr; Bonjour, Jocelyn [Centre de Thermique de Lyon (CETHIL), UMR5008, CNRS-INSA-Univ. Lyon 1, Bat. Sadi Carnot, INSA-Lyon, F-69621 Villeurbanne Cedex (France)

    2009-04-15

    An analytical model by Lefevre and Lallemand [F. Lefevre, M. Lallemand, Coupled thermal and hydrodynamic models of flat micro heat pipes for the cooling of multiple electronic components, Int. J. Heat Mass Transfer 49 (2006) 1375-1383], which couples a 2D hydrodynamic model for both the liquid and the vapor phases inside a flat micro heat pipe (FMHP) with a 3D thermal model of heat conduction inside the FMHP wall, has been modified. The modification consists of superposing two independent solutions in order to take into account the impact of evaporation or condensation on the equivalent thermal conductivity of the capillary structure. The temperature, pressure and velocity fields can be determined using Fourier solutions. The model has been experimentally validated against literature data from a grooved FMHP. Two new correlations for the equivalent thermal conductivities during evaporation and condensation inside rectangular micro-grooves are proposed based on a numerical database. The influence of the saturation temperature and geometry on the maximum heat flux transferred by the system is presented.

  3. Crude Oil Model Emulsion Characterised by means of Near Infrared Spectroscopy and Multivariate Techniques

    DEFF Research Database (Denmark)

    Kallevik, H.; Hansen, Susanne Brunsgaard; Sæther, Ø.

    2000-01-01

    Water-in-oil emulsions are investigated by means of multivariate analysis of near infrared (NIR) spectroscopic profiles in the range 1100 - 2250 nm. The oil phase is a paraffin-diluted crude oil from the Norwegian Continental Shelf. The influence of water absorption and light scattering...... of the water droplets are shown to be strong. Despite the strong influence of the water phase, the NIR technique is still capable of predicting the composition of the investigated oil phase....

  4. HYSPLIT's Capability for Radiological Aerial Monitoring in Nuclear Emergencies: Model Validation and Assessment on the Chernobyl Accident

    International Nuclear Information System (INIS)

    Jung, Gunhyo; Kim, Juyoul; Shin, Hyeongki

    2007-01-01

    The Chernobyl accident took place on 26 April 1986 in Ukraine. Consequently, a large amount of radionuclides was released into the atmosphere, resulting in a widespread distribution of radioactivity throughout the northern hemisphere, mainly across Europe. A total of 31 persons died as a consequence of the accident, and about 140 persons suffered various degrees of radiation sickness and health impairment as acute health effects. Among the late health effects, a real and significant increase of thyroid carcinomas has been observed among children living in the contaminated regions. Recently, a variety of atmospheric dispersion models have been developed and used around the world. Among them, the HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model developed by NOAA (National Oceanic and Atmospheric Administration)/ARL (Air Resources Laboratory) is widely used. To verify the HYSPLIT model for radiological aerial monitoring in nuclear emergencies, a case study on the Chernobyl accident is performed.

  5. Recent Additions in the Modeling Capabilities of an Open-Source Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-04-20

    WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion devices. The code uses the MATLAB SimMechanics package to solve multibody dynamics and models wave interactions using hydrodynamic coefficients derived from frequency-domain boundary-element methods. This paper presents the new modeling features introduced in the latest release of WEC-Sim. The first feature discussed is the conversion of the fluid memory kernel to a state-space form. This enhancement offers a substantial computational benefit once hydrodynamic body-to-body coefficients are introduced, as the number of interactions increases exponentially with each additional body. Additional features include the ability to calculate the wave-excitation forces based on the instantaneous incident wave angle, allowing the device to weathervane, and the ability to import a user-defined wave elevation time series. A review of the hydrodynamic theory for each feature is provided and the successful implementation is verified using test cases.
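
    The computational benefit of replacing the fluid-memory convolution with a state-space recursion can be sketched on an illustrative stable system (the matrices below are made up, not WEC-Sim output): the O(N²) convolution and the O(N) recursion produce identical outputs:

```python
import numpy as np

# illustrative 2-state system standing in for a fitted fluid-memory model
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([1.0, 0.5])
Cm = np.array([0.3, 0.7])

u = np.random.default_rng(2).standard_normal(100)   # stand-in velocity record

# direct convolution with the memory kernel h[k] = Cm A^k B: O(N^2)
h = np.array([Cm @ np.linalg.matrix_power(A, k) @ B for k in range(100)])
y_conv = np.array([sum(h[j] * u[k - j] for j in range(k + 1)) for k in range(100)])

# equivalent state-space recursion: constant work per time step
x = np.zeros(2)
y_ss = []
for uk in u:
    x = A @ x + B * uk          # the state absorbs the whole input history
    y_ss.append(Cm @ x)
y_ss = np.array(y_ss)
```

In the multi-body case one such recursion replaces each pairwise convolution, which is where the computational saving accumulates.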

  6. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and the Ionosonde parameters hmF2 and foF2 has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low-latitude regions.
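
    A neural network trained by Particle Swarm Optimization can be sketched by letting a standard PSO search the weight vector of a tiny tanh network on a toy regression target standing in for the S4 series; the swarm settings, network size, and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(-1.0, 1.0, 20)
ys = xs ** 2                      # toy target standing in for the S4 series

def mse(theta):
    """MSE of a 1-hidden-layer tanh network (3 hidden units, 10 weights)."""
    w1, b1, w2, b2 = theta[:3], theta[3:6], theta[6:9], theta[9]
    hidden = np.tanh(np.outer(xs, w1) + b1)          # (20, 3)
    return float(np.mean((hidden @ w2 + b2 - ys) ** 2))

n_particles, dim = 30, 10
pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
g = pbest[pbest_val.argmin()].copy()
g_val = float(pbest_val.min())
init_val = g_val

for _ in range(300):                                 # standard PSO update
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    better = vals < pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = vals[better]
    if vals.min() < g_val:
        g = pos[vals.argmin()].copy()
        g_val = float(vals.min())
```

PSO needs no gradients, which is why it pairs naturally with arbitrary network error surfaces.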

  7. Techniques for studies of unbinned model independent CP violation

    Energy Technology Data Exchange (ETDEWEB)

    Bedford, Nicholas; Weisser, Constantin; Parkes, Chris; Gersabeck, Marco; Brodzicka, Jolanta; Chen, Shanzhen [University of Manchester (United Kingdom)

    2016-07-01

    Charge-Parity (CP) violation is a known part of the Standard Model and has been observed and measured in both the B and K meson systems. The observed levels, however, are insufficient to explain the observed matter-antimatter asymmetry in the Universe, and so other sources need to be found. One area of current investigation is the D meson system, where predicted levels of CP violation are much lower than in the B and K meson systems. This means that more sensitive methods are required when searching for CP violation in this system. Several unbinned model independent methods have been proposed for this purpose, all of which need to be optimised and their sensitivities compared.

  8. Suitability of sheet bending modelling techniques in CAPP applications

    OpenAIRE

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and FEM simulations are discussed against the background of the required predictable accuracy in small-batch part manufacturing and FMS environments. The topics are limited to those relevant to bending...

  9. Solving microwave heating model using Hermite-Padé approximation technique

    International Nuclear Information System (INIS)

    Makinde, O.D.

    2005-11-01

    We employ the Hermite-Padé approximation method to explicitly construct approximate solutions of the steady-state reaction-diffusion equation with source term that arises in modeling microwave heating in an infinite slab with isothermal walls. In particular, we consider the case where the source term decreases spatially and increases with temperature. The important properties of the temperature fields, including bifurcations and thermal criticality, are discussed. (author)
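
    An ordinary Padé approximant, the simplest member of the Hermite-Padé family, can be constructed from Taylor coefficients by a linear solve. The sketch below builds the classical [2/2] approximant of exp(x); it illustrates the machinery, not the paper's reaction-diffusion solution:

```python
import numpy as np
from math import factorial, exp

def pade(coeffs, L, M):
    """[L/M] Pade approximant p(x)/q(x) from Taylor coefficients c,
    with q(0) = 1 and L >= M - 1 assumed.
    Denominator: c_{L+k} + sum_j q_j c_{L+k-j} = 0 for k = 1..M."""
    c = np.array(coeffs, dtype=float)
    A = np.array([[c[L + k - j] for j in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = -c[L + 1:L + M + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # numerator coefficients from the Cauchy product q(x) * series(x)
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

def evaluate(p, q, xv):
    return np.polyval(p[::-1], xv) / np.polyval(q[::-1], xv)

c = [1.0 / factorial(k) for k in range(5)]   # Taylor coefficients of exp(x)
p, q = pade(c, 2, 2)
```

For exp(x) this reproduces the well-known [2/2] form (1 + x/2 + x²/12)/(1 − x/2 + x²/12), accurate to O(x⁵) near the origin.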

  10. PLATO: PSF modelling using a micro-scanning technique

    Directory of Open Access Journals (Sweden)

    Ouazzani R-M.

    2015-01-01

    Full Text Available The PLATO space mission is designed to detect telluric planets in the habitable zone of solar-type stars, and simultaneously characterise the host star using ultra-high-precision photometry. The photometry will be performed on board using weighted masks. However, to reach the required precision, corrections will have to be performed by the ground segment and will rely on precise knowledge of the instrument PSF (Point Spread Function). We here propose to model the PSF using a micro-scanning method.

  11. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  12. A Modified Multifrequency Passivity-Based Control for Shunt Active Power Filter With Model-Parameter-Adaptive Capability

    DEFF Research Database (Denmark)

    Mu, Xiaobin; Wang, Jiuhe; Wu, Weimin

    2018-01-01

    The passivity-based control (PBC) has a better control performance using an accurate mathematical model of the control object. It can offer an alternative tracking control scheme for the shunt active power filter (SAPF). However, the conventional PBC-based SAPF cannot achieve zero steady...

  13. Intercultural team maturity model: Unity, diversity, capability. Achieving optimal performance when leading a multicultural project team

    OpenAIRE

    Prabhakar, G. P.; Walker, S.

    2005-01-01

    Our research helps to judge ‘maturity’ as an asset to projects and heightens awareness of situational leadership, using intercultural team maturity levels as a tool for optimal project leadership success. This study focuses on exactly how to analyse team members’ ability to adapt to complex intercultural project environments, using an intercultural team maturity model.

  14. How does a cadaver model work for testing ultrasound diagnostic capability for rheumatic-like tendon damage?

    DEFF Research Database (Denmark)

    Janta, Iustina; Morán, Julio; Naredo, Esperanza

    2016-01-01

    To establish whether a cadaver model can serve as an effective surrogate for the detection of tendon damage characteristic of rheumatoid arthritis (RA). In addition, we evaluated intraobserver and interobserver agreement in the grading of RA-like tendon tears shown by US, as well as the concordan...

  15. Development of a model capable of predicting the performance of piston ring-cylinder liner-like tribological interfaces

    DEFF Research Database (Denmark)

    Felter, C.L.; Vølund, A.; Imran, Tajammal

    2010-01-01

    on a measured temperature only; thus, it is not necessary to include the energy equation. Conservation of oil is ensured throughout the domain by considering the amount of oil outside the lubricated interface. A model for hard contact through asperities is also included. Second, a laboratory-scale test rig...

  16. An Energy-Equivalent d⁺/d⁻ Damage Model with Enhanced Microcrack Closure-Reopening Capabilities for Cohesive-Frictional Materials.

    Science.gov (United States)

    Cervera, Miguel; Tesei, Claudia

    2017-04-20

    In this paper, an energy-equivalent orthotropic d⁺/d⁻ damage model for cohesive-frictional materials is formulated. Two essential mechanical features are addressed, the damage-induced anisotropy and the microcrack closure-reopening (MCR) effects, in order to provide an enhancement of the original d⁺/d⁻ model proposed by Faria et al. (1998), while keeping its high algorithmic efficiency unaltered. First, in order to ensure the symmetry and positive definiteness of the secant operator, the new formulation is developed in an energy-equivalence framework. This proves thermodynamic consistency and allows one to describe a fundamental feature of orthotropic damage models, i.e., the reduction of the Poisson's ratio throughout the damage process. Secondly, a "multidirectional" damage procedure is presented to extend the MCR capabilities of the original model. The fundamental aspects of this approach, devised for generic cyclic conditions, lie in maintaining only two scalar damage variables in the constitutive law, while preserving memory of the degradation directionality. The enhanced unilateral capabilities are explored with reference to the problem of a panel subjected to in-plane cyclic shear, with or without vertical pre-compression; depending on the ratio between shear and pre-compression, an absent, a partial or a complete stiffness recovery is simulated with the new multidirectional procedure.

  17. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    Science.gov (United States)

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (46.9% 5-year survival), 1731 patients with traumatic brain injury (22.3% 6-month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN) and random forests (RF), and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20-fold, 10-fold and 6-fold replication of subjects, in which we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (the difference between the mean apparent AUC and the mean validated AUC). The RF, SVM and NN models showed instability and high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable as classical modelling techniques such as LR to achieve a stable AUC and small optimism. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
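
    The notion of optimism (apparent minus validated AUC) that drives the "data hungriness" comparison can be reproduced in a few lines. This is a generic sketch with synthetic data and scikit-learn, not the study's cohorts:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=3000, n_features=10, n_informative=5,
                           random_state=0)
X_dev, y_dev = X[:300], y[:300]    # small development set
X_val, y_val = X[300:], y[300:]    # large independent validation set

def optimism(model):
    """Apparent AUC (on development data) minus validated AUC."""
    model.fit(X_dev, y_dev)
    apparent = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
    validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    return apparent - validated

opt_lr = optimism(LogisticRegression(max_iter=1000))
opt_rf = optimism(RandomForestClassifier(n_estimators=100, random_state=0))

# The flexible model is markedly more optimistic at this sample size
assert opt_rf > opt_lr
```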

  18. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Full Text Available Objective: Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods: Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2, followed by gross Ca(OH)2 removal using hand files and randomized treatment with either: 1) syringe irrigation; 2) syringe irrigation with use of an apical file; 3) syringe irrigation with an added 30 s of passive ultrasonic irrigation (PUI); or 4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure residual Ca(OH)2 left in the root. Results: No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions: This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without an apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.
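
    The standard-curve logic of the titration assay — fit titrant volume against known Ca(OH)2 masses, then invert the fit to quantify an unknown residue — can be sketched numerically. The numbers below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical standard-curve data: known Ca(OH)2 masses (mg) placed in
# roots vs. titrant volume (mL) consumed -- illustrative numbers only
known_mg = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
titrant_ml = np.array([0.05, 1.1, 2.0, 3.1, 4.0, 5.0])

# Linear standard curve: titrant = slope * mass + intercept
slope, intercept = np.polyfit(known_mg, titrant_ml, 1)

def residual_mass(volume_ml):
    """Invert the standard curve to estimate residual Ca(OH)2 (mg)."""
    return (volume_ml - intercept) / slope

# A sample needing 2.5 mL of titrant maps back to roughly 12.5 mg residue
est = residual_mass(2.5)
assert 10.0 < est < 15.0
```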

  19. ATLAS Event Data Organization and I/O Framework Capabilities in Support of Heterogeneous Data Access and Processing Models

    CERN Document Server

    Malon, David; The ATLAS collaboration; van Gemmeren, Peter

    2016-01-01

    Choices in persistent data models and data organization have significant performance ramifications for data-intensive scientific computing. In experimental high energy physics, organizing file-based event data for efficient per-attribute retrieval may improve the I/O performance of some physics analyses but hamper the performance of processing that requires full-event access. In-file data organization tuned for serial access by a single process may be less suitable for opportunistic sub-file-based processing on distributed computing resources. Unique I/O characteristics of high-performance computing platforms pose additional challenges. This paper describes work in the ATLAS experiment at the Large Hadron Collider to provide an I/O framework and tools for persistent data organization to support an increasingly heterogeneous array of data access and processing models.

  20. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it presents all well-known algorithms in detail, elucidating their merits and weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  1. Robust image modeling technique with a bioluminescence image segmentation application

    Science.gov (United States)

    Zhong, Jianghong; Wang, Ruiping; Tian, Jie

    2009-02-01

    A robust pattern classifier algorithm for the variable symmetric plane model, in which the driving noise is a mixture of a Gaussian and an outlier process, is developed, and the accuracy and speed of the pattern recognition algorithm are demonstrated. Bioluminescence tomography (BLT) has recently gained wide acceptance in the field of in vivo small-animal molecular imaging, so it is important to acquire a high-precision region of interest from a bioluminescence image (BLI), since inaccuracy there propagates directly into quantitative analysis. An algorithm based on the model is developed to improve operation speed; it estimates the parameters and the original image intensity simultaneously from the noise-corrupted image produced by the BLT optical hardware system. The focus pixel value is obtained from the symmetric plane under a realistic assumption about the noise sequence in the restored image; the neighborhood size is adaptive and small, and the classifier function is based on statistical features. If the classifier conditions are satisfied, the focus pixel intensity is set to the largest value in the neighborhood; otherwise it is set to zero. Finally, pseudo-color is added to the segmented bioluminescence image. The whole process has been implemented on our 2D BLT optical system platform, and the model has been validated.

  2. Evaluating Higher Education Institutions through Agency and Resources-Capabilities Theories. A Model for Measuring the Perceived Quality of Service

    Directory of Open Access Journals (Sweden)

    José Guadalupe Vargas-Hernández

    2016-08-01

    Full Text Available The objective of this paper is to explain, through agency theory and the theory of resources and capabilities, the assessment process in higher education institutions. The actors involved in decision-making, and the way resources are used, repeatedly give rise to opportunistic practices that diminish the value assigned to evaluation and reduce teamwork. A model is presented to measure students' perception of service quality at the Technological Institute of Celaya, as part of the quality control system; based on the theoretical work of several authors on this topic (SERVQUAL and SERVPERF), an instrument adapted to the student area of the institution, called SERQUALITC, is generated. The paper presents the areas or departments to assess, the appropriate sample size, the number of items used per dimension, and the Likert scale, and the validation study of the instrument is described. Finally, a model is presented that offers a global view of the quality measurement process, including corrective actions that enable continuous improvement.

  4. Modeling and simulation of a novel autonomous underwater vehicle with glider and flapping-foil propulsion capabilities

    Science.gov (United States)

    Tian, Wen-long; Song, Bao-wei; Du, Xiao-xu; Mao, Zhao-yong; Ding, Hao

    2012-12-01

    HAISHEN is a long-range, highly maneuverable AUV with two operating modes: a glider mode and a flapping-foil propulsion mode. As part of the vehicle development, a three-dimensional mathematical model of the conceptual vehicle was developed on the assumption that HAISHEN has a rigid body with two independently controlled oscillating hydrofoils. A flapping-foil model was developed based on the work of Georgiades et al. (2009), and the effect of the controllable hydrofoils on the vehicle's stable-motion performance was studied theoretically. Finally, a dynamics simulation of the vehicle in both operating modes is presented. The simulation demonstrates that: (1) in the glider mode, owing to the independent control of the pitch angle of each hydrofoil, HAISHEN travels faster and more efficiently and has a smaller turning radius than conventional fixed-wing gliders; (2) in the flapping-foil propulsion mode, HAISHEN is highly maneuverable, with a turning radius smaller than 15 m and a forward velocity of about 1.8 m/s; (3) the vehicle is stable under all expected operating conditions.

  5. Modeling and Control System Design for an Integrated Solar Generation and Energy Storage System with a Ride-Through Capability: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.; Yue, M.; Muljadi, E.

    2012-09-01

    This paper presents a generic approach to PV panel modeling. The data for this modeling can be easily obtained from the manufacturer's datasheet, which provides a convenient way for researchers and engineers to investigate PV integration issues. A two-stage power conversion system (PCS) is adopted for the PV generation system, and a Battery Energy Storage System (BESS) can be connected to the dc link through a bidirectional dc/dc converter. In this way, the BESS can provide ancillary services that may be required in high-penetration PV generation scenarios. This paper focuses specifically on the fault ride-through (FRT) capability. The integrated BESS and PV generation system, together with the associated control systems, is modeled in the PSCAD and Matlab platforms, and the effectiveness of the controller is validated by the simulation results.
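
    A common way to build a generic PV model from datasheet values alone is the explicit simplified exponential form, which needs only the short-circuit, open-circuit and maximum-power-point numbers. A minimal sketch (module values assumed, not taken from the paper):

```python
import numpy as np

# Datasheet values for a hypothetical module (assumed, not from the paper)
Isc, Voc = 8.21, 32.9        # short-circuit current (A), open-circuit voltage (V)
Imp, Vmp = 7.61, 26.3        # current and voltage at the maximum power point

# Explicit simplified PV model: I(V) = Isc * (1 - C1*(exp(V/(C2*Voc)) - 1))
C2 = (Vmp / Voc - 1.0) / np.log(1.0 - Imp / Isc)
C1 = (1.0 - Imp / Isc) * np.exp(-Vmp / (C2 * Voc))

def pv_current(v):
    """Module current (A) at terminal voltage v (V), standard conditions."""
    return Isc * (1.0 - C1 * (np.exp(v / (C2 * Voc)) - 1.0))

assert np.isclose(pv_current(0.0), Isc)          # short-circuit point is exact
assert abs(pv_current(Vmp) - Imp) < 0.05 * Imp   # MPP reproduced closely
assert abs(pv_current(Voc)) < 1e-3               # open-circuit current near zero
```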

  6. Precision and trueness of dental models manufactured with different 3-dimensional printing techniques.

    Science.gov (United States)

    Kim, Soo-Yeon; Shin, Yoo-Seok; Jung, Hwi-Dong; Hwang, Chung-Ju; Baik, Hyoung-Seon; Cha, Jung-Yul

    2018-01-01

    In this study, we assessed the precision and trueness of dental models printed with 3-dimensional (3D) printers via different printing techniques. Digital reference models were printed 5 times using stereolithography apparatus (SLA), digital light processing (DLP), fused filament fabrication (FFF), and the PolyJet technique. The 3D printed models were scanned and evaluated for tooth, arch, and occlusion measurements. Precision and trueness were analyzed with root mean squares (RMS) for the differences in each measurement. Differences in measurement variables among the 3D printing techniques were analyzed by 1-way analysis of variance (α = 0.05). Except in trueness of occlusion measurements, there were significant differences in all measurements among the 4 techniques (P …). … techniques exhibited significantly different mean RMS values of precision than the SLA (88 ± 14 μm) and FFF (99 ± 14 μm) techniques (P …) … techniques (P …) … techniques: SLA (107 ± 11 μm), DLP (143 ± 8 μm), FFF (188 ± 14 μm), and PolyJet (78 ± 9 μm) (P …). … techniques exhibited significantly different mean RMS values of trueness than DLP (469 ± 49 μm) and FFF (409 ± 36 μm) (P …). The techniques showed significant differences in precision of all measurements and in trueness of tooth and arch measurements. The PolyJet and DLP techniques were more precise than the FFF and SLA techniques, with the PolyJet technique having the highest accuracy. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
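
    The RMS metrics behind the precision/trueness distinction are easy to state: trueness compares scans against the digital reference, precision compares repeated scans against each other. A minimal sketch with invented measurements:

```python
import numpy as np

# Five repeated measurements of one tooth width (mm) from one technique,
# plus the digital reference value -- illustrative numbers only
reference = 8.50
scans = np.array([8.46, 8.49, 8.52, 8.47, 8.51])

# Trueness: RMS deviation of the scans from the reference model
trueness_rms = np.sqrt(np.mean((scans - reference) ** 2))

# Precision: RMS of all pairwise differences between repeated scans
i, j = np.triu_indices(len(scans), k=1)
precision_rms = np.sqrt(np.mean((scans[i] - scans[j]) ** 2))

assert trueness_rms < 0.1 and precision_rms < 0.1  # both within 100 um here
```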

  7. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay arises between the nonlinearities and the stochastic dynamics, which is much harder for such approximations to the true stochastic processes to capture correctly. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
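
    The flavour of a normal (Gaussian) moment closure can be shown on a single-species dimerisation reaction, where the second-moment equation couples to the third moment and the closure replaces it with a Gaussian expression. A self-contained sketch (reaction and rate chosen for illustration, not one of the paper's systems):

```python
from scipy.integrate import solve_ivp

# Dimerisation 2X -> 0 with propensity a(x) = k*x*(x-1); the raw-moment
# equations couple m1 = E[X] and m2 = E[X^2] to the third moment m3:
#   dm1/dt = -2k (m2 - m1)
#   dm2/dt = -4k (m3 - 2 m2 + m1)
# The Gaussian (normal) closure expresses m3 via the lower moments:
#   m3 = 3 m1 m2 - 2 m1^3
k = 0.01

def closed_moments(t, m):
    m1, m2 = m
    m3 = 3 * m1 * m2 - 2 * m1**3       # closure replaces the unknown m3
    return [-2 * k * (m2 - m1), -4 * k * (m3 - 2 * m2 + m1)]

# Start from a deterministic initial state X = 100 (so m2 = m1^2)
sol = solve_ivp(closed_moments, (0, 10), [100.0, 100.0**2], rtol=1e-8)
m1_end, m2_end = sol.y[:, -1]

assert 0 < m1_end < 100.0      # the mean decays but stays positive
assert 0 < m2_end < 100.0**2   # the closed second moment stays bounded
```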

  8. Using simulation models to evaluate ape nest survey techniques.

    Directory of Open Access Journals (Sweden)

    Ryan H Boyko

    Full Text Available BACKGROUND: Conservationists frequently use nest count surveys to estimate great ape population densities, yet the accuracy and precision of the resulting estimates are difficult to assess. METHODOLOGY/PRINCIPAL FINDINGS: We used mathematical simulations to model nest building behavior in an orangutan population and compare the quality of the population size estimates produced by two of the commonly used nest count methods, the 'marked recount method' and the 'matrix method.' We found that when observers missed even small proportions of nests in the first survey, the marked recount method produced large overestimates of the population size. Regardless of observer reliability, the matrix method produced substantial overestimates of the population size when surveying effort was low. With high observer reliability, both methods required surveying approximately 0.26% of the study area (0.26 km² out of 100 km² in this simulation) to achieve an accurate estimate of population size; at or above this sampling effort both methods produced estimates within 33% of the true population size 50% of the time. Both methods showed diminishing returns at survey efforts above 0.26% of the study area. The use of published nest decay estimates derived from other sites resulted in widely varying population size estimates that spanned nearly an entire order of magnitude. The marked recount method proved much better at detecting population declines, detecting 5% declines nearly 80% of the time even in the first year of decline. CONCLUSIONS/SIGNIFICANCE: These results highlight the fact that neither nest surveying method produces highly reliable population size estimates with any reasonable surveying effort, though either method could be used to obtain a gross population size estimate in an area. Conservation managers should determine if the quality of these estimates is worth the money and effort required to produce them, and should generally limit surveying effort to
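
    The mechanism by which imperfect first-survey detection biases the marked recount method — old nests missed in survey 1 reappear as "new" nests later — can be shown with a small Monte Carlo sketch (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(42)

true_nests = 50       # nests actually present at the first survey
new_nests = 5         # nests genuinely built between the two surveys
detect_p = 0.8        # probability an observer finds any given nest

counts = []
for _ in range(2000):
    # Survey 1: each existing nest is found (marked) with prob detect_p
    marked = rng.binomial(true_nests, detect_p)
    missed = true_nests - marked
    # Survey 2: every unmarked nest found now is treated as newly built,
    # so missed old nests inflate the apparent nest-production rate
    counts.append(rng.binomial(missed + new_nests, detect_p))

mean_new = np.mean(counts)
# Missed old nests masquerade as production: the estimate far exceeds 5
assert mean_new > new_nests
```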

  9. Assessment of Innovation Process Capability Based on the Innovation Value Chain Model in the East Java Footwear Industry

    Directory of Open Access Journals (Sweden)

    Benny Lianto

    2015-12-01

    Full Text Available This study assesses the innovation process, based on the innovation value chain model, in the footwear industry of East Java, Indonesia. A strength-and-weakness mapping analysis was performed covering three company characteristics: operation scale (number of employees), operational period, and market orientation. The sample comprised 62 footwear companies, members of the East Java Indonesian Footwear Association (Aprisindo), to which the questionnaire was sent via email; thirty companies (48.38%) returned it. A focus group discussion (FGD) was conducted with several representatives from the footwear industry before the questionnaire was sent. The study found that companies are relatively good at idea conversion (42.30%) but have some difficulty with diffusion (50.80%) and with idea generation (55.80%). The responses (see Table 2) show that the weakest link (the innovation-process bottleneck) is the cross-pollination activity, in which people typically do not collaborate on projects across units, businesses, or subsidiaries (88.6%), while the strongest link is the selection activity, in which companies have a risk-averse attitude toward investing in novel ideas (39.3%). Based on p-values, the company characteristics that significantly influenced particular phases of the innovation value chain were operational period (age of the company) and market orientation; specifically, both influenced the idea generation phase.

  10. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Mustafa Sacit; none,; Flanagan, George F. [ORNL; Poore III, Willis P. [ORNL; Muhlheim, Michael David [ORNL

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
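
    The probabilistic side of such a coupling can be illustrated with a toy fault-tree evaluation in which "injecting a fault" simply forces a basic event's probability to 1. This is a generic sketch, not the Reliability Workbench model:

```python
# Minimal fault-tree sketch: top event = pump fails AND (valve OR sensor fails).
# Basic events are assumed independent: OR via inclusion-exclusion, AND via
# the product rule. Injecting a fault sets a basic-event probability to 1.0,
# mimicking the dll-based fault injection described above (illustrative only).
def top_event(p_pump, p_valve, p_sensor):
    p_or = p_valve + p_sensor - p_valve * p_sensor
    return p_pump * p_or

nominal = top_event(0.01, 0.02, 0.03)
injected = top_event(1.0, 0.02, 0.03)   # pump failure injected

assert injected > nominal
# With the pump forced failed, the top event reduces to the OR branch
assert abs(injected - (0.02 + 0.03 - 0.02 * 0.03)) < 1e-12
```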

  11. NASA GISS Climate Change Research Initiative: A Multidisciplinary Vertical Team Model for Improving STEM Education by Using NASA's Unique Capabilities.

    Science.gov (United States)

    Pearce, M. D.

    2017-12-01

    CCRI is a year-long STEM education program designed to bring together teams of NASA scientists, graduate, undergraduate and high school interns, and high school STEM educators to become immersed in NASA research focused on atmospheric and climate change in the 21st century. GISS climate research combines analysis of global datasets with global models of atmospheric, land surface, and oceanic processes to study climate change on Earth and other planetary atmospheres, a useful tool in assessing our general understanding of climate change. CCRI interns conduct research, gain knowledge in their assigned research discipline, and develop and present scientific presentations summarizing their research experience. Specifically, CCRI interns write a scientific research paper covering the basic ideas, research protocols, abstract, results, conclusions and experimental design; prepare and present a professional presentation of their research project at NASA GISS; and prepare and present a scientific poster of their research project at local and national research symposiums along with other federal agencies. CCRI educators lead research teams under the direction of a NASA GISS scientist, conduct research, develop research-based learning units and assist NASA scientists with the mentoring of interns. Educators create an applied-research STEM curriculum unit portfolio based on their research experience, integrating NASA-unique resources, tools and content into a teacher-developed unit plan aligned with state and NGSS standards. STEM educators also integrate and implement NASA-unique units and content into their STEM courses during the academic year, perform community STEM education engagement events, and mentor interns in writing a research paper, oral research reporting, presentation design and scientific poster design for presentation to local and national audiences. The CCRI program contributes to the Federal Co-STEM initiatives by providing opportunities, NASA education resources and

  12. Platelet autologous growth factors decrease the osteochondral regeneration capability of a collagen-hydroxyapatite scaffold in a sheep model

    Directory of Open Access Journals (Sweden)

    Giavaresi Gianluca

    2010-09-01

    Full Text Available Abstract Background: Current research aims to develop innovative approaches to improve chondral and osteochondral regeneration. The objective of this study was to investigate the regenerative potential of platelet-rich plasma (PRP) to enhance the repair process of a collagen-hydroxyapatite scaffold in osteochondral defects in a sheep model. Methods: PRP was added to a new, multi-layer gradient, nanocomposite scaffold that was obtained by nucleating collagen fibrils with hydroxyapatite nanoparticles. Twenty-four osteochondral lesions were created in sheep femoral condyles. The animals were randomised to three treatment groups: scaffold, scaffold loaded with autologous PRP, and empty defect (control). The animals were sacrificed and evaluated six months after surgery. Results: Gross evaluation and histology of the specimens showed good integration of the chondral surface in both treatment groups. Significantly better bone regeneration and cartilage surface reconstruction were observed in the group treated with the scaffold alone. Incomplete bone regeneration and irregular cartilage surface integration were observed in the group treated with the scaffold to which PRP was added. In the control group, no bone and cartilage defect healing occurred; defects were filled with fibrous tissue. Quantitative macroscopic and histological score evaluations confirmed the qualitative trends observed. Conclusions: The hydroxyapatite-collagen scaffold enhanced osteochondral lesion repair, but the combination with platelet growth factors did not have an additive effect; on the contrary, PRP administration had a negative effect on the results obtained by disturbing the regenerative process. In the scaffold + PRP group, highly amorphous cartilaginous repair tissue and poorly spatially organised underlying bone tissue were found.

  13. Lightweighting Automotive Materials for Increased Fuel Efficiency and Delivering Advanced Modeling and Simulation Capabilities to U.S. Manufacturers

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Steve

    2013-09-11

    Abstract The National Center for Manufacturing Sciences (NCMS) worked with the U.S. Department of Energy (DOE), National Energy Technology Laboratory (NETL), to bring together research and development (R&D) collaborations to develop and accelerate the knowledge base and infrastructure for lightweighting materials and manufacturing processes for use in structural applications in the automotive sector. The purpose/importance of this DOE program: • 2016 CAFE standards. • Automotive industry technology that shall adopt the insertion of lightweighting material concepts towards manufacturing of production vehicles. • Development and manufacture of advanced research tools for modeling and simulation (M&S) applications to reduce manufacturing and material costs. • U.S. competitiveness that will help drive the development and manufacture of the next generation of materials. NCMS established a focused portfolio of applied R&D projects utilizing lightweighting materials for manufacture into automotive structures and components. Areas that were targeted in this program: • Functionality of new lightweighting materials to meet present safety requirements. • Manufacturability using new lightweighting materials. • Cost reduction for the development and use of new lightweighting materials. The automotive industry’s future continuously evolves through innovation, and lightweight materials are key in achieving a new era of lighter, more efficient vehicles. Lightweight materials are among the technical advances needed to achieve fuel/energy efficiency and reduce carbon dioxide (CO2) emissions: • Establish design criteria methodology to identify the best materials for lightweighting. • Employ state-of-the-art design tools for optimum material development for their specific applications. • Match new manufacturing technology to production volume. • Address new process variability with new production-ready processes.

  14. Full Semantics Preservation in Model Transformation - A Comparison of Proof Techniques

    NARCIS (Netherlands)

    Hülsbusch, Mathias; König, Barbara; Rensink, Arend; Semenyak, Maria; Soltenborn, Christian; Wehrheim, Heike

    Model transformation is a prime technique in modern, model-driven software design. One of the most challenging issues is to show that the semantics of the models is not affected by the transformation. So far, there is hardly any research into this issue, in particular in those cases where the source

  15. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  16. Capabilities for Strategic Adaptation

    DEFF Research Database (Denmark)

    Distel, Andreas Philipp

This dissertation explores capabilities that enable firms to strategically adapt to environmental changes and preserve competitiveness over time – often referred to as dynamic capabilities. While dynamic capabilities are a popular research domain, too little is known about what these capabilities are in terms of their constituent elements, where they come from, and how their effectiveness can be fostered. Thus, the dissertation’s aim is to address these gaps by advancing our understanding of the multilevel aspects and micro-foundations of dynamic capabilities. In doing so, it focuses on capabilities for sensing and seizing new business opportunities and reconfiguring corporate resources. More specifically, the dissertation examines the role of key organization members, such as knowledge workers and top managers, in defining and building these capabilities. Moreover, it investigates how...

  17. Modeling the Differences in Biochemical Capabilities of Pseudomonas Species by Flux Balance Analysis: How Good Are Genome-Scale Metabolic Networks at Predicting the Differences?

    Directory of Open Access Journals (Sweden)

    Parizad Babaei

    2014-01-01

    Full Text Available To date, several genome-scale metabolic networks have been reconstructed. These models cover a wide range of organisms, from bacteria to human. Such models have provided us with a framework for systematic analysis of metabolism. However, little effort has been put towards comparing biochemical capabilities of closely related species using their metabolic models. The accuracy of a model is highly dependent on the reconstruction process, as some errors may be included in the model during reconstruction. In this study, we investigated the ability of three Pseudomonas metabolic models to predict the biochemical differences, namely, iMO1086, iJP962, and iSB1139, which are related to P. aeruginosa PAO1, P. putida KT2440, and P. fluorescens SBW25, respectively. We did a comprehensive literature search for previous works containing biochemically distinguishable traits over these species. Amongst more than 1700 articles, we chose a subset of them which included experimental results suitable for in silico simulation. By simulating the conditions provided in the actual biological experiment, we performed case-dependent tests to compare the in silico results to the biological ones. We found out that iMO1086 and iJP962 were able to predict the experimental data and were much more accurate than iSB1139.
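Flux balance analysis, the method named in the title, reduces to a linear program: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A minimal sketch on a hypothetical three-reaction toy network (illustrative only, not any of iMO1086, iJP962, or iSB1139):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: rows = metabolites (A, B),
# columns = reactions (uptake -> A, A -> B, B -> "biomass" drain).
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A balance
    [0.0,  1.0, -1.0],   # metabolite B balance
])

c = np.array([0.0, 0.0, -1.0])  # linprog minimizes, so negate biomass flux
bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
biomass_flux = res.x[2]
print(biomass_flux)  # 10.0: growth is limited by the uptake bound
```

Simulating a growth condition, as in the case-dependent tests above, amounts to adjusting the exchange-reaction bounds and re-solving.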

  18. Modeling the differences in biochemical capabilities of pseudomonas species by flux balance analysis: how good are genome-scale metabolic networks at predicting the differences?

    Science.gov (United States)

    Babaei, Parizad; Ghasemi-Kahrizsangi, Tahereh; Marashi, Sayed-Amir

    2014-01-01

    To date, several genome-scale metabolic networks have been reconstructed. These models cover a wide range of organisms, from bacteria to human. Such models have provided us with a framework for systematic analysis of metabolism. However, little effort has been put towards comparing biochemical capabilities of closely related species using their metabolic models. The accuracy of a model is highly dependent on the reconstruction process, as some errors may be included in the model during reconstruction. In this study, we investigated the ability of three Pseudomonas metabolic models to predict the biochemical differences, namely, iMO1086, iJP962, and iSB1139, which are related to P. aeruginosa PAO1, P. putida KT2440, and P. fluorescens SBW25, respectively. We did a comprehensive literature search for previous works containing biochemically distinguishable traits over these species. Amongst more than 1700 articles, we chose a subset of them which included experimental results suitable for in silico simulation. By simulating the conditions provided in the actual biological experiment, we performed case-dependent tests to compare the in silico results to the biological ones. We found out that iMO1086 and iJP962 were able to predict the experimental data and were much more accurate than iSB1139.

  19. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  20. Prediction of intracranial findings on CT-scans by alternative modelling techniques

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); M. Smits (Marion); D.W.J. Dippel (Diederik); M.G.M. Hunink (Myriam); E.W. Steyerberg (Ewout)

    2011-01-01

    textabstractBackground: Prediction rules for intracranial traumatic findings in patients with minor head injury are designed to reduce the use of computed tomography (CT) without missing patients at risk for complications. This study investigates whether alternative modelling techniques might

  1. SU-E-T-134: Assessing the Capabilities of An MU Model for Fields as Small as 2cm in a Passively Scattered Proton Beam

    International Nuclear Information System (INIS)

    Simpson, R; Ghebremedhin, A; Gordon, I; Patyal, B

    2015-01-01

    Purpose: To assess and expand the capabilities of the current MU model for a passively scattered proton beam. The expanded MU model can potentially be used to predict the dose/MU for fields smaller than 2cm in diameter and reduce time needed for physical calibrations. Methods: The current MU model accurately predicted the dose/MU for more than 800 fields when compared to physical patient calibrations. Three different ion chambers were used in a Plastic Water phantom for physical measurements: T1, PIN, and A-16. The original MU model predicted output for fields that were affected by the bolus gap factor (BGF) and nozzle extension factor (NEF). As the system was tested for smaller treatment fields, the mod wheel dependent field size factor (MWDFSF) had to be included to describe the changes observed in treatment fields smaller than 3cm. The expanded model used Clarkson integration to determine the appropriate value for each factor (field size factor (FSF), BGF, NEF, and MWDFSF), to accurately predict the dose/MU for fields smaller than 2.5cm in effective diameter. Results: The expanded MU model demonstrated agreement better than 2% for more than 800 physical calibrations that were tested. The minimum tested fields were 1.7cm effective diameter for 149MeV and 2.4cm effective diameter for 186MeV. The inclusion of Clarkson integration into the MU model enabled accurate prediction of the dose/MU for very small and irregularly shaped treatment fields. Conclusion: The MU model accurately predicted the dose/MU for a wide range of treatment fields used in the clinic. The original MU model has been refined using factors that were problematic to accurately predict the dose/MU: the BGF, NEF, and MWDFSF. The MU model has minimized the time for determining dose/MU and reduced the time needed for physical calibrations, improving the efficiency of the patient treatment process

  2. A Shell/3D Modeling Technique for the Analysis of Delaminated Composite Laminates

    Science.gov (United States)

Krueger, Ronald; O'Brien, T. Kevin

    2000-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local 3D model and the global structural model which has been meshed with shell finite elements. Double Cantilever Beam, End Notched Flexure, and Single Leg Bending specimens were analyzed first using full 3D finite element models to obtain reference solutions. Mixed mode strain energy release rate distributions were computed using the virtual crack closure technique. The analyses were repeated using the shell/3D technique to study the feasibility for pure mode I, mode II and mixed mode I/II cases. Specimens with a unidirectional layup and with a multidirectional layup were simulated. For a local 3D model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
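The mixed-mode strain energy release rates mentioned above follow, in the virtual crack closure technique, from products of nodal forces at the crack tip and relative displacements behind it. As a hedged illustration, the standard two-dimensional form for crack-tip elements of length Δa and width b (not the paper's exact shell/3D formulation) is:

```latex
% 2D VCCT: mode I and mode II energy release rates at the delamination front
G_{I}  = \frac{1}{2\,b\,\Delta a}\, Z_i \left( w_\ell - w_{\ell^*} \right), \qquad
G_{II} = \frac{1}{2\,b\,\Delta a}\, X_i \left( u_\ell - u_{\ell^*} \right)
```

where $Z_i$ and $X_i$ are the normal and tangential forces at crack-tip node $i$, and $(w_\ell - w_{\ell^*})$, $(u_\ell - u_{\ell^*})$ are the relative opening and sliding displacements of the node pair $\ell$, $\ell^*$ behind the front.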

  3. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

Full Text Available This research paper explores and compares different modeling and analysis techniques, and then examines the model order reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond Graphs are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique of generating the Genetic design from the tree-structured transfer function obtained from the Bond Graph. This research work combines Bond Graphs for model representation with Genetic programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing typical Bond Graph elements with their impedance equivalents, specifying impedance laws for Bond Graph multiports. The tree-structured form thus obtained from the Bond Graph is applied to generate the Genetic tree. Application studies will identify key issues and the importance of advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function with the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is examined. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th order high pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and Bond Graph methods are compared and

  4. Propulsion modeling techniques and applications for the NASA Dryden X-30 real-time simulator

    Science.gov (United States)

    Hicks, John W.

    1991-01-01

    An overview is given of the flight planning activities to date in the current National Aero-Space Plane (NASP) program. The government flight-envelope expansion concept and other design flight operational assessments are discussed. The NASA Dryden NASP real-time simulator configuration is examined and hypersonic flight planning simulation propulsion modeling requirements are described. The major propulsion modeling techniques developed by the Edwards flight test team are outlined, and the application value of techniques for developmental hypersonic vehicles are discussed.

  5. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E.saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E.saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E.saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
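The deterministic difference-equation structure described here can be illustrated, in heavily simplified form, by the classic Knipling fertile-fraction model: a single life stage with fully sterile releases, far simpler than the paper's multi-stage model with F1-sterility and partially sterile males. All numbers below are illustrative assumptions:

```python
# Knipling-style toy: a wild population with per-generation growth rate lam;
# a constant number S of sterile insects is released each generation, so only
# a fraction N / (N + S) of matings are fertile.
def simulate(N0, lam, S, generations):
    N = N0
    history = [N]
    for _ in range(generations):
        N = lam * N * N / (N + S)   # only fertile matings reproduce
        history.append(N)
    return history

# A 9:1 release ratio drives this population toward extinction.
h = simulate(N0=1000.0, lam=5.0, S=9000.0, generations=8)
print(h[:3])  # [1000.0, 500.0, ...] -- each generation shrinks further
```

The suppression accelerates because the fertile fraction N/(N+S) falls as N declines, which is the qualitative mechanism release-ratio and release-frequency decisions are built on.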

  6. Validation of a musculoskeletal model of lifting and its application for biomechanical evaluation of lifting techniques.

    Science.gov (United States)

    Mirakhorlo, Mojtaba; Azghani, Mahmood Reza; Kahrizi, Sedighe

    2014-01-01

Lifting methods, including standing stance and technique, have wide effects on spine loading and stability. Previous studies explored lifting techniques in many biomechanical terms and documented changes in the muscular and postural response of the body as a function of technique. However, the impact of standing stance and lifting technique on the human musculoskeletal system had not been investigated concurrently. A whole-body musculoskeletal model of lifting was built in order to evaluate the impact of standing stance on muscle activation patterns and spine loading during each distinct lifting technique. The verified model was used with different stance widths during squat, stoop and semi-squat lifting to examine the effect of standing stance on each lifting technique. The model's muscle activity was validated against experimental muscle EMGs, resulting in Pearson's coefficients greater than 0.8. Results from the analytical analyses show that the effect of stance width on biomechanical parameters depends on the lifting technique, i.e. on what kind of standing stance was used. Standing stance in each distinct lifting technique exhibits positive and negative aspects, and neither can be recommended as better in terms of biomechanical parameters.

  7. 3D printing of high-resolution PLA-based structures by hybrid electrohydrodynamic and fused deposition modeling techniques

    International Nuclear Information System (INIS)

    Zhang, Bin; Seong, Baekhoon; Byun, Doyoung; Nguyen, VuDat

    2016-01-01

    Recently, the three-dimensional (3D) printing technique has received much attention for shape forming and manufacturing. The fused deposition modeling (FDM) printer is one of the various 3D printers available and has become widely used due to its simplicity, low-cost, and easy operation. However, the FDM technique has a limitation whereby its patterning resolution is too low at around 200 μm. In this paper, we first present a hybrid mechanism of electrohydrodynamic jet printing with the FDM technique, which we name E-FDM. We then develop a novel high-resolution 3D printer based on the E-FDM process. To determine the optimal condition for structuring, we also investigated the effect of several printing parameters, such as temperature, applied voltage, working height, printing speed, flow-rate, and acceleration on the patterning results. This method was capable of fabricating both high resolution 2D and 3D structures with the use of polylactic acid (PLA). PLA has been used to fabricate scaffold structures for tissue engineering, which has different hierarchical structure sizes. The fabrication speed was up to 40 mm/s and the pattern resolution could be improved to 10 μm. (paper)

  8. Rate capability for Na-doped Li1.167Ni0.18Mn0.548Co0.105O2 cathode material and characterization of Li-ion diffusion using galvanostatic intermittent titration technique

    International Nuclear Information System (INIS)

    Lim, Sung Nam; Seo, Jung Yoon; Jung, Dae Soo; Ahn, Wook; Song, Hoon Sub; Yeon, Sun-Hwa; Park, Seung Bin

    2015-01-01

Highlights: • Spherical Na-doped Li-rich cathode material prepared by spray pyrolysis. • Na-doped samples show better rate capability than the bare sample. • The Na-doped sample has a higher D_Li+ value at 4 V compared with the bare sample. • The cycle performance was enhanced from 83% to 92%. - Abstract: Spherical Li1.167−xNaxNi0.18Mn0.548Co0.105O2 (0 ⩽ x ⩽ 0.1) particles were prepared by spray pyrolysis, and subjected to electrochemical characterization for lithium battery applications. It was confirmed that Na doping enhances the charge/discharge rate capability. The structure of the prepared samples was characterized by XRD: the c-axis lattice parameter increases with increase in the amount of Na ions (parameterized by x, above). The Na-doped sample with x = 0.05 shows capacities of 208 and 184 mA h g−1 at high current densities of 1.0 C and 2.0 C, respectively. These values are enhanced, compared to values of 189 and 167 mA h g−1 for the bare sample. The ratio of the capacity at 1.0 C to that at 0.1 C is enhanced from 77% for the bare sample to 84% for the Na-doped sample with x = 0.05. The Li diffusion coefficients obtained from the galvanostatic intermittent titration technique (GITT) are higher for Na-doped samples than for the bare sample. In particular, the Na-doped sample (x = 0.05), in the potential range around 4 V, has a higher D_Li+ value of 3.34 × 10−9 cm2 s−1, compared with 1.35 × 10−9 cm2 s−1 for the bare sample. The Na-doped samples (0 < x < 0.075) show high capacity retention: the Na-doped sample (x = 0.05) shows a capacity retention of 92% compared to 83% for the bare sample
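GITT analyses of this kind conventionally extract the chemical diffusion coefficient from the Weppner–Huggins relation; a sketch of the standard form (with symbols as commonly defined, not values from this abstract) is:

```latex
% Weppner-Huggins GITT estimate of the chemical diffusion coefficient
D_{\mathrm{Li^+}} = \frac{4}{\pi\,\tau}
  \left( \frac{m_B V_M}{M_B S} \right)^{2}
  \left( \frac{\Delta E_s}{\Delta E_\tau} \right)^{2},
\qquad \tau \ll \frac{L^{2}}{D}
```

where $\tau$ is the current-pulse duration, $m_B$, $M_B$ and $V_M$ are the mass, molar mass and molar volume of the active material, $S$ is the electrode–electrolyte contact area, $\Delta E_s$ is the steady-state voltage change per pulse, $\Delta E_\tau$ is the transient voltage change during the pulse, and $L$ is the characteristic diffusion length.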

  9. New sunshine-based models for predicting global solar radiation using PSO (particle swarm optimization) technique

    International Nuclear Information System (INIS)

    Behrang, M.A.; Assareh, E.; Noghrehabadi, A.R.; Ghanbarzadeh, A.

    2011-01-01

PSO (particle swarm optimization) technique is applied to estimate monthly average daily GSR (global solar radiation) on horizontal surface for different regions of Iran. To achieve this, five new models were developed as well as six models were chosen from the literature. First, for each city, the empirical coefficients for all models were separately determined using PSO technique. The results indicate that new models which are presented in this study have better performance than existing models in the literature for 10 cities from 17 considered cities in this study. It is also shown that the empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitude. Some case studies are presented to demonstrate this generalization with the result showing good agreement with the measurements. More importantly, these case studies further validate the models developed, and demonstrate the general applicability of the models developed. Finally, the obtained results of PSO technique were compared with the obtained results of SRTs (statistical regression techniques) on Angstrom model for all 17 cities. The results showed that obtained empirical coefficients for Angstrom model based on PSO have more accuracy than SRTs for all 17 cities. -- Highlights: • The first study to apply an intelligent optimization technique to more accurately determine empirical coefficients in solar radiation models. • New models which are presented in this study have better performance than existing models. • The empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitude. • A fair comparison between the performance of PSO and SRTs on GSR modeling.
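The coefficient-fitting step can be sketched with a minimal global-best PSO applied to the Angstrom–Prescott form H/H0 = a + b·(S/S0). The synthetic data, search bounds and PSO constants below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic clearness-index data following H/H0 = a + b*(S/S0);
# the "true" coefficients are assumptions for this demo.
a_true, b_true = 0.25, 0.50
s = rng.uniform(0.2, 0.9, 50)                  # relative sunshine duration S/S0
h = a_true + b_true * s + rng.normal(0, 0.01, 50)

def mse(p):                                    # fitting objective, p = (a, b)
    return np.mean((h - (p[0] + p[1] * s)) ** 2)

# Minimal global-best PSO
n, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
x = rng.uniform(0.0, 1.0, (n, dim))            # particle positions
v = np.zeros((n, dim))                         # particle velocities
pbest = x.copy()
pcost = np.array([mse(p) for p in x])
gbest = pbest[pcost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    cost = np.array([mse(p) for p in x])
    improved = cost < pcost
    pbest[improved], pcost[improved] = x[improved], cost[improved]
    gbest = pbest[pcost.argmin()].copy()

print(gbest)  # approximately (0.25, 0.50)
```

A statistical regression (least-squares) fit of the same linear form gives essentially the same coefficients here; PSO's advantage, as the abstract argues, shows up for the nonlinear model forms where regression is less direct.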

  10. People Capability Maturity Model. SM.

    Science.gov (United States)

    1995-09-01

    of Defense (DoD) necessary to complete this work. We also thank Arlene Dukanauskas (U.S. Army, DISC4) and Joyce France (Office of the Assistant...SEI-95-MM-02 Acknowledgments include Eduardo Cadena ( Servicios en Informatica, Mexico), Nancy Chauncey (U.S. Army), Pat D. Delohery (Trecom

  11. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media

  12. High Altitude Long Endurance UAV Analysis Model Development and Application Study Comparing Solar Powered Airplane and Airship Station-Keeping Capabilities

    Science.gov (United States)

    Ozoroski, Thomas A.; Nickol, Craig L.; Guynn, Mark D.

    2015-01-01

    There have been ongoing efforts in the Aeronautics Systems Analysis Branch at NASA Langley Research Center to develop a suite of integrated physics-based computational utilities suitable for modeling and analyzing extended-duration missions carried out using solar powered aircraft. From these efforts, SolFlyte has emerged as a state-of-the-art vehicle analysis and mission simulation tool capable of modeling both heavier-than-air (HTA) and lighter-than-air (LTA) vehicle concepts. This study compares solar powered airplane and airship station-keeping capability during a variety of high altitude missions, using SolFlyte as the primary analysis component. Three Unmanned Aerial Vehicle (UAV) concepts were designed for this study: an airplane (Operating Empty Weight (OEW) = 3285 kilograms, span = 127 meters, array area = 450 square meters), a small airship (OEW = 3790 kilograms, length = 115 meters, array area = 570 square meters), and a large airship (OEW = 6250 kilograms, length = 135 meters, array area = 1080 square meters). All the vehicles were sized for payload weight and power requirements of 454 kilograms and 5 kilowatts, respectively. Seven mission sites distributed throughout the United States were selected to provide a basis for assessing the vehicle energy budgets and site-persistent operational availability. Seasonal, 30-day duration missions were simulated at each of the sites during March, June, September, and December; one-year duration missions were simulated at three of the sites. Atmospheric conditions during the simulated missions were correlated to National Climatic Data Center (NCDC) historical data measurements at each mission site, at four flight levels. Unique features of the SolFlyte model are described, including methods for calculating recoverable and energy-optimal flight trajectories and the effects of shadows on solar energy collection. Results of this study indicate that: 1) the airplane concept attained longer periods of on

  13. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

The purpose of this working paper is to describe modelling techniques for predicting the long term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and natural environmental stresses on aquatic populations. At the outset it should be stated that there is no existing model which can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date

  14. Real-time reservoir geological model updating using the hybrid EnKF and geostatistical technique

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Chen, S.; Yang, D. [Regina Univ., SK (Canada). Petroleum Technology Research Centre

    2008-07-01

Reservoir simulation plays an important role in modern reservoir management. Multiple geological models are needed in order to analyze the uncertainty of a given reservoir development scenario. Ideally, dynamic data should be incorporated into a reservoir geological model. This can be done by using history matching and tuning the model to match the past performance of the reservoir. This study proposed an assisted history matching technique to accelerate and improve the matching process. The Ensemble Kalman Filter (EnKF) technique, which is an efficient assisted history matching method, was integrated with a conditional geostatistical simulation technique to dynamically update reservoir geological models. The updated models were constrained to dynamic data, such as reservoir pressure and fluid saturations, and remained geologically realistic at each time step through the EnKF technique. The new technique was successfully applied in a heterogeneous synthetic reservoir. The uncertainty of the reservoir characterization was significantly reduced. More accurate forecasts were obtained from the updated models. 3 refs., 2 figs.
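The EnKF update at the core of such a workflow can be sketched in a few lines: the state is carried as an ensemble, the Kalman gain is built from the ensemble covariance, and each member is nudged toward a perturbed observation. This is a toy, low-dimensional analysis step; the state size, observation operator and noise levels are illustrative, not the paper's reservoir model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy EnKF analysis step: estimate a 2-component state from one scalar
# observation y = H x + noise.
n_ens, n_state = 100, 2
H = np.array([[1.0, 0.5]])        # observation operator
R = np.array([[0.04]])            # observation-error covariance
y_obs = np.array([1.2])           # the measured value

X = rng.normal(0.0, 1.0, (n_state, n_ens))   # prior ensemble (one member per column)

# Ensemble mean, anomalies, covariance, and Kalman gain
Xm = X.mean(axis=1, keepdims=True)
A = X - Xm
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member against a perturbed observation (stochastic EnKF)
Y = y_obs[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), (1, n_ens))
X_a = X + K @ (Y - H @ X)

print(X_a.mean(axis=1))  # posterior mean, pulled toward the observation
```

In the history-matching setting, X would hold gridded permeability/porosity fields (augmented with pressures and saturations) and H would be the reservoir simulator's mapping to well observations, applied sequentially at each assimilation time.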

  15. Numerical Stability and Control Analysis Towards Falling-Leaf Prediction Capabilities of Splitflow for Two Generic High-Performance Aircraft Models

    Science.gov (United States)

    Charlton, Eric F.

    1998-01-01

Aerodynamic analyses are performed using the Lockheed-Martin Tactical Aircraft Systems (LMTAS) Splitflow computational fluid dynamics code to investigate the computational prediction capabilities for vortex-dominated flow fields of two different tailless aircraft models at large angles of attack and sideslip. These computations are performed with the goal of providing useful stability and control data to designers of high performance aircraft. Appropriate metrics for accuracy, time, and ease of use are determined in consultation with both the LMTAS Advanced Design and Stability and Control groups. Results are obtained and compared to wind-tunnel data for all six components of forces and moments. Moment data are combined to form a "falling leaf" stability analysis. Finally, a handful of viscous simulations were also performed to further investigate nonlinearities and possible viscous effects in the differences between the accumulated inviscid computational and experimental data.

  16. Uncertainty analysis in rainfall-runoff modelling : Application of machine learning techniques

    NARCIS (Netherlands)

    Shrestha, D.l.

    2009-01-01

    This thesis presents powerful machine learning (ML) techniques to build predictive models of uncertainty with application to hydrological models. Two different methods are developed and tested. First one focuses on parameter uncertainty analysis by emulating the results of Monte Carlo simulations of

  17. Uncertainty Analysis in Rainfall-Runoff Modelling: Application of Machine Learning Techniques

    NARCIS (Netherlands)

    Shrestha, D.L.

    2009-01-01

    This thesis presents powerful machine learning (ML) techniques to build predictive models of uncertainty with application to hydrological models. Two different methods are developed and tested. First one focuses on parameter uncertainty analysis by emulating the results of Monte Carlo simulations of

  18. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  19. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    Science.gov (United States)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
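The separable (variable-projection) idea can be sketched on a single-exponential toy model y(t) = a·e^(−kt) + b: for any fixed value of the nonlinear rate k, the linear amplitudes (a, b) have a closed-form least-squares solution, so the exhaustive search described above runs over one dimension instead of three. The model, grid and parameter values are illustrative, not the paper's compartment models:

```python
import numpy as np

# Toy model: y(t) = a * exp(-k t) + b, with one nonlinear parameter k.
t = np.linspace(0.0, 5.0, 60)
a_true, b_true, k_true = 2.0, 0.5, 1.3
y = a_true * np.exp(-k_true * t) + b_true

def linear_fit(k):
    """For a fixed k, solve the linear parameters (a, b) in closed form."""
    G = np.column_stack([np.exp(-k * t), np.ones_like(t)])  # basis for this k
    coef, *_ = np.linalg.lstsq(G, y, rcond=None)
    resid = y - G @ coef
    return coef, float(resid @ resid)

# Exhaustive 1-D search over the single nonlinear parameter:
# the projected objective guarantees the global minimum to grid precision.
ks = np.linspace(0.1, 3.0, 2901)
costs = [linear_fit(k)[1] for k in ks]
k_hat = ks[int(np.argmin(costs))]
(a_hat, b_hat), _ = linear_fit(k_hat)

print(k_hat, a_hat, b_hat)  # recovers approximately 1.3, 2.0, 0.5
```

In the multi-tracer case the design matrix G simply gains one column block per tracer, so the same projection collapses all amplitude parameters at once and only the shared nonlinear rates remain in the outer search.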

  20. Generation of 3-D finite element models of restored human teeth using micro-CT techniques.

    NARCIS (Netherlands)

    Verdonschot, N.J.J.; Fennis, W.M.M.; Kuys, R.H.; Stolk, J.; Kreulen, C.M.; Creugers, N.H.J.

    2001-01-01

    PURPOSE: This article describes the development of a three-dimensional finite element model of a premolar based on a microscale computed tomographic (CT) data-acquisition technique. The development of the model is part of a project studying the optimal design and geometry of adhesive tooth-colored

  1. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    Science.gov (United States)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  2. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)

  3. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially the Principal Component Analysis and the Independent Component Analysis.
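As a minimal sketch of the Principal Component Analysis step (the mode shapes, tune, and noise level below are invented, not real BPM histories), the physical modes emerge as the leading singular vectors of the centered turn-by-turn history matrix, with singular values standing well above the noise floor:

```python
import numpy as np

rng = np.random.default_rng(0)
n_turns, n_bpms = 2000, 40
s = np.linspace(0.0, 2 * np.pi, n_bpms, endpoint=False)

# Two "physical modes": a betatron-like oscillation sampled at two phases,
# plus small random noise at every BPM (purely illustrative data).
turns = np.arange(n_turns)
mode1 = np.cos(0.31 * 2 * np.pi * turns)[:, None] * np.cos(s)[None, :]
mode2 = np.sin(0.31 * 2 * np.pi * turns)[:, None] * np.sin(s)[None, :]
B = mode1 + mode2 + 0.01 * rng.standard_normal((n_turns, n_bpms))

# Principal Component Analysis via SVD of the centered history matrix.
B0 = B - B.mean(axis=0)
U, sing, Vt = np.linalg.svd(B0, full_matrices=False)

# The two dominant singular values stand far above the noise floor.
print(sing[:4])
```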

  4. Developing Alliance Capabilities

    DEFF Research Database (Denmark)

    Heimeriks, Koen H.; Duysters, Geert; Vanhaverbeke, Wim

This paper assesses the differential performance effects of learning mechanisms on the development of alliance capabilities. Prior research has suggested that different capability levels could be identified in which specific intra-firm learning mechanisms are used to enhance a firm's alliance capability. However, empirical testing in this field is scarce and little is known as to what extent different learning mechanisms are indeed useful in advancing a firm's alliance capability. This paper analyzes to what extent intra-firm learning mechanisms help firms develop their alliance capability. Differential learning may explain in what way firms yield superior returns from their alliances in comparison to competitors. The empirical results show that different learning mechanisms have different performance effects at different stages of the alliance capability development process. The main lesson from...

  5. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23), log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and the particular opacity sampling method used here are discussed in the light of the results obtained. (author)

  6. Modeling techniques used in the communications link analysis and simulation system (CLASS)

    Science.gov (United States)

    Braun, W. R.; Mckenzie, T. M.

    1985-01-01

    CLASS (Communications Link Analysis and Simulation System) is a software package developed for NASA to predict the communication and tracking performance of the Tracking and Data Relay Satellite System (TDRSS) services. The modeling techniques used in CLASS are described. The components of TDRSS and the performance parameters to be computed by CLASS are too diverse to permit the use of a single technique to evaluate all performance measures. Hence, each CLASS module applies the modeling approach best suited for a particular subsystem and/or performance parameter in terms of model accuracy and computational speed.

  7. Comparison of bag-valve-mask hand-sealing techniques in a simulated model.

    Science.gov (United States)

    Otten, David; Liao, Michael M; Wolken, Robert; Douglas, Ivor S; Mishra, Ramya; Kao, Amanda; Barrett, Whitney; Drasler, Erin; Byyny, Richard L; Haukoos, Jason S

    2014-01-01

    Bag-valve-mask ventilation remains an essential component of airway management. Rescuers continue to use both traditional 1- or 2-handed mask-face sealing techniques, as well as a newer modified 2-handed technique. We compare the efficacy of 1-handed, 2-handed, and modified 2-handed bag-valve-mask technique. In this prospective, crossover study, health care providers performed 1-handed, 2-handed, and modified 2-handed bag-valve-mask ventilation on a standardized ventilation model. Subjects performed each technique for 5 minutes, with 3 minutes' rest between techniques. The primary outcome was expired tidal volume, defined as percentage of total possible expired tidal volume during a 5-minute bout. A specialized inline monitor measured expired tidal volume. We compared 2-handed versus modified 2-handed and 2-handed versus 1-handed techniques. We enrolled 52 subjects: 28 (54%) men, 32 (62%) with greater than or equal to 5 actual emergency bag-valve-mask situations. Median expired tidal volume percentage for 1-handed technique was 31% (95% confidence interval [CI] 17% to 51%); for 2-handed technique, 85% (95% CI 78% to 91%); and for modified 2-handed technique, 85% (95% CI 82% to 90%). Both 2-handed (median difference 47%; 95% CI 34% to 62%) and modified 2-handed technique (median difference 56%; 95% CI 29% to 65%) resulted in significantly higher median expired tidal volume percentages compared with 1-handed technique. The median expired tidal volume percentages between 2-handed and modified 2-handed techniques did not significantly differ from each other (median difference 0; 95% CI -2% to 2%). In a simulated model, both 2-handed mask-face sealing techniques resulted in higher ventilatory tidal volumes than 1-handed technique. Tidal volumes from 2-handed and modified 2-handed techniques did not differ. Rescuers should perform bag-valve-mask ventilation with 2-handed techniques. Copyright © 2013 American College of Emergency Physicians. Published by Mosby
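The study's primary comparison, a median paired difference with a 95% confidence interval, can be reproduced in outline with a bootstrap. The numbers below are simulated stand-ins matching the reported magnitudes, not the study data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired expired-tidal-volume percentages for 52 subjects
# (invented data roughly matching the reported medians):
one_handed = rng.normal(31, 12, size=52).clip(0, 100)
two_handed = rng.normal(85, 8, size=52).clip(0, 100)

# Bootstrap 95% CI for the median paired difference.
diff = two_handed - one_handed
boot = np.array([np.median(rng.choice(diff, size=diff.size, replace=True))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"median diff {np.median(diff):.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```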

  8. A novel model surgery technique for LeFort III advancement.

    Science.gov (United States)

    Vachiramon, Amornpong; Yen, Stephen L-K; Lypka, Michael; Bindignavale, Vijay; Hammoudeh, Jeffrey; Reinisch, John; Urata, Mark M

    2007-09-01

    Current techniques for model surgery and occlusal splint fabrication lack the ability to mark, measure and plan the position of the orbital rim for LeFort III and Monobloc osteotomies. This report describes a model surgery technique for planning the three dimensional repositioning of the orbital rims. Dual orbital pointers were used to mark the infraorbital rim during the facebow transfer. These pointer positions were transferred onto the surgical models in order to follow splint-determined movements. Case reports are presented to illustrate how the model surgery technique was used to differentiate the repositioning of the orbital rim from the occlusal correction in single segment and combined LeFort III/LeFort I osteotomies.

  9. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement while controlling for student intelligence. This is an experimental study. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that, after controlling for student intelligence, the thermodynamics achievement of students taught with the environmental-utilization learning model is higher than that of students taught with animated simulations. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, thermodynamics lectures should use the environmental learning model together with the project assessment technique.

  10. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

Full Text Available A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example, control-flow, human resources, and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system quickly running, and then continue to develop it in an agile manner. This article consists of two parts: the first sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which is appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that set restrictions on allowed states and transitions between them. The article details the background for the project of developing the data-centric process modeling technique, presents the outline of the structure of the model, and gives formal definitions for a substantial part of the model.

  11. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection through the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to variation of diverse factors. - Abstract: With the improvement of digital technologies, a digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of FTT reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection through the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability
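One common way fault coverage enters an unavailability estimate, shown here as a textbook-style sketch rather than the paper's integrated-coverage model, is to split failures into a covered fraction repaired at the normal repair rate and an uncovered fraction that persists until the next surveillance test. All rates and intervals below are illustrative assumptions:

```python
# Covered faults (fraction c) are revealed immediately and repaired at
# rate mu; uncovered faults (1 - c) remain latent for T_test/2 on average.
lam = 1e-4      # failure rate (per hour), illustrative
mu = 0.1        # repair rate (per hour), illustrative
c = 0.95        # assumed fault coverage
T_test = 720.0  # surveillance test interval (hours)

U_detected = c * lam / (lam + mu)            # steady-state repair model
U_undetected = (1 - c) * lam * T_test / 2.0  # mean exposure until test
U = U_detected + U_undetected
print(f"{U:.2e}")
```

Even with 95% coverage, the latent (uncovered) term dominates here, which is why coverage is such a sensitive variable in these models.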

  12. Two-dimensional gel electrophoresis image registration using block-matching techniques and deformation models.

    Science.gov (United States)

    Rodriguez, Alvaro; Fernandez-Lozano, Carlos; Dorado, Julian; Rabuñal, Juan R

    2014-06-01

Block-matching techniques have been widely used in the task of estimating displacement in medical images, and they represent the best approach in scenes with deformable structures such as tissues, fluids, and gels. In this article, a new iterative block-matching technique, based on successive deformation, search, fitting, filtering, and interpolation stages, is proposed to measure elastic displacements in two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) images. The proposed technique uses different deformation models in the task of correlating proteins in real 2D electrophoresis gel images, obtaining an accuracy of 96.6% and improving on the results obtained with other techniques. This technique represents a general solution, being easy to adapt to different 2D deformable cases and providing an experimental reference for block-matching algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
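A bare-bones version of the search stage, exhaustive block matching scored by normalized cross-correlation, might look as follows (synthetic image; the deformation, fitting, filtering, and interpolation stages of the paper are omitted):

```python
import numpy as np

def best_match(block, search, step=1):
    """Exhaustive block matching: slide `block` over `search` and return
    the offset with the highest normalized cross-correlation."""
    bh, bw = block.shape
    b = block - block.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(0, search.shape[0] - bh + 1, step):
        for dx in range(0, search.shape[1] - bw + 1, step):
            win = search[dy:dy + bh, dx:dx + bw]
            w = win - win.mean()
            denom = np.sqrt((b * b).sum() * (w * w).sum())
            score = (b * w).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(1)
img = rng.random((64, 64))
block = img[20:36, 24:40]          # 16x16 patch taken at (20, 24)
dy, dx = best_match(block, img)
print(dy, dx)                      # recovers the original location
```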

  13. A Shell/3D Modeling Technique for the Analyses of Delaminated Composite Laminates

    Science.gov (United States)

Krueger, Ronald; O'Brien, T. Kevin

    2001-01-01

    A shell/3D modeling technique was developed for which a local three-dimensional solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local three-dimensional model and the global structural model which has been meshed with plate or shell finite elements. Double Cantilever Beam (DCB), End Notched Flexure (ENF), and Single Leg Bending (SLB) specimens were modeled using the shell/3D technique to study the feasibility for pure mode I (DCB), mode II (ENF) and mixed mode I/II (SLB) cases. Mixed mode strain energy release rate distributions were computed across the width of the specimens using the virtual crack closure technique. Specimens with a unidirectional layup and with a multidirectional layup where the delamination is located between two non-zero degree plies were simulated. For a local three-dimensional model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
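The virtual crack closure technique itself reduces, for a single node pair in 2D, to a product of crack-tip nodal forces and the relative displacements of the node pair behind the front. The numbers below are illustrative placeholders, not values from the DCB/ENF/SLB models:

```python
# Virtual crack closure technique, one 2D node pair behind the front:
#   G_I  = F_y * dv / (2 * da * b),   G_II = F_x * du / (2 * da * b)
# F_x, F_y: crack-tip nodal forces; du, dv: relative sliding/opening
# displacements behind the tip; da: element length at the front;
# b: width associated with the node. All numbers are illustrative.
da, b = 0.5, 1.0                    # element length, width (mm)
Fx, Fy = 8.0, 20.0                  # tip nodal forces (N)
du, dv = 0.004, 0.012               # relative displacements (mm)

G_I = Fy * dv / (2.0 * da * b)      # mode I energy release rate (N/mm)
G_II = Fx * du / (2.0 * da * b)     # mode II energy release rate (N/mm)
G_total = G_I + G_II
mode_mix = G_II / G_total           # mixed-mode ratio G_II / G_T
print(G_I, G_II, round(mode_mix, 3))
```

Summing such contributions node pair by node pair across the specimen width yields the mixed-mode distributions discussed above.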

  14. Modelling techniques for underwater noise generated by tidal turbines in shallow water

    OpenAIRE

    Lloyd, Thomas P.; Turnock, Stephen R.; Humphrey, Victor F.

    2011-01-01

The modelling of underwater noise sources and their potential impact on the marine environment is considered, focusing on tidal turbines in shallow water. The requirement for device noise prediction as part of environmental impact assessment is outlined and the limited amount of measurement data and modelling research identified. Following the identification of potential noise sources, the dominant flow-generated sources are modelled using empirical techniques. The predicted sound pressure lev...

  15. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Sreeramana Aithal

    2016-01-01

When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse the subject in all dimensions. For this purpose, various analysing techniques/frameworks are used. This paper is a discussion on how to use an innovative analysing framework called the ABCD model on a given business model, or on a business strategy or an operational concept/idea or business system. Based on four constructs Advantages...

  16. Car sharing demand estimation and urban transport demand modelling using stated preference techniques

    OpenAIRE

    Catalano, Mario; Lo Casto, Barbara; Migliore, Marco

    2008-01-01

    The research deals with the use of the stated preference technique (SP) and transport demand modelling to analyse travel mode choice behaviour for commuting urban trips in Palermo, Italy. The principal aim of the study was the calibration of a demand model to forecast the modal split of the urban transport demand, allowing for the possibility of using innovative transport systems like car sharing and car pooling. In order to estimate the demand model parameters, a specific survey was carried ...

  17. Evaluation of land capability and suitability for irrigated agriculture in the Emirate of Abu Dhabi, UAE, using an integrated AHP-GIS model

    Science.gov (United States)

    Aldababseh, Amal; Temimi, Marouane; Maghelal, Praveen; Branch, Oliver; Wulfmeyer, Volker

    2017-12-01

The rapid economic development and high population growth in the United Arab Emirates (UAE) have impacted utilization and management of agricultural land. The development of large-scale agriculture in unsuitable areas can severely impact groundwater resources in the UAE. More than 60% of UAE's water resources are being utilized by the agriculture, forestry, and urban greenery sectors. However, the contribution of the agricultural sector to the national GDP is negligible. Several programs have been introduced by the government aimed at achieving sustainable agriculture whilst preserving valuable water resources. Local subsistence farming has declined considerably during the past few years, due to low soil moisture content, sandy soil texture, lack of arable land, natural climatic disruptions, water shortages, and declined rainfall. The limited production of food and the continuing rise in food prices on a global and local level are expected to increase low-income households' vulnerability to food insecurity. This research aims at developing a suitability index for the evaluation and prioritization of areas in the UAE for large-scale agriculture. The AHP-GIS integrated model developed in this study facilitates a step-by-step aggregation of a large number of datasets representing the most important criteria, and the generation of agricultural suitability and land capability maps. To provide the necessary criteria to run the model, a comprehensive geospatial database was built, including climate conditions, water potential, soil capabilities, topography, and land management. A hierarchical structure is built as a decomposition structure that includes all criteria and sub-criteria used to define land suitability based on literature review and experts' opinions. Pairwise comparison matrices are used to calculate the criteria weights. The GIS Model Builder function is used to integrate all spatial processes to model land suitability. In order to preserve some flexibility
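The pairwise-comparison step of AHP can be sketched as follows; the criteria, judgment values, and the random index are illustrative assumptions, not the study's actual matrix:

```python
import numpy as np

# Hypothetical 3-criterion pairwise comparison matrix (Saaty's 1-9 scale);
# criteria might be, e.g., water potential, soil capability, topography.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

# Priority weights = principal eigenvector of A, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio: CR = ((lambda_max - n) / (n - 1)) / RI, with the
# random index RI(3) ~ 0.58; judgments are acceptable when CR < 0.1.
n = A.shape[0]
CR = (vals.real[k] - n) / (n - 1) / 0.58
print(np.round(w, 3), round(CR, 3))
```

The resulting weight vector is what gets multiplied into the GIS criterion layers to produce the suitability map.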

  18. Evaluation of land capability and suitability for irrigated agriculture in the Emirate of Abu Dhabi, UAE, using an integrated AHP-GIS model

    Science.gov (United States)

    Aldababseh, A.; Temimi, M.; Maghelal, P.; Branch, O.; Wulfmeyer, V.

    2017-12-01

The rapid economic development and high population growth in the United Arab Emirates (UAE) have impacted utilization and management of agricultural land. The development of large-scale agriculture in unsuitable areas can severely impact groundwater resources in the UAE. More than 60% of UAE's water resources are being utilized by the agriculture, forestry, and urban greenery sectors. However, the contribution of the agricultural sector to the national GDP is negligible. Several programs have been introduced by the government aimed at achieving sustainable agriculture whilst preserving valuable water resources. Local subsistence farming has declined considerably during the past few years, due to low soil moisture content, sandy soil texture, lack of arable land, natural climatic disruptions, water shortages, and declined rainfall. The limited production of food and the continuing rise in food prices on a global and local level are expected to increase low-income households' vulnerability to food insecurity. This research aims at developing a suitability index for the evaluation and prioritization of areas in the UAE for large-scale agriculture. The AHP-GIS integrated model developed in this study facilitates a step-by-step aggregation of a large number of datasets representing the most important criteria, and the generation of agricultural suitability and land capability maps. To provide the necessary criteria to run the model, a comprehensive geospatial database was built, including climate conditions, water potential, soil capabilities, topography, and land management. A hierarchical structure is built as a decomposition structure that includes all criteria and sub-criteria used to define land suitability based on literature review and experts' opinions. Pairwise comparison matrices are used to calculate the criteria weights. The GIS Model Builder function is used to integrate all spatial processes to model land suitability. In order to preserve some flexibility

  19. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  20. Dynamic Capabilities and Performance

    DEFF Research Database (Denmark)

    Wilden, Ralf; Gudergan, Siegfried P.; Nielsen, Bo Bernhard

    2013-01-01

    are contingent on the competitive intensity faced by firms. Our findings demonstrate the performance effects of internal alignment between organizational structure and dynamic capabilities, as well as the external fit of dynamic capabilities with competitive intensity. We outline the advantages of PLS...

  1. Telematics Options and Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Cabell [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-05

    This presentation describes the data tracking and analytical capabilities of telematics devices. Federal fleet managers can use the systems to keep their drivers safe, maintain a fuel efficient fleet, ease their reporting burden, and save money. The presentation includes an example of how much these capabilities can save fleets.

  2. Assessment of surface solar irradiance derived from real-time modelling techniques and verification with ground-based measurements

    Science.gov (United States)

    Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.

    2018-02-01

    This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR in high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system aims to be utilized through studies and real-time applications which are related to solar energy production planning and use.
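As a rough illustration of the multi-regression function (MRF) approach, ordinary least squares can map a few sky-condition predictors to GHI. The synthetic "training set" below stands in for radiative-transfer output, and the feature choices and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for an RTM training set: GHI decreases with solar
# zenith angle, cloud optical thickness, and aerosol optical depth.
n = 2000
sza = rng.uniform(0, 80, n)            # solar zenith angle (deg)
cot = rng.uniform(0, 10, n)            # cloud optical thickness
aod = rng.uniform(0, 1, n)             # aerosol optical depth
ghi = 1000 * np.cos(np.radians(sza)) * np.exp(-0.08 * cot - 0.3 * aod)

# Multi-regression function: least squares on hand-picked features,
# including one interaction term (feature set is an assumption).
X = np.column_stack([np.ones(n), np.cos(np.radians(sza)), cot, aod,
                     np.cos(np.radians(sza)) * cot])
coef, *_ = np.linalg.lstsq(X, ghi, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - ghi) ** 2))
print(round(rmse, 1))
```

Once the coefficients are fitted offline, evaluating such a function is a handful of multiplications, which is what makes sub-minute operational output feasible.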

  3. Eros-based Confined Capability Client

    National Research Council Canada - National Science Library

    Shapiro, Jonathan S

    2006-01-01

... This was accomplished by constructing a single exemplar application, a web browser using capability-based structuring techniques, and determining whether this application can defend itself against hostile content...

  4. Capability of Glossina tachinoides Westwood (Diptera: Glossinidae ...

    African Journals Online (AJOL)

Capability of Glossina tachinoides Westwood (Diptera: Glossinidae) males to mate and inseminate female flies in different mating ratios to sustain a laboratory tsetse fly colony for a sterile insect technique control programme in Ghana.

  5. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) and complex-valued autoregressive moving average (CARMA) model using the complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique for MRI K-space reconstruction results in images with improved resolution.
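For comparison, CAR coefficients can also be recovered by ordinary complex-valued linear least squares. This is not the paper's CVNN method, but it shows the estimation problem the network solves; the process order, coefficients, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a complex-valued AR(2) process:
#   x[n] = c1 * x[n-1] + c2 * x[n-2] + e[n]
c_true = np.array([0.5 + 0.3j, -0.2 + 0.1j])
N = 5000
e = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 0.1
x = np.zeros(N, dtype=complex)
for n in range(2, N):
    x[n] = c_true[0] * x[n - 1] + c_true[1] * x[n - 2] + e[n]

# Estimate the CAR coefficients by complex linear least squares.
X = np.column_stack([x[1:-1], x[:-2]])   # regressors x[n-1], x[n-2]
y = x[2:]
c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(c_hat, 3))
```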

  6. A novel CT acquisition and analysis technique for breathing motion modeling

    International Nuclear Information System (INIS)

    Low, Daniel A; White, Benjamin M; Lee, Percy P; Thomas, David H; Gaudio, Sergio; Jani, Shyam S; Wu, Xiao; Lamb, James M

    2013-01-01

    To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques. (fast track communication)

  7. Enhancement of loss detection capability using a combination of the Kalman Filter/Linear Smoother and controllable unit accounting approach

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.

    1979-01-01

    An approach to loss detection is presented which combines the optimal loss detection capability of state estimation techniques with a controllable unit accounting approach. The state estimation theory makes use of a linear system model which is capable of modeling the interaction of various controllable unit areas within a given facility. An example is presented which illustrates the increase in loss detection probability which is realizable with state estimation techniques. Comparisons are made with a Shewhart Control Chart and the CUSUM statistic
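
    The CUSUM statistic used above as a comparison baseline can be sketched in a few lines; the slack and threshold values and the simulated material-balance residuals are illustrative assumptions:

    ```python
    import numpy as np

    def cusum_alarm(residuals, k=0.5, h=8.0):
        """One-sided CUSUM: accumulate excesses over the slack k and alarm
        once the cumulative sum exceeds the threshold h."""
        s = 0.0
        for i, r in enumerate(residuals):
            s = max(0.0, s + r - k)
            if s > h:
                return i          # index of first alarm
        return None               # no alarm raised

    rng = np.random.default_rng(1)
    # 50 in-control material-balance residuals, then a sustained loss that
    # shifts the residual mean by +1.5 standard deviations.
    residuals = np.concatenate([rng.standard_normal(50),
                                rng.standard_normal(50) + 1.5])
    alarm = cusum_alarm(residuals)
    print(alarm)
    ```

    The alarm fires shortly after the shift begins at index 50, illustrating why sustained small losses that a single-period test misses are detectable by accumulation.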

  8. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, both with and without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. The results are later validated with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
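
    The analytical-versus-simulation comparison described above can be sketched for the simplest sub-case: two two-state units in series with constant failure/repair rates, checking the analytical steady-state availability against a Monte Carlo estimate. The rates and step size are assumed values; the paper's three-state deterioration model is not reproduced here.

    ```python
    import numpy as np

    # Two-state units in series: failure rates lam, repair rates mu (per hour).
    lam = np.array([0.02, 0.04])
    mu  = np.array([0.20, 0.20])

    # Analytical steady-state availability: mu/(lam+mu) per unit, product in series.
    a_series_analytic = (mu / (lam + mu)).prod()

    # Monte Carlo check: simulate both units as discrete-time two-state Markov
    # chains and measure the fraction of time steps in which both are up.
    rng = np.random.default_rng(42)
    dt, steps = 0.5, 200_000
    up = np.ones(2, dtype=bool)
    both_up = 0
    for _ in range(steps):
        u = rng.random(2)
        fail = up & (u < lam * dt)        # working units may fail
        repair = ~up & (u < mu * dt)      # failed units may be repaired
        up = (up & ~fail) | repair
        both_up += int(up.all())

    a_series_mc = both_up / steps
    print(round(a_series_analytic, 3), round(a_series_mc, 3))
    ```

    The simulation converges to the analytical value, mirroring the validation step in the article; replacing the exponential sojourn times with arbitrary distributions is what the MC code can do and the Markov model cannot.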

  9. Simulation of the Tornado Event of 22 March, 2013 over Brahmanbaria, Bangladesh using WRF Model with 3DVar DA techniques

    Science.gov (United States)

    Ahasan, M. N.; Alam, M. M.; Debsarma, S. K.

    2015-02-01

    A severe thunderstorm produced a tornado (F2 on the enhanced Fujita-Pearson scale), which affected the Brahmanbaria district of Bangladesh during 1100-1130 UTC of 22 March, 2013. The tornado killed 38 people, injured 388 and caused a huge loss of property. The total length travelled by the tornado was about 12-15 km and about 1728 households were affected. An attempt has been made to simulate this rare event using the Weather Research and Forecasting (WRF) model. The model was run in a single domain at 9 km resolution for a period of 24 hrs, starting at 0000 UTC on 22 March, 2013. The meteorological conditions that led to the formation of this tornado have been analyzed. The model-simulated meteorological conditions are compared with those of a `no severe thunderstorm observed day' on 22 March, 2012. For this purpose, the model was also run in the same domain at the same resolution for 24 hrs, starting at 0000 UTC on 22 March, 2012. The model-simulated meteorological parameters are consistent with each other, and all are in good agreement with the observation in terms of the region of occurrence of the tornado activity. The model has efficiently captured the common favourable synoptic conditions for the occurrence of severe tornadoes, though there are some spatial and temporal biases in the simulation. The wind speed is not in good agreement with the observation, as the model produced a strongest wind of only 15-20 ms-1 against the estimated wind speed of about 55 ms-1. The spatial distribution as well as the intensity of rainfall are also in good agreement with the observation. The results of these analyses demonstrate the capability of the high-resolution WRF model with 3DVar Data Assimilation (DA) techniques in simulating the tornado over Brahmanbaria, Bangladesh.

  10. The impact of training on women's capabilities in modifying their obesity-related dietary behaviors: applying family-centered empowerment model.

    Science.gov (United States)

    Mataji Amirrood, Maryam; Taghdisi, Mohammad Hosein; Shidfar, Farzad; Gohari, Mahmood Reza

    2014-01-01

    Dietary behaviors affect obesity; therefore, it seems necessary to conduct interventions to modify behavioral patterns leading to weight gain in the family. Our goal was to determine the impact of training on women's capabilities in modifying their obesity-related dietary behaviors in Urmia, West Azerbaijan Province, Iran, applying the family-centered empowerment model. A quasi-experimental study with pretest-posttest design was conducted on 90 overweight/obese women in 2012 in two health centers of Urmia. Convenience sampling was done and the participants were randomly assigned to 'test' and 'control' groups. Data collection was done by completing the demographic data questionnaire, the empowerment tool and the dietary behavior checklist. The intervention was conducted in the form of 6 educational classes held for the 'test' group. After two months, the posttest was performed by completing the forms once again. Data were analyzed with descriptive tests, t-tests, the chi-square test and Fisher's exact test. The dietary behavior scores of the intervention group rose significantly, from 7.4 ± 2.11 to 9.95 ± 2.41. The family-centered empowerment model in this study was thus proven effective in women. Hence it is advised to consider it in behavior-change interventions to promote the health of the family and community.

  11. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals.

    Science.gov (United States)

    Potnis, Prashant R; Tsou, Nien-Ti; Huber, John E

    2011-02-16

    The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  12. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    Full Text Available The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  13. Application of rapid prototyping techniques for modelling of anatomical structures in medical training and education.

    Science.gov (United States)

    Torres, K; Staśkiewicz, G; Śnieżyński, M; Drop, A; Maciejewski, R

    2011-02-01

    Rapid prototyping has become an innovative method of fast and cost-effective production of three-dimensional models for manufacturing. Wide access to advanced medical imaging methods allows application of this technique for medical training purposes. This paper presents the feasibility of rapid prototyping technologies: stereolithography, selective laser sintering, fused deposition modelling, and three-dimensional printing for medical education. Rapid prototyping techniques are not only a promising method for improving anatomical education of medical students but also a valuable source of training tools for medical specialists.

  14. A Shell/3D Modeling Technique for Delaminations in Composite Laminates

    Science.gov (United States)

    Krueger, Ronald

    1999-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provide a kinematically compatible interface between the local 3D model and the global structural model, which has been meshed with plate or shell finite elements. For simple double cantilever beam (DCB), end notched flexure (ENF), and single leg bending (SLB) specimens, mixed-mode energy release rate distributions were computed across the width from nonlinear finite element analyses using the virtual crack closure technique. The analyses served to test the accuracy of the shell/3D technique for the pure mode I case (DCB), the mode II case (ENF) and a mixed mode I/II case (SLB). Specimens with a unidirectional layup, where the delamination is located between two 0° plies, as well as a multidirectional layup, where the delamination is located between two non-zero-degree plies, were simulated. For a local 3D model extending to a minimum of about three specimen thicknesses in front of and behind the delamination front, the results were in good agreement with mixed-mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers great potential, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
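
    The virtual crack closure technique used in these analyses computes energy release rates from the nodal forces at the delamination front and the relative displacements of the node pair behind it. A one-point VCCT sketch with hypothetical nodal values:

    ```python
    # One-point virtual crack closure technique (VCCT): energy release rates
    # from front-node forces and the relative displacements of the node pair
    # behind the front. All numbers are hypothetical, for illustration only.
    delta_a = 0.5e-3    # length of the element behind the front [m]
    width_b = 1.0e-3    # width attributed to the front node [m]

    Fz, Fx = 12.0, 4.0        # opening (mode I) and sliding (mode II) nodal forces [N]
    dw, du = 6.0e-6, 1.5e-6   # relative opening and sliding displacements [m]

    area = 2.0 * delta_a * width_b
    G_I = Fz * dw / area      # mode I energy release rate [J/m^2]
    G_II = Fx * du / area     # mode II energy release rate [J/m^2]
    mode_mix = G_II / (G_I + G_II)
    print(G_I, G_II, round(mode_mix, 3))
    ```

    Repeating this per node along the front is what yields the mixed-mode distributions across the width referred to in the abstract.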

  15. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

    Science.gov (United States)

    Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

    2017-12-01

    Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.
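
    The assimilation step described above can be illustrated in its simplest scalar form: an optimal-interpolation (Kalman) update of a model AOD background with one satellite observation. All numeric values are illustrative assumptions, not KORUS-AQ data:

    ```python
    import math

    # Scalar optimal-interpolation (Kalman) update of a model AOD background
    # with one direct satellite AOD observation (observation operator H = 1).
    x_b, sigma_b = 0.30, 0.10   # background AOD and its error std
    y,   sigma_o = 0.50, 0.05   # observed AOD and its error std

    K = sigma_b**2 / (sigma_b**2 + sigma_o**2)   # gain: weight given to the obs
    x_a = x_b + K * (y - x_b)                    # analysis AOD
    sigma_a = math.sqrt((1 - K) * sigma_b**2)    # analysis error std

    print(K, round(x_a, 2), round(sigma_a, 3))
    ```

    The analysis lies between background and observation, closer to the more accurate observation, and its error is smaller than either input error; the 3D variational systems used with WRF-Chem generalize exactly this weighting to full model fields.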

  16. Data Farming Process and Initial Network Analysis Capabilities

    Directory of Open Access Journals (Sweden)

    Gary Horne

    2016-01-01

    Full Text Available Data Farming, network applications and approaches to integrate network analysis and processes to the data farming paradigm are presented as approaches to address complex system questions. Data Farming is a quantified approach that examines questions in large possibility spaces using modeling and simulation. It evaluates whole landscapes of outcomes to draw insights from outcome distributions and outliers. Social network analysis and graph theory are widely used techniques for the evaluation of social systems. Incorporation of these techniques into the data farming process provides analysts examining complex systems with a powerful new suite of tools for more fully exploring and understanding the effect of interactions in complex systems. The integration of network analysis with data farming techniques provides modelers with the capability to gain insight into the effect of network attributes, whether the network is explicitly defined or emergent, on the breadth of the model outcome space and the effect of model inputs on the resultant network statistics.

  17. Hybrid models for hydrological forecasting: Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  18. Hybrid models for hydrological forecasting : Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  19. Power Capability Investigation Based on Electrothermal Models of Press-pack IGBT Three-Level NPC and ANPC VSCs for Multimegawatt Wind Turbines

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Helle, Lars; Munk-Nielsen, Stig

    2012-01-01

    Wind turbine power capability is an essential set of data for both wind turbine manufacturers/operators and transmission system operators, since the power capability determines whether a wind turbine is able to fulfill transmission system reactive power requirements and how much it is able to provide reactive power support as an ancillary service. For multimegawatt full-scale wind turbines, power capability depends on converter topology and semiconductor switch technology. As power capability limiting factors, switch current, semiconductor junction temperature, and converter output voltage...

  20. NEW TECHNIQUE FOR OBESITY SURGERY: INTERNAL GASTRIC PLICATION TECHNIQUE USING INTRAGASTRIC SINGLE-PORT (IGS-IGP) IN EXPERIMENTAL MODEL.

    Science.gov (United States)

    Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo

    2017-01-01

    Bariatric surgery is currently the most effective method to ameliorate the co-morbidities of morbid obesity (BMI over 35 kg/m2). Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, besides the costs of the devices. To report a new technique for internal gastric plication using an intragastric single-port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. The procedure was performed as follows in ten pigs: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single-port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) after the procedure, the residual volume was measured. Sleeve gastrectomy was also performed in a further ten pigs, and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in the ten swine experiments. The mean procedure time was 27±4 min. It produced a mean gastric volume reduction of 51%, versus a mean of 90% for sleeve gastrectomy in this swine model. The internal gastric plication technique using an intragastric single-port device required few skills to perform, had low operative time and achieved good reduction (51%) of gastric volume in an in vitro experimental model.

  1. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, electricity price is a complex volatile signal exhibiting many spikes. Most electricity price forecasting techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed by the selected candidate inputs of the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
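
    The relevancy/redundancy idea behind the feature selection step can be sketched with a greedy, mutual-information-based (mRMR-style) selector on synthetic binary features. This is a hedged stand-in for illustration, not the paper's actual criterion:

    ```python
    import numpy as np

    def mutual_info(a, b):
        """Mutual information (nats) between two small discrete integer arrays."""
        joint = np.zeros((a.max() + 1, b.max() + 1))
        for x, y in zip(a, b):
            joint[x, y] += 1
        joint /= joint.sum()
        pa, pb = joint.sum(axis=1), joint.sum(axis=0)
        nz = joint > 0
        return float((joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz])).sum())

    rng = np.random.default_rng(7)
    n = 4000
    target = rng.integers(0, 2, n)                    # spike / no-spike labels
    features = {
        "relevant": (target ^ (rng.random(n) < 0.10)).astype(int),  # noisy copy of target
        "noise":    rng.integers(0, 2, n),                          # unrelated input
    }
    features["redundant"] = (features["relevant"] ^ (rng.random(n) < 0.05)).astype(int)

    # Greedy mRMR-style selection: pick the feature maximizing
    # relevancy (MI with target) minus mean redundancy (MI with selected set).
    selected = []
    for _ in range(2):
        best, best_score = None, -np.inf
        for name, f in features.items():
            if name in selected:
                continue
            rel = mutual_info(f, target)
            red = np.mean([mutual_info(f, features[s]) for s in selected]) if selected else 0.0
            if rel - red > best_score:
                best, best_score = name, rel - red
        selected.append(best)

    print(selected)
    ```

    The redundant near-copy is rejected on the second pass despite its high relevancy, which is exactly the behaviour the paper's two-criterion refinement is meant to achieve.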

  2. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC is often achieved by using data-driven methods that include machine learning (ML techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT, Neural Networks (NN, and Support Vector Machines (SVM for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied on the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes that were related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
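
    The information gain ranking used above to find the most informative attributes can be sketched directly from its definition: parent entropy minus the split-weighted entropies of the children. The toy land-use data below are an illustrative assumption:

    ```python
    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def information_gain(attribute, labels):
        """Parent entropy minus the split-weighted entropies of the children."""
        gain = entropy(labels)
        for v, n in zip(*np.unique(attribute, return_counts=True)):
            gain -= n / len(labels) * entropy(labels[attribute == v])
        return gain

    # Toy data: land-use change happens exactly where the parcel is zoned.
    zoned  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    slope  = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # uninformative attribute
    change = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(information_gain(zoned, change), information_gain(slope, change))
    ```

    The perfectly predictive attribute scores one full bit of gain while the uninformative one scores zero, which is the ordering such a ranking technique reports.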

  3. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques

  4. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  5. A review of techniques for spatial modeling in geographical, conservation and landscape genetics.

    Science.gov (United States)

    Diniz-Filho, José Alexandre Felizola; Nabout, João Carlos; de Campos Telles, Mariana Pires; Soares, Thannya Nascimento; Rangel, Thiago Fernando L V B

    2009-04-01

    Most evolutionary processes occur in a spatial context and several spatial analysis techniques have been employed in an exploratory context. However, the existence of autocorrelation can also perturb significance tests when genetic data are modeled as a function of explanatory variables using standard correlation and regression techniques. In this case, more complex models incorporating the effects of autocorrelation must be used. Here we review those models and compare their relative performances in a simple simulation, in which spatial patterns in allele frequencies were generated by a balance between random variation within populations and spatially structured gene flow. Notwithstanding the somewhat idiosyncratic behavior of the techniques evaluated, it is clear that spatial autocorrelation affects Type I errors and that standard linear regression does not provide minimum variance estimators. Due to its flexibility, we stress that principal coordinates of neighbor matrices (PCNM) and related eigenvector mapping techniques seem to be the best approaches to spatial regression. In general, we hope that our review of commonly used spatial regression techniques in biology and ecology may aid population geneticists in providing better explanations for population structure when dealing with more complex regression problems across geographic space.
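
    The PCNM/eigenvector-mapping idea can be sketched with Moran eigenvector maps, a close relative: eigenvectors of the doubly centred spatial connectivity matrix, whose positive-eigenvalue members can be added as regressors to absorb spatial autocorrelation. A minimal sketch for a chain of six sampling sites (geometry assumed):

    ```python
    import numpy as np

    # Connectivity matrix for a chain of 6 sites (neighbours share an edge).
    n = 6
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0

    # Moran eigenvector maps: eigenvectors of the doubly centred connectivity
    # matrix; those with positive eigenvalues describe positively
    # autocorrelated spatial patterns and serve as spatial filter regressors.
    C = np.eye(n) - np.ones((n, n)) / n
    eigval, eigvec = np.linalg.eigh(C @ W @ C)
    spatial_filters = eigvec[:, eigval > 1e-9]

    print(spatial_filters.shape)
    ```

    The retained columns are orthonormal and sum to zero over sites, so they can be appended to an ordinary regression design matrix without collinearity with the intercept.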

  6. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good-quality, geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.
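
    The landmark-based morphing step (shifting exterior source nodes onto target-surface vertices) can be sketched with a nearest-vertex projection; the normal constraint is omitted for brevity, and all geometry is synthetic:

    ```python
    import numpy as np

    def morph_to_target(source_nodes, target_vertices):
        """Shift each exterior source node onto its nearest target-surface
        vertex (a simplified stand-in for the normal-constrained shift)."""
        d2 = ((source_nodes[:, None, :] - target_vertices[None, :, :]) ** 2).sum(-1)
        return target_vertices[np.argmin(d2, axis=1)]

    # Synthetic geometry: a cloud of surface nodes and a slightly deformed target.
    rng = np.random.default_rng(3)
    source = rng.random((20, 3))
    target = 1.1 * source + 0.01 * rng.standard_normal((20, 3))

    morphed = morph_to_target(source, target)
    print(morphed.shape)
    ```

    After the shift, every exterior node lies exactly on the target surface while the source mesh's connectivity and interior structure are retained, which is what makes the morphed mesh reusable for a new specimen.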

  7. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  8. New techniques for the analysis of manual control systems. [mathematical models of human operator behavior

    Science.gov (United States)

    Bekey, G. A.

    1971-01-01

    Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.

  9. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

    Full Text Available This paper proposes a methodology for photovoltaic (PV) system modeling that considers behavior in both the direct and reverse operating modes and under mismatching conditions. The proposed methodology is based on the inflection points technique, with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model makes it possible to evaluate the energetic performance of a PV system, exhibiting short simulation times even for large PV systems. In addition, the methodology makes it possible to estimate the condition of modules affected by partial shading, since the power dissipated by operation in the second quadrant can be determined.
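
    The inflection-point behaviour can be sketched with two series modules, one shaded, and the bypass diode replaced by a linear approximation once it conducts; scanning the current then reveals two local power maxima separated by the inflection point. All parameter values are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    # Two series-connected modules, one shaded; a bypassed module's diode is
    # modelled by the linear approximation V = -Rb*(I - Isc).
    Vt, I0, Rb = 1.0, 1e-9, 0.01     # thermal voltage [V], sat. current [A], bypass slope [ohm]
    Isc = np.array([5.0, 2.0])       # full-sun and shaded short-circuit currents [A]

    def module_voltage(i, isc):
        if i <= isc:                 # forward region of the simplified PV model
            return Vt * np.log(1.0 + (isc - i) / I0)
        return -Rb * (i - isc)       # bypass diode conducting (linear approx.)

    currents = np.linspace(0.0, Isc.max() - 1e-6, 2000)
    voltages = np.array([sum(module_voltage(i, isc) for isc in Isc) for i in currents])
    powers = currents * voltages

    # Two local power maxima appear, one on each side of the inflection point
    # near I = 2 A where the shaded module's bypass diode turns on.
    k = int(np.argmax(powers))
    print(round(float(currents[k]), 2), round(float(powers[k]), 1))
    ```

    The global maximum here sits in the bypassed region, while a second, lower local maximum remains below the shaded module's short-circuit current; locating these maxima cheaply is the point of the inflection-points formulation.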

  10. Control System Design for Cylindrical Tank Process Using Neural Model Predictive Control Technique

    Directory of Open Access Journals (Sweden)

    M. Sridevi

    2010-10-01

    Full Text Available Chemical manufacturing and process industries require innovative technologies for process identification. This paper deals with model identification and control of a cylindrical tank process. Model identification of the process was done using the ARMAX technique. A neural model predictive controller (NMPC) was designed for the identified model. The performance of the controllers was evaluated using MATLAB software. The NMPC controller was compared with a Smith predictor controller and an IMC controller based on rise time, settling time, overshoot and ISE, and it was found that the NMPC controller is better suited for this process.
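
    The ARMAX identification step can be illustrated with its simpler ARX special case, fitted by least squares on simulated input-output data; the first-order plant and all numeric values below are assumptions, not the paper's tank model:

    ```python
    import numpy as np

    # Simulate a first-order plant y[k] = a*y[k-1] + b*u[k-1] + noise, then
    # identify (a, b) by least squares -- the ARX special case of ARMAX.
    rng = np.random.default_rng(5)
    n = 500
    a_true, b_true = 0.8, 0.5
    u = rng.standard_normal(n)            # input: inflow perturbations
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

    # Regress y[k] on the lagged output and input.
    Phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print(np.round(theta, 2))
    ```

    The identified model would then serve as the internal prediction model for a predictive controller, which is the role the ARMAX model plays for the NMPC above.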

  11. Comparison of Virtual Dental Implant Planning Using the Full Cross-Sectional and Transaxial Capabilities of Cone Beam Computed Tomography vs Reformatted Panoramic Imaging and 3D Modeling.

    Science.gov (United States)

    Khan, Moiz; Elathamna, Eiad N; Lin, Wei-Shao; Harris, Bryan T; Farman, Allan G; Scheetz, James P; Morton, Dean; Scarfe, William C

    2015-01-01

    To compare the choice and placement of virtual dental implants in the posterior edentulous bounded regions using the full cross-sectional and transaxial capabilities of cone beam computed tomography (CBCT) vs reformatted panoramic images and three-dimensional (3D) virtual models. Fifty-two cases with posterior bounded edentulous regions (61 dental implant sites) were identified from a retrospective audit of 4,014 radiographic volumes. Two image sets were created from selected CBCT data: (1) a combination of reformatted panoramic imaging and a 3D model (PIref/3D), and (2) the full 3D power in CBCT image volume analyses (XS). One virtual implant was placed by consensus of three prosthodontists in each image set: PIref/3D and XS. The choice of implant length and the perceived need for ridge augmentation were recorded for implant placement in both test situations. All the virtual implant placements from both PIref/3D and XS image sets were inspected retrospectively using virtual 3D models, and the number of exposed threads on both the buccal and lingual/palatal aspects of the virtual dental implant was evaluated. The chi-square and paired t tests were used with the level of significance set at α = .05. Shorter implants were chosen more often using XS than PIref/3D (P = .001). Fewer threads were exposed when placed with XS than with PIref/3D (P = .001). The use of XS reduced the perceived need for ridge augmentation compared with PIref/3D (P = .001). The use of the full 3D power of CBCT (including cross-sectional images in all three orthogonal planes and transaxially) provides supplemental information that significantly changes the choice of virtual implant length and vertical position of the implant, and reduces the frequency of perceived need for ridge augmentation before implant placement.

  12. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Pt. I. Theory and description of model capabilities

    International Nuclear Information System (INIS)

    Raffray, A.R.; Federici, G.

    1997-01-01

    For pt.II see ibid., p.101-30, 1997. RACLETTE (Rate Analysis Code for pLasma Energy Transfer Transient Evaluation), a comprehensive but relatively simple and versatile model, was developed to help in the design analysis of plasma facing components (PFCs) under 'slow' high power transients, such as those associated with plasma vertical displacement events. The model includes all the key surface heat transfer processes such as evaporation, melting, and radiation, and their interaction with the PFC block thermal response and the coolant behaviour. This paper is Part I of two complementary papers. It covers the model description, calibration and validation, and presents a number of parametric analyses shedding light on and identifying trends in the PFC armour block response to high plasma energy deposition transients. Parameters investigated include the plasma energy density and deposition time, the armour thickness and the presence of vapour shielding effects. Part II of the paper focuses on specific design analyses of ITER plasma facing components (divertor, limiter, primary first wall and baffle), including improvements in the thermal-hydraulic modeling required for a better understanding of the consequences of high energy deposition transients, in particular for the ITER limiter case. (orig.)
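
    The core of the armour-block thermal response the abstract describes (a plasma heat flux driving conduction through a slab toward the coolant) can be sketched with a simple explicit finite-difference model. All material values, the flux, and the geometry below are hypothetical placeholders, and the sketch deliberately omits the evaporation, melting, radiation, and vapour-shielding physics that RACLETTE itself treats.

```python
import numpy as np

# 1-D explicit finite-difference slab: plasma heat flux on the front face,
# fixed coolant-side temperature on the back face (illustrative only).
k, rho, cp = 120.0, 1850.0, 1800.0   # W/m-K, kg/m^3, J/kg-K (graphite-like)
alpha = k / (rho * cp)               # thermal diffusivity, m^2/s
L, n = 0.01, 101                     # 10 mm armour, grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha             # stable explicit step (< 0.5 dx^2/alpha)
q = 60e6                             # 60 MW/m^2 transient heat flux
T = np.full(n, 300.0)                # initial temperature, K
t_end = 0.1                          # a 100 ms 'slow' transient
for _ in range(int(t_end / dt)):
    Tn = T.copy()
    # interior nodes: explicit heat-conduction update
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q * dx / k       # front face: imposed heat flux
    Tn[-1] = 300.0                   # back face: coolant-side temperature held
    T = Tn
print(f"front-face temperature after {t_end*1000:.0f} ms: {T[0]:.0f} K")
```

    Even this crude sketch reproduces the qualitative trend the parametric analyses examine: the front-face temperature rise scales with the deposited energy density and deposition time, while the back-face (coolant) side barely responds during a short transient.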

  14. Coupled Numerical Methods to Analyze Interacting Acoustic-Dynamic Models by Multidomain Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Delfim Soares

    2011-01-01

    Full Text Available This work focuses on the coupled numerical analysis of interacting acoustic and dynamic models. In this context, several numerical methods, such as the finite difference method, the finite element method, the boundary element method, meshless methods, and so forth, are considered to model each subdomain of the coupled model, and multidomain decomposition techniques are applied to deal with the coupling relations. Two basic coupling algorithms are discussed here, namely the explicit direct coupling approach and the implicit iterative coupling approach, which are formulated based on explicit/implicit time-marching techniques. Completely independent spatial and temporal discretizations among the interacting subdomains are permitted, allowing an optimal discretization to be chosen for each subdomain of the model. At the end of the paper, numerical results are presented, illustrating the performance and potentialities of the discussed methodologies.
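
    The implicit iterative coupling approach mentioned above can be sketched with two toy structural subdomains: two mass-spring systems joined by a coupling spring, each time-marched separately with backward Euler, exchanging interface forces and iterating with relaxation until the interface state converges at every step. All parameters, the relaxation factor, and the partitioning choices are hypothetical, not the paper's formulation.

```python
# Partitioned implicit iterative coupling of two mass-spring subdomains
# (illustrative sketch; interface forces are held constant within each solve).
m1 = m2 = 1.0
k1 = k2 = 100.0          # springs anchoring each mass to a wall
kc = 50.0                # coupling spring joining the two subdomains
dt, steps = 0.01, 200

def backward_euler(x, v, m, k, f, dt):
    """One implicit step of m*a = -k*x + f with the force f frozen."""
    # (m + dt^2*k) v_new = m*v + dt*(f - k*x), then x_new = x + dt*v_new
    v_new = (m * v + dt * (f - k * x)) / (m + dt**2 * k)
    return x + dt * v_new, v_new

x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0      # initial displacement on mass 1
for _ in range(steps):
    x2_guess = x2
    for _ in range(50):                   # interface (fixed-point) iteration
        f1 = kc * (x2_guess - x1)         # coupling force acting on mass 1
        x1_new, v1_new = backward_euler(x1, v1, m1, k1, f1, dt)
        f2 = kc * (x1_new - x2_guess)     # reaction acting on mass 2
        x2_new, v2_new = backward_euler(x2, v2, m2, k2, f2, dt)
        if abs(x2_new - x2_guess) < 1e-10:
            break                         # interface state converged
        x2_guess = 0.5 * x2_guess + 0.5 * x2_new   # relaxation
    x1, v1, x2, v2 = x1_new, v1_new, x2_new, v2_new
print(f"x1={x1:+.4f}, x2={x2:+.4f} after {steps} steps")
```

    Dropping the inner loop (taking a single pass with the lagged interface force) turns this into the explicit direct coupling variant; the two subdomains could equally well use different time steps with interface interpolation, which is the independence the paper exploits.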

  15. Animal models in bariatric surgery--a review of the surgical techniques and postsurgical physiology.

    Science.gov (United States)

    Rao, Raghavendra S; Rao, Venkatesh; Kini, Subhash

    2010-09-01

    Bariatric surgery is considered the most effective current treatment for morbid obesity. Since the first publication of an article by Kremen, Linner, and Nelson, many experiments have been performed using animal models. The initial experiments used only malabsorptive procedures like intestinal bypass, which have now largely been abandoned. These experimental models have been used to assess feasibility and safety as well as to refine techniques particular to each procedure. We will discuss the surgical techniques and the postsurgical physiology of the four major current bariatric procedures (namely, Roux-en-Y gastric bypass, gastric banding, sleeve gastrectomy, and biliopancreatic diversion). We have also reviewed the anatomy and physiology of animal models. We have reviewed the literature and presented it so that it can serve as a reference for an investigator interested in animal experiments in bariatric surgery. Experimental animal models are further divided into two categories: large mammals, including dogs, cats, rabbits, and pigs, and small mammals, including rats and mice.

  16. Resources, constraints and capabilities

    NARCIS (Netherlands)

    Dhondt, S.; Oeij, P.R.A.; Schröder, A.

    2018-01-01

    Human and financial resources as well as organisational capabilities are needed to overcome the manifold constraints social innovators are facing. To unlock the potential of social innovation for the whole society, new (social) innovation-friendly environments and new governance structures are needed.

  17. Engineering Capabilities and Partnerships

    Science.gov (United States)

    Poulos, Steve

    2010-01-01

    This slide presentation reviews the engineering capabilities at Johnson Space Center. It also reviews the partnerships that have resulted in successfully designed and developed projects involving commercial and educational institutions.

  18. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

    M. Z. H. Bhuiyan; J. Zhang

    2012-12-01

    Full Text Available Multipath is undesirable for Global Navigation Satellite System (GNSS) receivers, since the reception of multipath can create a significant distortion to the shape of the correlation function, leading to an error in the receiver's position estimate. Many multipath mitigation techniques exist in the literature to deal with the multipath propagation problem in the context of GNSS. The multipath studies in the literature are often based on optimistic assumptions, for example, assuming a static two-path channel or a fading channel with a Rayleigh or a Nakagami distribution. But, in reality, there are many channel modeling issues, for example, satellite-to-user geometry, variable number of paths, variable path delays and gains, Non-Line-Of-Sight (NLOS) path conditions, receiver movements, etc., that are kept out of consideration when analyzing the performance of these techniques. Therefore, it is of utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, for example, the Land Mobile Satellite (LMS) channel model.
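
    The correlation-function distortion the abstract refers to can be illustrated with exactly the kind of optimistic static two-path channel it criticises: an ideal triangular code autocorrelation plus one delayed echo shifts the zero crossing of an early-late code discriminator away from the true delay. The echo amplitude, delay, and correlator spacing below are arbitrary illustrative values.

```python
import numpy as np

def tri(tau):
    """Ideal PRN code autocorrelation: unit triangle, one chip wide each side."""
    return np.maximum(0.0, 1.0 - np.abs(tau))

def tracked_offset(alpha, delta, d=0.5):
    """Equilibrium of an early-late discriminator for LOS plus one echo.

    alpha: echo amplitude relative to LOS; delta: echo delay (chips);
    d: correlator spacing (chips)."""
    taus = np.linspace(-1.0, 1.0, 4001)
    disc = (tri(taus - d) + alpha * tri(taus - d - delta)
            - tri(taus + d) - alpha * tri(taus + d - delta))
    # first negative-to-positive crossing = code-tracking lock point
    i = np.flatnonzero((disc[:-1] < 0) & (disc[1:] >= 0))[0]
    return taus[i] - disc[i] * (taus[i + 1] - taus[i]) / (disc[i + 1] - disc[i])

print(f"LOS only                          : {tracked_offset(0.0, 0.5):+.3f} chips")
print(f"half-amplitude echo, 0.5 chip late: {tracked_offset(0.5, 0.5):+.3f} chips")
```

    The LOS-only discriminator locks at the true delay, while the in-phase echo pulls the lock point late by a sizeable fraction of a chip; narrow-correlator and other mitigation techniques aim to shrink exactly this bias.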

  19. Modeling and comparative study of various detection techniques for FMCW LIDAR using optisystem

    Science.gov (United States)

    Elghandour, Ahmed H.; Ren, Chen D.

    2013-09-01

    In this paper we investigated different detection techniques, especially direct detection, coherent heterodyne detection, and coherent homodyne detection, for an FMCW LIDAR system using the Optisystem package. A model for the target, the propagation channel, and the various detection techniques was developed in Optisystem, and a comparative study among the detection techniques for FMCW LIDAR systems was then carried out analytically and by simulation using the developed model. The performance of direct, heterodyne, and homodyne detection for the FMCW LIDAR system was calculated and simulated, and the simulated performance was checked against results from a MATLAB simulator. The results show that direct detection is sensitive to the intensity of the received electromagnetic signal and has the advantage of lower system complexity than the other detection architectures, at the expense that thermal noise is the dominant noise source and the sensitivity is relatively poor. In addition, much higher detection sensitivity can be achieved using coherent optical mixing, as performed by heterodyne and homodyne detection.
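
    The sensitivity gap the abstract describes can be sketched with textbook photodiode noise formulas: direct detection is limited by thermal (Johnson) noise at the load, whereas heterodyne detection mixes the weak echo with a strong local oscillator so that LO shot noise dominates. The powers, bandwidth, and detector values below are hypothetical and unrelated to the paper's Optisystem model.

```python
import numpy as np

q = 1.602e-19             # electron charge, C
kB = 1.381e-23            # Boltzmann constant, J/K
R = 0.9                   # photodiode responsivity, A/W
B = 10e6                  # receiver bandwidth, Hz
RL, T = 50.0, 300.0       # load resistance (ohm) and temperature (K)
P_sig = 1e-9              # 1 nW received echo power
P_lo = 1e-3               # 1 mW local oscillator power

thermal = 4 * kB * T * B / RL              # thermal noise current variance, A^2

# direct detection: signal photocurrent vs its own shot noise plus thermal noise
i_sig = R * P_sig
snr_direct = i_sig**2 / (2 * q * i_sig * B + thermal)

# heterodyne detection: beat term 2 R^2 P_sig P_lo vs LO shot noise plus thermal
snr_het = (2 * R**2 * P_sig * P_lo) / (2 * q * R * P_lo * B + thermal)

print(f"direct detection SNR    : {10*np.log10(snr_direct):6.1f} dB")
print(f"heterodyne detection SNR: {10*np.log10(snr_het):6.1f} dB")
```

    With these example numbers the direct receiver sits deep in the thermal-noise floor, while the coherent receiver gains tens of dB from optical mixing, consistent with the trend the paper reports.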

  20. Assessing the validity of two indirect questioning techniques: A Stochastic Lie Detector versus the Crosswise Model.

    Science.gov (United States)

    Hoffmann, Adrian; Musch, Jochen

    2016-09-01

    Estimates of the prevalence of sensitive attributes obtained through direct questions are prone to being distorted by untruthful responding. Indirect questioning procedures such as the Randomized Response Technique (RRT) aim to control for the influence of social desirability bias. However, even on RRT surveys, some participants may disobey the instructions in an attempt to conceal their true status. In the present study, we experimentally compared the validity of two competing indirect questioning techniques that presumably offer a solution to the problem of nonadherent respondents: the Stochastic Lie Detector and the Crosswise Model. For two sensitive attributes, both techniques met the "more is better" criterion. Their application resulted in higher, and thus presumably more valid, prevalence estimates than a direct question. Only the Crosswise Model, however, adequately estimated the known prevalence of a nonsensitive control attribute.
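
    The Crosswise Model compared above asks each respondent to report only whether their answers to the sensitive question and to an unrelated question with known prevalence p coincide, which protects the individual while still identifying the population rate. A minimal simulation sketch of the standard estimator, with made-up numbers rather than the study's data:

```python
import numpy as np

# Simulated Crosswise Model survey (hypothetical sample size and rates).
rng = np.random.default_rng(3)
n, p, pi_true = 5000, 0.25, 0.30      # respondents, known prevalence, true rate

sensitive = rng.random(n) < pi_true   # true (unobserved) sensitive status
unrelated = rng.random(n) < p         # unrelated attribute with known prevalence
same = sensitive == unrelated         # the only thing each respondent reports

# P(same) = pi*p + (1-pi)*(1-p), so invert for the prevalence estimate
lam = same.mean()
pi_hat = (lam + p - 1) / (2 * p - 1)
se = np.sqrt(lam * (1 - lam) / n) / abs(2 * p - 1)
print(f"estimated prevalence: {pi_hat:.3f} +/- {1.96*se:.3f}")
```

    Because neither "same" nor "different" is a self-protecting answer, there is no obvious response respondents can give to conceal their status, which is the property the validation study tests against the Stochastic Lie Detector.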