WorldWideScience

Sample records for modeling technique capable

  1. Experimental modeling of eddy current inspection capabilities

    International Nuclear Information System (INIS)

    Junker, W.R.; Clark, W.G.

    1984-01-01

    This chapter examines the experimental modeling of eddy current inspection capabilities based upon the use of liquid mercury samples designed to represent metal components containing discontinuities. A brief summary of past work with mercury modeling and a detailed discussion of recent experiments designed to further evaluate the technique are presented. The main disadvantages of the mercury modeling concept are that mercury is toxic and must be handled carefully, liquid mercury can only be used to represent nonferromagnetic materials, and wetting and meniscus problems can distort the effective size of artificial discontinuities. Artificial discontinuities placed in a liquid mercury sample can be used to represent discontinuities in solid metallic structures. Discontinuity size and type can be characterized from phase angle and signal amplitude data developed with a surface-scanning, pancake-type eddy current probe. It is concluded that the mercury model approach can greatly enhance the overall understanding and applicability of eddy current inspection techniques.

  2. People Capability Maturity Model. SM.

    Science.gov (United States)

    1995-09-01

    The People Capability Maturity Model (CMU/SEI-95-MM-02), Carnegie Mellon University Software Engineering Institute. Surviving abstract fragments note that an assessment based on the model can be tailored so it consumes less time and resources than a traditional software process assessment, cite benefits such as improved reputation or customer loyalty, and reference Level 5 (Optimizing) coaching activities.

  3. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    Science.gov (United States)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "to identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains 5 sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  4. Building Alliance Capability: Management Techniques for Superior Alliance Performance

    NARCIS (Netherlands)

    J.A.J. Draulans (Johan); A-P. de Man (Ard-Pieter); H.W. Volberda (Henk)

    2003-01-01

    Despite the fact that they represent a growing element of business strategy, alliances between organisations quite often result in failure. This is partly due to the fact that firms have not built up adequate capabilities to manage alliances. Special management techniques have to be...

  5. Building alliance capability : management techniques for superior alliance performance

    NARCIS (Netherlands)

    Draulans, J.; Man, de A.P.; Volberda, H.W.

    2003-01-01

    Despite the fact that they represent a growing element of business strategy, alliances between organisations quite often result in failure. This is partly due to the fact that firms have not built up adequate capabilities to manage alliances. Special management techniques have to be implemented in...

  6. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    Science.gov (United States)

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer assisted design/computer assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer based-systems with the CAD/CAM packages evaluated. (CW)

  7. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brennan T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Witt, Adam M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeNeale, Scott T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pries, Jason L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kao, Shih-Chieh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mobley, Miles H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Kyutae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curd, Shelaine L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tsakiris, Achilleas [Univ. of Tennessee, Knoxville, TN (United States); Mooneyham, Christian [Univ. of Tennessee, Knoxville, TN (United States); Papanicolaou, Thanos [Univ. of Tennessee, Knoxville, TN (United States); Ekici, Kivanc [Univ. of Tennessee, Knoxville, TN (United States); Whisenant, Matthew J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Welch, Tim [US Department of Energy, Washington, DC (United States); Rabon, Daniel [US Department of Energy, Washington, DC (United States)

    2017-08-01

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  8. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  9. Group Capability Model

    Science.gov (United States)

    Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen

    2009-01-01

    The Group Capability Model (GCM) is a software tool that allows an organization, from first line management to senior executive, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method were required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates a Group Capability Index (GCI), where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certificates, GCI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization. They now have a common...
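    The abstract describes the GCI as a comparison of actual versus required head counts, certifications, and skills, with a score of 1 meaning the group meets its requirements, but does not give the formula. A minimal sketch of one way such an index could be computed (the function name, categories, and capped-mean aggregation are assumptions, not the GCM implementation):

```python
# Minimal sketch of a Group Capability Index (GCI) calculation. The exact
# GCM formula is not published in this record; here the GCI is assumed to
# be the mean of actual/required ratios, capped at 1.0 per category so a
# surplus in one area cannot mask a shortfall in another.

def group_capability_index(actual, required):
    """actual/required: dicts keyed by requirement category."""
    ratios = []
    for category, need in required.items():
        if need <= 0:
            continue  # category imposes no requirement
        ratios.append(min(actual.get(category, 0) / need, 1.0))
    return sum(ratios) / len(ratios) if ratios else 1.0

# A group meeting every requirement scores 1.0; shortfalls pull it below 1.
actual = {"head_count": 18, "certifications": 25, "critical_skills": 6}
required = {"head_count": 20, "certifications": 20, "critical_skills": 8}
print(round(group_capability_index(actual, required), 3))  # 0.883
```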

  10. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distributions of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. The results show that, unlike in the two nonlinear soil moisture case studies, ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance can be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice, and if it is appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it...

  11. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distributions of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. The results show that, unlike in the two nonlinear soil moisture case studies, ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance can be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice, and if it is appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should...
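    The "average overall error measures" are not enumerated in the abstract; a sketch of the per-realization evaluation step, assuming RMSE and MAE as representative measures and using synthetic stand-ins for two of the seven techniques:

```python
# Sketch of the evaluation loop: each technique's test-set predictions are
# scored with average overall error measures. RMSE and MAE are assumed
# here; the paper uses several such measures plus residual distributions.
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(obs - sim)))

rng = np.random.default_rng(0)
observed = rng.gamma(2.0, 1.5, size=200)             # stand-in test series
predictions = {                                      # stand-ins for ANN, GP, ...
    "MLR":  observed + rng.normal(0, 0.8, 200),
    "K-nn": observed + rng.normal(0, 0.5, 200),
}
for name, sim in predictions.items():
    print(f"{name:5s} RMSE={rmse(observed, sim):.3f} MAE={mae(observed, sim):.3f}")
```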

  12. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 1: Concepts and methodology

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    A comprehensive data driven modeling experiment is presented in a two-part paper. In this first part, an extensive data-driven modeling experiment is proposed. The most important concerns regarding the way data driven modeling (DDM) techniques and data were handled, compared, and evaluated, and the basis on which findings and conclusions were drawn, are discussed. A concise review of key articles that presented comparisons among various DDM techniques is presented. Six DDM techniques, namely neural networks, genetic programming, evolutionary polynomial regression, support vector machines, M5 model trees, and K-nearest neighbors, are proposed and explained. Multiple linear regression and naïve models are also suggested as a baseline for comparison with the various techniques. Five datasets from Canada and Europe, representing evapotranspiration, upper- and lower-layer soil moisture content, and the rainfall-runoff process, are described and proposed, in the second paper, for the modeling experiment. Twelve different realizations (groups) from each dataset are created by a procedure involving random sampling. Each group contains three subsets: training, cross-validation, and testing. Each modeling technique is proposed to be applied to each of the 12 groups of each dataset. This way, both the prediction accuracy and the uncertainty of the modeling techniques can be evaluated. The description of the datasets, the implementation of the modeling techniques, the results and analysis, and the findings of the modeling experiment are deferred to the second part of this paper.
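    As a concrete illustration of this experimental design, a sketch of generating the 12 realizations and their three subsets (the 60/20/20 split proportions, sample fraction, and function names are assumptions; the paper defines its own subset sizes):

```python
# Sketch of the resampling design: realizations are drawn from the dataset
# by random sampling without replacement, and each realization is split
# into training, cross-validation, and testing subsets.
import numpy as np

def make_realizations(data, n_realizations=12, frac=0.8, seed=1):
    rng = np.random.default_rng(seed)
    size = int(frac * len(data))
    realizations = []
    for _ in range(n_realizations):
        sample = rng.choice(data, size=size, replace=False)
        n_train, n_cv = int(0.6 * size), int(0.2 * size)
        realizations.append({
            "training": sample[:n_train],
            "cross_validation": sample[n_train:n_train + n_cv],
            "testing": sample[n_train + n_cv:],
        })
    return realizations

data = np.arange(1000.0)  # stand-in for one hydrological dataset
groups = make_realizations(data)
print(len(groups), {k: len(v) for k, v in groups[0].items()})
```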

  13. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location and time or when important local data are assimilated into the simulation, and it enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as its ability to attract new customers within the intelligence community.
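    The parameter-perturbation idea can be shown in miniature. The sketch below uses a toy 1-D Gaussian plume as a stand-in for the actual atmospheric transport code; the parameter values, distributions, and 20-member size mirror the description only loosely and are otherwise assumptions:

```python
# Illustrative sketch of the parameter-perturbation (AP) ensemble idea:
# each member runs the same transport model with perturbed parameters, and
# the ensemble spread characterizes prediction uncertainty.
import numpy as np

def plume_concentration(x, wind_speed, diffusivity, t=3600.0, q=1.0):
    sigma = np.sqrt(2.0 * diffusivity * t)
    return q / (sigma * np.sqrt(2 * np.pi)) * np.exp(
        -(x - wind_speed * t) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(42)
x = np.linspace(0.0, 40000.0, 401)                  # receptor distances [m]
members = []
for _ in range(20):                                 # 20-member ensemble
    u = rng.normal(5.0, 1.0)                        # perturbed wind speed [m/s]
    k = rng.lognormal(np.log(50.0), 0.3)            # perturbed diffusivity [m^2/s]
    members.append(plume_concentration(x, u, k))
ensemble = np.array(members)
print("mean peak location [m]:", x[np.argmax(ensemble.mean(axis=0))])
print("spread at 18 km:", ensemble.std(axis=0)[np.searchsorted(x, 18000.0)])
```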

  14. Dynamic capabilities, Marketing Capability and Organizational Performance

    Directory of Open Access Journals (Sweden)

    Adriana Roseli Wünsch Takahashi

    2017-01-01

    The goal of this study is to investigate the influence of dynamic capabilities on organizational performance and the mediating role of marketing capability in this relationship, in the context of private higher education institutions (HEIs) in Brazil. As a research method we carried out a survey of 316 HEIs, and data analysis was operationalized with the structural equation modeling technique. The results indicate that dynamic capabilities influence organizational performance only when mediated by marketing capability. Marketing capability has an important role in the survival, growth, and renewal of educational service offerings for HEIs in the private sector, and consequently in organizational performance. It is also demonstrated that the mediated relationship is more intense for HEIs with up to 3,000 students, and that other organizational profile variables, such as the number of courses, constitution, type of institution, and type of education, do not significantly alter the results.

  15. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. The study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis, IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  16. Towards a national cybersecurity capability development model

    CSIR Research Space (South Africa)

    Jacobs, Pierre C

    2017-06-01

    ...to be broken down into its components, a model serves as a blueprint to ensure that those building the capability consider all components, allows for cost estimation, and facilitates the evaluation of trade-offs. One national cybersecurity capability...

  17. A combined technique using SEM and TOPSIS for the commercialization capability of R&D project evaluation

    Directory of Open Access Journals (Sweden)

    Charttirot Karaveg

    2015-07-01

    There is high risk in commercializing R&D-based innovation, especially in the innovation transfer process, which concerns many entrepreneurs and researchers. The purpose of this research is to develop criteria for R&D commercialization capability and to propose a combined technique of Structural Equation Modelling (SEM) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) for R&D project evaluation. The research utilized a mixed-method approach. The first phase comprised a study on commercialization criteria development through survey research of 272 successful entrepreneurs and researchers in all industrial sectors in Thailand. The data were collected with a structured questionnaire and analyzed by SEM. The second phase involved SEM-TOPSIS technique development and a case study of 45 R&D projects in research institutes and incubators for technique validation. The results reveal six criteria for R&D commercialization capability, arranged in order of significance: marketing, technology, finance, non-financial impact, intellectual property, and human resources. The holistic criteria are presented to reduce the ambiguous subjectivity of fuzzy-expert systems, to help fund R&D effectively, and to prevent a resource meltdown. This study applies SEM to the relative weighting of hierarchical criteria, and the TOPSIS approach is employed to rank the alternative performance. The integrated SEM-TOPSIS, proposed for the first time, is applied to the R&D projects presented and shown to be effective and feasible in evaluating R&D commercialization capacity.
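    TOPSIS itself is a standard algorithm, so the ranking step can be sketched directly. The criterion weights below are placeholders ordered as in the paper's significance findings (marketing first, human resources last); the actual SEM-derived weights are not reproduced here, and all criteria are assumed to be benefit criteria:

```python
# Sketch of the TOPSIS step: SEM-derived criterion weights rank candidate
# R&D projects by relative closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights):
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector-normalize columns
    v = m * weights                                  # apply criterion weights
    ideal, anti = v.max(axis=0), v.min(axis=0)       # benefit criteria assumed
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)              # closeness coefficient

projects = np.array([[7, 8, 6, 5, 4, 6],             # rows: projects
                     [9, 6, 7, 6, 5, 5],             # cols: six criteria scores
                     [5, 9, 5, 7, 8, 7]], dtype=float)
weights = np.array([0.25, 0.22, 0.18, 0.14, 0.11, 0.10])  # placeholder weights
scores = topsis(projects, weights)
print("ranking (best first):", np.argsort(scores)[::-1])
```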

  18. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
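    A sketch of how a PCMM-style assessment might be tabulated, with the six elements above scored against a target level (the 0-3 encoding of the "four increasing levels of maturity", the scores, and the reporting format are illustrative assumptions):

```python
# Sketch of a PCMM-style assessment: each of the six contributing elements
# is scored on four maturity levels (0-3 here), and the profile is reported
# per element rather than collapsed to one number.
PCMM_ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def assess(scores, required_level):
    """scores: element -> maturity level (0..3); flags gaps vs. the target."""
    for element in PCMM_ELEMENTS:
        level = scores[element]
        gap = "" if level >= required_level else f"  <-- below target {required_level}"
        print(f"{element:55s} level {level}{gap}")

assess({e: s for e, s in zip(PCMM_ELEMENTS, [2, 1, 3, 2, 1, 0])}, required_level=2)
```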

  19. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  20. Proposing a Capability Perspective on Digital Business Models

    OpenAIRE

    Bärenfänger, Rieke; Otto, Boris

    2015-01-01

    Business models comprehensively describe the functioning of businesses in contemporary economic, technological, and societal environments. This paper focuses on the characteristics of digital business models from the perspective of capability research and develops a capability model for digital businesses. Following the design science research (DSR) methodology, multiple evaluation and design iterations were performed. Contributions to the design process came from IS/IT practice and the resea...

  1. Spent fuel reprocessing system security engineering capability maturity model

    International Nuclear Information System (INIS)

    Liu Yachun; Zou Shuliang; Yang Xiaohua; Ouyang Zigen; Dai Jianyong

    2011-01-01

    In the field of nuclear safety, traditional work places extra emphasis on risk assessment related to technical skills, production operations, and accident consequences through deterministic or probabilistic analysis, on the basis of which risk management and control are implemented. However, high product quality does not necessarily mean good safety quality, which implies a predictable degree of uniformity and dependability suited to specific security needs. In this paper, we make use of the systems security engineering capability maturity model (SSE-CMM) in the field of spent fuel reprocessing and establish a spent fuel reprocessing systems security engineering capability maturity model (SFR-SSE-CMM). The base practices in the model are collected from nuclear safety engineering practice; they represent the best security implementation activities and reflect the regular, basic work of implementing security engineering in a spent fuel reprocessing plant. The generic practices reveal the management, measurement, and institutional characteristics of all process activities. The basic principles that should be followed in the course of implementing safety engineering activities are indicated from the 'what' and 'how' aspects. The model provides a standardized framework and evaluation system for the safety engineering of the spent fuel reprocessing system. As a supplement to traditional methods, this new assessment technique, offering repeatability and predictability with respect to cost, procedure, and quality control, can turn security engineering activities into a series of mature, measurable, and standard activities. (author)

  2. Model-Based Military Scenario Management for Defence Capability

    National Research Council Canada - National Science Library

    Gori, Ronnie; Chen, Pin; Pozgay, Angela

    2004-01-01

    .... This paper describes initial work towards the development of an information model that links scenario and capability related information, and the results of capability analysis and experimentation...

  3. An experimental evaluation of the generalizing capabilities of process discovery techniques and black-box sequence models

    NARCIS (Netherlands)

    Tax, N.; van Zelst, S.J.; Teinemaa, I.; Gulden, Jens; Reinhartz-Berger, Iris; Schmidt, Rainer; Guerreiro, Sérgio; Guédria, Wided; Bera, Palash

    2018-01-01

    A plethora of automated process discovery techniques have been developed which aim to discover a process model based on event data originating from the execution of business processes. The aim of the discovered process models is to describe the control-flow of the underlying business process. At the...

  4. Computable general equilibrium model fiscal year 2014 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory

    2016-05-11

    This report provides an overview of the development of the NISAC CGE economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities, answering a broader set of questions than was possible with the previous economic analysis capability. In particular, CGE modeling captures how the different sectors of the economy (for example, households, businesses, and government) interact to allocate resources, and this approach preserves those interactions when it is used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  5. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  6. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves

    2017-05-01

    Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in the economics and strategy literature involves understanding the features that drive business success and a firm's perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms' innovation and dynamic capabilities based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all the capabilities, with the least influence on innovation. This reinforces the notion of operations capabilities as "ordinary capabilities," whereas management, development, and transaction capabilities better explain firms' dynamics and innovation.

  7. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed...

  8. Mac OS X Snow Leopard for Power Users Advanced Capabilities and Techniques

    CERN Document Server

    Granneman, Scott

    2010-01-01

    Mac OS X Snow Leopard for Power Users: Advanced Capabilities and Techniques is for Mac OS X users who want to go beyond the obvious, the standard, and the easy. If you want to dig deeper into Mac OS X and maximize your skills and productivity using the world's slickest and most elegant operating system, then this is the book for you. Written by Scott Granneman, an experienced teacher, developer, and consultant, Mac OS X for Power Users helps you push Mac OS X to the max, unveiling advanced techniques and options that you may have not known even existed. Create custom workflows and apps with Automa...

  9. Extension of ERIM multispectral data processing capabilities through improved data handling techniques

    Science.gov (United States)

    Kriegler, F. J.

    1973-01-01

    The improvement and extension of the capabilities of the Environmental Research Institute of Michigan processing facility in handling multispectral data are discussed. Improvements consisted of implementing hardware modifications which permitted more rapid access to the recorded data through improved numbering and indexing of such data. In addition, techniques are discussed for handling data from sources other than the ERIM M-5 and M-7 scanner systems.

  10. Off-Gas Adsorption Model Capabilities and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, Kevin L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, Amy K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ladshaw, Austin [Georgia Inst. of Technology, Atlanta, GA (United States); Yiacoumi, Sotira [Georgia Inst. of Technology, Atlanta, GA (United States); Tsouris, Costas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-03-01

    Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well at capturing the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool, as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real-world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still a need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single-species isotherms, such as those for Kr and Xe. Since isotherm data for each gas are currently available at only a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently...
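    The numerical-dispersion issue can be demonstrated generically. The sketch below advects a sharp breakthrough front with a first-order upwind scheme; the smearing it produces is the same artifact described for OSPREY, though this is not the OSPREY/DGOSPREY discretization itself and all grid parameters are illustrative:

```python
# Demonstration of numerical dispersion: a sharp adsorption breakthrough
# front advected with first-order upwinding smears out, even though pure
# advection should keep it sharp.
import numpy as np

nx, courant, steps = 200, 0.5, 300
c = np.zeros(nx)
inlet = 1.0                                    # normalized feed concentration
for _ in range(steps):                         # first-order upwind advection
    c[1:] = c[1:] - courant * (c[1:] - c[:-1])
    c[0] = inlet
# A perfectly sharp front would sit at x = courant * steps = cell 150;
# the upwind scheme spreads it over many cells instead.
front = np.where((c > 0.05) & (c < 0.95))[0]
print("front smeared across", front.size, "cells (ideal: ~1)")
```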

  11. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.

  12. Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts

    Science.gov (United States)

    Fan, Yalin; Yu, Zhitao; Shi, Fengyan

    2016-11-10

    ...mixed layer and thermocline simulations as well as large scale circulations. Numerical experiments are conducted using hydrostatic (HY) and...

  13. Sensor Alerting Capability

    Science.gov (United States)

    Henriksson, Jakob; Bermudez, Luis; Satapathy, Goutam

    2013-04-01

    There is a large amount of sensor data generated today by various sensors, from in-situ buoys to mobile underwater gliders. Providing sensor data to users through standardized services, language, and data models is the promise of OGC's Sensor Web Enablement (SWE) initiative. As the amount of data grows, it is becoming difficult for data providers, planners, and managers to ensure the reliability of data and services and to monitor critical data changes. Intelligent Automation Inc. (IAI) is developing a net-centric alerting capability to address these issues. The capability is built on Sensor Observation Services (SOSs), which are used to collect and monitor sensor data. Alerts can be configured at the service level and at the sensor data level; for example, the capability can alert on irregular data-delivery events or on a geo-temporal statistic of sensor data crossing a preset threshold. It provides multiple delivery mechanisms and protocols, including traditional techniques such as email and RSS. With this capability, decision makers can monitor their assets and data streams, correct failures, or be alerted about a coming phenomenon.
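    Both alert types described above can be sketched as simple predicates over an observation stream. The interfaces below are hypothetical; the actual capability sits on top of OGC SOS services rather than raw arrays:

```python
# Sketch of the two alert types: a data-delivery alert when observations
# stop arriving on schedule, and a threshold alert when a moving statistic
# of the sensor values crosses a preset limit.
import numpy as np

def delivery_alert(timestamps, expected_interval_s, tolerance=2.0):
    gaps = np.diff(np.asarray(timestamps, dtype=float))
    return bool(np.any(gaps > tolerance * expected_interval_s))

def threshold_alert(values, window, limit):
    values = np.asarray(values, dtype=float)
    if values.size < window:
        return False
    moving_mean = np.convolve(values, np.ones(window) / window, mode="valid")
    return bool(np.any(moving_mean > limit))

obs_times = [0, 60, 120, 180, 420, 480]          # one 4-minute dropout
temps = [18.1, 18.3, 18.2, 21.5, 22.7, 23.1]     # warming trend
print(delivery_alert(obs_times, expected_interval_s=60))   # True
print(threshold_alert(temps, window=3, limit=21.0))        # True
```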

  14. A user's guide to the SASSYS-1 control system modeling capability

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1987-06-01

    This report describes a control system modeling capability that has been developed for the analysis of control schemes for advanced liquid metal reactors. The general class of control equations that can be represented using the modeling capability is identified, and the numerical algorithms used to solve these equations are described. The modeling capability has been implemented in the SASSYS-1 systems analysis code. A description of the card input, a sample input deck and some guidelines for running the code are given
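    The record names a general class of control equations without reproducing them. A generic sketch of one member of that class, a PI controller driving a measured signal toward a setpoint, integrated with explicit Euler (the gains, the first-order plant, and all numbers are illustrative, not SASSYS-1 input):

```python
# Generic sketch of the kind of control-block equation such a capability
# solves; SASSYS-1's card input and numerical algorithms are described in
# the report itself.
def simulate_pi(setpoint=500.0, y0=450.0, kp=0.05, ki=0.01, dt=0.1, t_end=300.0):
    y, integral, t = y0, 0.0, 0.0
    while t < t_end:
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral        # controller demand
        y += (u - 0.02 * (y - 450.0)) * dt    # simple first-order plant
        t += dt
    return y

print(round(simulate_pi(), 2))  # approaches the 500.0 setpoint
```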

  15. Neural network modeling of a dolphin's sonar discrimination capabilities

    OpenAIRE

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL; Nachtigall, PE; Roitblat, H.

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer ...

  16. Capability maturity models for offshore organisational management.

    Science.gov (United States)

    Strutt, J E; Sharp, J V; Terry, E; Miles, R

    2006-12-01

    The goal-setting regime imposed by the UK safety regulator has important implications for an organisation's ability to manage health and safety related risks. Existing approaches to safety assurance based on risk analysis and formal safety assessments are increasingly considered unlikely to create the step change improvement in safety to which the offshore industry aspires, and alternative approaches are being considered. One approach, which addresses the important issue of organisational behaviour and which can be applied at a very early stage of design, is the capability maturity model (CMM). The paper describes the development of a design safety capability maturity model, outlining the key processes considered necessary to safety achievement, the definition of maturity levels, and scoring methods. The paper discusses how the CMM is related to regulatory mechanisms and risk-based decision making, together with the potential of the CMM for environmental risk management.

  17. Human, Social, Cultural Behavior (HSCB) Modeling Workshop I: Characterizing the Capability Needs for HSCB Modeling

    Science.gov (United States)

    2008-07-01

    The expectations correspond to different roles individuals perform... Social constructionism is a school of thought... Peter L...

  18. Enhancing the capabilities of eddy current techniques for non-destructive evaluation of austenitic stainless steels

    International Nuclear Information System (INIS)

    Rao, B.P.C.; Thirunavukkarasu, S.; Sasi, B.; Jayakumar, T.; Baldev Raj

    2010-01-01

    Eddy current non-destructive evaluation (NDE) techniques find many applications during fabrication and in-service inspection of components made of stainless steel. In recent years, concurrent developments in electromagnetic field detection sensors, such as giant magneto-resistive (GMR), giant magneto-impedance (GMI), and SQUID sensors, in computers and microelectronics, and in advanced signal and image processing techniques have paved the way for enhancing the capabilities of existing eddy current (EC) techniques for examination of austenitic stainless steel (SS) plates, tubes, and other geometries, and several innovative methodologies have been developed. This paper highlights a few such applications of EC testing to austenitic stainless steel components used in fast reactors. (author)

  19. Evaluating the habitat capability model for Merriam's turkeys

    Science.gov (United States)

    Mark A. Rumble; Stanley H. Anderson

    1995-01-01

    Habitat capability (HABCAP) models for wildlife assist land managers in predicting the consequences of their management decisions. Models must be tested and refined prior to using them in management planning. We tested the predicted patterns of habitat selection of the R2 HABCAP model using observed patterns of habitats selected by radio-marked Merriam’s turkeys (...

  20. Systems Security Engineering Capability Maturity Model SSE-CMM Model Description Document

    National Research Council Canada - National Science Library

    1999-01-01

    The Systems Security Engineering Capability Maturity Model (SSE-CMM) describes the essential characteristics of an organization's security engineering process that must exist to ensure good security engineering...

  1. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  2. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Garcia, Humberto; Burr, Tom; Coles, Garill A.; Edmunds, Thomas A.; Garrett, Alfred; Gorensek, Maximilian; Hamm, Luther; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Tzanos, Constantine P.; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  3. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  4. An assessment system for the system safety engineering capability maturity model in the case of spent fuel reprocessing

    International Nuclear Information System (INIS)

    Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Bai Xiaofeng

    2012-01-01

    We can improve processing and the evaluation of capability, and promote user trust, by using the system security engineering capability maturity model (SSE-CMM). SSE-CMM is the common method for organizing and implementing safety engineering, and it is a mature method for system safety engineering. Combining the capability maturity model (CMM) with total quality management and statistical theory, SSE-CMM turns systems security engineering into a well-defined, mature, measurable, advanced engineering discipline. Lack of domain knowledge, the size of the data, the diversity of evidence, the cumbersomeness of processes, and the complexity of matching evidence with problems are the main issues that SSE-CMM assessment has to face. To effectively improve the efficiency of assessment of the spent fuel reprocessing system security engineering capability maturity model (SFR-SSE-CMM), in this paper we designed intelligent assessment software based on domain ontology that uses methods such as ontology, evidence theory, the semantic web, intelligent information retrieval, and intelligent auto-matching techniques. The software includes four subsystems, including a domain ontology creation and management system, an evidence auto-collection system, and a problem and evidence matching system. The architecture of the software is divided into five layers: a data layer, an ontology layer, a knowledge layer, a service layer, and a presentation layer. (authors)

  5. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts above 0.5. Volume phase holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  6. Co-firing biomass and coal-progress in CFD modelling capabilities

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Yin, Chungen

    2005-01-01

    This paper discusses the development of user-defined FLUENT™ sub-models to improve the modelling capabilities in the area of large biomass particle motion and conversion. Focus is put on a model that includes the influence of particle size and shape on reactivity by resolving intra-particle gradients. The advanced reaction model predicts moisture and volatiles release characteristics that differ significantly from those found with a 0-dimensional model, partly because the processes occur in parallel rather than sequentially. This is demonstrated for a test case that illustrates single...

  7. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained based on an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module, Structural Modeling. Three computer software packages are selected, and will be integrated for this purpose. They are PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available in the market. For our purpose, it serves as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is relatively new. The basic idea is to produce a full and accurate 3D structural model by tracing on multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar...
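    The damage-detection principle stated above, comparing vibrational properties before and after damage, reduces to a generalized eigenvalue problem on the model's mass and stiffness matrices. A self-contained sketch with an illustrative 2-DOF system (the matrices and the 15% stiffness loss are made up for demonstration, not taken from any engine component):

```python
# Sketch of modal-based damage detection: natural frequencies come from the
# generalized eigenproblem K x = w^2 M x of the analytic model, and a
# stiffness loss from damage shifts them downward.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])                       # lumped masses [kg]
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])               # undamaged stiffness [N/m]
K_damaged = K * 0.85                          # uniform 15% stiffness loss

for label, k in [("pre-damage ", K), ("post-damage", K_damaged)]:
    eigvals = eigh(k, M, eigvals_only=True)
    freqs = np.sqrt(eigvals) / (2 * np.pi)    # natural frequencies [Hz]
    print(label, np.round(freqs, 3))
```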

  8. Security Process Capability Model Based on ISO/IEC 15504 Conformant Enterprise SPICE

    Directory of Open Access Journals (Sweden)

    Mitasiunas Antanas

    2014-07-01

    Full Text Available In the context of modern information systems, security has become one of the most critical quality attributes. The purpose of this paper is to address the problem of the quality of information security. An approach to solving this problem is based on the main assumption that security is a process-oriented activity. According to this approach, product quality can be achieved by means of process quality, that is, process capability. The SPICE-conformant information security process capability model introduced in the paper is based on the process capability modelling elaborated by the worldwide software engineering community over the last 25 years, namely ISO/IEC 15504, which defines the capability dimension and the requirements for process definition, and the domain-independent integrated model for enterprise-wide assessment and improvement, Enterprise SPICE.

  9. Predictive capabilities of a two-dimensional model in the ground water transport of radionuclides

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Beskid, N.J.; Marmer, G.J.

    1978-01-01

    The discharge of low-level radioactive waste into tailings ponds is a potential source of ground water contamination. The estimation of the radiological hazards related to the ground water transport of radionuclides from tailings retention systems depends on reasonably accurate estimates of the movement of both water and solute. A two-dimensional mathematical model having predictive capability for ground water flow and solute transport has been developed. The flow equation has been solved under steady-state conditions and the mass transport equation under transient conditions. The simultaneous solution of both equations is achieved through the finite element technique using isoparametric elements, based on the Galerkin formulation. However, in contrast to the flow equation solution, the weighting functions used in the solution of the mass transport equation have a non-symmetric form. The predictive capability of the model is demonstrated using an idealized case based on analyses of field data obtained from the sites of operating uranium mills. The pH of the solution, which regulates the variation of the distribution coefficient (K_d) in a particular site, appears to be the most important factor in the assessment of the rate of migration of the elements considered herein

  10. Impact of Personnel Capabilities on Organizational Innovation Capability

    DEFF Research Database (Denmark)

    Nielsen, Susanne Balslev; Momeni, Mostafa

    2016-01-01

    …in this rapidly changing world. This research focuses on the definition of the personnel aspect of innovation capability, and proposes a conceptual model based on scientific articles from the academic literature on organisational innovation capability. The paper includes an expert-based validation in three rounds of the Delphi method. For a better appreciation of the relationships dominating the factors of the model, a questionnaire was distributed to Iranian companies in the food industry. This research proposes a direct relationship between innovation capability and personnel capability…

  11. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Methods for accurate modeling and power capability prediction of ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared, to study their effects on the ultracapacitor's power capability.
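
    The variable-capacitance idea above lends itself to a short numerical sketch. The piecewise-linear branch C(V) = c0 + k*V, the parameter values, and the constant-current discharge below are illustrative assumptions, not the paper's fitted model or its Kalman-filter observer:

        # Hypothetical piecewise-linear main capacitance C(V) = c0 + k*V on the
        # operating branch; c0, k and the load current are illustrative values.
        c0, k = 80.0, 12.0          # farads, farads per volt

        def capacitance(v):
            return c0 + k * v

        def stored_charge(v):
            # Q(V) = integral of C(u) du from 0 to V on the linear branch.
            return c0 * v + 0.5 * k * v * v

        def soc(v, v_rated=2.7):
            # State of charge as the ratio of stored charge to charge at rated voltage.
            return stored_charge(v) / stored_charge(v_rated)

        # Constant-current discharge: C(V) * dV/dt = -I, integrated by forward Euler.
        v, i_load, dt = 2.7, 5.0, 0.1
        for _ in range(100):
            v -= i_load / capacitance(v) * dt
        print(f"terminal voltage {v:.3f} V, SOC {soc(v):.1%}")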

  12. Determining Plutonium Mass in Spent Fuel with Nondestructive Assay Techniques -- Preliminary Modeling Results Emphasizing Integration among Techniques

    International Nuclear Information System (INIS)

    Tobin, S.J.; Fensin, M.L.; Ludewigt, B.A.; Menlove, H.O.; Quiter, B.J.; Sandoval, N.P.; Swinhoe, M.T.; Thompson, S.J.

    2009-01-01

    There are a variety of motivations for quantifying Pu in spent (used) fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the capability of the International Atomic Energy Agency to safeguard nuclear facilities, quantifying shipper/receiver differences, determining the input accountability value at reprocessing facilities, and providing quantitative input to burnup credit determination for repositories. For the purpose of determining the Pu mass in spent fuel assemblies, twelve NDA techniques were identified that provide information about the composition of an assembly. A key point motivating the present research path is the realization that none of these techniques, in isolation, is capable of both (1) quantifying the elemental Pu mass of an assembly and (2) detecting the diversion of a significant number of pins. As such, the focus of this work is determining how to best integrate 2 or 3 techniques into a system that can quantify elemental Pu, and assessing how well this system can detect material diversion. Furthermore, it is important economically to down-select among the various techniques before advancing to the experimental phase. In order to achieve this dual goal of integration and down-selection, a Monte Carlo library of PWR assemblies was created; it is described in another paper at Global 2009 (Fensin et al.). The research presented here emphasizes integration among techniques. An overview of a five-year research plan starting in 2009 is given. Preliminary modeling results for the Monte Carlo assembly library are presented for 3 NDA techniques: Delayed Neutrons, Differential Die-Away, and Nuclear Resonance Fluorescence. As part of the focus on integration, the concept of 'Pu isotopic correlation' is discussed, along with the role of cooling time determination.

  13. Systems Security Engineering Capability Maturity Model (SSECMM), Model Description, Version 1.1

    National Research Council Canada - National Science Library

    1997-01-01

    This document is designed to acquaint the reader with the SSE-CMM Project as a whole and present the project's major work product - the Systems Security Engineering Capability Maturity Model (SSE-CMM)…

  14. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  15. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
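
    The validation step described above, checking analytic design sensitivities against an overall finite difference, can be illustrated with a toy response in place of a finite element model; the cantilever formula and all values below are assumptions for illustration only:

        # Toy validation of an analytic design sensitivity against an overall
        # central finite difference. The response is the tip deflection of a
        # cantilever, delta(t) = P*L**3 / (3*E*I(t)) with I(t) = b*t**3/12, and
        # the sizing variable t stands in for a grid/shape design variable.
        P, L, E, b = 1000.0, 2.0, 2.1e11, 0.05

        def deflection(t):
            inertia = b * t ** 3 / 12.0
            return P * L ** 3 / (3.0 * E * inertia)

        def analytic_sensitivity(t):
            # d(delta)/dt = -3*delta/t, since delta is proportional to t**-3.
            return -3.0 * deflection(t) / t

        t0, h = 0.01, 1.0e-6
        fd = (deflection(t0 + h) - deflection(t0 - h)) / (2.0 * h)
        print(analytic_sensitivity(t0), fd)   # the two values should agree closely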

  16. The Community WRF-Hydro Modeling System Version 4 Updates: Merging Toward Capabilities of the National Water Model

    Science.gov (United States)

    McAllister, M.; Gochis, D.; Dugger, A. L.; Karsten, L. R.; McCreight, J. L.; Pan, L.; Rafieeinasab, A.; Read, L. K.; Sampson, K. M.; Yu, W.

    2017-12-01

    The community WRF-Hydro modeling system is publicly available and provides researchers and operational forecasters a flexible and extensible capability for performing multi-scale, multi-physics hydrologic modeling that can be run independently of, or fully interactively with, the WRF atmospheric model. The core WRF-Hydro physics model contains very high-resolution descriptions of terrestrial hydrologic process representations such as land-atmosphere exchanges of energy and moisture, snowpack evolution, infiltration, terrain routing, channel routing, basic reservoir representation and hydrologic data assimilation. Complementing the core physics components of WRF-Hydro is an ecosystem of pre- and post-processing tools that facilitate the preparation of terrain and meteorological input data, an open-source hydrologic model evaluation toolset (Rwrfhydro), hydrologic data assimilation capabilities with DART and advanced model visualization capabilities. The National Center for Atmospheric Research (NCAR), through collaborative support from the National Science Foundation and other funding partners, provides community support for the entire WRF-Hydro system through a variety of mechanisms. This presentation summarizes the enhanced user support capabilities that are being developed for the community WRF-Hydro modeling system. These products and services include a new website, open-source code repositories, documentation and user guides, test cases, online training materials, live hands-on training sessions, an email list serve, and individual user support via email through a new help desk ticketing system. The WRF-Hydro modeling system and supporting tools, which now include re-gridding scripts and model calibration, have recently been updated to Version 4 and are merging toward the capabilities of the National Water Model.

  17. Three Models of Education: Rights, Capabilities and Human Capital

    Science.gov (United States)

    Robeyns, Ingrid

    2006-01-01

    This article analyses three normative accounts that can underlie educational policies, with special attention to gender issues. These three models of education are human capital theory, rights discourses and the capability approach. I first outline five different roles that education can play. Then I analyse these three models of educational…

  18. Capability maturity models in engineering companies: case study analysis

    Directory of Open Access Journals (Sweden)

    Titov Sergei

    2016-01-01

    Full Text Available In the conditions of the current economic downturn, engineering companies in Russia and worldwide are searching for new approaches and frameworks to improve their strategic position, increase the efficiency of their internal business processes and enhance the quality of their final products. Capability maturity models are well-known tools used by many foreign engineering companies to assess the productivity of processes, to elaborate programs of business process improvement and to prioritize the efforts to optimize whole-company performance. The impact of capability maturity model implementation on cost and time is documented and analyzed in the existing research. However, the potential of maturity models as tools of quality management is less known. The article attempts to analyze the impact of CMM implementation on quality issues. The research is based on a case study methodology and investigates a real-life situation in a Russian engineering company.

  19. Stochastic and simulation models of maritime intercept operations capabilities

    OpenAIRE

    Sato, Hiroyuki

    2005-01-01

    The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...

  20. Models to enhance research capacity and capability in clinical nurses: a narrative review.

    Science.gov (United States)

    O'Byrne, Louise; Smith, Sheree

    2011-05-01

    To identify models used as local initiatives to build capability and capacity in clinical nurses. The National Health Service, the Nursing and Midwifery Council and the United Kingdom Clinical Research Collaboration all support the building of research capability and capacity in clinical nurses in the UK. Narrative review. A literature search of databases (including Medline and Pubmed) using the search terms nursing research, research capacity and research capability combined with building, development, model and collaboration. Publications which included a description or methodological study of a structured initiative to tackle research capacity and capability development in clinical nurses were selected. Three models were found to be dominant in the literature: evidence-based practice, facilitative and experiential learning models. Strong leadership, organisational need and management support were elements found in all three models. Methodological issues were evident and pertain to small sample sizes, inconsistent and poorly defined outcomes, and a lack of data. Whilst the vision of a research-ready and active National Health Service is to be applauded, to date there appears to be limited research on the best approach to support local initiatives for nurses that build research capability and capacity. Future studies will need to focus on well-defined objectives and outcomes to enable robust evidence to support local initiatives. To build research capability and capacity in clinical nurses, there is a need to evaluate models and determine the best approach that will provide clinical nurses with research opportunities. © 2010 Blackwell Publishing Ltd.

  1. Proving the capabilities of the phased-array probe/ALOK inspection technique

    International Nuclear Information System (INIS)

    Bohn, H.; Kroening, M.; Rathgeb, W.; Gebhardt, W.; Kappes, W.; Barbian, O.A.

    1987-01-01

    The capability of the ALOK phased-probe-array inspection technique results from the simplicity of the testing system structure, the reflector detection and identification by means of transit time curves, and the analytical capacity of the system. Testing times are shortened, with the test results meeting the current standards, and further possibilities are opened up: areas that could so far be inspected only with difficulty or not at all can be examined, thanks to the compact equipment and without having to modify the system, and very informative analysis measurements for interpreting indications can be made. Furthermore, it may be expected from testing practice that descriptions of indications will become more reliable and reproducible, due to transit time curve identification. In addition to conventional criteria for reflector evaluation, the potential of transit time curve identification and flaw boundary imaging of the reconstruction image can be utilized. (orig.)

  2. Development of a fourth generation predictive capability maturity model.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J.; Trucano, Timothy Guy

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  3. Full optical model of micro-endoscope with optical coherence microscopy, multiphoton microscopy and visible capabilities

    Science.gov (United States)

    Vega, David; Kiekens, Kelli C.; Syson, Nikolas C.; Romano, Gabriella; Baker, Tressa; Barton, Jennifer K.

    2018-02-01

    While Optical Coherence Microscopy (OCM), Multiphoton Microscopy (MPM), and narrowband imaging are powerful imaging techniques that can be used to detect cancer, each imaging technique has limitations when used by itself. Combining them into an endoscope to work in synergy can help achieve high sensitivity and specificity for diagnosis at the point of care. Such complex endoscopes have an elevated risk of failure, and performing proper modelling ensures functionality and minimizes risk. We present full 2D and 3D models of a multimodality optical micro-endoscope, called a salpingoscope, designed to provide real-time detection of carcinomas. The models evaluate the endoscope illumination and light collection capabilities of the various modalities. The design features two optical paths with different numerical apertures (NA) through a single lens system with a scanning optical fiber. The dual path is achieved using dichroic coatings embedded in a triplet. A high-NA optical path is designed to perform OCM and MPM, while a low-NA optical path is designed for the visible spectrum, to navigate the endoscope to areas of interest and to perform narrowband imaging. Different tests, such as the reflectance profile of homogeneous epithelial tissue, were performed to adjust the models properly. Light collection models for the different modalities were created and tested for efficiency. While it is challenging to evaluate the efficiency of multimodality endoscopes, the models ensure that the system is designed for the expected light collection levels and provides a detectable signal for the intended imaging.

  4. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Full Text Available Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, supporting the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the defined modelling process relies on the creation of useful CIMOSA process models, which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.

  5. Data Farming Process and Initial Network Analysis Capabilities

    Directory of Open Access Journals (Sweden)

    Gary Horne

    2016-01-01

    Full Text Available Data farming, network applications, and ways of integrating network analysis and its processes into the data farming paradigm are presented as approaches to address complex system questions. Data farming is a quantified approach that examines questions in large possibility spaces using modeling and simulation. It evaluates whole landscapes of outcomes to draw insights from outcome distributions and outliers. Social network analysis and graph theory are widely used techniques for the evaluation of social systems. Incorporation of these techniques into the data farming process provides analysts examining complex systems with a powerful new suite of tools for more fully exploring and understanding the effect of interactions in complex systems. The integration of network analysis with data farming techniques provides modelers with the capability to gain insight into the effect of network attributes, whether the network is explicitly defined or emergent, on the breadth of the model outcome space and the effect of model inputs on the resultant network statistics.
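
    A rough sketch of folding network statistics into a data-farming sweep follows; the Erdős-Rényi graph stands in for an emergent interaction network, and all names and values are illustrative assumptions:

        import itertools

        import networkx as nx

        # Sweep a small input landscape, replicate each design point, and record
        # network statistics of a stand-in "emergent" interaction graph together
        # with a stand-in model outcome.
        results = []
        for n_agents, link_p in itertools.product([20, 40], [0.05, 0.15]):
            for rep in range(10):
                g = nx.erdos_renyi_graph(n_agents, link_p, seed=rep)
                results.append({
                    "n_agents": n_agents,
                    "link_p": link_p,
                    "density": nx.density(g),
                    "clustering": nx.average_clustering(g),
                    "outcome": nx.number_connected_components(g),
                })
        # Distributions and outliers over 'results' are then mined for insight.
        print(len(results), results[0])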

  6. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    Science.gov (United States)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and to interpret the results mechanistically, because it is not simple to determine exactly why a model is producing the results it does and to identify which model assumptions are key: models combine sub-models of many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser, PEcAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration and to enhance our predictive understanding of biological systems.
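
    The combinatorial core of such a multi-assumption framework can be sketched briefly; the two processes, their alternative representations, and the forcing values below are invented for illustration and are not MAAT code:

        from itertools import product

        # Two alternative representations (hypotheses) for each of two processes;
        # the ensemble is every combination, run over the same forcing.
        def photosynthesis_linear(light):
            return 0.05 * light

        def photosynthesis_saturating(light):
            return 8.0 * light / (light + 200.0)

        def respiration_linear(temp):
            return 0.1 * temp

        def respiration_q10(temp):
            return 0.5 * 2.0 ** ((temp - 10.0) / 10.0)

        hypotheses = {
            "photosynthesis": [photosynthesis_linear, photosynthesis_saturating],
            "respiration": [respiration_linear, respiration_q10],
        }
        light, temp = 400.0, 18.0
        for photo, resp in product(*hypotheses.values()):
            nee = photo(light) - resp(temp)   # net exchange under this combination
            print(photo.__name__, resp.__name__, round(nee, 3))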

  7. Stochastic Feedforward Control Technique

    Science.gov (United States)

    Halyo, Nesim

    1990-01-01

    A class of commanded trajectories is modeled as a stochastic process. The Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increases in the capacity of airports, safe and accurate flight in adverse weather conditions (including wind shear, winds, and avoidance of wake vortexes), and reduced consumption of fuel. The work builds on advances in modern control design techniques and the increased capabilities of digital flight computers, coupled with accurate guidance information from the Microwave Landing System (MLS). The stochastic feedforward control technique was developed within the context of the ATOPS program.

  8. Capability to model reactor regulating system in RFSP

    Energy Technology Data Exchange (ETDEWEB)

    Chow, H C; Rouben, B; Younis, M H; Jenkins, D A [Atomic Energy of Canada Ltd., Mississauga, ON (Canada); Baudouin, A [Hydro-Quebec, Montreal, PQ (Canada); Thompson, P D [New Brunswick Electric Power Commission, Point Lepreau, NB (Canada). Point Lepreau Generating Station

    1996-12-31

    The Reactor Regulating System package extracted from SMOKIN-G2 was linked within RFSP to the spatial kinetics calculation. The objective is to use this new capability in safety analysis to model the actions of RRS in hypothetical events such as in-core LOCA or moderator drain scenarios. This paper describes the RRS modelling in RFSP and its coupling to the neutronics calculations, verification of the RRS control routine functions, sample applications and comparisons to SMOKIN-G2 results for the same transient simulations. (author). 7 refs., 6 figs.

  9. Hybrid modeling approach to improve the forecasting capability for the gaseous radionuclide in a nuclear site

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Hwang, Wontae; Kim, Eunhan; Han, Moonhee

    2012-01-01

    Highlights: ► This study aims to improve the reliability of air dispersion modeling. ► Tracer experiments assuming gaseous radionuclides were conducted at a nuclear site. ► The performance of a hybrid model combining ISC with ANFIS was investigated. ► The hybrid modeling approach shows better performance than a single ISC model. - Abstract: Predicted air concentrations of radioactive materials are important for an environmental impact assessment for public health. In this study, the performance of a hybrid model combining the industrial source complex (ISC) model with an adaptive neuro-fuzzy inference system (ANFIS) for predicting tracer concentrations was investigated. Tracer dispersion experiments were performed to produce field data, assuming the accidental release of radioactive material. ANFIS was trained so that the outputs of the ISC model approximate the measured data. Judging from the higher correlation coefficients between the measured and the calculated concentrations, the hybrid modeling approach could be an appropriate technique for improving the modeling capability to predict air concentrations of radioactive materials.
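
    The hybrid correction step can be sketched as follows; a linear least-squares map stands in for the ANFIS stage, and the synthetic "measurements" and biased "ISC outputs" are assumptions for illustration:

        import numpy as np

        # A physical model's raw predictions are post-corrected toward measured
        # tracer data; linear least squares stands in for the ANFIS stage here.
        rng = np.random.default_rng(0)
        measured = rng.uniform(1.0, 10.0, 50)                   # tracer data
        raw = 0.7 * measured + 0.8 + rng.normal(0.0, 0.3, 50)   # biased model output

        # Fit corrected = a*raw + b on a training split, test on the remainder.
        A = np.vstack([raw[:40], np.ones(40)]).T
        a, b = np.linalg.lstsq(A, measured[:40], rcond=None)[0]
        corrected = a * raw[40:] + b

        def rmse(err):
            return float(np.sqrt(np.mean(err ** 2)))

        print("RMSE raw:", round(rmse(raw[40:] - measured[40:]), 3))
        print("RMSE corrected:", round(rmse(corrected - measured[40:]), 3))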

  10. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

    Full Text Available Accurate and fine-grained discovery of diverse Earth observation (EO) sensors ensures a comprehensive response to emergency tasks that require collaborative observation. This discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends the existing sensor observation-related metadata standards to enable the accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules, namely, ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized by the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on the provided geo-event. An actual application to flood emergency observation in the Yangtze River Basin in China is conducted, and the results indicate that sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures minimum unification to represent the observation capabilities of EO sensors. The model functions as a foundation for the efficient discovery of EO sensors. In addition, the definition and development of this proposed EO sensor observation capability metadata model is a helpful step in extending the Sensor Model Language (SensorML) 2.0 Profile for the description of the observation capabilities of EO sensors.
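
    The five sub-modules can be pictured as a simple containment structure; the classes and field names below are illustrative assumptions and do not reproduce the paper's SensorML 1.0 encoding:

        from dataclasses import dataclass, field
        from typing import List

        # Illustrative containers mirroring the five sub-modules named above.
        @dataclass
        class ObservationBreadth:
            swath_km: float
            spectral_bands: List[str]

        @dataclass
        class ObservationDepth:
            spatial_resolution_m: float
            radiometric_bits: int

        @dataclass
        class ObservationFrequency:
            revisit_days: float

        @dataclass
        class ObservationQuality:
            geolocation_error_m: float
            max_cloud_cover: float

        @dataclass
        class ObservationData:
            formats: List[str] = field(default_factory=list)

        @dataclass
        class ObservationCapability:
            sensor_id: str
            breadth: ObservationBreadth
            depth: ObservationDepth
            frequency: ObservationFrequency
            quality: ObservationQuality
            data: ObservationData

        cap = ObservationCapability(
            "demo-optical-sensor",
            ObservationBreadth(700.0, ["VIS", "NIR"]),
            ObservationDepth(30.0, 8),
            ObservationFrequency(4.0),
            ObservationQuality(100.0, 0.2),
            ObservationData(["GeoTIFF"]),
        )
        print(cap.sensor_id, cap.frequency.revisit_days)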

  11. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  12. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies
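
    A single explicit level-set update, of the kind iterated in the scheme above, can be sketched as follows; the fixed normal velocity replaces the shape-derivative-based velocity of the paper, and the grid and values are assumptions:

        import numpy as np

        # One explicit update of the level-set equation phi_t + v*|grad phi| = 0.
        # In the scheme above, v comes from the shape derivative of the misfit;
        # here it is a fixed toy value so the step is self-contained.
        n, dx, dt = 64, 1.0, 0.2
        x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
        phi = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 10.0  # circle of radius 10

        v = 0.5                              # toy normal velocity (uniform expansion)
        gx, gy = np.gradient(phi, dx)        # central differences for grad phi
        grad_norm = np.sqrt(gx ** 2 + gy ** 2)
        phi -= dt * v * grad_norm            # facies boundary is the zero contour

        print("cells inside the facies:", int((phi < 0).sum()))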

  13. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    Science.gov (United States)

    Iacobucci, Joseph V.

    …problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example, it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of systems architecture studies of different sizes. This was necessary since a system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. The SEAD capability model outputs prompted the discovery of a new issue, the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes: the use…

  14. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time a…

  15. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Many business process modelling techniques are available today. This article investigates the differences among them. For each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: the notation, and how the technique works when implemented for Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serves as the basis for evaluating further modelling techniques.

  16. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of soil can be predicted from various soil characterization parameters with the assistance of MLR and ANN methods. The database was built in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction test results and the CBR were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
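
    A minimal version of the MLR-versus-ANN comparison can be sketched with scikit-learn; the synthetic index-property data below stand in for the 86 laboratory samples, and the model settings are assumptions:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for the database: CBR as a noisy nonlinear function
        # of liquid limit (LL), plasticity index (PI) and maximum dry density (MDD).
        rng = np.random.default_rng(42)
        X = rng.uniform([20.0, 5.0, 1.5], [60.0, 30.0, 2.1], size=(86, 3))
        y = 40.0 - 0.3 * X[:, 0] - 0.4 * X[:, 1] + 15.0 * X[:, 2] ** 2 \
            + rng.normal(0.0, 2.0, 86)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        mlr = LinearRegression().fit(X_tr, y_tr)
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0)).fit(X_tr, y_tr)
        print("MLR R2:", round(r2_score(y_te, mlr.predict(X_te)), 3))
        print("ANN R2:", round(r2_score(y_te, ann.predict(X_te)), 3))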

  17. Technique development for modulus, microcracking, hermeticity, and coating evaluation capability characterization of SiC/SiC tubes

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Xunxiang [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Ang, Caen K. [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Singh, Gyanender P. [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Katoh, Yutai [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    Driven by the need to enlarge the safety margins of nuclear fission reactors in accident scenarios, research and development of accident-tolerant fuel has become an important topic in the nuclear engineering and materials community. A continuous-fiber SiC/SiC composite is under consideration as a replacement for traditional zirconium alloy cladding owing to its high-temperature stability, chemical inertness, and exceptional irradiation resistance. An important task is the development of characterization techniques for SiC/SiC cladding, since traditional work using rectangular bars or disks cannot directly provide useful information on the properties of SiC/SiC composite tubes for fuel cladding applications. At Oak Ridge National Laboratory, experimental capabilities are under development to characterize the modulus, microcracking, and hermeticity of as-fabricated, as-irradiated SiC/SiC composite tubes. Resonant ultrasound spectroscopy has been validated as a promising technique to evaluate the elastic properties of SiC/SiC composite tubes and microcracking within the material. A similar technique, impulse excitation, is efficient in determining the basic mechanical properties of SiC bars prepared by chemical vapor deposition; it also has potential for application in studying the mechanical properties of SiC/SiC composite tubes. Complete evaluation of the quality of the developed coatings, a major mitigation strategy against gas permeation and hydrothermal corrosion, requires the deployment of various experimental techniques, such as scratch indentation, tensile pulling-off tests, and scanning electron microscopy. In addition, a comprehensive permeation test station is being established to assess the hermeticity of SiC/SiC composite tubes and to determine the H/D/He permeability of SiC/SiC composites. This report summarizes the current status of the development of these experimental capabilities.

  18. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation study is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc. for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  19. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  20. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    International Nuclear Information System (INIS)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation, with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses

  1. Advanced capabilities for materials modelling with Quantum ESPRESSO

    Science.gov (United States)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  2. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    Science.gov (United States)

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  3. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and at modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed, capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN for modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to different dynamics based on physical concepts is better than the soft decomposition employed using SOM.
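
    The recession-based treatment of the falling limb, finding (c) above, reduces to fitting Q(t) = Q0*exp(-k*t); a sketch with synthetic flows follows (the MLP and SOM segments are not reproduced, and all values are assumptions):

        import numpy as np

        # Fit the recession constant k in Q(t) = Q0*exp(-k*t) by least squares on
        # log flows; synthetic noisy flows stand in for observed falling-limb data.
        rng = np.random.default_rng(3)
        t = np.arange(10.0)                             # days since the peak
        q = 120.0 * np.exp(-0.35 * t) * np.exp(rng.normal(0.0, 0.03, 10))

        slope, log_q0 = np.polyfit(t, np.log(q), 1)     # slope = -k on the log scale
        k, q0 = -slope, np.exp(log_q0)
        forecast = q0 * np.exp(-k * (t[-1] + 1.0))      # one-day-ahead recession flow
        print(f"fitted k = {k:.3f} per day, next-day flow about {forecast:.1f} m3/s")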

  4. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    Science.gov (United States)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  5. Simulation modeling on the growth of firm's safety management capability

    Institute of Scientific and Technical Information of China (English)

    LIU Tie-zhong; LI Zhi-xiang

    2008-01-01

    Aiming at the deficiencies of existing safety management measures, a simulation model of firms' safety management capability (FSMC) was established based on organizational learning theory. The system dynamics (SD) method was used, in which the level and rate system, the variable equations and the system structure flow diagram were developed. The simulation model was verified from two aspects: first, the model's sensitivity to variables was tested via the gross safety investment and the proportion of safety investment; second, variable dependency was checked using the correlated variables of FSMC and organizational learning. The feasibility of the simulation model is verified through these processes.
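
    A minimal level-and-rate sketch in the spirit of the FSMC model follows; the stock, flows, and parameter values are invented for illustration and are not taken from the paper:

        # Level-and-rate sketch: the FSMC level rises through organizational
        # learning (driven by safety investment) and decays through forgetting.
        capability = 0.2          # FSMC level, normalized to 0..1
        investment_share = 0.04   # proportion of revenue spent on safety
        learning_gain = 0.8
        forgetting_rate = 0.05
        dt = 0.25                 # years per simulation step

        for quarter in range(40):
            learning = learning_gain * investment_share * (1.0 - capability)  # inflow
            forgetting = forgetting_rate * capability                          # outflow
            capability += (learning - forgetting) * dt
        print(f"FSMC after 10 years: {capability:.3f}")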

  6. Enhancement of loss detection capability using a combination of the Kalman Filter/Linear Smoother and controllable unit accounting approach

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.

    1979-01-01

    An approach to loss detection is presented which combines the optimal loss detection capability of state estimation techniques with a controllable unit accounting approach. The state estimation theory makes use of a linear system model which is capable of modeling the interaction of various controllable unit areas within a given facility. An example is presented which illustrates the increase in loss detection probability which is realizable with state estimation techniques. Comparisons are made with a Shewhart Control Chart and the CUSUM statistic
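
    The state-estimation side of this approach can be sketched with a scalar Kalman filter over a sequence of material balances; the noise levels, threshold, and "true loss" below are illustrative assumptions:

        import numpy as np

        # Scalar Kalman filter over a sequence of material balances: the state is
        # an (assumed nearly constant) per-period loss observed through noisy
        # balance measurements; a 3-sigma test on the estimate raises an alarm.
        rng = np.random.default_rng(7)
        true_loss = 0.8                                   # unknown loss per period
        balances = true_loss + rng.normal(0.0, 1.0, 30)   # measured balances (MUF)

        x, p = 0.0, 10.0        # state estimate and its variance
        q, r = 0.001, 1.0       # process and measurement noise variances
        for period, z in enumerate(balances, start=1):
            p += q                          # predict (state nearly constant)
            gain = p / (p + r)              # update with the new balance
            x += gain * (z - x)
            p *= 1.0 - gain
            if abs(x) > 3.0 * np.sqrt(p):
                print(f"alarm at period {period}: loss {x:.2f} +/- {np.sqrt(p):.2f}")
                break
        else:
            print(f"no alarm; final estimate {x:.2f} +/- {np.sqrt(p):.2f}")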

  7. TRISTAN I: techniques, capabilities, and accomplishments

    International Nuclear Information System (INIS)

    Talbert, W.L. Jr.

    1977-01-01

    Following a brief description of the TRISTAN facility, the techniques developed for on-line nuclear spectroscopy of short-lived fission products, the studies possible, and the activities studied are presented. All journal publications relating to the development of the facility and the studies carried out using it are referenced, and co-workers identified

  8. Capability-based Access Control Delegation Model on the Federated IoT Network

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Mahalle, Parikshit N.; Prasad, Neeli R.

    2012-01-01

    Flexibility is an important property for a general access control system, and especially in the Internet of Things (IoT); it can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for systems without resource constraints, such as web-based systems, and are not very suitable for a highly pervasive system such as the IoT. To this end, this paper presents an access delegation method with security considerations, based on the Capability-based Context Aware Access Control (CCAAC) model intended for federated machine-to-machine communication or IoT networks. The main idea of our proposed model is that the access delegation is realized by means of a capability propagation mechanism, incorporating context information as well as secure capability propagation under federated IoT environments. By using…
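
    The capability-propagation idea can be sketched as a chain of narrowing tokens with a context check at use time; the class and field names below are assumptions for illustration, not the CCAAC specification:

        from dataclasses import dataclass
        from typing import Optional, Set

        # A capability token that can be delegated by propagation: the delegate
        # receives a (possibly narrowed) capability chained to its parent, and a
        # context check gates its use.
        @dataclass(frozen=True)
        class Capability:
            subject: str
            resource: str
            rights: frozenset
            context: str                      # e.g. the required federation
            parent: Optional["Capability"] = None

            def delegate(self, to_subject: str, rights: Set[str]) -> "Capability":
                if not rights <= self.rights:     # may narrow rights, never widen
                    raise PermissionError("cannot delegate rights not held")
                return Capability(to_subject, self.resource, frozenset(rights),
                                  self.context, parent=self)

        def authorize(cap: Capability, subject: str, right: str, context: str) -> bool:
            return (cap.subject == subject and right in cap.rights
                    and cap.context == context)

        root = Capability("gateway-A", "sensor-42/readings",
                          frozenset({"read", "actuate"}), context="federation-1")
        child = root.delegate("device-B", {"read"})
        print(authorize(child, "device-B", "read", "federation-1"))     # True
        print(authorize(child, "device-B", "actuate", "federation-1"))  # False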

  9. Space Weather Models at the CCMC And Their Capabilities

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2007-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products which may be useful for space weather operators. In this presentation, we will provide an overview of the community-provided, space-weather-relevant model suite which resides at the CCMC. We will discuss current capabilities, and analyze expected future developments of space weather related modeling.

  10. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims at applying soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machine (ε-RSVM) and fuzzy weighted ε-RSVM models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross-validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
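
    A minimal sketch of the evaluation loop the abstract describes, using scikit-learn's ε-SVR with 5-fold cross-validation; the five inputs and the synthetic data below are stand-ins for the real watershed parameters:

```python
# Epsilon-SVR scored by 5-fold cross validation on synthetic stand-in data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 5))             # five hypothetical input parameters
y = X @ np.array([0.8, -0.3, 0.5, 0.1, 0.4]) + 0.1 * rng.normal(size=120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.1, C=10.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"5-fold R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```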

  11. Probabilistic method/techniques of evaluation/modeling that permits to optimize/reduce the necessary resources

    International Nuclear Information System (INIS)

    Florescu, G.; Apostol, M.; Farcasiu, M.; Luminita Bedreaga, M.; Nitoi, M.; Turcu, I.

    2004-01-01

    The fault tree/event tree modeling approach is widely used in modeling and simulating the behavior of nuclear structures, systems and components (NSSCs) during different conditions of operation. Evaluating NSSC reliability, availability, risk or safety during operation by using probabilistic techniques is also widely practiced. The development of computer capabilities has offered new possibilities for designing, processing and using large NSSC models. There are situations where large, complex and correct NSSC models must deliver rapid results/solutions/decisions, or undergo multiple processing runs, in order to obtain specific results. Large fault/event trees are hard to develop, review and process. During operation of NSSCs, time in particular is an important factor in decision making. The paper presents a probabilistic method that permits evaluation/modeling of NSSCs and intends to solve the above problems by adopting appropriate techniques. The method is stated for special applications and is based on specific PSA analysis steps, information, algorithms, criteria and relations, in correspondence with fault tree/event tree modeling and similar techniques, in order to obtain appropriate results for NSSC model analysis. A special classification of NSSCs is stated in order to reflect aspects of the method's use. The common reliability databases are also part of the information necessary to complete the analysis process. Special data and information bases contribute to establishing the proposed method/techniques. The paper also presents the specific steps of the method, its applicability, its main advantages, and problems to be studied further. The method permits optimizing/reducing the resources used to perform PSA activities. (author)
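
    The fault tree evaluation underlying such PSA models reduces, for independent basic events, to combining probabilities through AND/OR gates. A minimal sketch (the events, gate structure, and numbers are invented for illustration):

```python
# Fault-tree top-event probability under an independence assumption.
def gate_and(*probs):
    p = 1.0
    for q in probs:
        p *= q                          # all inputs must fail
    return p

def gate_or(*probs):
    p = 1.0
    for q in probs:
        p *= (1.0 - q)                  # complement of "none fails"
    return 1.0 - p

pump_fails, valve_fails, power_fails = 1e-3, 5e-4, 1e-4
# Top event: loss of cooling = (pump AND valve both fail) OR power fails.
top = gate_or(gate_and(pump_fails, valve_fails), power_fails)
print(f"Top-event probability: {top:.2e}")
```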

  12. State-of-the-art modeling capabilities for Orimulsion modeling

    International Nuclear Information System (INIS)

    Cekirge, H.M.; Palmer, S.L.; Convery, K.; Ileri, L.

    1996-01-01

    The pollution response of Orimulsion was discussed. Orimulsion is an inexpensive alternative to No. 6 fuel oil, capable of firing large industrial and electric utility boilers. It is an emulsion composed of approximately 70% bitumen (a heavy hydrocarbon) and 30% water, to which a surfactant has been added. It has a specific gravity of one or higher, so it is of particular concern in the event of a spill. The physical and chemical processes that would take place in an Orimulsion spill were studied and incorporated into the design of the model ORI SLIK, a fate and transport model for marine environments. The most critical decision in using ORI SLIK is the assignment of the number of parcels into which the initial spill volume will be divided, since underspecification would produce inaccurate results. However, no reliable method for determining this, other than a decision based on trial and error, has been found. It was concluded that while many of the complex processes of Orimulsion in marine environments are approximated in currently available models, some areas still need further study. Among these are the effect of current shear, changing particle densities, and differential settling. 24 refs., 1 tab., 5 figs

  13. University-Industry Research Collaboration: A Model to Assess University Capability

    Science.gov (United States)

    Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia

    2011-01-01

    Scholars and policy makers recognize that collaboration between industry and public research institutions is a necessity for innovation and national economic development. This work presents an econometric model which expresses a university's capability for collaboration with industry as a function of size, location and research quality. The…

  14. Experiences with the Capability Maturity Model in a research environment

    NARCIS (Netherlands)

    Velden, van der M.J.; Vreke, J.; Wal, van der B.; Symons, A.

    1996-01-01

    The project described here was aimed at evaluating the Capability Maturity Model (CMM) in the context of a research organization. Part of the evaluation was a standard CMM assessment. It was found that CMM could be applied to a research organization, although its five maturity levels were considered

  15. Climbing the ladder: capability maturity model integration level 3

    Science.gov (United States)

    Day, Bryce; Lutteroth, Christof

    2011-02-01

    This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies, and the integration of the Rational Unified Process development framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  16. Imaging capabilities of germanium gamma cameras

    International Nuclear Information System (INIS)

    Steidley, J.W.

    1977-01-01

    Quantitative methods of analysis based on the use of a computer simulation were developed and used to investigate the imaging capabilities of germanium gamma cameras. The main advantage of the computer simulation is that the inherent unknowns of clinical imaging procedures are removed from the investigation. The effects of patient-scattered radiation were incorporated using a mathematical LSF model which was empirically developed and experimentally verified. Image-modifying effects of patient motion, spatial distortions, and count rate capabilities were also included in the model. Spatial domain and frequency domain modeling techniques were developed and used in the simulation as required. The imaging capabilities of gamma cameras were assessed using low contrast lesion source distributions. The results showed that an improvement in energy resolution from 10% to 2% offers significant clinical advantages in terms of improved contrast, increased detectability, and reduced patient dose. The improvements are of greatest significance for small lesions at low contrast. The results of the computer simulation were also used to compare a design of a hypothetical germanium gamma camera with a state-of-the-art scintillation camera. The computer model performed a parametric analysis of the interrelated effects of inherent and technological limitations of gamma camera imaging. In particular, the trade-off between collimator resolution and collimator efficiency for detection of a given low contrast lesion was directly addressed. This trade-off is an inherent limitation of both gamma cameras. The image-degrading effects of patient motion, camera spatial distortions, and low count rate were shown to modify the improvements due to better energy resolution. Thus, based on this research, the continued development of germanium cameras to the point of clinical demonstration is recommended.
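
    The frequency-domain modeling mentioned above typically works with the modulation transfer function (MTF), obtainable as the normalized Fourier transform of a line spread function (LSF). A small sketch with hypothetical Gaussian LSF widths standing in for two resolution settings:

```python
# MTF as the normalized Fourier transform of a Gaussian LSF.
import numpy as np

x = np.linspace(-20, 20, 401)              # spatial axis, mm
dx = x[1] - x[0]

def mtf(fwhm_mm):
    sigma = fwhm_mm / 2.355                # FWHM -> Gaussian sigma
    lsf = np.exp(-0.5 * (x / sigma) ** 2)
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]                        # unity at zero spatial frequency

freqs = np.fft.rfftfreq(x.size, d=dx)      # cycles/mm
for fwhm in (8.0, 12.0):                   # broader LSF -> faster MTF roll-off
    val = np.interp(0.05, freqs, mtf(fwhm))
    print(f"FWHM {fwhm} mm: MTF at 0.05 cycles/mm = {val:.2f}")
```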

  17. The Creation and Use of an Analysis Capability Maturity Model (trademark) (ACMM)

    National Research Council Canada - National Science Library

    Covey, R. W; Hixon, D. J

    2005-01-01

    .... Capability Maturity Models (trademark) (CMMs) are being used in several intellectual endeavors, such as software engineering, software acquisition, and systems engineering. This Analysis CMM (ACMM...

  18. IMPROVEMENTS IN THE CAPABILITY PROFILE OF 3-D PRINTING: AN UPDATE

    Directory of Open Access Journals (Sweden)

    Dimitrov, Dimitar Marinov

    2014-08-01

    Knowledge about the capabilities of a production system is an important issue. The three-dimensional (3-D) printing (drop-on-bed) process has become a well-established Additive Manufacturing (AM) technology. Initially intended for use mainly as a concept modeller, its scope of application has expanded to include, among others, fit and functional models, pattern-making for casting and moulding processes, rapid tooling, and medical and architectural models. This growth in applications has stimulated a reciprocal improvement in available materials and the technological capabilities of 3-D printing, such as accuracy, strength and elongation, surface finish, build time, and cost. These factors are of significance to users who want to control their processes better and to designers who want to define their expectations and determine their requirements. Thus this paper aims to provide a technical update, highlighting the influence level of different factors on a system’s capabilities. This paper uses the example of the ZPrinter 310 system from the Z Corporation, applies appropriate statistical techniques, and takes into consideration the latest material and machine developments, in order to report on the current improvements of the capability profile of this important process.

  19. The Aviation System Analysis Capability Airport Capacity and Delay Models

    Science.gov (United States)

    Lee, David A.; Nelson, Caroline; Shapiro, Gerald

    1998-01-01

    The ASAC Airport Capacity Model and the ASAC Airport Delay Model support analyses of technologies addressing airport capacity. NASA's Aviation System Analysis Capability (ASAC) Airport Capacity Model estimates the capacity of an airport as a function of weather, Federal Aviation Administration (FAA) procedures, traffic characteristics, and the level of technology available. Airport capacity is presented as a Pareto frontier of arrivals per hour versus departures per hour. The ASAC Airport Delay Model allows the user to estimate the minutes of arrival delay for an airport, given its (weather dependent) capacity. Historical weather observations and demand patterns are provided by ASAC as inputs to the delay model. The ASAC economic models can translate a reduction in delay minutes into benefit dollars.

  20. Boiling water reactor modeling capabilities of MMS-02

    International Nuclear Information System (INIS)

    May, R.S.; Abdollahian, D.A.; Elias, E.; Shak, D.P.

    1987-01-01

    During the development period for the Modular Modeling System (MMS) library modules, the Boiling Water Reactor (BWR) has been the last major component to be addressed. The BWRX module includes models of the reactor core, reactor vessel, and recirculation loop. A pre-release version was made available for utility use in September 1983. Since that time a number of changes have been incorporated in BWRX to (1) improve running time for most transient events of interest, (2) extend its capability to include certain events of interest in reactor safety analysis, and (3) incorporate a variety of improvements to the module interfaces and user input formats. The purposes of this study were to briefly review the module structure and physical models, to point out the differences between the MMS-02 BWRX module and the BWRX version previously available in the TESTREV1 library, to provide guidelines for choosing among the various user options, and to present some representative results.

  1. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
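
    A toy version of the proposed scheme, a genetic algorithm searching SVR hyperparameters against cross-validated error, can be written compactly; the data, parameter ranges, and GA settings below are illustrative assumptions, not the paper's configuration:

```python
# Tiny genetic algorithm tuning SVR hyperparameters (C, gamma, epsilon).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))             # hypothetical financial/patent indicators
y = X[:, 0] - 0.5 * X[:, 3] + 0.2 * rng.normal(size=100)

def fitness(genome):                      # genome = log10 of (C, gamma, epsilon)
    c, g, e = (10.0 ** v for v in genome)
    svr = SVR(C=c, gamma=g, epsilon=e)
    return cross_val_score(svr, X, y, cv=3, scoring="neg_mean_squared_error").mean()

pop = rng.uniform(-3, 3, size=(20, 3))    # initial population
for _ in range(15):                       # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the fittest half
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.3, (10, 3))
    pop = np.vstack([parents, children])                 # elitism + mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best log10(C, gamma, epsilon):", np.round(best, 2))
```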

  2. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    More recently, the advanced synchrotron radiation-based bioanalytical technique SRFTIRM has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is millions of times brighter than sunlight), is capable of exploring biomaterials at molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques, (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multicomponent modeling methods, (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by including not just one intensity or frequency point of a molecular spectrum, but by utilizing the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify spectral component peaks of molecular structure, functional groups and biopolymers. By applying these four statistical methods, the multivariate techniques together with Gaussian and Lorentzian modeling, inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
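
    Multi-component peak modeling of the kind described amounts to fitting sums of Gaussian and Lorentzian line shapes to a spectrum. A minimal sketch with an invented two-peak absorbance band (positions, widths, and the wavenumber axis are illustrative):

```python
# Fit one Gaussian plus one Lorentzian component to a synthetic spectrum.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def lorentzian(x, a, mu, gamma):
    return a * gamma**2 / ((x - mu) ** 2 + gamma**2)

def two_peaks(x, a1, mu1, s1, a2, mu2, g2):
    return gaussian(x, a1, mu1, s1) + lorentzian(x, a2, mu2, g2)

x = np.linspace(1600, 1700, 400)          # wavenumber axis, cm^-1
clean = two_peaks(x, 1.0, 1630, 8.0, 0.6, 1665, 6.0)
y = clean + 0.01 * np.random.default_rng(3).normal(size=x.size)

p0 = [0.8, 1628, 10, 0.5, 1660, 8]        # rough initial guesses
popt, _ = curve_fit(two_peaks, x, y, p0=p0)
print("fitted (a1, mu1, s1, a2, mu2, g2):", np.round(popt, 2))
```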

  3. Towards a dynamic concept of alliance capability

    OpenAIRE

    SLUYTS, Kim; MARTENS, Rudy; MATTHYSSENS, Paul

    2008-01-01

    This paper has a threefold purpose. First, we offer a literature review on alliance capability based on the strategic and competence-based management literature. Second, we extend the existing literature on alliance capability by breaking this concept down into five sub-capabilities, each linked to a stage of the alliance life cycle. Finally, we suggest how firms can support these capabilities through structural, technological and people-related tools and techniques. We argue that current l...

  4. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

    In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Based on the huge amount of theoretical knowledge about subdivision surfaces, we present techniques to facilitate practical 3D modelling which make subdivision surfaces even more useful. Subdivision surfaces have reclaimed attention several years ago after their application in full-featured 3D animation movies, such as Toy Story. Since then and due to their attractive properties an ever i...

  5. EASEWASTE-life cycle modeling capabilities for waste management technologies

    DEFF Research Database (Denmark)

    Bhander, Gurbakhash Singh; Christensen, Thomas Højlund; Hauschild, Michael Zwicky

    2010-01-01

    Background, Aims and Scope. The management of municipal solid waste and the associated environmental impacts are the subject of growing attention in industrialized countries. The EU has recently strongly emphasized the role of LCA in its waste and resource strategies. The development of sustainable solid waste management systems applying a life-cycle perspective requires readily understandable tools for modelling the life cycle impacts of waste management systems. The aim of the paper is to demonstrate the structure, functionalities and LCA modelling capabilities of the PC-based, life cycle oriented waste management model EASEWASTE, developed at the Technical University of Denmark specifically to meet the needs of the waste system developer, with the objective of evaluating the environmental performance of the various elements of existing or proposed solid waste management systems. Materials...

  6. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina

    2012-09-01

    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  7. Application of data assimilation to improve the forecasting capability of an atmospheric dispersion model for a radioactive plume

    International Nuclear Information System (INIS)

    Jeong, H.J.; Han, M.H.; Hwang, W.T.; Kim, E.H.

    2008-01-01

    Modeling the atmospheric dispersion of a radioactive plume plays an influential role in assessing the environmental impacts caused by nuclear accidents. The performance of data assimilation techniques, which combine Gaussian model outputs with measurements to improve forecasting ability, is investigated in this study. Tracer dispersion experiments were performed to produce field data by assuming a radiological emergency. An adaptive neuro-fuzzy inference system (ANFIS) and a linear regression filter are considered to assimilate the Gaussian model outputs with measurements. ANFIS is trained so that the model outputs are likely to be more accurate for the experimental data. The linear regression filter is designed to assimilate measurements in a manner similar to ANFIS. It is confirmed that ANFIS could be an appropriate method for improving the forecasting capability of an atmospheric dispersion model in the case of a radiological emergency, judging from the higher correlation coefficients between measured and assimilated values compared with the linear regression filter. This kind of data assimilation method could support a decision-making system when deciding on the best available countermeasures for public health from among emergency preparedness alternatives.
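
    The linear-regression variant can be sketched very simply: fit a regression from past model-output/measurement pairs, then apply it to correct new model output. All concentration values below are synthetic stand-ins for the tracer data:

```python
# Regression-based assimilation of Gaussian plume predictions.
import numpy as np

rng = np.random.default_rng(4)
model_out = rng.uniform(1, 10, 50)                 # plume model predictions
measured = 0.7 * model_out + 1.5 + rng.normal(0, 0.5, 50)  # field observations

a, b = np.polyfit(model_out, measured, 1)          # fit measured ~ a*model + b
new_prediction = 6.0
assimilated = a * new_prediction + b
print(f"raw model: {new_prediction:.2f}, assimilated: {assimilated:.2f}")
```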

  8. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
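
    As a small taste of the dynamical-systems tooling the paper applies, the sketch below locates an interior equilibrium of a standard predator-prey model and classifies its stability from the Jacobian eigenvalues; the model form and parameter values are illustrative, not taken from the paper:

```python
# Equilibrium and stability of a Rosenzweig-MacArthur predator-prey model.
import numpy as np
from scipy.optimize import fsolve

r, K, a, h, e, m = 1.0, 10.0, 1.0, 0.5, 0.6, 0.4

def rhs(z):
    n, p = z
    f = a * n / (1 + a * h * n)                    # Holling type II response
    return [r * n * (1 - n / K) - f * p, e * f * p - m * p]

eq = fsolve(rhs, [2.0, 2.0])                       # interior equilibrium
J = np.zeros((2, 2))                               # forward-difference Jacobian
for j in range(2):
    dz = np.zeros(2); dz[j] = 1e-6
    J[:, j] = (np.array(rhs(eq + dz)) - np.array(rhs(eq))) / 1e-6

print("equilibrium:", np.round(eq, 3))
print("eigenvalues:", np.round(np.linalg.eigvals(J), 3))  # Re < 0 => stable
```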

  9. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  10. Evacuation emergency response model coupling atmospheric release advisory capability output

    International Nuclear Information System (INIS)

    Rosen, L.C.; Lawver, B.S.; Buckley, D.W.; Finn, S.P.; Swenson, J.B.

    1983-01-01

    A Federal Emergency Management Agency (FEMA) sponsored project to couple the models of the Lawrence Livermore National Laboratory (LLNL) Atmospheric Release Advisory Capability (ARAC) system with candidate evacuation models is discussed herein. This report describes the ARAC system, the rapid computer code developed, and its coupling with ARAC output. The computer code is adapted to the use of color graphics as a means to display and convey the dynamics of an emergency evacuation. The model is applied to a specific case of an emergency evacuation of individuals surrounding the Rancho Seco Nuclear Power Plant, located approximately 25 miles southeast of Sacramento, California. The graphics available to the model user for the Rancho Seco example are displayed and described in detail. Suggestions for potential future improvements to the emergency evacuation model are presented.

  11. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    Science.gov (United States)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damage if structural strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and the fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.

  12. Characterization and modelling techniques for gas metal arc welding of DP 600 sheet steels

    Energy Technology Data Exchange (ETDEWEB)

    Mukherjee, K.; Prahl, U.; Bleck, W. [RWTH Aachen University, Department of Ferrous Metallurgy (IEHK) (Germany); Reisgen, U.; Schleser, M.; Abdurakhmanov, A. [RWTH Aachen University, Welding and Joining Institute (ISF) (Germany)

    2010-11-15

    The objectives of the present work are to characterize the gas metal arc welding process for DP 600 sheet steel and to summarize the modelling techniques. The time-temperature evolution during the welding cycle was measured experimentally and modelled with the software tool SimWeld. To model the phase transformations during the welding cycle, dilatometer tests were done to quantify the parameters for phase field modelling with MICRESS®. The important input parameters are interface mobility, nucleation density, etc. A contribution was made to include the austenite-to-bainite transformation in MICRESS®. This is useful for predicting the microstructure in the fast cooling segments. The phase transformation model is capable of predicting the microstructure along the heating and cooling cycles of welding. Tensile tests showed evidence of failure in the heat-affected zone, which has a ferrite-tempered martensite microstructure. (orig.)

  13. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  14. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block or flask molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  15. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects in an urban area, such as buildings, trees, vegetation, and man-made features. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, researchers use terrestrial images in close-range photogrammetry with DSMs and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). This paper gives an overview of the techniques related to the generation of virtual 3D city models using geomatics techniques and of the applications of virtual 3D city models; in closing, we offer conclusions, a short justification and analysis, and present trends for 3D city modeling. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  16. Towards the Next Generation of Space Environment Prediction Capabilities.

    Science.gov (United States)

    Kuznetsova, M. M.

    2015-12-01

    Since its establishment more than 15 years ago, the Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) has served as an access point to an expanding collection of state-of-the-art space environment models and frameworks, as well as a hub for collaborative development of next-generation space weather forecasting systems. In partnership with model developers and international research and operational communities, the CCMC integrates new data streams and models from diverse sources into end-to-end space weather impacts predictive systems, identifies weak links in data-model and model-model coupling, and leads community efforts to fill those gaps. The presentation will highlight the latest developments and progress in CCMC-led community-wide projects on testing, prototyping, and validation of models, forecasting techniques and procedures, and will outline ideas on accelerating the implementation of new capabilities in space weather operations.

  17. Evaluating transfer capability of economic-driven power markets

    DEFF Research Database (Denmark)

    Xu, Zhao

    2007-01-01

    The on-going restructuring of electric power utilities poses great challenges for power system engineers to plan and operate power systems as economically and reliably as possible. This paper discusses an important issue, which has usually been neglected when quantifying active power transfer levels in the present economic-driven electricity markets. A mathematical model of a multi-objective optimization (MOOP) technique has been adopted and presented here for transfer capability studies, which can be helpful for power system planning and operation procedures. The newly-developed algorithm is being tested...
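
    One common way to realize such a MOOP formulation is weighted-sum scalarization, sweeping the weight to trace a Pareto front between competing objectives. The sketch below uses a made-up quadratic loss model in place of real power-flow constraints, so the numbers are purely illustrative:

```python
# Weighted-sum scalarization of a two-objective transfer capability problem:
# maximize transfer while penalizing (hypothetical) losses.
from scipy.optimize import minimize_scalar

def losses(t):                 # invented loss model growing with transfer t (MW)
    return 0.002 * t**2

for w in (0.2, 0.5, 0.8):      # weight on transfer vs. losses
    res = minimize_scalar(lambda t: -(w * t - (1 - w) * losses(t)),
                          bounds=(0, 400), method="bounded")
    print(f"w={w}: transfer={res.x:6.1f} MW, losses={losses(res.x):6.1f} MW")
```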

  18. Development of evaluation method for software hazard identification techniques

    International Nuclear Information System (INIS)

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-01-01

    This research evaluated the currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flow-graph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. With this proposed method, analysts can evaluate various software hazard identification combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive, due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness, complexity, and implementation cost. This evaluation method can be a platform for reaching common consensus among the stakeholders. As software hazard identification techniques evolve, the evaluation results could change. However, the insight into software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)
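
    The Markov chain modeling (with transfer rates) referenced above can be illustrated with a small continuous-time model: a generator matrix of failure and repair rates, with the state distribution propagated by the matrix exponential. States and rates are invented for illustration:

```python
# Continuous-time Markov model with OK / degraded / failed states.
import numpy as np
from scipy.linalg import expm

lam1, lam2, mu = 1e-3, 5e-3, 1e-2          # failure and repair rates (per hour)
Q = np.array([[-lam1,         lam1,   0.0],
              [   mu, -(mu + lam2),  lam2],
              [  0.0,          0.0,   0.0]])   # failed state is absorbing

p0 = np.array([1.0, 0.0, 0.0])             # start in the OK state
for t in (100.0, 1000.0):
    pt = p0 @ expm(Q * t)                  # p(t) = p0 * exp(Q t)
    print(f"t={t:6.0f} h: P(failed) = {pt[2]:.4f}")
```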

  19. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    Science.gov (United States)

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of Geographic Information Systems (GIS) with groundwater modeling and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques that partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface and hydraulic head gradient, and to estimate the groundwater budget of the aquifer. GIS is used for spatial database development, integration with remote sensing, and numerical groundwater flow modeling capabilities. The thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software is used as an additional tool to develop supportive data for numerical groundwater flow modeling, and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response in the water table due to external influential factors. The developed model provides an effective tool for evaluating management options for monitoring future groundwater development in the study area.

  20. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Few studies have investigated the influence of "information capital," through IT-enabled dynamic capability, on corporate performance, particularly in times of economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT’s role in and benefits for corporate performance.

  1. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    High Frequency Structure Simulator (HFSS), followed by the electrical characterisation of synthesised Pt NP films using the novel miniature fabricated OCP technique. The results obtained from this technique provided the inspiration to synthesise and evaluate the microwave properties of Au NPs. The findings from this technique provided the motivation to characterise both the Pt and Au NP films using the DR technique. Unlike the OCP technique, the DR method is highly sensitive, but the achievable measurement accuracy is limited since this technique does not have the broadband frequency capability of the OCP method. The results obtained from the DR technique show good agreement with the theoretical prediction. In the last phase of this research, a further validation of the aperture admittance models on different types of OCPs (i.e. RG-405 and RG-402 cables and an SMA connector) has been carried out on the developed 3D full-wave models using the HFSS software, followed by the development of universal models for the aforementioned OCPs based on the same 3D full-wave models.

  2. Center of Excellence for Applied Mathematical and Statistical Research in support of development of multicrop production monitoring capability

    Science.gov (United States)

    Woodward, W. A.; Gray, H. L.

    1983-01-01

    Efforts in support of the development of a multicrop production monitoring capability are reported. In particular, segment-level proportion estimation techniques based upon a mixture model were investigated. Efforts have dealt primarily with the evaluation of current techniques and the development of alternative ones. A comparison of techniques is provided on both simulated and LANDSAT data, along with an analysis of the quality of profile variables obtained from LANDSAT data.
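
    A compact stand-in for the segment-level proportion estimation idea: fit a two-component mixture model to unlabeled pixel values and read the class proportions off the mixing weights. The simulated values below play the role of LANDSAT-derived features:

```python
# Mixture-model proportion estimation on simulated pixel values.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
crop = rng.normal(0.65, 0.05, 700)         # hypothetical vegetation-index values
other = rng.normal(0.35, 0.07, 300)
pixels = np.concatenate([crop, other]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
# Expect mixing weights near 0.7 and 0.3 (component order is arbitrary).
print("estimated component proportions:", np.round(gm.weights_, 3))
```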

  3. A model based lean approach to capability management

    CSIR Research Space (South Africa)

    Venter, Jacobus P

    2017-09-01

    It is argued that the definition of the required operational capabilities in the short and long term is an essential element of command. Defence Capability Management can be a cumbersome, long, and very resource-intensive activity. Given the new...

  4. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

    A computer code, FAIR, has been developed for analysing the performance of water-cooled reactor fuel pins. It is capable of analysing high burnup fuels. This code has recently been used for analysing ten high burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high burnup models are described. The performance of the code FAIR in analysing high burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs.

  5. Systems Modeling to Implement Integrated System Health Management Capability

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark; Morris, Jonathan; Smith, Harvey; Schmalzel, John

    2007-01-01

    ISHM capability includes: detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, and user interfaces that enable integrated awareness (past, present, and future) by users. This is achieved by focused management of data, information and knowledge (DIaK) that will likely be distributed across networks. Management of DIaK implies storage, sharing (timely availability), maintaining, evolving, and processing. Processing of DIaK encapsulates strategies, methodologies, algorithms, etc. focused on achieving a high ISHM Functional Capability Level (FCL). A high FCL means a high degree of success in detecting anomalies, diagnosing causes, predicting future anomalies, and enabling integrated health awareness by the user. A model that enables ISHM capability, and hence DIaK management, is termed the ISHM Model of the System (IMS). We describe aspects of the IMS that focus on processing of DIaK. Strategies, methodologies, and algorithms require proper context. We describe an approach to define and use contexts, its implementation in an object-oriented software environment (G2), and validation using actual test data from a methane thruster test program at NASA SSC. Context is linked to the existence of relationships among elements of a system. For example, the context for using a leak-detection strategy is to identify closed subsystems (e.g. bounded by closed valves and by tanks) that include pressure sensors, and to check whether the pressure is changing. We call these subsystems Pressurizable Subsystems. If pressure changes are detected, then all members of the closed subsystem become leakage suspects. In this case, the context is defined by identifying a subsystem that is suitable for applying a strategy. Contexts are defined in many ways. Often, a context is defined by relationships of function (e.g. liquid flow, maintaining pressure, etc.), form (e.g. part of the same component, connected to other components, etc.), or space (e.g. physically close
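
    The pressurizable-subsystem example can be rendered as a toy object model; the class, fields, and thresholds below are hypothetical illustrations, not the NASA SSC implementation. When all boundary valves are closed, a sustained pressure drop marks every member of the enclosed subsystem as a leak suspect:

```python
# Toy "pressurizable subsystem" context for a leak-detection strategy.
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    boundary_valves_closed: bool
    members: list = field(default_factory=list)
    pressures: list = field(default_factory=list)   # recent readings, kPa

    def leak_suspects(self, drop_threshold=5.0):
        # The strategy applies only when the subsystem is pressurizable.
        if not self.boundary_valves_closed:
            return []
        if self.pressures[0] - self.pressures[-1] > drop_threshold:
            return self.members                     # all members become suspects
        return []

seg = Subsystem("LOX feed segment", True,
                members=["valve-V2", "line-L7", "tank-T1"],
                pressures=[310.0, 306.5, 301.2])
print(seg.leak_suspects())
```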

  6. Spatial Preference Modelling for equitable infrastructure provision: an application of Sen's Capability Approach

    Science.gov (United States)

    Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin

    2014-01-01

    To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, 'the spatial distribution patterns of priority targeting for allocation' (SPT) and, second, 'the effect of new distribution patterns after location-allocation' (ELA). The Moran's I index of the initial map and its comparison with six patterns for SPT as well as ELA consistently indicates that the SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSS to address spatial equity. This study thus proposes a new formal method for SDSS with specific attention on resource location-allocation to address spatial equity.
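
    For reference, the Moran's I statistic that the authors extend reads I = (n / sum of all weights) x (zᵀWz / zᵀz), where z holds the deviations of an attribute from its mean and W is a spatial weights matrix. A bare-bones computation on an invented four-zone example:

```python
# Moran's I on a toy four-zone attribute with row-standardized weights.
import numpy as np

x = np.array([2.0, 3.0, 8.0, 9.0])         # an attribute over four zones
W = np.array([[0, 1, 0, 0],                # 1 marks neighboring zones
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
W /= W.sum(axis=1, keepdims=True)          # row-standardize

z = x - x.mean()
I = (len(x) / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {I:.3f}")              # > 0 indicates spatial clustering
```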

  7. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting mechanical properties of additively manufactured parts by directed energy deposition processes with blown powder as well as other additive manufacturing processes. Critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  8. A Computational Model of the SC Multisensory Neurons: Integrative Capabilities, Maturation, and Plasticity

    Directory of Open Access Journals (Sweden)

    Cristiano Cuppini

    2011-10-01

    Different cortical and subcortical structures present neurons able to integrate stimuli of different sensory modalities. Among the others, one of the most investigated integrative regions is the Superior Colliculus (SC), a midbrain structure whose aim is to guide attentive behaviour and motor responses toward external events. Despite the large amount of experimental data in the literature, the neural mechanisms underlying the SC response are not completely understood. Moreover, recent data indicate that multisensory integration ability is the result of maturation after birth, depending on sensory experience. Mathematical models and computer simulations can be of value to investigate and clarify these phenomena. In the last few years, several models have been implemented to shed light on these mechanisms and to gain a deeper comprehension of the SC capabilities. Here, a neural network model (Cuppini et al., 2010) is extensively discussed. The model considers visual-auditory interaction, and is able to reproduce and explain the main physiological features of multisensory integration in SC neurons, and their acquisition during postnatal life. To reproduce a neonatal condition, the model assumes that during early life: (1) cortical-SC synapses are present but not active; (2) in this phase, responses are driven by non-cortical inputs with very large receptive fields (RFs) and little spatial tuning; (3) a slight spatial preference for the visual inputs is present. Sensory experience is modeled by a “training phase” in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. As a result, cortical-SC synapses are crafted during this period thanks to the Hebbian rules of potentiation and depression, RFs are reduced in size, and neurons exhibit integrative capabilities to cross-modal stimuli, such as multisensory enhancement, inverse effectiveness, and multisensory depression. The utility of the modelling
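
    The training phase can be caricatured with a one-dimensional Hebbian sketch (sizes, rates, and stimulus statistics are invented, and this is far simpler than the published model): repeated stimulation potentiates synapses inside the stimulated region and depresses those outside, shrinking the effective receptive field:

```python
# Hebbian potentiation/depression shrinking a 1-D receptive field.
import numpy as np

rng = np.random.default_rng(6)
w = np.full(40, 0.2)                        # cortical->SC synapses along space
eta, decay = 0.05, 0.01                     # potentiation and depression rates

for _ in range(500):
    stim = rng.integers(15, 25)             # stimuli recur near positions 15-25
    pre = np.exp(-0.5 * ((np.arange(40) - stim) / 2.0) ** 2)
    post = np.dot(w, pre)                   # SC neuron activation
    w += eta * post * pre - decay * post * (1 - pre)
    np.clip(w, 0.0, 1.0, out=w)             # keep weights bounded

print("max weight inside RF:", w[15:25].max().round(2),
      "outside:", w[:10].max().round(2))
```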

  9. Post Irradiation Capabilities at the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Schulthess, J.L.; Rosenberg, K.E.

    2011-01-01

    The U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) oversees the efforts to ensure nuclear energy remains a viable option for the United States. A significant portion of these efforts is related to post-irradiation examination (PIE) of highly activated fuel and materials that are subject to the extreme environment inside a nuclear reactor. As the lead national laboratory, Idaho National Laboratory (INL) has a rich history, experience, workforce and capabilities for performing PIE. However, new advances in tools and techniques for performing PIE now enable understanding the performance of fuels and materials at the nano-scale and below. Examination at this level is critical, since this is the scale at which irradiation damage occurs. INL is on course to adopt these advanced tools and techniques to develop a comprehensive nuclear fuels and materials characterization capability that is unique in the world. Because INL has extensive PIE capabilities currently in place, a strong foundation exists to build upon as new capabilities are implemented and the workload increases. In the recent past, INL has adopted significant capability to perform advanced PIE characterization. Looking forward, INL is planning for the addition of two facilities that will be built to meet the stringent demands of advanced tools and techniques for highly activated fuels and materials characterization. Dubbed the Irradiated Materials Characterization Laboratory (IMCL) and the Advanced Post Irradiation Examination Capability, these facilities are next-generation PIE laboratories designed to perform the work of PIE that cannot be performed in current DOE facilities. In addition to physical capabilities, INL has recently added two significant contributors to the Advanced Test Reactor National Scientific User Facility (ATR-NSUF): Oak Ridge National Laboratory and the University of California, Berkeley.

  10. Reasoning about Object Capabilities with Logical Relations and Effect Parametricity

    DEFF Research Database (Denmark)

    Devriese, Dominique; Piessens, Frank; Birkedal, Lars

    Using state-of-the-art techniques from programming languages research, we define a logical relation for a core calculus of JavaScript that better characterises capability-safety. The relation is powerful enough to reason about typical capability patterns and supports evolvable invariants on shared data structures, capabilities...

  11. Initiative-taking, Improvisational Capability and Business Model Innovation in Emerging Market

    DEFF Research Database (Denmark)

    Cao, Yangfeng

    Business model innovation plays a very important role in developing competitive advantage when multinational small and medium-sized enterprises (SMEs) from developed countries enter emerging markets, because of the large contextual distances or gaps between the emerging and developed economies. ... Much prior research has shown that foreign subsidiaries play an important role in shaping the overall strategy of the parent company. However, little is known about how a subsidiary specifically facilitates business model innovation (BMI) in emerging markets. Adopting the method of comparative ... innovation in emerging markets. We find that high initiative-taking and strong improvisational capability can accelerate business model innovation. Our research contributes to the literature on international and strategic entrepreneurship.

  12. Enhancement of Kalman filter single loss detection capability

    International Nuclear Information System (INIS)

    Morrison, G.W.; Downing, D.J.; Pike, D.H.

    1980-01-01

    A new technique to significantly increase the sensitivity of the Kalman filter for detecting one-time losses in nuclear material accountability and control has been developed. The technique uses the innovations sequence obtained from a Kalman filter analysis of a material balance area. The innovations are distributed as zero-mean, independent Gaussian random variables with known variance. This property enables an estimator to be formed with enhanced one-time loss detection capabilities. Simulation studies of a material balance area indicate the new estimator greatly enhances the one-time loss detection capability of the Kalman filter.
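
    The innovations-based idea can be sketched with a scalar random-walk filter (a generic illustration, not the authors' estimator; noise levels and the loss are invented): under no-loss conditions the normalized innovations are standard normal, so a one-time loss appears as a clear outlier:

```python
# Scalar Kalman filter; a one-time loss stands out in the innovations.
import numpy as np

rng = np.random.default_rng(7)
q, r = 0.01, 0.25                           # process/measurement noise variances
true = np.zeros(30); true[15:] -= 3.0       # hypothetical one-time loss at t=15
meas = true + rng.normal(0, np.sqrt(r), 30)

x, p = 0.0, 1.0
for t, z in enumerate(meas):
    p += q                                  # predict (random-walk inventory)
    s = p + r                               # innovation variance
    nu = z - x                              # innovation
    if abs(nu) / np.sqrt(s) > 3.0:          # ~N(0,1) under no-loss conditions
        print(f"possible loss at t={t}: normalized innovation {nu/np.sqrt(s):+.1f}")
    k = p / s                               # Kalman gain and update
    x += k * nu
    p *= (1 - k)
```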

  13. Assessment of cold neutron radiography capability

    International Nuclear Information System (INIS)

    McDonald, T.E. Jr.; Roberts, J.A.

    1998-01-01

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The authors' goals were to demonstrate and assess cold neutron radiography techniques at the Los Alamos Neutron Science Center (LANSCE), Manuel Lujan Neutron Scattering Center (Lujan Center), and to investigate potential applications of the capability. The authors obtained images using film and an amorphous silicon detector. In addition, a new technique they developed allows neutron radiographs to be made using only a narrow range of neutron energies. Employing this approach and the Bragg cut-off phenomenon in certain materials, they demonstrated material discrimination in radiography. They also demonstrated the imaging of cracks in a sample of a fire-set case that was supplied by Sandia National Laboratories, and they investigated whether the capability could be used to determine the extent of coking in jet engine nozzles. The LANSCE neutron radiography capability appears to have applications in the DOE stockpile maintenance and science-based stockpile stewardship (SBSS) programs, and in industry.

  15. Building a Conceptual Model of Routines, Capabilities, and Absorptive Capacity Interplay

    Directory of Open Access Journals (Sweden)

    Ivan Stefanovic

    2014-05-01

    Full Text Available Researchers have often used constructs such as routines, operational capability, dynamic capability, absorptive capacity, etc., to explain various organizational phenomena, especially the competitive advantage of firms. As a consequence of their frequent use in different contexts, these constructs have become extremely broad and blurred, leaving a void in the strategic management literature. In this paper we attempt to bring a holistic perspective to these constructs by briefly reviewing the current state of the research and presenting a conceptual model that explains the causal relationships between them. The final section of the paper sheds some light on the topic from the econophysics perspective. The authors hope that the findings in this paper may serve as a foundation for other research endeavours related to how firms achieve competitive advantage and thrive in their environments.

  16. Experimental verifications of a structural damage identification technique using reduced order finite-element model

    Science.gov (United States)

    Li, Rui; Zhou, Li; Yang, Jann N.

    2010-04-01

    An objective of a structural health monitoring system is to identify the state of the structure and to detect damage when it occurs. Analysis techniques for the damage identification of structures, based on vibration data measured from sensors, have received considerable attention. Recently, a new damage tracking technique, referred to as the adaptive quadratic sum-square error (AQSSE) technique, has been proposed, and simulation studies demonstrated that the AQSSE technique is quite effective in identifying structural damage. In this paper, the AQSSE technique, along with the reduced-order finite-element method, is proposed to identify the damage of complex structures. Experimental tests were conducted to verify the capability of the proposed damage detection approach. A series of experimental tests were performed using a scaled cantilever beam subject to white noise and sinusoidal excitations. The capability of the proposed reduced-order finite-element based AQSSE method in detecting structural damage is demonstrated by the experimental results.

  17. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Science.gov (United States)

    Gkantidis, Nikolaos; Schauseil, Michael; Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D0.05, the detected structural changes differed significantly between techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  18. The capability and constraint model of recoverability: An integrated theory of continuity planning.

    Science.gov (United States)

    Lindstedt, David

    2017-01-01

    While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.

  19. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    Science.gov (United States)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

    In this paper, a reliable cellular automata model oriented to faithfully reproducing deceleration and acceleration according to realistic driver reactions, when vehicles with different deceleration capabilities are considered, is presented. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of driver behavior, vehicle capabilities and kinetics, while preserving simplicity. In particular, vehicle kinetics is based on uniformly accelerated motion, rather than on impulsive accelerated motion as in most existing CA models. Thus, the proposed model calculates in an analytic way three safety-preserving distances to determine the best action a follower vehicle can take under a worst-case scenario. Besides, the prediction analysis guarantees that, under the proper assumptions, collisions between vehicles may not happen at any future time. Simulation results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform a smooth approach to slower or stopped vehicles. Moreover, the model is also capable of reproducing most empirical findings, including the backward speed of the downstream front of the traffic jam and the different congested traffic patterns induced by a system with open boundary conditions with an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real-time traffic simulation of large networks.
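
    A minimal sketch of the model's central mechanism, worst-case safe distances computed from per-vehicle braking capabilities, is given below. Cell sizes, speeds, and deceleration values are invented for illustration, and the rule set is heavily simplified relative to the paper.

    ```python
    import numpy as np

    def brake_dist(v, d):
        """Cells travelled while braking from speed v to 0 at rate d per step."""
        dist = 0
        while v > 0:
            v = max(v - d, 0)
            dist += v
        return dist

    L, vmax, a = 200, 5, 1                 # ring length, max speed, acceleration
    pos = np.array([0, 10, 20, 30])        # cell positions on a ring road
    vel = np.array([3, 3, 3, 3])
    dec = np.array([2, 1, 2, 1])           # heterogeneous braking capabilities

    for _ in range(100):
        order = np.argsort(pos)
        new_vel = vel.copy()
        for k in range(len(order)):
            i, j = order[k], order[(k + 1) % len(order)]   # follower, leader
            gap = (pos[j] - pos[i] - 1) % L
            # Worst case: the leader brakes at full capability from now on.
            safe = gap + brake_dist(vel[j], dec[j])
            v = min(vel[i] + a, vmax)                      # uniform acceleration
            while v > 0 and v + brake_dist(v, dec[i]) > safe:
                v = max(v - dec[i], 0)                     # smooth deceleration
            new_vel[i] = v
        vel = new_vel
        pos = (pos + vel) % L

    print("positions:", pos, "speeds:", vel)
    ```

    The inner while-loop is what lets a follower ease off gradually instead of stopping abruptly, which is the smooth-approach behavior the abstract highlights.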

  20. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained from properly distributed benchmarks having both GNSS and leveling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study evaluates learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new to the precise modeling of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, ANFIS and WNN revealed higher prediction accuracies compared to the ANN and MPRE methods. Besides their prediction capabilities, these methods were also compared and discussed from a practical point of view in the conclusions.
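
    As a rough sketch of the learning-based approach, the snippet below fits a small feed-forward network to synthetic GNSS/leveling benchmarks, approximating the geoid undulation N (ellipsoidal height minus leveled height) as a function of position. The network architecture and the synthetic surface are assumptions for illustration; the Istanbul network data are not reproduced here.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic benchmarks: positions and geoid undulations N with ~1 cm noise.
    rng = np.random.default_rng(1)
    lat = rng.uniform(40.8, 41.3, 300)
    lon = rng.uniform(28.5, 29.5, 300)
    N = (36.0 + 0.8 * (lat - 41.0) - 1.2 * (lon - 29.0)
         + 0.05 * np.sin(6.0 * lon) + rng.normal(0.0, 0.01, 300))

    X = np.column_stack([lat, lon])
    X_tr, X_te, y_tr, y_te = train_test_split(X, N, random_state=0)

    # A small ANN as the geoid surface approximator (ANFIS/WNN would slot in here).
    model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)
    resid = model.predict(X_te) - y_te
    print(f"validation RMSE: {100.0 * np.sqrt(np.mean(resid**2)):.1f} cm")
    ```

    Validation at held-out benchmarks, as in the study, is what distinguishes genuine prediction capability from mere interpolation of the training points.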

  1. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  2. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 p. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  3. Adaptive Planning: Understanding Organizational Workload to Capability/ Capacity through Modeling and Simulation

    Science.gov (United States)

    Hase, Chris

    2010-01-01

    In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with the objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months) [2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP) directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload associated with developing and staffing JSCP [4] directed contingency plans against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload against capability/capacity. Gaining an understanding of the relationship between plan complexity, the number of plans, planning processes, and the number of planners on the one hand, and the time required for plan development on the other, provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight for key decisions on where best to allocate scarce resources in an effort to meet DoD planning objectives.
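
    The kind of workload-to-capacity question described here can be prototyped in a few lines with a discrete-event queueing model in which plans arrive, compete for a fixed pool of planners, and record their completion times. The sketch below uses the simpy library rather than ExtendSim, and the arrival rate, plan effort, and staffing numbers are invented placeholders.

    ```python
    import random
    import simpy

    random.seed(0)
    PLANNERS = 5
    durations = []                       # total time per plan, incl. queueing

    def plan(env, staff, effort):
        start = env.now
        with staff.request() as req:     # wait for a free planner
            yield req
            yield env.timeout(effort)    # plan development effort (months)
        durations.append(env.now - start)

    def arrivals(env, staff):
        while True:
            yield env.timeout(random.expovariate(1 / 2.0))  # ~1 plan / 2 months
            env.process(plan(env, staff, random.uniform(3, 9)))

    env = simpy.Environment()
    staff = simpy.Resource(env, capacity=PLANNERS)
    env.process(arrivals(env, staff))
    env.run(until=120)                   # simulate ten years

    print(f"plans finished: {len(durations)}, "
          f"mean duration: {sum(durations) / len(durations):.1f} months")
    ```

    Sweeping PLANNERS or the arrival rate then exposes exactly the sensitivity of plan turnaround to staffing that the paper measures with its ExtendSim model.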

  4. Re-framing Inclusive Education Through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    Directory of Open Access Journals (Sweden)

    Maryam Dalkilic

    2016-09-01

    Full Text Available Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational inclusion with Amartya Sen's (1985) Capability Approach. This integrated framework engages children, educators, and families in principled practices that acknowledge differences, rather than deficits, and enable attention to enhancing the capabilities of children with disabilities in inclusive educational environments. Implications include the development of policy that clarifies the process required to negotiate capabilities and valued functionings and the types of resources required to permit children, educators, and families to create relationally inclusive environments.

  5. Testing an integrated model of operations capabilities An empirical study of Australian airlines

    NARCIS (Netherlands)

    Nand, Alka Ashwini; Singh, Prakash J.; Power, Damien

    2013-01-01

    Purpose - The purpose of this paper is to test the integrated model of operations strategy as proposed by Schmenner and Swink to explain whether firms trade off or accumulate capabilities, taking into account their positions relative to their asset and operating frontiers.

  6. Architectural capability analysis using a model-checking technique

    Directory of Open Access Journals (Sweden)

    Darío José Delgado-Quintero

    2017-01-01

    Full Text Available This work describes a mathematical approach, based on a model-checking technique, for analyzing capabilities in enterprise architectures built using the DoDAF and TOGAF architecture frameworks. The basis of this approach is the validation of requirements related to enterprise capabilities, using operational or business architectural artifacts associated with the dynamic behavior of processes. It is shown how this approach can be used to verify, quantitatively, whether the operational models in an enterprise architecture can satisfy the enterprise capabilities. For this purpose, a case study related to a capability-integration problem is used.

  7. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
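
    The surrogate-data idea can be reduced to a tiny numerical experiment: a known 'truth' model generates utility bills and true retrofit savings, a calibration routine is fit to the bills, and the calibrated model's savings prediction is scored against truth. The one-parameter heating model and all numbers below are invented for illustration.

    ```python
    import numpy as np

    def energy(ua, hdd):
        """Toy building model: monthly heating energy proportional to degree-days."""
        return ua * hdd

    # Truth model generates surrogate utility bills with 5% meter/weather noise.
    hdd = np.array([800, 700, 500, 300, 100, 50, 40, 60, 200, 400, 600, 750])
    UA_TRUE = 1.6
    bills = energy(UA_TRUE, hdd) * np.random.default_rng(2).normal(1.0, 0.05, 12)

    # The "calibration technique" under test: least-squares fit to the bills.
    ua_cal = np.sum(bills * hdd) / np.sum(hdd**2)

    # Retrofit scenario: insulation cuts UA by 30%.
    true_savings = 0.3 * energy(UA_TRUE, hdd).sum()
    pred_savings = 0.3 * energy(ua_cal, hdd).sum()

    print(f"closure on true parameter: {abs(ua_cal - UA_TRUE) / UA_TRUE:.1%} error")
    print(f"savings prediction error:  {abs(pred_savings - true_savings) / true_savings:.1%}")
    ```

    In this linear toy the two figures of merit coincide; with a realistic multi-parameter building model they can diverge, which is precisely why the paper scores calibration techniques on savings accuracy and parameter closure, not only on goodness of fit.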

  8. Developing maturity grids for assessing organisational capabilities

    DEFF Research Database (Denmark)

    Maier, Anja; Moultrie, James; Clarkson, P John

    2009-01-01

    Keywords: Maturity Model, Maturity Grid, Maturity Matrix, Organisational Capabilities, Benchmarking, New Product Development, Performance Assessment

  9. Dynamic capabilities and innovation: a Multiple-Case Study

    OpenAIRE

    Bravo Ibarra, Edna Rocío; Mundet Hiern, Joan; Suñé Torrents, Albert

    2009-01-01

    After a detailed survey of the scientific literature, it was found that several characteristics of dynamic capabilities were similar to those of innovation capability. Therefore, with a deeper study of the first ones, it could be possible to design a model aimed to structure innovation capability. Thus, this work presents a conceptual model, where the innovation capability is shown as result of three processes: knowledge absorption and creation capability, knowledge integration and knowledge ...

  10. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Science.gov (United States)

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra B.

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.

  11. Validation of foF2 and TEC Modeling During Geomagnetic Disturbed Times: Preliminary Outcomes of International Forum for Space Weather Modeling Capabilities Assessment

    Science.gov (United States)

    Shim, J. S.; Tsagouri, I.; Goncharenko, L. P.; Kuznetsova, M. M.

    2017-12-01

    To address the challenges of assessing space weather modeling capabilities, the CCMC (Community Coordinated Modeling Center) is leading the newly established "International Forum for Space Weather Modeling Capabilities Assessment." This presentation focuses on preliminary outcomes of the International Forum on validation of modeled foF2 and TEC during geomagnetic storms. We investigate the ionospheric response to the March 2013 geomagnetic storm event using ionosonde and GPS TEC observations in the North American and European sectors. To quantify storm impacts on foF2 and TEC, we first quantify quiet-time variations of foF2 and TEC (e.g., the median and the average of the five quietest days for the 30 days during quiet conditions). It appears that the quiet-time variations of foF2 and TEC are about 10% and 20-30%, respectively. Therefore, to quantify storm impact, we focus on foF2 and TEC changes during the storm main phase larger than 20% and 50%, respectively, compared to the 30-day median. We find that in the European sector, both the foF2 and TEC responses to the storm are mainly positive, with foF2 increases of up to 100% and TEC increases of up to 150%. In the North American sector, however, foF2 shows negative effects (up to about a 50% decrease), while TEC shows a positive response (the largest increase is about 200%). To assess the models' capability of reproducing the changes in foF2 and TEC due to the storm, we use various model simulations obtained from empirical, physics-based, and data assimilation models. The performance of each model depends on the selected metric; therefore, a single metric is not enough to evaluate the models' predictive capabilities in capturing the storm impact. The performance of the models also varies with latitude and longitude.
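
    The storm-impact metric described above reduces to a simple computation: percent change relative to the 30-day quiet-time median at the same UT hour, flagged where it exceeds the chosen threshold. The snippet below applies it to a synthetic one-station hourly TEC series; the series and the imposed storm enhancement are stand-ins, not the 2013 data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    hours = np.arange(31 * 24)                   # 30 quiet days + 1 storm day
    tec = 20.0 + 10.0 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
    tec[30 * 24:] *= 1.9                         # imposed storm enhancement

    quiet = tec[:30 * 24].reshape(30, 24)
    median_quiet = np.median(quiet, axis=0)      # 30-day median per UT hour
    storm_day = tec[30 * 24:]

    pct = 100.0 * (storm_day - median_quiet) / median_quiet
    print("storm-day hours exceeding the +50% TEC threshold:",
          int((pct > 50.0).sum()))
    ```

    The 20% (foF2) and 50% (TEC) thresholds quoted in the abstract exist exactly so that flagged changes stand clear of the 10-30% quiet-time variability.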

  12. The Aviation System Analysis Capability Air Carrier Cost-Benefit Model

    Science.gov (United States)

    Gaier, Eric M.; Edlich, Alexander; Santmire, Tara S.; Wingrove, Earl R.., III

    1999-01-01

    To meet its objective of assisting the U.S. aviation industry with the technological challenges of the future, NASA must identify research areas that have the greatest potential for improving the operation of the air transportation system. Therefore, NASA is developing the ability to evaluate the potential impact of various advanced technologies. By thoroughly understanding the economic impact of advanced aviation technologies and by evaluating how the new technologies will be used in the integrated aviation system, NASA aims to balance its aeronautical research program and help speed the introduction of high-leverage technologies. To meet these objectives, NASA is building the Aviation System Analysis Capability (ASAC). NASA envisions ASAC primarily as a process for understanding and evaluating the impact of advanced aviation technologies on the U.S. economy. ASAC consists of a diverse collection of models and databases used by analysts and other individuals from the public and private sectors brought together to work on issues of common interest to organizations in the aviation community. ASAC will also be a resource available to the aviation community to analyze, inform, and assist scientists, engineers, analysts, and program managers in their daily work. ASAC differs from previous NASA modeling efforts in that the economic behavior of buyers and sellers in the air transportation and aviation industries is central to its conception. Commercial air carriers, in particular, are an important stakeholder in this community. Therefore, to fully evaluate the implications of advanced aviation technologies, ASAC requires a flexible financial analysis tool that credibly links the technology of flight with the financial performance of commercial air carriers. By linking technical and financial information, NASA ensures that its technology programs will continue to benefit the user community. In addition, the analysis tool must be capable of being incorporated into the

  13. An Analysis of the Twenty-Nine Capabilities of the Marine Corps Expeditionary Unit (Special Operations Capable)

    National Research Council Canada - National Science Library

    Love, John

    1998-01-01

    ... (Special Operations Capable) (MEU(SOC)) to determine their relative validity. The methodology utilizes a multiple-criteria decision-making model to determine the relative validity of each MEU(SOC) capability...

  14. Guidelines for Applying the Capability Maturity Model Analysis to Connected and Automated Vehicle Deployment

    Science.gov (United States)

    2017-11-23

    The Federal Highway Administration (FHWA) has adapted the Transportation Systems Management and Operations (TSMO) Capability Maturity Model (CMM) to describe the operational maturity of Infrastructure Owner-Operator (IOO) agencies across a range of i...

  15. Evaluation of plasma arc welding capabilities and applications

    International Nuclear Information System (INIS)

    Mills, G.S.

    1978-01-01

    Unique capabilities of plasma arc welding in the keyhole mode are described, and the potential applicability of these capabilities to Rocky Flats production needs is evaluated. For the areas of potential benefit studied, the benefits of this welding technique either did not materialize, or the complications of implementing the process in production were not warranted by the demonstrated benefits.

  16. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.
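
    For readers unfamiliar with the 'classical' models referred to here, the best-known is McRuer's crossover model, which states that near the crossover frequency the combined pilot-vehicle open loop behaves like an integrator with an effective time delay. This is standard background, not a formula taken from this paper:

    ```latex
    % McRuer crossover model: pilot describing function Y_p times controlled
    % element Y_c, valid near the crossover frequency \omega_c, with
    % effective pilot time delay \tau_e.
    \[
      Y_p(j\omega)\, Y_c(j\omega) \;\approx\; \frac{\omega_c\, e^{-j\omega \tau_e}}{j\omega}
    \]
    ```

    The pilot adapts Y_p so that this product holds across different controlled elements, which is what makes such models useful for handling-qualities prediction.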

  17. HYDROïD humanoid robot head with perception and emotion capabilities :Modeling, Design and Experimental Results

    Directory of Open Access Journals (Sweden)

    Samer eAlfayad

    2016-04-01

    Full Text Available In the framework of the HYDROïD humanoid robot project, this paper describes the modeling and design of an electrically actuated head mechanism. Perception and emotion capabilities are considered in the design process. Since the HYDROïD humanoid robot is hydraulically actuated, the choice of electrical actuation for the head mechanism addressed in this paper is justified. Considering perception and emotion capabilities leads to a total of 15 degrees of freedom for the head mechanism, split across four main sub-mechanisms: the neck, the mouth, the eyes and the eyebrows. Biological data and the kinematic performance of the human head are taken as inputs to the design process. A new solution of uncoupled eyes is developed, making it possible to address the master-slave process that links the human eyes, as well as vergence capabilities. Each sub-system is modeled in order to obtain its equations of motion, frequency response and transfer function. The neck pitch rotation is given as a study example. The head mechanism's performance is then presented through a comparison between model and experimental results, validating the hardware capabilities. Finally, the head mechanism is integrated on the HYDROïD upper body. An object-tracking experiment coupled with emotional expressions is carried out to validate the synchronization of the eye rotations with the body motions.

  18. Capabilities for innovation

    DEFF Research Database (Denmark)

    Nielsen, Peter; Nielsen, Rene Nesgaard; Bamberger, Simon Grandjean

    2012-01-01

    Technological developments combined with increasing levels of competition related to the ongoing globalization imply that firms find themselves in dynamic, changing environments that call for dynamic capabilities. This challenges the internal human and organizational resources of firms in general... The data source is a survey that collected information from 601 firms belonging to the private urban sector in Denmark. The survey was carried out in late 2010. Keywords: dynamic capabilities/innovation/globalization/employee/employer cooperation/Nordic model. Acknowledgment: The GOPA study was financed by grant 20080053113.

  19. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  20. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Directory of Open Access Journals (Sweden)

    Nikolaos Gkantidis

    Full Text Available To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D0.05, the detected structural changes differed significantly between techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  1. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  2. Non-Destructive Evaluation for Corrosion Monitoring in Concrete: A Review and Capability of Acoustic Emission Technique

    Science.gov (United States)

    Zaki, Ahmad; Chai, Hwa Kian; Aggelis, Dimitrios G.; Alver, Ninel

    2015-01-01

    Corrosion of reinforced concrete (RC) structures has been one of the major causes of structural failure. Early detection of the corrosion process could help limit the location and the extent of necessary repairs or replacement, as well as reduce the cost associated with rehabilitation work. Non-destructive testing (NDT) methods have been found to be useful for in-situ evaluation of steel corrosion in RC, where the effect of steel corrosion and the integrity of the concrete structure can be assessed effectively. A complementary study of NDT methods for the investigation of corrosion is presented here. In this paper, acoustic emission (AE) is shown to effectively detect corrosion of concrete structures at an early stage. The capability of the AE technique to detect corrosion occurring in real time makes it a strong candidate for serving as an efficient NDT method, giving it an advantage over other NDT methods. PMID:26251904

  3. Non-Destructive Evaluation for Corrosion Monitoring in Concrete: A Review and Capability of Acoustic Emission Technique

    Directory of Open Access Journals (Sweden)

    Ahmad Zaki

    2015-08-01

    Full Text Available Corrosion of reinforced concrete (RC) structures has been one of the major causes of structural failure. Early detection of the corrosion process could help limit the location and the extent of necessary repairs or replacement, as well as reduce the cost associated with rehabilitation work. Non-destructive testing (NDT) methods have been found to be useful for in-situ evaluation of steel corrosion in RC, where the effect of steel corrosion and the integrity of the concrete structure can be assessed effectively. A complementary study of NDT methods for the investigation of corrosion is presented here. In this paper, acoustic emission (AE) is shown to effectively detect corrosion of concrete structures at an early stage. The capability of the AE technique to detect corrosion occurring in real time makes it a strong candidate for serving as an efficient NDT method, giving it an advantage over other NDT methods.

  4. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  5. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    Science.gov (United States)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on estimating the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantánamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. The methodology subdivided the database into 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even when the networks are treated as black-box models.
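
    A compact way to reproduce the validation logic, repeated splits showing that performance is stable, is a stratified 10-fold cross-validation of a small network, as sketched below. The conditioning factors here are random stand-ins; the paper's actual scheme also keeps a separate validation set for early stopping.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    # Synthetic stand-in: 600 mapped cells, 12 conditioning factors,
    # landslide presence driven by two of them plus noise.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(600, 12))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0.0, 1.0, 600)) > 1.0

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    scores = cross_val_score(clf, X, y, scoring="roc_auc",
                             cv=StratifiedKFold(10, shuffle=True, random_state=0))
    print(f"AUC over 10 folds: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```

    Low variance across folds is the repeatability evidence; sensitivity analysis then follows by perturbing one conditioning factor at a time and re-scoring.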

  6. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Mustafa Sacit; Flanagan, George F. [ORNL; Poore III, Willis P. [ORNL; Muhlheim, Michael David [ORNL

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster-than-real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  7. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    Science.gov (United States)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent land surface state variables (green vegetation fraction, soil moisture, and soil temperature) accurately in Numerical Weather Prediction (NWP) models. The NASA SERVIR and Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated

  8. Frameworks for Assessing the Quality of Modeling and Simulation Capabilities

    Science.gov (United States)

    Rider, W. J.

    2012-12-01

    The importance of assuring quality in modeling and simulation has spawned several frameworks for structuring the examination of quality. The format and content of these frameworks provide an emphasis, completeness and flow to assessment activities. I will examine four frameworks that have been developed and describe how they can be improved and applied to a broader set of high-consequence applications. Perhaps the first of these frameworks was known as CSAU (code scaling, applicability and uncertainty) [Boyack], used for nuclear reactor safety and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety practice and the practical structure needed after the Three Mile Island accident. It incorporated the dominant experimental program, the dominant analysis approach, and concerns about the quality of modeling. The USNRC gave it the force of law, which made the nuclear industry take it seriously. After the cessation of nuclear weapons testing, the United States began a program of examining the reliability of these weapons without testing. This program utilizes science including theory, modeling, simulation and experimentation to replace underground testing. The emphasis on modeling and simulation necessitated attention to the quality of these simulations. Sandia developed the PCMM (predictive capability maturity model) to structure this attention [Oberkampf]. PCMM divides simulation into six core activities to be examined and graded relative to the needs of the modeling activity. NASA has built yet another framework in response to the tragedy of the space shuttle accidents [NASA]. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in another approach. These frameworks are similar and applied in a similar fashion. The adoption of these frameworks at Sandia and NASA has been slow and arduous because the force of law has not assisted acceptance. All existing frameworks are

  9. X-ray streak and framing camera techniques

    International Nuclear Information System (INIS)

    Coleman, L.W.; Attwood, D.T.

    1975-01-01

    This paper reviews recent developments and applications of ultrafast diagnostic techniques for x-ray measurements. These techniques, based on applications of image converter devices, already offer significant resolution capabilities. Techniques capable of time resolution in the sub-nanosecond regime are being considered. Mechanical cameras are excluded from consideration, as are devices using phosphors or fluors as x-ray converters.

  10. Konsep Tingkat Kematangan penerapan Internet Protokol versi 6 (Capability Maturity Model for IPv6 Implementation

    Directory of Open Access Journals (Sweden)

    Riza Azmi

    2015-03-01

    Full Text Available Internet Protocol (IP) is the world's internet numbering standard, and the supply of addresses is finite. Globally, IP allocation is managed by the Internet Assigned Numbers Authority (IANA) and delegated through the regional authority of each continent. IP exists in two versions, IPv4 and IPv6, and the IPv4 allocation was declared exhausted at the IANA level in April 2011. The use of IP is therefore being directed toward IPv6. To see how mature an organization's implementation of IPv6 is, this study attempts to construct a maturity model for IPv6 adoption. The basic concept of the model draws on the Capability Maturity Model Integration (CMMI), with several additions: the IPv6 migration roadmap in Indonesia, the Requests for Comments (RFCs) related to IPv6, and several best practices from IPv6 implementations. With these concepts, this study produces a Capability Maturity Model for IPv6 Implementation.

  11. Human push capability.

    Science.gov (United States)

    Barnett, Ralph L; Liber, Theodore

    2006-02-22

    Use of unassisted human push capability arises from time to time in the areas of crowd and animal control, the security of locked doors, the integrity of railings, the removal of tree stumps and entrenched vehicles, the maneuvering of furniture, and athletic pursuits such as US football or wrestling. Depending on the scenario, human push capability involves strength, weight, weight distribution, push angle, footwear/floor friction, and the friction between the upper body and the pushed object. Simple models are used to establish the relationships among these factors.
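
    The friction side of those relationships fits in a few lines of statics: on a level floor, the horizontal push a person can sustain is capped either by strength or by the product of body weight and the shoe/floor friction coefficient, with the push angle shifting the normal force on the feet. The function below is a minimal sketch of that balance; the numbers are illustrative, not the paper's.

    ```python
    import math

    def max_push(weight_N, mu_floor, strength_N, push_angle_deg=0.0):
        """Horizontal push force available before the feet slip.

        push_angle_deg is the push line's angle above horizontal; pushing
        upward on the object presses the pusher's feet harder into the
        floor (reaction), raising the slip limit.
        """
        theta = math.radians(push_angle_deg)
        # friction limit: F*cos(theta) <= mu * (W + F*sin(theta))
        denom = math.cos(theta) - mu_floor * math.sin(theta)
        slip_limit = mu_floor * weight_N / denom if denom > 0 else float("inf")
        return min(strength_N, slip_limit)

    print(f"{max_push(800, 0.5, 600):.0f} N on dry concrete (friction-limited)")
    print(f"{max_push(800, 0.1, 600):.0f} N on a slick floor")
    ```

    With a horizontal push, the slip limit is simply mu times body weight, which is why footwear/floor friction dominates most of the scenarios listed in the abstract.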

  12. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    in the wellbore); and (3) accurate approaches to account for the effects of reservoir heterogeneity and for the optimization of nonconventional well deployment. An overview of our progress in each of these main areas is as follows. A general purpose object-oriented research simulator (GPRS) was developed under this project. The GPRS code is managed using modern software management techniques and has been deployed to many companies and research institutions. The simulator includes general black-oil and compositional modeling modules. The formulation is general in that it allows for the selection of a wide variety of primary and secondary variables and accommodates varying degrees of solution implicitness. Specifically, we developed and implemented an IMPSAT procedure (implicit in pressure and saturation, explicit in all other variables) for compositional modeling as well as an adaptive implicit procedure. Both of these capabilities allow for efficiency gains through selective implicitness. The code treats cell connections through a general connection list, which allows it to accommodate both structured and unstructured grids. The GPRS code was written to be easily extendable so new modeling techniques can be readily incorporated. Along these lines, we developed a new dual porosity module compatible with the GPRS framework, as well as a new discrete fracture model applicable for fractured or faulted reservoirs. Both of these methods display substantial advantages over previous implementations. Further, we assessed the performance of different preconditioners in an attempt to improve the efficiency of the linear solver. As a result of this investigation, substantial improvements in solver performance were achieved.

  13. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in both the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures... In the group model, we obtain a lower bound of t_u·t_q = Ω̃(lg^{d-1} n). For ball range searching, we get a lower bound of t_u·t_q = Ω̃(n^{1-1/d}). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds...

  14. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the fidelity of the physics model, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while critical parameters are tracked. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.
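
    One of the ingredients named above, plenum volumes between components, is also what removes the algebraic constraints: each volume carries a pressure state integrated from a mass balance, so a stiff ODE solver can advance the model. A stand-alone sketch of a single plenum follows (placeholder volume, temperature, and valve law, not the paper's values).

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    R, T, V = 287.0, 600.0, 0.05        # gas constant J/(kg K), temperature K, m^3

    def plenum(t, y):
        P = y[0]
        mdot_in = 2.0                                    # upstream supply, kg/s
        mdot_out = 0.004 * np.sqrt(max(P - 101e3, 0.0))  # simple exit valve law
        return [R * T / V * (mdot_in - mdot_out)]        # dP/dt from mass balance

    sol = solve_ivp(plenum, (0.0, 2.0), [150e3], method="BDF", max_step=0.01)
    print(f"settled plenum pressure: {sol.y[0, -1] / 1e3:.0f} kPa")
    ```

    Using an implicit method such as BDF here mirrors the paper's point that enabling advanced solvers, rather than iterating algebraic flow-balance loops by hand, is what cuts simulation time.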

  15. Artificial reef evaluation capabilities of Florida counties

    OpenAIRE

    Halusky, Joseph G.; Antonini, Gustavo A.; Seaman, William

    1993-01-01

    Florida's coastal county artificial reef sampling and data management programs are surveyed in this report. The survey describes the county-level capability for artificial reef documentation and performance assessment based on the counties' needs, interests, organizational structure and "in-situ" data collection and data management techniques. The primary purpose of this study is to describe what staffing, training, techniques, organizational procedures and equipment are used by the c...

  16. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. Even... (1) UML-based modelling techniques, in which the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered, and (3) non-formal modelling techniques. This study analyses the impact to companies from increased availability of product knowledge and improved control...

  17. Evaluating Internal Technological Capabilities in Energy Companies

    Directory of Open Access Journals (Sweden)

    Mingook Lee

    2016-03-01

    Full Text Available As global competition increases, technological capability must be evaluated objectively as one of the most important factors for predominance in technological competition and to ensure sustainable business excellence. Most existing capability evaluation models utilize either quantitative methods, such as patent analysis, or qualitative methods, such as expert panels. Accordingly, they may be in danger of reflecting only fragmentary aspects of technological capabilities, and produce inconsistent results when different models are used. To solve these problems, this paper proposes a comprehensive framework for evaluating technological capabilities in energy companies by considering the complex properties of technological knowledge. For this purpose, we first explored various factors affecting technological capabilities and divided the factors into three categories: individual, organizational, and technology competitiveness. Second, we identified appropriate evaluation items for each category to measure the technological capability. Finally, by using a hybrid approach of qualitative and quantitative methods, we developed an evaluation method for each item and suggested a method to combine the results. The proposed framework was then verified with an energy generation and supply company to investigate its practicality. As one of the earliest attempts to evaluate multi-faceted technological capabilities, the suggested model can support technology and strategic planning.

  18. Functional capabilities of the breadboard model of SIDRA satellite-borne instrument

    International Nuclear Information System (INIS)

    Dudnik, O.V.; Kurbatov, E.V.; Titov, K.G.; Prieto, M.; Sanchez, S.; Sylwester, J.; Gburek, S.; Podgorski, P.

    2013-01-01

    This paper presents the structure, principles of operation and functional capabilities of the breadboard model of the SIDRA compact satellite-borne instrument. SIDRA is intended for monitoring fluxes of high-energy charged particles under outer-space conditions. We present the reasons for developing the particle spectrometer and list the main objectives to be achieved with the help of this instrument. The paper describes the major specifications of the analog and digital signal processing units of the breadboard model. A specially designed and developed data processing module based on the Actel ProAsic3E A3PE3000 FPGA is presented and compared with the all-in-one digital signal processing board based on the Xilinx Spartan 3 XC3S1500 FPGA.

  19. The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology: Capabilities and Applications

    Science.gov (United States)

    Evers, Ken H.; Bachert, Robert F.

    1987-01-01

    The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology has been formulated and applied over a five-year period. It has proven to be a unique, integrated approach utilizing a top-down, structured technique to define and document the system of interest; a knowledge engineering technique to collect and organize system descriptive information; a rapid prototyping technique to perform preliminary system performance analysis; and a sophisticated simulation technique to perform in-depth system performance analysis.

  20. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation and uncertainty quantification, which are described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification and validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application's intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner in which to perform such an assessment. Ideally, all stakeholders should be represented and contribute to an accurate credibility assessment. PIRTs and PCMMs are both described briefly below, and the resulting assessments for an example project are given.

  1. Resource-Based Capability on Development Knowledge Management Capabilities of Coastal Community

    Science.gov (United States)

    Teniwut, Roberto M. K.; Hasyim, Cawalinya L.; Teniwut, Wellem A.

    2017-10-01

    Building sustainable knowledge management capabilities in a coastal area can face a whole new set of challenges, since many intangible factors are involved, from openness to new knowledge and the access and ability to use the latest technology, to the various forms of local wisdom still in place. The aim of this study was to identify and analyze the resource-based condition of the coastal community in this area, to obtain an empirical picture of the tangible and intangible infrastructure for developing the knowledge management capability of the coastal community in Southeast Maluku, Indonesia. We used qualitative and quantitative analysis, collecting data through in-depth interviews and questionnaires, with multiple linear regression as our analysis method. The results provide information on the current state of the resource-based capability of the coastal community in Southeast Maluku for building a sustainability model of knowledge management capabilities, especially for the utilization of marine and fisheries resources. The implication of this study is that it provides empirical information for government, NGOs and research institutions to guide how they conduct their policies and programs for developing the coastal community region.

  2. Establishing an infrared measurement and modelling capability

    CSIR Research Space (South Africa)

    Willers, CJ

    2011-04-01

    Full Text Available The protection of own aircraft assets against infrared missile threats requires a deep understanding of the vulnerability of these assets with regard to specific threats and specific environments of operation. A key capability in the protection...

  3. Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.

    Science.gov (United States)

    Heo, Go Eun; Kang, Keun Young; Song, Min; Lee, Jeong-Hoon

    2017-05-31

    Bioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as a convergent domain, researchers have used bibliometrics, augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying the topic structure of subject areas. However, as opposed to revealing the topic structure in relation to metadata such as authors, publication date, and journals, LDA only displays the simple topic structure. In this paper, we adopt Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model is capable of incorporating the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bag-of-words. For analysis, we used PubMed to collect forty-six bioinformatics journals from the MEDLINE database. We conducted a time-series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics, analyzing the ACT model results in each period. Additionally, for further integrated analysis, we conducted a time-series analysis among the top-ranked keyphrases, journals, and authors according to their frequency. We also examined the patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics have become more clearly represented. The results of our analysis imply that over time the field of bioinformatics has become more interdisciplinary, with a steady increase in peripheral fields such as conceptual, mathematical, and systems biology. These results are
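
    As a rough illustration of the baseline that the ACT model extends, the sketch below fits a plain LDA topic model with scikit-learn and prints the top terms per topic. The toy corpus and topic count are hypothetical, and the ACT model's metadata-aware machinery (authors, journals, keyphrases) is not shown.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy corpus standing in for MEDLINE abstracts (contents hypothetical).
    docs = [
        "gene expression microarray clustering analysis",
        "protein structure prediction machine learning",
        "sequence alignment genome assembly algorithm",
        "regulatory network pathway systems biology model",
    ]
    vec = CountVectorizer()
    X = vec.fit_transform(docs)

    # Plain LDA: document-topic and topic-word distributions, no metadata.
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = topic.argsort()[::-1][:4]          # four highest-weight terms
        print(f"topic {k}:", [terms[i] for i in top])
    ```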

  4. The Global Modeling Test Bed - Building a New National Capability for Advancing Operational Global Modeling in the United States.

    Science.gov (United States)

    Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.

    2017-12-01

    NOAA develops, operates, and maintains an operational global modeling capability for weather, sub-seasonal and seasonal prediction for the protection of life and property and fostering the US economy. In order to substantially improve the overall performance and accelerate advancements of the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to this advancement, primarily through the development of the physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open evidence-based governance process for managing the development and evolution of the CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-for-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and the test platform. Additionally, it will present an overview of new opportunities for physics developers to engage in the process, from implementing code for CCPP/IPD compliance to testing their developments within an operational-like software environment. Insight will also be given into how development gets elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.

  5. Development of evaluation method for software safety analysis techniques

    International Nuclear Information System (INIS)

    Huang, H.; Tu, W.; Shih, C.; Chen, C.; Yang, W.; Yih, S.; Kuo, C.; Chen, M.

    2006-01-01

    Full text: Following the massive adoption of digital Instrumentation and Control (I and C) systems for nuclear power plants (NPPs), various Software Safety Analysis (SSA) techniques are used to evaluate NPP safety when adopting an appropriate digital I and C system, and thereby to reduce risk to an acceptable level. However, each technique has its specific advantages and disadvantages. If two or more techniques can be complementarily incorporated, the SSA combination becomes more acceptable. As a result, if proper evaluation criteria are available, the analyst can choose an appropriate technique combination to perform the analysis on the basis of available resources. This research evaluated the software safety analysis techniques in current use, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis, and then determined indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. These indexes may help decision makers and software safety analysts choose the best SSA combination and arrange their own software safety plans. With the proposed method, analysts can evaluate various SSA combinations for specific purposes. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (without transfer rate) combination is not competitive, owing to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness and complexity

  6. A Stochastic Model for the Landing Dispersion of Hazard Detection and Avoidance Capable Flight Systems

    Science.gov (United States)

    Witte, L.

    2014-06-01

    To support landing site assessments for HDA-capable flight systems, and to facilitate trade studies of potential HDA architectures versus the yielded probability of safe landing, a stochastic landing dispersion model has been developed.

  7. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
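
    As a concrete illustration of one building block named in this review, the sketch below solves the one-dimensional time-independent Schrödinger equation by finite differences for a single quantum well. It assumes a single-band, constant-effective-mass, hard-wall model with illustrative GaAs-like parameters; the coupled Schrödinger-Poisson and carrier-transport machinery the review surveys is not shown.

    ```python
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    # Finite-difference Hamiltonian H = -hbar^2/(2m) d^2/dx^2 + V(x) on a
    # uniform grid with hard-wall boundaries (all values illustrative).
    hbar = 1.054571817e-34                 # J*s
    m_eff = 0.067 * 9.1093837015e-31       # GaAs-like effective mass, kg
    eV = 1.602176634e-19                   # J per eV
    L, N = 30e-9, 600                      # window size (m), grid points
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]

    # A single 10 nm well with 0.3 eV barriers, centered in the window.
    V = np.where(np.abs(x - L / 2) < 5e-9, 0.0, 0.3 * eV)

    t = hbar**2 / (2.0 * m_eff * dx**2)    # kinetic hopping energy
    E, _ = eigh_tridiagonal(2.0 * t + V, -t * np.ones(N - 1),
                            select="i", select_range=(0, 2))
    print("lowest subband energies (eV):", (E / eV).round(4))
    ```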

  9. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike the fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for the different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing currents are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone-bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  10. Creating Synthetic Coronal Observational Data From MHD Models: The Forward Technique

    Science.gov (United States)

    Rachmeler, Laurel A.; Gibson, Sarah E.; Dove, James; Kucera, Therese Ann

    2010-01-01

    We present a generalized forward code for creating simulated coronal observables off the limb from numerical and analytical MHD models. This generalized forward model is capable of creating emission maps in various wavelengths for instruments such as SXT, EIT, EIS, and coronagraphs, as well as spectropolarimetric images and line profiles. The inputs to our code can be analytic models (of which four come with the code) or 2.5D and 3D numerical datacubes. We present some examples of the observable data created with our code as well as its functional capabilities. This code is currently available for beta-testing (contact authors), with the ultimate goal of release as a SolarSoft package.

  11. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao

    2016-01-01

    How may externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? This study proposes and tests a mediation model of how firms’ internal technological diversification and R&D, as two distinctive microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms’ technological innovation performance, based on the resource-based view and the dynamic capability view. Using a sample of listed Chinese licensee firms, we find that firms must broadly explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently to transform and reconfigure firms’ technological resources.

  12. Towards developing product applications of thick origami using the offset panel technique

    Directory of Open Access Journals (Sweden)

    M. R. Morgan

    2016-03-01

    Full Text Available Several methods have been developed to accommodate the use of thick materials in origami models while preserving either the model's full range of motion or its kinematics. The offset panel technique (OPT) preserves both the range of motion and the kinematics while allowing a great deal of design flexibility. This work explores new possibilities for origami-based product applications presented by the OPT. Examples are included to illustrate fundamental capabilities that can be realized with thick materials, such as the accommodation of various materials in a design and the manipulation of panel geometry for increased stiffness and strength. These capabilities demonstrate the potential of techniques such as the OPT to further inspire origami-based solutions to engineering problems.

  13. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange rate forecasting is, and has long been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Several smoothing techniques are fitted and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters technique and the Additive Holt-Winters technique, as well as the Autoregressive Integrated Moving Average (ARIMA) model.
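
    As a rough sketch of the model families compared in the paper, the code below fits simple exponential smoothing, double (Holt) exponential smoothing, and an ARIMA model with statsmodels and prints five-step-ahead forecasts. The synthetic random walk stands in for the paper's daily exchange-rate data, and the ARIMA order is an arbitrary choice.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import SimpleExpSmoothing, Holt
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic stand-in for a daily exchange-rate series.
    rng = np.random.default_rng(0)
    rate = pd.Series(4.2 + np.cumsum(rng.normal(0.0, 0.01, 500)))

    models = {
        "SES": SimpleExpSmoothing(rate).fit(),    # simple smoothing
        "Holt": Holt(rate).fit(),                 # double (trend) smoothing
        "ARIMA": ARIMA(rate, order=(1, 1, 1)).fit(),
    }
    for name, m in models.items():
        print(name, np.asarray(m.forecast(5)).round(4))
    ```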

  14. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    Science.gov (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national data sets, models and uncertainty analysis in a cloud computing environment to explore and benchmark current predictive capability for national-scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogenous or quasi-homogenous functional behaviour in terms of process controls on N and P cycling
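
    For readers unfamiliar with export coefficient models, the underlying calculation is simple: the predicted nutrient flux is the sum, over nutrient sources, of an export coefficient multiplied by the extent of that source. The sketch below shows this general form only; the coefficients and areas are hypothetical, not the AERC model's calibrated, geoclimatically stratified values.

    ```python
    # Generic export-coefficient calculation: load = sum(coef * area).
    # Coefficients (kg N per ha per year) and areas (ha) are hypothetical.
    coefficients = {"arable": 14.0, "grassland": 6.0, "woodland": 1.5}
    areas = {"arable": 1200.0, "grassland": 800.0, "woodland": 400.0}

    load = sum(coefficients[lu] * areas[lu] for lu in coefficients)
    print(f"predicted catchment N flux: {load:.0f} kg/yr")
    ```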

  15. Disability, Capability, and Special Education: Towards a Capability-Based Theory

    Science.gov (United States)

    Reindal, Solveig Magnus

    2009-01-01

    The main objective of the article was to investigate the claim that the capability approach fares better with an understanding of disability as presented by the World Health Organization's "International Classification of Functioning, Disability and Health" (ICF) than by the social model, which has been promoted within disability studies. Scholars…

  16. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification
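
    As a generic illustration of parameter selection by sensitivity screening (not the dissertation's specific verification framework), the sketch below perturbs each parameter of a toy target-cell-limited viral-dynamics model by 1% and ranks the parameters by the induced change in the simulated viral load; the model form and all nominal values are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # Toy viral dynamics: target cells T, infected cells I, free virus V.
    names = ["lam", "d", "beta", "delta", "p", "c"]
    theta0 = np.array([1e4, 0.01, 8e-7, 0.7, 100.0, 13.0])

    def viral_load(theta, t=np.linspace(0.0, 30.0, 61)):
        lam, d, beta, delta, p, c = theta
        def rhs(y, t):
            T, I, V = y
            return [lam - d * T - beta * T * V,
                    beta * T * V - delta * I,
                    p * I - c * V]
        return odeint(rhs, [1e6, 0.0, 1e-3], t)[:, 2]

    base = viral_load(theta0)
    for i, name in enumerate(names):
        th = theta0.copy()
        th[i] *= 1.01                      # +1% perturbation
        s = np.linalg.norm(viral_load(th) - base) / np.linalg.norm(base)
        print(f"{name:6s} relative sensitivity {s:.3e}")
    ```

    Parameters whose perturbation barely changes the output are candidates for removal before calibration, which is the intuition behind parameter selection.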

  17. CASL Dakota Capabilities Summary

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Simmons, Chris [Univ. of Texas, Austin, TX (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-10

    The Dakota software project serves the mission of Sandia National Laboratories and supports a worldwide user community by delivering state-of-the-art research and robust, usable software for optimization and uncertainty quantification. These capabilities enable advanced exploration and risk-informed prediction with a wide range of computational science and engineering models. Dakota is the verification and validation (V&V) / uncertainty quantification (UQ) software delivery vehicle for CASL, allowing analysts across focus areas to apply these capabilities to myriad nuclear engineering analyses.

  18. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, assumes an understanding of graduate level multivariate statistics, including an introduction to SEM.

  19. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    Science.gov (United States)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  20. Moessbauer backscatter spectrometer with full data processing capability

    International Nuclear Information System (INIS)

    1976-01-01

    The design and operation of a Moessbauer backscatter spectrometer with full data processing capability is described, and the investigation of the applicability of this technique to a variety of practical metallurgical problems is discussed

  1. Development of new techniques and enhancement of automatic capability of neutron activation analysis at the Dalat Research Reactor

    International Nuclear Information System (INIS)

    Ho Manh Dung; Ho Van Doanh; Tran Quang Thien; Pham Ngoc Tuan; Pham Ngoc Son; Tran Quoc Duong; Nguyen Van Cuong; Nguyen Minh Tuan; Nguyen Giang; Nguyen Thi Sy

    2017-01-01

    The techniques of neutron activation analysis (NAA), including cyclic, epithermal and prompt-gamma NAA (CNAA, ENAA and PGNAA, respectively), have been developed at the Dalat research reactor (DRR). In addition, effort has been spent on improving the automation of irradiation, measurement and data processing for NAA. The renewal of the necessary devices/tools for sample preparation has also been carried out. As a result, the performance and utility of NAA at the DRR, in terms of the sensitivity, accuracy and stability of the analytical results, have been significantly improved. The main results of the project are: 1) upgrading of the fast irradiation system on Channel 13-2/TC to allow cyclic irradiations; 2) development of CNAA; 3) development of ENAA; 4) application of the k0-method for PGNAA; 5) investigation of the automatic sample changer (ASC2); 6) upgrading of the Ko-DALAT software for ENAA and modification of the k0-IAEA software for CNAA and PGNAA; and 7) optimization of the irradiation and measurement facilities as well as the sample preparation devices/tools. A set of procedures for the techniques developed in the project was established. The procedures have been evaluated by analysis of reference materials, for which they meet the requirements of multi-element analysis for the intended applications. (author)

  2. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA's national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material's structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. Not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument can be made for the need for Exascale computing architectures if a legitimate predictive capability is to be developed.

  3. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and that the model can easily be applied in both manufacturing and service industries.
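
    A minimal sketch of this kind of selection model, using SciPy's mixed-integer solver: binary variables pick techniques so as to maximize a linear productivity gain under a budget constraint. The six candidate techniques, gains, costs, and budget are hypothetical stand-ins for the paper's fifty-four techniques and four-stage cycle.

    ```python
    import numpy as np
    from scipy.optimize import milp, Bounds, LinearConstraint

    gain = np.array([4.0, 7.0, 3.0, 9.0, 5.0, 6.0])   # productivity gains
    cost = np.array([2.0, 5.0, 1.0, 7.0, 3.0, 4.0])   # implementation costs

    res = milp(
        c=-gain,                                  # milp minimizes, so negate
        constraints=LinearConstraint(cost[np.newaxis, :], -np.inf, 10.0),
        integrality=np.ones_like(gain),           # integer variables...
        bounds=Bounds(0, 1),                      # ...restricted to {0, 1}
    )
    print("selected:", np.round(res.x).astype(int), "total gain:", -res.fun)
    ```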

  4. Expand the Modeling Capabilities of DOE's EnergyPlus Building Energy Simulation Program

    Energy Technology Data Exchange (ETDEWEB)

    Don Shirey

    2008-02-28

    EnergyPlus{trademark} is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making for zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. Version 1.0 of EnergyPlus was released in April 2001, followed by semiannual updated versions over the ensuing seven-year period. This report summarizes work performed by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC) to expand the modeling capabilities of EnergyPlus. The project tasks involved implementing, testing, and documenting the following new features or enhancements of existing features: (1) a model for packaged terminal heat pumps; (2) a model for gas engine-driven heat pumps with waste heat recovery; (3) proper modeling of window screens; (4) integrating and streamlining EnergyPlus air flow modeling capabilities; (5) comfort-based controls for cooling and heating systems; and (6) an improved model for microturbine power generation with heat recovery. UCF/FSEC located existing mathematical models or generated new models for these features and incorporated them into EnergyPlus. The existing or new models were (re)written using the Fortran 90/95 programming language and were integrated within EnergyPlus in accordance with the EnergyPlus Programming Standard and Module Developer's Guide. Each model/feature was thoroughly tested and identified errors were repaired. Upon completion of each model implementation, the existing EnergyPlus documentation (e.g., Input Output Reference and Engineering Document) was updated with information describing the new or enhanced feature. Reference data sets were generated for several of the features to aid program users in selecting proper

  5. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  6. Evaluating Pillar Industry's Transformation Capability: A Case Study of Two Chinese Steel-Based Cities.

    Science.gov (United States)

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline, and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using text mining and the Document Explorer technique to extract text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China's steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities' abilities to carry out industrial transformation are evaluated, with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns.
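
    As an illustration of the AHP weighting step mentioned above, the sketch below derives priority weights from a pairwise-comparison matrix via its principal eigenvector and reports a consistency ratio. The 3x3 matrix is hypothetical; the paper's 53-indicator hierarchy is not reproduced.

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons among three criteria (Saaty scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                            # normalized priority weights

    # Consistency index/ratio; random index RI = 0.58 for n = 3.
    CI = (vals.real[k] - len(A)) / (len(A) - 1)
    print("weights:", w.round(3), "CR:", round(CI / 0.58, 3))
    ```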

  7. National Research Council Dialogue to Assess Progress on NASA's Advanced Modeling, Simulation and Analysis Capability and Systems Engineering Capability Roadmap Development

    Science.gov (United States)

    Aikins, Jan

    2005-01-01

    Contents include the following: General Background and Introduction of Capability Roadmaps. Agency Objective. Strategic Planning Transformation. Advanced Planning Organizational Roles. Public Involvement in Strategic Planning. Strategic Roadmaps and Schedule. Capability Roadmaps and Schedule. Purpose of NRC Review. Capability Roadmap Development (Progress to Date).

  8. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  9. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for calibrating numerical models of structures in civil, mechanical, automotive, marine, aerospace and related engineering applications. The basic concept behind the technique is to update a numerical model so that it closely matches experimental data obtained from a real or prototype test structure. The present work involves the development of a numerical model, using MATLAB as the computational tool, from the mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In the updating process, a response parameter of the structure has to be chosen that correlates the numerical model with the experimental results. The variables for the updating can be material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be established between the experimental and numerical models.
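
    The sketch below shows the core firefly-algorithm loop applied to a toy updating problem: candidate parameter vectors ("fireflies") move toward brighter (lower-discrepancy) neighbours with an attractiveness that decays with distance, plus a small random step. The two-parameter objective is a hypothetical stand-in for a finite-element response residual.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def discrepancy(theta, measured=np.array([2.5, 7.1])):
        # Hypothetical model response compared against "measured" values.
        predicted = np.array([theta[0] + theta[1], theta[0] * theta[1]])
        return np.sum((predicted - measured) ** 2)

    n, dim, beta0, gamma, alpha = 20, 2, 1.0, 1.0, 0.05
    pos = rng.uniform(0.0, 5.0, (n, dim))           # initial fireflies
    cost = np.array([discrepancy(p) for p in pos])

    for _ in range(100):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:               # firefly j is "brighter"
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness
                    pos[i] += beta * (pos[j] - pos[i]) \
                              + alpha * rng.normal(size=dim)
                    cost[i] = discrepancy(pos[i])

    best = pos[np.argmin(cost)]
    print("updated parameters:", best.round(3), "residual:", cost.min())
    ```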

  10. VLBI FOR GRAVITY PROBE B. IV. A NEW ASTROMETRIC ANALYSIS TECHNIQUE AND A COMPARISON WITH RESULTS FROM OTHER TECHNIQUES

    International Nuclear Information System (INIS)

    Lebach, D. E.; Ratner, M. I.; Shapiro, I. I.; Bartel, N.; Bietenholz, M. F.; Lederman, J. I.; Ransom, R. R.; Campbell, R. M.; Gordon, D.; Lestrade, J.-F.

    2012-01-01

    When very long baseline interferometry (VLBI) observations are used to determine the position or motion of a radio source relative to reference sources nearby on the sky, the astrometric information is usually obtained via (1) phase-referenced maps or (2) parametric model fits to measured fringe phases or multiband delays. In this paper, we describe a 'merged' analysis technique which combines some of the most important advantages of these other two approaches. In particular, our merged technique combines the superior model-correction capabilities of parametric model fits with the ability of phase-referenced maps to yield astrometric measurements of sources that are too weak to be used in parametric model fits. We compare the results from this merged technique with the results from phase-referenced maps and from parametric model fits in the analysis of astrometric VLBI observations of the radio-bright star IM Pegasi (HR 8703) and the radio source B2252+172 nearby on the sky. In these studies we use central-core components of radio sources 3C 454.3 and B2250+194 as our positional references. We obtain astrometric results for IM Peg with our merged technique even when the source is too weak to be used in parametric model fits, and we find that our merged technique yields astrometric results superior to the phase-referenced mapping technique. We used our merged technique to estimate the proper motion and other astrometric parameters of IM Peg in support of the NASA/Stanford Gravity Probe B mission.

  11. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information-embedding characteristics and separation boundaries produced by a specific SL technique with those of logistic regression (LR) modeling, representing a parametric approach. The SL technique comprised a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk-factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
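
    The paper's contrast can be reproduced in a few lines: on data where the outcome depends on an interaction between exposures, logistic regression yields odds ratios near one and near-chance accuracy, while a kernel method captures the nonlinear boundary. Here an RBF support vector machine stands in for the paper's kernel-plus-perceptron approach, and the simulated data are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 2))                 # two exposure variables
    y = (X[:, 0] * X[:, 1] > 0).astype(int)       # pure interaction effect

    lr = LogisticRegression().fit(X, y)
    print("odds ratios:", np.exp(lr.coef_).round(2))   # near 1.0
    print("LR accuracy:", lr.score(X, y).round(2))     # near chance

    svm = SVC(kernel="rbf").fit(X, y)
    print("kernel SVM accuracy:", svm.score(X, y).round(2))
    ```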

  12. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hamlin, Nathaniel D., E-mail: nh322@cornell.edu [438 Rhodes Hall, Cornell University, Ithaca, NY, 14853 (United States); Seyler, Charles E., E-mail: ces7@cornell.edu [Cornell University, Ithaca, NY, 14853 (United States)

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  13. NSF's Perspective on Space Weather Research for Building Forecasting Capabilities

    Science.gov (United States)

    Bisi, M. M.; Pulkkinen, A. A.; Bisi, M. M.; Pulkkinen, A. A.; Webb, D. F.; Oughton, E. J.; Azeem, S. I.

    2017-12-01

    Space weather research at the National Science Foundation (NSF) is focused on scientific discovery and on deepening knowledge of the Sun-Geospace system. Maturation of this knowledge base is a prerequisite for the development of improved space weather forecast models and for the accurate assessment of potential mitigation strategies. Progress in space weather forecasting requires advancing in-depth understanding of the underlying physical processes, developing better instrumentation and measurement techniques, and capturing the advancements in understanding in large-scale physics-based models that span the entire chain of events from the Sun to the Earth. This presentation will provide an overview of current and planned programs pertaining to space weather research at NSF and discuss the recommendations of the Geospace Section portfolio review panel within the context of space weather forecasting capabilities.

  14. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of the models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: it compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
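
    As a small illustration of the simplest member of this family, the sketch below performs static (Guyan) condensation of a hypothetical 4-DOF spring chain: slave degrees of freedom are eliminated through the stiffness coupling alone, yielding reduced-order stiffness and mass matrices. The matrices and the master/slave partition are illustrative.

    ```python
    import numpy as np

    K = np.array([[ 2., -1.,  0.,  0.],      # hypothetical stiffness matrix
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  1.]])
    M = np.eye(4)                             # hypothetical mass matrix
    masters, slaves = [0, 3], [1, 2]

    # Transformation x = T @ x_m, with slave DOFs condensed statically.
    T = np.zeros((4, len(masters)))
    T[masters] = np.eye(len(masters))
    T[slaves] = -np.linalg.solve(K[np.ix_(slaves, slaves)],
                                 K[np.ix_(slaves, masters)])

    K_red, M_red = T.T @ K @ T, T.T @ M @ T   # reduced-order matrices
    print(np.round(K_red, 3))
    ```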

  15. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
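
    A simplified sketch of the idea, using a random sign-flip surrogate in place of the paper's zero-mean Gaussian realizations: fit a GLM, accumulate the residuals ordered by a covariate, and compare the observed supremum with those of the resampled curves. The data and model are simulated.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    x = rng.uniform(0.0, 1.0, 300)
    y = (rng.uniform(size=300) < 1 / (1 + np.exp(-4 * (x - 0.5)))).astype(float)

    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
    r = y - fit.fittedvalues
    order = np.argsort(x)
    W_obs = np.cumsum(r[order]) / np.sqrt(len(x))   # cumulative residual curve

    # Null curves: same residual magnitudes, random signs.
    W_null = [np.cumsum(r[order] * rng.choice([-1.0, 1.0], len(x)))
              / np.sqrt(len(x)) for _ in range(1000)]
    p = np.mean([np.abs(w).max() >= np.abs(W_obs).max() for w in W_null])
    print("supremum-test p-value:", p)
    ```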

  16. Proceedings of the OECD/CSNI specialist meeting on advanced instrumentation and measurement techniques

    Energy Technology Data Exchange (ETDEWEB)

    Lehner, J. [comp.]

    1998-09-01

    In the last few years, tremendous advances in the local instrumentation technology for two-phase flow have been accomplished by the application of new sensor techniques, optical or beam methods, and electronic technology. The detailed measurements gave new insight into the true nature of the local mechanisms of interfacial transfer between phases, interfacial structure, and two-phase flow turbulent transfers. These new developments indicate that more accurate and reliable two-phase flow models can be obtained if focused experiments are designed and performed utilizing this advanced instrumentation. The purpose of this Specialist Meeting on Advanced Instrumentation and Measurement Techniques was to review the recent instrumentation developments and the relation between thermal-hydraulic codes and instrumentation capabilities. Four specific objectives were identified for this meeting: bring together international experts on instrumentation, experiments, and modeling; review recent developments in multiphase flow instrumentation; discuss the relation between modeling needs and instrumentation capabilities; and discuss future directions for instrumentation development, modeling, and experiments.

  18. Structural Capability of an Organization toward Innovation Capability

    DEFF Research Database (Denmark)

    Nielsen, Susanne Balslev; Momeni, Mostafa

    2016-01-01

    The scholars in the field of strategic management have developed two major approaches for the attainment of competitive advantage: an approach based on environmental opportunities, and another based on the internal capabilities of an organization. Some investigations in the last two decades have indicated that the advantages relying on the internal capabilities of organizations may determine the competitive position of organizations better than environmental opportunities do. Characteristics of firms show that one of the internal capabilities that leads organizations to the strongest competitive advantage is the innovation capability. The innovation capability is associated with other organizational capabilities, and many organizations have focused on the need to identify innovation capabilities. This research focuses on recognition of the structural aspect

  19. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  20. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Emergency Telecommunications

    Directory of Open Access Journals (Sweden)

    Juan D. Deaton

    2008-09-01

    Full Text Available Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms (HAPs) could be fitted with telecommunications equipment and used to support these critical communications missions once a catastrophic event occurs. With the ability to remain continuously on station, HAPs are an excellent option for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes a potential emergency communications architecture and presents a method for estimating emergency communications traffic patterns for a catastrophic event.

  1. Best Practices for Evaluating the Capability of Nondestructive Evaluation (NDE) and Structural Health Monitoring (SHM) Techniques for Damage Characterization (Post-Print)

    Science.gov (United States)

    2016-02-10

    mitigate life-cycle risk of an airframe under the framework of ASIP, a rigorous capability study following the spirit of MIL-HDBK-1823A for POD...model, including uncertainty. Random events such as sensor failure/disbond (b1), sensor bond degradation (b2), sensor replacement (b3), and local

  2. Developing A/E Capabilities

    International Nuclear Information System (INIS)

    Gonzalez, A.; Gurbindo, J.

    1987-01-01

    During the last few years, the methods used by EMPRESARIOS AGRUPADOS and INITEC to perform Architect-Engineering work for nuclear projects in Spain have undergone a process of significant change in project management and engineering approaches. Specific practical examples of management techniques and design practices with a good record of results will be discussed. They are identified as areas of special interest in developing A/E capabilities for nuclear projects. Command of these areas should produce major payoffs in local participation and contribute to achieving real nuclear engineering capabilities in the country. (author)

  3. IMPACT OF CO-CREATION ON INNOVATION CAPABILITY AND FIRM PERFORMANCE: A STRUCTURAL EQUATION MODELING

    Directory of Open Access Journals (Sweden)

    FATEMEH HAMIDI

    Full Text Available Traditional firms used to design products, evaluate marketing messages and control product distribution channels with no customer interface. With the advancements in interaction technologies, however, users can easily make an impact on firms; the interaction between customers and firms is now at its peak in comparison with the past and is no longer controlled by firms. Customers play the two roles of value creator and consumer simultaneously. We examine the impact of co-creation on innovation capability and firm performance. We develop hypotheses and test them using survey data. The results suggest that the implementation of co-creation partially mediates the effect of process innovation capability. We discuss the implications of these findings for research and practice on the design and implementation of a unique value co-creation model.

  4. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a type of malware that is more powerful and potentially more dangerous than any other. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
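
    As a hedged illustration of the compartmental half of that proposal, the sketch below integrates a minimal MSEIR system with SciPy. The compartment interpretation (protected, susceptible, latent/compromised, active, remediated hosts) and all parameter values are assumptions made for illustration, and the jump-diffusion coupling described in the paper is omitted.

        # Minimal MSEIR sketch for botnet spread; parameters are illustrative
        # assumptions, not values from the paper.
        import numpy as np
        from scipy.integrate import solve_ivp

        def mseir(t, y, beta, delta, eps, gamma):
            M, S, E, I, R = y                    # protected, susceptible, latent, active, remediated
            N = M + S + E + I + R
            dM = -delta * M                      # protection (e.g., patching) wears off
            dS = delta * M - beta * S * I / N    # contact-driven compromise
            dE = beta * S * I / N - eps * E      # compromised hosts awaiting activation
            dI = eps * E - gamma * I             # active bots, cleaned up at rate gamma
            dR = gamma * I
            return [dM, dS, dE, dI, dR]

        y0 = [2.0e4, 7.8e4, 0.0, 10.0, 0.0]      # 100k hosts, 10 initial bots
        sol = solve_ivp(mseir, (0.0, 120.0), y0,
                        args=(0.35, 0.01, 0.5, 0.05), max_step=1.0)
        print(f"peak active bots: {sol.y[3].max():.0f}")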

  5. Programming with models: modularity and abstraction provide powerful capabilities for systems biology.

    Science.gov (United States)

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2009-03-06

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enables libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.
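
    The models-as-programs idea can be sketched in miniature: a reaction module is an ordinary function that can be instantiated with different species and composed into larger models. The module names, rate representation, and cascade example below are hypothetical, not the authors' actual infrastructure.

        # Illustrative sketch of modularity and abstraction: reusable reaction
        # modules built and composed as plain Python functions.
        def phosphorylation(kinase, substrate, k_cat):
            """Instantiate a single-step module as reaction data."""
            return [(f"{kinase} + {substrate}", f"{kinase} + {substrate}_P", k_cat)]

        def cascade(kinases, substrates, k):
            """Abstraction: build a whole cascade by reusing the module."""
            reactions = []
            for kin, sub in zip(kinases, substrates):
                reactions += phosphorylation(kin, sub, k)
            return reactions

        model = cascade(["RAF", "MEK_P"], ["MEK", "ERK"], k=0.1)
        for lhs, rhs, rate in model:
            print(f"{lhs} -> {rhs}  (k={rate})")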

  6. Determining plutonium mass in spent fuel with non-destructive assay techniques - NGSI research overview and update on 6 NDA techniques

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, Stephen J [Los Alamos National Laboratory]; Conlin, Jeremy L [Los Alamos National Laboratory]; Evans, Louise G [Los Alamos National Laboratory]; Hu, Jianwei [Los Alamos National Laboratory]; Blanc, Pauline C [Los Alamos National Laboratory]; Lafleur, Adrienne M [Los Alamos National Laboratory]; Menlove, Howard O [Los Alamos National Laboratory]; Schear, Melissa A [Los Alamos National Laboratory]; Swinhoe, Martyn T [Los Alamos National Laboratory]; Croft, Stephen [Los Alamos National Laboratory]; Fensin, Michael L [Los Alamos National Laboratory]; Freeman, Corey R [Los Alamos National Laboratory]; Koehler, William E [Los Alamos National Laboratory]; Mozin, V [Los Alamos National Laboratory]; Sandoval, N P [Los Alamos National Laboratory]; Lee, T H [KAERI]; Cambell, L W [PNNL]; Cheatham, J R [ORNL]; Gesh, C J [PNNL]; Hunt, A [IDAHO STATE UNIV]; Ludewigt, B A [LBNL]; Smith, L E [PNNL]; Sterbentz, J [INL]

    2010-09-15

    This poster is one of two complementary posters. The Next Generation Safeguards Initiative (NGSI) of the U.S. DOE has initiated a multi-lab/university collaboration to quantify the plutonium (Pu) mass in, and detect the diversion of pins from, spent nuclear fuel assemblies with non-destructive assay (NDA). This research effort has the goal of quantifying the capability of 14 NDA techniques as well as training a future generation of safeguards practitioners. By November of 2010, we will be 1.5 years into the first phase (2.5 years) of work. This first phase involves primarily Monte Carlo modelling, while the second phase (also 2.5 years) will focus on experimental work. The goal of phase one is to quantify the detection capability of the various techniques for the benefit of safeguards technology developers, regulators, and policy makers, as well as to determine which integrated techniques merit experimental work. We are considering a wide range of possible technologies since our research horizon is longer term than the focus of most regulatory bodies. The capability of all of the NDA techniques will be determined for a library of 64 17 x 17 PWR assemblies [burnups (15, 30, 45, 60 GWd/tU), initial enrichments (2, 3, 4, 5%) and cooling times (1, 5, 20, 80 years)]. The burnup and cooling time were simulated with each fuel pin being comprised of four radial regions. In this paper an overview of the purpose will be given as well as a technical update on the following 6 neutron techniques: 252Cf Interrogation with Prompt Neutron Detection, Delayed Neutrons, Differential Die-Away, Differential Die-Away Self-Interrogation, Passive Neutron Albedo Reactivity, and Self-Interrogation Neutron Resonance Densitometry. The technical update will quantify the anticipated performance of each technique for the 64 assemblies of the spent fuel library.

  7. Determining plutonium mass in spent fuel with non-destructive assay techniques - NGSI research overview and update on 6 NDA techniques

    International Nuclear Information System (INIS)

    Tobin, Stephen J.; Conlin, Jeremy L.; Evans, Louise G.; Hu, Jianwei; Blanc, P.C.; Lafleur, A.M.; Menlove, H.O.; Schear, M.A.; Swinhoe, M.T.; Croft, S.; Fensin, M.L.; Freeman, C.R.; Koehler, W.E.; Mozin, V.; Sandoval, N.P.; Lee, T.H.; Cambell, L.W.; Cheatham, J.R.; Gesh, C.J.; Hunt, A.; Ludewigt, B.A.; Smith, L.E.; Sterbentz, J.

    2010-01-01

    This poster is one of two complementary posters. The Next Generation Safeguards Initiative (NGSI) of the U.S. DOE has initiated a multi-lab/university collaboration to quantify the plutonium (Pu) mass in, and detect the diversion of pins from, spent nuclear fuel assemblies with non-destructive assay (NDA). This research effort has the goal of quantifying the capability of 14 NDA techniques as well as training a future generation of safeguards practitioners. By November of 2010, we will be 1.5 years into the first phase (2.5 years) of work. This first phase involves primarily Monte Carlo modelling, while the second phase (also 2.5 years) will focus on experimental work. The goal of phase one is to quantify the detection capability of the various techniques for the benefit of safeguards technology developers, regulators, and policy makers, as well as to determine which integrated techniques merit experimental work. We are considering a wide range of possible technologies since our research horizon is longer term than the focus of most regulatory bodies. The capability of all of the NDA techniques will be determined for a library of 64 17 x 17 PWR assemblies (burnups (15, 30, 45, 60 GWd/tU), initial enrichments (2, 3, 4, 5%) and cooling times (1, 5, 20, 80 years)). The burnup and cooling time were simulated with each fuel pin being comprised of four radial regions. In this paper an overview of the purpose will be given as well as a technical update on the following 6 neutron techniques: 252Cf Interrogation with Prompt Neutron Detection, Delayed Neutrons, Differential Die-Away, Differential Die-Away Self-Interrogation, Passive Neutron Albedo Reactivity, and Self-Interrogation Neutron Resonance Densitometry. The technical update will quantify the anticipated performance of each technique for the 64 assemblies of the spent fuel library.

  8. Evaluating Pillar Industry’s Transformation Capability: A Case Study of Two Chinese Steel-Based Cities

    Science.gov (United States)

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline, and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using text mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China's steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities' abilities to carry out industrial transformation are evaluated, with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns. PMID:26422266
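
    As a sketch of the AHP step named above, the snippet below derives indicator weights from a pairwise comparison matrix via the principal eigenvector and checks judgment consistency. The 3x3 judgment matrix is invented for illustration and is not from the study, which used 53 indicators.

        # AHP weighting sketch: principal eigenvector of a pairwise matrix.
        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])   # illustrative importance judgments

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                      # normalized indicator weights

        CI = (vals.real[k] - len(A)) / (len(A) - 1)
        CR = CI / 0.58                    # random index RI = 0.58 for n = 3
        print("weights:", np.round(w, 3), " consistency ratio:", round(CR, 3))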

  9. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Emergency Telecommunications

    Directory of Open Access Journals (Sweden)

    Deaton, Juan D.

    2008-01-01

    Full Text Available Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms (HAPs) could be fitted with telecommunications equipment and used to support these critical communications missions once the catastrophic event occurs. With the ability to be continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes a potential emergency communications architecture and presents a method for estimating emergency communications traffic patterns for a catastrophic event.

  10. Modeling rainfall-runoff process using soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    The rainfall-runoff process was modeled for a small catchment in Turkey, using four years (1987-1991) of measured rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP), which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit for the models was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and the traditional Multiple Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling the rainfall-runoff process and is a viable alternative to the other applied artificial intelligence and MLR time-series methods.
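
    A minimal sketch of the reported goodness-of-fit statistics (RMSE, MAE, CE and R2) is given below, assuming placeholder observed and predicted runoff series; the arrays are not the study's data.

        # Computing RMSE, MAE, CE (Nash-Sutcliffe) and R2 for a model run.
        import numpy as np

        obs = np.array([12.0, 18.5, 25.1, 40.2, 33.3, 21.7])   # l/s, illustrative
        pred = np.array([11.2, 19.9, 23.8, 42.5, 30.9, 22.4])

        rmse = np.sqrt(np.mean((obs - pred) ** 2))
        mae = np.mean(np.abs(obs - pred))
        ce = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
        r2 = np.corrcoef(obs, pred)[0, 1] ** 2
        print(f"RMSE={rmse:.2f} l/s  MAE={mae:.2f} l/s  CE={ce:.3f}  R2={r2:.3f}")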

  11. Advanced Simulation Capability for Environmental Management: Development and Demonstrations - 12532

    Energy Technology Data Exchange (ETDEWEB)

    Freshley, Mark D.; Freedman, Vicky; Gorton, Ian [Pacific Northwest National Laboratory, MSIN K9-33, P.O. Box 999, Richland, WA 99352 (United States); Hubbard, Susan S. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, MS 50B-4230, Berkeley, CA 94720 (United States); Moulton, J. David; Dixon, Paul [Los Alamos National Laboratory, MS B284, P.O. Box 1663, Los Alamos, NM 87544 (United States)

    2012-07-01

    The U.S. Department of Energy Office of Environmental Management (EM), Technology Innovation and Development is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of capabilities, which are organized into Platform and Integrated Tool-sets and a High-Performance Computing Multi-process Simulator. The Platform capabilities target a level of functionality to allow end-to-end model development, starting with definition of the conceptual model and management of data for model input. The High-Performance Computing capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The new capabilities are demonstrated through working groups, including one focused on the Hanford Site Deep Vadose Zone. The ASCEM program focused on planning during the first year and executing a prototype tool-set for an early demonstration of individual components. Subsequently, ASCEM has focused on developing and demonstrating an integrated set of capabilities, making progress toward a version of the capabilities that can be used to engage end users. Demonstration of capabilities continues to be implemented through working groups. Three different working groups, one focused on EM problems in the deep vadose zone, another investigating attenuation mechanisms for metals and radionuclides, and a third focusing on waste tank performance assessment, continue to make progress. The project

  12. Demonstrating the capability and reliability of NDT inspections

    International Nuclear Information System (INIS)

    Wooldridge, A.B.

    1996-01-01

    This paper discusses some recent developments in demonstrating the capability of ultrasonics, eddy currents and radiography both theoretically and in practice, and indicates where further evidence is desirable. Magnox Electric has been involved with development of theoretical models for all three of these inspection methods. Feedback from experience on plant is also important to avoid overlooking any practical limitations of the inspections, and to ensure that the metallurgical characteristics of potential defects have been properly taken into account when designing and qualifying the inspections. For critical applications, inspection techniques are often supported by a Technical Justification which draws on all the relevant theoretical and experimental evidence, as well as experience of inspections on plant. The role of technical justifications is discussed in the context of inspection qualification. (author)

  13. Qualitative techniques for managing operational risk

    OpenAIRE

    Delfiner, Miguel; Pailhé, Cristina

    2009-01-01

    Qualitative techniques are essential tools for identifying and assessing operational risk (OR). Their relevance in assessing OR can be understood due to the lack of a quantitative static model capable of capturing the dynamic operational risk profile which is shaped by managerial decisions. An operational risk profile obtained solely from historical loss data could further change due to corrective actions implemented by the bank after the occurrence of those events. This document introduces s...

  14. Assessment of In-Situ Natural Dendroremediation Capability of ...

    African Journals Online (AJOL)

    Assessment of In-Situ Natural Dendroremediation Capability of Rhizophora racemosa in a Heavy Metal Polluted Mangrove Forest, Rivers State, Nigeria. ... Many of these noxious substances have been noted to be removable from polluted environment through proper application of phytoremediation techniques, particularly ...

  15. Developing A/E capabilities; areas of special interest

    International Nuclear Information System (INIS)

    Gonzalez, A.; Gurbindo, J.

    1988-01-01

    During the last few years, the methods used by Empresarios Agrupados and INITEC to perform Architect-Engineering work in Spain for nuclear projects have undergone a process of significant change in project management and engineering approaches. Specific practical examples of management techniques and design practices which represent a good record of results will be discussed. They are identified as areas of special interest in developing A/E capabilities for nuclear projects. Command of these areas should produce major payoffs in local participation and contribute to achieving real nuclear engineering capabilities in the country.

  16. Power Capability Investigation Based on Electrothermal Models of Press-pack IGBT Three-Level NPC and ANPC VSCs for Multimegawatt Wind Turbines

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Helle, Lars; Munk-Nielsen, Stig

    2012-01-01

    to provide reactive power support as an ancillary service. For multimegawatt full-scale wind turbines, power capability depends on converter topology and semiconductor switch technology. As power capability limiting factors, switch current, semiconductor junction temperature, and converter output voltage are addressed in this study for the three-level neutral-point-clamped voltage source converter (3L-NPC-VSC) and 3L Active NPC VSC (3L-ANPC-VSC) with press-pack insulated gate bipolar transistors employed as a grid-side converter. In order to investigate these VSCs' power capabilities under various operating conditions with respect to these limiting factors, a power capability generation algorithm based on the converter electrothermal model is developed. Built considering the VSCs' operation principles and physical structure, the model is validated by a 2 MV·A single-phase 3L-ANPC-VSC test setup. The power...

  17. Proposing a Qualitative Approach for Corporate Competitive Capability Modeling in High-Tech Business (Case study: Software Industry

    Directory of Open Access Journals (Sweden)

    Mahmoud Saremi Saremi

    2010-09-01

    Full Text Available The evolution of global business trends for ICT-based products in recent decades shows the intensive activity of pioneering developing countries to gain a powerful competitive position in the global software industry. In this research, with regard to the importance of the competition issue for top managers of Iranian software companies, a conceptual model has been developed for the Corporate Competitive Capability concept. First, after describing the research problem, we present a comparative review of recent theories of the firm and competition that have been applied by different researchers in the high-tech and knowledge-intensive organization field. Afterwards, with a detailed review of the literature and previous research papers, an initial research framework and the applied research method are proposed. The main and final section of the paper describes the results of the research at the different steps of the qualitative modeling process. The modeling results of this paper are the agreed concepts related to corporate competitive capability, the elicited and analyzed expert cause maps, the elicited collective causal maps, and the final model proposed for the software industry.

  18. Mechanism of supply chain coordination based on the dynamic capability framework - the mediating role of manufacturing capabilities

    Directory of Open Access Journals (Sweden)

    Tiantian Gao

    2014-10-01

    Full Text Available Purpose: A critical issue has been absent from the conversation on supply chain coordination: how supply chain coordination influences enterprise performance. This research proposes a new vision for researching the performance mechanism of supply chain coordination capability as a dynamic capability, with manufacturing capabilities acting as a mediating factor. Design/methodology/approach: Data from the 2009 International Manufacturing Strategy Survey are used to verify the mediating model by hierarchical regression analysis. Findings: The results show that supply chain coordination impacts enterprise performance positively, and impacts it indirectly through quality, cost, and flexibility. Research implications: This study presents an overview of the impact of supply chain coordination and manufacturing capabilities on enterprise performance, providing a basis for further research on the relationships that exist between them. Originality/value: This finding integrates insights from previous research in the dynamic capability framework and supply chain management into a generalization and extension of the performance mechanism in manufacturing enterprises.

  19. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and the theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. In addition to giving a visual impression of the fit of the model, the purpose is to see whether the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread...

  20. Implementing Firm Dynamic Capabilities Through the Concept Design Process

    DEFF Research Database (Denmark)

    Nedergaard, Nicky; Jones, Richard

    2011-01-01

    It is well understood that firms operating in highly dynamic and fluid markets need to possess strong dynamic capabilities of sensing (market trajectories), seizing (to capitalise on these trajectories), and transformation (in order to implement sustainable strategies). Less understood is how firms actually implement these capabilities. A conceptual model is presented showing how managing concept design processes can help firms systematically develop dynamic capabilities and help bridge the gap between the market-oriented and resource-focused strategic perspectives. By placing this model in a design-driven innovation perspective, three theoretical propositions are derived, explicating both the paper's implementation approach to dynamic capabilities as well as new ways of understanding these capabilities. Concluding remarks discuss both the paper's contribution to the strategic marketing literature...

  1. Computational modelling of the HyperVapotron cooling technique

    Energy Technology Data Exchange (ETDEWEB)

    Milnes, Joseph, E-mail: Joe.Milnes@ccfe.ac.uk [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Burns, Alan [School of Process Material and Environmental Engineering, CFD Centre, University of Leeds, Leeds, LS2 9JT (United Kingdom); ANSYS UK, Milton Park, Oxfordshire (United Kingdom); Drikakis, Dimitris [Department of Engineering Physics, Cranfield University, Cranfield, MK43 0AL (United Kingdom)

    2012-09-15

    Highlights: ► The heat transfer mechanisms within a HyperVapotron are examined. ► A multiphase CFD model is developed. ► Modelling choices for turbulence and wall boiling are evaluated. ► Considerable improvements in accuracy are found compared to standard boiling models. ► The model should enable significant virtual prototyping to be performed. - Abstract: Efficient heat transfer technologies are essential for magnetically confined fusion reactors; this applies to both the current generation of experimental reactors as well as future power plants. A number of high heat flux devices have therefore been developed specifically for this application. One of the most promising candidates is the HyperVapotron, a water cooled device which relies on internal fins and boiling heat transfer to maximise the heat transfer capability. Over the past 30 years, numerous variations of the HyperVapotron have been built and tested at fusion research centres around the globe, resulting in devices that can now sustain heat fluxes in the region of 20-30 MW/m² in steady state. Until recently, there had been few attempts to model or understand the internal heat transfer mechanisms responsible for this exceptional performance, with the result that design improvements have traditionally been sought experimentally, which is both inefficient and costly. This paper presents the successful attempt to develop an engineering model of the HyperVapotron device using customisation of commercial Computational Fluid Dynamics software. To establish the most appropriate modelling choices, in-depth studies were performed examining the turbulence models (within the Reynolds Averaged Navier Stokes framework), near wall methods, grid resolution and boiling submodels. Comparing the CFD solutions with HyperVapotron experimental data suggests that a RANS-based, multiphase...

  2. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models of other vertebrates. I successfully learned ZBrush and 3D scanning, and printed the 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for this project, but may be of use for later projects. I was also able to 3D print using two different techniques.

  3. New Modelling Capabilities in Commercial Software for High-Gain Antennas

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Lumholt, Michael; Meincke, Peter

    2012-01-01

    characterization of the reflectarray element, an initial phase-only synthesis, followed by a full optimization procedure taking into account the near-field from the feed and the finite extent of the array. Another interesting new modelling capability is made available through the DIATOOL software, which is a new type of EM software tool aimed at extending the ways engineers can use antenna measurements in the antenna design process. The tool allows reconstruction of currents and near fields on a 3D surface conformal to the antenna, by using the measured antenna field as input. The currents on the antenna surface can provide valuable information about the antenna performance, or undesired contributions, e.g. currents on a cable, can be artificially removed. Finally, the CHAMP software will be extended to cover reflector shaping and more complex materials, which, combined with a much faster execution speed...

  4. New measurement capabilities of mass spectrometry in the nuclear fuel cycle

    International Nuclear Information System (INIS)

    Perrin, R.E.

    1979-01-01

    Three recent developments, when combined, have the potential for greatly improving accountability measurements in the nuclear fuel cycle. The techniques are particularly valuable when measuring the contents of vessels which are difficult to calibrate by weight or volume. Input dissolver accountability measurements, in particular, benefit from the application of these techniques. Los Alamos Scientific Laboratory has developed the capability for isotopic analysis of U and Pu samples at the nanogram level with an accuracy of 0.1 relative %. The Central Bureau for Nuclear Materials Measurement in Geel, Belgium has developed the capability of preparing mixed, solid metal U and Pu spikes with an accuracy of better than 0.1 relative %. Idaho Nuclear Energy Laboratory and C.K. Mathews at the Bhabha Atomic Research Centre have demonstrated a technique for determining the ratio of sample size to total solution measured which is independent of both the weight and the volume of the solution being measured. The advantages and limitations of these techniques are discussed. An analytical scheme which takes advantage of the special features of these techniques is proposed. 4 refs

  5. Systems Engineering for Space Exploration Medical Capabilities

    Science.gov (United States)

    Mindock, Jennifer; Reilly, Jeffrey; Rubin, David; Urbina, Michelle; Hailey, Melinda; Hanson, Andrea; Burba, Tyler; McGuire, Kerry; Cerro, Jeffrey; Middour, Chris

    2017-01-01

    Human exploration missions that reach destinations beyond low Earth orbit, such as Mars, will present significant new challenges to crew health management. For the medical system, lack of consumable resupply, evacuation opportunities, and real-time ground support are key drivers toward greater autonomy. Recognition of the limited mission and vehicle resources available to carry out exploration missions motivates the Exploration Medical Capability (ExMC) Element's approach to enabling the necessary autonomy. The Element's work must integrate with the overall exploration mission and vehicle design efforts to successfully provide exploration medical capabilities. ExMC is applying systems engineering principles and practices to accomplish its goals. This paper discusses the structured and integrative approach that is guiding the medical system technical development. Assumptions for the required levels of care on exploration missions, medical system goals, and a Concept of Operations are early products that capture and clarify stakeholder expectations. Model-Based Systems Engineering techniques are then applied to define medical system behavior and architecture. Interfaces to other flight and ground systems, and within the medical system are identified and defined. Initial requirements and traceability are established, which sets the stage for identification of future technology development needs. An early approach for verification and validation, taking advantage of terrestrial and near-Earth exploration system analogs, is also defined to further guide system planning and development.

  6. RELAP5 nuclear plant analyzer capabilities

    International Nuclear Information System (INIS)

    Wagner, R.J.; Ransom, V.H.

    1982-01-01

    An interactive execution capability has been developed for the RELAP5 code which permits it to be used as a Nuclear Plant Analyzer. This capability has been demonstrated using a simplified primary- and secondary-loop model of a PWR. A variety of loss-of-feedwater accidents have been simulated using this model. The computer execution time on a CDC Cyber 176 is one half of the transient simulation time, so the results can be displayed in real time. The results of the demonstration problems are displayed in digital form on a color schematic of the plant model using a Tektronix 4027 CRT terminal. The interactive feature allows the user to enter commands in much the same manner as a reactor operator

  7. Modeling ionospheric foF 2 response during geomagnetic storms using neural network and linear regression techniques

    Science.gov (United States)

    Tshisaphungo, Mpho; Habarulema, John Bosco; McKinnell, Lee-Anne

    2018-06-01

    In this paper, the modeling of ionospheric foF2 changes during geomagnetic storms by means of neural network (NN) and linear regression (LR) techniques is presented. The results will lead to a valuable tool to model the complex ionospheric changes during disturbed days in an operational space weather monitoring and forecasting environment. Storm-time foF2 data during 1996-2014 from the Grahamstown (33.3°S, 26.5°E), South Africa ionosonde station were used in the modeling. Six storms were reserved to validate the models and hence not used in the modeling process. We found that the performance of both NN and LR models is comparable during selected storms which fell within the data period (1996-2014) used in modeling. However, when validated on storm periods beyond 1996-2014, the NN model gives a better performance (R = 0.62) compared to the LR model (R = 0.56) for a storm that reached a minimum Dst index of -155 nT during 19-23 December 2015. We also found that both NN and LR models are capable of capturing the ionospheric foF2 responses during two great geomagnetic storms (28 October-1 November 2003 and 6-12 November 2004), which have been demonstrated to be difficult storms to model in previous studies.
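
    A hedged sketch of this kind of NN-versus-LR comparison is shown below using scikit-learn; the single driver feature (a pseudo Dst index) and the target series are synthetic stand-ins for the storm-time data, and the actual models certainly used richer inputs.

        # Comparing a small neural network against linear regression on a
        # synthetic nonlinear storm-response relationship.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-150, 0, size=(500, 1))      # pseudo Dst index, nT
        y = 6 + 0.01 * X[:, 0] + 0.5 * np.sin(X[:, 0] / 30) + rng.normal(0, 0.2, 500)

        lr = LinearRegression().fit(X[:400], y[:400])
        nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                          random_state=0).fit(X[:400], y[:400])

        for name, m in [("LR", lr), ("NN", nn)]:
            r = np.corrcoef(m.predict(X[400:]), y[400:])[0, 1]
            print(f"{name}: R = {r:.2f}")   # the NN usually captures the nonlinearity better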

  8. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  9. Multidisciplinary Optimization of Tilt Rotor Blades Using Comprehensive Composite Modeling Technique

    Science.gov (United States)

    Chattopadhyay, Aditi; McCarthy, Thomas R.; Rajadas, John N.

    1997-01-01

    An optimization procedure is developed for addressing the design of composite tilt rotor blades. A comprehensive technique, based on a higher-order laminate theory, is developed for the analysis of the thick composite load-carrying sections, modeled as box beams, in the blade. The theory, which is based on a refined displacement field, is a three-dimensional model which approximates the elasticity solution so that the beam cross-sectional properties are not reduced to one-dimensional beam parameters. Both inplane and out-of-plane warping are included automatically in the formulation. The model can accurately capture the transverse shear stresses through the thickness of each wall while satisfying stress free boundary conditions on the inner and outer surfaces of the beam. The aerodynamic loads on the blade are calculated using the classical blade element momentum theory. Analytical expressions for the lift and drag are obtained based on the blade planform with corrections for the high lift capability of rotor blades. The aerodynamic analysis is coupled with the structural model to formulate the complete coupled equations of motion for aeroelastic analyses. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt rotor aircraft. The objective functions include the figure of merit in hover and the high speed cruise propulsive efficiency. Structural, aerodynamic and aeroelastic stability criteria are imposed as constraints on the problem. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective function problem. The search direction is determined by the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt rotor blade.
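
    The Kreisselmeier-Steinhauser aggregation mentioned above can be sketched compactly: it forms a smooth, conservative envelope over several objective and constraint components, which keeps the composite function differentiable for gradient-based search such as BFGS. The component values below are invented for illustration.

        # Kreisselmeier-Steinhauser (KS) aggregation: a smooth max operator.
        import numpy as np

        def ks(g, rho=50.0):
            """KS(g) -> max(g) as rho -> infinity, but stays differentiable."""
            g = np.asarray(g, dtype=float)
            gmax = g.max()                # shift for numerical stability
            return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

        # e.g., fold two negated objectives and one constraint margin into one value
        components = [-0.78, -0.82, 0.05]
        print(round(ks(components), 4))   # single scalar for the optimizer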

  10. Archival Research Capabilities of the WFIRST Data Set

    Science.gov (United States)

    Szalay, Alexander

    apparent when exploring a database of 10^8 galaxies with multiband photometry and grism spectroscopy. The case studies will require (i) the creation of a unified WFIRST object catalog consisting of data cross-matched to external catalogs, (ii) an easy-to-access, scalable database, utilizing the latest data discovery and querying techniques, (iii) in situ analyses of large and/or complex data, (iv) identification of links to supporting data and enabling queries spanning WFIRST and other databases, (v) combining simulations with modeling software. To accomplish these objectives, we will prototype a system capable of executing complex user-defined scripts including database access to a shared computational facility with tools for joining WFIRST to other surveys, also enabling comparisons to physical models. Our organizational plan divides the work into several general areas where our team members have specific expertise: (a) apply the 20 queries methodology to derive performance and functionality requirements, (b) develop a practical interactive server-side query system, built on our SDSS experience, (c) apply advanced cross-matching techniques, (d) create mock WFIRST imaging and grism data, (e) develop high level cross correlation tools, (f) optimize scripting systems using high-level languages (IPython), (g) perform close integration of cosmological simulations with observational data, (h) apply advanced machine learning techniques. Our efforts will be coordinated with the WFIRST Science Center (WSC), the other SITs, and the broader community in a manner consistent with direction and review of the Project Office. We will publish our results as milestones are reached, and issue progress reports on a regular basis. We will represent SIT-F at all relevant meetings including meetings of the other SITs (SITs A-E), and participate in "Big Data" conferences to interact with others in the field and learn new techniques that might be applicable to WFIRST.
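
    As a hedged sketch of one ingredient named above, catalog cross-matching, the snippet below matches two synthetic source lists with astropy; the coordinates and the 1 arcsec match tolerance are assumptions for illustration.

        # Nearest-neighbour sky cross-match between two synthetic catalogs.
        import numpy as np
        import astropy.units as u
        from astropy.coordinates import SkyCoord, match_coordinates_sky

        rng = np.random.default_rng(0)
        survey = SkyCoord(ra=rng.uniform(150, 151, 1000) * u.deg,
                          dec=rng.uniform(2, 3, 1000) * u.deg)
        external = SkyCoord(ra=rng.uniform(150, 151, 800) * u.deg,
                            dec=rng.uniform(2, 3, 800) * u.deg)

        idx, sep2d, _ = match_coordinates_sky(survey, external)
        matched = sep2d < 1.0 * u.arcsec          # assumed tolerance
        print(f"{matched.sum()} of {len(survey)} sources matched within 1 arcsec")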

  11. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  12. Bias identification in PWR pressurizer instrumentation using the generalized likelihood-ratio technique

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-01-01

    A method for detecting and identifying biases in the pressure and level sensors of a pressurized water reactor (PWR) pressurizer is described. The generalized likelihood ratio (GLR) technique performs statistical tests on the innovations sequence of a Kalman filter state estimator and is capable of determining when a bias appears, in what sensor the bias exists, and estimating the bias magnitude. Simulation results using a second-order linear, discrete PWR pressurizer model demonstrate the capabilities of the GLR method
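
    A much-simplified scalar illustration of the idea is sketched below, assuming a generic first-order plant rather than the paper's pressurizer model: a Kalman filter produces an innovations sequence, and a sliding-window likelihood-ratio statistic flags a constant sensor bias and estimates its magnitude.

        # Kalman filter innovations plus a windowed GLR test for sensor bias.
        import numpy as np

        rng = np.random.default_rng(1)
        a, q, r = 0.9, 0.001, 0.1                    # plant, process and sensor noise
        x_hat, P = 0.0, 1.0
        nu_hist, S_hist = [], []

        for k in range(200):
            bias = 0.5 if k >= 100 else 0.0          # bias appears at step 100
            z = bias + rng.normal(0.0, np.sqrt(r))   # true state held at zero here
            x_hat, P = a * x_hat, a * a * P + q      # predict
            S = P + r                                # innovation covariance
            nu = z - x_hat                           # innovation
            K = P / S
            x_hat, P = x_hat + K * nu, (1.0 - K) * P # update
            nu_hist.append(nu); S_hist.append(S)

        nu, S = np.array(nu_hist), np.array(S_hist)
        W = 20                                       # sliding window length
        for k in range(W, len(nu)):
            win, s = nu[k - W:k], S[k - W:k]
            b_hat = np.sum(win / s) / np.sum(1.0 / s)   # ML estimate of an offset
            glr = b_hat**2 * np.sum(1.0 / s)            # 2 x log-likelihood ratio
            if glr > 15.0:                              # ad hoc chi-square threshold
                print(f"bias flagged at step {k}; estimated magnitude {b_hat:.2f}")
                break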

  13. Status Report on Modelling and Simulation Capabilities for Nuclear-Renewable Hybrid Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Epiney, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, J. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bragg-Sitton, S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yigitoglu, A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, S. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ganda, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Maronati, G. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    This report summarizes the current status of the modeling and simulation capabilities developed for the economic assessment of Nuclear-Renewable Hybrid Energy Systems (N-R HES). The increasing penetration of variable renewables is altering the profile of the net demand, with which the other generators on the grid have to cope. N-R HES analyses are being conducted to determine the potential feasibility of mitigating the resultant volatility in the net electricity demand by adding industrial processes that utilize either thermal or electrical energy as stabilizing loads. This coordination of energy generators and users is proposed to mitigate the increase in electricity cost and cost volatility through the production of a saleable commodity. Overall, the financial performance of a system that is comprised of peaking units (i.e. gas turbine), baseload supply (i.e. nuclear power plant), and an industrial process (e.g. hydrogen plant) should be optimized under the constraint of satisfying an electricity demand profile with a certain level of variable renewable (wind) penetration. The optimization should entail both the sizing of the components/subsystems that comprise the system and the optimal dispatch strategy (output at any given moment in time from the different subsystems). Some of the capabilities here described have been reported separately in [1, 2, 3]. The purpose of this report is to provide an update on the improvement and extension of those capabilities and to illustrate their integrated application in the economic assessment of N-R HES.
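
    As a toy sketch of the dispatch side of this problem, the linear program below balances a fixed nuclear baseload, a gas peaker, and a flexible hydrogen load against a short net-demand profile. Every number (costs, capacities, demand) is invented for illustration and is far simpler than the tool-sets the report describes.

        # Toy N-R HES dispatch: choose gas output g_t and hydrogen load h_t
        # so that nuc + g_t - h_t = net demand, at minimum cost.
        import numpy as np
        from scipy.optimize import linprog

        net = np.array([600.0, 850.0, 1000.0, 700.0])  # demand minus wind, MW
        T, nuc = len(net), 800.0                       # fixed baseload, MW
        c = np.concatenate([np.full(T, 60.0),          # gas cost, $/MWh
                            np.full(T, -20.0)])        # hydrogen revenue, $/MWh
        A_eq = np.hstack([np.eye(T), -np.eye(T)])      # g_t - h_t = net_t - nuc
        res = linprog(c, A_eq=A_eq, b_eq=net - nuc,
                      bounds=[(0, 400)] * T + [(0, 300)] * T)
        print(res.x.round(1))   # gas dispatch, then hydrogen load, per period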

  14. Capability of Catfish (Clarias gariepinus to Accumulate Hg2+ From Water

    Directory of Open Access Journals (Sweden)

    Heny Suseno

    2015-12-01

    Full Text Available Mercury is a hazardous contaminant that can be accumulated by aquatic organisms such as fishes and mussels. Catfish is one source of animal protein, but it can also accumulate Hg2+ from the water used in aquaculture. Because little information is available on the capability of catfish to accumulate Hg2+, we studied its bioaccumulation using a biokinetic approach (aqueous uptake rate and elimination rate). A nuclear technique was applied in this study, using the radiotracer 203Hg. A simple kinetic model was then constructed to predict the bioaccumulation capability of Hg2+ by catfish. The experiments showed uptake rates across the different Hg2+ concentrations of 79.90 to 101.22 ml.g-1.d-1, with a strong correlation between uptake rate and increasing Hg2+ concentration. In addition, the elimination rates ranged from 0.080 to 0.081 day-1. The biological half-life (t1/2b) of Hg2+ in whole-body catfish was 8.50 - 8.63 days. However, there was no clear correlation between elimination rate and increasing Hg2+ concentration. The calculated Bio Concentration Factor (BCF) shows that catfish can accumulate Hg up to 1242.69 times its concentration in water
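
    The reported quantities follow from a simple one-compartment biokinetic model, as the short check below shows: BCF = ku/ke and t1/2b = ln 2/ke, with rate constants taken from the ranges quoted in the abstract.

        # One-compartment biokinetics: BCF and biological half-life.
        import numpy as np

        k_u = 100.0    # uptake rate, ml/g/day (within the reported 79.90-101.22)
        k_e = 0.0805   # elimination rate, 1/day (reported 0.080-0.081)

        bcf = k_u / k_e               # bioconcentration factor at steady state
        t_half = np.log(2) / k_e      # biological half-life, days
        print(f"BCF ~ {bcf:.0f}, t1/2b ~ {t_half:.2f} d")   # ~1242 and ~8.6 d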

  15. Application of nonlinear reduction techniques in chemical process modeling: a review

    International Nuclear Information System (INIS)

    Muhaimin, Z; Aziz, N.; Abd Shukor, S.R.

    2006-01-01

    Model reduction techniques have been used widely in engineering fields for electrical, mechanical, as well as chemical engineering. The basic idea of a reduction technique is to replace the original system by an approximating system with a much smaller state-space dimension. A reduced-order model is more useful to the process industries for control purposes. This paper provides a review of applications of nonlinear model reduction techniques in chemical processes. The advantages and disadvantages of each technique reviewed are also highlighted
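
    One generic instance of the reduction idea, projection onto a low-dimensional basis extracted from simulation snapshots (POD via the SVD), is sketched below on a synthetic stable linear system. It illustrates the concept of shrinking the state-space dimension, not any particular method from the review.

        # Snapshot POD: compress a 200-state linear model to 5 states.
        import numpy as np

        rng = np.random.default_rng(0)
        n, r, dt = 200, 5, 0.01
        A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable full model

        x, snaps = rng.standard_normal(n), []
        for _ in range(100):                 # collect trajectory snapshots
            x = x + dt * (A @ x)
            snaps.append(x.copy())

        U, s, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
        V = U[:, :r]                         # dominant POD modes
        A_r = V.T @ A @ V                    # reduced r x r operator
        print("full dim:", n, "-> reduced dim:", A_r.shape[0])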

  16. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
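
    As a sketch of the spatial-interpolation step, the snippet below applies inverse-distance weighting (IDW) to a few invented borehole picks of an interface elevation; IDW is only one of several interpolation choices used when building such surfaces.

        # Inverse-distance-weighted interpolation of an interface elevation.
        import numpy as np

        pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        elev = np.array([95.0, 102.0, 98.0, 110.0])     # borehole picks, m

        def idw(query, pts, vals, power=2.0, eps=1e-12):
            d = np.linalg.norm(pts - query, axis=1)
            if d.min() < eps:               # query coincides with a data point
                return float(vals[d.argmin()])
            w = 1.0 / d ** power
            return float(w @ vals / w.sum())

        z = idw(np.array([5.0, 5.0]), pts, elev)
        print(f"interpolated elevation at grid node (5, 5): {z:.2f} m")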

  17. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  18. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
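
    A hedged sketch of working directly from model evaluations rather than a response surface is given below: a pick-freeze estimator of first-order variance-based sensitivities, demonstrated on the standard Ishigami test function. This illustrates the general idea only, not the paper's exact algorithm.

        # First-order Sobol' indices by direct model evaluation (pick-freeze).
        import numpy as np

        def model(x):   # the standard Ishigami test function
            return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
                    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

        rng = np.random.default_rng(0)
        N = 100_000
        A = rng.uniform(-np.pi, np.pi, (N, 3))
        B = rng.uniform(-np.pi, np.pi, (N, 3))
        yA, yB = model(A), model(B)
        var = yA.var()
        for i in range(3):
            ABi = B.copy()
            ABi[:, i] = A[:, i]                      # freeze variable i from A
            Si = np.mean(yA * (model(ABi) - yB)) / var
            print(f"S{i + 1} ~ {Si:.2f}")            # analytic: 0.31, 0.44, 0.00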

  19. Analytical Chemistry Core Capability Assessment - Preliminary Report

    International Nuclear Information System (INIS)

    Barr, Mary E.; Farish, Thomas J.

    2012-01-01

    The concept of 'core capability' can be a nebulous one. Even at a fairly specific level, where core capability equals maintaining essential services, it is highly dependent upon the perspective of the requestor. Samples are submitted to analytical services because the requesters do not have the capability to conduct adequate analyses themselves. Some requests are for general chemical information in support of R and D, process control, or process improvement. Many analyses, however, are part of a product certification package and must comply with higher-level customer quality assurance requirements. So which services are essential to that customer - just those for product certification? Does the customer also (indirectly) need services that support process control and improvement? And what is the timeframe? Capability is often expressed in terms of the currently utilized procedures, and most programmatic customers can only plan a few years out, at best. But should core capability consider the long term, where new technologies, aging facilities, and personnel replacements must be considered? These questions, and a multitude of others, explain why attempts to gain long-term consensus on the definition of core capability have consistently failed. This preliminary report will not try to define core capability for any specific program or set of programs. Instead, it will try to address the underlying concerns that drive the desire to determine core capability. Essentially, programmatic customers want to be able to call upon analytical chemistry services to provide all the assays they need, and they don't want to pay for analytical chemistry services they don't currently use (or use infrequently). This report will focus on explaining how the current analytical capabilities and methods evolved to serve a variety of needs with a focus on why some analytes have multiple analytical techniques, and what determines the infrastructure for these analyses. This information will be

  20. A static VAR compensator model for improved ride-through capability of wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Akhmatov, V.; Soebrink, K.

    2004-12-01

    Dynamic reactive compensation is associated with reactive power and voltage control of induction generator based wind turbines. With regard to wind power, the application areas of dynamic reactive compensation can be improvement of power quality and voltage stability, control of the reactive power exchange between the wind farm and the power grid at the wind farm connection point, and improvement of the ride-through capability of the wind farm. This article presents a model of a Static VAR Compensator (SVC) with dynamic generic control, which is a kind of dynamic reactive compensation device. The term 'generic' implies that the model is general and must cover a variety of SVC units and their specific controls from different manufacturers. The SVC model with dynamic generic control is implemented by Eltra in the simulation tool Powerfactory and validated against the SVC model in the tool PSCAD/EMTDC. Implementation in the tool Powerfactory makes it possible to apply the SVC model with dynamic generic control in investigations of power system stability with regard to the establishment of large wind farms, without restrictions on the model size of the power grid. (Author)

  1. Constructing canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

    Objective: To establish a carotid artery stenosis model by an endovascular technique suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the unilateral segments of the carotid arteries' tunica media and intima were damaged by a home-made corneous guide wire. Twenty-four carotid artery stenosis models were thus created. DSA examination was performed at postprocedural weeks 2, 4, 8, and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in the twelve dogs. Conclusions: Canine carotid artery stenosis models can be created with this endovascular method, with variations in pathologic character and hemodynamic changes similar to those in humans. The model is useful for further research involving new techniques and new materials for interventional treatment. (authors)

  2. Pulse shaping and energy storage capabilities of angularly multiplexed KrF laser fusion drivers

    Science.gov (United States)

    Lehmberg, R. H.; Giuliani, J. L.; Schmitt, A. J.

    2009-07-01

    This paper describes a rep-rated multibeam KrF laser driver design for the 500 kJ Inertial Fusion Test Facility (FTF) recently proposed by NRL, then models its optical pulse shaping capabilities using the ORESTES laser kinetics code. It describes a stable and reliable iteration technique for calculating the required precompensated input pulse shape that will achieve the desired output shape, even when the amplifiers are heavily saturated. It also describes how this precompensation technique could be experimentally implemented in real time on a rep-rated laser system. The simulations show that this multibeam system can achieve a high fidelity pulse shaping capability, even for a high gain shock ignition pulse whose final spike requires output intensities much higher than the ~4 MW/cm2 saturation levels associated with quasi-cw operation; i.e., they show that KrF can act as a storage medium even for pulsewidths of ~1 ns. For the chosen pulse, which gives a predicted fusion energy gain of ~120, the simulations predict the FTF can deliver a total on-target energy of 428 kJ, a peak spike power of 385 TW, and amplified spontaneous emission prepulse contrast ratios I_ASE/I_laser.
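
    One plausible form of such a precompensation loop (hedged; not necessarily the ORESTES scheme) is sketched below: a time-sliced saturated-amplifier model with Frantz-Nodvik-style gain depletion is inverted by a multiplicative fixed-point iteration until the output matches the requested shape. All fluence and gain values are illustrative.

        # Iteratively precompensate an input pulse through a saturated amplifier.
        import numpy as np

        E_SAT, G0 = 2.0, 4.0    # saturation fluence (arb. units), log small-signal gain

        def amplify(inp):
            """Slice-by-slice saturated gain with stored-energy depletion."""
            g, out = G0, np.empty_like(inp)
            for i, e in enumerate(inp):
                out[i] = E_SAT * np.log(1.0 + np.exp(g) * (np.exp(e / E_SAT) - 1.0))
                g -= (out[i] - e) / E_SAT   # extracted energy depletes the gain
            return out

        t = np.linspace(0.0, 1.0, 200)
        desired = 0.005 + 0.045 * (t > 0.8)   # low foot plus final high-power spike
        inp = desired / np.exp(G0)            # small-signal first guess
        for _ in range(30):                   # multiplicative fixed-point iteration
            inp *= desired / amplify(inp)
        err = np.max(np.abs(amplify(inp) / desired - 1.0))
        print(f"max relative shape error: {err:.2e}")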

  3. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    Science.gov (United States)

    Day, B. H.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R. M.; Malhotra, S.; Sadaqathullah, S.

    2015-12-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. Many of the recent enhancements to LMMP have been specifically in response to the requirements of NASA's proposed Resource Prospector lunar rover, and as such, provide an excellent example of the application of LMMP to mission planning. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. On March 31, 2015, the LMMP team released Vesta Trek (http://vestatrek.jpl.nasa.gov), a web-based application applying LMMP technology to visualizations of the asteroid Vesta. Data gathered from multiple instruments aboard Dawn have been compiled into Vesta Trek's user-friendly set of tools, enabling users to study the asteroid's features. With an initial release on July 1, 2015, Mars Trek replicates the functionality of Vesta Trek for the surface of Mars. While the entire surface of Mars is covered, higher levels of resolution and greater numbers of data products are provided for special areas of interest. Early releases focus on past, current, and future robotic sites of operation. Future releases will add many new data products and analysis tools as Mars Trek has been selected for use in site selection for the Mars 2020 rover and in identifying potential human landing sites on Mars. Other destinations will follow soon. The user community is invited to provide suggestions and requests as the development team continues to expand the capabilities of LMMP

  4. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for the treatment of short bowel syndrome. The modification of Bianchi's technique is an alternative: the modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report comparing both techniques, so we undertook the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A) and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). Both groups were compared in relation to operating time, technical difficulties, cost, intestinal lengthening and anastomosis diameter. There was no statistically significant difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulties were lower in group B (p < 0.05). The anastomoses of group B and the intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost and technical issues.

  5. Judgmental Forecasting of Operational Capabilities

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Tveterås, Sigbjørn; Andersen, Torben Juul

    This paper explores a new judgmental forecasting indicator, the Employee Sensed Operational Capabilities (ESOC). The purpose of the ESOC is to establish a practical prediction tool that can provide early signals about changes in financial performance by gauging frontline employees' sensing of changes in the firm's operational capabilities. We present the first stage of the development of ESOC by applying a formative measurement approach to test the index in relation to financial performance and against an organizational commitment scale. We use distributed lag models to test whether the ESOC...

  6. Geometry and gravity influences on strength capability

    Science.gov (United States)

    Poliner, Jeffrey; Wilmington, Robert P.; Klute, Glenn K.

    1994-01-01

    Strength, defined as the capability of an individual to produce an external force, is one of the most important determining characteristics of human performance. Knowledge of the strength capabilities of a group of individuals can be applied to designing equipment and workplaces, planning procedures and tasks, and training individuals. In the manned space program, with the high risk and cost associated with spaceflight, information pertaining to human performance is important to ensuring mission success and safety. Knowledge of individuals' strength capabilities in weightlessness is of interest within many areas of NASA, including workplace design, tool development, and mission planning. The weightless environment of space places the human body in a completely different context. Astronauts perform a variety of manual tasks while in orbit. Their ability to perform these tasks is partly determined by their strength capability as demanded by each particular task. Thus, an important step in task planning, development, and evaluation is to determine the ability of the humans performing it. This can be accomplished by utilizing quantitative techniques to develop a database of human strength capabilities in weightlessness. Furthermore, if strength characteristics are known, equipment and tools can be built to optimize the operators' performance. This study examined strength in performing a simple task, specifically, using a tool to apply a torque to a fixture.

  7. State-of-the-art Tools and Techniques for Quantitative Modeling and Analysis of Embedded Systems

    DEFF Research Database (Denmark)

    Bozga, Marius; David, Alexandre; Hartmanns, Arnd

    2012-01-01

    This paper surveys well-established and recent tools and techniques developed for the design of rigorous embedded systems. We will first survey UPPAAL and MODEST, two tools capable of dealing with both timed and stochastic aspects. Then, we will overview the BIP framework for modular design...

  8. Comparison of Fuzzy AHP Buckley and ANP Models in Forestry Capability Evaluation (Case Study: Behbahan City Fringe)

    Directory of Open Access Journals (Sweden)

    V. Rahimi

    2015-12-01

    The area of the Zagros forests is continuously in danger of destruction. Therefore, the remaining forests should be carefully managed based on ecological capability evaluation. In fact, land evaluation includes prediction or assessment of land quality for a specific land use with regard to production, vulnerability and management requirements. In this research, we studied the ecological capability of the Behbahan city fringe for forestry land use. After the basic studies were completed and the thematic maps such as soil criteria, climate, physiography, vegetation and bedrock were prepared, the fuzzy multi-criteria decision-making methods Fuzzy AHP (Buckley) and ANP were used to standardize and determine the weights of criteria. Finally, the ecological model of the region's capability was generated to prioritize forestry land use and to prepare the final evaluation map, using the WLC model, in seven classes. The results showed that with the ANP method, 55.58% of the area is suitable for forestry land use, which is more consistent with reality, while with the Fuzzy AHP method, 95.23% of the area was found suitable. It was concluded that the ANP method shows more flexibility and ability to determine suitable areas for forestry land use in the study area.
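
    Buckley's fuzzy AHP computes criterion weights from triangular fuzzy pairwise comparisons via row-wise fuzzy geometric means. A minimal Python sketch with illustrative judgments for three criteria (not the study's actual comparison matrix):

        import numpy as np

        # Hypothetical triangular fuzzy judgments (l, m, u) for three criteria,
        # e.g. soil, climate, physiography; the values are illustrative only.
        F = np.array([
            [[1, 1, 1],       [1, 2, 3],     [2, 3, 4]],
            [[1/3, 1/2, 1],   [1, 1, 1],     [1, 2, 3]],
            [[1/4, 1/3, 1/2], [1/3, 1/2, 1], [1, 1, 1]],
        ])

        n = F.shape[0]
        r = F.prod(axis=1) ** (1.0 / n)   # fuzzy geometric mean of each row
        total = r.sum(axis=0)             # (sum of l, sum of m, sum of u)
        w = r / total[::-1]               # fuzzy weights: l/sum_u, m/sum_m, u/sum_l
        crisp = w.mean(axis=1)            # centre-of-area defuzzification
        print(crisp / crisp.sum())        # normalized crisp criterion weights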

  9. Chemical measurement capabilities at Lawrence Livermore National Laboratory

    International Nuclear Information System (INIS)

    Raber, E.; Harrar, J.E.

    1992-04-01

    This document is an attempt to summarize the analytical chemistry and materials characterization techniques available at LLNL. The emphasis is on the variety of samples for which intelligence information is sought and/or applications where sample size is very limited and duplicate samples are usually not obtainable. The instrumentation currently available, the types of samples presently being analyzed, and a description of the various methods are provided. LLNL has made an effort during the last three years to develop a forensic science approach to sample analysis. Many of these capabilities are presently utilized, to some degree, for ongoing analysis of unusual samples provided by various sponsor agencies. The analytical techniques utilized, although coordinated through the Special Projects Program, take advantage of the full range of capabilities available at LLNL. This document represents input from several organizations at LLNL, all working together to provide the maximum level of available expertise: Condensed Matter and Analytical Sciences Division of the Materials Science Directorate, Nuclear Chemistry Division of the Defense Sciences Directorate, Center for Accelerator Mass Spectrometry of the Physics Directorate, Biomedical Sciences Division of the Environmental Sciences and Biomedical Directorate, and Applied Technology Division of the Special Projects Program Directorate

  10. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.
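
    A confirmatory factor analysis of this kind can be sketched in Python with the semopy package. The item names, the two posited cognitive levels and the file name below are hypothetical placeholders; the study's exam data and candidate models differ:

        import pandas as pd
        from semopy import Model, calc_stats

        # Hypothetical two-factor model: six items loading on two cognitive levels
        spec = """
        recall      =~ item1 + item2 + item3
        application =~ item4 + item5 + item6
        recall ~~ application
        """

        data = pd.read_csv("exam_items.csv")   # placeholder table of item scores
        model = Model(spec)
        model.fit(data)
        print(model.inspect())                 # loadings and factor covariance
        print(calc_stats(model).T)             # fit indices such as CFI and RMSEA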

  11. Modelling Technique for the Assessment of the Sub-Soil Drain for Groundwater Seepage Remediation

    Directory of Open Access Journals (Sweden)

    Tajul Baharuddin Mohamad Faizal

    2017-01-01

    A groundwater simulation technique was applied to examine the performance of a sub-soil drain at a problematic site. The sub-soil drain was proposed as a solution to the groundwater seepage occurring at the slope face in Taman Botani Park Kuala Lumpur, by lowering the groundwater table. The simulation used the Modular Three-Dimensional Finite-Difference Groundwater Flow (MODFLOW) software. Under transient conditions, the simulation results showed that heads rose 1 to 2 m above the elevation of the slope area, which caused groundwater seepage on the slope face. This study attempted to reduce this head excess by simulating different sub-soil drain sizes. The sub-soil drain was capable of lowering the heads by the required 1 to 2 m.
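
    A MODFLOW drain of this kind can be prototyped in Python with the FloPy package. Everything below (grid, starting heads, drain elevation and conductance) is illustrative rather than the study's calibrated model, and a MODFLOW-2005 executable is assumed to be on the path:

        import flopy

        # Minimal single-layer model with a line of drain cells standing in
        # for the sub-soil drain; geometry and conductances are illustrative.
        mf = flopy.modflow.Modflow("subsoil_drain", exe_name="mf2005")
        dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=20, ncol=20,
                                       delr=5.0, delc=5.0, top=30.0, botm=0.0)
        bas = flopy.modflow.ModflowBas(mf, ibound=1, strt=28.0)
        lpf = flopy.modflow.ModflowLpf(mf, hk=1.0)

        # DRN package entries: [layer, row, col, drain elevation, conductance]
        drains = [[0, r, 10, 24.0, 50.0] for r in range(20)]
        drn = flopy.modflow.ModflowDrn(mf, stress_period_data={0: drains})

        pcg = flopy.modflow.ModflowPcg(mf)
        oc = flopy.modflow.ModflowOc(mf)
        mf.write_input()
        success, _ = mf.run_model(silent=True)   # requires the mf2005 executable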

  12. High Altitude Platforms for Disaster Recovery: Capabilities, Strategies, and Techniques for Providing Emergency Telecommunications

    Energy Technology Data Exchange (ETDEWEB)

    Juan D. Deaton

    2008-05-01

    Natural disasters and terrorist acts have significant potential to disrupt emergency communication systems. These emergency communication networks include first-responder, cellular, landline, and emergency answering services such as 911, 112, or 999. Without these essential emergency communications capabilities, search, rescue, and recovery operations during a catastrophic event will be severely debilitated. High altitude platforms (HAPs) could be fitted with telecommunications equipment and used to support these critical communications missions once a catastrophic event occurs. With the ability to remain continuously on station, HAPs provide excellent options for providing emergency coverage over high-risk areas before catastrophic incidents occur. HAPs could also provide enhanced 911 capabilities using either GPS or reference stations. This paper proposes a potential emergency communications architecture and presents a method for estimating emergency communications traffic patterns for a catastrophic event.
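
    Traffic estimates of this kind commonly feed a blocking-probability calculation to size the channel pool on the platform. A minimal sketch using the classical Erlang B recursion, with made-up surge numbers (the paper's estimation method may differ):

        def erlang_b(traffic_erlangs: float, channels: int) -> float:
            """Blocking probability for offered traffic on a given channel
            count, via the numerically stable Erlang B recursion."""
            b = 1.0
            for m in range(1, channels + 1):
                b = traffic_erlangs * b / (m + traffic_erlangs * b)
            return b

        # Example: a surge of 12,000 calls/hour with a 90 s mean hold time
        # offers 12000 * 90 / 3600 = 300 erlangs of traffic.
        offered = 12_000 * 90 / 3600
        for n in (280, 300, 320, 340):
            print(f"{n} channels -> blocking {erlang_b(offered, n):.4f}")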

  13. Photosynthetic and nitrogen fixation capability in several soybean mutant lines

    International Nuclear Information System (INIS)

    Gandanegara, S.; Hendratno, K.

    1987-01-01

    Photosynthetic and nitrogen fixation capability in several soybean mutant lines. A greenhouse experiment was carried out to study the photosynthetic and nitrogen fixation capability of five mutant lines and two soybean varieties. An amount of 330 µCi of ¹⁴CO₂ was fed to the plants, including the non-fixing reference crop (the Chippewa non-nodulating isoline). Nitrogen fixation was measured using the ¹⁵N isotope dilution technique according to the A-value concept. Results showed that, besides variety/mutant line, plant growth also plays an important role in photosynthetic and N-fixing capability. Better growth and higher photosynthetic capability in Orba and mutant lines nos. 63 and 65 resulted in a greater amount of N₂ fixed (mg N/plant) than in the other mutant lines. (author). 12 refs.; 5 figs
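
    In the ¹⁵N isotope dilution technique, the fraction of plant nitrogen derived from the atmosphere (%Ndfa) follows from comparing the ¹⁵N enrichment of the fixing line with that of the non-fixing reference. A sketch with illustrative enrichment values (not the study's data):

        def pct_ndfa(ae_fixing: float, ae_reference: float) -> float:
            """%N derived from the atmosphere by 15N isotope dilution:
            %Ndfa = (1 - AE_fixing / AE_reference) * 100,
            with AE the atom% 15N excess over natural abundance."""
            return (1.0 - ae_fixing / ae_reference) * 100.0

        # Illustrative numbers only (not from the study):
        ae_ref, ae_fix = 0.510, 0.180    # atom% 15N excess
        total_n_mg = 95.0                # total plant N, mg/plant
        ndfa = pct_ndfa(ae_fix, ae_ref)
        print(f"%Ndfa = {ndfa:.1f}%, "
              f"N2 fixed = {total_n_mg * ndfa / 100:.1f} mg N/plant")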

  14. Capability Paternalism

    NARCIS (Netherlands)

    Claassen, R.J.G.|info:eu-repo/dai/nl/269266224

    A capability approach prescribes paternalist government actions to the extent that it requires the promotion of specific functionings, instead of the corresponding capabilities. Capability theorists have argued that their theories do not have much of these paternalist implications, since promoting

  15. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC is often achieved by using data-driven methods that include machine learning (ML techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT, Neural Networks (NN, and Support Vector Machines (SVM for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied on the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes that were related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
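
    The three-way comparison can be sketched with scikit-learn on synthetic stand-in data; the attribute table and the kappa-style agreement metric below are illustrative, not the study's data or exact evaluation protocol:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.metrics import cohen_kappa_score

        # Stand-in attribute table: rows are raster cells, columns are drivers
        # (e.g. slope, distance to roads); the target is the class at t+1.
        rng = np.random.default_rng(0)
        X = rng.random((5000, 6))
        y = rng.integers(0, 9, size=5000)        # nine land use classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        models = {
            "DT": DecisionTreeClassifier(max_depth=10, random_state=0),
            "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                random_state=0),
            "SVM": SVC(kernel="rbf", C=1.0),
        }
        for name, m in models.items():
            m.fit(X_tr, y_tr)
            # Kappa measures agreement between predicted and observed classes
            print(name, round(cohen_kappa_score(y_te, m.predict(X_te)), 3))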

  16. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives for achieving operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods for increasing supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from the Six Sigma methodology to evaluate and improve the level of supply chain visibility.
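
    A process-capability Z score of the Six Sigma kind can be computed as follows; the visibility metric (order-tracking latency) and the specification limits are hypothetical, as the paper's exact quantification differs:

        import statistics

        def z_score(samples, usl, lsl):
            """Short-term process capability: distance from the mean to the
            nearer specification limit, in standard deviations."""
            mu = statistics.fmean(samples)
            sigma = statistics.stdev(samples)
            return min(usl - mu, mu - lsl) / sigma

        # Hypothetical daily order-tracking latency (hours), spec limits 0-24 h
        latency = [6.1, 8.4, 7.2, 9.8, 5.5, 7.9, 8.8, 6.6, 7.4, 9.1]
        print(f"Z = {z_score(latency, usl=24.0, lsl=0.0):.2f}")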

  17. Capability Building and Learning: An Emergent Behavior Approach

    Directory of Open Access Journals (Sweden)

    Andreu Rafael

    2014-12-01

    Full Text Available Economics-based models of firms typically overlook management acts and capability development. We propose a model that analyzes the aggregate behavior of a population of firms resulting from both specific management decisions and learning processes, that induce changes in companies’ capabilities. Decisions are made under imperfect information and bounded rationality, and managers may sacrifice short-term performance in exchange for qualitative outcomes that affect their firm’s future potential. The proposed model provides a structured setting in which these issues -often discussed only informally- can be systematically analyzed through simulation, producing a variety of hard-to-anticipate emergent behaviors. Economic performance is quite sensitive to managers’ estimates of their firms’ capabilities, and companies willing to sacrifice short-run results for future potential appear to be more stable than the rest. Also, bounded rationality can produce chaotic dynamics reminiscent of real life situations.

  18. Evolving Capabilities for Virtual Globes

    Science.gov (United States)

    Glennon, A.

    2006-12-01

    Though thin-client spatial visualization software like Google Earth and NASA World Wind enjoys widespread popularity, a common criticism is its general lack of analytical functionality. This concern, however, is rapidly being addressed; standard and advanced geographic information system (GIS) capabilities are being developed for virtual globes, though not centralized into a single implementation or software package. The innovation is mostly originating from the user community. Three such capabilities relevant to the earth science, education, and emergency management communities are modeling dynamic spatial phenomena, real-time data collection and visualization, and multi-input collaborative databases. Modeling dynamic spatial phenomena has been facilitated by joining virtual globe geometry definitions, like KML, to relational databases. Real-time data collection uses short scripts to transform user-contributed data into a format usable by virtual globe software. Similarly, collaborative data collection for virtual globes has become possible by dynamically referencing online, multi-person spreadsheets. Examples of these functions include mapping flows within a karst watershed, real-time disaster assessment and visualization, and a collaborative geyser eruption spatial decision support system. Virtual globe applications will continue to evolve, adding further analytical capabilities, more temporal data handling, and scales from nano to intergalactic. This progression opens education and research avenues in all scientific disciplines.
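
    Joining virtual-globe geometry to a database often reduces to emitting KML from query results. A minimal sketch with the simplekml package, using hypothetical spring locations in the spirit of the karst-watershed example:

        import simplekml

        # Hypothetical (name, lon, lat) rows, e.g. fetched from a relational
        # database of dye-trace observations
        springs = [("Spring A", -86.132, 37.186), ("Spring B", -86.101, 37.201)]

        kml = simplekml.Kml()
        kml.newlinestring(name="Inferred flow path",
                          coords=[(lon, lat) for _, lon, lat in springs])
        for name, lon, lat in springs:
            kml.newpoint(name=name, coords=[(lon, lat)])
        kml.save("karst_flow.kml")   # load in Google Earth or World Wind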

  19. A JOINT VENTURE MODEL FOR ASSESSMENT OF PARTNER CAPABILITIES: THE CASE OF ESKOM ENTERPRISES AND THE AFRICAN POWER SECTOR

    Directory of Open Access Journals (Sweden)

    Y.V. Soni

    2012-01-01

    ENGLISH ABSTRACT: This article investigates the concept of joint ventures in the international energy sector and develops a joint venture model, as a business development and assessment tool. The joint venture model presents a systematic method that relies on modern business intelligence to assess a potential business venture by using a balanced scorecard technique to screen potential partners, based on their technological and financial core capabilities. The model can be used by business development managers to harness the potential of joint ventures to create economic growth and sustainable business expansion. Furthermore, partnerships with local companies can help to mitigate economic and political risk, and facilitate buy-in from the national governments that are normally the primary stakeholders in energy sector ventures (directly or indirectly). The particular case of Eskom Enterprises (Pty) Ltd, a wholly owned subsidiary of Eskom, is highlighted.

    AFRIKAANS SUMMARY (translated): This article examines the concept of the joint venture in the international energy sector and develops a joint venture model as a business development and assessment tool. The joint venture model offers a systematic method that relies on modern business intelligence to assess a potential business venture on the basis of its technological and financial core capabilities, using a balanced scorecard technique. The model can be used by business development managers to harness the potential of joint ventures to create economic growth and sustainable business expansion. Furthermore, partnerships with local companies can help to reduce the economic risk and to facilitate buy-in from the national governments that are usually the primary stakeholders in energy sector ventures (whether directly or indirectly). The particular case of Eskom Enterprises (Pty) Ltd, a wholly owned subsidiary of Eskom

  20. Capabilities, innovation, and overall performance in Brazilian export firms.

    Directory of Open Access Journals (Sweden)

    José Ednilson de Oliveira Cabral

    2015-06-01

    This article extends the current research on innovation by investigating the relationship between innovative capabilities and export firms' overall performance. From the perspectives of the resource-based view (RBV) and dynamic capabilities, we examine the differential and interactive effects of exploration and exploitation capabilities on product innovation for external markets and on overall performance (direct and mediated by new products). In addition, we test the moderating effect of market dynamism and the controlling effect of firm size on these relationships. Hence, the main contribution of this article is developing and empirically testing an original model combining these constructs, which address new relationships, in an emerging country. This model was tested with data from 498 Brazilian export firms, distributed across all Brazilian manufacturing sectors, firm sizes, and states. The analysis was performed using structural equation modeling (SEM). As a result, we found support for the assumptions that exploitation capabilities influence product innovation and overall performance, whereas exploration capabilities and their interaction with exploitation capabilities influence overall performance, but not product innovation. Additionally, the relationship between exploitation capabilities and overall performance is mediated by product innovation. Unlike hypothesized, market dynamism does not moderate the relationship between product innovation and overall performance. Furthermore, firm size works as a controlling variable in the relationships analyzed. Regarding the implications for theory, this study contributes to the understanding that exploitation capabilities influence a firm's overall performance, both directly and indirectly (via product innovation), and highlights the various direct and mediatory effects of innovation on overall performance. These insights show the importance of considering the role of mediating and

  1. Construct canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

    Objective: To construct canine bifurcation aneurysms suitable for evaluating endovascular devices for interventional therapy. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique; embolization with a detachable balloon was then performed at its origin. DSA examinations were performed 1, 2, and 3 days after the procedures. Results: Six aneurysm models were created in the six dogs successfully, with the mean width and height of the aneurysms decreasing within 3 days. Conclusions: This canine aneurysm model reproduces the size and shape of human cerebral bifurcation saccular aneurysms on DSA images and is suitable for the evaluation of endovascular devices for aneurysmal therapy. The procedure is quick, reliable and reproducible. (authors)

  2. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  3. A Performance Evaluation for IT/IS Implementation in Organisation: Preliminary New IT/IS Capability Evaluation (NICE Model

    Directory of Open Access Journals (Sweden)

    Hafez Salleh

    2011-12-01

    Most traditional IT/IS performance measures are based on productivity and process, and mainly focus on methods of investment appraisal. There is a need to produce alternative holistic measurement models that enable soft and hard issues to be measured qualitatively. A New IT/IS Capability Evaluation (NICE) framework has been designed to measure the capability of organisations to successfully implement IT systems, and it is applicable across industries. The idea is to provide managers with measurement tools to enable them to identify where improvements are required within their organisations and to indicate their readiness prior to IT investment. The NICE framework investigates four key organisational elements: IT, Environment, Process and People, and is composed of six progressive stages of maturity through which a company can develop its IT/IS capabilities. For each maturity stage, the NICE framework describes a set of critical success factors that must be in place for the company to achieve that stage.

  4. Tracer techniques in microelectronics

    International Nuclear Information System (INIS)

    Flachowsky, J.; Freyer, K.

    1981-01-01

    Tracer techniques and neutron activation analysis are capable of measuring impurities in semiconductor material, or on the semiconductor surface, in a very low concentration range. The methods, combined with autoradiography, are also suitable for determining dopant distributions in silicon. However, both techniques suffer from certain inherent experimental difficulties and/or limitations, which are discussed. Methods of tracer technique practicable in the semiconductor field are described. (author)

  5. Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Showstack, Randy

    2011-04-01

    Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the failure to anticipate the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including the damage to the nuclear plant.

  6. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
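
    The LHS step can be sketched with SciPy's quasi-Monte Carlo tools; the two-parameter box and the model count below are illustrative, and GRAPE's weighting and resampling logic is considerably more involved:

        import numpy as np
        from scipy.stats import qmc

        # Latin hypercube sample over a 2-D parameter box, one point per
        # parallel filter model in the bank.
        sampler = qmc.LatinHypercube(d=2, seed=1)
        unit = sampler.random(n=8)                  # 8 filter hypotheses
        lo, hi = [0.5, 0.01], [2.0, 0.20]           # illustrative bounds
        theta = qmc.scale(unit, lo, hi)

        # Each row would seed one Kalman filter; weights are then updated
        # from measurement residuals before the box is re-centered.
        print(np.round(theta, 3))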

  7. ADVANCED SIMULATION CAPABILITY FOR ENVIRONMENTAL MANAGEMENT- CURRENT STATUS AND PHASE II DEMONSTRATION RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.

    2013-02-26

    The U.S. Department of Energy (USDOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in the development of computer software capabilities, with an emphasis in FY12 on integration of capabilities. Capability development is occurring for both the Platform and Integrated Toolsets and the High-Performance Computing (HPC) Multiprocess Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, toolsets for interaction with the Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration, focusing on individual capabilities of the initial toolsets, was completed in 2010. The Phase II demonstration, completed in 2012, focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  8. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies due to their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and the possible improvements over these models through 3D modeling, are also discussed. It is found that HEC-RAS and the FLO-2D model are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found; these can be improved through a 3D model. The 3D approach was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open-channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the understanding of the causes and effects of flooding.
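
    The 1D river components of such models ultimately rest on relations like Manning's equation; a sketch for a rectangular channel with illustrative geometry and roughness:

        def manning_discharge(n: float, width: float, depth: float,
                              slope: float) -> float:
            """Normal-flow discharge Q = (1/n) * A * R^(2/3) * S^(1/2),
            SI units, for a rectangular channel cross-section."""
            area = width * depth
            wetted_perimeter = width + 2.0 * depth
            hydraulic_radius = area / wetted_perimeter
            return area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n

        # Illustrative reach: n = 0.035, 40 m wide, 3 m deep, S = 0.0008
        print(f"Q = {manning_discharge(0.035, 40.0, 3.0, 0.0008):.1f} m3/s")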

  9. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique in which detailed models of individual systems are processed, rather than a single, plant-level model being evaluated. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  10. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media
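
    The essence of the reflectivity approach at normal incidence is a bottom-up recursion of Fresnel coefficients with a phase delay across each layer. A simplified, lossless sketch (the paper's implementation handles oblique incidence, losses and the air-soil radiation pattern):

        import numpy as np

        def layered_reflectivity(eps_r, thickness_m, freq_hz):
            """Normal-incidence reflection coefficient of a stack of lossless
            dielectric layers over a half-space, via bottom-up recursion."""
            c = 299_792_458.0
            n = np.sqrt(np.asarray(eps_r, dtype=complex))   # refractive indices
            r = 0.0 + 0.0j
            for i in range(len(n) - 2, -1, -1):
                rij = (n[i] - n[i + 1]) / (n[i] + n[i + 1])  # Fresnel coefficient
                if i + 1 < len(n) - 1:
                    # two-way phase across layer i+1
                    phase = np.exp(-2j * 2 * np.pi * freq_hz * n[i + 1]
                                   * thickness_m[i + 1] / c)
                    r = (rij + r * phase) / (1 + rij * r * phase)
                else:
                    r = rij                                  # deepest interface
            return r

        # Air over 0.4 m of dry soil (eps_r ~ 4) over wetter soil (eps_r ~ 9)
        eps = [1.0, 4.0, 9.0]
        d = [0.0, 0.4, 0.0]    # only internal layers need a thickness
        print(abs(layered_reflectivity(eps, d, 500e6)))      # at 500 MHz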

  11. Study of applying the Atmospheric Release Advisory Capability to nuclear power plants

    International Nuclear Information System (INIS)

    Orphan, R.C.

    1978-06-01

    Each utility licensee for a nuclear power reactor is required to minimize the adverse effects from an accidental radionuclide release into the atmosphere. In the past the ability to forecast quantitatively the extent of the hazard from such a release has been limited. Now powerful atmospheric modeling techniques are available to assist nuclear reactor site officials with greatly improved assessments. Lawrence Livermore Laboratory (LLL) has developed a prototype system called the Atmospheric Release Advisory Capability (ARAC) which is designed to integrate the modeling with advanced sensors, data handling techniques, and weather data in order to provide timely, usable advisories to the site officials. The purpose of this project is to examine the ways and means of adapting ARAC for application to many nuclear power reactors widely dispersed across the nation. The project will emphasize the management aspects, including government-industry relationships, technology transfer, organizational structure, staffing, implementing procedures, and costs. Benefits and costs for several alternative systems will be compared. The results will be reviewed and evaluated by the management and staff of the ARAC project at LLL and also by selected staff members of the sponsoring government agency
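
    Advisory systems of this kind build on atmospheric dispersion models, of which the Gaussian plume is the simplest; the sketch below uses Briggs-style rural coefficients for neutral stability, whereas ARAC itself relies on far more sophisticated transport modeling:

        import math

        def plume_concentration(q_bq_s, u_m_s, x_m, y_m, z_m, h_m):
            """Ground-reflected Gaussian plume concentration (Bq/m^3) at a
            receptor downwind of a continuous elevated release."""
            # Briggs rural sigmas for neutral (class D) stability, x in metres
            sy = 0.08 * x_m / math.sqrt(1.0 + 0.0001 * x_m)
            sz = 0.06 * x_m / math.sqrt(1.0 + 0.0015 * x_m)
            vert = (math.exp(-(z_m - h_m) ** 2 / (2.0 * sz ** 2))
                    + math.exp(-(z_m + h_m) ** 2 / (2.0 * sz ** 2)))
            return (q_bq_s / (2.0 * math.pi * u_m_s * sy * sz)
                    * math.exp(-(y_m ** 2) / (2.0 * sy ** 2)) * vert)

        # 1 GBq/s release from a 50 m stack, 5 m/s wind, receptor 2 km downwind
        print(f"{plume_concentration(1e9, 5.0, 2000.0, 0.0, 1.5, 50.0):.3e} Bq/m^3")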

  12. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.
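
    The well index mentioned here is conventionally computed with a Peaceman-type formula; a sketch for an isotropic square grid block (the project's techniques extend this with wellbore flow effects and non-conventional well geometries):

        import math

        def peaceman_well_index(k_md, h_m, dx_m, rw_m, skin=0.0):
            """Well index WI = 2*pi*k*h / (ln(r_o / r_w) + s), with Peaceman's
            equivalent radius r_o ~ 0.2*dx for a square block, isotropic k."""
            k = k_md * 9.869233e-16      # millidarcy -> m^2
            r_o = 0.2 * dx_m
            return 2.0 * math.pi * k * h_m / (math.log(r_o / rw_m) + skin)

        # 100 mD, 10 m thick block, 50 m grid cell, 0.1 m wellbore radius
        wi = peaceman_well_index(100.0, 10.0, 50.0, 0.1)
        print(f"WI = {wi:.3e} m^3 (multiply by mobility and drawdown for rate)")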

  13. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.

  14. Temperature dependent power capability estimation of lithium-ion batteries for hybrid electric vehicles

    International Nuclear Information System (INIS)

    Zheng, Fangdan; Jiang, Jiuchun; Sun, Bingxiang; Zhang, Weige; Pecht, Michael

    2016-01-01

    The power capability of lithium-ion batteries affects the safety and reliability of hybrid electric vehicles, and the power estimate from the battery management system provides operating information for drivers. In this paper, lithium-ion manganese oxide batteries are studied to illustrate the temperature dependency of power capability, and an operating map of power capability is presented. Both parametric and non-parametric models are established, in terms of temperature, state of charge, and cell resistance, to estimate the power capability. Six cells were tested and used for model development, training, and validation. Three samples underwent hybrid pulse power characterization tests at varied temperatures and were used for model parameter identification and model training. The other three were used for model validation. By comparison, the mean absolute error of the parametric model is about 29 W, and that of the non-parametric model is around 20 W. The mean relative errors of the two models are 0.076 and 0.397, respectively. The parametric model has higher accuracy in low temperature and state of charge conditions, while the non-parametric model gives better estimation results in high temperature and state of charge conditions. Thus, the two models can be utilized together to achieve a higher accuracy of power capability estimation. - Highlights: • The temperature dependency of power capability of lithium-ion battery is investigated. • The parametric and non-parametric power capability estimation models are proposed. • An exponential function is put forward to compensate the effects of temperature. • A comparative study on the accuracy of two models using statistical metrics is presented.
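
    A common parametric form behind such estimates bounds the peak discharge power by the open-circuit voltage, cell resistance, and minimum voltage limit; a sketch with a hypothetical temperature-dependent resistance (the paper's fitted models are more elaborate):

        import math

        def cell_resistance(t_celsius, r_25=2.0e-3, ea_over_r=2000.0):
            """Hypothetical Arrhenius-like growth of discharge resistance as
            temperature falls; all parameters are illustrative."""
            return r_25 * math.exp(ea_over_r * (1.0 / (t_celsius + 273.15)
                                                - 1.0 / 298.15))

        def peak_discharge_power(v_oc, r_ohm, v_min):
            """Peak power sustainable without the terminal voltage dropping
            below v_min: I = (v_oc - v_min) / R, so P = v_min * I."""
            return v_min * (v_oc - v_min) / r_ohm

        for t in (-10, 0, 25, 45):
            p = peak_discharge_power(3.9, cell_resistance(t), 3.0)
            print(f"{t:>4} degC: {p:6.0f} W per cell")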

  15. The necessity of environmental capability evaluation in physical planning

    International Nuclear Information System (INIS)

    Tavakol, M.

    1997-01-01

    For the physical planning of the Vaz Research Forest, the necessity of site selection in the context of land evaluation is discussed. The project studied the evaluation of the ecological capability of the land in the following stages: 1- Study of physical and biological resources in the context of a GIS system. 2- Analysis and integration of data. 3- Evaluation of the ecological capability of the land by employing a suitable ecological model. 4- Site selection by comparison and coordination of the principles used in the model with the results of the ecological capability evaluation in the GIS system. 5- Site and environmental design based on ecological principles.

  16. Nondestructive determination of plutonium mass in spent fuel: preliminary modeling results using the passive neutron Albedo reactivity technique

    International Nuclear Information System (INIS)

    Evans, Louise G.; Tobin, Stephen J.; Schear, Melissa A.; Menlove, Howard O.; Lee, Sang Y.; Swinhoe, Martyn T.

    2009-01-01

    There are a variety of motivations for quantifying plutonium (Pu) in spent fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the capability of the International Atomic Energy Agency (IAEA) to safeguard nuclear facilities, quantifying shipper/receiver differences, determining the input accountability value at pyrochemical processing facilities, providing quantitative input to burnup credit, and final safeguards measurements at a long-term repository. In order to determine Pu mass in spent fuel assemblies, thirteen NDA techniques were identified that provide information about the composition of an assembly. A key motivation of the present research is the realization that none of these techniques, in isolation, is capable of both (1) quantifying the Pu mass of an assembly and (2) detecting the diversion of a significant number of rods. It is therefore anticipated that a combination of techniques will be required. A five-year effort funded by the Next Generation Safeguards Initiative (NGSI) of the U.S. DOE was recently started in pursuit of these goals. The first two years involve researching all thirteen techniques using Monte Carlo modeling, while the final three years involve fabricating hardware and measuring spent fuel. Here, we present the work in two main parts: (1) an overview of this NGSI effort, describing the motivations and the approach being taken; (2) preliminary results for one of the NDA techniques, Passive Neutron Albedo Reactivity (PNAR). The PNAR technique functions by using the intrinsic neutron emission of the fuel (primarily from the spontaneous fission of curium) to self-interrogate any fissile material present. Two separate measurements of the spent fuel are made, with and without cadmium (Cd) present. The ratios of the Singles, Doubles and Triples count rates obtained in the two cases, known as Cd ratios, are analyzed. The primary difference between the two measurements is the neutron energy spectrum
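
    The Cd-ratio bookkeeping itself is simple; a sketch with made-up count rates (in practice the rates come from modeled or measured assemblies):

        def cd_ratios(rates_bare, rates_cd):
            """Cd ratio per multiplicity order: count rate without Cd divided
            by count rate with Cd; higher fissile content raises the ratio
            because self-interrogation is suppressed in the Cd case."""
            return {k: rates_bare[k] / rates_cd[k] for k in rates_bare}

        # Hypothetical Singles/Doubles/Triples rates (counts/s), not real data
        bare = {"Singles": 1.52e5, "Doubles": 9.8e3, "Triples": 6.1e2}
        cd = {"Singles": 1.31e5, "Doubles": 7.2e3, "Triples": 3.9e2}
        print(cd_ratios(bare, cd))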

  17. Sandia Laboratories technical capabilities. Auxiliary capabilities: environmental health information science

    International Nuclear Information System (INIS)

    1975-09-01

    Sandia Laboratories is an engineering laboratory in which research, development, testing, and evaluation capabilities are integrated by program management for the generation of advanced designs. In fulfilling its primary responsibility to ERDA, Sandia Laboratories has acquired extensive research and development capabilities. The purpose of this series of documents is to catalog the many technical capabilities of the Laboratories. After the listing of capabilities, supporting information is provided in the form of highlights, which show applications. This document deals with auxiliary capabilities, in particular, environmental health and information science. (11 figures, 1 table) (RWR)

  18. Modeling air concentration over macro roughness conditions by Artificial Intelligence techniques

    Science.gov (United States)

    Roshni, T.; Pagliara, S.

    2018-05-01

    Aeration in rivers is improved by the turbulence created in flows under macro and intermediate roughness conditions, which are generated by flows over block ramps or rock chutes. The measurements were taken in the uniform flow region. Applications of soft computing methods to the modeling of hydraulic parameters are not common so far. In this study, the modeling efficiencies of the MPMR and FFNN models are evaluated for estimating the air concentration over block ramps under macro roughness conditions. The experimental data are used for the training and testing phases. The potential capability of the MPMR and FFNN models in estimating air concentration is demonstrated through this study.

  19. Radio techniques for probing the terrestrial ionosphere.

    Science.gov (United States)

    Hunsucker, R. D.

    The subject of the book is a description of the basic principles of operation, plus the capabilities and limitations of all generic radio techniques employed to investigate the terrestrial ionosphere. The purpose of this book is to present to the reader a balanced treatment of each technique so they can understand how to interpret ionospheric data and decide which techniques are most effective for studying specific phenomena. The first two chapters outline the basic theory underlying the techniques, and each following chapter discusses a separate technique. This monograph is entirely devoted to techniques in aeronomy and space physics. The approach is unique in its presentation of the principles, capabilities and limitations of the most important presently used radio techniques. Typical examples of data are shown for the various techniques, and a brief historical account of the technique development is presented. An extended annotated bibliography of the salient papers in the field is included.

  20. Improving Coastal Ocean Color Validation Capabilities through Application of Inherent Optical Properties (IOPs)

    Science.gov (United States)

    Mannino, Antonio

    2008-01-01

    Understanding how the different components of seawater alter the path of incident sunlight through scattering and absorption is essential to using remotely sensed ocean color observations effectively. This is particularly apropos in coastal waters, where the different optically significant components (phytoplankton, detrital material, inorganic minerals, etc.) vary widely in concentration, often independently from one another. Inherent Optical Properties (IOPs) form the link between these biogeochemical constituents and the Apparent Optical Properties (AOPs); understanding this interrelationship is at the heart of successfully carrying out inversions of satellite-measured radiance to biogeochemical properties. While sufficient covariation of seawater constituents in case I waters typically allows empirical algorithms connecting AOPs and biogeochemical parameters to behave well, these empirical algorithms normally do not hold for case II regimes (Carder et al. 2003). Validation in the context of ocean color remote sensing refers to in-situ measurements used to verify or characterize algorithm products or any assumption used as input to an algorithm. In this project, validation capabilities are considered those measurement capabilities, techniques, methods, models, etc. that allow effective validation. Enhancing current validation capabilities by incorporating state-of-the-art IOP measurements and optical models is the purpose of this work. Involved in this pursuit are improving core IOP measurement capabilities (spectral, angular, and spatio-temporal resolutions), improving our understanding of the behavior of analytical AOP-IOP approximations in complex coastal waters, and improving the spatial and temporal resolution of biogeochemical data for validation by applying biogeochemical-IOP inversion models so that these parameters can be computed from real-time IOP sensors with high sampling rates. Research cruises supported by this project provide for collection and

  1. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

    M. Z. H. Bhuiyan J. Zhang

    2012-12-01

    Multipath is undesirable for Global Navigation Satellite System (GNSS) receivers, since the reception of multipath can create significant distortion to the shape of the correlation function, leading to an error in the receiver's position estimate. Many multipath mitigation techniques exist in the literature to deal with the multipath propagation problem in the context of GNSS. The multipath studies in the literature are often based on optimistic assumptions, for example, assuming a static two-path channel or a fading channel with a Rayleigh or a Nakagami distribution. But, in reality, there are many channel modeling issues, for example, satellite-to-user geometry, variable number of paths, variable path delays and gains, Non-Line-Of-Sight (NLOS) path conditions, receiver movements, etc., that are left out of consideration when analyzing the performance of these techniques. It is therefore of the utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, for example, the Land Mobile Satellite (LMS) channel model.

  2. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  3. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Burr, Tom; Gorensek, M.B.; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  4. ANTECEDENTS OF CUSTOMER RELATIONSHIP MANAGEMENT CAPABILITIES

    Directory of Open Access Journals (Sweden)

    Tuleu Daniela

    2015-07-01

    Customer relationship management (CRM), as a process to manage customer relationship initiation, maintenance and termination in order to maximize the value of the relationship portfolio, is an evolving process. In recent years, the development of interactive technologies (social media) has revolutionized the interaction between firms and their customers and between customers themselves. The impact of technology on CRM is improving ways of collecting and processing customer information and transforming communication with customers. In the context of the development of social networks, the introduction of social media applications into customer relationship management activities brings important changes to this area. Thus, managers need to pay attention to interaction management as an important process of CRM and enhance their customer relationship management capabilities. The study proposes a conceptual research model of several antecedents of customer relationship management capabilities and establishes the linkage between these antecedents and CRM capabilities. First, following a review of the existing research literature on customer relationship management, some conceptual clarifications of customer relationship management are given. Second, the working concepts are presented: the adoption of interactive technologies, the customer concept, customer empowerment, customer relationship orientation and customer-centric management systems. Then the conceptual model is proposed, and finally conclusions, managerial implications, limitations and research directions are presented. From a theoretical perspective, this paper highlights the importance of marketing actions at the individual customer level and reveals the impact of companies' adoption of interactive technologies, which give organizations the opportunity to engage in conversations with customers and respond in real time to the requirements customers raise in the online environment. Nowadays, customers feel empowered and play

  5. Video: two novel endoscopic esophageal lengthening and reconstruction techniques.

    Science.gov (United States)

    Perretta, Silvana; Wall, James K; Dallemagne, Bernard; Harrison, Michael; Becmeur, François; Marescaux, Jacques

    2011-10-01

    Esophageal reconstruction presents a significant clinical challenge in patients ranging from neonates with long-gap esophageal atresia to adults after esophageal resection. Both gastric and colonic replacement conduits carry significant morbidity. As emerging organ-sparing techniques become established for early-stage esophageal tumors, less morbid reconstruction techniques are warranted. We present two novel endoscopic approaches for esophageal lengthening and reconstruction in a porcine model. Two models of esophageal defects were created in pigs (30-35 kg) under general anesthesia and subsequently reconstructed with the novel techniques. The first model was a segmental defect of the esophagus created by thoracoscopically transecting the esophagus above the gastroesophageal (GE) junction. The first reconstruction technique involved bilateral submucosal endoscopic lengthening myotomies (BSELM) with a magnetic compression anastomosis (MAGNAMOSIS™). The second model was a wedge defect in the anterior esophagus created above the GE junction through a laparotomy. The second reconstruction technique involved an inverted mucosal-submucosal sleeve transposition graft (IMSTG) that crossed the esophageal gap and was secured in place with a self-expandable covered esophageal stent. Both techniques were feasible in the pig model. The BSELM approach lengthened the esophagus 1 cm for every 2 cm length of myotomy. The myotomy targeted only the inner circular fibers of the esophagus, with preservation of the longitudinal layer to protect against long-term dilation and pouching. The IMSTG approach generated a vascularized mucosal graft almost as long as the esophagus itself. Emerging endoscopic capabilities are enabling complex endoluminal esophageal procedures. BSELM and IMSTG are two novel and technically feasible approaches to esophageal lengthening and reconstruction. Further survival studies are needed to establish the safety and efficacy of these techniques.

  6. Simulation of land mine detection processes using nuclear techniques

    International Nuclear Information System (INIS)

    Aziz, M.

    2005-01-01

    Computer models were designed to study the processes of land mine detection using nuclear techniques. Parameters that affect the detection were analyzed. Mines of different masses at different depths in the soil are considered using two types of sources, 252Cf and a 14 MeV neutron source. The capability to differentiate between mines and other objects such as concrete, iron, wood, aluminum, water and polyethylene was analyzed and studied

  7. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfit of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
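
    For readers unfamiliar with the method, a minimal sketch of an Extra-Trees streamflow fit might look as follows, using scikit-learn's ExtraTreesRegressor; the data and candidate input variables are synthetic placeholders, not the study's catchment data.

```python
# Minimal sketch: Extra-Trees for streamflow prediction plus input ranking.
# The data and the candidate input variables are synthetic placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.random((n, 4))           # e.g. rainfall, temperature, two lagged flows
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("held-out R^2:", model.score(X_te, y_te))
# Relative importance of the inputs, usable for ex-post physical interpretation
print("variable importances:", model.feature_importances_)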

  8. Entry into new markets: the development of the business model and dynamic capabilities

    Directory of Open Access Journals (Sweden)

    Victor Wolowski Kenski

    2017-12-01

    Full Text Available This work shows the path through which companies enter new markets or bring new propositions to established ones. It presents the market analysis process, the strategic decisions that determine the company’s position in the market, and the required changes in configuration for this new action. It also studies the process of selecting the business model, the conditions for its definition, and the adoption and subsequent development of the resources and capabilities required to conquer this new market. The conditions necessary to attain and maintain a market position are also presented. These concepts are illustrated through a case study of a business group that takes part in different franchises.

  9. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  11. Cloud-based Architecture Capabilities Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Vang, Leng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Prescott, Steven R [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    In a collaborative scientific research arena, it is important to have an environment where analysts have access to shared information, documents and software tools, and are able to accurately maintain and track historical changes in models. A new cloud-based environment would be accessible remotely from anywhere, regardless of computing platform, provided that the platform has Internet access and proper browser capabilities. Information stored in this environment would be restricted based on user-assigned credentials. This report reviews development of the Cloud-based Architecture Capabilities (CAC) web portal for PRA tools.

  12. Atmospheric release advisory capability

    International Nuclear Information System (INIS)

    Sullivan, T.J.

    1981-01-01

    The ARAC system (Atmospheric Release Advisory Capability) is described. The system is a collection of people, computers, computer models, topographic data and meteorological input data that together permit a quasi-predictive calculation of where effluent from an accident will migrate through the atmosphere, where it will be deposited on the ground, and what instantaneous and integrated dose an exposed individual would receive
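
    The ARAC models themselves incorporate terrain and real meteorology; purely as a toy illustration of the kind of dispersion calculation involved, a textbook Gaussian plume estimate of ground-level centerline concentration might be sketched as below. All parameter values are hypothetical, and this is not the ARAC model.

```python
# Toy Gaussian plume estimate of ground-level, centerline air concentration
# downwind of an elevated release. This is the generic textbook formula with
# ground reflection, not the ARAC model; all numbers are hypothetical.
import math

def ground_level_concentration(Q, u, sigma_y, sigma_z, H):
    """Q: release rate (Bq/s), u: wind speed (m/s), H: release height (m),
    sigma_y, sigma_z: plume widths (m) at the downwind distance of interest."""
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-H**2 / (2.0 * sigma_z**2))

# Hypothetical example: 1e9 Bq/s release, 5 m/s wind, 50 m stack,
# dispersion widths taken at roughly 1 km downwind.
print(ground_level_concentration(Q=1e9, u=5.0, sigma_y=80.0, sigma_z=40.0, H=50.0), "Bq/m^3")
```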

  13. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. It was employed here to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the hybrid ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
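
    The record gives no implementation details; the following sketch only illustrates the general shape of such a combination, with a small feed-forward network standing in for the ERNN (which scikit-learn does not provide) and BIC-style weights approximating Bayesian model averaging posteriors.

```python
# Sketch: Bayesian-model-averaging-style combination of two forecasters.
# A small MLP stands in for the Elman recurrent network (not available in
# scikit-learn); BIC-style weights are an approximation of true BMA posteriors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 300)
y = np.sin(4 * t) + 0.5 * t**2 + 0.1 * rng.standard_normal(t.size)
X = t.reshape(-1, 1)
X_tr, X_va, y_tr, y_va = X[:200], X[200:], y[:200], y[200:]

poly = PolynomialFeatures(degree=2)
qr = LinearRegression().fit(poly.fit_transform(X_tr), y_tr)     # quadratic regression
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                  random_state=0).fit(X_tr, y_tr)               # ERNN stand-in

preds = [qr.predict(poly.transform(X_va)), nn.predict(X_va)]
n_params = [3, sum(c.size for c in nn.coefs_) + sum(b.size for b in nn.intercepts_)]
n = y_va.size
bic = np.array([n * np.log(np.mean((y_va - p) ** 2)) + k * np.log(n)
                for p, k in zip(preds, n_params)])
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()                                                    # BMA-style weights
y_hybrid = w[0] * preds[0] + w[1] * preds[1]
print("weights:", w, "hybrid MSE:", np.mean((y_va - y_hybrid) ** 2))
```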

  14. A triple helix model of medical innovation: Supply, demand, and technological capabilities in terms of Medical Subject Headings

    NARCIS (Netherlands)

    Petersen, A.M.; Rotolo, D.; Leydesdorff, L.

    We develop a model of innovation that enables us to trace the interplay among three key dimensions of the innovation process: (i) demand for and (ii) supply of innovation, and (iii) technological capabilities available to generate innovation in the form of products, processes, and services.

  15. Energy absorption capabilities of composite sandwich panels under blast loads

    Science.gov (United States)

    Sankar Ray, Tirtha

    As blast threats on military and civilian structures continue to be a significant concern, there remains a need for improved design strategies to increase blast resistance capabilities. The approach to blast resistance proposed here is focused on dissipating the high levels of pressure induced during a blast through maximizing the potential for energy absorption of composite sandwich panels, which are a competitive structural member type due to the inherent energy absorption capabilities of fiber reinforced polymer (FRP) composites. Furthermore, the middle core in the sandwich panels can be designed as a sacrificial layer allowing for a significant amount of deformation or progressive failure to maximize the potential for energy absorption. The research here is aimed at the optimization of composite sandwich panels for blast mitigation via energy absorption mechanisms. The energy absorption mechanisms considered include absorbed strain energy due to inelastic deformation as well as energy dissipation through progressive failure of the core of the sandwich panels. The methods employed in the research consist of a combination of experimentally-validated finite element analysis (FEA) and the derivation and use of a simplified analytical model. The key components of the scope of work then include: establishment of quantified energy absorption criteria, validation of the selected FE modeling techniques, development of the simplified analytical model, investigation of influential core architectures and geometric parameters, and investigation of influential material properties. For the parameters that are identified as being most influential, recommended values are suggested in conceptual terms that are conducive to designing composite sandwich panels for various blast threats. Based on reviewing the energy response characteristic of the panel under blast loading, a non-dimensional parameter AET/ET (absorbed energy, AET, normalized by total energy

  16. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
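
    The paper's quasi-passive method is not reproduced here; as a rough illustration of the model-based identification idea, the resistive and inductive parts can be estimated by least squares from sampled voltage and current under the assumed model v = R·i + L·di/dt, with synthetic signals standing in for grid measurements.

```python
# Sketch: least-squares estimation of line resistance R and inductance L
# from sampled voltage and current, assuming the model v = R*i + L*di/dt.
# Signals are synthetic; the paper's quasi-passive method is not reproduced.
import numpy as np

fs = 10_000.0                      # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
R_true, L_true = 0.5, 2e-3
i = 10 * np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 550 * t)  # injected ripple
di = np.gradient(i, 1 / fs)        # numerical derivative of the current
v = R_true * i + L_true * di + 0.01 * np.random.default_rng(2).standard_normal(t.size)

A = np.column_stack([i, di])       # regressor matrix [i, di/dt]
(R_est, L_est), *_ = np.linalg.lstsq(A, v, rcond=None)
print(f"R ~ {R_est:.3f} ohm, L ~ {L_est * 1e3:.3f} mH")
```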

  17. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W

    2001-03-01

    For the development of radioactive material transport packages, verification of the structural safety of a package against the free drop impact accident should be carried out. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows for parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although the usage of LS-INGRID seems very difficult relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID has to be used with LS-DYNA. This report presents basic explanations of the structure and commands of LS-INGRID, basic modelling examples and advanced modelling, for use in the impact analysis of various packages. New users can easily build complex models by studying the basic examples presented in this report, from the modelling through to the loading and constraint conditions.

  18. A three-dimensional muscle activity imaging technique for assessing pelvic muscle function

    Science.gov (United States)

    Zhang, Yingchun; Wang, Dan; Timm, Gerald W.

    2010-11-01

    A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
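
    The inverse step of such a reconstruction is typically a weighted minimum norm estimate. A generic numpy version is sketched below; this is not the authors' implementation, and the lead-field matrix, weighting and regularization parameter are synthetic placeholders.

```python
# Generic weighted minimum norm estimation (WMNE), as used for EMG/EEG-style
# source reconstruction: estimate current density x from surface signals b = A x.
# A, the weighting W, and the regularization parameter are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources = 16, 200
A = rng.standard_normal((n_sensors, n_sources))     # lead-field / forward model
x_true = np.zeros(n_sources)
x_true[40:45] = 1.0                                 # small active muscle region
b = A @ x_true + 0.01 * rng.standard_normal(n_sensors)

w = np.linalg.norm(A, axis=0)                       # column-norm (depth) weighting
W_inv = np.diag(1.0 / w**2)
lam = 1e-2                                          # Tikhonov regularization
# Closed form: x_hat = W^-1 A^T (A W^-1 A^T + lam I)^-1 b
G = A @ W_inv @ A.T + lam * np.eye(n_sensors)
x_hat = W_inv @ A.T @ np.linalg.solve(G, b)
print("peak of reconstruction at source index:", int(np.argmax(np.abs(x_hat))))
```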

  19. National power grid simulation capability : need and issues

    Energy Technology Data Exchange (ETDEWEB)

    Petri, Mark C. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2009-06-02

    On December 9 and 10, 2008, the Department of Homeland Security (DHS) Science and Technology Directorate sponsored a national workshop at Argonne National Laboratory to explore the need for a comprehensive modeling and simulation capability for the national electric power grid system. The workshop brought together leading electric power grid experts from federal agencies, the national laboratories, and academia to discuss the current state of power grid science and engineering and to assess if important challenges are being met. The workshop helped delineate gaps between grid needs and current capabilities and identify issues that must be addressed if a solution is to be implemented. This report is a result of the workshop and highlights power grid modeling and simulation needs, the barriers that must be overcome to address them, and the benefits of a national power grid simulation capability.

  20. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  1. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

    Various techniques, such as SWOT analysis, SWOC analysis and PEST analysis, are used to analyze individual characteristics or organizational effectiveness. These techniques provide an easy and systematic way of identifying the various issues affecting a system and provide an opportunity for further development. Whereas they provide a broad-based assessment of individual institutions and systems, they suffer limitations when applied to a business context. The success of any business model depends on ...

  2. Exploration Medical Capability (ExMC) Projects

    Science.gov (United States)

    Wu, Jimmy; Watkins, Sharmila; Baumann, David

    2010-01-01

    During missions to the Moon or Mars, the crew will need medical capabilities to diagnose and treat disease as well as to maintain their health. The Exploration Medical Capability Element develops medical technologies, medical informatics, and clinical capabilities for different levels of care during space missions. The work done by team members in this Element includes leading-edge technology, procedure, and pharmacological development. They develop data systems that protect patients' private medical information, aid in the diagnosis of medical conditions, and act as a repository of relevant NASA life sciences experimental studies. To minimize the medical risks to crew health, the physicians and scientists in this Element develop models to quantify the probability of medical events occurring during a mission. They define procedures to treat an ill or injured crew member who does not have access to an emergency room and who must be cared for in a microgravity environment where both liquids and solids behave differently than on Earth. To support the development of these medical capabilities, the Element manages the development of medical technologies that prevent, monitor, diagnose, and treat an ill or injured crewmember. The Exploration Medical Capability Element collaborates with the National Space Biomedical Research Institute (NSBRI), the Department of Defense, other Government-funded agencies, academic institutions, and industry.

  3. Relationships between structural social capital, knowledge identification capability and external knowledge acquisition

    Directory of Open Access Journals (Sweden)

    Beatriz Ortiz

    2017-07-01

    Full Text Available Purpose - The purpose of this paper is to analyze the mediating effect of the identification of valuable external knowledge on the relationship between the development of inter-organizational ties (structural social capital and the acquisition of external knowledge. Design/methodology/approach - Using a sample of 87 firms from Spanish biotechnology and pharmaceutics industries, the authors have tested the proposed mediation hypothesis by applying the partial least squares technique to a structural equations model. Findings - The study results show that those firms with stronger, more frequent and closer inter-relationships are able to increase the amount of intentionally acquired knowledge, partly due to the greater level of development of their knowledge identification capability. Thus, firms with a higher capability to recognize the value of the knowledge embedded in their inter-organizational networks will be more likely to design better strategies to acquire and integrate such knowledge into their current knowledge bases for either present or future use. Originality/value - This research contributes to knowledge management and social capital literature by means of the study of two key determinants of knowledge acquisition – structural social capital and knowledge identification capability – and the explanation of their relationships of mutual influence. The paper thus tries to fill this literature gap and connects the relational perspective of social capital with the knowledge-based view from a strategic point of view.

  4. Dynamic capabilities and innovation capabilities: The case of the ‘Innovation Clinic’

    Directory of Open Access Journals (Sweden)

    Fred Strønen

    2017-01-01

    Full Text Available In this explorative study, we investigate the relationship between dynamic capabilities and innovation capabilities. Dynamic capabilities are at the core of strategic management in terms of how firms can ensure adaptation to changing environments over time. Our paper follows two paths of argumentation. First, we review and discuss some major contributions to the theories on ordinary capabilities, dynamic capabilities, and innovation capabilities. We seek to identify different understandings of the concepts in question, in order to clarify the distinctions and relationships between dynamic capabilities and innovation capabilities. Second, we present a case study of the ’Innovation Clinic’ at a major university hospital, including four innovation projects. We use this case study to explore and discuss how dynamic capabilities can be extended, as well as to what extent innovation capabilities can be said to be dynamic. In our conclusion, we discuss the conditions for nurturing ‘dynamic innovation capabilities’ in organizations.

  5. RFCM Techniques Chamber Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Provides the capability to develop radio-frequency countermeasure (RFCM) techniques in a controlled environment from 2.0 to 40.0 GHz. The configuration of...

  6. Advanced simulation capability for environmental management - current status and future applications

    Energy Technology Data Exchange (ETDEWEB)

    Freshley, Mark; Scheibe, Timothy [Pacific Northwest National Laboratory, Richland, Washington (United States); Robinson, Bruce; Moulton, J. David; Dixon, Paul [Los Alamos National Laboratory, Los Alamos, New Mexico (United States); Marble, Justin; Gerdes, Kurt [U.S. Department of Energy, Office of Environmental Management, Washington DC (United States); Stockton, Tom [Neptune and Company, Inc, Los Alamos, New Mexico (United States); Seitz, Roger [Savannah River National Laboratory, Aiken, South Carolina (United States); Black, Paul [Neptune and Company, Inc, Lakewood, Colorado (United States)

    2013-07-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater (EM-12), is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach that is currently aimed at understanding and predicting contaminant fate and transport in natural and engineered systems. ASCEM is a modular and open source high-performance computing tool. It will be used to facilitate integrated approaches to modeling and site characterization, and provide robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of capabilities, with current emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and the High-Performance Computing (HPC) multi-process simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with the Platform, and verification and model confidence testing. The integration of the Platform and HPC capabilities was tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities in 2012. The current maturity of the ASCEM computational and analysis capabilities has afforded the opportunity for collaborative efforts to develop decision analysis tools to support and optimize radioactive waste disposal. Recent advances in computerized decision analysis frameworks provide the perfect opportunity to bring this capability into ASCEM. This will allow radioactive waste

  7. Advanced Simulation Capability for Environmental Management - Current Status and Phase II Demonstration Results - 13161

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, Roger R.; Flach, Greg [Savannah River National Laboratory, Savannah River Site, Bldg 773-43A, Aiken, SC 29808 (United States); Freshley, Mark D.; Freedman, Vicky; Gorton, Ian [Pacific Northwest National Laboratory, MSIN K9-33, P.O. Box 999, Richland, WA 99352 (United States); Dixon, Paul; Moulton, J. David [Los Alamos National Laboratory, MS B284, P.O. Box 1663, Los Alamos, NM 87544 (United States); Hubbard, Susan S.; Faybishenko, Boris; Steefel, Carl I.; Finsterle, Stefan [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, MS 50B-4230, Berkeley, CA 94720 (United States); Marble, Justin [Department of Energy, 19901 Germantown Road, Germantown, MD 20874-1290 (United States)

    2013-07-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and High-Performance Computing (HPC) Multi-process Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial tool-sets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  9. Advanced digital modulation: Communication techniques and monolithic GaAs technology

    Science.gov (United States)

    Wilson, S. G.; Oliver, J. D., Jr.; Kot, R. C.; Richards, C. R.

    1983-01-01

    Communications theory and practice are merged with state-of-the-art technology in IC fabrication, especially monolithic GaAs technology, to examine the general feasibility of a number of advanced technology digital transmission systems. Satellite-channel models with (1) superior throughput, perhaps 2 Gbps; (2) attractive weight and cost; and (3) high RF power and spectrum efficiency are discussed. Transmission techniques possessing reasonably simple architectures capable of monolithic fabrication at high speeds were surveyed. This included a review of amplitude/phase shift keying (APSK) techniques and the continuous-phase-modulation (CPM) methods, of which MSK represents the simplest case.

  10. Capability ethics

    OpenAIRE

    Robeyns, Ingrid

    2012-01-01

    The capability approach is one of the most recent additions to the landscape of normative theories in ethics and political philosophy. Yet in its present stage of development, the capability approach is not a full-blown normative theory, in contrast to utilitarianism, deontological theories, virtue ethics, or pragmatism. As I will argue in this chapter, at present the core of the capability approach is an account of value, which together with some other (more minor) normative comm...

  11. Does organizational agility affect organizational learning capability? Evidence from commercial banking

    OpenAIRE

    Zaina Mustafa Mahmoud Hamad; Uğur Yozgat

    2017-01-01

    Both organizational agility and learning capability are prerequisites for organizational survival and success. This study explores the contribution of agility practices to organizational learning capabilities at the commercial banks in Jordan. To examine the proposed model, a sample of 158 employees within top and middle management was used. Structural Equation Modeling was conducted for assessing validity and reliability of the measurement instrument, evaluating model fit, and testing hypothese...

  12. Introducing a New Capability at SSRL: Resonant Soft X-ray Scattering

    Science.gov (United States)

    Lee, Jun-Sik; Jang, Hoyoung; Lu, Donghui; Kao, Chi-Chang

    Stanford Synchrotron Radiation Lightsource (SSRL) at SLAC recently developed a setup for resonant soft x-ray scattering (RSXS). In general, the RSXS technique uniquely probes not only structural information but also chemically specific information, because it can explore the spatial periodicities of charge, orbital, spin, and lattice with a spectroscopic aspect. Moreover, the soft x-ray range is particularly relevant for the study of soft materials, as it covers the K-edges of C, N, F, and O, as well as the L-edges of transition metals and the M-edges of rare-earth elements. Hence, the RSXS capability has been regarded as a very powerful technique for investigating the intrinsic properties of materials such as quantum and energy materials. The RSXS capability at SSRL comprises an in-vacuum 4-circle diffractometer with fully motorized sample-motion manipulation, and the sample can be cooled down to 25 K with liquid helium. The capability has been installed at BL 13-3, where the photon source is an elliptically polarized undulator (EPU) covering photon energies from 230 eV to 1400 eV. Furthermore, the EPU system offers additional degrees of freedom for controlling the x-ray polarization (linear and circular). Taking advantage of this polarization control, we can also investigate morphology effects of local domains/grains in materials. A detailed introduction to the RSXS end-station and several results will be presented in this poster.

  13. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

    Full text: At sub-nuclear densities of about 10^14 g/cm^3, nuclear matter arranges itself in a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semi-classical framework. The dynamical evolution of an antisymmetric ground state is described under the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model range from the investigation of macroscopic observables and the equation of state to the study of the influence of fundamental interactions on the microscopic structure of the matter. (author)

  14. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  16. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable in certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but run in much less time (microseconds instead of hours/days).
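
    As a minimal illustration of the surrogate idea (not the RISMC toolchain itself), one can train a cheap regressor on a handful of expensive simulator runs and then query the regressor instead of the code; the `expensive_simulation` function below is a hypothetical stand-in for a long-running thermal-hydraulics calculation.

```python
# Sketch: a Gaussian-process surrogate replacing an expensive simulation code.
# `expensive_simulation` is a hypothetical stand-in for an hours-long code run.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # Placeholder physics: in practice each call would be a full simulator run.
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)   # a few real simulator runs
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)

# Thousands of surrogate queries now cost microseconds each instead of hours,
# and the GP also reports its own predictive uncertainty.
X_query = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)
print(np.c_[X_query.ravel(), mean, std])
```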

  17. Analysis and study on core power capability with margin method

    International Nuclear Information System (INIS)

    Liu Tongxian; Wu Lei; Yu Yingrui; Zhou Jinman

    2015-01-01

    Core power capability analysis focuses on the power distribution control of the reactor within a given mode of operation, for the purpose of defining the allowed normal operating space so that Condition I maneuvering flexibility is maintained and Condition II occurrences are adequately protected by the reactor protection system. Traditional core power capability analysis methods, such as the synthesis method or the advanced three-dimensional method, usually calculate the key safety parameters of the power distribution and then verify that these parameters meet the design criteria. For a PWR with an on-line power distribution monitoring system, core power capability analysis instead calculates the maximum power level that just meets the design criteria. On the basis of the Westinghouse 3D FAC method, the calculation model of core power capability analysis with the margin method is introduced to provide a reference for engineers. The core power capability analysis of a specific burnup of the Sanmen NPP is performed with the margin method, and the results demonstrate the rationality of the method. The calculation model of the margin method not only helps engineers to master core power capability analysis for the AP1000, but also provides a reference for core power capability analysis of other PWRs with on-line power distribution monitoring systems. (authors)
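
    Schematically, the margin method amounts to a search for the highest power level at which all design criteria are still satisfied. A toy bisection sketch follows, with a hypothetical `criteria_met` placeholder standing in for the full 3D power-distribution evaluation; it is not the 3D FAC algorithm.

```python
# Schematic of the margin method: find the maximum power level (in % of
# nominal) that just satisfies the design criteria. `criteria_met` is a
# hypothetical placeholder for the full 3D power-distribution evaluation.
def criteria_met(power_percent: float) -> bool:
    return power_percent <= 112.4      # stand-in for the real limit checks

def max_permissible_power(lo=0.0, hi=200.0, tol=0.1):
    while hi - lo > tol:               # bisection on the power level
        mid = 0.5 * (lo + hi)
        if criteria_met(mid):
            lo = mid                   # criteria met: margin remains, go higher
        else:
            hi = mid                   # criteria violated: back off
    return lo

print(f"core power capability ~ {max_permissible_power():.1f} % of nominal")
```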

  18. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    International Nuclear Information System (INIS)

    Shadid, J.N.; Smith, T.M.; Cyr, E.C.; Wildey, T.M.; Pawlowski, R.P.

    2016-01-01

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling, is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  20. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

    Full Text Available In this work, we present a highly accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray) of a distillation column are considered as boundary conditions of the corresponding difference-differential equation system. The chosen number of moments, which is the dimension of the reduced model, is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding the accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamical simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.
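
    The paper develops a discrete analogue of Gauss-Lobatto quadrature; for reference, the classical continuous Gauss-Lobatto nodes and weights on [-1, 1] can be computed as below. This is the standard textbook rule, not the authors' discrete variant.

```python
# Classical Gauss-Lobatto nodes/weights on [-1, 1]: nodes are the endpoints
# plus the roots of P'_{n-1}; weights are 2 / (n (n-1) P_{n-1}(x_i)^2).
# Shown for reference only; the paper uses a discrete analogue for stages.
import numpy as np
from numpy.polynomial.legendre import Legendre

def gauss_lobatto(n):
    """Return the n-point Gauss-Lobatto nodes and weights on [-1, 1]."""
    P = Legendre.basis(n - 1)                   # Legendre polynomial P_{n-1}
    nodes = np.concatenate(([-1.0], np.sort(P.deriv().roots().real), [1.0]))
    weights = 2.0 / (n * (n - 1) * P(nodes) ** 2)
    return nodes, weights

x, w = gauss_lobatto(5)
print(x)                 # symmetric nodes including both endpoints
print(w, w.sum())        # weights sum to the interval length, 2
```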

  1. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques we developed for constructing discrimination-free classifiers. In discrimination-free classification, the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be
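
    One representative technique from this line of work is reweighing: training instances are weighted so that the sensitive attribute and the class label become statistically independent before a standard learner is trained. A minimal sketch on synthetic data follows; the exact scheme in the chapter may differ.

```python
# Sketch of the "reweighing" idea for discrimination-free classification:
# weight each instance by P(group) * P(label) / P(group, label), so that the
# sensitive attribute and the class label are independent under the weighted
# distribution. The data here are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({"group": rng.integers(0, 2, 1000)})        # sensitive attribute
df["label"] = (rng.random(1000) < 0.3 + 0.4 * df["group"]).astype(int)

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.value_counts(["group", "label"], normalize=True)

df["weight"] = [
    p_group[g] * p_label[c] / p_joint[(g, c)]
    for g, c in zip(df["group"], df["label"])
]
# These weights can be passed to most learners, e.g. fit(X, y, sample_weight=...).
print(df.groupby(["group", "label"])["weight"].first())
```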

  2. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    This study shows that chemometric techniques and modelling are an excellent tool for API assessment and for air pollution source identification and apportionment, and can assist in designing an API monitoring network for effective air pollution resources management. Keywords: air pollutant index; chemometric; ANN; ...

  3. Gossiping Capabilities

    DEFF Research Database (Denmark)

    Mogensen, Martin; Frey, Davide; Guerraoui, Rachid

    Gossip-based protocols are now acknowledged as a sound basis to implement collaborative high-bandwidth content dissemination: content location is disseminated through gossip, the actual contents being subsequently pulled. In this paper, we present HEAP, the HEterogeneity-Aware gossip Protocol, where...... nodes dynamically adjust their contribution to gossip dissemination according to their capabilities. Using a continuous, itself gossip-based, approximation of relative capabilities, HEAP dynamically leverages the most capable nodes by (a) increasing their fanouts (while decreasing by the same proportion...... declare a high capability in order to augment their perceived quality without contributing accordingly. We evaluate HEAP in the context of a video streaming application on a 236-node PlanetLab testbed. Our results show that HEAP improves the quality of the streaming by 25% over a standard gossip...
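
    The core idea can be caricatured in a few lines: each node's fanout is scaled by its capability relative to the (gossip-estimated) average, leaving the aggregate message load roughly unchanged. The values below are invented for illustration and this is only a toy rendering of the protocol, not HEAP itself.

```python
# Toy illustration of capability-proportional fanout, the core HEAP idea:
# each node's fanout is scaled by its bandwidth relative to the average,
# keeping the total number of gossip messages roughly unchanged.
base_fanout = 8
capabilities = {"a": 10.0, "b": 1.0, "c": 4.0, "d": 1.0}   # e.g. Mbit/s up-link

avg = sum(capabilities.values()) / len(capabilities)
fanouts = {n: max(1, round(base_fanout * c / avg)) for n, c in capabilities.items()}

print(fanouts)                          # capable nodes forward to more peers
print(sum(fanouts.values()), "~=", base_fanout * len(capabilities))
```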

  4. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  5. SuperMAG: Present and Future Capabilities

    Science.gov (United States)

    Hsieh, S. W.; Gjerloev, J. W.; Barnes, R. J.

    2009-12-01

    SuperMAG is a global collaboration that provides ground magnetic field perturbations from a long list of stations in the same coordinate system, identical time resolution and with a common baseline removal approach. This unique high quality dataset provides a continuous and nearly global monitoring of the ground magnetic field perturbation. Currently, only archived data are available on the website and hence it targets basic research without any operational capabilities. The existing SuperMAG software can be easily adapted to ingest real-time or near real-time data and provide a now-casting capability. The SuperDARN program has a long history of providing near real-time maps of the northern hemisphere electrostatic potential and as both SuperMAG and SuperDARN share common software it is relatively easy to adapt these maps for global magnetic perturbations. Magnetometer measurements would be assimilated by the SuperMAG server using a variety of techniques, either by downloading data at regular intervals from remote servers or by real-time streaming connections. The existing SuperMAG analysis software would then process these measurements to provide the final calibrated data set using the SuperMAG coordinate system. The existing plotting software would then be used to produce regularly updated global plots. The talk will focus on current SuperMAG capabilities illustrating the potential for now-casting and eventually forecasting.

  6. Advanced Simulation Capability for Environmental Management (ASCEM) Phase II Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Freshley, M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hubbard, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flach, G. [Savannah River National Lab. (SRNL), Aiken, SC (United States); Freedman, V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Agarwal, D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, B. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bott, Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, X. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Davis, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Faybishenko, B. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gorton, I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Murray, C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Moulton, D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meyer, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rockhold, M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Shoshani, A. [LBNL; Steefel, C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wainwright, H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Waichler, S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2012-09-28

    In 2009, the National Academies of Science (NAS) reviewed and validated the U.S. Department of Energy Office of Environmental Management (EM) Technology Program in its publication, Advice on the Department of Energy’s Cleanup Technology Roadmap: Gaps and Bridges. The NAS report outlined prioritization needs for the Groundwater and Soil Remediation Roadmap, concluded that contaminant behavior in the subsurface is poorly understood, and recommended further research in this area as a high priority. To address this NAS concern, the EM Office of Site Restoration began supporting the development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific approach that uses an integration of toolsets for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM modeling toolset is modular and open source. It is divided into three thrust areas: Multi-Process High Performance Computing (HPC), Platform and Integrated Toolsets, and Site Applications. The ASCEM toolsets will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. During fiscal year 2012, the ASCEM project continued to make significant progress in capabilities development. Capability development occurred in both the Platform and Integrated Toolsets and Multi-Process HPC Simulator areas. The new Platform and Integrated Toolsets capabilities provide the user an interface and the tools necessary for end-to-end model development that includes conceptual model definition, data management for model input, model calibration and uncertainty analysis, and model output processing including visualization. The new HPC Simulator capabilities target increased functionality of process model representations, toolsets for interaction with the Platform, and model confidence testing and verification for

  7. An emotion-based view of acquisition integration capability

    NARCIS (Netherlands)

    Q.N. Huy (Quy N.); T.H. Reus (Taco)

    2011-01-01

    We propose an emotion-based view of acquisition integration capability by developing an inter-firm model that focuses on dealing constructively with emotions during various organizational identification processes following mergers and acquisitions. The model describes diverse types

  8. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  9. “Coupled processes” as dynamic capabilities in systems integration

    Directory of Open Access Journals (Sweden)

    Milton de Freitas Chagas Jr.

    2017-05-01

    Full Text Available The dynamics of innovation in complex systems industries is becoming an independent research stream. Apart from conventional uncertainties related to commerce and technology, complex-system industries must cope with systemic uncertainty. This paper’s objective is to analyze evolving technological paths from one product generation to the next through two case studies in the Brazilian aerospace industry, considering systems integration as an empirical instantiation of dynamic capabilities. A proposed “coupled processes” model intertwines two organizational processes regarded as two levels of dynamic capabilities: new product and technological developments. The model addresses the role of emergent properties in shaping a firm’s technological base. Moreover, it uses a technology readiness level to unveil systems integration business tricks and as a decision-making yardstick. The “coupled processes” model is revealed as a set of dynamic capabilities presenting ambidexterity in complex systems industries, a finding that may be relevant for newly industrialized economies.

  10. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
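
    The separable idea generalizes beyond PET: whenever a model is linear in some parameters once the nonlinear ones are fixed, the inner linear fit can be solved exactly, shrinking the outer nonlinear search. Below is a generic variable-projection sketch for a two-exponential model; it is not the authors' PET code, and the model and data are synthetic.

```python
# Generic separable (variable projection) least squares: fit
# y(t) = a1*exp(-k1*t) + a2*exp(-k2*t), searching only over the rates (k1, k2);
# the amplitudes (a1, a2) are recovered by an exact linear solve at each step.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 200)
rng = np.random.default_rng(5)
y = 3 * np.exp(-0.4 * t) + 1 * np.exp(-2.5 * t) + 0.02 * rng.standard_normal(t.size)

def residual_norm(ks):
    ks = np.abs(ks)                              # keep decay rates positive
    B = np.exp(-np.outer(t, ks))                 # basis for the linear part
    a, *_ = np.linalg.lstsq(B, y, rcond=None)    # exact inner linear fit
    return np.sum((y - B @ a) ** 2)

res = minimize(residual_norm, x0=[0.1, 1.0], method="Nelder-Mead")
k_fit = np.abs(res.x)
B = np.exp(-np.outer(t, k_fit))
a_fit, *_ = np.linalg.lstsq(B, y, rcond=None)
print("rates:", k_fit, "amplitudes:", a_fit)
```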

  11. Steady-state capabilities for hydroturbines with OpenFOAM

    Science.gov (United States)

    Page, M.; Beaudoin, M.; Giroux, A. M.

    2010-08-01

    The availability of a high quality Open Source CFD simulation platform like OpenFOAM offers new R&D opportunities by providing direct access to models and solver implementation details. Efforts have been made by Hydro-Québec to adapt OpenFOAM to hydroturbines for the development of steady-state capabilities. The paper describes the developments that have been made to implement new turbomachinery related capabilities: Multiple Frame of Reference solver, domain coupling interfaces (GGI, cyclicGGI and mixing plane) and specialized boundary conditions. Practical use of the new turbomachinery capabilities is demonstrated for the analysis of a 195-MW Francis hydroturbine.

  12. Steady-state capabilities for hydroturbines with OpenFOAM

    Energy Technology Data Exchange (ETDEWEB)

    Page, M; Beaudoin, M; Giroux, A M, E-mail: page.maryse@ireq.c [Hydro-Quebec, Institut de recherche 1800 Lionel-Boulet, Varennes, Quebec J3X 1S1 (Canada)

    2010-08-15

    The availability of a high quality Open Source CFD simulation platform like OpenFOAM offers new R and D opportunities by providing direct access to models and solver implementation details. Efforts have been made by Hydro-Quebec to adapt OpenFOAM to hydroturbines for the development of steady-state capabilities. The paper describes the developments that have been made to implement new turbomachinery related capabilities: Multiple Frame of Reference solver, domain coupling interfaces (GGI, cyclicGGI and mixing plane) and specialized boundary conditions. Practical use of the new turbomachinery capabilities is demonstrated for the analysis of a 195-MW Francis hydroturbine.

  14. The enhancements and testing for the MCNPX depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; Hendricks, J. S.; Anghaie, S.

    2008-01-01

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model true system physics and better track the evolution of temporal nuclide inventory by simulating the actual physical process. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte Carlo-linked depletion capability in a single Monte Carlo code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. We describe here the depletion methodology dating from the original linking of MONTEBURNS and MCNP to the first public release of the integrated capability (MCNPX 2.6.B, June 2006) that has been reported previously. Then we further detail the many new depletion capability enhancements since then leading to the present capability. The H.B. Robinson benchmark calculation results are also reported. The new MCNPX depletion capability enhancements include: (1) allowing the modeling of as large a system as computer memory capacity permits; (2) tracking every fission product available in ENDF/B-VII.0; (3) enabling depletion in repeated structures geometries such as repeated arrays of fuel pins; (4) including metastable isotopes in burnup; and (5) manually changing the concentrations of key isotopes during different time steps to simulate changing reactor control conditions such as dilution of poisons to maintain criticality during burnup. These enhancements allow better detail to model the true system physics and also improve the robustness of the capability. The H.B. Robinson benchmark calculation was completed in order to determine the accuracy of the depletion solution. Temporal nuclide computations of key actinide and fission products are compared to the results of other

  15. Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling

    Science.gov (United States)

    Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.

    The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.

  16. Assessment of surface solar irradiance derived from real-time modelling techniques and verification with ground-based measurements

    Science.gov (United States)

    Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.

    2018-02-01

    This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR in high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system aims to be utilized through studies and real-time applications which are related to solar energy production planning and use.
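
    The multi-regression-function idea above can be sketched in a few lines: a fast regression is trained on an RTM-style dataset linking GHI to cloud and aerosol inputs, and is then cheap enough for instantaneous operational evaluation. All data, features and coefficients below are invented for illustration; the operational system uses libRadtran-generated training data and SEVIRI/CAMS inputs.

```python
import numpy as np

# Synthetic stand-in for the RTM-generated training set: GHI as a function
# of cloud optical thickness (COT), aerosol optical depth (AOD) and the
# cosine of the solar zenith angle. All numbers are illustrative only.
rng = np.random.default_rng(1)
n = 5000
cot = rng.uniform(0.0, 30.0, n)
aod = rng.uniform(0.0, 1.5, n)
mu0 = np.cos(np.radians(rng.uniform(0.0, 75.0, n)))
ghi = 1100.0 * mu0 * np.exp(-0.08 * cot - 0.25 * aod) + rng.normal(0.0, 15.0, n)

# Multi-regression function: ordinary least squares on a few nonlinear
# features; once fitted, evaluation is effectively instantaneous.
X = np.column_stack([mu0, mu0 * cot, mu0 * aod,
                     mu0 * np.exp(-0.08 * cot), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, ghi, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - ghi) ** 2))
print(f"training RMSE ~ {rmse:.1f} W m^-2")
```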

  17. Internationalization Process and Technological Capability Trajectory of Iguaçu

    Directory of Open Access Journals (Sweden)

    Rafael Kuramoto Gonzalez

    2012-07-01

    Full Text Available This article focuses on the influence of the internationalization process on the evolution of technological capability. This implication was studied at Iguaçu between 1967 and 2009. To achieve the proposed goal, the Internationalization of Brazilian Export Producer Companies Model, built by Kraus (2006), and the Model of Technological Capabilities in Companies of Emerging Economies, built by Figueiredo (2004), were used. The study found that different stages of internationalization require different functions and different levels of technology. The discussion proposed by this paper found a close association between the process of internationalization and the development of technological capability in the company studied. It can be concluded that for soluble coffee companies to conquer, reach and remain competitive in international markets, they should engage in efforts to build diverse organizational skills, alliances and technological capabilities.

  18. Acquisition Modernization: Transitioning Technology Into Warfighter Capability

    Science.gov (United States)

    2011-08-01

    to test and evaluate the technology and integrate the new capability into operational weapon systems (Figure 4). This funding model creates stove...misalignment between missions, TRLs, and the RDT&E funding model is a major contributor to the valley of death. Technologies become obsolete on... funding model of the acquisition system. Create an individual budget account to fund the development of promising technologies. The Acquisition

  19. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
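
    A schematic of the additive flux minimization idea, reduced to a 1D steady diffusion problem: the profile is recomputed while an additional diffusivity is varied until the computed and "experimental" profiles match. This toy sketch stands in for the FACETS::Core/DAKOTA workflow described above; the geometry, source and diffusivities are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Schematic additive flux minimization: the 1D steady diffusion equation
#   (D_model + D_add) * n''(x) = -S
# is solved repeatedly while the additional diffusivity D_add is varied
# until the computed profile matches the "experimental" one.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
S = np.ones_like(x)                  # uniform particle source (illustrative)
D_model = 0.5                        # transport model's diffusivity
n_exp = (1.0 - x**2) / (2 * 0.8)     # profile consistent with D_true = 0.8

def solve_profile(D_total):
    """Finite-difference solve of D_total * n'' = -S with n'(0)=0, n(1)=0."""
    N = x.size
    A = np.zeros((N, N))
    b = -S * dx**2 / D_total
    for i in range(1, N - 1):
        A[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
    A[0, :2] = [-1.0, 1.0]; b[0] = 0.0       # zero gradient at the core
    A[-1, -1] = 1.0; b[-1] = 0.0             # fixed edge value
    return np.linalg.solve(A, b)

def mismatch(D_add):
    return np.sum((solve_profile(D_model + D_add) - n_exp) ** 2)

res = minimize_scalar(mismatch, bounds=(0.0, 2.0), method="bounded")
print(f"recovered additional diffusivity ~ {res.x:.3f} (true value: 0.3)")
```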

  1. THEORY OF REASONED ACTION FOR CONTINUOUS IMPROVEMENT CAPABILITIES: A BEHAVIORAL APPROACH

    Directory of Open Access Journals (Sweden)

    Janaina Siegler

    2012-09-01

    Full Text Available The importance of the interaction between Operations Management (OM) and Human Behavior has recently been re-addressed. This paper introduced the Reasoned Action Theory suggested by Froehle and Roth (2004) to analyze operational capabilities, exploring the suitability of this model in the context of OM. It also seeks to discuss the behavioral aspects of operational capabilities from the perspective of organizational routines. This theory was operationalized using the Fishbein and Ajzen (F/A) behavioral model, and a multi-case strategy was employed to analyze the Continuous Improvement (CI) capability. The results posit that the model partially explains the CI behavior in an operational context and that some contingency variables might influence the general relations among the variables involved in the F/A model. Thus intention might not be the determinant variable of behavior in this context.

  2. Capabilities, performance, and future possibilities of high frequency polyphase resonant converters

    International Nuclear Information System (INIS)

    Reass, W.A.; Baca, D.M.; Bradley, J.T. III; Hardek, T.W.; Kwon, S.I.; Lynch, M.T.; Rees, D.E.

    2004-01-01

    High Frequency Polyphase Resonant Power Conditioning (PRPC) techniques developed at Los Alamos National Laboratory (LANL) are now being utilized for the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS) accelerator klystron RF amplifier power systems. Three different styles of polyphase resonant converter modulators were developed for the SNS application. The various systems operate up to 140 kV, or 11 MW pulses, or up to 1.1 MW average power, all from a DC input of +/- 1.2 kV. Component improvements realized with the SNS effort coupled with new applied engineering techniques have resulted in dramatic changes in RF power conditioning topology. As an example, the high-voltage transformers are over 100 times smaller and lighter than equivalent 60 Hz versions. With resonant conversion techniques, load protective networks are not required. A shorted load de-tunes the resonance and little power transfer can occur. This provides for power conditioning systems that are inherently self-protective, with automatic fault 'ride-through' capabilities. By altering the Los Alamos design, higher power and CW power conditioning systems can be realized without further demands on the individual component voltage or current capabilities. This has led to designs that can accommodate 30 MW long pulse applications and megawatt class CW systems with high efficiencies. The same PRPC techniques can also be utilized for lower average power systems (∼250 kW). This permits the use of significantly higher frequency conversion techniques that result in extremely compact systems with short pulse (10 to 100 µs) capabilities. These lower power PRPC systems may be suitable for medical Linacs and mobile RF systems. This paper will briefly review the performance achieved for the SNS accelerator and examine designs for high efficiency megawatt class CW systems and 30 MW peak power applications. The devices and designs for compact higher frequency converters utilized for short pulse

  3. Lithium-Ion Battery Power Degradation Modelling by Electrochemical Impedance Spectroscopy

    DEFF Research Database (Denmark)

    Stroe, Daniel-Ioan; Swierczynski, Maciej Jozef; Stroe, Ana-Irina

    2017-01-01

    This paper investigates the use of the electrochemical impedance spectroscopy (EIS) technique as an alternative to the DC pulses technique for estimating the power capability decrease of Lithium-ion batteries during calendar ageing. Based on results obtained from calendar ageing tests performed...... at different conditions during one to two years, a generalized model that estimates the battery power capability decrease as a function of the resistance Rs increase (obtained from EIS) was proposed and successfully verified....
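
    The mapping from an EIS-derived series resistance Rs to power capability can be illustrated with the pulse-power relation P_max = V_min(V_oc - V_min)/Rs, under which a relative Rs increase translates directly into a relative power decrease. The cell parameters and Rs growth figures below are assumptions for illustration, not the paper's measured ageing data.

```python
# Illustrative power-capability estimate from the ohmic resistance Rs
# obtained by EIS: for a discharge pulse limited by a minimum voltage,
#   P_max = V_min * (V_oc - V_min) / Rs,
# so the relative power decrease tracks the relative Rs increase.
def power_capability(v_oc, v_min, r_s):
    return v_min * (v_oc - v_min) / r_s

v_oc, v_min = 3.7, 3.0                     # volts (illustrative cell)
rs_begin = 0.025                           # ohm, fresh cell
for months, growth in [(0, 0.0), (12, 0.12), (24, 0.27)]:  # assumed Rs growth
    rs = rs_begin * (1 + growth)
    p = power_capability(v_oc, v_min, rs)
    print(f"{months:>2} months: Rs = {rs*1e3:.1f} mOhm, P_max = {p:.0f} W "
          f"({100 * growth / (1 + growth):.0f}% below fresh)")
```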

  4. Rights, goals, and capabilities

    NARCIS (Netherlands)

    van Hees, M.V.B.P.M

    This article analyses the relationship between rights and capabilities in order to get a better grasp of the kind of consequentialism that the capability theory represents. Capability rights have been defined as rights that have a capability as their object (rights to capabilities). Such a

  5. On the Conditioning of Machine-Learning-Assisted Turbulence Modeling

    Science.gov (United States)

    Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng

    2017-11-01

    Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS modeled Reynolds stress by training on available databases of high fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction performance of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating improved prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive turbulence models in real applications.
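
    The notion of model conditioning can be illustrated generically: if the propagated mean velocity depends (locally) linearly on the modeled Reynolds stress, u = A·τ, then the condition number of A bounds how strongly Reynolds-stress errors are amplified into velocity errors. The operator below is a toy stand-in, not the condition number defined in the paper.

```python
import numpy as np

# Toy view of model conditioning: if the propagated mean velocity responds
# (locally) linearly to the modeled Reynolds stress, u = A @ tau, then
# cond(A) bounds the amplification of Reynolds-stress errors into velocity
# errors. A is a stand-in operator, not the paper's actual definition.
rng = np.random.default_rng(2)
n = 50
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1))
A = np.linalg.inv(lap)                      # smoothing "mean-flow solve"

kappa = np.linalg.cond(A)
tau = rng.normal(size=n)                    # modeled Reynolds stress
err = 1e-3 * rng.normal(size=n)             # its error
amp = ((np.linalg.norm(A @ err) / np.linalg.norm(A @ tau))
       / (np.linalg.norm(err) / np.linalg.norm(tau)))
print(f"condition number ~ {kappa:.1f}, observed error amplification ~ {amp:.2f}")
```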

  6. Hitch code capabilities for modeling AVT chemistry

    International Nuclear Information System (INIS)

    Leibovitz, J.

    1985-01-01

    Several types of corrosion have damaged alloy 600 tubing in the secondary side of steam generators. The types of corrosion include wastage, denting, intergranular attack, stress corrosion, erosion-corrosion, etc. The environments which cause attack may originate from leaks of cooling water into the condensate, etc. When the contaminated feedwater is pumped into the generator, the impurities may first concentrate 200- to 400-fold in the bulk water, depending on the blowdown, and then further to saturation and dryness in heated tube support plate crevices. Characterization of local solution chemistries is the first step in predicting and correcting the type of corrosion that can occur. The pH is of particular importance because it is a major factor governing the rate of corrosion reactions. The pH of a solution at high temperature is not the same as at ambient temperature, since ionic dissociation constants, solubility and solubility products, activity coefficients, etc., all change with temperature. Because the high temperature chemistry of such solutions is not readily characterized experimentally, modeling techniques were developed under EPRI sponsorship to calculate the high temperature chemistry of the relevant solutions. In many cases, the effects of cooling water impurities on steam generator water chemistry with all volatile treatment (AVT), upon concentration by boiling, and in particular the resulting acid or base concentration, can be calculated by a simple code, the HITCH code, which is very easy to use. The scope and applicability of the HITCH code are summarized.
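
    A small illustration of why room-temperature chemistry cannot simply be extrapolated, of the kind the HITCH code handles rigorously: the neutral pH of pure water shifts with temperature because the ionic dissociation constant Kw changes. The Kw values below are approximate literature values quoted only for illustration.

```python
import numpy as np

# Neutral pH of pure water vs. temperature, pH = -0.5 * log10(Kw).
# Kw values are approximate literature values (at saturation pressure)
# used here only to illustrate the temperature effect.
kw = {25: 1.0e-14, 100: 5.6e-13, 200: 4.9e-12, 300: 4.0e-12}
for t_c, k in kw.items():
    print(f"{t_c:>3} C: neutral pH ~ {-0.5 * np.log10(k):.2f}")
```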

  7. MODERN CAPABILITIES OF BREAST PATHOLOGY DIAGNOSTICS

    Directory of Open Access Journals (Sweden)

    I. V. Vysotskaya

    2015-01-01

    Full Text Available Every year more than 1 million new cases of breast cancer are recorded worldwide. The choice of appropriate treatment tactics depends on timely diagnosis and a correct assessment of the extent of the disease. The patient examination algorithm includes clinical examination, X-ray mammography and breast ultrasound. However, this is not sufficient for a complete interpretation of the patient’s condition in cases of non-palpable breast lesions, ambiguous interpretation of imaging under structural changes, increased density of breast tissue, etc. In this regard, the introduction of new technologies and their evaluation in terms of practicality is a logical and developing route to early diagnosis of breast pathology. One of the methods that enhances the information capability of breast ultrasound is elastography. It allows for the differential diagnosis of benign and malignant changes not only in the breast tissue, but also in the areas of regional lymph drainage. A promising method of modern diagnostic breast care is digital mammography tomosynthesis; however, in spite of the first and very optimistic data, this technique is still far from standard. Complex diagnostics of breast pathology, in addition to clinical data and imaging results, are based on information obtained from biopsies. At the present stage, core biopsy is considered the best way of verification, where the resulting material is subjected to immunohistochemical studies. Thus, the spectrum of diagnostic capabilities is constantly expanding. Highly informative techniques included in daily practice today enable clinicians to achieve optimal results in curing an even greater number of patients.

  8. Development of three dimensional solid modeler

    International Nuclear Information System (INIS)

    Zahoor, R.M.A.

    1999-01-01

    The work presented in this thesis is aimed at developing a three-dimensional solid modeler employing computer graphics techniques in the C language. Primitives have been generated, by combining plane surfaces, for various basic geometrical shapes including the cylinder, cube and cone. The back-face removal technique for hidden-surface removal has also been incorporated. Various transformation techniques such as scaling, translation and rotation have been included for object animation. A three-dimensional solid model has been created by the union of two primitives to demonstrate the capabilities of the developed program. (author)
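
    The back-face removal step mentioned above reduces to a sign test: a polygon is culled when its outward normal points away from the viewer. The thesis implemented this in C; the sketch below shows the same test in Python, assuming counter-clockwise vertex ordering.

```python
import numpy as np

# Back-face removal: with counter-clockwise (CCW) vertex ordering, the face
# normal points outward; a face is culled when that normal does not point
# toward the viewer (here the viewer looks along -z, so view_dir = -z).
def is_back_face(v0, v1, v2, view_dir=np.array([0.0, 0.0, -1.0])):
    normal = np.cross(v1 - v0, v2 - v0)       # outward for CCW ordering
    return float(np.dot(normal, view_dir)) >= 0.0

tri = [np.array(p, dtype=float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
print(is_back_face(*tri))                     # False: face is visible, keep it
print(is_back_face(tri[0], tri[2], tri[1]))   # True: reversed face is culled
```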

  9. Evaluation of the Predictive Capabilities of a Phenomenological Combustion Model for Natural Gas SI Engine

    Directory of Open Access Journals (Sweden)

    Toman Rastislav

    2017-12-01

    Full Text Available The current study evaluates the predictive capabilities of a new phenomenological combustion model, available as part of the GT-Suite software package. It is comprised of two main sub-models: a 0D model of in-cylinder flow and turbulence, and a turbulent SI combustion model. The 0D in-cylinder flow model (EngCylFlow) uses a combined K-k-ε kinetic energy cascade approach to predict the evolution of the in-cylinder charge motion and turbulence, where K and k are the mean and turbulent kinetic energies, and ε is the turbulent dissipation rate. The subsequent turbulent combustion model (EngCylCombSITurb) gives the in-cylinder burn rate, based on the calculation of flame speeds and flame kernel development. This phenomenological approach reduces the overall computational effort significantly compared to 3D-CFD, thus allowing the computation of the full engine operating map and vehicle driving cycles. The model was calibrated using a full-map measurement from a turbocharged natural gas SI engine with swirl intake ports. Sensitivity studies on different calibration methods and laminar flame speed sub-models were conducted. The validation process for both the calibration and sensitivity studies concerned the in-cylinder pressure traces and burn rates for several engine operating points, achieving good overall results.

  10. Modeling fuel cell stack systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J H [Los Alamos National Lab., Los Alamos, NM (United States); Lalk, T R [Dept. of Mech. Eng., Texas A and M Univ., College Station, TX (United States)

    1998-06-15

    A technique for modeling fuel cell stacks is presented along with the results from an investigation designed to test the validity of the technique. The technique was specifically designed so that models developed using it can be used to determine the fundamental thermal-physical behavior of a fuel cell stack for any operating and design configuration. Such models would be useful tools for investigating fuel cell power system parameters. The modeling technique can be applied to any type of fuel cell stack for which performance data is available for a laboratory scale single cell. Use of the technique is demonstrated by generating sample results for a model of a Proton Exchange Membrane Fuel Cell (PEMFC) stack consisting of 125 cells each with an active area of 150 cm{sup 2}. A PEMFC stack was also used in the verification investigation. This stack consisted of four cells, each with an active area of 50 cm{sup 2}. Results from the verification investigation indicate that models developed using the technique are capable of accurately predicting fuel cell stack performance. (orig.)
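
    The core of such a scale-up can be sketched simply: a single-cell polarization curve measured in the laboratory is scaled to the stack by multiplying voltage by the number of cells in series and current by the active area. The empirical polarization parameters below are invented placeholders, not the data used in the paper.

```python
import numpy as np

# Scale a laboratory single-cell polarization curve to a stack: current
# scales with active area, voltage with the number of cells in series.
# The polarization-curve parameters below are illustrative, not measured.
def cell_voltage(i):
    """Cell voltage [V] vs. current density i [A/cm^2] (empirical form)."""
    e0, b, r, m, n = 1.0, 0.05, 0.15, 2e-4, 6.0
    i = np.clip(i, 1e-6, None)
    return e0 - b * np.log(i / 1e-3) - r * i - m * np.exp(n * i)

n_cells, area = 125, 150.0                   # stack of 125 cells, 150 cm^2
i = np.linspace(0.05, 1.0, 5)                # A/cm^2
v_stack = n_cells * cell_voltage(i)
p_stack = v_stack * i * area                 # stack power [W]
for ii, vv, pp in zip(i, v_stack, p_stack):
    print(f"i = {ii:.2f} A/cm^2: V_stack = {vv:5.1f} V, P = {pp/1000:5.1f} kW")
```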

  11. Using non-invasive molecular spectroscopic techniques to detect unique aspects of protein Amide functional groups and chemical properties of modeled forage from different sourced-origins.

    Science.gov (United States)

    Ji, Cuiying; Zhang, Xuewei; Yu, Peiqiang

    2016-03-05

    The non-invasive molecular spectroscopic technique FT/IR is capable of detecting molecular structure spectral features that are associated with biological, nutritional and biodegradation functions. However, to date, little research has been conducted using these non-invasive molecular spectroscopic techniques to study forage internal protein structures associated with biodegradation and biological functions. The objectives of this study were to detect unique aspects and associations of protein Amide functional groups, in terms of protein Amide I and II spectral profiles, and chemical properties in alfalfa forage (Medicago sativa L.) from different source origins. In this study, alfalfa hay from two different origins was used as the modeled forage for the molecular structure and chemical property study. From each forage origin, five to seven sources were analyzed. The molecular spectral profiles were determined using FT/IR non-invasive molecular spectroscopy. The parameters of the protein spectral profiles included the functional groups of Amide I, Amide II and the Amide I to II ratio. The results show that the modeled forage Amide I and Amide II were centered at 1653 cm(-1) and 1545 cm(-1), respectively. The Amide I spectral height and area intensities were from 0.02 to 0.03 and 2.67 to 3.36 AI, respectively. The Amide II spectral height and area intensities were from 0.01 to 0.02 and 0.71 to 0.93 AI, respectively. The Amide I to II spectral peak height and area ratios were from 1.86 to 1.88 and 3.68 to 3.79, respectively. Our results show that non-invasive molecular spectroscopic techniques are capable of detecting forage internal protein structure features which are associated with forage chemical properties. Copyright © 2015 Elsevier B.V. All rights reserved.
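
    The reported profile parameters are straightforward to compute from a baseline-corrected spectrum: peak height is the maximum absorbance within the band window, and area is the integral over it. The sketch below uses a synthetic two-band spectrum and approximate Amide I (~1600-1700 cm-1) and Amide II (~1500-1560 cm-1) windows; all numbers are illustrative.

```python
import numpy as np

# Peak height and area of the Amide I and II bands from a (synthetic)
# baseline-corrected FT/IR spectrum, and their ratios as used in the
# protein molecular-structure profiles. Band windows are approximate.
wn = np.arange(1480.0, 1720.0, 1.0)          # wavenumber, cm^-1
gauss = lambda c, w, h: h * np.exp(-0.5 * ((wn - c) / w) ** 2)
spectrum = gauss(1653, 18, 0.025) + gauss(1545, 14, 0.014)  # Amide I + II

def band(lo, hi):
    m = (wn >= lo) & (wn <= hi)
    return spectrum[m].max(), np.trapz(spectrum[m], wn[m])

h1, a1 = band(1600, 1700)    # Amide I
h2, a2 = band(1500, 1560)    # Amide II
print(f"Amide I/II height ratio ~ {h1/h2:.2f}, area ratio ~ {a1/a2:.2f}")
```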

  12. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI Supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access

  13. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  14. Application of pattern recognition techniques to crime analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bender, C.F.; Cox, L.A. Jr.; Chappell, G.A.

    1976-08-15

    The initial goal was to evaluate the capabilities of current pattern recognition techniques when applied to existing computerized crime data. Performance was to be evaluated both in terms of the system's capability to predict crimes and to optimize police manpower allocation. A relation was sought to predict the crime's susceptibility to solution, based on knowledge of the crime type, location, time, etc. The preliminary results of this work are discussed. They indicate that automatic crime analysis involving pattern recognition techniques is feasible, and that efforts to determine optimum variables and techniques are warranted. 47 figures (RWR)
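
    The kind of analysis described can be sketched with a modern library standing in for the 1976-era techniques: a classifier is trained on crime type, location and time features to predict a case's susceptibility to solution. The data and the implied relation below are entirely synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for computerized crime records: predict whether a
# case is "solvable" from type, district and hour. The data and the
# implied relation are invented purely to illustrate the approach.
rng = np.random.default_rng(3)
n = 4000
X = np.column_stack([rng.integers(0, 6, n),      # crime type code
                     rng.integers(0, 20, n),     # district
                     rng.integers(0, 24, n)])    # hour of day
logit = 0.6 * (X[:, 0] < 2) + 0.4 * (X[:, 2] < 8) - 0.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-4.0 * logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy ~ {clf.score(X_te, y_te):.2f}")
```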

  15. The influence of ligament modelling strategies on the predictive capability of finite element models of the human knee joint.

    Science.gov (United States)

    Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico

    2017-12-08

    In finite element (FE) models, knee ligaments can be represented either by a group of one-dimensional springs or by three-dimensional continuum elements based on segmentations. Continuum models approximate the anatomy more closely and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on literature, or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint. The effect of literature-based versus specimen-specific optimized material parameters was evaluated. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either using springs or using continuum representations. In the spring representation, the collateral ligaments were each modelled with three single-element bundles and the cruciate ligaments with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. Using a continuum modelling approach resulted in more accurate contact outcome variables than the spring representation with two (cruciate ligaments) and three (collateral ligaments) single-element-bundle representations. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcome. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. PROGRAMS WITH DATA MINING CAPABILITIES

    Directory of Open Access Journals (Sweden)

    Ciobanu Dumitru

    2012-03-01

    Full Text Available The fact that the Internet has become a commodity in the world has created a framework for a new economy. Traditional businesses migrate to this new environment that offers many features and options at relatively low prices. However, competitiveness is fierce and successful Internet business is tied to rigorous use of all available information. The information is often hidden in data, and for its retrieval it is necessary to use software capable of applying data mining algorithms and techniques. In this paper we want to review some of the programs with data mining capabilities currently available in this area. We also propose some classifications of this software to assist those who wish to use such software.

  17. Exploring synchrotron radiation capabilities: The ALS-Intel CRADA

    International Nuclear Information System (INIS)

    Gozzo, F.; Cossy-Favre, A.; Padmore, H.

    1997-01-01

    Synchrotron radiation spectroscopy and spectromicroscopy were applied, at the Advanced Light Source, to the analysis of materials and problems of interest to the commercial semiconductor industry. The authors discuss some of the results obtained at the ALS using existing capabilities, in particular the small spot ultra-ESCA instrument on beamline 7.0 and the AMS (Applied Material Science) endstation on beamline 9.3.2. The continuing trend towards smaller feature size and increased performance for semiconductor components has driven the semiconductor industry to invest in the development of sophisticated and complex instrumentation for the characterization of microstructures. Among the crucial milestones established by the Semiconductor Industry Association are the needs for high quality, defect free and extremely clean silicon wafers, very thin gate oxides, lithographies near 0.1 micron and advanced material interconnect structures. The requirements of future generations cannot be met with current industrial technologies. The purpose of the ALS-Intel CRADA (Cooperative Research And Development Agreement) is to explore, compare and improve the utility of synchrotron-based techniques for practical analysis of substrates of interest to semiconductor chip manufacturing. The first phase of the CRADA project consisted of exploring existing ALS capabilities and techniques on some problems of interest. Some of the preliminary results obtained on Intel samples are discussed here.

  18. Capability ethics

    NARCIS (Netherlands)

    I.A.M. Robeyns (Ingrid)

    2012-01-01

    The capability approach is one of the most recent additions to the landscape of normative theories in ethics and political philosophy. Yet in its present stage of development, the capability approach is not a full-blown normative theory, in contrast to utilitarianism, deontological

  19. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in the bone field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45+/-8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were established by direct needle injection of tumor cells into the femur of SD rats. In the first step of the experiment, 2x10(5) to 1x10(6) UMR106 cells in 50 microl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8x10(5) tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously with time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  20. Antimicrobial properties of uncapped silver nanoparticles synthesized by DC arc thermal plasma technique.

    Science.gov (United States)

    Shinde, Manish; Patil, Rajendra; Karmakar, Soumen; Bhoraskar, Sudha; Rane, Sunit; Gade, Wasudev; Amalnerkar, Dinesh

    2012-02-01

    We, herein, report the antimicrobial properties of uncapped silver nanoparticles for a Gram positive model organism, Bacillus subtilis. Uncapped silver nanoparticles have been prepared using less-explored DC arc thermal plasma technique by considering its large scale generation capability. It is observed that the resultant nanoparticles show size as well as optical property dependent antimicrobial effect.

  1. MCNP capabilities for nuclear well logging calculations

    International Nuclear Information System (INIS)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.; Hendricks, J.S.

    1990-01-01

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data

  2. Determinants of Marketing Performance: Innovation, Market Capabilities and Marketing Performance

    Directory of Open Access Journals (Sweden)

    Naili Farida

    2016-04-01

    Full Text Available This research aims to analyze the causal influences of innovation, market capability, social capital and entrepreneurial orientation on marketing performance. Organizational innovation is a basic focus of Total Quality Management. Innovation has a role in technological development and a competitive economic environment. The sampling technique used is purposive sampling of 58 respondents, owners of batik small and medium enterprises (known in Indonesia as UKM). Small and medium businesses can grow and develop so that they are able to increase their products and the sustainability of efforts in the creative industry. The analysis technique used is Partial Least Squares (PLS). The results show that entrepreneurial orientation does not influence market capability or social capital, while innovation has a positive and significant influence on market capability and marketing performance. These results show that innovation has an important role in leveraging market capability and thereby increasing the marketing performance of small and medium enterprises.

  3. Dynamic Capabilities

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe; Stenger, Marianne

    2013-01-01

    The findings reveal a positive relationship between dynamic capabilities and innovation performance in the case enterprises, as we would expect. It was, however, not possible to establish a positive relationship between innovation performance and profitability. Nor was there any positive...... relationship between dynamic capabilities and profitability....

  4. Multi-phase model development to assess RCIC system capabilities under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kirkland, Karen Vierow [Texas A & M Univ., College Station, TX (United States); Ross, Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beeny, Bradley [Texas A & M Univ., College Station, TX (United States); Luthman, Nicholas [Texas A & M Engineering Experiment Station, College Station, TX (United States); Strater, Zachary [Texas A & M Univ., College Station, TX (United States)

    2017-12-23

    The Reactor Core Isolation Cooling (RCIC) System is a safety-related system that provides makeup water for core cooling of some Boiling Water Reactors (BWRs) with a Mark I containment. The RCIC System consists of a steam-driven Terry turbine that powers a centrifugal, multi-stage pump for providing water to the reactor pressure vessel. The Fukushima Dai-ichi accidents demonstrated that the RCIC System can play an important role under accident conditions in removing core decay heat. The unexpectedly sustained, good performance of the RCIC System in the Fukushima reactor demonstrates, firstly, that its capabilities are not well understood, and secondly, that the system has high potential for extended core cooling in accident scenarios. Better understanding and analysis tools would allow for more options to cope with a severe accident situation and to reduce the consequences. The objectives of this project were to develop physics-based models of the RCIC System, incorporate them into a multi-phase code and validate the models. This Final Technical Report details the progress throughout the project duration and the accomplishments.

  5. Finite element modelling

    International Nuclear Information System (INIS)

    Tonks, M.R.; Williamson, R.; Masson, R.

    2015-01-01

    The Finite Element Method (FEM) is a numerical technique for finding approximate solutions to boundary value problems (BVPs). While FEM is commonly used to solve solid mechanics equations, it can be applied to a large range of BVPs from many different fields. FEM has been used for reactor fuels modelling for many years. It is most often used for fuel performance modelling at the pellet and pin scale; however, it has also been used to investigate properties of the fuel material, such as thermal conductivity and fission gas release. Recently, the United States Department of Energy Nuclear Energy Advanced Modelling and Simulation Program has begun using FEM as the basis of the MOOSE-BISON-MARMOT Project, which is developing a multi-dimensional, multi-physics fuel performance capability that is massively parallel and will use multi-scale material models to provide a truly predictive modelling capability. (authors)
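
    A minimal worked example of the method: a 1D Poisson problem solved with linear finite elements, showing the element-by-element assembly and Dirichlet boundary treatment that generalize to the fuel-performance codes mentioned above. This is a generic textbook sketch, unrelated to MOOSE/BISON/MARMOT internals.

```python
import numpy as np

# Minimal 1D finite element solve of -u''(x) = 1 on (0,1), u(0)=u(1)=0,
# with linear elements: assemble the stiffness matrix and load vector
# element by element, then apply Dirichlet conditions and solve.
n_el = 20
nodes = np.linspace(0.0, 1.0, n_el + 1)
K = np.zeros((n_el + 1, n_el + 1))
F = np.zeros(n_el + 1)
for e in range(n_el):
    h = nodes[e + 1] - nodes[e]
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness
    fe = np.array([0.5, 0.5]) * h                   # element load, f = 1
    K[e:e + 2, e:e + 2] += ke
    F[e:e + 2] += fe
# Dirichlet boundary conditions: u = 0 at both ends
K[0, :] = K[-1, :] = 0.0
K[0, 0] = K[-1, -1] = 1.0
F[0] = F[-1] = 0.0
u = np.linalg.solve(K, F)
exact = 0.5 * nodes * (1.0 - nodes)                 # analytic solution
print(f"max nodal error = {np.abs(u - exact).max():.2e}")
```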

  6. Analytical capabilities of laser-probe mass spectrometry

    International Nuclear Information System (INIS)

    Kovalev, I.D.; Madsimov, G.A.; Suchkov, A.I.; Larin, N.V.

    1978-01-01

    The physical bases and quantitative analytical procedures of laser-probe mass spectrometry are considered in this review. A comparison is made of the capabilities of static and dynamic mass spectrometers. Techniques are studied for improving the analytical characteristics of laser-probe mass spectrometers. The advantages, for quantitative analysis, of the Q-switched mode over the normal pulse mode for lasers are: (a) the possibility of analysing metals, semiconductors and insulators without the use of standards; and (b) the possibility of layer-by-layer and local analysis. (Auth.)

  7. Application of Arma Technique For Operation Stability of RSG-Gas

    International Nuclear Information System (INIS)

    Djudjuratisbela, Udju

    2000-01-01

    Application of the ARMA Technique for the Operational Stability of RSG-Gas. The Fast Fourier Transform (FFT) method had previously been applied to noise experiment data for the determination of the kinetic parameters of RSG-Gas. Reactor stability, which is closely related to operational safety, has not yet been measured. The noise analysis method and the ARMA (Auto Regressive Moving Average) technique, which can identify a mathematical model from the noise experiment data, can be used to determine the kinetic/dynamic characteristic equation and its roots. From the roots of the reactor characteristic equation, the natural frequency (fn), damping ratio (xi), damping frequency (fd) and decay ratio (delta), and hence reactor stability, can be calculated.
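
    A sketch of the ARMA route to stability indices: fit an ARMA(2,1) model to a noise record, map the AR characteristic roots z to continuous-time poles s = ln(z)/Δt, and read off fn, xi and the decay ratio. The statsmodels library and the synthetic second-order noise record are illustrative choices, not the tools or data of the RSG-Gas study; an oscillatory (complex) pole pair is assumed.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Fit an ARMA(2,1) model to a noise record and derive stability indices
# from the AR polynomial roots: s = ln(z)/dt gives the continuous poles,
# from which natural frequency fn, damping ratio xi and decay ratio follow.
rng = np.random.default_rng(4)
dt = 0.1                                     # sampling interval [s]
# Synthetic noise record: damped 1 Hz oscillator (xi = 0.15) driven by noise
t = np.arange(0.0, 200.0, dt)
x = np.zeros_like(t); v = 0.0
wn_true, xi_true = 2 * np.pi * 1.0, 0.15
for k in range(1, t.size):
    a = -2 * xi_true * wn_true * v - wn_true**2 * x[k - 1] + 50 * rng.normal()
    v += a * dt                              # semi-implicit Euler step
    x[k] = x[k - 1] + v * dt

res = ARIMA(x, order=(2, 0, 1)).fit()
z = np.roots([1.0, -res.arparams[0], -res.arparams[1]])  # AR char. roots
s = np.log(z.astype(complex)) / dt                       # continuous poles
wn_hat = np.abs(s[0]); xi_hat = -s[0].real / wn_hat
decay_ratio = np.exp(-2 * np.pi * xi_hat / np.sqrt(1 - xi_hat**2))
print(f"fn ~ {wn_hat / (2*np.pi):.2f} Hz, xi ~ {xi_hat:.2f}, DR ~ {decay_ratio:.2f}")
```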

  8. Research Capabilities Directed to all Electric Engineering Teachers, from an Alternative Energy Model

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Ordóñez Navea

    2017-08-01

    Full Text Available The purpose of this work was to contemplate research capabilities directed to electrical engineering teachers, from an alternative energy model, in the explanation of a semiconductor in the National Training Program in Electricity. Some authors, such as Vidal (2016), Atencio (2014) and Camilo (2012), point to technological applications with semiconductor electrical devices. In this way, a diagnostic phase is presented, based on descriptive field research, concerning: a) how to identify the needs for alternative energies, and b) the research competences in alternative energies of the researcher, starting from a solar cell model, to boost and innovate academic praxis and technological ingenuity. A survey was applied to a group of 15 teachers in the National Training Program in Electricity to diagnose the deficiencies in the research area of alternative energies. The data analysis process was carried out through descriptive statistics. Finally, the conclusions present the need to generate strategies that stimulate the exploration of alternative energies and develop the research competences of electrical engineering teachers, from an alternative energy model, boosting technological research in the renewable energies field.

  9. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Science.gov (United States)

    Amicarelli, A.; Gariazzo, C.; Finardi, S.; Pelliccioni, A.; Silibello, C.

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecast and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by FARM model, several numeric experiments were conducted over the urban area of Rome, Italy, during a summer episode.
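
    The nudging idea can be written in one line: the model tendency is augmented with a relaxation term G(obs - model) that pulls the solution toward observations on a time scale 1/G. The toy scalar "forecast" below, with an assumed tendency bias and hourly observations, only illustrates the mechanism; RAMS applies this to full 3D fields.

```python
import numpy as np

# Schematic Newtonian nudging: the model tendency is relaxed toward the
# most recent observation with strength G, limiting error growth. The
# "truth", the model's tendency bias and G are all invented for illustration.
dt, G = 60.0, 1.0 / 3600.0                   # time step [s], nudging coeff [1/s]
t = np.arange(0.0, 86400.0, dt)
truth = 290.0 + 5.0 * np.sin(2 * np.pi * t / 86400.0)   # "true" temperature [K]
obs_every = int(3600.0 / dt)                 # hourly observations

free = np.empty_like(t)
nudged = np.empty_like(t)
free[0] = nudged[0] = truth[0]
bias = 2.0e-4                                # erroneous extra tendency [K/s]
for k in range(1, t.size):
    tend = 5.0 * (2 * np.pi / 86400.0) * np.cos(2 * np.pi * t[k] / 86400.0) + bias
    free[k] = free[k - 1] + dt * tend                    # no assimilation
    last_obs = truth[(k // obs_every) * obs_every]       # latest available ob
    nudged[k] = nudged[k - 1] + dt * (tend + G * (last_obs - nudged[k - 1]))

rmse = lambda a: float(np.sqrt(np.mean((a - truth) ** 2)))
print(f"RMSE free-running ~ {rmse(free):.2f} K, nudged ~ {rmse(nudged):.2f} K")
```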

  10. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Energy Technology Data Exchange (ETDEWEB)

    Amicarelli, A; Pelliccioni, A [ISPESL - Dipartimento Insediamenti Produttivi e Interazione con l' Ambiente, Via Fontana Candida, 1 00040 Monteporzio Catone (RM) Italy (Italy); Finardi, S; Silibello, C [ARIANET, via Gilino 9, 20128 Milano (Italy); Gariazzo, C

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecast and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by RAMS model, and to evaluate their effects on the ozone and PM{sub 10} concentrations predicted by FARM model, several numeric experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  12. Capabilities and Incapabilities of the Capabilities Approach to Health Justice.

    Science.gov (United States)

    Selgelid, Michael J

    2016-01-01

    The first part of this article critiques Sridhar Venkatapuram's conception of health as a capability. It argues that Venkatapuram relies on the problematic concept of dignity, implies that those who are unhealthy lack lives worthy of dignity (which seems politically incorrect), sets a low bar for health, appeals to metaphysically problematic thresholds, fails to draw clear connections between appealed-to capabilities and health, and downplays the importance/relevance of health functioning. It concludes by questioning whether justice entitlements should pertain to the capability for health versus health achievements, challenging Venkatapuram's claims about the strength of health entitlements, and demonstrating that the capabilities approach is unnecessary to address social determinants of health. © 2016 John Wiley & Sons Ltd.

  13. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models, by the integration of different wavelengths, and that adjust these models to a local vertical datum. This research presents a package called GRAVTool, based on MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, covering ~6000 km² of wavy relief with heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity observations, SRTM data with 3 arc-second resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by the geometric levelling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).

  14. THE EFFECT OF SOCIAL CAPITAL AND KNOWLEDGE SHARING ON INNOVATION CAPABILITY

    Directory of Open Access Journals (Sweden)

    Dhyah Harjanti

    2017-09-01

    This research examines the effect of social capital and knowledge sharing on innovation capability among lecturers in universities. Social capital was analyzed using three constructs, namely trust, norms and networks, while knowledge sharing was broken down into two variables, namely knowledge collecting and knowledge donating. Innovation capability was explained on an individual level based on personality, behavioral and output perspectives. The research model and hypotheses were developed from the literature. Data collection was conducted through a survey of lecturers at private universities in Surabaya. The data obtained from the questionnaires were analyzed with Partial Least Squares (PLS) to investigate the research model. The results suggest that social capital significantly influences innovation capability, while high levels of knowledge collecting and knowledge donating can lead to a high level of innovation capability. This study offers a foundation for analyzing the relationships between social capital, the knowledge-sharing process, consisting of knowledge collecting and knowledge donating, and innovation capability.

  15. In-Vessel Retention Modeling Capabilities of SCDAP/RELAP5-3DC

    International Nuclear Information System (INIS)

    Knudson, D.L.; Rempe, J.L.

    2002-01-01

    Molten core materials may relocate to the lower head of a reactor vessel in the latter stages of a severe accident. Under such circumstances, in-vessel retention (IVR) of the molten materials is a vital step in mitigating potential severe accident consequences. Whether IVR occurs depends on the interactions of a number of complex processes including heat transfer inside the accumulated molten pool, heat transfer from the molten pool to the reactor vessel (and to overlying fluids), and heat transfer from exterior vessel surfaces. SCDAP/RELAP5-3D C has been developed at the Idaho National Engineering and Environmental Laboratory to facilitate simulation of the processes affecting the potential for IVR, as well as processes involved in a wide variety of other reactor transients. In this paper, current capabilities of SCDAP/RELAP5-3D C relative to IVR modeling are described and results from typical applications are provided. In addition, anticipated developments to enhance IVR simulation with SCDAP/RELAP5-3D C are outlined. (authors)

  16. Design Management Capability and Product Innovation in SMEs

    OpenAIRE

    Fernández-Mesa, Ana Isabel; ALEGRE VIDAL, JOAQUIN; CHIVA GOMEZ, RICARDO; Gutiérrez Gracia, Antonio

    2013-01-01

    Purpose - The aim of this paper is to present design management as a dynamic capability and to analyze its mediating role between organizational learning capability and product innovation performance in small and medium enterprises (SMEs). Design/methodology/approach - Structural equation modeling is used to test the research hypotheses based on data from the Italian and Spanish ceramic tile industries. The data are derived from the responses of 182 companies (50 percent of the targ...

  17. Capabilities for Strategic Adaptation

    DEFF Research Database (Denmark)

    Distel, Andreas Philipp

    This dissertation explores capabilities that enable firms to strategically adapt to environmental changes and preserve competitiveness over time – often referred to as dynamic capabilities. While dynamic capabilities are a popular research domain, too little is known about what these capabiliti...

  18. MSC/NASTRAN "expert" techniques developed and applied to the TFTR poloidal field coils

    International Nuclear Information System (INIS)

    O'Toole, J.A.

    1986-01-01

    The TFTR poloidal field (PF) coils are being analyzed by PPPL and Grumman using MSC/NASTRAN as part of an overall effort to establish the absolute limiting conditions of operation for TFTR. Each of the PF coils will be analyzed in depth, using a detailed set of finite element models. Several of the models developed are quite large, because each copper turn, as well as its surrounding insulation, was modeled using solid elements. Several of the finite element models proved large enough to tax the capabilities of the National Magnetic Fusion Energy Computer Center (NMFECC), specifically its disk storage space. To allow the use of substructuring techniques with their associated databases for the larger models, it became necessary to employ certain infrequently used MSC/NASTRAN "expert" techniques. The techniques developed used multiple databases and database sets to divide each problem into a series of computer runs. For each run, only the data required were kept on active disk space, the remainder being placed in inactive "FILEM" storage, thus minimizing the active disk space required at any time and permitting problem solution on the NMFECC. A representative problem using the TFTR OH-1 coil global model provides an example of the techniques developed. The special considerations necessary to obtain proper results are discussed.

  19. A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.

    Science.gov (United States)

    Jayachandran, V; Bonilha, M W

    2003-03-01

    This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.
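    As a toy companion to the modal-summation part of the hybrid method (the SEA side is not modeled here), the sketch below sums rigid-wall modes of a rectangular cavity; geometry, damping and source data are invented for illustration, not the S-92 mock-up.

```python
# Acoustic modal summation sketch for a rigid-walled rectangular cavity:
# p(x, w) = sum_n Q * phi_n(source) * phi_n(x) / (wn^2 - w^2 + i*eta*wn*w)
import numpy as np

def cavity_pressure(x, y, z, freq, L=(2.0, 1.5, 1.3), c=343.0, eta=0.02,
                    source=(0.3, 0.4, 0.5), Q=1.0, n_max=6):
    omega = 2.0 * np.pi * freq
    p = 0.0 + 0.0j
    for nx in range(n_max):
        for ny in range(n_max):
            for nz in range(n_max):
                def phi(px, py, pz):
                    # Rigid-wall (cosine) mode shape.
                    return (np.cos(nx * np.pi * px / L[0]) *
                            np.cos(ny * np.pi * py / L[1]) *
                            np.cos(nz * np.pi * pz / L[2]))
                wn = c * np.pi * np.sqrt((nx / L[0])**2 + (ny / L[1])**2
                                         + (nz / L[2])**2)
                # Modal response with hysteretic damping eta.
                p += (Q * phi(*source) * phi(x, y, z)
                      / (wn**2 - omega**2 + 1j * eta * wn * omega + 1e-9))
    return abs(p)

response = cavity_pressure(1.0, 0.7, 0.6, freq=80.0)  # one "microphone" point
```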

  20. Artificial Life as an Aid to Astrobiology: Testing Life Seeking Techniques

    OpenAIRE

    Centler, F.; Dittrich, P.; Ku, L.; Matsumaru, N.; Pfaffmann, J.; Zauner, K.-P.

    2003-01-01

    Searching for signatures of fossil or present life in our solar system requires autonomous devices capable of investigating remote locations with limited assistance from Earth. Here, we use an artificial chemistry model to create spatially complex chemical environments. An autonomous experimentation technique based on evolutionary computation is then employed to explore these environments with the aim of discovering the chemical signature of small patches of biota present in the simulation sp...

  1. New Capabilities in the Astrophysics Multispectral Archive Search Engine

    Science.gov (United States)

    Cheung, C. Y.; Kelley, S.; Roussopoulos, N.

    The Astrophysics Multispectral Archive Search Engine (AMASE) uses object-oriented database techniques to provide a uniform multi-mission and multi-spectral interface to search for data in the distributed archives. We describe our experience of porting AMASE from Illustra object-relational DBMS to the Informix Universal Data Server. New capabilities and utilities have been developed, including a spatial datablade that supports Nearest Neighbor queries.

  2. The application of fluid structure interaction techniques within finite element analyses of water-filled transport flasks

    International Nuclear Information System (INIS)

    Smith, C.; Stojko, S.

    2004-01-01

    Historically, Finite Element (FE) analyses of water-filled transport flasks and their payloads have been carried out assuming a dry environment, mainly due to a lack of robust Fluid Structure Interaction (FSI) modelling techniques. Also it has been accepted within the RAM transport industry that the presence of water would improve the impact withstand capability of dropped payloads within containers. In recent years the FE community has seen significant progress and improvement in FSI techniques. These methods have been utilised to investigate the effects of a wet environment on payload behaviour for the regulatory drop test within a recent transport licence renewal application. Fluid flow and pressure vary significantly during a wet impact and the effects on the contents become complex when water is incorporated into the flask analyses. Modelling a fluid environment within the entire flask is considered impractical; hence a good understanding of the FSI techniques and assumptions regarding fluid boundaries is required in order to create a representative FSI model. Therefore, a Verification and Validation (V and V) exercise was undertaken to underpin the FSI techniques eventually utilised. A number of problems of varying complexity have been identified to test the FSI capabilities of the explicit code LS-DYNA, which is used in the extant dry container impact analyses. RADIOSS explicit code has been used for comparison, to provide further confidence in LS-DYNA predictions. Various methods of modelling fluid are tested, and the relative advantages and limitations of each method and FSI coupling approaches are discussed. Results from the V and V problems examined provided sufficient confidence that FSI effects within containers can be accurately modelled

  3. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural network (ANN) and fuzzy logic approaches are proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both these approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro fuzzy inference system (ANFIS) to hydrologic time series modeling, and is illustrated by an application to model the river flow of Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most of the time series modeling techniques. The results showed that the ANFIS forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation etc. It was observed that the ANFIS model preserves the potential of the ANN approach fully, and eases the model building process.
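    The time-series setup described above can be illustrated with a short sketch: antecedent flows become lagged inputs, and a plain least-squares autoregression stands in for the ANFIS network, which is too large for a brief example. The data below are synthetic, not the Baitarani record.

```python
# Lagged-input construction for river-flow forecasting, with a linear AR fit
# as a placeholder for the neuro-fuzzy (ANFIS) model.
import numpy as np

def make_lagged(series, n_lags=3):
    """Rows of X hold n_lags antecedent values; y holds the next value."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

flow = np.sin(np.linspace(0.0, 20.0, 200)) + 5.0      # synthetic "river flow"
X, y = make_lagged(flow, n_lags=3)
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)
one_step_forecast = np.r_[flow[-3:], 1.0] @ coef       # forecast from last 3 flows
```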

  4. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

    A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and the corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example the control-flow, human resources, and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system quickly running, and then continue to develop it in an agile manner. This article consists of two parts: the first sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which is appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that set restrictions on allowed states and on transitions between them. The article details the background of the project of developing the data-centric process modeling technique, presents an outline of the structure of the model, and gives formal definitions for a substantial part of the model.
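    The state-and-rules idea can be sketched in a few lines: a case is a record in an instance "database", and the process model is a set of allowed transitions plus state rules. All names below are invented for illustration, not taken from the article.

```python
# Toy data-centric process model: states live in the case record; the "model"
# is the transition set plus validity rules over the instance data.
ALLOWED_TRANSITIONS = {
    ("registered", "under_assessment"),
    ("under_assessment", "decided"),
    ("under_assessment", "more_info_needed"),
    ("more_info_needed", "under_assessment"),
}

def valid_state(case):
    # State rule: a decided case must carry a decision in its instance data.
    return case["state"] != "decided" or "decision" in case

def transition(case, new_state, **updates):
    if (case["state"], new_state) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"transition {case['state']} -> {new_state} not allowed")
    candidate = {**case, **updates, "state": new_state}
    if not valid_state(candidate):
        raise ValueError("state rule violated")
    return candidate

case = {"id": 1, "state": "registered"}
case = transition(case, "under_assessment")
case = transition(case, "decided", decision="approved")
```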

  5. Consumer preferences relative to the price and network capability of small urban vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Burns, L.D.

    1979-09-01

    Preferences of consumers for small urban vehicle concepts differing only with respect to their hypothetical purchase prices and network capabilities (i.e., whether they are capable of operating on expressways, major arterials, or local streets) are analyzed using statistical techniques based on psychological scaling theories. Results from these analyses indicate that a vast majority of consumers are not readily willing to give up the accessibility provided by conventional automobiles. More specifically, over the range of hypothetical prices considered here, network capability dominates as a determinant of preferences for vehicle concepts. Also, the ability to operate vehicles on expressways is of utmost importance to consumers.

  6. An Empirical Competence-Capability Model of Supply Chain Innovation

    OpenAIRE

    Mandal, Santanu

    2016-01-01

    Supply chain innovation has become the new pre-requisite for the survival of firms in developing capabilities and strategies for sustaining their operations and performance in the market. This study investigates the influence of supply and demand competence on supply chain innovation and its influence on a firm’s operational and relational performance. While the former competence refers to production and supply management related activities, the latter refers to distribution and demand manage...

  7. Composite Elements for Biomimetic Aerospace Structures with Progressive Shape Variation Capabilities

    Directory of Open Access Journals (Sweden)

    Alessandro Airoldi

    2016-07-01

    The paper presents some engineering solutions for the development of innovative aerodynamic surfaces with the capability of progressive shape variation. A brief introduction to the most significant issues related to the design of such morphing structures is provided. Thereafter, two types of structural solutions are presented for the design of internal compliant structures and flexible external skins. The proposed solutions exploit the properties and the manufacturing techniques of long-fibre-reinforced plastics in order to fulfil the severe and contradictory requirements arising from the trade-off between morphing performance and load-carrying capability.

  8. PHISICS multi-group transport neutronic capabilities for RELAP5

    Energy Technology Data Exchange (ETDEWEB)

    Epiney, A.; Rabiti, C.; Alfonsi, A.; Wang, Y.; Cogliati, J.; Strydom, G. [Idaho National Laboratory (INL), 2525 N. Fremont Ave., Idaho Falls, ID 83402 (United States)

    2012-07-01

    PHISICS is a neutronic code system currently under development at INL. Its goal is to provide state of the art simulation capability to reactor designers. This paper reports on the effort of coupling this package to the thermal hydraulic system code RELAP5. This will enable full prismatic core and system modeling and the possibility to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics, compared to the existing diffusion theory neutron kinetics module in RELAP5 (NESTLE). The paper describes the capabilities of the coupling and illustrates them with a set of sample problems. (authors)

  9. Social Capital, IT Capability, and the Success of Knowledge Management Systems

    Directory of Open Access Journals (Sweden)

    Irene Y.L. Chen

    2009-03-01

    Many organizations have implemented knowledge management systems to support knowledge management. However, many such systems have failed due to the lack of relationship networks and IT capability within organizations. Motivated by such concerns, this paper examines the factors that may facilitate the success of knowledge management systems. Ten constructs derived from social capital theory, the resource-based view and the IS success model are integrated into the research model. Twenty-one hypotheses derived from the research model are empirically validated using a field survey of KMS users. The results suggest that social capital and organizational IT capability are important preconditions of the success of knowledge management systems. Among the posited relationships, trust, social interaction ties and IT capability do not significantly impact service quality, system quality and IT capability, respectively. Against prior expectation, service quality and knowledge quality do not significantly influence perceived KMS benefits and user satisfaction, respectively. A discussion of the results and conclusions is provided. This study thus provides insights for future research avenues.

  10. OECD/CSNI specialist meeting on advanced instrumentation and measurements techniques: summary and conclusions

    International Nuclear Information System (INIS)

    1997-01-01

    This specialist meeting on Advanced Instrumentation and Measurements Techniques was held in Santa Barbara (USA) in 1997 and attracted some 70 participants; it comprised ten technical sessions and a round-table discussion session, with a total of 41 papers. It was intended to bring together international experts in multi-phase flow instrumentation, experiment and modeling, to review the state of the art of two-phase flow instrumentation methods and to discuss the relation between modeling needs and instrumentation capabilities. The following topics were included: modeling needs and future directions for improved constitutive relations, the interfacial area transport equation, and multi-dimensional two-fluid model formulation; local instrumentation developments for void fraction, interfacial area, phase velocities, turbulence, entrainment, particle size, thermal non-equilibrium, shear stress, nucleation, condensation and boiling; global instrumentation developments for void fraction, mass flow, two-phase level, non-condensable concentration, flow regimes, low flow and break flow; and the relation between modeling needs and instrumentation capabilities, with future directions for experiments focused on modeling needs and for instrumentation developments.

  11. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  12. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques

  13. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  14. Development of phased array UT technique for inspection of turbine wheel rim

    International Nuclear Information System (INIS)

    Komura, I.; Nagal, S.; Goto, M.; Ohmatsu, K.

    1986-01-01

    A phased array UT technique has been developed to improve defect detection under the keyway region of shrunk-on type turbine wheels. Sector-scanning operation with a plexiglas wedge was applied to construct B-scope images of the turbine wheel rim region. Prior to the inspection test of the model specimen having the real shape of the rim region, the distribution of sound field intensity along the steering angle of the scanning line was measured on a test block. Then, the minimum depth of defect detectable by B-scope imaging was evaluated on dovetail-shaped specimens which had EDM notches of different depths at each hook fillet. The results showed that B-scope imaging with the sector-scanning phased array technique is capable of distinguishing defect echoes from the many reflection echoes caused by the complex shape of the wheel rim region.
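    The core of a sector scan is a per-angle focal law. The sketch below computes plane-wave steering delays for a linear array; pitch, element count and sound speed are illustrative values, not the probe parameters of the study.

```python
# Focal-law sketch: per-element delays t_n = n * d * sin(theta) / c that steer
# a linear phased array to angle theta (steering only, no focusing).
import numpy as np

def steering_delays(n_elements=16, pitch=0.6e-3, theta_deg=30.0, c=5900.0):
    theta = np.radians(theta_deg)
    delays = np.arange(n_elements) * pitch * np.sin(theta) / c
    return delays - delays.min()        # shift so all delays are non-negative

# One delay law per steering angle builds up the sector (B-scope) image.
laws = {ang: steering_delays(theta_deg=ang) for ang in range(-45, 50, 5)}
```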

  15. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  16. AI techniques in geomagnetic storm forecasting

    Science.gov (United States)

    Lundstedt, Henrik

    This review deals with how geomagnetic storms can be predicted with the use of Artificial Intelligence (AI) techniques. Today many different AI techniques have been developed, such as symbolic systems (expert and fuzzy systems) and connectionist systems (neural networks). Integrations of AI techniques also exist, so-called Intelligent Hybrid Systems (IHS). These systems are capable of learning the mathematical functions underlying the operation of non-linear dynamic systems and of explaining the knowledge they have learned. Very few such powerful systems exist at present; two examples are the Magnetospheric Specification Forecast Model of Rice University and the Lund Space Weather Model of Lund University. Various attempts to predict geomagnetic storms on long to short time scales are reviewed in this article. Predictions a month to days ahead most often use solar data as input. The first SOHO data are now available; due to their high temporal and spatial resolution, new solar physics has been revealed, and these SOHO data might lead to a breakthrough in such predictions. Predictions hours ahead and shorter rely on real-time solar wind data. WIND gives us real-time data for only part of the day, but with the launch of the ACE spacecraft in 1997, real-time data during 24 hours will be available. That might lead to a second breakthrough for predictions of geomagnetic storms.

  17. COMPETITIVE INTELLIGENCE: THE ENHANCING ROLE OF ORGANIZATIONAL LEARNING CAPABILITY

    OpenAIRE

    HAMAD, Zaina Mustafa Mahmoud; YOZGAT, Ugur

    2017-01-01

    Performing a strong intelligence grants an organization a guarantee of long-term success. This paper investigates the enhancing effect of organizational learning capabilities on competitive intelligence at the commercial banks in Jordan. A sample within top and middle managements was used. Measurement instrument validity and model fit were assessed before testing hypotheses. This study emphasizes the role learning capability plays in enhancing intelligence. Key findings support importance of organi...

  18. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  19. Business and technology integrated model

    OpenAIRE

    Noce, Irapuan; Carvalho, João Álvaro

    2011-01-01

    There is a growing interest in business modeling and architecture in the areas of management and information systems. One of the issues in the area is the lack of integration between the modeling techniques that are employed to support business development and those used for technology modeling. This paper proposes a modeling approach that is capable of integrating the modeling of the business and of the technology. By depicting the business model, the organization structure and the technolog...

  20. Dynamic model reduction: An overview of available techniques with application to power systems

    Directory of Open Access Journals (Sweden)

    Đukić Savo D.

    2012-01-01

    This paper summarises the model reduction techniques used for the reduction of large-scale linear and nonlinear dynamic models described by the differential and algebraic equations commonly used in control theory. The groups of methods discussed for reduction of linear dynamic models are based on singular perturbation analysis, modal analysis, singular value decomposition, moment matching, and combinations of singular value decomposition and moment matching. Among the nonlinear dynamic model reduction methods, proper orthogonal decomposition, the trajectory piecewise linear method, balancing-based methods, reduction by optimising system matrices, and projection from a linearised model are described. Part of the paper is devoted to the techniques commonly used for reduction (equivalencing) of large-scale power systems, which are based on coherency, synchrony, singular perturbation analysis, modal analysis and identification. Two of the described techniques (the most interesting ones) are applied to the reduction of the commonly used New England 10-generator, 39-bus test power system.
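    One of the surveyed methods, proper orthogonal decomposition, reduces in its simplest form to a truncated SVD of simulation snapshots followed by a Galerkin projection; a minimal sketch with synthetic data:

```python
# POD/SVD model-reduction sketch: compress snapshots, project the operator.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 200))  # rank-3 data

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% of energy
basis = U[:, :r]                              # POD modes (here r == 3)

A = rng.standard_normal((100, 100))           # stand-in full-order system matrix
A_reduced = basis.T @ A @ basis               # Galerkin-projected reduced operator
```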

  1. Mobile Test Capabilities

    Data.gov (United States)

    Federal Laboratory Consortium — The Electrical Power Mobile Test capabilities are utilized to conduct electrical power quality testing on aircraft and helicopters. This capability allows that the...

  2. DYNAMIC CAPABILITIES AND CREATING ORGANIZATIONAL KNOWLEDGE: IMPORTANT LINKAGE FOR BUILDING COMPETITIVE ADVANTAGE

    Directory of Open Access Journals (Sweden)

    Sugiono A.

    2017-08-01

    As a concept derived from the resource-based view, dynamic capabilities essentially have an important linkage with activities related to the creation of organizational knowledge. Using a literature study method, this paper aims to discuss the linkage between dynamic capabilities creation and the creation of a knowledge company. The study shows that the discussion of dynamic capabilities creation ultimately puts both learning and knowledge in an important position. Correspondingly, the growth strategy generally chosen by an organization has the consequence that the creation of organizational knowledge becomes something that cannot be ignored. In order for the process of knowledge creation to align with dynamic capabilities creation within a growth strategy framework, a dynamic process of knowledge creation is needed. Among the various models of knowledge creation, the SECI model remains a relevant model within the organizational knowledge creation framework. In general, this study is still theoretical; therefore, subsequent, more empirical discussions are expected.

  3. Edge printability: techniques used to evaluate and improve extreme wafer edge printability

    Science.gov (United States)

    Roberts, Bill; Demmert, Cort; Jekauc, Igor; Tiffany, Jason P.

    2004-05-01

    The economics of semiconductor manufacturing have forced process engineers to develop techniques to increase wafer yield. Improvements in process controls and uniformities in all areas of the fab have reduced film thickness variations at the very edge of the wafer surface. This improved uniformity has provided the opportunity to consider decreasing edge exclusions, and now the outermost extents of the wafer must be considered in the yield model and expectations. These changes have increased the requirements on lithography to improve wafer edge printability in areas that previously were not even coated. This has taxed all software and hardware components used in defining the optical focal plane at the wafer edge. We have explored techniques to determine the capabilities of extreme wafer edge printability and the components of the systems that influence this printability. We will present current capabilities and new detection techniques and the influence that the individual hardware and software components have on edge printability. We will show effects of focus sensor designs, wafer layout, utilization of dummy edge fields, the use of non-zero overlay targets and chemical/optical edge bead optimization.

  4. Sabots, Obturator and Gas-In-Launch Tube Techniques for Heat Flux Models in Ballistic Ranges

    Science.gov (United States)

    Bogdanoff, David W.; Wilder, Michael C.

    2013-01-01

    For thermal protection system (heat shield) design for space vehicle entry into earth and other planetary atmospheres, it is essential to know the augmentation of the heat flux due to vehicle surface roughness. At the NASA Ames Hypervelocity Free Flight Aerodynamic Facility (HFFAF) ballistic range, a campaign of heat flux studies on rough models, using infrared camera techniques, has been initiated. Several phenomena can interfere with obtaining good heat flux data when using this measuring technique. These include leakage of the hot drive gas in the gun barrel through joints in the sabot (model carrier) to create spurious thermal imprints on the model forebody, deposition of sabot material on the model forebody, thereby changing the thermal properties of the model surface and unknown in-barrel heating of the model. This report presents developments in launch techniques to greatly reduce or eliminate these problems. The techniques include the use of obturator cups behind the launch package, enclosed versus open front sabot designs and the use of hydrogen gas in the launch tube. Attention also had to be paid to the problem of the obturator drafting behind the model and impacting the model. Of the techniques presented, the obturator cups and hydrogen in the launch tube were successful when properly implemented

  5. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of, and interaction between, normal and sterile E. saccharina moths in a temporally variable but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified, and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models describing the sterile insect technique have been formulated in the past, few of them describe the technique for Lepidopteran species with more than one life stage and for which F1 sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated here is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
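    A deliberately simplified, hypothetical difference-equation sketch of sterile-release suppression (not the authors' E. saccharina model) might look like this: the fertile-mating fraction falls as sterile insects S are released, and F1 sterility is folded into a single fertility factor.

```python
# Toy discrete-generation sterile insect technique model with logistic
# density dependence; all parameter values are illustrative.
def next_generation(N, S, lam=3.0, K=1e5, f1_fertility=0.1):
    fertile_fraction = N / (N + S) if (N + S) > 0 else 0.0
    # Matings with sterile insects still yield partially fertile F1 offspring.
    effective_growth = lam * (fertile_fraction
                              + (1.0 - fertile_fraction) * f1_fertility)
    return effective_growth * N * (1.0 - N / K)

N = 2e4
for gen in range(20):           # sustained releases suppress the population
    N = next_generation(N, S=1e5)
```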

  6. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can 'learn', automatically, complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence technique in the modeling of industrial processes.

  7. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145, entitled "Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin", and is an initial part of a program for estimating runoff from Central Anatolian watersheds. The study presented herein consists of three main parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for the Guvenc basin; 2) the modification of the SCS model to be applied first to the Guvenc basin and then to other basins of Central Anatolia for predicting surface runoff from gaged and ungaged watersheds; and 3) the use of the environmental isotope technique to define the basin components of the streamflow of the Guvenc basin. 31 refs, figs and tabs
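    For reference, the standard SCS Curve Number relation that the project modified can be written in a few lines (metric form; the CN value below is only an example):

```python
# SCS-CN direct runoff: Q = (P - Ia)^2 / (P - Ia + S), with Ia = 0.2 * S
# and potential retention S = 25400 / CN - 254 (all depths in mm).
def scs_runoff(P_mm, CN):
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = 0.2 * S                      # initial abstraction (mm)
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_runoff(P_mm=50.0, CN=78))   # direct runoff depth for a 50 mm storm
```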

  8. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    A porous silicon (PS) sensor based on the capacitive technique, used for measuring relative humidity, has the advantages of low cost, ease of fabrication with a controlled structure, and CMOS compatibility. But the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and stability. An adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating these errors and compensating for them. The response of the sensor is represented by a third-order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN-based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and under drift due to aging. It facilitates online estimation of nonlinearity as well as monitoring of faults of the PS humidity sensor using the coefficients of the model.
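    The ADALINE idea above amounts to LMS (Widrow-Hoff) adaptation of the weights of a third-order polynomial in humidity; a sketch on synthetic data (the capacitance curve is invented, not the sensor's):

```python
# ADALINE with a cubic polynomial basis in relative humidity RH; the LMS rule
# keeps adapting the weights, so slow drift in the sensor can be tracked.
import numpy as np

def basis(rh):
    return np.array([1.0, rh, rh**2, rh**3])

w = np.zeros(4)
lr = 1e-2
rng = np.random.default_rng(1)
for _ in range(5000):
    rh = rng.uniform(0.1, 0.9)                      # normalized humidity sample
    c_true = 80.0 + 40.0 * rh + 25.0 * rh**3        # synthetic capacitance (pF)
    x = basis(rh)
    err = c_true - w @ x                            # LMS (Widrow-Hoff) update
    w += lr * err * x

print(w)   # approximates the synthetic polynomial coefficients
```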

  9. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    -UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty...... techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside...... the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For ill-posed water quality model the differences between the results were much wider; and the paper...

  10. Building Service Provider Capabilities

    DEFF Research Database (Denmark)

    Brandl, Kristin; Jaura, Manya; Ørberg Jensen, Peter D.

    2015-01-01

    In this paper we study whether and how the interaction between clients and the service providers contributes to the development of capabilities in service provider firms. In situations where such a contribution occurs, we analyze how different types of activities in the production process...... process. We find that clients influence the development of human capital capabilities and management capabilities in reciprocally produced services. While in sequential produced services clients influence the development of organizational capital capabilities and management capital capabilities....... of the services, such as sequential or reciprocal task activities, influence the development of different types of capabilities. We study five cases of offshore-outsourced knowledge-intensive business services that are distinguished according to their reciprocal or sequential task activities in their production...

  11. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Garcia, Monica; Morandi, Sonia

    2013-01-01

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that

  12. Introduction of an Evaluation Tool to Predict the Probability of Success of Companies: The Innovativeness, Capabilities and Potential Model (ICP)

    Directory of Open Access Journals (Sweden)

    Michael Lewrick

    2009-05-01

    Successful innovation requires management, and in this paper a model to help manage the innovation process is presented. This model can be used to audit the management capability to innovate and to monitor how sales increases relate to innovativeness. The model was developed from a study of companies in the high-technology cluster around Munich and validated using statistical procedures. The model was found to be effective at predicting the success or otherwise of the innovation strategy pursued by a company. The use of this model and how it can be used to identify areas for improvement are documented in this paper.

  13. NMR and modelling techniques in structural and conformation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, R J [Liverpool Univ. (United Kingdom)

    1994-12-31

    The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co(III) porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.

  14. Collaborative Capability in Coworking Spaces: Convenience Sharing or Community Building?

    Directory of Open Access Journals (Sweden)

    Marcelo F. Castilho

    2017-12-01

    This study explores the development of collaborative capability in coworking spaces. It is based on the perception of collaboration among 31 coworking founders, community managers, and coworkers of those spaces. In-depth interviews around the meaning of collaboration and its challenges were conducted in 14 coworking spaces located in six Asian countries. A set of factors was identified and a model was proposed based on four dimensions: enabling knowledge sharing, enhancing a creative field, enhancing individual action for the collective, and supporting collective action for effective execution. The “Convenience Sharing” and “Community Building” coworking types based on Capdevila (2014) suggest different conditions under which collaborative capability develops. Convenience Sharing coworking spaces tend to foster collaborative capability through knowledge sharing and effective execution, whereas Community Building coworking spaces tend to foster collaborative capability by enhancing a creative field and individual action for the collective. Overall, this study contributes a theoretical model for coworking spaces to help coworking founders and community managers make strategic decisions. The findings suggest that collaborative capability in coworking spaces depends on the interlacing of a set of factors along four dimensions that relate in varying degrees of intensity to a two-fold coworking space typology.

  15. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria); Engel, Dominik [Salzburg Univ. (Austria); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2015-02-17

    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Therefore, heterogeneous systems have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed for achieving that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy; in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, if feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.

  16. Bayesian Network Models in Cyber Security: A Systematic Review

    NARCIS (Netherlands)

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security, especially due to their capability to overcome data limitations. This is also reflected in the growth of BN model development in cyber security. However, a comprehensive comparison and analysis of these
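    To make the technique concrete, here is a tiny hand-rolled BN with an invented cyber-security structure (Phishing -> Compromise <- UnpatchedHost), queried by enumerating the joint distribution; the probabilities are illustrative only, not taken from the review.

```python
# Minimal discrete Bayesian network with exact inference by enumeration.
from itertools import product

p_phish = {True: 0.3, False: 0.7}
p_unpatched = {True: 0.4, False: 0.6}
p_comp = {  # P(Compromise=True | Phishing, UnpatchedHost)
    (True, True): 0.8, (True, False): 0.4,
    (False, True): 0.2, (False, False): 0.01,
}

def joint(ph, up, comp):
    pc = p_comp[(ph, up)]
    return p_phish[ph] * p_unpatched[up] * (pc if comp else 1.0 - pc)

# Posterior P(Phishing | Compromise=True) by summing out UnpatchedHost.
num = sum(joint(True, up, True) for up in (True, False))
den = sum(joint(ph, up, True) for ph, up in product((True, False), repeat=2))
print(num / den)   # approximately 0.74 with these numbers
```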

  17. A review of cutting mechanics and modeling techniques for biological materials.

    Science.gov (United States)

    Takabi, Behrouz; Tai, Bruce L

    2017-07-01

    This paper presents a comprehensive survey of the modeling of tissue cutting, including both soft tissue and bone cutting processes. In order to achieve higher accuracy in tissue cutting, a critical process in surgical operations, the meticulous modeling of such processes is important, in particular for surgical tool development and analysis. This review is focused on the mechanical concepts and modeling techniques utilized to simulate tissue cutting, such as cutting forces and chip morphology. These models are presented in two major categories, namely soft tissue cutting and bone cutting. Fracture toughness is commonly used to describe tissue cutting, while the Johnson-Cook material model is often adopted for bone cutting in conjunction with finite element analysis (FEA). In each section, the most recent mathematical and computational models are summarized. The differences and similarities among these models, challenges, novel techniques, and recommendations for future work are discussed along with each section. This review aims to provide a broad and in-depth view of the methods suitable for tissue and bone cutting simulations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
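    The Johnson-Cook flow-stress model mentioned above has the closed form sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m); a sketch with generic placeholder parameters, not calibrated bone properties:

```python
# Johnson-Cook flow stress: strain hardening * strain-rate * thermal softening.
import math

def johnson_cook(strain, strain_rate, T, A=50.0, B=100.0, n=0.1, C=0.02,
                 m=1.0, eps0=1.0, T_ref=20.0, T_melt=1000.0):
    T_star = (T - T_ref) / (T_melt - T_ref)     # homologous temperature
    return ((A + B * strain**n)
            * (1.0 + C * math.log(max(strain_rate / eps0, 1e-12)))
            * (1.0 - T_star**m))

print(johnson_cook(strain=0.2, strain_rate=100.0, T=37.0))  # flow stress (MPa)
```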

  18. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  19. FMEF/experimental capabilities

    International Nuclear Information System (INIS)

    Burgess, C.A.; Dronen, V.R.

    1981-01-01

    The Fuels and Materials Examination Facility (FMEF), under construction at the Hanford site north of Richland, Washington, will be one of the most modern facilities offering irradiated fuels and materials examination capabilities and fuel fabrication development technologies. Scheduled for completion in 1984, the FMEF will provide examination capability for fuel assemblies, fuel pins and test pins irradiated in the FFTF. Various functions of the FMEF are described, with emphasis on experimental data-gathering capabilities in the facility's Nondestructive and Destructive examination cell complex

  20. PENGELOLAAN KNOWLEDGE MANAGEMENT CAPABILITY DALAM MEMEDIASI DUKUNGAN INFORMATION TECHNOLOGY RELATEDNESS TERHADAP KINERJA PERUSAHAAN

    Directory of Open Access Journals (Sweden)

    Luluk Muhimatul Ifada

    2011-06-01

    The study examines whether and how information technology (IT) relatedness influences corporate performance. This study proposes that knowledge management (KM) is a critical organizational capability through which IT influences firm performance. Measurement of IT relatedness and KM capability uses a reflective second-order factor modeling approach, capturing complementarities among the four dimensions of IT relatedness (IT strategy-making processes, IT vendor management processes, IT human resource management processes, and IT infrastructure) and among the three dimensions of KM capability (product KM capability, customer KM capability, and managerial KM capability). A survey was conducted among 93 branch managers of banks in Central Java. A Structural Equation Model (SEM) was used to analyze the data with the SmartPLS (Partial Least Squares) software. The findings support the hypotheses of the study. IT relatedness of business units enhances the cross-unit KM capability of the corporation. The KM capability creates and exploits cross-unit synergies from the product, customer, and managerial knowledge resources of the corporation. These synergies increase corporate performance. IT relatedness of business units positively influences corporate performance, and also has significant indirect effects on corporate performance through the mediation of KM capability.

  1. Mathematical Modeling of Diverse Phenomena

    Science.gov (United States)

    Howard, J. C.

    1979-01-01

    Tensor calculus is applied to the formulation of mathematical models of diverse phenomena. Aeronautics, fluid dynamics, and cosmology are among the areas of application. The feasibility of combining tensor methods and computer capability to formulate problems is demonstrated. The techniques described are an attempt to simplify the formulation of mathematical models by reducing the modeling process to a series of routine operations, which can be performed either manually or by computer.

  2. Artificial intelligence techniques for photovoltaic applications: A review

    Energy Technology Data Exchange (ETDEWEB)

    Mellit, Adel [Department of Electronics, Faculty of Sciences Engineering, LAMEL Laboratory, Jijel University, Oulad-aissa, P.O. Box 98, Jijel 18000 (Algeria); Kalogirou, Soteris A. [Department of Mechanical Engineering and Materials Science and Engineering, Cyprus University of Technology, P.O. Box 50329, Limassol 3603 (Cyprus)

    2008-10-15

    Artificial intelligence (AI) techniques are becoming useful as alternate approaches to conventional techniques or as components of integrated systems. They have been used to solve complicated practical problems in various areas and are becoming more popular nowadays. They can learn from examples, are fault tolerant in the sense that they are able to handle noisy and incomplete data, are able to deal with nonlinear problems, and, once trained, can perform prediction and generalization at high speed. AI-based systems are being developed and deployed worldwide in a wide variety of applications, mainly because of their symbolic reasoning, flexibility and explanation capabilities. AI has been used in different sectors, such as engineering, economics, medicine, the military and marine applications. AI techniques have also been applied for modeling, identification, optimization, prediction, forecasting and control of complex systems. The paper outlines an understanding of how AI systems operate by way of presenting a number of problems in photovoltaic systems application. The problems presented cover three areas: forecasting and modeling of meteorological data, sizing of photovoltaic systems, and modeling, simulation and control of photovoltaic systems. The published literature presented in this paper shows the potential of AI as a design tool in photovoltaic systems. (author)

  3. Application of an enriched FEM technique in thermo-mechanical contact problems

    Science.gov (United States)

    Khoei, A. R.; Bahmani, B.

    2018-02-01

    In this paper, an enriched FEM technique is employed for the thermo-mechanical contact problem based on the extended finite element method. A fully coupled thermo-mechanical contact formulation is presented in the framework of the X-FEM technique that takes into account the deformable continuum mechanics and the transient heat transfer analysis. The Coulomb frictional law is applied for the mechanical contact problem, and a pressure-dependent thermal contact model is employed through an explicit formulation in the weak form of the X-FEM method. The equilibrium equations are discretized by the Newmark time-splitting method, and the final set of non-linear equations is solved by the Newton-Raphson method using a staggered algorithm. Finally, in order to illustrate the capability of the proposed computational model, several numerical examples are solved and the results are compared with those reported in the literature.

  4. Power capability prediction for lithium-ion batteries based on multiple constraints analysis

    International Nuclear Information System (INIS)

    Pan, Rui; Wang, Yujie; Zhang, Xu; Yang, Duo; Chen, Zonghai

    2017-01-01

    Highlights: • Multiple constraints for peak power capability prediction are analyzed in depth. • A multi-limited method is proposed for the peak power capability prediction of LIBs. • The EKF is used for model-based peak power capability prediction. • The FUDS and UDDS profiles are executed to evaluate the proposed method. - Abstract: The power capability of the lithium-ion battery is a key performance indicator for electric vehicles, and it is intimately correlated with the acceleration, regenerative braking and gradient-climbing power requirements. Therefore, an accurate power capability or state-of-power prediction is critical to a battery management system, which can help the battery work in a suitable area and prevent the battery from over-charging and over-discharging. However, the power capability is easily affected by dynamic load, voltage variation and temperature. In this paper, three different constraints in power capability prediction are introduced, and the advantages and disadvantages of the three methods are analyzed in depth. Furthermore, a multi-limited approach for power capability prediction is proposed, which overcomes the drawbacks of the three single-constraint methods. Subsequently, the extended Kalman filter algorithm is employed for model-based state-of-power prediction. In order to verify the proposed method, diverse experiments are executed to explore its efficiency, robustness, and precision. The results indicate that the proposed method clearly improves precision and robustness.
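
    As a rough illustration of the multi-limited idea (taking the tightest of several constraints), the following Python sketch combines a design current limit, a voltage-based limit from a first-order model, and an SOC-based limit over a prediction horizon. All parameter values are hypothetical, and the model is far simpler than the paper's EKF-based estimator.

        # Minimal multi-limited state-of-power sketch (illustrative values only,
        # not the paper's battery model or EKF implementation).
        R0 = 0.01               # ohmic resistance (ohm), hypothetical
        OCV = 3.3               # open-circuit voltage (V), hypothetical
        V_min = 2.5             # discharge cut-off voltage (V)
        I_max = 120.0           # manufacturer current limit (A)
        soc, capacity_Ah = 0.6, 100.0
        dt = 10.0               # prediction horizon (s)

        i_voltage = (OCV - V_min) / R0            # from V = OCV - I*R0 >= V_min
        i_soc = soc * capacity_Ah * 3600.0 / dt   # current that would drain the SOC in dt
        i_peak = min(I_max, i_voltage, i_soc)     # multi-limited: tightest constraint wins
        p_peak = (OCV - i_peak * R0) * i_peak     # peak discharge power (W)
        print(f"peak current: {i_peak:.1f} A, peak power: {p_peak:.1f} W")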

  5. Pyomo optimization modeling in Python

    CERN Document Server

    Hart, William E; Watson, Jean-Paul; Woodruff, David L; Hackebeil, Gabriel A; Nicholson, Bethany L; Siirola, John D

    2017-01-01

    This book provides a complete and comprehensive guide to Pyomo (Python Optimization Modeling Objects) for beginning and advanced modelers, including students at the undergraduate and graduate levels, academic researchers, and practitioners. Using many examples to illustrate the different techniques useful for formulating models, this text beautifully elucidates the breadth of modeling capabilities that are supported by Pyomo and its handling of complex real-world applications. This second edition provides an expanded presentation of Pyomo’s modeling capabilities, providing a broader description of the software that will enable the user to develop and optimize models. Introductory chapters have been revised to extend tutorials; chapters that discuss advanced features now include the new functionalities added to Pyomo since the first edition including generalized disjunctive programming, mathematical programming with equilibrium constraints, and bilevel programming. Pyomo is an open source software package fo...
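
    To give a flavor of the modeling style the book covers, here is a minimal sketch of a Pyomo concrete model for a small linear program; it assumes Pyomo and the GLPK solver are installed and is a generic example, not one taken from the book.

        # A tiny linear program in Pyomo (assumes pyomo and GLPK are installed).
        from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                                   SolverFactory, NonNegativeReals, maximize)

        model = ConcreteModel()
        model.x = Var(domain=NonNegativeReals)
        model.y = Var(domain=NonNegativeReals)
        model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
        model.cap1 = Constraint(expr=model.x + model.y <= 4)
        model.cap2 = Constraint(expr=model.x + 3 * model.y <= 6)

        SolverFactory('glpk').solve(model)
        print(model.x(), model.y())   # calling a Var returns its solved value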

  6. Building Micro-Foundations for the Routines, Capabilities, and Performance Links

    DEFF Research Database (Denmark)

    Abell, Peter; Felin, Teppo; Foss, Nicolai Juul

    2007-01-01

    a neglect of micro-foundations - is incomplete. There are no mechanisms that work solely on the macro-level, directly connecting routines and capabilities to firm-level outcomes. While routines and capabilities are useful shorthand for complicated patterns of individual action and interaction, ultimately...... they are best understood at the micro-level. Second, we provide a formal model that shows precisely why macro explanation is incomplete and which exemplifies how explicit micro-foundations may be built for notions of routines and capabilities and for how these impact firm performance....

  7. NGNP Data Management and Analysis System Modeling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Cynthia D. Gentillon

    2009-09-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  8. NGNP Data Management and Analysis System Modeling Capabilities

    International Nuclear Information System (INIS)

    Gentillon, Cynthia D.

    2009-01-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  9. Available transfer capability calculation considering voltage stability margin

    International Nuclear Information System (INIS)

    Pan, Xiong; Xu, Guoyu

    2005-01-01

    To enable electricity trades to be carried out successfully, the calculation of available transfer capability (ATC) must coordinate the relationship between security and economic benefits. In this paper, a model for ATC calculations consistent with the trade-off mechanism in the electricity market was set up. The impact of branch outage contingency on the static voltage stability margin was analyzed, and contingency ranking was performed through sensitivity indices of branch flows with respect to the loading margin. Optimal power flow based on the primal-dual interior point method was applied to obtain ATC when the N-1 security constraints were included. The calculation results of the IEEE 30-bus and IEEE 118-bus systems show that the proposed model and method are valid. (author) Keywords: N-1 security constraints; Electricity market; Available transfer capability; Optimal power flow; Voltage stability

  10. Radioactive material package testing capabilities at Sandia National Laboratories

    International Nuclear Information System (INIS)

    Uncapher, W.L.; Hohnstreiter, G.F.

    1995-01-01

    Evaluation and certification of radioactive and hazardous material transport packages can be accomplished by subjecting these packages to normal transport and hypothetical accident test conditions. The regulations allow package designers to certify packages using analysis, testing, or a combination of analysis and testing. Testing can be used to substantiate assumptions used in analytical models and to demonstrate package structural and thermal response. Regulatory test conditions include impact, puncture, crush, penetration, water spray, immersion, and thermal environments. Testing facilities are used to simulate the required test conditions and provide measurement response data. Over the past four decades, comprehensive testing facilities have been developed at Sandia National Laboratories to perform a broad range of verification and certification tests on hazardous and radioactive material packages or component sections. Sandia's facilities provide an experience base that has been established during the development and certification of many package designs. These unique facilities, along with innovative instrumentation data collection capabilities and techniques, simulate a broad range of testing environments. In certain package designs, package testing can be an economical alternative to complex analysis to resolve regulatory questions or concerns

  11. Controller design for flexible, distributed parameter mechanical arms via combined state space and frequency domain techniques

    Science.gov (United States)

    Book, W. J.; Majett, M.

    1982-01-01

    The potential benefits of the ability to control more flexible mechanical arms are discussed. A justification is made in terms of speed of movement. A new controller design procedure is then developed to provide this capability. It uses both a frequency domain representation and a state variable representation of the arm model. The frequency domain model is used to update the modal state variable model to ensure decoupled states. The technique is applied to a simple example with encouraging results.

  12. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore's truncation) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  13. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  14. Development of an environmental radiation analysis research capability in the UAE

    International Nuclear Information System (INIS)

    Kim, Sung-yeop; Kim, Chankyu; Lee, Kun Jai; Chang, Soon Heung; Elmasri, Hasna; Beeley, Philip A.

    2013-01-01

    The UAE has started a nuclear energy program with the aim of having its first four units on-line between 2017 and 2020, and it is important that the country has an environmental radiation analysis capability to support this program. Khalifa University is therefore implementing a research laboratory to support both experimental analysis and radionuclide transport modeling in the aquatic and terrestrial environment. This paper outlines the development of this capability as well as the work in progress and planned for the future. - Highlights: • New university environmental radiation laboratory established in UAE. • Facilities included for alpha, beta and gamma radiometrics. • Transport modeling capability is being established. • Laboratory also used for education and training. • Robotic methods for sampling and analysis are under development

  15. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...

  16. Fuzzy modeling and control of rotary inverted pendulum system using LQR technique

    International Nuclear Information System (INIS)

    Fairus, M A; Mohamed, Z; Ahmad, M N

    2013-01-01

    Rotary inverted pendulum (RIP) system is a nonlinear, non-minimum phase, unstable and underactuated system. Controlling such a system can be a challenge and is considered a benchmark problem in control theory. Prior to designing a controller, equations that represent the behaviour of the RIP system must be developed as accurately as possible without compromising the complexity of the equations. Through the Takagi-Sugeno (T-S) fuzzy modeling technique, the nonlinear system model is transformed into several local linear time-invariant models which are then blended together to reproduce, or approximate, the nonlinear system model within a local region. A parallel distributed compensation (PDC) based fuzzy controller using the linear quadratic regulator (LQR) technique is designed to control the RIP system. The results show that the designed controller is able to balance the RIP system.
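
    For readers unfamiliar with the LQR step, the sketch below computes a continuous-time LQR gain for a generic linearized fourth-order pendulum-like model by solving the algebraic Riccati equation with SciPy; the matrices and weights are placeholders, not the identified RIP dynamics from the paper.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Placeholder linearized model x' = Ax + Bu near the upright equilibrium.
        A = np.array([[0, 1, 0, 0],
                      [0, 0, 14.7, 0],
                      [0, 0, 0, 1],
                      [0, 0, 24.5, 0]], dtype=float)
        B = np.array([[0.0], [2.0], [0.0], [3.5]])
        Q = np.diag([10.0, 1.0, 10.0, 1.0])   # state weighting
        R = np.array([[1.0]])                 # input weighting

        P = solve_continuous_are(A, B, Q, R)  # solve the algebraic Riccati equation
        K = np.linalg.inv(R) @ B.T @ P        # optimal state feedback u = -Kx
        print(K)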

  17. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    Science.gov (United States)

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

    The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation eventually capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments for each dataset.

  18. Power capability evaluation for lithium iron phosphate batteries based on multi-parameter constraints estimation

    Science.gov (United States)

    Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang

    2018-01-01

    The battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the nonlinearity of the battery model is very complex, especially for lithium iron phosphate batteries. Besides, the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter constraints dynamic estimation method is proposed to predict the battery continuous period power capability. A high-fidelity battery model which considers the battery polarization and hysteresis phenomena is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated; specifically, the state-of-energy is considered in power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation and suppress noise interference, a UKF-based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.

  19. Model-independent Exoplanet Transit Spectroscopy

    Science.gov (United States)

    Aronson, Erik; Piskunov, Nikolai

    2018-05-01

    We propose a new data analysis method for obtaining transmission spectra of exoplanet atmospheres and brightness variation across the stellar disk from transit observations. The new method is capable of recovering exoplanet atmosphere absorption spectra and stellar specific intensities without relying on theoretical models of stars and planets. We simultaneously fit both stellar specific intensity and planetary radius directly to transit light curves. This allows stellar models to be removed from the data analysis. Furthermore, we use a data-quality-weighted filtering technique to achieve an optimal trade-off between spectral resolution and reconstruction fidelity, homogenizing the signal-to-noise ratio across the wavelength range. Such an approach is more efficient than conventional data binning onto a low-resolution wavelength grid. We demonstrate that our analysis is capable of reproducing results achieved by using an explicit quadratic limb-darkening equation and that the filtering technique helps eliminate spurious spectral features in regions with strong telluric absorption. The method is applied to the VLT FORS2 observations of the exoplanets GJ 1214 b and WASP-49 b, and our results are in agreement with previous studies. Comparisons between the obtained stellar specific intensity and numerical models indicate that the method is capable of accurately reconstructing the specific intensity. The proposed method enables more robust characterization of exoplanetary atmospheres by separating the derivation of planetary transmission and stellar specific intensity spectra (which is model-independent) from their chemical and physical interpretation.

  20. Developing Alliance Capabilities

    DEFF Research Database (Denmark)

    Heimeriks, Koen H.; Duysters, Geert; Vanhaverbeke, Wim

    This paper assesses the differential performance effects of learning mechanisms on the development of alliance capabilities. Prior research has suggested that different capability levels could be identified in which specific intra-firm learning mechanisms are used to enhance a firm's alliance...

  1. Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.

    Science.gov (United States)

    Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H

    2013-05-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.
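
    As background for the filtering step, a linear Kalman filter predict/update cycle can be written in a few lines of NumPy. This generic sketch shows the recursive structure that makes real-time reconstruction computationally cheap; it is not the paper's Cartesian dynamic model.

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of a linear Kalman filter."""
            # Predict the state and its covariance forward in time.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the new measurement z.
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P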

  2. [Preparation of simulated craniocerebral models via three dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    Three dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and relative neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and surgical simulation. 3D-printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips, as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, which helped to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models can improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to functional anatomy study.

  3. NWChem Meeting on Science Driven Petascale Computing and Capability Development at EMSL

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.

    2007-02-19

    On January 25, and 26, 2007, an NWChem meeting was held that was attended by 65 scientists from 29 institutions including 22 universities and 5 national laboratories. The goals of the meeting were to look at major scientific challenges that could be addressed by computational modeling in environmental molecular sciences, and to identify the associated capability development needs. In addition, insights were sought into petascale computing developments in computational chemistry. During the meeting common themes were identified that will drive the need for the development of new or improved capabilities in NWChem. Crucial areas of development that the developer's team will be focusing on are (1) modeling of dynamics and kinetics in chemical transformations, (2) modeling of chemistry at interfaces and in the condensed phase, and (3) spanning longer time scales in biological processes modeled with molecular dynamics. Various computational chemistry methodologies were discussed during the meeting, which will provide the basis for the capability developments in the near or long term future of NWChem.

  4. Organizational Capability Deployment Analysis for Technology Conversion into Processes, Products and Services

    Directory of Open Access Journals (Sweden)

    Tomoe Daniela Hamanaka Gusberti

    2013-12-01

    Full Text Available This article discusses Organizational Capabilities as the basic components of business models that emerged under the New Product Development Process and Technological Management. In the context of new technology-based company development, it adopts a qualitative research approach in order to identify, analyze and underpin organizational capability deployment in a process of technology conversion into a product and service. The analysis was carried out considering concepts from the literature review, in a technology-based enterprise started as an academic spin-off company. The analysis enabled the elicitation of a business model and the discussion of its components and corresponding evolution hypotheses. The paper provides an example of capability deployment according to the established theory, illustrated by a case study. The study does not just enumerate the needed partners, resources, and customer channels; it describes their connections, representing the logic behind the decisions made to develop the conceptual model. This detailed representation of the model allows better-addressed discussions.

  5. Electromagnetic validation of fault-ride through capabilities of wind turbines

    DEFF Research Database (Denmark)

    Arana Aristi, Iván; Garcia-Valle, Rodrigo; Sharma, Ranjan

    2010-01-01

    Scope of the present project is the development and validation of an electro-magnetic transient model of fixed-speed wind turbines. The research work is focused on the development of a fixed-speed wind turbine model with fault-ride through capabilities during transient over-voltages. The model is de...

  6. Identification of Fissionable Materials Using the Tagged Neutron Technique

    International Nuclear Information System (INIS)

    Keegan, R.P.; Hurley, J.P.; Tinsley, J.R.; Trainham, R.

    2009-01-01

    This summary describes experiments to detect and identify fissionable materials using the tagged neutron technique. The objective of this work is to enhance homeland security capability to find fissionable material that may be smuggled inside shipping boxes, containers, or vehicles. The technique distinguishes depleted uranium from lead, steel, and tungsten. Future work involves optimizing the technique to increase the count rate by many orders of magnitude and to build in the additional capability to image hidden fissionable materials. The tagged neutron approach is very different from other techniques based on neutron die-away or photo-fission. This work builds on the development of the Associated Particle Imaging (API) technique at the Special Technologies Laboratory (STL). Similar investigations have been performed by teams at the Oak Ridge National Laboratory (ORNL), the Khlopin Radium Institute in Russia, and by the EURITRACK collaboration in the European Union

  7. CRAC2 model description

    International Nuclear Information System (INIS)

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions

  8. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.
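
    The core of any such Monte Carlo model is sampling photon free paths against absorption and scattering coefficients. The toy one-dimensional sketch below shows only that sampling loop; the coefficients are illustrative, and the geometry is far simpler than the layered skin model of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        mu_a, mu_s = 0.1, 10.0        # illustrative absorption/scattering coefficients (1/mm)
        mu_t = mu_a + mu_s

        def absorbed_depths(n_photons=10_000, z_max=1.0):
            """Depths (mm) at which photons are absorbed in a 1D slab."""
            depths = []
            for _ in range(n_photons):
                z = 0.0
                while True:
                    z += -np.log(rng.random()) / mu_t   # sample a free path length
                    if z > z_max:                       # photon exits the slab
                        break
                    if rng.random() < mu_a / mu_t:      # absorption event
                        depths.append(z)
                        break
                    # otherwise a (forward) scattering event; keep propagating
            return depths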

  9. An experimental technique for the modelling of air flow movements in nuclear plant

    International Nuclear Information System (INIS)

    Ainsworth, R.W.; Hallas, N.J.

    1986-01-01

    This paper describes an experimental technique developed at Harwell to model ventilation flows in plant at 1/5th scale. The technique achieves dynamic similarity not only for forced convection imposed by the plant ventilation system, but also for the interaction between natural convection (from heated objects) and forced convection. The use of a scale model to study flow of fluids is a well established technique, relying upon various criteria, expressed in terms of dimensionless numbers, to achieve dynamic similarity. For forced convective flows, simulation of Reynolds number is sufficient, but to model natural convection and its interaction with forced convection, the Rayleigh, Grashof and Prandtl numbers must be simulated at the same time. This paper describes such a technique, used in experiments on a hypothetical glove box cell to study the interaction between forced and natural convection. The model contained features typically present in a cell, such as a man, motor, stairs, glove box, etc. The aim of the experiment was to study the overall flow patterns, especially around the model man 'working' at the glove box. The cell ventilation was theoretically designed to produce a downward flow over the face of the man working at the glove box. However, the results have shown that the flow velocities produced an upwards flow over the face of the man. The work has indicated the viability of modelling simultaneously the forced and natural convection processes in a cell. It has also demonstrated that simplistic assumptions cannot be made about ventilation flow patterns. (author)

  10. Capability of DFIG WTS to ride through recurring asymmetrical grid faults

    DEFF Research Database (Denmark)

    Chen, Wenjie; Blaabjerg, Frede; Chen, Min

    2014-01-01

    The Wind Turbine Systems (WTS) are required to ride through recurring grid faults in some countries. In this paper, the capability of Doubly Fed Induction Generator (DFIG) WTS to ride through recurring asymmetrical grid faults is evaluated and compared with the ride-through capability under a single...... asymmetrical grid fault. A mathematical model of the DFIG under recurring asymmetrical grid faults is presented. The analysis is verified by simulations on a 1.5 MW DFIG model and by experiments on a reduced-scale DFIG test system....

  11. Dynamic capability in an under-researched cultural environment

    Directory of Open Access Journals (Sweden)

    Fatemeh Rezaee

    2016-02-01

    Full Text Available During the past few years, dynamic capability (DC) has been considered an important issue in the banking industry. This paper presents a survey on dynamic capability and its role in reaching sustainable competitive advantage (SCA) within Mellat Bank of Iran (MBI). A valid research instrument is utilized to conduct a survey among 150 managers from MBI. The study utilizes structural equation modelling to examine different hypotheses based on an integrated model of DC and SCA. According to literature studies, expert opinions and exploratory factor analysis, DC is classified into sensing, learning, reconfiguration, and coordination. Furthermore, SCA of the banking industry is classified into three dimensions: market, customer, and financial performance. The results indicate that DC had the greatest effect on the market-centered dimension, while it had the least influence on the customer-centered dimension.

  12. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, Luis E., E-mail: luisen.herranz@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Garcia, Monica, E-mail: monica.gmartin@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Morandi, Sonia, E-mail: sonia.morandi@rse-web.it [Nuclear and Industrial Plant Safety Team, Power Generation System Department, RSE, via Rubattino 54, 20134 Milano (Italy)

    2013-12-15

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have

  13. A Learning Framework for Control-Oriented Modeling of Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Rubio-Herrero, Javier; Chandan, Vikas; Siegel, Charles M.; Vishnu, Abhinav; Vrabie, Draguna L.

    2018-01-18

    Buildings consume a significant amount of energy worldwide. Several building optimization and control use cases require models of energy consumption which are control-oriented, have high predictive capability, impose minimal data pre-processing requirements, and have the ability to be adapted continuously to account for changing conditions as new data become available. Data-driven modeling techniques that have been investigated so far, while promising in the context of buildings, have been unable to simultaneously satisfy all the requirements mentioned above. In this context, deep learning techniques such as Recurrent Neural Networks (RNNs) hold promise, empowered by advanced computational capabilities and big data opportunities. In this paper, we propose a deep learning based methodology for the development of control-oriented models for building energy management and test it on data from a real building. Results show that the proposed methodology outperforms other data-driven modeling techniques significantly. We perform a detailed analysis of the proposed methodology along dimensions such as topology, sensitivity, and downsampling. Lastly, we conclude by envisioning a building analytics suite empowered by the proposed deep framework that can drive several use cases related to building energy management.
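
    To make the recurrent structure concrete, here is a minimal Elman-style RNN cell in NumPy; the feature count, hidden size and weights are illustrative placeholders, not the architecture trained in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid = 4, 16   # e.g. weather and setpoint features (hypothetical sizes)
        Wx = rng.normal(scale=0.1, size=(n_hid, n_in))    # input-to-hidden weights
        Wh = rng.normal(scale=0.1, size=(n_hid, n_hid))   # recurrent weights
        Wo = rng.normal(scale=0.1, size=(1, n_hid))       # hidden-to-output weights

        def rnn_step(x, h):
            """One time step: update the hidden state, predict energy use."""
            h = np.tanh(Wx @ x + Wh @ h)
            y = Wo @ h
            return y, h

        h = np.zeros(n_hid)
        for x_t in rng.normal(size=(24, n_in)):   # a day of hourly inputs (synthetic)
            y_t, h = rnn_step(x_t, h)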

  14. Does organizational agility affect organizational learning capability? Evidence from commercial banking

    Directory of Open Access Journals (Sweden)

    Zaina Mustafa Mahmoud Hamad

    2017-08-01

    Full Text Available Both organizational agility and learning capability are prerequisites for organizational survival and success. This study explores the contribution of agility practices to organizational learning capabilities at the commercial banks in Jordan. To examine the proposed model, a sample of 158 employees within top and middle managements was used. Structural Equation Modeling was conducted for assessing validity and reliability of the measurement instrument, evaluating model fit, and testing hypotheses. This study recognizes agility as a key element of learning facilitators. Findings affirm the strategic value of agility and conclude that administrators working within agile organizations would be able to acquire conditions that foster learning.

  15. Experience with the Large Eddy Simulation (LES) Technique for the Modelling of Premixed and Non-premixed Combustion

    OpenAIRE

    Malalasekera, W; Ibrahim, SS; Masri, AR; Gubba, SR; Sadasivuni, SK

    2013-01-01

    Compared to RANS based combustion modelling, the Large Eddy Simulation (LES) technique has recently emerged as a more accurate and very adaptable technique in terms of handling complex turbulent interactions in combustion modelling problems. In this paper application of LES based combustion modelling technique and the validation of models in non-premixed and premixed situations are considered. Two well defined experimental configurations where high quality data are available for validation is...

  16. The potential of 3D techniques for cultural heritage object documentation

    Science.gov (United States)

    Bitelli, Gabriele; Girelli, Valentina A.; Remondino, Fabio; Vittuari, Luca

    2007-01-01

    The generation of 3D models of objects has become an important research topic in many fields of application, such as industrial inspection, robotics, navigation and body scanning. Recently the techniques for generating photo-textured 3D digital models have also attracted interest in the field of Cultural Heritage, due to their capability to combine high-precision metrical information with a qualitative and photographic description of the objects. In fact this kind of product is a fundamental support for the documentation, study and restoration of works of art, up to the production of replicas by fast prototyping techniques. Close-range photogrammetric techniques are nowadays more and more frequently used for the generation of precise 3D models. With the advent of automated procedures and fully digital products in the 1990s, photogrammetry has become easier to use and cheaper, and nowadays a wide range of commercial software is available to calibrate, orient and reconstruct objects from images. This paper presents the complete process for the derivation of a photorealistic 3D model of an important basalt stela (about 70 x 60 x 25 cm) discovered in the archaeological site of Tilmen Höyük, in Turkey, dating back to the 2nd millennium BC. We report the modeling performed using passive and active sensors and the comparison of the achieved results.

  17. Atmospheric and dispersion modeling in areas of highly complex terrain employing a four-dimensional data assimilation technique

    International Nuclear Information System (INIS)

    Fast, J.D.; O'Steen, B.L.

    1994-01-01

    The results of this study indicate that the current data assimilation technique can have a positive impact on the mesoscale flow fields; however, care must be taken in its application to grids of relatively fine horizontal resolution. Continuous FDDA is a useful tool in producing high-resolution mesoscale analysis fields that can be used to (1) create better initial conditions for mesoscale atmospheric models and (2) drive transport models for dispersion studies. While RAMS is capable of predicting the qualitative flow during this evening, additional experiments need to be performed to improve the prognostic forecasts made by RAMS and refine the FDDA procedure so that the overall errors are reduced even further. Despite the fact that a great deal of computational time is necessary to execute RAMS and LPDM in the configuration employed in this study, recent advances in workstations are making applications such as this more practical. As the speed of these machines increases in the next few years, it will become feasible to employ prognostic, three-dimensional mesoscale/transport models to routinely predict atmospheric dispersion of pollutants, even in highly complex terrain. For example, the version of RAMS in this study could be run in a 'nowcasting' mode that would continually assimilate local and regional observations as soon as they become available. The atmospheric physics in the model would be used to determine the wind field where no observations are available. The three-dimensional flow fields could be used as dynamic initial conditions for a model forecast. The output from this type of modeling system would have to be compared to existing diagnostic, mass-consistent models to determine whether the wind field and dispersion forecasts are significantly improved

  18. Capturing Firms’ Heterogeneity through Marketing and IT Capabilities in SMEs

    Directory of Open Access Journals (Sweden)

    María A. Ramón-Jerónimo

    2017-11-01

    Full Text Available To achieve sustainability, firms capable of surviving economic recessions are of key relevance; the capabilities that firms need to face dynamic environments remain an open question. In this work, a new procedure is proposed to capture firms' heterogeneity with regard to the capabilities they possess for operating efficiently in dynamic environments. This approach enables the identification of the classes of firms that develop efficiency with a specific integration of resources. While the literature has most often measured firm capabilities using subjective measures, this study suggests the use of Data Envelopment Analysis to capture the ability to transform resources into outcomes, and of Latent Class Regression to capture differences across firms that explain firms' heterogeneity in the way they perform. By combining these two techniques, this work presents a way to identify those firms that need to invest in and develop certain capabilities. This work analyses a large dataset of manufacturing Small and Medium Enterprises (SMEs) extracted from the Business and Strategy survey provided by Fundación de la Sociedad Estatal de Participaciones Industriales (SEPI) in Spain. The dataset comprises 10,960 observations from 2048 firms during the period 1994–2011. The complete dataset was employed to calculate manufacturing firms' efficiency. In a second step, data were cleaned to eliminate outliers, and to identify SMEs and observations with records of IT capabilities. As a result, 329 manufacturing SMEs were analysed to capture their heterogeneity. The results contribute to the current literature by explaining how manufacturing SMEs differ in the capabilities they need to develop in order to be efficient and adapt to environmental changes. While approximately 20% of the firms analysed really take advantage of recessions through their investment in R&D, the remaining 80% need to adjust their size or invest in IT capabilities to become competitive.

  19. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    Science.gov (United States)

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  20. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    International Nuclear Information System (INIS)

    Andrei, Petru; Oniciuc, Liviu; Stancu, Alexandru; Stoleriu, Laurentiu

    2007-01-01

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over- or under-specified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented
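
    As a toy illustration of this style of identification (not the Energetic, Jiles-Atherton or Preisach model itself), the sketch below fits the parameters of a simple tanh-shaped magnetization branch to synthetic data by minimizing a least-squares residual with SciPy's conjugate gradient method; the model and data are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic "measured" branch for the toy model M(H) = Ms*tanh((H - Hc)/a).
        H = np.linspace(-5.0, 5.0, 50)
        Ms_true, Hc_true, a_true = 1.2, 0.5, 1.5      # hypothetical parameters
        M_meas = Ms_true * np.tanh((H - Hc_true) / a_true)

        def residual(p):
            Ms, Hc, a = p
            return np.sum((Ms * np.tanh((H - Hc) / a) - M_meas) ** 2)

        fit = minimize(residual, x0=(1.0, 0.0, 1.0), method='CG')
        print(fit.x)   # recovers (Ms, Hc, a)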

  1. Global Monitoring of the CTBT: Progress, Capabilities and Plans (Invited)

    Science.gov (United States)

    Zerbo, L.

    2013-12-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), established in 1996, is tasked with building up the verification regime of the CTBT. The regime includes a global system for monitoring the earth, the oceans and the atmosphere for nuclear tests, and an on-site inspection (OSI) capability. More than 80% of the 337 facilities of the International Monitoring System (IMS) have been installed and are sending data to the International Data Centre (IDC) in Vienna, Austria for processing. These IMS data along with IDC processed and reviewed products are available to all States that have signed the Treaty. Concurrent with the build-up of the global monitoring networks, near-field geophysical methods are being developed and tested for OSIs. The monitoring system is currently operating in a provisional mode, as the Treaty has not yet entered into force. Progress in installing and operating the IMS and the IDC and in building up an OSI capability will be described. The capabilities of the monitoring networks have progressively improved as stations are added to the IMS and IDC processing techniques refined. Detection thresholds for seismic, hydroacoustic, infrasound and radionuclide events have been measured and in general are equal to or lower than the predictions used during the Treaty negotiations. The measurements have led to improved models and tools that allow more accurate predictions of future capabilities and network performance under any configuration. Unplanned tests of the monitoring network occurred when the DPRK announced nuclear tests in 2006, 2009, and 2013. All three tests were well above the detection threshold and easily detected and located by the seismic monitoring network. In addition, noble gas consistent with the nuclear tests in 2006 and 2013 (according to atmospheric transport models) was detected by stations in the network. On-site inspections of these tests were not conducted as the Treaty has not entered

  2. Development Of A Data Assimilation Capability For RAPID

    Science.gov (United States)

    Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.

    2017-12-01

    The global decline of in situ observations associated with the increasing ability to monitor surface water from space motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early developments of such data assimilation approach into RAPID. Given the linear and matrix-based structure of the model, we chose to apply a direct Kalman filter, hence allowing for the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations and our early results demonstrate the feasibility of the approach. Additionally, the use of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason 2) and upcoming satellite missions (e.g. SWOT), and ultimately apply the scheme globally.
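
    For reference, the Muskingum routing step that underlies RAPID can be sketched for a single reach in a few lines of Python; the K, X and time-step values below are illustrative, and RAPID itself solves the matrix form of these equations over an entire river network.

        import numpy as np

        def muskingum_route(inflow, K=3600.0, X=0.2, dt=900.0):
            """Route an inflow hydrograph (m3/s) through one reach."""
            denom = 2.0 * K * (1.0 - X) + dt
            c0 = (dt - 2.0 * K * X) / denom
            c1 = (dt + 2.0 * K * X) / denom
            c2 = (2.0 * K * (1.0 - X) - dt) / denom
            outflow = np.zeros(len(inflow))
            outflow[0] = inflow[0]        # assume an initial steady state
            for t in range(1, len(inflow)):
                outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
            return outflow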

  3. Determinants of Marketing Performance: Innovation, Market Capabilities and Marketing Performance

    OpenAIRE

    Naili Farida

    2016-01-01

    This research aims to analyze the causal influence of innovation, market capability, social capital and entrepreneurial orientation on marketing performance. Organizational innovation is a basic focus of Total Quality Management. Innovation has a role in technological development and a competitive economic environment. The sampling technique used is purposive sampling, with 58 respondents who own batik small and medium enterprises, known as UKM. Small or medium businesses can grow and ...

  4. Structural reliability assessment capability in NESSUS

    Science.gov (United States)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessment capability is described. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
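
    As a rough picture of what probabilistic structural codes compute, the sketch below estimates a failure probability by Monte Carlo sampling of a simple limit state g = R - S; the distributions are invented for illustration, and NESSUS itself uses far more efficient probabilistic algorithms than crude sampling.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        # Hypothetical lognormal resistance R and normal load effect S.
        R = rng.lognormal(mean=np.log(50.0), sigma=0.1, size=n)
        S = rng.normal(loc=40.0, scale=5.0, size=n)
        pf = np.mean(R - S < 0.0)        # estimated probability of failure
        print(f"Pf ~ {pf:.4f}")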

  5. Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo

    2018-04-01

    PSTD (Pseudo Spectral Time Domain) is an excellent model for the light scattering simulation of nonspherical aerosol particles. However, due to the particularity of its discretization of the Maxwell's equations, the traditional Total Field/Scattering Field (TF/SF) technique for FDTD (Finite Difference Time Domain) is not applicable to PSTD, and the time-consuming pure scattering field technique is mainly applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total field and the scattering field regions with incident terms, where the incident terms are obtained by weighting the incident field with a window function. To optimally determine the thickness of the connecting region and the window function type for PSTD calculations, their influence on the modeling accuracy is first analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with the pure scattering field technique in both calculation accuracy and efficiency. The results show that the performance of PSTD is not sensitive to the variation of window functions. The number of connecting layers required decreases with increasing spatial resolution; for a spatial resolution of 24 grids per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show an excellent consistency with those of well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique can introduce the incident wave precisely. The weighted TF/SF technique shows higher computational efficiency than the pure scattering field technique.

  6. Gamma-Ray Emission Tomography: Modeling and Evaluation of Partial-Defect Testing Capabilities

    International Nuclear Information System (INIS)

    Jacobsson Svard, S.; Jansson, P.; Davour, A.; Grape, S.; White, T.A.; Smith, L.E.; Deshmukh, N.; Wittman, R.S.; Mozin, V.; Trellue, H.

    2015-01-01

    Gamma emission tomography (GET) for spent nuclear fuel verification is the subject of IAEA MSP project JNT1955. In line with IAEA Safeguards R&D plan 2012-2023, the aim of this effort is to "develop more sensitive and less intrusive alternatives to existing NDA instruments to perform partial defect test on spent fuel assembly prior to transfer to difficult to access storage". The current viability study constitutes the first phase of three, with evaluation and decision points between each phase. Two verification objectives have been identified; (1) counting of fuel pins in tomographic images without any a priori knowledge of the fuel assembly under study, and (2) quantitative measurements of pin-by-pin properties, e.g., burnup, for the detection of anomalies and/or verification of operator-declared data. Previous measurements performed in Sweden and Finland have proven GET highly promising for detecting removed or substituted fuel rods in BWR and VVER-440 fuel assemblies even down to the individual fuel rod level. The current project adds to previous experiences by pursuing a quantitative assessment of the capabilities of GET for partial defect detection, across a broad range of potential IAEA applications, fuel types and fuel parameters. A modelling and performance-evaluation framework has been developed to provide quantitative GET performance predictions, incorporating burn-up and cooling-time calculations, Monte Carlo radiation-transport and detector-response modelling, GET instrument definitions (existing and notional) and tomographic reconstruction algorithms, which use recorded gamma-ray intensities to produce images of the fuel's internal source distribution or conclusive rod-by-rod data. The framework also comprises image-processing algorithms and performance metrics that recognize the inherent tradeoff between the probability of detecting missing pins and the false-alarm rate. Here, the modelling and analysis framework is

  7. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

    Science.gov (United States)

    Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

    2014-12-01

    A chronology of mathematical models for the heat and mass transfer equations is proposed for the prediction of moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée), or instant controlled pressure drop, technique. The DIC technique has potential as a widely applicable dehydration method for high-value food, offering nutrition maintenance and the best possible quality for food storage. The model is governed by a regression model, followed by 2D Fick's and Fourier's parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. The simulations of heat and mass transfer equations of parabolic and elliptic-parabolic types through numerical methods based on the finite difference method (FDM) have been illustrated. Intel®Core™2 Duo processors with the Linux operating system and the C programming language have been used as the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods have been shown comparatively.
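
    To indicate the flavor of such FDM simulations, here is a minimal explicit finite-difference solution of the 1D Fourier heat equation in Python (the paper's own implementation is 2D and written in C); all material parameters below are illustrative.

        import numpy as np

        # Explicit FDM for dT/dt = alpha * d2T/dx2 on a thin slab (illustrative).
        alpha = 1.0e-7          # thermal diffusivity (m2/s), hypothetical
        L, nx = 0.01, 51        # slab thickness (m) and number of grid points
        dt, steps = 0.05, 2000  # time step (s) and number of steps
        dx = L / (nx - 1)
        r = alpha * dt / dx**2
        assert r <= 0.5, "explicit scheme stability limit violated"

        T = np.full(nx, 20.0)   # initial slab temperature (deg C)
        T[0] = T[-1] = 60.0     # boundary temperature imposed by the process
        for _ in range(steps):
            T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])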

  8. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection through the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT's contribution to reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in the reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection through the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.
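
    A back-of-the-envelope sketch of how a single coverage figure can enter a component unavailability estimate is shown below. The split into promptly repaired (covered) and latent (uncovered, caught at the next periodic test) contributions, and all numerical values, are illustrative assumptions rather than the paper's integrated-fault-coverage model.

    ```python
    # Steady-state unavailability of a repairable digital component as a
    # function of fault coverage. Formula and numbers are illustrative.
    lam   = 1e-5   # component failure rate per hour (assumed)
    mttr  = 8.0    # mean time to repair, hours (assumed)
    t_int = 730.0  # periodic surveillance test interval, hours (assumed)

    def unavailability(coverage):
        covered = coverage * lam * mttr             # detected -> prompt repair
        latent  = (1 - coverage) * lam * t_int / 2  # undetected until next test
        return covered + latent

    for c in (0.90, 0.99, 0.999):
        print(f"fault coverage {c:.3f}: U = {unavailability(c):.2e}")
    # Higher coverage shifts faults from the latent term to the quickly
    # repaired term, driving unavailability down.
    ```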

  9. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical and polarized-light microscopy, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. The use of more than one technique for unambiguous identification of the domain structure is also described.
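
    To give a flavor of the phase-field class of models the review covers, the sketch below relaxes a one-dimensional 180° domain wall by gradient flow on a Landau-type free energy. The dimensionless coefficients, grid and boundary handling are illustrative choices, not taken from any specific model in the review.

    ```python
    import numpy as np

    # 1D phase-field relaxation of a 180-degree ferroelectric domain wall:
    # minimize F = integral of (-a/2 P^2 + b/4 P^4 + k/2 (dP/dx)^2)
    # by gradient flow dP/dt = -dF/dP. Coefficients are dimensionless.
    a, b, k = 1.0, 1.0, 1.0
    n, dx, dt = 200, 0.1, 0.002

    x = np.arange(n) * dx
    P = np.where(x < x.mean(), -1.0, 1.0)    # two antiparallel domains

    for _ in range(5000):
        lap = (np.roll(P, 1) - 2*P + np.roll(P, -1)) / dx**2
        lap[0] = lap[-1] = 0.0               # fixed-P boundaries
        P += dt * (a*P - b*P**3 + k*lap)     # gradient-flow update
        P[0], P[-1] = -1.0, 1.0

    # P now follows the familiar tanh(x / sqrt(2k/a)) wall profile.
    print("wall width estimate:", dx * np.count_nonzero(np.abs(P) < 0.75))
    ```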

  10. Facility Modeling Capability Demonstration Summary Report

    International Nuclear Information System (INIS)

    Key, Brian P.; Sadasivan, Pratap; Fallgren, Andrew James; Demuth, Scott Francis; Aleman, Sebastian E.; Almeida, Valmor F. de; Chiswell, Steven R.; Hamm, Larry; Tingey, Joel M.

    2017-01-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL) and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation, such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focuses on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  11. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL) and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration’s (NNSA’s) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation, such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focuses on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.
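
    Since the stated goal is to solve the inverse problem from effluent and emission measurements, a minimal sketch of one such inversion step is given below: if forward modeling supplies a linearized response matrix from operating parameters to measurable signatures, the parameters can be recovered by regularized least squares. The matrix, noise level and parameter values are synthetic stand-ins, not Facility Modeling Toolkit outputs.

    ```python
    import numpy as np

    # Toy inverse problem: G maps facility operating parameters (e.g., a
    # production rate) to measurable effluent signatures; estimate the
    # parameters from noisy measurements by Tikhonov-regularized least squares.
    rng = np.random.default_rng(1)
    n_obs, n_par = 12, 3
    G = rng.uniform(size=(n_obs, n_par))      # forward-model sensitivities
    theta_true = np.array([5.0, 1.2, 0.3])    # "true" operating parameters
    y = G @ theta_true + rng.normal(scale=0.05, size=n_obs)

    # theta_hat = (G^T G + lam*I)^-1 G^T y
    lam = 1e-3
    theta_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_par), G.T @ y)
    print("estimated parameters:", np.round(theta_hat, 2))
    ```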

  12. Remote Sensing of Seagrass Leaf Area Index and Species: The Capability of a Model Inversion Method Assessed by Sensitivity Analysis and Hyperspectral Data of Florida Bay

    Directory of Open Access Journals (Sweden)

    John D. Hedley

    2017-11-01

    The capability for mapping two species of seagrass, Thalassia testudinum and Syringodium filiforme, by remote sensing using a physics-based model inversion method was investigated. The model was based on a three-dimensional canopy model combined with a model for the overlying water column. The model included uncertainty propagation based on variation in leaf reflectances, canopy structure, water column properties, and the air-water interface. The uncertainty propagation enabled both a priori predictive sensitivity analysis of potential capability and the generation of per-pixel error bars when applied to imagery. A primary aim of the work was to compare the sensitivity analysis to results achieved in a practical application using airborne hyperspectral data, to gain insight on the validity of sensitivity analyses in general. Results showed that while the sensitivity analysis predicted a weak but positive discrimination capability for species, in a practical application the relevant spectral differences were extremely small compared to discrepancies in the radiometric alignment of the model with the imagery, even though this alignment was very good. Complex interactions between spectral matching and uncertainty propagation also introduced biases. The ability to discriminate LAI was good, and comparable to previously published methods using different approaches. The main limitation in this respect was the spatial alignment of the imagery with in situ data, which were heterogeneous on scales of a few meters. The results provide insight on the limitations of physics-based inversion methods and seagrass mapping in general. Complex models can degrade unpredictably when the radiometric alignment of the model and imagery is not perfect, and incorporating uncertainties can have non-intuitive impacts on method performance. Sensitivity analyses are upper bounds to practical capability; incorporating a term for potential systematic errors in radiometric alignment may
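
    A toy version of physics-based inversion by spectral matching is sketched below: choose the canopy parameter (here LAI) whose modeled reflectance best matches an observed spectrum. The one-line "canopy model", substrate reflectance and noise level are fabricated for illustration; the paper's model is a full 3-D canopy plus water-column simulation.

    ```python
    import numpy as np

    wavelengths = np.linspace(450, 650, 40)           # nm

    def modelled_reflectance(lai):
        # Fabricated leaf spectrum and canopy-closure rule, for illustration.
        leaf = 0.05 + 0.25 * np.exp(-((wavelengths - 550) / 40.0)**2)
        background = 0.15                             # sand substrate, assumed
        cover = 1.0 - np.exp(-0.5 * lai)              # canopy gap closure
        return cover * leaf + (1 - cover) * background

    rng = np.random.default_rng(2)
    observed = modelled_reflectance(2.3) + rng.normal(scale=0.002, size=40)

    # Grid-search inversion: minimize the spectral mismatch over LAI.
    lai_grid = np.linspace(0.1, 6.0, 60)
    errors = [np.sum((modelled_reflectance(l) - observed)**2) for l in lai_grid]
    print(f"retrieved LAI = {lai_grid[int(np.argmin(errors))]:.2f}")
    ```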

  13. Designing airport checked-baggage-screening strategies considering system capability and reliability

    International Nuclear Information System (INIS)

    Feng Qianmei; Sahin, Hande; Kapur, Kailash C.

    2009-01-01

    Emerging image-based technologies are critical components of airport security for screening checked baggage. Since these new technologies differ widely in cost and accuracy, a comprehensive mathematical framework should be developed for selecting a technology, or combination of technologies, for efficient 100% baggage screening. This paper addresses the problem of setting threshold values for these screening technologies and determining the optimal combination of technologies in a two-level screening system, considering system capability and human reliability. Probability and optimization techniques are used to quantify and evaluate the cost- and risk-effectiveness of various deployment configurations, which are captured by using a system life-cycle cost model that incorporates the deployment cost, operating cost, and costs associated with system decisions. Two system decision rules are studied for a two-level screening system. For each decision rule, two different optimization approaches are formulated and investigated from a practitioner's perspective. Numerical examples for different decision rules, optimization approaches and system arrangements are demonstrated.
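
    The sketch below evaluates one plausible decision rule for a two-level system (the bag alarms only if it alarms at level 1 AND at level 2) to show how detection probability, false-alarm rate and expected cost per bag interact. All probabilities and costs are invented for illustration and are not the paper's data or its specific decision rules.

    ```python
    # Two-level checked-baggage screening under an AND decision rule.
    p_threat = 1e-6           # prior probability a bag contains a threat
    pd1, pf1 = 0.95, 0.05     # level-1 detection / false-alarm probabilities
    pd2, pf2 = 0.98, 0.02     # level-2 (screens only level-1 alarms)
    c_screen1, c_screen2 = 0.50, 10.0   # unit screening costs ($)
    c_missed = 5e7            # consequence cost of a missed threat ($)

    p_detect = pd1 * pd2      # AND rule: both levels must alarm
    p_false  = pf1 * pf2
    exp_cost = (c_screen1                                    # every bag, level 1
                + (p_threat*pd1 + (1 - p_threat)*pf1) * c_screen2
                + p_threat * (1 - p_detect) * c_missed)      # missed threats
    print(f"P(detect)={p_detect:.4f}  P(false alarm)={p_false:.2e}  "
          f"expected cost per bag = ${exp_cost:.2f}")
    ```

    Raising the level-1 threshold lowers pf1 (and screening cost) but also lowers pd1; the optimization in the paper balances exactly this trade-off.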

  14. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R.

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
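
    A minimal tornado-diagram pass, of the kind named as technique (1), is sketched below: vary one branch choice at a time while holding the others at baseline, record the swing in the output metric, and rank. The stand-in "loss model" and parameter count are invented; UCERF3-TD branch choices are nominal rather than numeric, which is what motivates the paper's second technique.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_par = 6
    weights = rng.uniform(0.1, 2.0, size=n_par)   # hidden sensitivities

    def loss(params):                             # stand-in loss model
        return float(weights @ params)

    baseline = np.full(n_par, 0.5)
    swings = []
    for i in range(n_par):
        lo, hi = baseline.copy(), baseline.copy()
        lo[i], hi[i] = 0.0, 1.0                   # branch extremes
        swings.append((abs(loss(hi) - loss(lo)), i))

    for swing, i in sorted(swings, reverse=True):
        print(f"parameter {i}: swing = {swing:.2f}")
    # Parameters with negligible swing are candidates for fixing at a
    # single branch, shrinking the tree the risk analysis must traverse.
    ```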

  15. Automated extraction of knowledge for model-based diagnostics

    Science.gov (United States)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
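
    In the spirit of AKG, the sketch below turns label and connectivity data into a frame-based model description by joining it against a component library. The library, labels and connections are invented examples, not the system's actual schema.

    ```python
    import pprint

    # Hypothetical component library (the "internal database of component
    # descriptions") and CAD-extracted label/connectivity data.
    component_library = {
        "valve": {"ports": ["in", "out"], "states": ["open", "closed"]},
        "pump":  {"ports": ["in", "out"], "states": ["on", "off"]},
        "tank":  {"ports": ["out"],       "states": ["full", "empty"]},
    }
    labels = [("T1", "tank"), ("P1", "pump"), ("V1", "valve")]
    connections = [("T1", "P1"), ("P1", "V1")]

    # Build one frame per labeled component, then attach connectivity slots.
    frames = {}
    for label, ctype in labels:
        frames[label] = {"is_a": ctype,
                         **component_library[ctype],
                         "connected_to": []}
    for src, dst in connections:
        frames[src]["connected_to"].append(dst)

    pprint.pprint(frames)
    ```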

  16. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors, so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially Principal Component Analysis and Independent Component Analysis.
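
    A minimal sketch of the PCA step is shown below: stack turn-by-turn beam-position-monitor (BPM) readings into a (turns x BPMs) matrix and take its singular value decomposition; the leading spatial singular vectors approximate the dominant physical (betatron) modes. The tune, phase advances and noise level are synthetic stand-ins, not data from any machine.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_turns, n_bpm = 1024, 40
    tune = 0.31
    phase = np.linspace(0, 4 * np.pi, n_bpm)          # mock betatron phases
    turns = np.arange(n_turns)[:, None]

    B = (np.cos(2*np.pi*tune*turns + phase)           # betatron oscillation
         + 0.05 * rng.normal(size=(n_turns, n_bpm)))  # BPM noise

    B -= B.mean(axis=0)                               # remove the closed orbit
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    print("leading singular values:", np.round(s[:4], 1))
    # The dominant sine/cosine pair carries the betatron motion; the drop
    # from the second to the third singular value gauges the noise floor.
    ```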

  17. Transforming organizational capabilities in strategizing

    DEFF Research Database (Denmark)

    Jørgensen, Claus; Friis, Ole Uhrskov; Koch, Christian

    2014-01-01

    Offshored and networked enterprises are becoming an important, if not leading, organizational form, and this development seriously challenges their organizational capabilities. More specifically, over the last years, SMEs have commenced entering these kinds of arrangements. As the organizational capabilities of SMEs are limited at the outset, even more emphasis is needed regarding the issues of developing relevant organizational capabilities. This paper aims at investigating how capabilities evolve during an offshoring process of more than 5 years in two Danish SMEs, i.e., not only short- but long-term evolvements within the companies. We develop our framework for understanding organizational capabilities drawing on dynamic capability, relational capability and strategy-as-practice concepts, appreciating the performative aspects of developing new routines. Our two cases are taken from one author's Ph.D. ...

  18. Sierra/SolidMechanics 4.48 User's Guide: Addendum for Shock Capabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose; Le, San; Littlewood, David John; Merewether, Mark Thomas; Mosby, Matthew David; Pierson, Kendall H.; Porter, Vicki L.; Shelton, Timothy; Thomas, Jesse David; Tupek, Michael R.; Veilleux, Michael; Xavier, Patrick G.

    2018-03-01

    This is an addendum to the Sierra/SolidMechanics 4.48 User's Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State's International Traffic in Arms Regulations (ITAR) export-control rules. The ITAR-regulated codes are only distributed to entities that comply with the ITAR export-control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. Since this is an addendum to the standard Sierra/SM User's Guide, please refer to that document first for general descriptions of code capability and use.

  19. Evaluating late detection capability against diverse insider adversaries

    International Nuclear Information System (INIS)

    Sicherman, A.

    1987-01-01

    This paper describes a model for evaluating the late (after-the-fact) detection capability of material control and accountability (MC&A) systems against insider theft or diversion of special nuclear material. Potential insider cover-up strategies to defeat activities providing detection (e.g., inventories) are addressed by the model in a tractable manner. For each potential adversary and detection activity, two probabilities are assessed and used to fit the model. The model then computes the probability of detection for activities occurring periodically over time. The model provides insight into MC&A effectiveness and helps identify areas for safeguards improvement. 4 refs., 4 tabs
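
    The sketch below illustrates the general shape of such a calculation: each periodic detection activity has some chance of exposing the theft, degraded by the adversary's cover-up skill, and the per-activity probabilities compound over time. The two-probability structure echoes the model's inputs, but the functional form and numbers here are illustrative assumptions, not assessed values.

    ```python
    # Cumulative late-detection probability over repeated safeguards
    # activities (e.g., periodic inventories). Illustrative only.
    def cumulative_detection(p_detect_once, p_coverup, n_activities):
        # per-activity effective detection probability after cover-up
        p_eff = p_detect_once * (1.0 - p_coverup)
        return 1.0 - (1.0 - p_eff) ** n_activities

    for adversary, cover in [("novice", 0.1), ("skilled insider", 0.7)]:
        p = cumulative_detection(p_detect_once=0.4, p_coverup=cover,
                                 n_activities=6)
        print(f"{adversary}: P(detect within 6 inventories) = {p:.2f}")
    ```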

  20. BWR modeling capability and Scale/Triton lattice-to-core integration of the Nestle nodal simulator - 331

    International Nuclear Information System (INIS)

    Galloway, J.; Hernandez, H.; Maldonado, G.I.; Jessee, M.; Popov, E.; Clarno, K.

    2010-01-01

    This article reports the status of recent and substantial enhancements made to the NESTLE nodal core simulator, a code originally developed at North Carolina State University (NCSU), of which version 5.2.1 has been available for several years through the Oak Ridge National Laboratory (ORNL) Radiation Safety Information Computational Center (RSICC) software repository. In its released and available form, NESTLE is a seasoned, well-developed and extensively tested code system, particularly useful for modeling PWRs. In collaboration with NCSU, University of Tennessee (UT) and ORNL researchers have recently developed new enhancements for the NESTLE code, including the implementation of a two-phase drift-flux thermal-hydraulic and flow redistribution model to facilitate modeling of Boiling Water Reactors (BWRs), as well as the development of an integrated coupling of SCALE/TRITON lattice physics to NESTLE so as to produce an end-to-end capability for reactor simulations. These latest advancements implemented into NESTLE, as well as an update on other ongoing efforts of this project, are herein reported. (authors)
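
    To illustrate the kind of relation a drift-flux model rests on, the sketch below evaluates the standard drift-flux expression for channel void fraction, alpha = j_g / (C0*j + V_gj). The distribution parameter, drift velocity and flow conditions are textbook-style assumptions, not the correlations actually implemented in NESTLE.

    ```python
    # Standard drift-flux relation for two-phase channel void fraction.
    def void_fraction(j_g, j_f, C0=1.13, V_gj=0.24):
        """j_g, j_f: superficial gas/liquid velocities (m/s); C0 is the
        distribution parameter and V_gj the drift velocity (assumed)."""
        j = j_g + j_f                    # total superficial velocity
        return j_g / (C0 * j + V_gj)

    for j_g in (0.1, 0.5, 1.5):
        print(f"j_g={j_g:.1f} m/s -> void fraction = "
              f"{void_fraction(j_g, j_f=1.0):.2f}")
    ```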