WorldWideScience

Sample records for modeling capability results

  1. NASA Air Force Cost Model (NAFCOM): Capabilities and Results

    Science.gov (United States)

    McAfee, Julie; Culver, George; Naderi, Mahmoud

    2011-01-01

    NAFCOM is a parametric cost-estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs with mission characteristics to predict new project costs, and it is based on historical NASA and Air Force space projects. It is intended for use in the very early phases of a development project. NAFCOM can be applied at the subsystem or component level and estimates both development and production costs. It is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a restricted government version and a contractor-releasable version.
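
    The CER idea lends itself to a compact illustration. The sketch below is not the NAFCOM methodology: it fits a hypothetical power-law CER, cost = a * mass^b, to invented historical data points by regression in log space and then applies it to a new subsystem.

      import numpy as np

      # Hypothetical historical data: subsystem dry mass (kg) and first-unit cost ($M).
      mass = np.array([120.0, 250.0, 400.0, 800.0, 1500.0])
      cost = np.array([14.0, 24.0, 33.0, 55.0, 88.0])

      # Fit cost = a * mass**b by least squares in log-log space.
      b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
      a = np.exp(log_a)

      def cer(new_mass: float) -> float:
          """Predict cost ($M) for a new subsystem from its mass (kg)."""
          return a * new_mass ** b

      print(f"CER: cost = {a:.2f} * mass^{b:.2f}")
      print(f"Predicted cost for a 600 kg subsystem: ${cer(600.0):.1f}M")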

  2. HYDROïD humanoid robot head with perception and emotion capabilities: Modeling, Design and Experimental Results

    Directory of Open Access Journals (Sweden)

    Samer eAlfayad

    2016-04-01

    In the framework of the HYDROïD humanoid robot project, this paper describes the modeling and design of an electrically actuated head mechanism, with perception and emotion capabilities considered in the design process. Since the HYDROïD humanoid robot is hydraulically actuated, the choice of electrical actuation for the head mechanism addressed in this paper is justified. Accounting for perception and emotion capabilities leads to a total of 15 degrees of freedom for the head mechanism, split across four main sub-mechanisms: the neck, the mouth, the eyes and the eyebrows. Biological data and the kinematic performance of the human head are taken as inputs to the design process. A new uncoupled-eyes solution is developed to address the master-slave process that links the human eyes as well as vergence capabilities. Each sub-system is modeled to obtain its equations of motion, frequency response and transfer function; the neck pitch rotation is given as a worked example. The head mechanism performance is then presented through a comparison between model and experimental results, validating the hardware capabilities. Finally, the head mechanism is integrated on the HYDROïD upper body, and an object-tracking experiment coupled with emotional expressions validates the synchronization of the eye rotations with the body motions.

  3. Group Capability Model

    Science.gov (United States)

    Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen

    2009-01-01

    The Group Capability Model (GCM) is a software tool that allows an organization, from first-line management to senior executive, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group against requirements. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method were required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates the GCI, where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certifications, GCI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization. They now have a common
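
    The abstract defines the GCI as a comparison of actual against required head counts, certifications, and skills, with a score of 1 meaning the group meets its requirements. A minimal sketch of that idea (an illustrative stand-in, not the actual GCM formula, which the record does not give) could average the actual-to-required ratios, capping each at 1.0:

      def group_capability_index(actual: dict, required: dict) -> float:
          """Average of actual/required ratios across categories, each capped at 1.0.

          A score of 1.0 means every requirement is met; below 1.0 flags a shortfall.
          Illustrative stand-in for the GCM calculation, not the real one.
          """
          ratios = []
          for category, needed in required.items():
              if needed <= 0:
                  continue  # no requirement in this category
              have = actual.get(category, 0)
              ratios.append(min(have / needed, 1.0))
          return sum(ratios) / len(ratios) if ratios else 1.0

      required = {"head_count": 40, "certifications": 25, "critical_skills": 10}
      actual = {"head_count": 36, "certifications": 25, "critical_skills": 7}
      print(f"GCI = {group_capability_index(actual, required):.2f}")  # below 1.0: improvement needed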

  4. People Capability Maturity Model (SM)

    Science.gov (United States)

    1995-09-01

    [Fragmentary OCR text from the scanned report. Recoverable information: People Capability Maturity Model, report CMU/SEI-95-MM-02, Carnegie Mellon University Software Engineering Institute, September 1995. The fragments mention an assessment tailored to consume less time and resources than a traditional software process assessment, benefits such as improved reputation or customer loyalty, and the Level 5 (Optimizing) Coaching practice area.]

  5. Model-Based Military Scenario Management for Defence Capability

    National Research Council Canada - National Science Library

    Gori, Ronnie; Chen, Pin; Pozgay, Angela

    2004-01-01

    … This paper describes initial work towards the development of an information model that links scenario and capability related information, and the results of capability analysis and experimentation …

  6. Production capability: ERDA methods and results

    International Nuclear Information System (INIS)

    Klemenic, J.

    1977-01-01

    Production centers are categorized into four classes according to the relative certainty of future production. A "forward cost" basis is used both to establish the resource base and to define the acceptable production centers. The first phase of the work is called the "Could" capability: resources are assigned to existing production centers, or new production centers are postulated where adequate resources exist to support a mill for a reasonable economic life, and a production schedule is developed for each center. The last step in the "Could" study is to aggregate the capital and operating costs. The final step in the Production Capability study is to reschedule the "Could" production so that only sufficient uranium concentrate is produced to meet the feed requirements of enrichment facilities operated according to the announced transaction tails assay plans. The optimized production schedules are called the "Need" production capability. A separate study of industry production plans was also performed. 4 tables, 7 figs

  7. Towards a national cybersecurity capability development model

    CSIR Research Space (South Africa)

    Jacobs, Pierre C

    2017-06-01

    … to be broken down into its components, a model serves as a blueprint to ensure that those building the capability consider all components, allows for cost estimation, and facilitates the evaluation of trade-offs. One national cybersecurity capability...

  8. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  9. Capability maturity models for offshore organisational management.

    Science.gov (United States)

    Strutt, J E; Sharp, J V; Terry, E; Miles, R

    2006-12-01

    The goal-setting regime imposed by the UK safety regulator has important implications for an organisation's ability to manage health and safety related risks. Existing approaches to safety assurance based on risk analysis and formal safety assessments are increasingly considered unlikely to create the step-change improvement in safety to which the offshore industry aspires, and alternative approaches are being considered. One approach, which addresses the important issue of organisational behaviour and which can be applied at a very early stage of design, is the capability maturity model (CMM). The paper describes the development of a design safety capability maturity model, outlining the key processes considered necessary for safety achievement, the definition of maturity levels and the scoring methods. The paper discusses how the CMM relates to regulatory mechanisms and risk-based decision making, together with the potential of the CMM for environmental risk management.

  10. Experimental modeling of eddy current inspection capabilities

    International Nuclear Information System (INIS)

    Junker, W.R.; Clark, W.G.

    1984-01-01

    This chapter examines the experimental modeling of eddy current inspection capabilities based upon the use of liquid mercury samples designed to represent metal components containing discontinuities. A brief summary of past work with mercury modeling and a detailed discussion of recent experiments designed to further evaluate the technique are presented. The main disadvantages of the mercury modeling concept are that mercury is toxic and must be handled carefully, liquid mercury can only be used to represent nonferromagnetic materials, and wetting and meniscus problems can distort the effective size of artificial discontinuities. Artificial discontinuities placed in a liquid mercury sample can be used to represent discontinuities in solid metallic structures. Discontinuity size and type cannot be characterized from phase angle and signal amplitude data developed with a surface scanning, pancake-type eddy current probe. It is concluded that the mercury model approach can greatly enhance the overall understanding and applicability of eddy current inspection techniques

  11. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within industry and universities. The first listing covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used

  12. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT infrastructure capability, categorized into sharing capability, service capability, and flexibility. The study then empirically tested the model using survey data collected from 145 firms. Three factors emerge from the factor analysis (IT flexibility, IT service capability, and IT sharing capability), which agree with those in the conceptual model built in this study.

  13. Establishing an infrared measurement and modelling capability

    CSIR Research Space (South Africa)

    Willers, CJ

    2011-04-01

    The protection of one's own aircraft assets against infrared missile threats requires a deep understanding of the vulnerability of these assets with regard to specific threats and specific environments of operation. A key capability in the protection...

  14. State-of-the-art modeling capabilities for Orimulsion modeling

    International Nuclear Information System (INIS)

    Cekirge, H.M.; Palmer, S.L.; Convery, K.; Ileri, L.

    1996-01-01

    The pollution response of Orimulsion was discussed. Orimulsion is an inexpensive alternative to No. 6 fuel oil that can be used to fire large industrial and electric utility boilers. It is an emulsion composed of approximately 70% bitumen (a heavy hydrocarbon) and 30% water to which a surfactant has been added. It has a specific gravity of one or higher, so it is of particular concern in the event of a spill. The physical and chemical processes that would take place in an Orimulsion spill were studied and incorporated into the design of the model ORI SLIK, a fate and transport model for marine environments. The most critical decision in using ORI SLIK is the assignment of the number of parcels into which the initial spill volume will be divided, since underspecification would result in inaccurate results. However, no reliable method for determining this, other than trial and error, has been found. It was concluded that while many of the complex processes of Orimulsion in marine environments are approximated in currently available models, some areas still need further study. Among these are the effect of current shear, changing particle densities, and differential settling. 24 refs., 1 tab., 5 figs

  15. Distinctive Innovation Capabilities of Argentine Software Companies with High Innovation Results and Impacts

    OpenAIRE

    María Isabel Camio; María del Carmen Romero; María Belén Álvarez; Alfredo José Rébori

    2018-01-01

    The software sector is of growing importance and, due to its degree of dynamism, the identification of capabilities for innovation is vital. This study identifies capability variables that distinguish Argentine software companies with high innovation results and high innovation impacts from those with lesser results and impacts. It applies, to a sample of 103 companies, a measurement model and the component variables of an innovation degree index for software companies (INIs) formulated i...

  16. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
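
    The structure described here (six elements, each scored against four increasing maturity levels) maps naturally onto a small data structure. The sketch below is a hypothetical scoring helper, not Sandia's tool; the element names follow the abstract and the 0-3 level labels are an assumption.

      from dataclasses import dataclass

      PCMM_ELEMENTS = [
          "representation and geometric fidelity",
          "physics and material model fidelity",
          "code verification",
          "solution verification",
          "model validation",
          "uncertainty quantification and sensitivity analysis",
      ]

      @dataclass
      class PCMMAssessment:
          scores: dict  # element name -> assessed maturity level 0..3

          def validate(self):
              for element in PCMM_ELEMENTS:
                  if self.scores.get(element) not in (0, 1, 2, 3):
                      raise ValueError(f"missing or invalid level for '{element}'")

          def summary(self) -> str:
              self.validate()
              weakest = min(self.scores, key=self.scores.get)
              return (f"levels: {sorted(self.scores.values())}, "
                      f"weakest element: {weakest} (level {self.scores[weakest]})")

      scores = {element: 2 for element in PCMM_ELEMENTS}
      scores["model validation"] = 1
      print(PCMMAssessment(scores).summary())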

  17. Hitch code capabilities for modeling AVT chemistry

    International Nuclear Information System (INIS)

    Leibovitz, J.

    1985-01-01

    Several types of corrosion have damaged Alloy 600 tubing in the secondary side of steam generators, including wastage, denting, intergranular attack, stress corrosion, and erosion-corrosion. The environments that cause attack may originate from leaks of cooling water into the condensate, among other sources. When the contaminated feedwater is pumped into the generator, the impurities may concentrate first 200- to 400-fold in the bulk water, depending on the blowdown, and then further to saturation and dryness in heated tube support plate crevices. Characterization of local solution chemistries is the first step in predicting and correcting the type of corrosion that can occur. The pH is of particular importance because it is a major factor governing the rate of corrosion reactions. The pH of a solution at high temperature is not the same as at ambient temperature, since ionic dissociation constants, solubility and solubility products, activity coefficients, etc., all change with temperature. Because the high-temperature chemistry of such solutions is not readily characterized experimentally, modeling techniques were developed under EPRI sponsorship to calculate the high-temperature chemistry of the relevant solutions. In many cases, the effects of cooling water impurities on steam generator water chemistry with all-volatile treatment (AVT), upon concentration by boiling, and in particular the resulting acid or base concentration, can be calculated by a simple code, the HITCH code, which is very easy to use. The scope and applicability of the HITCH code are summarized
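
    The point that pH shifts with temperature because ionic dissociation constants change can be illustrated with the ion product of water alone. The sketch below uses rough pKw values (about 14.0 at 25 °C, falling to roughly 11.2 near 250 °C along the saturation line) to show how the neutral pH of pure water drops at steam generator temperatures; this is a back-of-the-envelope illustration, not the chemistry model in the HITCH code.

      # Approximate ion product of water, pKw = -log10(Kw), at selected temperatures.
      # Values are rounded literature figures, for illustration only.
      PKW = {25: 14.0, 100: 12.3, 250: 11.2, 300: 11.4}

      def neutral_ph(temp_c: int) -> float:
          """Neutral pH of pure water: [H+] = [OH-] = sqrt(Kw), so pH = pKw / 2."""
          return PKW[temp_c] / 2.0

      for t in sorted(PKW):
          print(f"{t:3d} C: neutral pH ~ {neutral_ph(t):.2f}")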

  18. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves

    2017-05-01

    Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in the economics and strategy literature involves understanding the features that drive business success and a firm's perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms' innovation and dynamic capabilities based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all the capabilities and has the least influence on innovation. This reinforces the notion of operations capability as an "ordinary capability," whereas management, development, and transaction capabilities better explain firms' dynamics and innovation.

  19. Capabilities and accuracy of energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-11-01

    Energy modelling can be used in a number of different ways to fulfill different needs, including certification within building regulations or green building rating tools. Energy modelling can also be used in order to try and predict what the energy...

  20. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.
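
    As an illustration of the surrogate-model idea mentioned here (not RAVEN's implementation), the sketch below trains a Gaussian-process surrogate on a handful of runs of an expensive-looking model and then queries it cheaply; the stand-in physics function and the sample counts are invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_model(x):
          """Stand-in for a costly physics simulation (hypothetical)."""
          return np.sin(3.0 * x) + 0.3 * x

      rng = np.random.default_rng(0)
      x_train = rng.uniform(0.0, 3.0, size=(12, 1))      # a handful of simulator runs
      y_train = expensive_model(x_train).ravel()

      surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
      surrogate.fit(x_train, y_train)

      x_query = np.linspace(0.0, 3.0, 5).reshape(-1, 1)
      y_pred, y_std = surrogate.predict(x_query, return_std=True)
      for x, m, s in zip(x_query.ravel(), y_pred, y_std):
          print(f"x={x:.2f}  surrogate={m:+.3f} +/- {s:.3f}  truth={expensive_model(x):+.3f}")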

  1. Development of a fourth generation predictive capability maturity model.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J.; Trucano, Timothy Guy

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are (1) the communication of computational simulation capability, accurately and transparently, and (2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  2. Facility Modeling Capability Demonstration Summary Report

    International Nuclear Information System (INIS)

    Key, Brian P.; Sadasivan, Pratap; Fallgren, Andrew James; Demuth, Scott Francis; Aleman, Sebastian E.; Almeida, Valmor F. de; Chiswell, Steven R.; Hamm, Larry; Tingey, Joel M.

    2017-01-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation, such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focuses on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  3. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation, such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focuses on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  4. UAVSAR Program: Initial Results from New Instrument Capabilities

    Science.gov (United States)

    Lou, Yunling; Hensley, Scott; Moghaddam, Mahta; Moller, Delwyn; Chapin, Elaine; Chau, Alexandra; Clark, Duane; Hawkins, Brian; Jones, Cathleen; Marks, Phillip

    2013-01-01

    UAVSAR is an imaging radar instrument suite that serves as NASA's airborne facility instrument to acquire scientific data for Principal Investigators as well as a radar test-bed for new radar observation techniques and radar technology demonstration. Since commencing operational science observations in January 2009, the compact, reconfigurable, pod-based radar has been acquiring L-band fully polarimetric SAR (POLSAR) data with repeat-pass interferometric (RPI) observations underneath NASA Dryden's Gulfstream-III jet to provide measurements for science investigations in solid earth and cryospheric studies, vegetation mapping and land use classification, archaeological research, soil moisture mapping, geology and cold land processes. In the past year, we have made significant upgrades to add new instrument capabilities and new platform options to accommodate the increasing demand for UAVSAR to support scientific campaigns to measure subsurface soil moisture, acquire data in the polar regions, and for algorithm development, verification, and cross-calibration with other airborne/spaceborne instruments.

  5. Computable general equilibrium model fiscal year 2014 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory

    2016-05-11

    This report provides an overview of the development of the NISAC CGE economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities, allowing it to answer a broader set of questions than was possible with the previous economic analysis capability. In particular, CGE modeling captures how the different sectors of the economy (households, businesses, government, and so on) interact to allocate resources, and this approach retains those interactions when it is used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  6. Capability to model reactor regulating system in RFSP

    Energy Technology Data Exchange (ETDEWEB)

    Chow, H C; Rouben, B; Younis, M H; Jenkins, D A [Atomic Energy of Canada Ltd., Mississauga, ON (Canada); Baudouin, A [Hydro-Quebec, Montreal, PQ (Canada); Thompson, P D [New Brunswick Electric Power Commission, Point Lepreau, NB (Canada). Point Lepreau Generating Station

    1996-12-31

    The Reactor Regulating System package extracted from SMOKIN-G2 was linked within RFSP to the spatial kinetics calculation. The objective is to use this new capability in safety analysis to model the actions of RRS in hypothetical events such as in-core LOCA or moderator drain scenarios. This paper describes the RRS modelling in RFSP and its coupling to the neutronics calculations, verification of the RRS control routine functions, sample applications and comparisons to SMOKIN-G2 results for the same transient simulations. (author). 7 refs., 6 figs.

  7. A Thermo-Optic Propagation Modeling Capability.

    Energy Technology Data Exchange (ETDEWEB)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame, and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data were collected to validate the structural, thermal, and optical modeling methods.
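
    The heart of the method is integrating the ray equation d/ds(n dr/ds) = grad(n) through a spatially varying index. The sketch below does this for a simple analytic linear-gradient index rather than a finite-element field, with fixed-step explicit integration; it illustrates the ray-path integral, not the report's FE interpolation or its Zemax implementation, and the index values are assumed.

      import numpy as np

      def index(pos):
          """Hypothetical gradient-index medium: n increases linearly with y."""
          n0, grad = 1.50, 0.05          # base index and gradient per unit length (assumed)
          return n0 + grad * pos[1]

      def grad_index(pos, h=1e-6):
          """Central-difference gradient of the index field."""
          g = np.zeros(2)
          for i in range(2):
              dp = np.zeros(2)
              dp[i] = h
              g[i] = (index(pos + dp) - index(pos - dp)) / (2.0 * h)
          return g

      def trace(pos, direction, ds=1e-3, steps=2000):
          """Integrate dr/ds = T/n and dT/ds = grad(n), where T = n * dr/ds."""
          direction = np.asarray(direction, float)
          direction = direction / np.linalg.norm(direction)
          T = index(pos) * direction
          path = [np.asarray(pos, float)]
          for _ in range(steps):
              new_pos = path[-1] + ds * T / index(path[-1])   # dr/ds = T / n
              T = T + ds * grad_index(new_pos)                # dT/ds = grad n
              path.append(new_pos)
          return np.array(path)

      ray = trace(pos=[0.0, 0.0], direction=[1.0, 0.0])
      print("ray end point:", ray[-1])   # the ray bends toward increasing index (+y)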

  8. Proposing a Capability Perspective on Digital Business Models

    OpenAIRE

    Bärenfänger, Rieke; Otto, Boris

    2015-01-01

    Business models comprehensively describe the functioning of businesses in contemporary economic, technological, and societal environments. This paper focuses on the characteristics of digital business models from the perspective of capability research and develops a capability model for digital businesses. Following the design science research (DSR) methodology, multiple evaluation and design iterations were performed. Contributions to the design process came from IS/IT practice and the resea...

  9. Boiling water reactor modeling capabilities of MMS-02

    International Nuclear Information System (INIS)

    May, R.S.; Abdollahian, D.A.; Elias, E.; Shak, D.P.

    1987-01-01

    During the development period for the Modular Modeling System (MMS) library modules, the Boiling Water Reactor (BWR) has been the last major component to be addressed. The BWRX module includes models of the reactor core, reactor vessel, and recirculation loop. A pre-release version was made available for utility use in September 1983. Since that time a number of changes have been incorporated in BWRX to (1) improve running time for most transient events of interest, (2) extend its capability to include certain events of interest in reactor safety analysis, and (3) incorporate a variety of improvements to the module interfaces and user input formats. The purposes of this study were to briefly review the module structure and physical models, to point out the differences between the MMS-02 BWRX module and the BWRX version previously available in the TESTREV1 library, to provide guidelines for choosing among the various user options, and to present some representative results

  10. Distinctive Innovation Capabilities of Argentine Software Companies with High Innovation Results and Impacts

    Directory of Open Access Journals (Sweden)

    María Isabel Camio

    2018-04-01

    The software sector is of growing importance and, due to its degree of dynamism, the identification of capabilities for innovation is vital. This study identifies capability variables that distinguish Argentine software companies with high innovation results and high innovation impacts from those with lesser results and impacts. A measurement model and the component variables of an innovation degree index for software companies (INIs), formulated in previous studies, are applied to a sample of 103 companies. A Principal Component Analysis and a biplot are conducted. In the analysis of results and impacts, the first two components explain 100% of the variability, which shows the high correlation between variables. From the biplots, it appears that companies with high results have higher degrees in the variables of motivation, strategy, leadership and internal determinants, and those with high impacts present higher degrees of structure, strategy, leadership, free software and innovation activities. The findings add elements to the theory of capabilities for innovation in the software sector and allow us to consider the relative importance of different capability variables in the generation of innovation results and impacts.
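
    To illustrate the kind of principal component analysis and biplot reading the study reports (the data below are synthetic, not the 103-company sample, and the variable list is abbreviated), the sketch runs PCA on correlated capability scores and prints the variance explained by the first two components together with their loadings, which is what a biplot visualizes:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      variables = ["motivation", "strategy", "leadership", "structure", "free_software"]

      # Synthetic, strongly correlated capability scores for 103 hypothetical companies.
      latent = rng.normal(size=(103, 1))
      data = latent @ rng.uniform(0.5, 1.0, size=(1, len(variables)))
      data += 0.3 * rng.normal(size=(103, len(variables)))

      pca = PCA(n_components=2)
      scores = pca.fit_transform(StandardScaler().fit_transform(data))

      print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
      print("company 0 scores on PC1/PC2:", np.round(scores[0], 2))
      for name, loadings in zip(variables, pca.components_.T):
          print(f"{name:14s} PC1={loadings[0]:+.2f}  PC2={loadings[1]:+.2f}")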

  11. Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts

    Science.gov (United States)

    2016-11-10

    [Fragmentary text from the scanned report form. Recoverable information: "Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts?" by Yalin Fan, Zhitao Yu, and Fengyan Shi, Ocean Dynamics and Prediction Branch, Oceanography Division (affiliation truncated: "Naval..."). The surviving abstract fragment reads: "... mixed layer and thermocline simulations as well as large scale circulations. Numerical experiments are conducted using hydrostatic (HY) and"]

  12. Neural network modeling of a dolphin's sonar discrimination capabilities

    OpenAIRE

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL; Nachtigall, PE; Roitblat, H.

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer ...
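
    As a rough analogue of the echo-spectrum classification described (a small feed-forward network rather than the counterpropagation network of the original study), the sketch below trains a classifier on fabricated "echo spectra" whose peak position encodes wall thickness; every signal and parameter here is invented for illustration.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n_bins, classes = 64, [0, 1, 2]      # spectral bins; three wall-thickness classes

      def synthetic_echo(label):
          """Fabricated echo spectrum: a Gaussian peak whose position depends on the class."""
          freqs = np.arange(n_bins)
          peak = 15 + 12 * label + rng.normal(0.0, 1.5)
          return np.exp(-0.5 * ((freqs - peak) / 4.0) ** 2) + 0.05 * rng.normal(size=n_bins)

      labels = rng.choice(classes, size=300)
      spectra = np.array([synthetic_echo(c) for c in labels])
      X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, test_size=0.3, random_state=0)

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X_tr, y_tr)
      print(f"held-out accuracy on synthetic echoes: {clf.score(X_te, y_te):.2f}")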

  13. Evaluating the habitat capability model for Merriam's turkeys

    Science.gov (United States)

    Mark A. Rumble; Stanley H. Anderson

    1995-01-01

    Habitat capability (HABCAP) models for wildlife assist land managers in predicting the consequences of their management decisions. Models must be tested and refined prior to using them in management planning. We tested the predicted patterns of habitat selection of the R2 HABCAP model using observed patterns of habitats selected by radio-marked Merriam’s turkey (

  14. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi; Yuksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  15. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have led to efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates overfitting of the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy, and the performance and stock prices of these companies are affected by both their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronics and IT companies. We then predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
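
    The approach described (SVR with its training parameters tuned by a genetic algorithm) can be sketched as follows. The toy indicator data, the tiny log-space genetic algorithm, and the parameter ranges are all invented and much simpler than the study's model:

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)

      # Toy data: five financial/technical indicators -> next-period performance (synthetic).
      X = rng.normal(size=(120, 5))
      y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2]) + 0.1 * rng.normal(size=120)

      def fitness(params):
          """Cross-validated R^2 of an SVR with the candidate (C, gamma, epsilon)."""
          C, gamma, eps = params
          return cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps), X, y, cv=3, scoring="r2").mean()

      # Minimal genetic algorithm: random init in log space, keep the best half, mutate to refill.
      pop = [np.exp(rng.uniform(np.log([0.1, 1e-3, 1e-3]), np.log([100.0, 1.0, 1.0])))
             for _ in range(12)]
      for _ in range(10):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:6]
          children = [parents[i] * np.exp(0.3 * rng.normal(size=3))
                      for i in rng.integers(0, len(parents), size=6)]
          pop = parents + children

      best = max(pop, key=fitness)
      print("best (C, gamma, epsilon):", np.round(best, 4), " CV R^2:", round(fitness(best), 3))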

  16. Three Models of Education: Rights, Capabilities and Human Capital

    Science.gov (United States)

    Robeyns, Ingrid

    2006-01-01

    This article analyses three normative accounts that can underlie educational policies, with special attention to gender issues. These three models of education are human capital theory, rights discourses and the capability approach. I first outline five different roles that education can play. Then I analyse these three models of educational…

  17. ADVANCED SIMULATION CAPABILITY FOR ENVIRONMENTAL MANAGEMENT- CURRENT STATUS AND PHASE II DEMONSTRATION RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.

    2013-02-26

    The U.S. Department of Energy (USDOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Toolsets and High-Performance Computing (HPC) Multiprocess Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, toolsets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial toolsets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  18. Stochastic and simulation models of maritime intercept operations capabilities

    OpenAIRE

    Sato, Hiroyuki

    2005-01-01

    The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...
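
    The chain of events the thesis models (detect, then correctly classify, then escort before the vessel leaves the AOI) invites a small Monte Carlo illustration. The sketch below uses invented probabilities and an exponential time-in-AOI assumption; it is not the thesis's model.

      import random

      random.seed(4)

      # Hypothetical per-pass parameters (not from the thesis).
      P_DETECT, P_CLASSIFY = 0.6, 0.8      # per surveillance pass
      PASS_INTERVAL_H = 2.0                # hours between MPA passes
      ESCORT_DELAY_H = 3.0                 # time from classification to escort on station
      MEAN_TIME_IN_AOI_H = 12.0            # Red vessel dwell time (exponential)

      def one_trial() -> bool:
          """Return True if Red is detected, classified, and escorted before leaving the AOI."""
          dwell = random.expovariate(1.0 / MEAN_TIME_IN_AOI_H)
          t = 0.0
          while t < dwell:
              t += PASS_INTERVAL_H
              if random.random() < P_DETECT and random.random() < P_CLASSIFY:
                  return t + ESCORT_DELAY_H <= dwell
          return False

      trials = 20000
      p_success = sum(one_trial() for _ in range(trials)) / trials
      print(f"estimated P(intercept before leaving AOI) ~ {p_success:.3f}")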

  19. Experiences with the Capability Maturity Model in a research environment

    NARCIS (Netherlands)

    Velden, van der M.J.; Vreke, J.; Wal, van der B.; Symons, A.

    1996-01-01

    The project described here was aimed at evaluating the Capability Maturity Model (CMM) in the context of a research organization. Part of the evaluation was a standard CMM assessment. It was found that CMM could be applied to a research organization, although its five maturity levels were considered

  20. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time a...

  1. Space Weather Models at the CCMC And Their Capabilities

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2007-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. In this presentation, we will provide an overview of the community-provided, space weather-relevant, model suite, which resides at CCMC. We will discuss current capabilities, and analyze expected future developments of space weather related modeling.

  2. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.
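
    The inverse problem sketched here (inferring facility operating conditions from monitored effluents and emissions) can be illustrated with a toy linear forward model. Everything below, the sensitivity matrix, the noise level, and the "true" operating parameters, is fabricated to show the shape of the calculation, not any real facility model:

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy forward model: observed signatures = A @ operating_conditions + noise.
      # Rows: 6 observable effluent/emission channels; columns: 2 operating parameters
      # (e.g., a throughput rate and a burnup-like parameter), all hypothetical.
      A = rng.uniform(0.1, 1.0, size=(6, 2))
      true_conditions = np.array([3.2, 0.8])
      observations = A @ true_conditions + 0.02 * rng.normal(size=6)

      # Inverse problem: least-squares estimate of the operating conditions.
      estimate, residuals, rank, _ = np.linalg.lstsq(A, observations, rcond=None)

      print("true conditions:     ", true_conditions)
      print("estimated conditions:", np.round(estimate, 3))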

  3. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Garcia, Humberto; Burr, Tom; Coles, Garill A.; Edmunds, Thomas A.; Garrett, Alfred; Gorensek, Maximilian; Hamm, Luther; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Tzanos, Constantine P.; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  4. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  5. Simulation modeling on the growth of firm's safety management capability

    Institute of Scientific and Technical Information of China (English)

    LIU Tie-zhong; LI Zhi-xiang

    2008-01-01

    To address the shortcomings of existing safety management measures, a simulation model of a firm's safety management capability (FSMC) was established based on organizational learning theory. The system dynamics (SD) method was used, from which the level-and-rate system, the variable equations and the system structure flow diagram were derived. The simulation model was verified in two respects: first, the model's sensitivity to variables was tested against the gross amount of safety investment and the proportion of safety investment; second, variable dependency was checked using the correlated variables of FSMC and organizational learning. These steps verify the feasibility of the simulation model.
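
    A level-and-rate (system dynamics) structure of the kind described can be sketched in a few lines: one stock for safety management capability, an inflow driven by safety investment and organizational learning, and a decay term. The coefficients and equations below are invented for illustration and are not taken from the paper:

      # Minimal system-dynamics sketch: Euler integration of a single stock (FSMC).
      dt, horizon = 0.25, 40.0            # time step and horizon (arbitrary units)

      fsmc = 1.0                          # initial safety management capability (stock)
      safety_investment = 0.6             # fraction of budget invested in safety (assumed)
      learning_rate = 0.15                # organizational learning coefficient (assumed)
      decay_rate = 0.05                   # capability erosion without reinforcement (assumed)

      t = 0.0
      while t < horizon:
          learning_inflow = learning_rate * safety_investment * fsmc   # learning reinforces capability
          erosion_outflow = decay_rate * fsmc                          # skills and routines decay
          fsmc += dt * (learning_inflow - erosion_outflow)             # rate -> level (Euler step)
          t += dt

      print(f"FSMC after {horizon:.0f} time units: {fsmc:.2f}")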

  6. Capability maturity models in engineering companies: case study analysis

    Directory of Open Access Journals (Sweden)

    Titov Sergei

    2016-01-01

    In the current economic downturn, engineering companies in Russia and worldwide are searching for new approaches and frameworks to improve their strategic position, increase the efficiency of their internal business processes and enhance the quality of their final products. Capability maturity models are well-known tools used by many foreign engineering companies to assess the productivity of processes, to elaborate programs of business process improvement and to prioritize efforts to optimize overall company performance. The impact of capability maturity model implementation on cost and time is documented and analyzed in the existing research. However, the potential of maturity models as tools of quality management is less well known. This article attempts to analyze the impact of CMM implementation on quality issues. The research is based on a case study methodology and investigates a real-life situation in a Russian engineering company.

  7. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    The intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The outcomes of this competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives for achieving operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty lies in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods for increasing supply chain visibility are still ambiguous. Based on the existing research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model, using the Z score from the Six Sigma methodology, to evaluate and improve the level of supply chain visibility.
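
    The Six Sigma Z score the article builds on is simply the distance from the process mean to a specification limit measured in standard deviations. A minimal sketch, with an invented visibility metric (order-status update latency) and an assumed upper specification limit:

      import statistics

      # Hypothetical measurements of a visibility metric: hours until an order-status
      # update is available to partners. Lower is better; the upper spec limit is assumed.
      latencies_h = [5.2, 6.1, 4.8, 7.3, 5.9, 6.4, 5.5, 6.8, 5.1, 6.0]
      UPPER_SPEC_LIMIT_H = 8.0

      mean = statistics.mean(latencies_h)
      sigma = statistics.stdev(latencies_h)

      # Z score (process capability in sigma units) for a one-sided upper limit.
      z_score = (UPPER_SPEC_LIMIT_H - mean) / sigma

      print(f"mean={mean:.2f} h, sigma={sigma:.2f} h, Z={z_score:.2f}")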

  8. Atmospheric Deposition Modeling Results

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  9. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brennan T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Witt, Adam M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeNeale, Scott T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pries, Jason L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kao, Shih-Chieh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mobley, Miles H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Kyutae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curd, Shelaine L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tsakiris, Achilleas [Univ. of Tennessee, Knoxville, TN (United States); Mooneyham, Christian [Univ. of Tennessee, Knoxville, TN (United States); Papanicolaou, Thanos [Univ. of Tennessee, Knoxville, TN (United States); Ekici, Kivanc [Univ. of Tennessee, Knoxville, TN (United States); Whisenant, Matthew J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Welch, Tim [US Department of Energy, Washington, DC (United States); Rabon, Daniel [US Department of Energy, Washington, DC (United States)

    2017-08-01

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  10. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

    A computer code, FAIR, has been developed for analysing the performance of water-cooled reactor fuel pins. It is capable of analysing high burnup fuels, and it has recently been used to analyse ten high burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high burnup models are described, and the performance of FAIR in analysing high burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs

  11. The Aviation System Analysis Capability Airport Capacity and Delay Models

    Science.gov (United States)

    Lee, David A.; Nelson, Caroline; Shapiro, Gerald

    1998-01-01

    The ASAC Airport Capacity Model and the ASAC Airport Delay Model support analyses of technologies addressing airport capacity. NASA's Aviation System Analysis Capability (ASAC) Airport Capacity Model estimates the capacity of an airport as a function of weather, Federal Aviation Administration (FAA) procedures, traffic characteristics, and the level of technology available. Airport capacity is presented as a Pareto frontier of arrivals per hour versus departures per hour. The ASAC Airport Delay Model allows the user to estimate the minutes of arrival delay for an airport, given its (weather dependent) capacity. Historical weather observations and demand patterns are provided by ASAC as inputs to the delay model. The ASAC economic models can translate a reduction in delay minutes into benefit dollars.
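
    The link between (weather-dependent) capacity and delay minutes lends itself to a textbook deterministic-queueing illustration: accumulate the excess of scheduled arrivals over capacity hour by hour. The demand and capacity numbers below are invented, and the calculation is a coarse cumulative-diagram approximation, not the ASAC delay model:

      # Hourly scheduled arrivals and arrival capacity (aircraft/hour), hypothetical.
      demand   = [30, 42, 55, 60, 58, 40, 35, 30]
      capacity = [48, 48, 48, 48, 36, 36, 48, 48]   # capacity drops when weather degrades

      queue = 0.0
      total_delay_min = 0.0
      for d, c in zip(demand, capacity):
          queue = max(0.0, queue + d - c)       # aircraft carried over to the next hour
          total_delay_min += queue * 60.0       # each queued aircraft waits ~1 more hour (coarse)

      print(f"estimated total arrival delay: {total_delay_min:.0f} aircraft-minutes")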

  12. Advanced Simulation Capability for Environmental Management - Current Status and Phase II Demonstration Results - 13161

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, Roger R.; Flach, Greg [Savannah River National Laboratory, Savannah River Site, Bldg 773-43A, Aiken, SC 29808 (United States); Freshley, Mark D.; Freedman, Vicky; Gorton, Ian [Pacific Northwest National Laboratory, MSIN K9-33, P.O. Box 999, Richland, WA 99352 (United States); Dixon, Paul; Moulton, J. David [Los Alamos National Laboratory, MS B284, P.O. Box 1663, Los Alamos, NM 87544 (United States); Hubbard, Susan S.; Faybishenko, Boris; Steefel, Carl I.; Finsterle, Stefan [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, MS 50B-4230, Berkeley, CA 94720 (United States); Marble, Justin [Department of Energy, 19901 Germantown Road, Germantown, MD 20874-1290 (United States)

    2013-07-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and High-Performance Computing (HPC) Multi-process Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial tool-sets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  13. Advanced Simulation Capability for Environmental Management - Current Status and Phase II Demonstration Results - 13161

    International Nuclear Information System (INIS)

    Seitz, Roger R.; Flach, Greg; Freshley, Mark D.; Freedman, Vicky; Gorton, Ian; Dixon, Paul; Moulton, J. David; Hubbard, Susan S.; Faybishenko, Boris; Steefel, Carl I.; Finsterle, Stefan; Marble, Justin

    2013-01-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and High-Performance Computing (HPC) Multi-process Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial tool-sets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  14. Parameter sampling capabilities of sequential and simultaneous data assimilation: II. Statistical analysis of numerical results

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess and compare parameter sampling capabilities of one sequential and one simultaneous Bayesian, ensemble-based, joint state-parameter (JS) estimation method. In the companion paper, part I (Fossum and Mannseth 2014 Inverse Problems 30 114002), analytical investigations lead us to propose three claims, essentially stating that the sequential method can be expected to outperform the simultaneous method for weakly nonlinear forward models. Here, we assess the reliability and robustness of these claims through statistical analysis of results from a range of numerical experiments. Samples generated by the two approximate JS methods are compared to samples from the posterior distribution generated by a Markov chain Monte Carlo method, using four approximate measures of distance between probability distributions. Forward-model nonlinearity is assessed from a stochastic nonlinearity measure allowing for sufficiently large model dimensions. Both toy models (with low computational complexity, and where the nonlinearity is fairly easy to control) and two-phase porous-media flow models (corresponding to down-scaled versions of problems to which the JS methods have been frequently applied recently) are considered in the numerical experiments. Results from the statistical analysis show strong support of all three claims stated in part I. (paper)
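
As an aside on how such sample-based comparisons can be made in practice, the short sketch below computes two common distances between 1-D sample sets (energy distance and the two-sample Kolmogorov-Smirnov statistic); these are generic examples, not necessarily the four measures used in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-ins for "reference" MCMC posterior samples and samples produced by an
# approximate joint state-parameter method (synthetic data for illustration).
mcmc_samples = rng.normal(loc=0.0, scale=1.0, size=5000)
approx_samples = rng.normal(loc=0.15, scale=1.1, size=5000)

energy = stats.energy_distance(mcmc_samples, approx_samples)
ks = stats.ks_2samp(mcmc_samples, approx_samples)

print(f"energy distance        : {energy:.4f}")
print(f"two-sample KS statistic: {ks.statistic:.4f}")
```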

  15. Off-Gas Adsorption Model Capabilities and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, Kevin L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, Amy K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ladshaw, Austin [Georgia Inst. of Technology, Atlanta, GA (United States); Yiacoumi, Sotira [Georgia Inst. of Technology, Atlanta, GA (United States); Tsouris, Costas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-03-01

Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well at capturing the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real-world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from gaps in the single-species isotherm data for gases such as Kr and Xe. Since isotherm data for each gas are currently available at only a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently
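
To illustrate the temperature-extrapolation gap noted above, consider a generic temperature-dependent Langmuir isotherm (an illustrative assumption; the abstract does not state which isotherm form DGOSPREY uses):

$$ q(P,T) \;=\; \frac{q_{\max}\, b(T)\, P}{1 + b(T)\, P}, \qquad b(T) \;=\; b_0 \exp\!\left(-\frac{\Delta H_{\mathrm{ads}}}{R\,T}\right). $$

Estimating the heat of adsorption $\Delta H_{\mathrm{ads}}$ requires isotherm measurements at two or more temperatures; with data at a single temperature $T_0$ only the lumped value $b(T_0)$ is constrained, so predictions at other temperatures cannot be anchored to data.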

  16. Climbing the ladder: capability maturity model integration level 3

    Science.gov (United States)

    Day, Bryce; Lutteroth, Christof

    2011-02-01

This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies and the integration of the Rational Unified Process development process framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  17. Evacuation emergency response model coupling atmospheric release advisory capability output

    International Nuclear Information System (INIS)

    Rosen, L.C.; Lawver, B.S.; Buckley, D.W.; Finn, S.P.; Swenson, J.B.

    1983-01-01

A Federal Emergency Management Agency (FEMA) sponsored project to couple the models of the Lawrence Livermore National Laboratory (LLNL) Atmospheric Release Advisory Capability (ARAC) system with candidate evacuation models is discussed herein. This report describes the ARAC system, the rapid evacuation computer code that was developed, and its coupling with ARAC output. The computer code is adapted to the use of color graphics as a means to display and convey the dynamics of an emergency evacuation. The model is applied to a specific case of an emergency evacuation of individuals surrounding the Rancho Seco Nuclear Power Plant, located approximately 25 miles southeast of Sacramento, California. The graphics available to the model user for the Rancho Seco example are displayed and noted in detail. Suggestions for future, potential improvements to the emergency evacuation model are presented

  18. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina

    2012-09-01

    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  19. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Methods of accurate modeling and power capability predicting for ultracapacitors are of great significance in management and application of lithium-ion battery/ultracapacitor hybrid energy storage system. To overcome the simulation error coming from constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulating results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on ultracapacitor's power capability.
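
As a concrete illustration of the variable-capacitance idea, the sketch below (a minimal example using hypothetical, made-up breakpoints rather than the paper's identified parameters) computes a piecewise-linear capacitance, the corresponding stored charge, and a charge-ratio state of charge:

```python
import numpy as np

# Hypothetical breakpoints for a piecewise-linear capacitance C(v); real
# values would be identified from terminal-voltage test data.
V_PTS = np.array([0.0, 1.0, 2.0, 2.7])          # terminal voltage [V]
C_PTS = np.array([280.0, 300.0, 330.0, 360.0])  # main capacitance [F]
V_MAX = 2.7                                     # rated cell voltage [V]

def capacitance(v):
    """Piecewise-linear main capacitance as a function of voltage."""
    return np.interp(v, V_PTS, C_PTS)

def stored_charge(v, n=200):
    """Charge held at voltage v: q(v) = integral of C(u) du from 0 to v."""
    u = np.linspace(0.0, v, n)
    return np.trapz(capacitance(u), u)

def state_of_charge(v):
    """SOC defined as stored charge relative to the charge at rated voltage."""
    return stored_charge(v) / stored_charge(V_MAX)

if __name__ == "__main__":
    for v in (0.5, 1.35, 2.0, 2.7):
        print(f"v = {v:.2f} V  ->  SOC = {state_of_charge(v):.3f}")
```

Unlike a constant-capacitance model, the SOC here is not simply proportional to voltage, which is the behaviour the variable-capacitance formulation is intended to capture.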

  20. Spent fuel reprocessing system security engineering capability maturity model

    International Nuclear Information System (INIS)

    Liu Yachun; Zou Shuliang; Yang Xiaohua; Ouyang Zigen; Dai Jianyong

    2011-01-01

In the field of nuclear safety, traditional work places extra emphasis on risk assessment related to technical skills, production operations and accident consequences through deterministic or probabilistic analysis, on the basis of which risk management and control are implemented. However, high quality of product does not necessarily mean good safety quality, which implies a predictable degree of uniformity and dependability suited to the specific security needs. In this paper, we make use of the systems security engineering capability maturity model (SSE-CMM) in the field of spent fuel reprocessing and establish a spent fuel reprocessing systems security engineering capability maturity model (SFR-SSE-CMM). The base practices in the model are collected from the materials of nuclear safety engineering practice; they represent the best security implementation activities and reflect the regular and basic work of implementing security engineering in a spent fuel reprocessing plant, while the generic practices reveal the management, measurement and institutional characteristics of all process activities. The basic principles that should be followed in the course of implementing safety engineering activities are indicated from the 'what' and 'how' aspects. The model provides a standardized framework and evaluation system for the safety engineering of the spent fuel reprocessing system. As a supplement to traditional methods, this new assessment technique, with its properties of repeatability and predictability with respect to cost, procedure and quality control, can make or improve security engineering activities into a series of mature, measurable and standard activities. (author)

  1. Human, Social, Cultural Behavior (HSCB) Modeling Workshop I: Characterizing the Capability Needs for HSCB Modeling

    Science.gov (United States)

    2008-07-01

Final report of the Human, Social, Cultural Behavior (HSCB) Modeling Workshop I: Characterizing the Capability Needs for HSCB Modeling. Only fragments of the report documentation page (title and form fields) and of a passage on social constructionism are preserved in this record.

  2. EASEWASTE-life cycle modeling capabilities for waste management technologies

    DEFF Research Database (Denmark)

    Bhander, Gurbakhash Singh; Christensen, Thomas Højlund; Hauschild, Michael Zwicky

    2010-01-01

Background, Aims and Scope: The management of municipal solid waste and the associated environmental impacts are a subject of growing attention in industrialized countries. The EU has recently strongly emphasized the role of LCA in its waste and resource strategies. The development of sustainable solid waste management systems applying a life-cycle perspective requires readily understandable tools for modelling the life cycle impacts of waste management systems. The aim of the paper is to demonstrate the structure, functionalities and LCA modelling capabilities of the PC-based, life-cycle-oriented waste management model EASEWASTE, developed at the Technical University of Denmark specifically to meet the needs of the waste system developer, with the objective to evaluate the environmental performance of the various elements of existing or proposed solid waste management systems. Materials...

  3. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting mechanical properties of parts additively manufactured by directed energy deposition processes with blown powder as well as by other additive manufacturing processes. Critical governing equations of each model and how various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  4. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting mechanical properties of parts additively manufactured by directed energy deposition processes with blown powder as well as by other additive manufacturing processes. Critical governing equations of each model and how various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  5. Antecedents of CIOs' Innovation Capability in Hospitals: Results of an Empirical Study.

    Science.gov (United States)

    Liebe, Jan-David; Esdar, Moritz; Thye, Johannes; Hübner, Ursula

    2017-01-01

CIOs' innovation capability is regarded as a precondition of successful HIT adoption in hospitals. Based on the data of 142 CIOs, this study aimed at identifying antecedents of perceived innovation capability. Eight features describing the status quo of the hospital IT management (e.g. use of IT governance frameworks), four features of the hospital structure (e.g. functional diversification) and four CIO characteristics (e.g. duration of employment) were tested as potential antecedents in an exploratory stepwise regression approach. Perceived innovation capability in its entirety and its three sub-dimensions served as criterion. The results show that CIOs' perceived innovation capability could be explained significantly (R2=0.34) and exclusively by factors that described the degree of formalism and structure of IT management in a hospital, e.g. intensive and formalised strategic communication, the existence of an IT strategy and the use of IT governance frameworks. Breaking down innovation capability into its constituents revealed that "innovative organisational culture" contributed to a large extent (R2=0.26) to the overall result, sharing several predictors. In contrast, "intrapreneurial personality" (R2=0.11) and "openness towards users" (R2=0.18) could be predicted less well. These results hint at the relationship between working in a well-structured, formalised and strategy-oriented environment and the overall feeling of being capable of promoting IT innovation.

  6. Systems Security Engineering Capability Maturity Model SSE-CMM Model Description Document

    National Research Council Canada - National Science Library

    1999-01-01

    The Systems Security Engineering Capability Maturity Model (SSE-CMM) describes the essential characteristics of an organization's security engineering process that must exist to ensure good security engineering...

  7. Frameworks for Assessing the Quality of Modeling and Simulation Capabilities

    Science.gov (United States)

    Rider, W. J.

    2012-12-01

The importance of assuring quality in modeling and simulation has spawned several frameworks for structuring the examination of quality. The format and content of these frameworks provides an emphasis, completeness and flow to assessment activities. I will examine four frameworks that have been developed and describe how they can be improved and applied to a broader set of high consequence applications. Perhaps the first of these frameworks was known as CSAU [Boyack] (code scaling, applicability and uncertainty), used for nuclear reactor safety and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety practice, and the practical structure needed after the Three Mile Island accident. It incorporated the dominant experimental program, the dominant analysis approach, and concerns about the quality of modeling. The USNRC gave it the force of law that made the nuclear industry take it seriously. After the cessation of nuclear weapons' testing the United States began a program of examining the reliability of these weapons without testing. This program utilizes science including theory, modeling, simulation and experimentation to replace the underground testing. The emphasis on modeling and simulation necessitated attention on the quality of these simulations. Sandia developed the PCMM (predictive capability maturity model) to structure this attention [Oberkampf]. PCMM divides simulation into six core activities to be examined and graded relative to the needs of the modeling activity. NASA [NASA] has built yet another framework in response to the tragedy of the space shuttle accidents. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in another approach. These frameworks are similar, and applied in a similar fashion. The adoption of these frameworks at Sandia and NASA has been slow and arduous because the force of law has not assisted acceptance. All existing frameworks are

  8. Systems Modeling to Implement Integrated System Health Management Capability

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark; Morris, Jonathan; Smith, Harvey; Schmalzel, John

    2007-01-01

    ISHM capability includes: detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, and user interfaces that enable integrated awareness (past, present, and future) by users. This is achieved by focused management of data, information and knowledge (DIaK) that will likely be distributed across networks. Management of DIaK implies storage, sharing (timely availability), maintaining, evolving, and processing. Processing of DIaK encapsulates strategies, methodologies, algorithms, etc. focused on achieving high ISHM Functional Capability Level (FCL). High FCL means a high degree of success in detecting anomalies, diagnosing causes, predicting future anomalies, and enabling health integrated awareness by the user. A model that enables ISHM capability, and hence, DIaK management, is denominated the ISHM Model of the System (IMS). We describe aspects of the IMS that focus on processing of DIaK. Strategies, methodologies, and algorithms require proper context. We describe an approach to define and use contexts, implementation in an object-oriented software environment (G2), and validation using actual test data from a methane thruster test program at NASA SSC. Context is linked to existence of relationships among elements of a system. For example, the context to use a strategy to detect leak is to identify closed subsystems (e.g. bounded by closed valves and by tanks) that include pressure sensors, and check if the pressure is changing. We call these subsystems Pressurizable Subsystems. If pressure changes are detected, then all members of the closed subsystem become suspect of leakage. In this case, the context is defined by identifying a subsystem that is suitable for applying a strategy. Contexts are defined in many ways. Often, a context is defined by relationships of function (e.g. liquid flow, maintaining pressure, etc.), form (e.g. part of the same component, connected to other components, etc.), or space (e.g. physically close
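
To make the "pressurizable subsystem" leak-detection context concrete, the following minimal sketch (illustrative names and thresholds only; the actual implementation is in the G2 object-oriented environment, not Python) flags every member of an isolated subsystem as a leak suspect when its pressure drifts downward:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Valve:
    name: str
    is_closed: bool

@dataclass
class PressurizableSubsystem:
    name: str
    boundary_valves: List[Valve]
    components: List[str]
    pressure_history: List[float] = field(default_factory=list)  # [kPa]

    def is_isolated(self) -> bool:
        """The leak-detection context applies only when all boundary valves are closed."""
        return all(v.is_closed for v in self.boundary_valves)

    def suspect_leak(self, drop_threshold_kpa: float = 5.0) -> List[str]:
        """If pressure falls while isolated, every member becomes a leak suspect."""
        if not self.is_isolated() or len(self.pressure_history) < 2:
            return []
        drop = self.pressure_history[0] - self.pressure_history[-1]
        return list(self.components) if drop > drop_threshold_kpa else []

if __name__ == "__main__":
    segment = PressurizableSubsystem(
        name="feed-segment",
        boundary_valves=[Valve("V-101", True), Valve("V-102", True)],
        components=["pipe-A", "pipe-B", "PT-7"],
        pressure_history=[850.0, 842.0, 831.0],
    )
    print("Leak suspects:", segment.suspect_leak())
```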

  9. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    Science.gov (United States)

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer assisted design/computer assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer based-systems with the CAD/CAM packages evaluated. (CW)

  10. NGNP Data Management and Analysis System Modeling Capabilities

    International Nuclear Information System (INIS)

    Gentillon, Cynthia D.

    2009-01-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  11. NGNP Data Management and Analysis System Modeling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Cynthia D. Gentillon

    2009-09-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  12. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively through classical feedback control, which acts only on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
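
The predict/compare/optimize loop that MPC performs can be sketched generically; the toy example below (a single joint modeled as a double integrator with arbitrary weights, not the authors' nine-DOF gait model) shows the receding-horizon structure:

```python
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.01, 20
A = np.array([[1.0, DT], [0.0, 1.0]])   # discrete-time model: [angle, rate]
B = np.array([0.0, DT])                 # input (torque-like) enters the rate

def predict(x0, u_seq):
    """Roll the internal model forward over the prediction horizon."""
    x, angles = x0.copy(), []
    for u in u_seq:
        x = A @ x + B * u
        angles.append(x[0])
    return np.array(angles)

def mpc_step(x0, ref_traj):
    """Choose the input sequence minimizing the predicted tracking error."""
    def cost(u_seq):
        err = predict(x0, u_seq) - ref_traj
        return np.sum(err**2) + 1e-4 * np.sum(u_seq**2)
    sol = minimize(cost, np.zeros(HORIZON), method="L-BFGS-B")
    return sol.x[0]                      # apply only the first input (receding horizon)

if __name__ == "__main__":
    x = np.array([0.0, 0.0])             # joint starts at rest
    reference = 0.3 * np.ones(HORIZON)   # constant reference of 0.3 rad
    for _ in range(50):
        u = mpc_step(x, reference)
        x = A @ x + B * u
    print(f"angle after 0.5 s: {x[0]:.3f} rad (reference 0.300)")
```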

  13. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    Science.gov (United States)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex because each additional process added to a model brings inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically, because it is not simple to determine exactly why a model produces the results it does or to identify which model assumptions are key: models combine many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser; PEcAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration to enhance our predictive understanding of biological systems.
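
The combinatorial ensemble generation that MAAT automates can be illustrated with a short sketch (toy process alternatives and values, not MAAT's actual process library or interface):

```python
from itertools import product

def gpp_linear(par, g_s):          # toy "photosynthesis" hypothesis A
    return 0.05 * par * g_s

def gpp_saturating(par, g_s):      # toy "photosynthesis" hypothesis B
    return 12.0 * par / (par + 400.0) * g_s

def g_s_constant(vpd):             # toy "stomatal conductance" hypothesis A
    return 0.8

def g_s_vpd_response(vpd):         # toy "stomatal conductance" hypothesis B
    return 1.2 / (1.0 + vpd)

HYPOTHESES = {
    "photosynthesis": {"linear": gpp_linear, "saturating": gpp_saturating},
    "conductance": {"constant": g_s_constant, "vpd_response": g_s_vpd_response},
}

def run_ensemble(par=800.0, vpd=1.5):
    """Enumerate every combination of process representations and evaluate it."""
    results = {}
    for photo_key, cond_key in product(HYPOTHESES["photosynthesis"],
                                       HYPOTHESES["conductance"]):
        g_s = HYPOTHESES["conductance"][cond_key](vpd)
        results[(photo_key, cond_key)] = HYPOTHESES["photosynthesis"][photo_key](par, g_s)
    return results

if __name__ == "__main__":
    for combo, value in run_ensemble().items():
        print(f"{combo}: output = {value:.2f}")
```

Comparing the ensemble spread across such combinations against data is one way to attribute prediction uncertainty to specific process representations rather than to parameters alone.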

  14. A model based lean approach to capability management

    CSIR Research Space (South Africa)

    Venter, Jacobus P

    2017-09-01

    Full Text Available It is argued that the definition of the required operational capabilities in the short and long term is an essential element of command. Defence Capability Management can be a cumbersome, long and very resource intensive activity. Given the new...

  15. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Burr, Tom; Gorensek, M.B.; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Ward, Richard C.

    2012-01-01

Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  16. On the predictive capabilities of multiphase Darcy flow models

    KAUST Repository

    Icardi, Matteo; Prudhomme, Serge

    2016-01-01

Darcy's law is a widely used model and the limit of its validity is fairly well known. When the flow is sufficiently slow and the porosity relatively homogeneous and low, Darcy's law is the homogenized equation arising from the Stokes and Navier-Stokes equations and depends on a single effective parameter (the absolute permeability). However, when the model is extended to multiphase flows, the assumptions are much more restrictive and less realistic. Therefore it is often used in conjunction with empirical models (such as relative permeability and capillary pressure curves), usually derived from phenomenological speculations and experimental data fitting. In this work, we present the results of a Bayesian calibration of a two-phase flow model, using high-fidelity DNS numerical simulation (at the pore scale) in a realistic porous medium. These reference results have been obtained from a Navier-Stokes solver coupled with an explicit interface-tracking scheme. The Bayesian inversion is performed on a simplified 1D model in Matlab by using an adaptive spectral method. Several data sets are generated and considered to assess the validity of this 1D model.
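
As a toy illustration of the Bayesian calibration step, the sketch below substitutes a plain random-walk Metropolis sampler for the adaptive spectral method used in the work, and calibrates a single Corey-type relative-permeability exponent against synthetic data (not the DNS reference results):

```python
import numpy as np

rng = np.random.default_rng(0)

def rel_perm(sw, n_exp):
    """Corey-type wetting-phase relative permeability, krw = sw**n."""
    return np.clip(sw, 0.0, 1.0) ** n_exp

# Synthetic "reference" observations generated with a true exponent of 3.0.
SW_OBS = np.linspace(0.1, 0.9, 9)
KRW_OBS = rel_perm(SW_OBS, 3.0) + rng.normal(0.0, 0.01, SW_OBS.size)
SIGMA = 0.01                              # assumed observation noise

def log_posterior(n_exp):
    if not 1.0 <= n_exp <= 6.0:           # uniform prior on [1, 6]
        return -np.inf
    resid = KRW_OBS - rel_perm(SW_OBS, n_exp)
    return -0.5 * np.sum((resid / SIGMA) ** 2)

def metropolis(n_steps=5000, step=0.1):
    chain, current = [], 2.0
    current_lp = log_posterior(current)
    for _ in range(n_steps):
        proposal = current + step * rng.normal()
        lp = log_posterior(proposal)
        if np.log(rng.uniform()) < lp - current_lp:
            current, current_lp = proposal, lp
        chain.append(current)
    return np.array(chain)

if __name__ == "__main__":
    samples = metropolis()[1000:]         # discard burn-in
    print(f"posterior mean exponent: {samples.mean():.2f} +/- {samples.std():.2f}")
```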

  17. On the predictive capabilities of multiphase Darcy flow models

    KAUST Repository

    Icardi, Matteo

    2016-01-09

Darcy's law is a widely used model and the limit of its validity is fairly well known. When the flow is sufficiently slow and the porosity relatively homogeneous and low, Darcy's law is the homogenized equation arising from the Stokes and Navier-Stokes equations and depends on a single effective parameter (the absolute permeability). However, when the model is extended to multiphase flows, the assumptions are much more restrictive and less realistic. Therefore it is often used in conjunction with empirical models (such as relative permeability and capillary pressure curves), usually derived from phenomenological speculations and experimental data fitting. In this work, we present the results of a Bayesian calibration of a two-phase flow model, using high-fidelity DNS numerical simulation (at the pore scale) in a realistic porous medium. These reference results have been obtained from a Navier-Stokes solver coupled with an explicit interface-tracking scheme. The Bayesian inversion is performed on a simplified 1D model in Matlab by using an adaptive spectral method. Several data sets are generated and considered to assess the validity of this 1D model.

  18. Systems Security Engineering Capability Maturity Model (SSECMM), Model Description, Version 1.1

    National Research Council Canada - National Science Library

    1997-01-01

    This document is designed to acquaint the reader with the SSE-CMM Project as a whole and present the project's major work product - the Systems Security Engineering Capability Maturity Model (SSE- CMM...

  19. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

Full Text Available Few studies have investigated the influence of "information capital," through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT's role in and benefits for corporate performance.

  20. Lattice Boltzmann model capable of mesoscopic vorticity computation

    Science.gov (United States)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM) which is of only second-order accuracy in hydrodynamic velocity itself. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show, with enough degrees of freedom and appropriate modifications, the mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with single relaxation parameter.
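
For context, in a standard single-relaxation-time LB model the strain-rate tensor can be evaluated locally from the non-equilibrium part of the distributions, up to factors that depend on the lattice units and discretization convention adopted:

$$ S_{\alpha\beta} \;\approx\; -\,\frac{1}{2\,\rho\, c_s^{2}\,\tau\,\delta t}\,\sum_{i} c_{i\alpha}\, c_{i\beta}\, f_i^{\mathrm{neq}}, \qquad f_i^{\mathrm{neq}} = f_i - f_i^{\mathrm{eq}}. $$

No analogous local expression is available for the vorticity (the antisymmetric part of the velocity gradient) in standard models, which is precisely the gap the D3Q27 multiple-relaxation-time model described here is designed to close.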

  1. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

This report documents progress made on continued developments of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions computed by the model. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  2. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

Full Text Available Accurate and fine-grained discovery by diverse Earth observation (EO) sensors ensures a comprehensive response to collaborative observation-required emergency tasks. This discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends the existing sensor observation-related metadata standards to enable the accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules, namely, ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized by the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on the provided geo-event. An actual application to flood emergency observation in the Yangtze River Basin in China is conducted, and the results indicate that sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures minimum unification to represent the observation capabilities of EO sensors. The model functions as a foundation for the efficient discovery of EO sensors. In addition, the definition and development of this proposed EO sensor observation capability metadata model is a helpful step in extending the Sensor Model Language (SensorML) 2.0 Profile for the description of the observation capabilities of EO sensors.
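
One way to picture the five sub-modules is as a simple capability record that a discovery query can filter on; the sketch below uses assumed, illustrative field names (the actual model is formalized in SensorML, not in code like this):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ObservationBreadth:
    swath_km: float
    spectral_bands: List[str]

@dataclass
class ObservationDepth:
    spatial_resolution_m: float

@dataclass
class ObservationFrequency:
    revisit_days: float

@dataclass
class ObservationQuality:
    cloud_tolerance_pct: float

@dataclass
class ObservationData:
    delivery_latency_h: float

@dataclass
class EOSensorCapability:
    sensor_id: str
    breadth: ObservationBreadth
    depth: ObservationDepth
    frequency: ObservationFrequency
    quality: ObservationQuality
    data: ObservationData

    def qualifies_for_flood_task(self, max_resolution_m=30.0, max_revisit_days=3.0):
        """Toy fine-grained query: is this sensor usable for rapid flood mapping?"""
        return (self.depth.spatial_resolution_m <= max_resolution_m
                and self.frequency.revisit_days <= max_revisit_days)

if __name__ == "__main__":
    sensor = EOSensorCapability(
        "optical-sat-A",
        ObservationBreadth(185.0, ["red", "nir"]),
        ObservationDepth(30.0),
        ObservationFrequency(2.0),
        ObservationQuality(20.0),
        ObservationData(6.0),
    )
    print("Qualified for flood observation:", sensor.qualifies_for_flood_task())
```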

  3. In-vessel retention modeling capabilities in MAAP5

    International Nuclear Information System (INIS)

    Paik, Chan Y.; Lee, Sung Jin; Zhou, Quan; Luangdilok, W.; Reeves, R.W.; Henry, R.E.; Plys, M.; Scobel, J.H.

    2012-01-01

Modular Accident Analysis Program (MAAP) is an integrated severe accident analysis code for both light water and heavy water reactors. New and improved models to address the complex phenomena associated with in-vessel retention (IVR) were incorporated into MAAP5.01. They include: (a) time-dependent volatile and non-volatile decay heat, (b) material properties at high temperatures, (c) finer vessel wall nodalization, (d) new correlations for natural convection heat transfer in the oxidic pool, (e) refined metal layer heat transfer to the reactor vessel wall and surroundings, (f) formation of a heavy metal layer, and (g) insulation cooling channel model and associated ex-vessel heat transfer and critical heat flux correlations. In this paper, the new and improved models in MAAP5.01 are described and sample calculation results are presented for the AP1000 passive plant. For the IVR evaluation, a transient calculation is useful because the timing of corium relocation, decaying heat load, and formation of separate layers in the lower plenum all affect integrity of the lower head. The key parameters affecting the IVR success are the metal layer emissivity and thickness of the top metal layer, which depends on the amount of steel in the oxidic pool and in the heavy metal layer. With the best estimate inputs for the debris mixing parameters in a conservative IVR scenario, the AP1000 plant results show that the maximum ex-vessel heat flux to CHF ratio is about 0.7, which occurs before 10,000 seconds when the decay heat is high. The AP1000 plant results demonstrate how MAAP5.01 can be used to evaluate IVR and to gain insight into responses of the lower head during a severe accident
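
The ex-vessel heat flux to CHF ratio quoted above corresponds to the usual local IVR success condition on the externally cooled lower head (a standard formulation stated here for clarity, not quoted from the MAAP documentation):

$$ R(\theta) \;=\; \frac{q''(\theta)}{q''_{\mathrm{CHF}}(\theta)} \;<\; 1 \quad \text{for all polar angles } \theta, $$

where $q''(\theta)$ is the local heat flux conducted through the vessel wall into the external coolant and $q''_{\mathrm{CHF}}(\theta)$ is the local critical heat flux on the flooded outer surface; the AP1000 result above corresponds to $\max_\theta R(\theta) \approx 0.7$.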

  4. An Empirical Competence-Capability Model of Supply Chain Innovation

    OpenAIRE

    Mandal, Santanu

    2016-01-01

    Supply chain innovation has become the new pre-requisite for the survival of firms in developing capabilities and strategies for sustaining their operations and performance in the market. This study investigates the influence of supply and demand competence on supply chain innovation and its influence on a firm’s operational and relational performance. While the former competence refers to production and supply management related activities, the latter refers to distribution and demand manage...

  5. The multipurpose thermalhydraulic test facility TOPFLOW: an overview on experimental capabilities, instrumentation and results

    International Nuclear Information System (INIS)

    Prasser, H.M.; Beyer, M.; Carl, H.; Manera, A.; Pietruske, H.; Schuetz, P.; Weiss, F.P.

    2006-01-01

A new multipurpose thermalhydraulic test facility TOPFLOW (TwO Phase FLOW) was built and put into operation at Forschungszentrum Rossendorf in 2002 and 2003. Since then, it has been mainly used for the investigation of generic and applied steady state and transient two phase flow phenomena and the development and validation of models of computational fluid dynamic (CFD) codes in the frame of the German CFD initiative. The advantage of TOPFLOW lies in the combination of large-scale test channels with a wide operational range of flow velocities, system pressures and temperatures, together with special instrumentation, such as the wire-mesh sensors, capable of resolving two phase flow phenomena with high spatial and temporal resolution. (orig.)

  6. Capabilities and requirements for modelling radionuclide transport in the geosphere

    International Nuclear Information System (INIS)

    Paige, R.W.; Piper, D.

    1989-02-01

    This report gives an overview of geosphere flow and transport models suitable for use by the Department of the Environment in the performance assessment of radioactive waste disposal sites. An outline methodology for geosphere modelling is proposed, consisting of a number of different types of model. A brief description of each of the component models is given, indicating the purpose of the model, the processes being modelled and the methodologies adopted. Areas requiring development are noted. (author)

  7. Capabilities of current wildfire models when simulating topographical flow

    Science.gov (United States)

    Kochanski, A.; Jenkins, M.; Krueger, S. K.; McDermott, R.; Mell, W.

    2009-12-01

    Accurate predictions of the growth, spread and suppression of wild fires rely heavily on the correct prediction of the local wind conditions and the interactions between the fire and the local ambient airflow. Resolving local flows, often strongly affected by topographical features like hills, canyons and ridges, is a prerequisite for accurate simulation and prediction of fire behaviors. In this study, we present the results of high-resolution numerical simulations of the flow over a smooth hill, performed using (1) the NIST WFDS (WUI or Wildland-Urban-Interface version of the FDS or Fire Dynamic Simulator), and (2) the LES version of the NCAR Weather Research and Forecasting (WRF-LES) model. The WFDS model is in the initial stages of development for application to wind flow and fire spread over complex terrain. The focus of the talk is to assess how well simple topographical flow is represented by WRF-LES and the current version of WFDS. If sufficient progress has been made prior to the meeting then the importance of the discrepancies between the predicted and measured winds, in terms of simulated fire behavior, will be examined.

  8. Expanding the modeling capabilities of the cognitive environment simulation

    International Nuclear Information System (INIS)

    Roth, E.M.; Mumaw, R.J.; Pople, H.E. Jr.

    1991-01-01

    The Nuclear Regulatory Commission has been conducting a research program to develop more effective tools to model the cognitive activities that underlie intention formation during nuclear power plant (NPP) emergencies. Under this program an artificial intelligence (AI) computer simulation called Cognitive Environment Simulation (CES) has been developed. CES simulates the cognitive activities involved in responding to a NPP accident situation. It is intended to provide an analytic tool for predicting likely human responses, and the kinds of errors that can plausibly arise under different accident conditions to support human reliability analysis. Recently CES was extended to handle a class of interfacing loss of coolant accidents (ISLOCAs). This paper summarizes the results of these exercises and describes follow-on work currently underway

  9. Management, innovation and business results under the Resources and Capabilities Theory

    Directory of Open Access Journals (Sweden)

    José Fernádez Palma

    2017-06-01

Full Text Available Different lines of thought, which relate to how organizations can achieve their competitive advantage, can be found within the general theory of management. On the one hand, there are those who argue that knowledge and analysis of the environment and the industry in which the organization participates is of vital importance and, on the other hand, those who believe that the main source of competitiveness is concentrated within the organizations themselves. In the latter case, the Resources and Capabilities approach asserts that such internal elements are a real and distinctive source of competitive advantage. It supports the existence of tangible and intangible assets, for example managerial skills, organizational processes and internal procedures, which are key elements for effective operation and for the achievement of the company's objectives. Therefore, the general purpose of this research is to determine if there is a relationship between a set of internal capabilities in a company and its respective financial results. An exploratory, descriptive and correlational study was carried out using secondary information on the subject, as well as a descriptive survey of 12 medium-sized companies in the city of Punta Arenas. Through the study it was possible to identify and describe the main relationships between the indicated variables.

  10. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    Science.gov (United States)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

In this paper, a reliable cellular automata model oriented to faithfully reproduce deceleration and acceleration according to realistic reactions of drivers, when vehicles with different deceleration capabilities are considered, is presented. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of driver behavior, vehicle capabilities and kinetics, while preserving simplicity. In particular, vehicle kinetics is based on uniformly accelerated motion, rather than on impulsive accelerated motion as in most existing CA models. Thus, the proposed model calculates in an analytic way three safety-preserving distances to determine the best action a follower vehicle can take under a worst-case scenario. Besides, the prediction analysis guarantees that, under the proper assumptions, collisions between vehicles cannot happen at any future time. Simulation results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform a smooth approach to slower or stopped vehicles. Moreover, the model is also capable of reproducing most empirical findings, including the backward speed of the downstream front of the traffic jam and the different congested traffic patterns induced by a system with open boundary conditions and an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real-time traffic simulation of large networks.
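
The worst-case reasoning behind the safety-preserving distances can be sketched with elementary uniform-deceleration kinematics; the reaction time and braking values below are illustrative, not the paper's calibration:

```python
def stopping_distance(speed_mps, decel_mps2, reaction_s=0.0):
    """Distance covered from 'speed' to rest: reaction travel + v**2 / (2*b)."""
    return speed_mps * reaction_s + speed_mps**2 / (2.0 * decel_mps2)

def safe_gap(v_follower, b_follower, v_leader, b_leader, reaction_s=1.0):
    """Minimum gap letting the follower stop behind a leader braking at full capability."""
    d_follower = stopping_distance(v_follower, b_follower, reaction_s)
    d_leader = stopping_distance(v_leader, b_leader)
    return max(0.0, d_follower - d_leader)

if __name__ == "__main__":
    # Asymmetry of deceleration capability: car (strong brakes) vs truck (weak brakes).
    print("car behind truck :", round(safe_gap(25.0, 6.0, 22.0, 3.0), 1), "m")
    print("truck behind car :", round(safe_gap(22.0, 3.0, 25.0, 6.0), 1), "m")
```

The asymmetry in the two printed gaps is exactly why heterogeneous deceleration capabilities have to enter the update rules explicitly.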

  11. Advanced capabilities for materials modelling with Quantum ESPRESSO

    Science.gov (United States)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  12. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    Science.gov (United States)

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  13. Long-term predictive capability of erosion models

    Science.gov (United States)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview is presented of long-term cavitation and liquid-impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach. A table was prepared to highlight the number of variables each model requires in order to compute the erosion-versus-time curves. A power-law relation based on the average erosion rate is suggested, which may solve several modeling problems.
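
    The record does not give the exact form of the proposed relation; purely as an illustration, a generic cumulative-erosion power law and the average erosion rate it implies can be written as

        % Illustrative only: an assumed generic form, not the relation proposed in the record.
        \[
          V(t) = C\,t^{\,n}, \qquad
          \bar{R}(t) \equiv \frac{V(t)}{t} = C\,t^{\,n-1},
        \]
        % so the long-term erosion-versus-time curve is summarised by the pair (C, n):
        % n < 1 gives a decelerating, n = 1 a steady, and n > 1 an accelerating average rate.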

  14. A user's guide to the SASSYS-1 control system modeling capability

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1987-06-01

    This report describes a control system modeling capability that has been developed for the analysis of control schemes for advanced liquid metal reactors. The general class of control equations that can be represented using the modeling capability is identified, and the numerical algorithms used to solve these equations are described. The modeling capability has been implemented in the SASSYS-1 systems analysis code. A description of the card input, a sample input deck, and some guidelines for running the code are given.

  15. Demonstrating the European capability for airborne gamma spectrometry: results from the eccomags exercise

    International Nuclear Information System (INIS)

    Sanderson, D.C.W.

    2003-01-01

    Full text: Airborne gamma spectrometry (AGS) has emerged over recent years as an important means for mapping deposited activity and dose-rates in the environment. Its importance to emergency response has been increasingly recognised, since it can define contaminated areas at rates of measurement and spatial density which simply cannot be matched using ground-based methods. In Europe the practical capability for AGS has developed markedly since the Chernobyl accident, and significant progress towards cooperation between AGS teams and convergence of methodology has been made, largely as a result of work conducted with support from the Commission under Frameworks IV and V. As an important part of the ECCOMAGS project, an international comparison was undertaken in SW Scotland in 2002 to evaluate the performance of AGS teams against established ground-based methods. The work included a composite mapping task whereby different AGS teams recorded data over adjacent areas, to demonstrate the potential for collective actions in emergency response. More than 70,000 observations were made from a 90 x 40 km area during a 3-day period by several European teams, and compiled within days to produce regional-scale maps of 137-Cs contamination and environmental gamma dose-rates. In three disjoint common areas both AGS and ground-based observations were taken. The ground-based work was split for practical reasons into two phases: a pre-characterization stage to define calibration sites, and a second stage in which sampling of additional sites was undertaken synchronously with the exercise to provide the basis for a blind intercomparison with AGS. More than 800 laboratory gamma spectrometry measurements were taken in support of this activity, which in combination with in-situ gamma spectrometry and instrumental dose-rate measurements were used to define the ground-based estimates of 137-Cs activity per unit area and dose-rate. AGS for the common areas, included points for ground

  16. The Creation and Use of an Analysis Capability Maturity Model (trademark) (ACMM)

    National Research Council Canada - National Science Library

    Covey, R. W; Hixon, D. J

    2005-01-01

    .... Capability Maturity Models (trademark) (CMMs) are being used in several intellectual endeavors, such as software engineering, software acquisition, and systems engineering. This Analysis CMM (ACMM...

  17. Capabilities For Modelling Of Conversion Processes In Life Cycle Assessment

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    considering how the biochemical parameters change through a process chain. A good example of this is bio-refinery processes where different residual biomass products are converted through different steps into the final energy product. Here it is necessary to know the stoichiometry of the different products...... little focus on the chemical composition of the functional flows, as flows in the models have mainly been tracked on a mass basis, since the emphasis was on the function of the product and not the chemical composition of said product. Conversely, in modelling of environmental technologies, such as wastewater...... varies considerably. To address this, EASETECH (Clavreul et al., 2014) was developed, which integrates a matrix approach for the reference flow which contains the full chemical composition for different material fractions, and also the number of different material fractions present in the overall mass...

  18. Capabilities for modelling of conversion processes in LCA

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    2015-01-01

    substances themselves change through a process chain. A good example of this is bio-refinery processes where different residual biomass products are converted through different steps into the final energy product. Here it is necessary to know the stoichiometry of the different products going in, and being...... little focus on the chemical composition of the functional flows, as flows in the models have mainly been tracked on a mass basis, as focus was on the function of the product and not the chemical composition of said product. Conversely modelling environmental technologies, such as wastewater treatment......, EASETECH (Clavreul et al., 2014) was developed which integrates a matrix approach for the functional unit which contains the full chemical composition for different material fractions, and also the number of different material fractions present in the overall mass being handled. These chemical substances...

  19. Research Opportunities from Emerging Atmospheric Observing and Modeling Capabilities.

    Science.gov (United States)

    Dabberdt, Walter F.; Schlatter, Thomas W.

    1996-02-01

    The Second Prospectus Development Team (PDT-2) of the U.S. Weather Research Program was charged with identifying research opportunities that are best matched to emerging operational and experimental measurement and modeling methods. The overarching recommendation of PDT-2 is that inputs for weather forecast models can best be obtained through the use of composite observing systems together with adaptive (or targeted) observing strategies employing both in situ and remote sensing. Optimal observing systems and strategies are best determined through a three-part process: observing system simulation experiments, pilot field measurement programs, and model-assisted data sensitivity experiments. Furthermore, the mesoscale research community needs easy and timely access to the new operational and research datasets in a form that can readily be reformatted into existing software packages for analysis and display. The value of these data is diminished to the extent that they remain inaccessible. The composite observing system of the future must combine synoptic observations, routine mobile observations, and targeted observations, as the current or forecast situation dictates. High costs demand fuller exploitation of commercial aircraft, meteorological and navigation [Global Positioning System (GPS)] satellites, and Doppler radar. Single observing systems must be assessed in the context of a composite system that provides complementary information. Maintenance of the current North American rawinsonde network is critical for progress in both research-oriented and operational weather forecasting. Adaptive sampling strategies are designed to improve large-scale and regional weather prediction but they will also improve diagnosis and prediction of flash flooding, air pollution, forest fire management, and other environmental emergencies. Adaptive measurements can be made by piloted or unpiloted aircraft. Rawinsondes can be launched and satellites can be programmed to make

  20. New generation of space capabilities resulting from US/RF cooperative efforts

    Science.gov (United States)

    Humpherys, Thomas; Misnik, Victor; Sinelshchikov, Valery; Stair, A. T., Jr.; Khatulev, Valery; Carpenter, Jack; Watson, John; Chvanov, Dmitry; Privalsky, Victor

    2006-09-01

    Previous successful international cooperative efforts offer a wealth of experience in dealing with highly sensitive issues, and cooperative remote sensing for monitoring and understanding the global environment is in the national interest of all countries. Cooperation between international partners is paramount, particularly with the Russian Federation, due to its technological maturity and strategic political and geographical position in the world. Based on the experience gained over a decade of collaborative space research efforts, continued cooperation is an achievable goal and deepens understanding of the fabric of our coexistence. Past cooperative space research efforts demonstrate the ability of the US and the Russian Federation to develop a framework for cooperation, working together on a complex, state-of-the-art joint satellite program. These efforts consisted of teams of scientists and engineers who overcame numerous cultural, linguistic and engineering differences as well as different political environments. Among the major achievements are: (1) field measurement activities with the US satellites MSTI and MSX and the Russian RESURS-1 satellite, as well as the joint experimental use of the US FISTA aircraft; (2) successful joint Science, Conceptual and Preliminary Design Reviews; (3) joint publications of scientific research technical papers; (4) Russian investment in the development, demonstration and operation of the Monitor-E spacecraft (Yacht satellite bus); (5) successful demonstration of the conversion of the SS-19 into a satellite launch system; and (6) negotiation of contractual and technical assistance agreements. This paper discusses a new generation of science and space capabilities available to the remote sensing community. Specific topics include: the joint requirements definition process and work allocation for hardware and responsibility for software development; and the function, description and status of Russian contributions in providing space component prototypes

  1. IMPACT OF CO-CREATION ON INNOVATION CAPABILITY AND FIRM PERFORMANCE: A STRUCTURAL EQUATION MODELING

    Directory of Open Access Journals (Sweden)

    FATEMEH HAMIDI

    Full Text Available ABSTRACT Traditional firms used to design products, evaluate marketing messages and control product distribution channels with no customer interface. With the advancements in interaction technologies, however, users can easily make an impact on firms; the interaction between customers and firms is now at its peak compared with the past and is no longer controlled by firms. Customers now play the two roles of value creator and consumer simultaneously. We examine the role of co-creation in the relationship between innovation capability and firm performance. We develop hypotheses and test them using researcher survey data. The results suggest that the implementation of co-creation partially mediates the effect of process innovation capability. We discuss the implications of these findings for research and practice on the design and implementation of a unique value co-creation model.

  2. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Science.gov (United States)

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra B.

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.

  3. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hamlin, Nathaniel D., E-mail: nh322@cornell.edu [438 Rhodes Hall, Cornell University, Ithaca, NY, 14853 (United States); Seyler, Charles E., E-mail: ces7@cornell.edu [Cornell University, Ithaca, NY, 14853 (United States)

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  4. A Computational Model of the SC Multisensory Neurons: Integrative Capabilities, Maturation, and Plasticity

    Directory of Open Access Journals (Sweden)

    Cristiano Cuppini

    2011-10-01

    Full Text Available Different cortical and subcortical structures present neurons able to integrate stimuli of different sensory modalities. Among these, one of the most investigated integrative regions is the Superior Colliculus (SC), a midbrain structure whose role is to guide attentive behaviour and motor responses toward external events. Despite the large amount of experimental data in the literature, the neural mechanisms underlying the SC response are not completely understood. Moreover, recent data indicate that multisensory integration ability is the result of maturation after birth, depending on sensory experience. Mathematical models and computer simulations can be of value to investigate and clarify these phenomena. In the last few years, several models have been implemented to shed light on these mechanisms and to gain a deeper comprehension of the SC capabilities. Here, a neural network model (Cuppini et al., 2010) is extensively discussed. The model considers visual-auditory interaction, and is able to reproduce and explain the main physiological features of multisensory integration in SC neurons, and their acquisition during postnatal life. To reproduce a neonatal condition, the model assumes that during early life: 1) cortical-SC synapses are present but not active; 2) in this phase, responses are driven by non-cortical inputs with very large receptive fields (RFs) and little spatial tuning; 3) a slight spatial preference for the visual inputs is present. Sensory experience is modeled by a “training phase” in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. As a result, cortical-SC synapses are shaped during this period by Hebbian rules of potentiation and depression, RFs are reduced in size, and neurons exhibit integrative capabilities in response to cross-modal stimuli, such as multisensory enhancement, inverse effectiveness, and multisensory depression. The utility of the modelling
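
    The Hebbian potentiation and depression alluded to above can be illustrated with a small, hypothetical Python fragment; the learning rate, threshold and the hebbian_update function are assumptions for illustration, not the equations of the Cuppini et al. network.

        import numpy as np

        def hebbian_update(w, pre, post, lr=0.01, theta=0.2, w_max=1.0):
            """One training step for a weight matrix w (post x pre)."""
            pre = np.asarray(pre, dtype=float)
            post = np.asarray(post, dtype=float)
            # Potentiation: correlated pre- and post-synaptic activity above threshold
            dw = lr * np.outer(np.maximum(post - theta, 0.0),
                               np.maximum(pre - theta, 0.0))
            # Depression: pre-synaptic activity paired with a silent post-synaptic unit
            dw -= lr * np.outer(np.maximum(theta - post, 0.0),
                                np.maximum(pre - theta, 0.0))
            return np.clip(w + dw, 0.0, w_max)

        # Repeated exposure to a spatially coincident cross-modal stimulus at location 2
        w = np.zeros((5, 5))
        for _ in range(200):
            stim = np.zeros(5)
            stim[2] = 1.0
            w = hebbian_update(w, pre=stim, post=stim)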

  5. Results from the Operational Testing of the Eaton Smart Grid Capable Electric Vehicle Supply Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Brion [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-10-01

    The Idaho National Laboratory conducted testing and analysis of the Eaton smart grid capable electric vehicle supply equipment (EVSE), which was a deliverable from Eaton for the U.S. Department of Energy FOA-554. The Idaho National Laboratory has extensive knowledge and experience in testing advanced conductive and wireless charging systems through INL’s support of the U.S. Department of Energy’s Advanced Vehicle Testing Activity. This document details the findings from the EVSE operational testing conducted at the Idaho National Laboratory on the Eaton smart grid capable EVSE. The testing conducted on the EVSE included energy efficiency testing, SAE J1772 functionality testing, abnormal conditions testing, and charging of a plug-in vehicle.

  6. Results from Operational Testing of the Siemens Smart Grid-Capable Electric Vehicle Supply Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Brion [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-05-01

    The Idaho National Laboratory conducted testing and analysis of the Siemens smart grid capable electric vehicle supply equipment (EVSE), which was a deliverable from Siemens for the U.S. Department of Energy FOA-554. The Idaho National Laboratory has extensive knowledge and experience in testing advanced conductive and wireless charging systems through INL’s support of the U.S. Department of Energy’s Advanced Vehicle Testing Activity. This document details the findings from the EVSE operational testing conducted at the Idaho National Laboratory on the Siemens smart grid capable EVSE. The testing conducted on the EVSE included energy efficiency testing, SAE J1772 functionality testing, abnormal conditions testing, and charging of a plug-in vehicle.

  7. Status Report on Modelling and Simulation Capabilities for Nuclear-Renewable Hybrid Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Epiney, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, J. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bragg-Sitton, S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yigitoglu, A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, S. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ganda, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Maronati, G. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    This report summarizes the current status of the modeling and simulation capabilities developed for the economic assessment of Nuclear-Renewable Hybrid Energy Systems (N-R HES). The increasing penetration of variable renewables is altering the profile of the net demand, with which the other generators on the grid have to cope. N-R HES analyses are being conducted to determine the potential feasibility of mitigating the resultant volatility in the net electricity demand by adding industrial processes that utilize either thermal or electrical energy as stabilizing loads. This coordination of energy generators and users is proposed to mitigate the increase in electricity cost and cost volatility through the production of a saleable commodity. Overall, the financial performance of a system that is comprised of peaking units (i.e. gas turbine), baseload supply (i.e. nuclear power plant), and an industrial process (e.g. hydrogen plant) should be optimized under the constraint of satisfying an electricity demand profile with a certain level of variable renewable (wind) penetration. The optimization should entail both the sizing of the components/subsystems that comprise the system and the optimal dispatch strategy (output at any given moment in time from the different subsystems). Some of the capabilities here described have been reported separately in [1, 2, 3]. The purpose of this report is to provide an update on the improvement and extension of those capabilities and to illustrate their integrated application in the economic assessment of N-R HES.

  8. Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Showstack, Randy

    2011-04-01

    Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and the tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.

  9. Evaluation of the Predictive Capabilities of a Phenomenological Combustion Model for Natural Gas SI Engine

    Directory of Open Access Journals (Sweden)

    Toman Rastislav

    2017-12-01

    Full Text Available The current study evaluates the predictive capabilities of a new phenomenological combustion model, available as part of the GT-Suite software package. It comprises two main sub-models: a 0D model of in-cylinder flow and turbulence, and a turbulent SI combustion model. The 0D in-cylinder flow model (EngCylFlow) uses a combined K-k-ε kinetic energy cascade approach to predict the evolution of the in-cylinder charge motion and turbulence, where K and k are the mean and turbulent kinetic energies, and ε is the turbulent dissipation rate. The subsequent turbulent combustion model (EngCylCombSITurb) gives the in-cylinder burn rate, based on the calculation of flame speeds and flame kernel development. This phenomenological approach reduces the overall computational effort significantly compared to 3D-CFD, thus allowing the computation of the full engine operating map and of vehicle driving cycles. The model was calibrated using a full-map measurement from a turbocharged natural gas SI engine with swirl intake ports. Sensitivity studies on different calibration methods and laminar flame speed sub-models were conducted. The validation process for both the calibration and sensitivity studies considered the in-cylinder pressure traces and burn rates for several engine operating points, achieving good overall results.

  10. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    Science.gov (United States)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damages if structural strengthening measures were not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.

  11. Guidelines for Applying the Capability Maturity Model Analysis to Connected and Automated Vehicle Deployment

    Science.gov (United States)

    2017-11-23

    The Federal Highway Administration (FHWA) has adapted the Transportation Systems Management and Operations (TSMO) Capability Maturity Model (CMM) to describe the operational maturity of Infrastructure Owner-Operator (IOO) agencies across a range of i...

  12. A Stochastic Model for the Landing Dispersion of Hazard Detection and Avoidance Capable Flight Systems

    Science.gov (United States)

    Witte, L.

    2014-06-01

    To support landing-site assessments for HDA-capable flight systems, and to facilitate trade studies between potential HDA architectures and the resulting probability of safe landing, a stochastic landing dispersion model has been developed.

  13. Adaptive Planning: Understanding Organizational Workload to Capability/ Capacity through Modeling and Simulation

    Science.gov (United States)

    Hase, Chris

    2010-01-01

    In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with an objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months) [2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP) directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload, associated with developing and staffing JSCP [4] directed contingency plans, against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload against capability/capacity. Gaining an understanding of the relationship between plan complexity, number of plans, planning processes, and number of planners, and the time required for plan development, provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight for key decisions on where best to allocate scarce resources in an effort to meet DoD planning objectives.

  14. Models to enhance research capacity and capability in clinical nurses: a narrative review.

    Science.gov (United States)

    O'Byrne, Louise; Smith, Sheree

    2011-05-01

    To identify models used as local initiatives to build capability and capacity in clinical nurses. The National Health Service, the Nursing and Midwifery Council and the United Kingdom Clinical Research Collaboration all support the development of research capability and capacity in clinical nurses in the UK. Narrative review. A literature search of databases (including Medline and PubMed) using the search terms nursing research, research capacity and research capability combined with building, development, model and collaboration. Publications which included a description or methodological study of a structured initiative to tackle research capacity and capability development in clinical nurses were selected. Three models were found to be dominant in the literature. These comprised evidence-based practice, facilitative and experiential learning models. Strong leadership, organisational need and management support were elements found in all three models. Methodological issues were evident and pertained to small sample sizes and inconsistent and poorly defined outcomes, along with a lack of data. Whilst the vision of a research-ready and active National Health Service is to be applauded, to date there appears to be limited research on the best approach to support local initiatives for nurses that build research capability and capacity. Future studies will need to focus on well-defined objectives and outcomes to enable robust evidence to support local initiatives. To build research capability and capacity in clinical nurses, there is a need to evaluate models and determine the best approach that will provide clinical nurses with research opportunities. © 2010 Blackwell Publishing Ltd.

  15. Precision Departure Release Capability (PDRC) Overview and Results: NASA to FAA Research Transition

    Science.gov (United States)

    Engelland, Shawn; Davis, Tom.

    2013-01-01

    NASA researchers developed the Precision Departure Release Capability (PDRC) concept to improve the tactical departure scheduling process. The PDRC system comprises: 1) a surface automation system that computes ready-time predictions and departure runway assignments; 2) an en route scheduling automation tool that uses this information to estimate ascent trajectories to the merge point and computes release times; and 3) an interface that provides two-way communication between the two systems. To minimize technology transfer issues and facilitate its adoption by TMCs and Frontline Managers (FLMs), NASA developed the PDRC prototype using the Surface Decision Support System (SDSS) as the tower surface automation tool, a research version of the FAA TMA (RTMA) as the en route automation tool, and a digital interface between the two DSTs to facilitate coordination.

  16. Security Process Capability Model Based on ISO/IEC 15504 Conformant Enterprise SPICE

    Directory of Open Access Journals (Sweden)

    Mitasiunas Antanas

    2014-07-01

    Full Text Available In the context of modern information systems, security has become one of the most critical quality attributes. The purpose of this paper is to address the problem of the quality of information security. The approach taken to solve this problem is based on the main assumption that security is a process-oriented activity. According to this approach, product quality can be achieved by means of process quality, that is, process capability. The SPICE-conformant information security process capability model introduced in the paper is based on the process capability modeling elaborated by the worldwide software engineering community over the last 25 years, namely ISO/IEC 15504, which defines the capability dimension and the requirements for process definition, and Enterprise SPICE, a domain-independent integrated model for enterprise-wide assessment and improvement.

  17. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.

  18. The Sentry Autonomous Underwater Vehicle: Field Trial Results and Future Capabilities

    Science.gov (United States)

    Yoerger, D. R.; Bradley, A. M.; Martin, S. C.; Whitcomb, L. L.

    2006-12-01

    The Sentry autonomous underwater vehicle combines an efficient long-range survey capability with the ability to maneuver at low speeds. These attributes permit Sentry to perform a variety of conventional and unconventional surveys, including long-range sonar surveys, hydrothermal plume surveys and near-bottom photo surveys. Sentry's streamlined body and fore and aft tilting planes, each possessing an independently controlled thruster, enable efficient operation in both near-bottom and cruising operations. Sentry can be configured in two modes: hover mode, which commands Sentry's control surfaces to be aligned vertically, and forward flight mode, which allows Sentry's control surfaces to actuate between plus or minus 45 degrees. Sentry is equipped for full six-degree-of-freedom position measurement. Vehicle heading, roll, and pitch are instrumented with a TCM2 PNI heading and attitude sensor. A Systron Donner yaw-rate sensor measures heading rate. Depth is measured by a Paroscientific depth sensor. A 300 kHz RD Instruments Doppler sonar provides altitude and XYZ velocity measurements. In April 2006, we conducted our first deep-water field trials of Sentry in Bermuda. These trials enabled us to examine a variety of issues, including the control software, vehicle safety systems, launch and recovery procedures, operation at depth, heading and depth controllers over a range of speeds, and power consumption. Sentry employs a control system based upon the Jason 2 control system for low-level control, which has proven effective and reliable over several hundred deep-water dives. The Jason 2 control system, developed jointly at Johns Hopkins University and the Woods Hole Oceanographic Institution, was augmented to manage Sentry-specific devices (sensors, actuators, and power storage) and to employ a high-level mission controller that supports autonomous mission scripting and error detection and response. This control suite will also support the Nereus

  19. Co-firing biomass and coal-progress in CFD modelling capabilities

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Yin, Chungen

    2005-01-01

    This paper discusses the development of user-defined FLUENT™ sub-models to improve the modelling capabilities in the area of large biomass particle motion and conversion. Focus is put on a model that includes the influence of particle size and shape on the reactivity by resolving intra-particle gradients. The advanced reaction model predicts moisture and volatiles release characteristics that differ significantly from those found with a 0-dimensional model, partly because the processes occur in parallel rather than sequentially. This is demonstrated for a test case that illustrates single...

  20. Capabilities of gravitational surgery for improvement of treatment results in patients with diabetic foot syndrome

    Directory of Open Access Journals (Sweden)

    M B Akhmedov

    2018-06-01

    Full Text Available Aim. To improve the results of complex treatment in patients with diabetic foot syndrome by introducing methods of gravitational surgery and α-lipoic acid. Methods. The results of treatment were analyzed for 558 patients with diabetic foot syndrome treated at the Scientific Centre of Surgery named after M.A. Topchubashov (Baku, Azerbaijan) from 1988 to 2015. Age varied from 28 to 83 years; the patients included 416 men and 142 women. The control group included 90 patients who in the perioperative period received basic therapy including antibiotics, anticoagulants, antiaggregants, dextrans, angioprotectors, spasmolytics, corticosteroids, and narcotic and non-narcotic analgesics. The study group included 468 patients who, along with traditional therapy, received efferent methods (plasmapheresis, ultraviolet blood irradiation, ozone therapy) and α-lipoic acid. Of the study group, 282 patients received outpatient treatment and 186 received complex inpatient surgical treatment. A comparative evaluation of the results was performed separately in three groups: angiopathy, neuropathy, and angioneuropathy. The results were evaluated by clinical and instrumental examinations before and after treatment (6, 12, and 60 months and more). Results. In the study group a satisfactory treatment result was registered in 85.5% of patients and in the control group in 62.2%; unsatisfactory results were seen in 14.5 and 37.8% of patients, respectively (p=0.046). Conclusion. The use of efferent methods and α-lipoic acid provided prompt elimination of numerous pathogenetic disorders observed in diabetes mellitus, a decrease in amputation frequency, and improved results of complex surgical treatment in patients with diabetic foot syndrome.

  1. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    Science.gov (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national data sets, models and uncertainty analysis in a cloud computing environment, to explore and benchmark current predictive capability for national-scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, and on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets, as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogeneous or quasi-homogeneous functional behaviour in terms of process controls on N and P cycling
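
    As an illustration of the export-coefficient idea behind models of this family, the Python sketch below sums land-use areas weighted by nutrient export coefficients; the land-use classes, coefficient values and the nutrient_flux helper are hypothetical placeholders, not the AERC model's calibrated values.

        def nutrient_flux(areas_ha, coefficients_kg_per_ha, point_sources_kg=0.0):
            """Annual nutrient load (kg/yr): sum over land uses of E_i * A_i, plus point sources."""
            diffuse = sum(coefficients_kg_per_ha[lu] * area for lu, area in areas_ha.items())
            return diffuse + point_sources_kg

        # Hypothetical example catchment
        areas = {"arable": 1200.0, "grassland": 800.0, "woodland": 400.0, "urban": 150.0}   # ha
        n_coeffs = {"arable": 20.0, "grassland": 8.0, "woodland": 1.5, "urban": 5.0}        # kg N/ha/yr
        total_n = nutrient_flux(areas, n_coeffs, point_sources_kg=5000.0)
        print(f"Estimated N export: {total_n:.0f} kg/yr")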

  2. The capability and constraint model of recoverability: An integrated theory of continuity planning.

    Science.gov (United States)

    Lindstedt, David

    2017-01-01

    While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.

  3. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious, especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation and uncertainty quantification, which are described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification and validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application's intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner in which to perform such an assessment. Ideally, all stakeholders should be represented and contribute to performing an accurate credibility assessment. PIRTs and PCMMs are both described briefly below, and the resulting assessments for an example project are given.

  4. Air Monitoring Network at Tonopah Test Range: Network Description, Capabilities, and Analytical Results

    International Nuclear Information System (INIS)

    Hartwell, William T.; Daniels, Jeffrey; Nikolich, George; Shadel, Craig; Giles, Ken; Karr, Lynn; Kluesner, Tammy

    2012-01-01

    During the period April to June 2008, at the behest of the Department of Energy (DOE), National Nuclear Security Administration, Nevada Site Office (NNSA/NSO), the Desert Research Institute (DRI) constructed and deployed two portable environmental monitoring stations at the Tonopah Test Range (TTR) as part of the Environmental Restoration Project Soils Activity. DRI has operated these stations since that time. A third station was deployed in the period May to September 2011. The TTR is located within the northwest corner of the Nevada Test and Training Range (NTTR), and covers an area of approximately 725.20 km2 (280 mi2). The primary objective of the monitoring stations is to evaluate whether, and under what conditions, there is wind transport of radiological contaminants from the Soils Corrective Action Units (CAUs) associated with Operation Roller Coaster on TTR. Operation Roller Coaster was a series of tests, conducted in 1963, designed to examine the stability and dispersal of plutonium in storage and transportation accidents. These tests did not result in any nuclear explosive yield. However, the tests did result in the dispersal of plutonium and contamination of surface soils in the surrounding area.

  5. The influence of ligament modelling strategies on the predictive capability of finite element models of the human knee joint.

    Science.gov (United States)

    Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico

    2017-12-08

    In finite element (FE) models, knee ligaments can be represented either by a group of one-dimensional springs or by three-dimensional continuum elements based on segmentations. Continuum models more closely approximate the anatomy and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on literature values or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint. The effect of literature-based versus specimen-specific optimized material parameters was evaluated. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either using springs or using continuum representations. In the spring representation, the collateral ligaments were each modelled with three single-element bundles and the cruciate ligaments with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. Using a continuum modelling approach resulted in more accurate contact outcome variables than the spring representation with two (cruciate ligaments) and three (collateral ligaments) single-element-bundle representations. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
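
    The spring-type ligament representation compared above can be illustrated with a commonly used tension-only, piecewise toe/linear bundle law; the Python fragment below is a generic sketch in which the parameter values and the bundle_force helper are placeholders, not the subject-specific optimized properties of the study.

        def bundle_force(length, slack_length, stiffness, eps_toe=0.03):
            """Tension-only spring force of one ligament bundle as a function of its length."""
            eps = (length - slack_length) / slack_length       # engineering strain
            if eps <= 0.0:
                return 0.0                                     # no force in compression
            if eps <= 2.0 * eps_toe:
                return 0.25 * stiffness * eps * eps / eps_toe  # quadratic toe region
            return stiffness * (eps - eps_toe)                 # linear region

        # Example: a 40 mm bundle stretched by 5 % with an assumed stiffness of 2000 N
        f = bundle_force(length=1.05 * 40.0, slack_length=40.0, stiffness=2000.0)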

  6. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanatory ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation properties and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows the relative importance of the input variables to be inferred, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed in two real-world case studies - the Marina catchment (Singapore) and the Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided by the model can be given a physically meaningful interpretation.
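
    A minimal Extra-Trees regression workflow of the kind evaluated above can be sketched with scikit-learn's ExtraTreesRegressor; the synthetic data, predictors and settings below are placeholders, not the study's catchment datasets or configuration.

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 6))                  # e.g. lagged rainfall and past flows
        y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=1000)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

        print("R^2 on held-out data:", model.score(X_test, y_test))
        print("Relative input importances:", model.feature_importances_)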

  7. Capability-based Access Control Delegation Model on the Federated IoT Network

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Mahalle, Parikshit N.; Prasad, Neeli R.

    2012-01-01

    Flexibility is an important property for general access control system and especially in the Internet of Things (IoT), which can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for a system that has...... no resource constraint, such as a web-based system, which is not very suitable for a highly pervasive system such as IoT. To this end, this paper presents an access delegation method with security considerations based on Capability-based Context Aware Access Control (CCAAC) model intended for federated...... machine-to-machine communication or IoT networks. The main idea of our proposed model is that the access delegation is realized by means of a capability propagation mechanism, and incorporating the context information as well as secure capability propagation under federated IoT environments. By using...

  8. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    these models. In some cases, the CME system also provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes. These codes include seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group. Also, the CME system hosts anelastic wave propagation codes including Kim Olsen's Finite Difference code and Carnegie Mellon's Hercules Finite Element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations using the output of one processing step as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Greens Functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.

  9. Testing an integrated model of operations capabilities An empirical study of Australian airlines

    NARCIS (Netherlands)

    Nand, Alka Ashwini; Singh, Prakash J.; Power, Damien

    2013-01-01

    Purpose - The purpose of this paper is to test the integrated model of operations strategy as proposed by Schmenner and Swink to explain whether firms trade-off or accumulate capabilities, taking into account their positions relative to their asset and operating frontiers.

  10. University-Industry Research Collaboration: A Model to Assess University Capability

    Science.gov (United States)

    Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia

    2011-01-01

    Scholars and policy makers recognize that collaboration between industry and the public research institutions is a necessity for innovation and national economic development. This work presents an econometric model which expresses the university capability for collaboration with industry as a function of size, location and research quality. The…

  11. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    Science.gov (United States)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered to "To identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains 5 sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) advanced Uncertainty Models, (4) Virtual Testing Models, and (5) space-based Robotics Manufacture and Servicing Models.

  12. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    Science.gov (United States)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of the susceptibility models. The study is carried out in the Guantánamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database into three subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.
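
    The training/validation/test scheme with 10-fold cross-validation described above can be illustrated with a small scikit-learn classifier; in the Python sketch below the synthetic data, network size and settings are assumptions, not the study's configuration (early stopping on an internal validation split stands in for the validation set used to halt training).

        import numpy as np
        from sklearn.model_selection import StratifiedKFold
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 12))                  # 12 conditioning factors
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 1.0).astype(int)

        aucs = []
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
        for train_idx, test_idx in cv.split(X, y):
            clf = MLPClassifier(hidden_layer_sizes=(16,), early_stopping=True,
                                validation_fraction=0.2, max_iter=1000, random_state=1)
            clf.fit(X[train_idx], y[train_idx])
            aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))

        print("Mean AUC over 10 folds:", np.mean(aucs))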

  13. Enhancing Interoperability and Capabilities of Earth Science Data using the Observations Data Model 2 (ODM2)

    Directory of Open Access Journals (Sweden)

    Leslie Hsu

    2017-02-01

    Earth Science researchers require access to integrated, cross-disciplinary data in order to answer critical research questions. Partially due to these science drivers, it is common for disciplinary data systems to expand from their original scope in order to accommodate collaborative research. The result is multiple disparate databases with overlapping but incompatible data. In order to enable more complete data integration and analysis, the Observations Data Model Version 2 (ODM2) was developed to be a general information model, with one of its major goals being to integrate data collected by in situ sensors with those from ex situ analyses of field specimens. Four use cases with different science drivers and disciplines have adopted ODM2 because of benefits to their users. The disciplines behind the four cases are diverse: hydrology, rock geochemistry, soil geochemistry, and biogeochemistry. For each case, we outline the benefits, challenges, and rationale for adopting ODM2. In each case, the decision to implement ODM2 was made to increase interoperability and expand data and metadata capabilities. One of the common benefits was the ability to use the flexible handling and comprehensive description of specimens and data collection sites in ODM2's sampling feature concept. We also summarize best practices for implementing ODM2 based on the experience of these initial adopters. The descriptions here should help other potential adopters of ODM2 implement their own instances or modify ODM2 to suit their needs.
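
    The sampling feature idea mentioned above can be illustrated with a deliberately simplified data structure in which an in-situ sensor reading and an ex-situ specimen analysis share one result record. The class and field names are hypothetical simplifications, not the actual ODM2 schema.

      # Minimal sketch of a sampling-feature/result pattern; class and field names
      # are illustrative simplifications, not the ODM2 schema.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class SamplingFeature:
          # A site or a specimen at/from which observations are made.
          code: str
          feature_type: str        # e.g. "site" or "specimen"
          description: str = ""

      @dataclass
      class Result:
          # A single observed value tied to a sampling feature.
          sampling_feature: SamplingFeature
          variable: str            # e.g. "water temperature"
          value: float
          unit: str
          result_type: str         # e.g. "in-situ sensor" or "ex-situ specimen analysis"

      # A sensor reading and a laboratory analysis share one structure, which is what
      # makes cross-disciplinary integration easier.
      site = SamplingFeature("RIV-01", "site", "river monitoring station")
      core = SamplingFeature("CORE-07", "specimen", "sediment core collected at RIV-01")
      results: List[Result] = [
          Result(site, "water temperature", 12.4, "degC", "in-situ sensor"),
          Result(core, "organic carbon", 2.1, "percent", "ex-situ specimen analysis"),
      ]
      for r in results:
          print(r.sampling_feature.code, r.variable, r.value, r.unit)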

  14. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao

    2016-01-01

    How may externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? This study proposes and tests a mediation model of how firms’ internal technological diversification and R&D, as two distinctive microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms’ technological innovation performance, based on the resource-based view and dynamic capability view. Using a sample of listed Chinese licensee firms, we find that firms must broadly explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently to transform and reconfigure firms’ technological resources.

  15. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    Science.gov (United States)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; hide

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  16. Konsep Tingkat Kematangan penerapan Internet Protokol versi 6 (Capability Maturity Model for IPv6 Implementation)

    Directory of Open Access Journals (Sweden)

    Riza Azmi

    2015-03-01

    The Internet Protocol (IP) is the worldwide standard for internet numbering, and the number of addresses is finite. Globally, IP allocation is managed by the Internet Assigned Numbers Authority (IANA) and delegated through the regional authorities of each continent. IP exists in two versions, IPv4 and IPv6, and the IPv4 allocation was declared exhausted at the IANA level in April 2011. The use of IP is therefore being directed toward IPv6. To assess how mature an organization is with respect to IPv6 implementation, this research develops a maturity model for IPv6 adoption. The basic concept of the model is taken from the Capability Maturity Model Integration (CMMI), with several additions: the IPv6 migration roadmap in Indonesia, the Requests for Comments (RFCs) related to IPv6, and several best practices for IPv6 implementation. From these concepts, this research produces a Capability Maturity Model for IPv6 Implementation.

  17. Gamma-Ray Emission Tomography: Modeling and Evaluation of Partial-Defect Testing Capabilities

    International Nuclear Information System (INIS)

    Jacobsson Svard, S.; Jansson, P.; Davour, A.; Grape, S.; White, T.A.; Smith, L.E.; Deshmukh, N.; Wittman, R.S.; Mozin, V.; Trellue, H.

    2015-01-01

    Gamma emission tomography (GET) for spent nuclear fuel verification is the subject of IAEA MSP project JNT1955. In line with the IAEA Safeguards R&D plan 2012-2023, the aim of this effort is to "develop more sensitive and less intrusive alternatives to existing NDA instruments to perform partial defect test on spent fuel assembly prior to transfer to difficult to access storage". The current viability study constitutes the first phase of three, with evaluation and decision points between each phase. Two verification objectives have been identified: (1) counting of fuel pins in tomographic images without any a priori knowledge of the fuel assembly under study, and (2) quantitative measurements of pin-by-pin properties, e.g., burnup, for the detection of anomalies and/or verification of operator-declared data. Previous measurements performed in Sweden and Finland have proven GET highly promising for detecting removed or substituted fuel rods in BWR and VVER-440 fuel assemblies, even down to the individual fuel rod level. The current project adds to previous experiences by pursuing a quantitative assessment of the capabilities of GET for partial defect detection, across a broad range of potential IAEA applications, fuel types and fuel parameters. A modelling and performance-evaluation framework has been developed to provide quantitative GET performance predictions, incorporating burnup and cooling-time calculations, Monte Carlo radiation-transport and detector-response modelling, GET instrument definitions (existing and notional) and tomographic reconstruction algorithms, which use recorded gamma-ray intensities to produce images of the fuel's internal source distribution or conclusive rod-by-rod data. The framework also comprises image-processing algorithms and performance metrics that recognize the inherent tradeoff between the probability of detecting missing pins and the false-alarm rate. Here, the modelling and analysis framework is

  18. The Community WRF-Hydro Modeling System Version 4 Updates: Merging Toward Capabilities of the National Water Model

    Science.gov (United States)

    McAllister, M.; Gochis, D.; Dugger, A. L.; Karsten, L. R.; McCreight, J. L.; Pan, L.; Rafieeinasab, A.; Read, L. K.; Sampson, K. M.; Yu, W.

    2017-12-01

    The community WRF-Hydro modeling system is publicly available and provides researchers and operational forecasters a flexible and extensible capability for multi-scale, multi-physics hydrologic modeling that can be run independently of, or fully interactively with, the WRF atmospheric model. The core WRF-Hydro physics model contains very high-resolution descriptions of terrestrial hydrologic process representations such as land-atmosphere exchanges of energy and moisture, snowpack evolution, infiltration, terrain routing, channel routing, basic reservoir representation and hydrologic data assimilation. Complementing the core physics components of WRF-Hydro is an ecosystem of pre- and post-processing tools that facilitate the preparation of terrain and meteorological input data, an open-source hydrologic model evaluation toolset (Rwrfhydro), hydrologic data assimilation capabilities with DART, and advanced model visualization capabilities. The National Center for Atmospheric Research (NCAR), through collaborative support from the National Science Foundation and other funding partners, provides community support for the entire WRF-Hydro system through a variety of mechanisms. This presentation summarizes the enhanced user support capabilities that are being developed for the community WRF-Hydro modeling system. These products and services include a new website, open-source code repositories, documentation and user guides, test cases, online training materials, live hands-on training sessions, an email listserv, and individual user support via email through a new help desk ticketing system. The WRF-Hydro modeling system and supporting tools, which now include re-gridding scripts and model calibration, have recently been updated to Version 4 and are merging toward the capabilities of the National Water Model.

  19. Power Beamed Photon Sails: New Capabilities Resulting From Recent Maturation Of Key Solar Sail And High Power Laser Technologies

    International Nuclear Information System (INIS)

    Montgomery, Edward E. IV

    2010-01-01

    This paper revisits some content from the First International Symposium on Beamed Energy Propulsion in 2002 related to the concept of propellantless in-space propulsion utilizing an external high-energy laser to provide momentum to an ultralightweight (gossamer) spacecraft. The design and construction of the NanoSail-D solar sail demonstration spacecraft has demonstrated, in spaceflight hardware, the concept of small, very light, yet capable spacecraft. Results from the Joint High Power Solid State Laser (JHPSSL) program have also increased the effectiveness and reduced the cost of an entry-level laser source. This paper identifies the impact of these improved system parameters on current mission applications.

  20. In-Vessel Retention Modeling Capabilities of SCDAP/RELAP5-3D©

    International Nuclear Information System (INIS)

    Knudson, D.L.; Rempe, J.L.

    2002-01-01

    Molten core materials may relocate to the lower head of a reactor vessel in the latter stages of a severe accident. Under such circumstances, in-vessel retention (IVR) of the molten materials is a vital step in mitigating potential severe accident consequences. Whether IVR occurs depends on the interactions of a number of complex processes, including heat transfer inside the accumulated molten pool, heat transfer from the molten pool to the reactor vessel (and to overlying fluids), and heat transfer from exterior vessel surfaces. SCDAP/RELAP5-3D© has been developed at the Idaho National Engineering and Environmental Laboratory to facilitate simulation of the processes affecting the potential for IVR, as well as processes involved in a wide variety of other reactor transients. In this paper, current capabilities of SCDAP/RELAP5-3D© relative to IVR modeling are described and results from typical applications are provided. In addition, anticipated developments to enhance IVR simulation with SCDAP/RELAP5-3D© are outlined. (authors)
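
    As a back-of-the-envelope illustration of the competing heat-transfer processes listed above (not a SCDAP/RELAP5-3D calculation), a steady series thermal-resistance estimate from the molten pool through the lower-head wall to external coolant might look as follows; all numbers are assumed for illustration.

      # Back-of-the-envelope series thermal-resistance sketch of heat removal from a
      # molten pool through the lower-head wall to external coolant. Illustrative
      # numbers only; a real analysis (e.g. with SCDAP/RELAP5-3D) would also track
      # crust formation, wall ablation and the critical heat flux limit.
      h_pool = 2000.0    # pool-to-wall heat transfer coefficient, W/m^2-K (assumed)
      h_ext = 5000.0     # external (flooded-cavity) coefficient, W/m^2-K (assumed)
      k_steel = 30.0     # vessel steel conductivity, W/m-K
      thickness = 0.15   # lower-head wall thickness, m
      t_pool = 2800.0    # molten pool temperature, K (assumed)
      t_coolant = 400.0  # external coolant temperature, K (assumed)

      # Series resistances per unit area: convection, conduction, convection.
      r_total = 1.0 / h_pool + thickness / k_steel + 1.0 / h_ext
      q_flux = (t_pool - t_coolant) / r_total
      t_wall_inner = t_pool - q_flux / h_pool
      t_wall_outer = t_coolant + q_flux / h_ext

      print(f"heat flux through the wall: {q_flux / 1e3:.0f} kW/m^2")
      print(f"inner / outer wall temperature: {t_wall_inner:.0f} K / {t_wall_outer:.0f} K")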

  1. Identifying extensions required by RUP (Rational Unified Process) to comply with CMM (Capability Maturity Model) levels 2 and 3

    OpenAIRE

    Manzoni, Lisandra Vielmo; Price, Roberto Tom

    2003-01-01

    This paper describes an assessment of the Rational Unified Process (RUP) based on the Capability Maturity Model (CMM). For each key practice (KP) identified in each key process area (KPA) of CMM levels 2 and 3, the Rational Unified Process was assessed to determine whether it satisfied the KP or not. For each KPA, the percentage of the key practices supported was calculated, and the results were tabulated. The report includes considerations about the coverage of each key process area, describ...

  2. Engineering model cryocooler test results

    International Nuclear Information System (INIS)

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1992-01-01

    This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold-end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads.

  3. Spatial Preference Modelling for equitable infrastructure provision: an application of Sen's Capability Approach

    Science.gov (United States)

    Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin

    2014-01-01

    To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, "the spatial distribution patterns of priority targeting for allocation" (SPT) and, second, "the effect of new distribution patterns after location-allocation" (ELA). The Moran's I index of the initial map and its comparison with six patterns for SPT as well as ELA consistently indicates that the SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSS to address spatial equity. This study thus proposes a new formal method for SDSS with specific attention on resource location-allocation to address spatial equity.
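
    For reference, the global Moran's I statistic that the authors extend for spatial equity can be computed as in the sketch below; the toy zone values and the binary contiguity weights are illustrative assumptions.

      # Illustrative computation of global Moran's I on a toy dataset; the indicator
      # values and the binary contiguity weights are made up for demonstration.
      import numpy as np

      def morans_i(x, w):
          # Global Moran's I for values x and spatial weight matrix w.
          x = np.asarray(x, dtype=float)
          w = np.asarray(w, dtype=float)
          n = x.size
          z = x - x.mean()
          return (n / w.sum()) * (z @ w @ z) / (z @ z)

      # Four zones along a line, 1-2-3-4, with binary contiguity weights.
      w = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
      mobility_opportunity = [0.2, 0.3, 0.8, 0.9]    # hypothetical non-income indicator
      print("Moran's I:", morans_i(mobility_opportunity, w))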

  4. Dynamic capabilities, Marketing Capability and Organizational Performance

    Directory of Open Access Journals (Sweden)

    Adriana Roseli Wünsch Takahashi

    2017-01-01

    The goal of the study is to investigate the influence of dynamic capabilities on organizational performance and the role of marketing capability as a mediator in this relationship, in the context of private HEIs in Brazil. As a research method, we carried out a survey with 316 HEIs, and data analysis was operationalized with the technique of structural equation modeling. The results indicate that dynamic capabilities influence organizational performance only when mediated by marketing capability. Marketing capability has an important role in the survival, growth and renewal of educational service offerings for private-sector HEIs, and consequently in organizational performance. It is also demonstrated that the mediated relationship is stronger for HEIs with up to 3,000 students, and that other organizational profile variables, such as the number of courses, legal constitution, type of institution and type of education, do not significantly alter the results.

  5. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

    Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used, or that could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  6. Building a Conceptual Model of Routines, Capabilities, and Absorptive Capacity Interplay

    Directory of Open Access Journals (Sweden)

    Ivan Stefanovic

    2014-05-01

    Researchers have often used constructs such as routines, operational capability, dynamic capability, absorptive capacity, etc., to explain various organizational phenomena, especially the competitive advantage of firms. As a consequence of their frequent use in different contexts, these constructs have become extremely broad and blurred, leaving a void in the strategic management literature. In this paper we attempt to bring a holistic perspective to these constructs by briefly reviewing the current state of the research and presenting a conceptual model that provides an explanation for the causal relationships between them. The final section of the paper sheds some light on this topic from the econophysics perspective. The authors hope that the findings in this paper may serve as a foundation for other research endeavours related to the topic of how firms achieve competitive advantage and thrive in their environments.

  7. Estimation of process capability indices from the results of limit gauge inspection of dimensional parameters in machining industry

    Science.gov (United States)

    Masterenko, Dmitry A.; Metel, Alexander S.

    2018-03-01

    The process capability indices Cp and Cpk are widely used in modern quality management as statistical measures of the ability of a process to produce output X within specification limits. The customer's requirement to ensure Cp ≥ 1.33 is often applied in contracts. Capability index estimates may be calculated from estimates of the mean µ and the variability 6σ, and for this, the quality characteristic should be measured in a sample of pieces. This requires, in turn, advanced measuring devices and well-qualified staff. On the other hand, quality inspection by attributes, performed with limit gauges (go/no-go), is much simpler and has a higher performance, but it does not give the numerical values of the quality characteristic. The described method allows estimating the mean and the variability of the process on the basis of the results of limit gauge inspection with a certain lower limit LCL and upper limit UCL, which separate the pieces into three groups: X below LCL, X between LCL and UCL, and X above UCL. The resulting estimates support statistical control of the manufacturing process, which is very important for improving the quality of articles in the machining industry with respect to their tolerances.
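
    A minimal sketch of the underlying idea, assuming a normally distributed characteristic and gauge limits that coincide with the specification limits: the observed fractions of pieces falling below LCL and above UCL are inverted through the normal CDF to recover µ and σ, from which Cp and Cpk follow. The counts and limits are illustrative.

      # Sketch: estimate mu, sigma, Cp and Cpk from go/no-go gauge counts, assuming a
      # normal characteristic and gauge limits equal to the specification limits.
      from scipy.stats import norm

      def capability_from_gauge_counts(n_below, n_within, n_above, lcl, ucl):
          # Both tails must contain at least one rejected piece for this inversion.
          n = n_below + n_within + n_above
          z_low = norm.ppf(n_below / n)          # (LCL - mu) / sigma
          z_high = norm.ppf(1.0 - n_above / n)   # (UCL - mu) / sigma
          sigma = (ucl - lcl) / (z_high - z_low)
          mu = lcl - z_low * sigma
          cp = (ucl - lcl) / (6.0 * sigma)
          cpk = min(ucl - mu, mu - lcl) / (3.0 * sigma)
          return mu, sigma, cp, cpk

      # Hypothetical inspection of 1000 pieces with a go/no-go limit gauge.
      mu, sigma, cp, cpk = capability_from_gauge_counts(
          n_below=8, n_within=985, n_above=7, lcl=9.95, ucl=10.05)
      print(f"mu={mu:.4f}  sigma={sigma:.5f}  Cp={cp:.2f}  Cpk={cpk:.2f}")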

  8. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    Science.gov (United States)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent accurately land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated

  9. From one to two – a possible model of organizational development and development of organizational capabilities

    OpenAIRE

    M. Somosi Veres

    2013-01-01

    The business management of most successful companies is a result of the coordinated operation of the processes, organizational structure, supporting systems and employees which make up the organizational capabilities of the company. Within the business processes, this includes development and continuous improvement of key internal rules and regulations, the division of spheres of power and responsibility, the requirements and the operation of fundamental checkpoints for organizational units, ...

  10. Gossiping Capabilities

    DEFF Research Database (Denmark)

    Mogensen, Martin; Frey, Davide; Guerraoui, Rachid

    Gossip-based protocols are now acknowledged as a sound basis to implement collaborative high-bandwidth content dissemination: content location is disseminated through gossip, the actual contents being subsequently pulled. In this paper, we present HEAP, a HEterogeneity-Aware gossip Protocol, where nodes dynamically adjust their contribution to gossip dissemination according to their capabilities. Using a continuous, itself gossip-based, approximation of relative capabilities, HEAP dynamically leverages the most capable nodes by increasing their fanouts (while decreasing those of less capable nodes by the same proportion), and it guards against nodes that declare a high capability in order to augment their perceived quality without contributing accordingly. We evaluate HEAP in the context of a video streaming application on a testbed of 236 PlanetLab nodes. Our results show that HEAP improves the quality of the streaming by 25% over a standard gossip protocol.
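
    The capability-proportional fanout idea can be sketched as follows; the capability values, base fanout and simple proportional rule are illustrative assumptions rather than HEAP's actual mechanism.

      # Toy sketch of capability-proportional fanout adjustment in a gossip round;
      # the proportional rule and capability values are illustrative, not HEAP itself.
      import random

      def adjusted_fanouts(capabilities, base_fanout):
          # Scale each node's fanout by its capability relative to the average so the
          # total number of gossip messages per round stays roughly constant.
          avg = sum(capabilities.values()) / len(capabilities)
          return {node: max(1, round(base_fanout * cap / avg))
                  for node, cap in capabilities.items()}

      def gossip_round(nodes, fanouts):
          # Each node pushes the rumor to `fanout` randomly chosen peers.
          messages = []
          for node in nodes:
              others = [n for n in nodes if n != node]
              peers = random.sample(others, min(fanouts[node], len(others)))
              messages.extend((node, peer) for peer in peers)
          return messages

      capabilities = {"a": 10.0, "b": 1.0, "c": 4.0, "d": 1.0}   # e.g. upload bandwidth
      fanouts = adjusted_fanouts(capabilities, base_fanout=3)
      print(fanouts)                                   # capable nodes get larger fanouts
      print(len(gossip_round(list(capabilities), fanouts)), "messages this round")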

  11. Hybrid modeling approach to improve the forecasting capability for the gaseous radionuclide in a nuclear site

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Hwang, Wontae; Kim, Eunhan; Han, Moonhee

    2012-01-01

    Highlights: ► This study aims to improve the reliability of air dispersion modeling. ► Tracer experiments assuming gaseous radionuclides were conducted at a nuclear site. ► The performance of a hybrid model combining ISC with ANFIS was investigated. ► The hybrid modeling approach shows better performance than a single ISC model. - Abstract: Predicted air concentrations of radioactive materials are important for an environmental impact assessment for public health. In this study, the performance of a hybrid model combining the Industrial Source Complex (ISC) model with an adaptive neuro-fuzzy inference system (ANFIS) for predicting tracer concentrations was investigated. Tracer dispersion experiments were performed to produce field data assuming an accidental release of radioactive material. ANFIS was trained so that the outputs of the ISC model approximate the measured data. Judging from the higher correlation coefficients between the measured and the calculated concentrations, the hybrid modeling approach could be an appropriate technique for improving the capability to predict air concentrations of radioactive materials.
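
    The hybrid idea, a physics-based dispersion estimate corrected by a data-driven model trained against field measurements, can be sketched as below. Because ANFIS is not part of the common Python libraries, a gradient-boosting regressor stands in for it, and the "physics" estimate is a toy placeholder rather than the ISC model; all data are synthetic.

      # Sketch of a hybrid dispersion model: a physics-based estimate is corrected by
      # a data-driven model trained on field measurements. A gradient-boosting
      # regressor stands in for ANFIS, and the "physics" estimate is a toy placeholder
      # rather than the ISC model; all data are synthetic.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 500
      wind_speed = rng.uniform(1.0, 10.0, n)
      distance = rng.uniform(100.0, 2000.0, n)
      physics_estimate = 1.0 / (wind_speed * distance)            # toy dispersion estimate
      measured = physics_estimate * rng.lognormal(0.3, 0.4, n)    # synthetic tracer data

      X = np.column_stack([physics_estimate, wind_speed, distance])
      X_tr, X_te, y_tr, y_te = train_test_split(X, measured, test_size=0.3, random_state=1)

      corrector = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
      print("physics-only correlation:", np.corrcoef(X_te[:, 0], y_te)[0, 1])
      print("hybrid correlation:     ", np.corrcoef(corrector.predict(X_te), y_te)[0, 1])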

  12. Expand the Modeling Capabilities of DOE's EnergyPlus Building Energy Simulation Program

    Energy Technology Data Exchange (ETDEWEB)

    Don Shirey

    2008-02-28

    EnergyPlus™ is a new generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making of zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. Version 1.0 of EnergyPlus was released in April 2001, followed by semiannual updated versions over the ensuing seven-year period. This report summarizes work performed by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC) to expand the modeling capabilities of EnergyPlus. The project tasks involved implementing, testing, and documenting the following new features or enhancement of existing features: (1) A model for packaged terminal heat pumps; (2) A model for gas engine-driven heat pumps with waste heat recovery; (3) Proper modeling of window screens; (4) Integrating and streamlining EnergyPlus air flow modeling capabilities; (5) Comfort-based controls for cooling and heating systems; and (6) An improved model for microturbine power generation with heat recovery. UCF/FSEC located existing mathematical models or generated new models for these features and incorporated them into EnergyPlus. The existing or new models were (re)written using Fortran 90/95 programming language and were integrated within EnergyPlus in accordance with the EnergyPlus Programming Standard and Module Developer's Guide. Each model/feature was thoroughly tested and identified errors were repaired. Upon completion of each model implementation, the existing EnergyPlus documentation (e.g., Input Output Reference and Engineering Document) was updated with information describing the new or enhanced feature. Reference data sets were generated for several of the features to aid program users in selecting proper

  13. Initiative-taking, Improvisational Capability and Business Model Innovation in Emerging Market

    DEFF Research Database (Denmark)

    Cao, Yangfeng

    Business model innovation plays a very important role in developing competitive advantage when multinational small and medium-sized enterprises (SMEs) from developed countries enter emerging markets, because of the large contextual distances or gaps between the emerging and developed economies. Much prior research has shown that foreign subsidiaries play an important role in shaping the overall strategy of the parent company. However, little is known about how a subsidiary specifically facilitates business model innovation (BMI) in emerging markets. Adopting a comparative case-study method, we examine how subsidiaries facilitate business model innovation in emerging markets. We find that high initiative-taking and strong improvisational capability can accelerate business model innovation. Our research contributes to the literature on international and strategic entrepreneurship.

  14. Entry into new markets: the development of the business model and dynamic capabilities

    Directory of Open Access Journals (Sweden)

    Victor Wolowski Kenski

    2017-12-01

    This work shows the path through which companies enter new markets or bring new propositions to established ones. It presents the market analysis process, the strategic decisions that determine the company's position in the market, and the changes in configuration required for this new course of action. It also examines the process of selecting the business model, the conditions for its definition and adoption, and the subsequent development of the resources and capabilities required to conquer the new market. The conditions necessary to remain in the market and maintain the company's position are also presented. These concepts are presented through a case study of a business group that takes part in different franchises.

  15. Including capabilities of local actors in regional economic development: Empirical results of local seaweed industries in Sulawesi

    Directory of Open Access Journals (Sweden)

    Mark T.J. Vredegoor

    2013-11-01

    Stimson et al. (2009) developed one of the most relevant and well-known models for regional economic development. This model covers the most important factors related to the economic development question, but it excludes the social components of development. The local community should be included in the development of a region. This paper introduces into the Stimson model "skills" and "knowledge" at the individual level for local actors, indicating capabilities at the individual level, and introduces "human coordination" for capabilities at the collective level. In our empirical research we looked at the Indonesian seaweed market with a specific focus on the region of Baubau. This region was chosen because it has seen hardly any economic development. Furthermore, this study focuses on the poorer community, who are trying to improve their situation through the cultivation of seaweed. Eighteen local informants were interviewed, in addition to informants from educational and governmental institutions in the cities of Jakarta, Bandung and Yogyakarta. The informants selected had a direct or indirect relationship with the region of Baubau. With the support of the empirical data from this region, we can confirm that it is worthwhile to include the local community in the model for regional economic development. The newly added variables, Skills and Knowledge at the individual level and Human Coordination at the collective level, were supported by the empirical material. This is an indication that including the new variables can give regional economic development an extra dimension. In this way, it becomes more explicit that "endogenous" means that the people, or variables closely related to them, should be included more explicitly in models trying to capture Regional Economic Development, or rephrased as Local Economic Development. Keywords: regional and endogenous development; fisheries and seaweed

  16. Predictive capabilities of a two-dimensional model in the ground water transport of radionuclides

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Beskid, N.J.; Marmer, G.J.

    1978-01-01

    The discharge of low-level radioactive waste into tailings ponds is a potential source of ground water contamination. The estimation of the radiological hazards related to the ground water transport of radionuclides from tailings retention systems depends on reasonably accurate estimates of the movement of both water and solute. A two-dimensional mathematical model having predictive capability for ground water flow and solute transport has been developed. The flow equation has been solved under steady-state conditions and the mass transport equation under transient conditions. The simultaneous solution of both equations is achieved through the finite element technique using isoparametric elements, based on the Galerkin formulation. However, in contrast to the flow equation solution, the weighting functions used in the solution of the mass transport equation have a non-symmetric form. The predictive capability of the model is demonstrated using an idealized case based on analyses of field data obtained from the sites of operating uranium mills. The pH of the solution, which regulates the variation of the distribution coefficient (K_d) in a particular site, appears to be the most important factor in the assessment of the rate of migration of the elements considered herein
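
    As a much simplified illustration of the governing transport physics (a 1D explicit finite-difference scheme, not the paper's two-dimensional Galerkin finite-element model), advection-dispersion with K_d-based retardation can be stepped as follows; all parameter values are assumed.

      # Explicit finite-difference sketch of 1D advection-dispersion with linear
      # sorption (retardation via K_d); a simplified illustration only, not the 2D
      # Galerkin finite-element model described above. All values are assumed.
      import numpy as np

      L, nx = 100.0, 201                       # domain length (m), grid points
      dx = L / (nx - 1)
      v, D = 0.5, 0.2                          # pore velocity (m/d), dispersion (m^2/d)
      rho_b, theta, Kd = 1600.0, 0.35, 0.001   # bulk density (kg/m^3), porosity, K_d (m^3/kg)
      R = 1.0 + rho_b * Kd / theta             # retardation factor

      dt = 0.4 * min(dx / v, dx**2 / (2.0 * D))  # conservative explicit time step
      C = np.zeros(nx)
      C[0] = 1.0                                 # constant-concentration inlet

      for _ in range(int(100.0 * R / dt)):       # advance for about 100*R days
          dCdx = (C[1:-1] - C[:-2]) / dx                     # upwind advection
          d2Cdx2 = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2  # dispersion
          C[1:-1] += dt / R * (D * d2Cdx2 - v * dCdx)
          C[-1] = C[-2]                          # zero-gradient outlet

      print("position of the C = 0.5 front:", dx * np.argmax(C < 0.5), "m")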

  17. Proposing a Qualitative Approach for Corporate Competitive Capability Modeling in High-Tech Business (Case study: Software Industry)

    Directory of Open Access Journals (Sweden)

    Mahmoud Saremi Saremi

    2010-09-01

    The evolution of global business trends for ICT-based products in recent decades shows the intensive activity of pioneering developing countries to gain a powerful competitive position in the global software industry. In this research, given the importance of competition for top managers of Iranian software companies, a conceptual model has been developed for the concept of corporate competitive capability. First, after describing the research problem, we present a comparative review of recent theories of the firm and of competition that have been applied by different researchers in the field of high-tech and knowledge-intensive organizations. Afterwards, following a detailed review of the literature and previous research papers, an initial research framework and an applied research method are proposed. The main and final section of the paper describes the results of the research at the different steps of the qualitative modeling process. The agreed concepts related to corporate competitive capability, the elicited and analyzed expert cause maps, the elicited collective causal maps, and the final proposed model for the software industry are the modeling results of this paper.

  18. Comparison of Fuzzy AHP Buckley and ANP Models in Forestry Capability Evaluation (Case Study: Behbahan City Fringe)

    Directory of Open Access Journals (Sweden)

    V. Rahimi

    2015-12-01

    The area of the Zagros forests is continuously in danger of destruction. Therefore, the remaining forests should be carefully managed based on ecological capability evaluation. In fact, land evaluation includes prediction or assessment of land quality for a special land use with regard to production, vulnerability and management requirements. In this research, we studied the ecological capability of the Behbahan city fringe for forestry land use. After the basic studies were completed and the thematic maps, such as soil criteria, climate, physiography, vegetation and bedrock, were prepared, the fuzzy multi-criteria decision-making methods of Buckley's fuzzy AHP and ANP were used to standardize the criteria and determine their weights. Finally, the ecological capability model of the region was generated to prioritize forestry land use and prepare the final evaluation map in seven classes using the weighted linear combination (WLC) model. The results showed that in the ANP method, 55.58% of the area is suitable for forestry land use, which is more consistent with reality, while in the fuzzy AHP method, 95.23% of the area was found suitable. Finally, it was concluded that the ANP method shows more flexibility and ability to determine suitable areas for forestry land use in the study area.
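
    The final overlay step, a weighted linear combination of standardized criterion layers sliced into seven capability classes, can be sketched as follows; the layer values, weights and class breaks are illustrative assumptions, not the study's maps or AHP/ANP weights.

      # Sketch of the weighted linear combination (WLC) overlay step: standardized
      # criterion layers are combined with criterion weights (e.g. from fuzzy AHP or
      # ANP) and sliced into seven capability classes. All numbers are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      shape = (100, 100)                              # toy raster grid
      layers = {                                      # criterion layers scaled to [0, 1]
          "soil": rng.random(shape),
          "slope": rng.random(shape),
          "vegetation": rng.random(shape),
          "climate": rng.random(shape),
      }
      weights = {"soil": 0.35, "slope": 0.30, "vegetation": 0.20, "climate": 0.15}

      suitability = sum(weights[name] * layer for name, layer in layers.items())
      classes = np.digitize(suitability, bins=np.linspace(0.0, 1.0, 8)[1:-1]) + 1

      print("share of area in classes 4-7: %.1f%%" % (100.0 * (classes >= 4).mean()))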

  19. Full optical model of micro-endoscope with optical coherence microscopy, multiphoton microscopy and visible capabilities

    Science.gov (United States)

    Vega, David; Kiekens, Kelli C.; Syson, Nikolas C.; Romano, Gabriella; Baker, Tressa; Barton, Jennifer K.

    2018-02-01

    While Optical Coherence Microscopy (OCM), Multiphoton Microscopy (MPM), and narrowband imaging are powerful imaging techniques that can be used to detect cancer, each imaging technique has limitations when used by itself. Combining them into an endoscope to work in synergy can help achieve high sensitivity and specificity for diagnosis at the point of care. Such complex endoscopes have an elevated risk of failure, and performing proper modelling ensures functionality and minimizes risk. We present full 2D and 3D models of a multimodality optical micro-endoscope, called a salpingoscope, designed to provide real-time detection of carcinomas. The models evaluate the illumination and light collection capabilities of the various modalities. The design features two optical paths with different numerical apertures (NA) through a single lens system with a scanning optical fiber. The dual path is achieved using dichroic coatings embedded in a triplet. A high-NA optical path is designed to perform OCM and MPM, while a low-NA optical path is designed for the visible spectrum, to navigate the endoscope to areas of interest and perform narrowband imaging. Different tests, such as the reflectance profile of homogeneous epithelial tissue, were performed to adjust the models properly. Light collection models for the different modalities were created and tested for efficiency. While it is challenging to evaluate the efficiency of multimodality endoscopes, the models ensure that the system is designed for the expected light collection levels and provides a detectable signal for the intended imaging.

  20. A static VAR compensator model for improved ride-through capability of wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Akhmatov, V.; Soebrink, K.

    2004-12-01

    Dynamic reactive compensation is associated with reactive power and voltage control of induction-generator-based wind turbines. With regard to wind power, application areas of dynamic reactive compensation include improvement of power quality and voltage stability, control of the reactive power exchange between the wind farm and the power grid at the wind farm connection point, and improvement of the ride-through capability of the wind farm. This article presents a model of a Static VAR Compensator (SVC), a kind of dynamic reactive compensation device, with dynamic generic control. The term 'generic' implies that the model is general and must cover a variety of SVC units and their specific controls from different manufacturers. The SVC model with dynamic generic control is implemented by Eltra in the simulation tool PowerFactory and validated against the SVC model in the tool PSCAD/EMTDC. Implementation in the tool PowerFactory makes it possible to apply the SVC model with dynamic generic control in investigations of power system stability with regard to the establishment of large wind farms without restrictions on the model size of the power grid. (Author)

  1. Functional capabilities of the breadboard model of SIDRA satellite-borne instrument

    International Nuclear Information System (INIS)

    Dudnik, O.V.; Kurbatov, E.V.; Titov, K.G.; Prieto, M.; Sanchez, S.; Sylwester, J.; Gburek, S.; Podgorski, P.

    2013-01-01

    This paper presents the structure, principles of operation and functional capabilities of the breadboard model of the SIDRA compact satellite-borne instrument. SIDRA is intended for monitoring fluxes of high-energy charged particles under outer-space conditions. We present the reasons for developing a particle spectrometer and we list the main objectives to be achieved with the help of this instrument. The paper describes the major specifications of the analog and digital signal processing units of the breadboard model. A specially designed and developed data processing module based on the Actel ProASIC3E A3PE3000 FPGA is presented and compared with the all-in-one digital signal processing board based on the Xilinx Spartan-3 XC3S1500 FPGA.

  2. Programming with models: modularity and abstraction provide powerful capabilities for systems biology.

    Science.gov (United States)

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2009-03-06

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enables libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.
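
    The modularity and abstraction argument can be made concrete with a small sketch in which a generic mass-action binding module is instantiated twice and the instances are composed into one model; the module, species names and rate constants are illustrative and do not reproduce the authors' infrastructure.

      # Sketch of modular model composition: a generic, reusable module (reversible
      # mass-action binding) is instantiated twice with different species, and the
      # instances are combined into one ODE right-hand side. Illustrative only; it
      # does not reproduce the authors' programmable-model infrastructure.
      def binding_module(a, b, complex_, kon, koff):
          # Generic reversible binding A + B <-> AB, returned as a rate contributor.
          def contribute(conc, dcdt):
              flux = kon * conc[a] * conc[b] - koff * conc[complex_]
              dcdt[a] -= flux
              dcdt[b] -= flux
              dcdt[complex_] += flux
          return contribute

      # The same abstract module instantiated in two different contexts.
      modules = [
          binding_module("R", "L", "RL", kon=1.0, koff=0.1),      # receptor + ligand
          binding_module("RL", "G", "RLG", kon=0.5, koff=0.05),   # complex + G protein
      ]

      def rhs(conc):
          dcdt = {s: 0.0 for s in conc}
          for m in modules:
              m(conc, dcdt)
          return dcdt

      state = {"R": 1.0, "L": 0.5, "RL": 0.0, "G": 1.0, "RLG": 0.0}
      for _ in range(1000):                            # simple forward-Euler integration
          d = rhs(state)
          state = {s: state[s] + 0.01 * d[s] for s in state}
      print({s: round(v, 3) for s, v in state.items()})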

  3. The Aviation System Analysis Capability Air Carrier Cost-Benefit Model

    Science.gov (United States)

    Gaier, Eric M.; Edlich, Alexander; Santmire, Tara S.; Wingrove, Earl R.., III

    1999-01-01

    To meet its objective of assisting the U.S. aviation industry with the technological challenges of the future, NASA must identify research areas that have the greatest potential for improving the operation of the air transportation system. Therefore, NASA is developing the ability to evaluate the potential impact of various advanced technologies. By thoroughly understanding the economic impact of advanced aviation technologies and by evaluating how the new technologies will be used in the integrated aviation system, NASA aims to balance its aeronautical research program and help speed the introduction of high-leverage technologies. To meet these objectives, NASA is building the Aviation System Analysis Capability (ASAC). NASA envisions ASAC primarily as a process for understanding and evaluating the impact of advanced aviation technologies on the U.S. economy. ASAC consists of a diverse collection of models and databases used by analysts and other individuals from the public and private sectors brought together to work on issues of common interest to organizations in the aviation community. ASAC also will be a resource available to the aviation community to analyze; inform; and assist scientists, engineers, analysts, and program managers in their daily work. The ASAC differs from previous NASA modeling efforts in that the economic behavior of buyers and sellers in the air transportation and aviation industries is central to its conception. Commercial air carriers, in particular, are an important stakeholder in this community. Therefore, to fully evaluate the implications of advanced aviation technologies, ASAC requires a flexible financial analysis tool that credibly links the technology of flight with the financial performance of commercial air carriers. By linking technical and financial information, NASA ensures that its technology programs will continue to benefit the user community. In addition, the analysis tool must be capable of being incorporated into the

  4. Classification Method to Define Synchronization Capability Limits of Line-Start Permanent-Magnet Motor Using Mesh-Based Magnetic Equivalent Circuit Computation Results

    Directory of Open Access Journals (Sweden)

    Bart Wymeersch

    2018-04-01

    Line-start permanent magnet synchronous motors (LS-PMSMs) are energy-efficient synchronous motors that can start asynchronously due to a squirrel cage in the rotor. The drawback, however, with this motor type is the chance of failure to synchronize after start-up. To identify the problem, and the stable operation limits, the synchronization at various parameter combinations is investigated. Accurate knowledge of the operation limits, to assure synchronization with the utility grid, requires an accurate classification of parameter combinations. As many simulations have to be executed for this, a rapid evaluation method is indispensable. Several modeling methods exist to simulate the dynamic behavior in the time domain, and this paper discusses the different modeling methods. In order to include spatial factors and magnetic nonlinearities on the one hand, and to restrict the computation time on the other hand, a magnetic equivalent circuit (MEC) modeling method is developed. In order to accelerate numerical convergence, a mesh-based analysis method is applied. The novelty in this paper is the implementation of a support vector machine (SVM) to classify the results of simulations at various parameter combinations into successful or unsuccessful synchronization, in order to define the synchronization capability limits. It is explained how these techniques can benefit the simulation time and the evaluation process. The results of the MEC modeling correspond to those obtained with finite element analysis (FEA), despite the reduced computation time. In addition, simulation results obtained with MEC modeling are experimentally validated.
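
    The classification step can be sketched with a support vector machine trained on simulated parameter combinations; the two swept parameters, the toy rule standing in for the MEC time-domain simulations, and the kernel settings are illustrative assumptions.

      # Sketch of classifying simulated parameter combinations into successful or
      # unsuccessful synchronization with an SVM; the toy rule standing in for the
      # MEC time-domain simulations is an illustrative assumption.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      load_torque = rng.uniform(0.0, 1.5, 400)     # p.u., swept parameter 1
      inertia = rng.uniform(0.5, 5.0, 400)         # p.u., swept parameter 2
      X = np.column_stack([load_torque, inertia])

      # Placeholder outcome: synchronization fails at high load torque combined with
      # high inertia (illustrative rule only).
      y = (load_torque * np.sqrt(inertia) < 1.2).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))

      # The fitted decision boundary approximates the synchronization capability limit.
      print("synchronizes at T=0.5 p.u., J=2.0 p.u.?", bool(clf.predict([[0.5, 2.0]])[0]))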

  5. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    Science.gov (United States)

    Day, B. H.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R. M.; Malhotra, S.; Sadaqathullah, S.

    2015-12-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. Many of the recent enhancements to LMMP have been specifically in response to the requirements of NASA's proposed Resource Prospector lunar rover, and as such, provide an excellent example of the application of LMMP to mission planning. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. On March 31, 2015, the LMMP team released Vesta Trek (http://vestatrek.jpl.nasa.gov), a web-based application applying LMMP technology to visualizations of the asteroid Vesta. Data gathered from multiple instruments aboard Dawn have been compiled into Vesta Trek's user-friendly set of tools, enabling users to study the asteroid's features. With an initial release on July 1, 2015, Mars Trek replicates the functionality of Vesta Trek for the surface of Mars. While the entire surface of Mars is covered, higher levels of resolution and greater numbers of data products are provided for special areas of interest. Early releases focus on past, current, and future robotic sites of operation. Future releases will add many new data products and analysis tools as Mars Trek has been selected for use in site selection for the Mars 2020 rover and in identifying potential human landing sites on Mars. Other destinations will follow soon. The user community is invited to provide suggestions and requests as the development team continues to expand the capabilities of LMMP

  6. Induction generator model in phase coordinates for fault ride-through capability studies of wind turbines

    DEFF Research Database (Denmark)

    Fajardo, L.A.; Iov, F.; Medina, R.J.A.

    2007-01-01

    A phase-coordinates induction generator model with time-varying electrical parameters, as influenced by magnetic saturation and rotor deep-bar effects, is presented in this paper. The model exhibits a per-phase formulation and uses standard data-sheet values for characterization of the electrical parameters. Fault ride-through studies are conducted in a representative-sized system, and the results show the aptness of the proposed model over the other two models. This approach is also constructive to support grid code requirements.

  7. Overview of the development of a biosphere modelling capability for UK DoE (HMIP)

    International Nuclear Information System (INIS)

    Nancarrow, D.J.; Ashton, J.; Little, R.H.

    1990-01-01

    A programme of research has been funded, since 1982, by the United Kingdom Department of the Environment (Her Majesty's Inspectorate of Pollution, HMIP), to develop a procedure for post-closure radiological assessment of underground disposal facilities for low and intermediate level radioactive wastes. It is conventional to regard the disposal system as comprising the engineered barriers of the repository, the geological setting which provides natural barriers to migration, and the surface environment or biosphere. The requirement of a biosphere submodel, therefore, is to provide estimates, for given radionuclide inputs, of the dose or probability distribution function of dose to a maximally exposed individual as a function of time. This paper describes the development of the capability for biosphere modelling for HMIP in the context of the development of other assessment procedures. 11 refs., 3 figs., 2 tabs

  8. New Modelling Capabilities in Commercial Software for High-Gain Antennas

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Lumholt, Michael; Meincke, Peter

    2012-01-01

    characterization of the reflectarray element, an initial phase-only synthesis, followed by a full optimization procedure taking into account the near field from the feed and the finite extent of the array. Another interesting new modelling capability is made available through the DIATOOL software, which is a new type of EM software tool aimed at extending the ways engineers can use antenna measurements in the antenna design process. The tool allows reconstruction of currents and near fields on a 3D surface conformal to the antenna, by using the measured antenna field as input. The currents on the antenna surface can provide valuable information about the antenna performance, or undesired contributions, e.g., currents on a cable, can be artificially removed. Finally, the CHAMP software will be extended to cover reflector shaping and more complex materials, which, combined with a much faster execution speed...

  9. Analysis of environmental contamination resulting from catastrophic incidents: part 2. Building laboratory capability by selecting and developing analytical methodologies.

    Science.gov (United States)

    Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba

    2014-11-01

    Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity

  10. Research Capabilities Directed to all Electric Engineering Teachers, from an Alternative Energy Model

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Ordóñez Navea

    2017-08-01

    The purpose of this work was to consider research capabilities directed at electrical engineering teachers, based on an alternative energy model, for the explanation of semiconductors in the National Training Program in Electricity. Some authors, such as Vidal (2016), Atencio (2014) and Camilo (2012), point to technological applications of semiconductor electrical devices. Accordingly, a diagnostic phase is presented, based on descriptive field research, addressing: (a) how to identify the need for alternative energies, and (b) the research competences in alternative energies of researchers, starting from a solar cell model, in order to boost and innovate academic praxis and technological ingenuity. A survey was applied to a group of 15 teachers in the National Training Program in Electricity to diagnose deficiencies in the area of alternative energy research. Data analysis was carried out through descriptive statistics. Finally, the conclusions present the need to generate strategies that stimulate and propose the exploration of alternative energies, in order to develop research competences directed at electrical engineering teachers, based on an alternative energy model, and to boost technological research in the field of renewable energies.

  11. The Global Modeling Test Bed - Building a New National Capability for Advancing Operational Global Modeling in the United States.

    Science.gov (United States)

    Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.

    2017-12-01

    NOAA develops, operates, and maintains an operational global modeling capability for weather, subseasonal and seasonal prediction for the protection of life and property and fostering of the US economy. In order to substantially improve the overall performance and accelerate advancements of the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to these advancements, primarily through the development of the physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open, evidence-based governance process for managing the development and evolution of the CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-to-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and the test platform. Additionally, an overview of potential new opportunities for how physics developers can engage in the process, from implementing code for CCPP/IPD compliance to testing their development within an operational-like software environment, will be presented. In addition, insight will be given as to how development gets elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.
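
    The interoperability idea, interchangeable physics schemes called through one driver interface and assembled into suites, can be illustrated schematically; the interface and scheme names below are hypothetical and do not reproduce the actual CCPP or IPD APIs.

      # Schematic sketch of a common physics-driver interface with interchangeable,
      # suite-configurable schemes; the names are hypothetical, not the CCPP/IPD APIs.
      from typing import Dict, List, Protocol

      class PhysicsScheme(Protocol):
          def init(self, config: Dict) -> None: ...
          def run(self, state: Dict[str, float], dt: float) -> None: ...

      class SimpleBoundaryLayer:
          # Toy scheme: relaxes surface temperature toward a reference value.
          def init(self, config):
              self.t_ref = config.get("t_ref", 288.0)
              self.tau = config.get("tau", 3600.0)
          def run(self, state, dt):
              state["t_sfc"] += dt / self.tau * (self.t_ref - state["t_sfc"])

      class SimpleRadiation:
          # Toy scheme: applies a constant cooling rate.
          def init(self, config):
              self.cooling = config.get("cooling_k_per_s", 2e-5)
          def run(self, state, dt):
              state["t_sfc"] -= self.cooling * dt

      def run_suite(suite: List[PhysicsScheme], state: Dict[str, float], dt: float, steps: int):
          for _ in range(steps):
              for scheme in suite:      # the driver calls every scheme through one interface
                  scheme.run(state, dt)

      suite = [SimpleBoundaryLayer(), SimpleRadiation()]
      for scheme, cfg in zip(suite, [{"t_ref": 290.0}, {}]):
          scheme.init(cfg)
      state = {"t_sfc": 285.0}
      run_suite(suite, state, dt=600.0, steps=144)      # one day with 10-minute steps
      print(state)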

  12. Results of the naive quark model

    International Nuclear Information System (INIS)

    Gignoux, C.

    1987-10-01

    The hypotheses and limits of the naive quark model are recalled and results on nucleon-nucleon scattering and possible multiquark states are presented. Results show that with this model, ropers do not come. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not found in the model. The lack of mesons shows up in the absence of a far reaching force. However, the model does have strengths. It is free from spuriousness of center of mass, and allows a democratic handling of flavor. It has few parameters, and its predictions are very good [fr

  13. National Research Council Dialogue to Assess Progress on NASA's Advanced Modeling, Simulation and Analysis Capability and Systems Engineering Capability Roadmap Development

    Science.gov (United States)

    Aikins, Jan

    2005-01-01

    Contents include the following: General Background and Introduction of Capability Roadmaps. Agency Objective. Strategic Planning Transformation. Advanced Planning Organizational Roles. Public Involvement in Strategic Planning. Strategic Roadmaps and Schedule. Capability Roadmaps and Schedule. Purpose of NRC Review. Capability Roadmap Development (Progress to Date).

  14. Semantic Model of Variability and Capabilities of IoT Applications for Embedded Software Ecosystems

    DEFF Research Database (Denmark)

    Tomlein, Matus; Grønbæk, Kaj

    2016-01-01

    reasoning to resolve context requirements. We present the implications on the architecture of the ecosystem and the concepts defined in the model. Finally, we discuss the evaluation of the model and its benefits and liabilities. Although the approach results in more complex descriptions of applications, we...

  15. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  16. Multi-phase model development to assess RCIC system capabilities under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kirkland, Karen Vierow [Texas A & M Univ., College Station, TX (United States); Ross, Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beeny, Bradley [Texas A & M Univ., College Station, TX (United States); Luthman, Nicholas [Texas A& M Engineering Experiment Station, College Station, TX (United States); Strater, Zachary [Texas A & M Univ., College Station, TX (United States)

    2017-12-23

    The Reactor Core Isolation Cooling (RCIC) System is a safety-related system that provides makeup water for core cooling of some Boiling Water Reactors (BWRs) with a Mark I containment. The RCIC System consists of a steam-driven Terry turbine that powers a centrifugal, multi-stage pump for providing water to the reactor pressure vessel. The Fukushima Dai-ichi accidents demonstrated that the RCIC System can play an important role under accident conditions in removing core decay heat. The unexpectedly sustained, good performance of the RCIC System in the Fukushima reactor demonstrates, firstly, that its capabilities are not well understood, and secondly, that the system has high potential for extended core cooling in accident scenarios. Better understanding and analysis tools would allow for more options to cope with a severe accident situation and to reduce the consequences. The objectives of this project were to develop physics-based models of the RCIC System, incorporate them into a multi-phase code and validate the models. This Final Technical Report details the progress throughout the project duration and the accomplishments.

  17. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, Luis E., E-mail: luisen.herranz@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Garcia, Monica, E-mail: monica.gmartin@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Morandi, Sonia, E-mail: sonia.morandi@rse-web.it [Nuclear and Industrial Plant Safety Team, Power Generation System Department, RSE, via Rubattino 54, 20134 Milano (Italy)

    2013-12-15

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have

  18. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Garcia, Monica; Morandi, Sonia

    2013-01-01

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that

  19. Capability Paternalism

    NARCIS (Netherlands)

    Claassen, R.J.G.|info:eu-repo/dai/nl/269266224

    A capability approach prescribes paternalist government actions to the extent that it requires the promotion of specific functionings, instead of the corresponding capabilities. Capability theorists have argued that their theories do not have much of these paternalist implications, since promoting

  20. A Process and Environment Aware Sierra/SolidMechanics Cohesive Zone Modeling Capability for Polymer/Solid Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Reedy, E. D. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chambers, Robert S. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Hughes, Lindsey Gloe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Kropka, Jamie Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stavig, Mark E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stevens, Mark J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    The performance and reliability of many mechanical and electrical components depend on the integrity of polymer-to-solid interfaces. Such interfaces are found in adhesively bonded joints, encapsulated or underfilled electronic modules, protective coatings, and laminates. The work described herein was aimed at improving Sandia's finite element-based capability to predict interfacial crack growth by 1) using a high fidelity nonlinear viscoelastic material model for the adhesive in fracture simulations, and 2) developing and implementing a novel cohesive zone fracture model that generates a mode-mixity dependent toughness as a natural consequence of its formulation (i.e., generates the observed increase in interfacial toughness with increasing crack-tip interfacial shear). Furthermore, molecular dynamics simulations were used to study fundamental material/interfacial physics so as to develop a fuller understanding of the connection between molecular structure and failure. Also reported are test results that quantify how joint strength and interfacial toughness vary with temperature.

  1. Results from the Operational Testing of the General Electric Smart Grid Capable Electric Vehicle Supply Equipment (EVSE)

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, Richard Barney [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scoffield, Don [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bennett, Brion [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-12-01

    The Idaho National Laboratory conducted testing and analysis of the General Electric (GE) smart grid capable electric vehicle supply equipment (EVSE), which was a deliverable from GE for the U.S. Department of Energy FOA-554. The Idaho National Laboratory has extensive knowledge and experience in testing advanced conductive and wireless charging systems through INL’s support of the U.S. Department of Energy’s Advanced Vehicle Testing Activity. This document details the findings from the EVSE operational testing conducted at the Idaho National Laboratory on the GE smart grid capable EVSE. The testing conducted on the EVSE included energy efficiency testing, SAE J1772 functionality testing, abnormal conditions testing, and charging of a plug-in vehicle.

  2. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
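
    As an illustration of the kind of workflow the article advocates (predicted probabilities and marginal effects rather than raw coefficients), the following is a minimal sketch using Python's statsmodels on synthetic data; the variable names and the synthetic dataset are assumptions for demonstration only and are not taken from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic example: a 3-category choice (e.g., entry mode) driven by one covariate.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Latent utilities for three alternatives; the chosen category is the argmax.
u = np.column_stack([0.0 * x, 0.8 * x, -0.5 * x]) + rng.gumbel(size=(n, 3))
y = u.argmax(axis=1)

X = sm.add_constant(pd.DataFrame({"x": x}))
res = sm.MNLogit(y, X).fit(disp=False)

# Predicted probabilities over a grid of the covariate -- the quantity the
# article recommends plotting instead of interpreting raw coefficients.
grid = sm.add_constant(pd.DataFrame({"x": np.linspace(-2, 2, 5)}), has_constant="add")
print(res.predict(grid))            # one column of probabilities per outcome category

# Average marginal effects of x on each outcome probability.
print(res.get_margeff().summary())
```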

  3. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria); Engel, Dominik [Salzburg Univ. (Austria); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2015-02-17

    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Therefore, heterogeneous systems have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed for achieving that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy and in addition, certain statutory and operational requirements regarding privacy aware data processing and storage have to be met. An increase of privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and – if feasible – an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
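
    A highly simplified numeric sketch of the kind of trade-off search described above is given below; the impact functions, the weights, and the single scalar "privacy level" are hypothetical placeholders, not the paper's actual model, which maps impacts from formally described use cases.

```python
import numpy as np

def privacy_impact(level: np.ndarray) -> np.ndarray:
    """Hypothetical: privacy protection grows with the chosen privacy level."""
    return np.sqrt(level)

def operational_impact(level: np.ndarray) -> np.ndarray:
    """Hypothetical: operational capability degrades as privacy constraints tighten."""
    return 1.0 - level ** 2

def balance(weight_privacy: float = 0.5, n_grid: int = 1001) -> float:
    """Grid-search the privacy level that maximizes the weighted combined impact."""
    level = np.linspace(0.0, 1.0, n_grid)
    score = (weight_privacy * privacy_impact(level)
             + (1.0 - weight_privacy) * operational_impact(level))
    return float(level[np.argmax(score)])

print(balance(weight_privacy=0.5))  # optimum of the toy objective
```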

  4. Immune Modulating Capability of Two Exopolysaccharide-Producing Bifidobacterium Strains in a Wistar Rat Model

    Directory of Open Access Journals (Sweden)

    Nuria Salazar

    2014-01-01

    Full Text Available Fermented dairy products are the usual carriers for the delivery of probiotics to humans, Bifidobacterium and Lactobacillus being the most frequently used bacteria. In this work, the strains Bifidobacterium animalis subsp. lactis IPLA R1 and Bifidobacterium longum IPLA E44 were tested for their capability to modulate immune response and the insulin-dependent glucose homeostasis using male Wistar rats fed with a standard diet. Three intervention groups were fed daily for 24 days with 10% skimmed milk, or with 10⁹ cfu of the corresponding strain suspended in the same vehicle. A significant increase of the suppressor-regulatory TGF-β cytokine occurred with both strains in comparison with a control (no intervention) group of rats; the highest levels were reached in rats fed IPLA R1. This strain presented an immune protective profile, as it was able to reduce the production of the proinflammatory IL-6. Moreover, phosphorylated Akt kinase decreased in the gastrocnemius muscle of rats fed the strain IPLA R1, without affecting the glucose, insulin, and HOMA index in blood, or levels of Glut-4 located in the membrane of muscle and adipose tissue cells. Therefore, the strain B. animalis subsp. lactis IPLA R1 is a probiotic candidate to be tested in mild grade inflammation animal models.

  5. Petroleum system modeling capabilities for use in oil and gas resource assessments

    Science.gov (United States)

    Higley, Debra K.; Lewan, Michael; Roberts, Laura N.R.; Henry, Mitchell E.

    2006-01-01

    Summary: Petroleum resource assessments are among the most highly visible and frequently cited scientific products of the U.S. Geological Survey. The assessments integrate diverse and extensive information on the geologic, geochemical, and petroleum production histories of provinces and regions of the United States and the World. Petroleum systems modeling incorporates these geoscience data in ways that strengthen the assessment process and results are presented visually and numerically. The purpose of this report is to outline the requirements, advantages, and limitations of one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) petroleum systems modeling that can be applied to the assessment of oil and gas resources. Primary focus is on the application of the Integrated Exploration Systems (IES) PetroMod software because of familiarity with that program as well as the emphasis by the USGS Energy Program on standardizing to one modeling application. The Western Canada Sedimentary Basin (WCSB) is used to demonstrate the use of the PetroMod software. Petroleum systems modeling quantitatively extends the 'total petroleum systems' (TPS) concept (Magoon and Dow, 1994; Magoon and Schmoker, 2000) that is employed in USGS resource assessments. Modeling allows integration of state-of-the-art analysis techniques, and provides the means to test and refine understanding of oil and gas generation, migration, and accumulation. Results of modeling are presented visually, numerically, and statistically, which enhances interpretation of the processes that affect TPSs through time. Modeling also provides a framework for the input and processing of many kinds of data essential in resource assessment, including (1) petroleum system elements such as reservoir, seal, and source rock intervals; (2) timing of depositional, hiatus, and erosional events and their influences on petroleum systems; (3) incorporation of vertical and lateral distribution and lithologies of

  6. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    Science.gov (United States)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    containing the same object in different LoD may be combined and integrated. In this study, GIS tools used for 3D modeling were examined. In this context, the availability of GIS tools for obtaining the different LoDs of the CityGML standard was evaluated. Additionally, a 3D GIS application covering a small part of the city of Istanbul was implemented to communicate thematic information, rather than photorealistic visualization, by using a 3D model. An abstract model was created using the modeling tools of a commercial GIS software package, and the results of the implementation are also presented in the study.

  7. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  8. Effect of radar rainfall time resolution on the predictive capability of a distributed hydrologic model

    Science.gov (United States)

    Atencia, A.; Llasat, M. C.; Garrote, L.; Mediero, L.

    2010-10-01

    The performance of distributed hydrological models depends on the resolution, both spatial and temporal, of the rainfall surface data introduced. The estimation of quantitative precipitation from meteorological radar or satellite can improve hydrological model results, thanks to an indirect estimation at higher spatial and temporal resolution. In this work, composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km² spatial resolution, provided by the Catalan Meteorological Service, is used to feed the RIBS distributed hydrological model. A Window Probability Matching Method (gage-adjustment method) is applied to four cases of heavy rainfall to improve the observed rainfall sub-estimation in both convective and stratiform Z/R relations used over Catalonia. Once the rainfall field has been adequately obtained, an advection correction, based on cross-correlation between two consecutive images, was introduced to get several time resolutions from 1 min to 30 min. Each different resolution is treated as an independent event, resulting in a probable range of input rainfall data. This ensemble of rainfall data is used, together with other sources of uncertainty, such as the initial basin state or the accuracy of discharge measurements, to calibrate the RIBS model using probabilistic methodology. A sensitivity analysis of time resolutions was implemented by comparing the various results with real values from stream-flow measurement stations.
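
    The advection correction mentioned above rests on estimating a displacement between two consecutive radar images from their cross-correlation. The following is a minimal illustrative sketch of that single step using FFT-based cross-correlation in Python; the synthetic fields and the assumption of one uniform displacement are simplifications and do not reproduce the processing chain actually used in the study.

```python
import numpy as np

def displacement_by_cross_correlation(field_t0: np.ndarray, field_t1: np.ndarray):
    """Estimate the (row, col) shift that best aligns field_t0 with field_t1."""
    f0 = np.fft.fft2(field_t0 - field_t0.mean())
    f1 = np.fft.fft2(field_t1 - field_t1.mean())
    xcorr = np.fft.ifft2(f0.conj() * f1).real          # circular cross-correlation
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map peak indices to signed shifts (wrap-around convention).
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
    return tuple(int(s) for s in shifts)

# Toy example: a rainfall cell advected by (3, 5) pixels between two scans.
rng = np.random.default_rng(1)
base = rng.random((64, 64))
scan0 = np.zeros((64, 64))
scan0[10:20, 10:20] = base[10:20, 10:20] + 5.0
scan1 = np.roll(scan0, shift=(3, 5), axis=(0, 1))
print(displacement_by_cross_correlation(scan0, scan1))  # expected (3, 5)
```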

  9. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it

  10. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should
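
    As a rough illustration of the experimental design described in these two records (multiple data-driven techniques applied to resampled realizations of a dataset and compared by average error), below is a minimal Python sketch using scikit-learn; the synthetic data, the subset of techniques, and the error measure are assumptions for demonstration and do not reproduce the study's actual setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(600, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=600)  # toy signal

models = {
    "MLR": LinearRegression(),
    "ANN": MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "K-nn": KNeighborsRegressor(n_neighbors=5),
    "SVM": SVR(C=10.0),
}

# Twelve random realizations (here: random splits) per technique, echoing the design.
for name, model in models.items():
    errors = []
    for seed in range(12):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        model.fit(X_tr, y_tr)
        errors.append(mean_squared_error(y_te, model.predict(X_te)))
    print(f"{name}: mean test MSE = {np.mean(errors):.3f}")
```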

  11. The EURAD model: Design and first results

    International Nuclear Information System (INIS)

    1989-01-01

    The contributions are abridged versions of lectures delivered on the occasion of the presentation meeting of the EURAD project on the 20th and 21st of February 1989 in Cologne. EURAD stands for European Acid Deposition Model. The project pursues one of the possible and necessary ways of seeking scientific answers to the questions raised by the anthropogenic modification of the atmosphere. One of the objectives is to develop a realistic numerical model of the long-range transport of pollutants in the troposphere over Europe and to use this model to investigate pollutant distribution, but also to support its experimental study. The EURAD Model consists of two parts: a mesoscale meteorological model and a chemical transport model. In the first part of the presentation, these parts are introduced and questions concerning the implementation of the complete model on the CRAY X-MP/22 computer system are discussed. Results of the test calculations for the 'Chernobyl' and 'Alpex' cases are then reported. Thereafter, selected problems concerning the treatment of meteorological and air-chemistry processes as well as the parametrization of subscale processes within the model are discussed. The presentation concludes with two lectures on emission evaluations and emission scenarios. (orig./KW) [de

  12. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  13. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 pp. Information Systems Science, bachelor's thesis. Supervisor: Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  14. A Simplified Ab Initio Cosmic-ray Modulation Model with Simulated Time Dependence and Predictive Capability

    Science.gov (United States)

    Moloto, K. D.; Engelbrecht, N. E.; Burger, R. A.

    2018-06-01

    A simplified ab initio approach is followed to model cosmic-ray proton modulation, using a steady-state three-dimensional stochastic solver of the Parker transport equation that simulates some effects of time dependence. Standard diffusion coefficients based on Quasilinear Theory and Nonlinear Guiding Center Theory are employed. The spatial and temporal dependences of the various turbulence quantities required as inputs for the diffusion, as well as the turbulence-reduced drift coefficients, follow from parametric fits to results from a turbulence transport model as well as from spacecraft observations of these turbulence quantities. Effective values are used for the solar wind speed, magnetic field magnitude, and tilt angle in the modulation model to simulate temporal effects due to changes in the large-scale heliospheric plasma. The unusually high cosmic-ray intensities observed during the 2009 solar minimum follow naturally from the current model for most of the energies considered. This demonstrates that changes in turbulence contribute significantly to the high intensities during that solar minimum. We also discuss and illustrate how this model can be used to predict future cosmic-ray intensities, and comment on the reliability of such predictions.
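
    For reference, the Parker transport equation that such stochastic solvers integrate can be written in its standard form (stated here from general background, not quoted from the paper) as:

```latex
\frac{\partial f}{\partial t} =
    \nabla \cdot \left( \mathbf{K}^{(s)} \cdot \nabla f \right)
  - \left( \mathbf{V}_{\mathrm{sw}} + \langle \mathbf{v}_{d} \rangle \right) \cdot \nabla f
  + \frac{1}{3} \left( \nabla \cdot \mathbf{V}_{\mathrm{sw}} \right) \frac{\partial f}{\partial \ln p}
  + Q
```

    where f is the omnidirectional cosmic-ray distribution function, K^(s) the symmetric diffusion tensor, V_sw the solar wind velocity, ⟨v_d⟩ the pitch-angle-averaged drift velocity, p the particle momentum, and Q a local source term.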

  15. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)

    2014-04-01

    This report documents progress made on continued developments of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions computed by the model. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  16. MODELING OF ACCOUNTING ASPECTS OF FINANCIAL RESULTS

    Directory of Open Access Journals (Sweden)

    Hrayr HANISYAN

    2014-06-01

    Full Text Available Modern trends in the world economy require expanding the functional and informational capabilities of financial statements. Market-oriented users of financial statements are interested in the information needed to make financial decisions.

  17. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration, both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA, with a view to modelling rainfall erosion in areas subject to climate change.
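
    For context, the USLE family of models referenced above estimates soil loss as a product of erosivity and site factors (A = R·K·L·S·C·P in the classic formulation); the USLE-M variant, as described in the literature, replaces the event erosivity term with one weighted by the runoff ratio. The sketch below implements only the classic USLE product as a toy calculation; the numeric factor values are invented for illustration and carry no physical authority.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Classic USLE: soil loss A as the product of erosivity, erodibility,
    topographic (LS), cover-management, and support-practice factors.
    Units follow whichever factor system is chosen."""
    return R * K * LS * C * P

# Toy example with made-up factor values (illustration only).
A = usle_soil_loss(R=1200.0, K=0.03, LS=1.4, C=0.2, P=1.0)
print(f"Estimated soil loss A = {A:.1f} (in the units implied by the factor system)")
```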

  18. INTRAVAL test case 1b - modelling results

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1991-07-01

    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual porosity media models taking into account advection and dispersion in water conducting zones (either tubelike veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure yielding error estimates of the parameters, correlation coefficients and also, as a measure for the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual porosity media models are significantly more appropriate to the experimental data than the single porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm which is responsible for the tailing part of the break-through curves. The extracted parameter values for some models for both the linear and non-linear (Freundlich) sorption isotherms are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
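
    The parameter estimation step described above (fitting model breakthrough curves to the infiltration data with a Marquardt-Levenberg search that minimizes a χ² merit function) can be illustrated generically with scipy; the exponential-tail toy model and the synthetic data below stand in for the actual dual-porosity transport models, which are far more involved.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "breakthrough curve": a peak arrival followed by a slow tail.
t = np.linspace(0.1, 50.0, 200)
true = 2.0 * np.exp(-(np.log(t) - 1.5) ** 2 / 0.3) + 0.15 * np.exp(-t / 20.0)
rng = np.random.default_rng(3)
obs = true + rng.normal(scale=0.02, size=t.size)
sigma = 0.02  # measurement uncertainty used to weight the residuals (chi-square)

def model(params, t):
    a, mu, w, b, tau = params
    return a * np.exp(-(np.log(t) - mu) ** 2 / w) + b * np.exp(-t / tau)

def weighted_residuals(params):
    return (model(params, t) - obs) / sigma

# Levenberg-Marquardt minimization of the chi-square merit function.
fit = least_squares(weighted_residuals, x0=[1.0, 1.0, 0.5, 0.1, 10.0], method="lm")
chi2 = float(np.sum(fit.fun ** 2))
print("best-fit parameters:", np.round(fit.x, 3), " chi^2 =", round(chi2, 1))
```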

  19. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model

    Directory of Open Access Journals (Sweden)

    O. Gagliardini

    2013-08-01

    Full Text Available The Fourth IPCC Assessment Report concluded that ice sheet flow models, in their current state, were unable to provide accurate forecast for the increase of polar ice sheet discharge and the associated contribution to sea level rise. Since then, the glaciological community has undertaken a huge effort to develop and improve a new generation of ice flow models, and as a result a significant number of new ice sheet models have emerged. Among them is the parallel finite-element model Elmer/Ice, based on the open-source multi-physics code Elmer. It was one of the first full-Stokes models used to make projections for the evolution of the whole Greenland ice sheet for the coming two centuries. Originally developed to solve local ice flow problems of high mechanical and physical complexity, Elmer/Ice has today reached the maturity to solve larger-scale problems, earning the status of an ice sheet model. Here, we summarise almost 10 yr of development performed by different groups. Elmer/Ice solves the full-Stokes equations, for isotropic but also anisotropic ice rheology, resolves the grounding line dynamics as a contact problem, and contains various basal friction laws. Derived fields, like the age of the ice, the strain rate or stress, can also be computed. Elmer/Ice includes two recently proposed inverse methods to infer badly known parameters. Elmer is a highly parallelised code thanks to recent developments and the implementation of a block preconditioned solver for the Stokes system. In this paper, all these components are presented in detail, as well as the numerical performance of the Stokes solver and developments planned for the future.

  20. Re-framing Inclusive Education Through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    Directory of Open Access Journals (Sweden)

    Maryam Dalkilic

    2016-09-01

    Full Text Available Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational inclusion with Amartya Sen’s (1985) Capability Approach. This integrated framework engages children, educators, and families in principled practices that acknowledge differences, rather than deficits, and enable attention to enhancing the capabilities of children with disabilities in inclusive educational environments. Implications include the development of policy that clarifies the process required to negotiate capabilities and valued functionings and the types of resources required to permit children, educators, and families to create relationally inclusive environments.

  1. A model-free approach to eliminate autocorrelation when testing for process capability

    DEFF Research Database (Denmark)

    Vanmann, Kerstin; Kulahci, Murat

    2008-01-01

    There is an increasing use of on-line data acquisition systems in industry. This usually leads to autocorrelated data and implies that the assumption of independent observations has to be re-examined. Most decision procedures for capability analysis assume independent data. In this article we pre...

  2. A Model for a Single Unmanned Aircraft Systems (UAS) Program Office Managing Joint ISR Capabilities

    Science.gov (United States)

    2017-10-01

    managing the efforts of medium and high altitude UAS assets from a Joint perspective while employing agile principles versus the duplicative efforts... managed using agile principles, could provide greater capability to the warfighter. Consolidation cost efficiencies became an independent variable... ensure quality and value are delivered. Senior leadership buy-in is needed, along with an understanding of agile principles and management style that

  3. Seasonal Characteristics of Widespread Ozone Pollution in China and India: Current Model Capabilities and Source Attributions

    Science.gov (United States)

    Gao, M.; Song, S.; Beig, G.; Zhang, H.; Hu, J.; Ying, Q.; McElroy, M. B.

    2017-12-01

    Fast urbanization and industrialization in China and India have led to severe ozone pollution, threatening public health in these densely populated countries. We show the spatial and seasonal characteristics of ozone concentrations using nation-wide observations for these two countries in 2013. We used the Weather Research and Forecasting model coupled to chemistry (WRF-Chem) to conduct one-year simulations and to evaluate how current models capture the important photochemical processes using the exhaustive available datasets in China and India, including surface measurements, ozonesonde data and satellite retrievals. We also employed the factor separation approach to distinguish the contributions of different sectors to ozone during different seasons. The back trajectory model FLEXPART was applied to investigate the role of transport in highly polluted regions (e.g., North China Plain, Yangtze River delta, and Pearl River Delta) during different seasons. Preliminary results indicate that the WRF-Chem model provides a satisfactory representation of the temporal and spatial variations of ozone for both China and India. The factor separation approach offers valuable insights into relevant sources of ozone for both countries providing valuable guidance for policy options designed to mitigate the related problem.

  4. A Grid Connected Transformerless Inverter and its Model Predictive Control Strategy with Leakage Current Elimination Capability

    Directory of Open Access Journals (Sweden)

    J. Fallah Ardashir

    2017-06-01

    Full Text Available This paper proposes a new single phase transformerless Photovoltaic (PV) inverter for grid connected systems. It consists of six power switches, two diodes, one capacitor, and a filter at the output stage. The neutral of the grid is directly connected to the negative terminal of the source. This results in constant common mode voltage and zero leakage current. A Model Predictive Control (MPC) technique is used to modulate the converter to reduce the output current ripple and filter requirements. The main advantages of this inverter are its compact size, low cost, and flexible grounding configuration. For brevity, the operating principle and analysis of the proposed circuit are presented only briefly. Simulation and experimental results of a 200 W prototype are shown at the end to validate the proposed topology and concept. The results obtained clearly verify the performance of the proposed inverter and its practical application for grid connected PV systems.
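
    As background, finite-control-set model predictive control of a grid-connected inverter typically predicts the output current for each admissible switching state over one sampling period and applies the state that minimizes a cost such as the squared current-tracking error. The sketch below illustrates that generic idea for a simple L-filtered single-phase stage; the parameters, the three-level voltage set, and the plant model are assumptions for illustration and do not describe the proposed six-switch topology.

```python
import numpy as np

# Assumed plant: v_inv = L di/dt + R i + v_grid  (single-phase, L filter).
L, R, Ts, Vdc = 5e-3, 0.1, 50e-6, 400.0
candidate_voltages = np.array([-Vdc, 0.0, +Vdc])  # illustrative inverter output levels

def predict_current(i_now, v_inv, v_grid):
    """One-step Euler prediction of the inductor current."""
    return i_now + (Ts / L) * (v_inv - v_grid - R * i_now)

def mpc_step(i_now, i_ref_next, v_grid):
    """Pick the voltage level minimizing the predicted current-tracking error."""
    predictions = predict_current(i_now, candidate_voltages, v_grid)
    cost = (i_ref_next - predictions) ** 2
    return candidate_voltages[np.argmin(cost)]

# Tiny closed-loop demonstration over a few samples of a 50 Hz reference.
i, t = 0.0, 0.0
for _ in range(5):
    v_grid = 325.0 * np.sin(2 * np.pi * 50 * t)
    i_ref = 10.0 * np.sin(2 * np.pi * 50 * (t + Ts))
    v_apply = mpc_step(i, i_ref, v_grid)
    i = predict_current(i, v_apply, v_grid)
    t += Ts
    print(f"t={t*1e3:.2f} ms  applied v={v_apply:+.0f} V  i={i:+.2f} A")
```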

  5. Introduction of an Evaluation Tool to Predict the Probability of Success of Companies: The Innovativeness, Capabilities and Potential Model (ICP

    Directory of Open Access Journals (Sweden)

    Michael Lewrick

    2009-05-01

    Full Text Available Successful innovation requires management and in this paper a model to help manage the innovation process is presented. This model can be used to audit the management capability to innovate and to monitor how sales increase is related to innovativeness. The model was developed from a study of companies in the high technology cluster around Munich and validated using statistical procedures. The model was found to be effective at predicting the success or otherwise of the innovation strategy pursued by the company. The use of this model and how it can be used to identify areas for improvement are documented in this paper.

  6. Assessing the capability of CORDEX models in simulating onset of rainfall in West Africa

    Science.gov (United States)

    Mounkaila, Moussa S.; Abiodun, Babatunde J.; `Bayo Omotosho, J.

    2015-01-01

    Reliable forecasts of rainfall-onset dates (RODs) are crucial for agricultural planning and food security in West Africa. This study evaluates the ability of nine CORDEX regional climate models (RCMs: ARPEGE, CRCM5, RACMO, RCA35, REMO, RegCM3, PRECIS, CCLM and WRF) in simulating RODs over the region. Four definitions are used to compute RODs, and two observation datasets (GPCP and TRMM) are used in the model evaluation. The evaluation considers how well the RCMs, driven by ERA-Interim reanalysis (ERAIN), simulate the observed mean, standard deviation and inter-annual variability of RODs over West Africa. It also investigates how well the models link RODs with the northward movement of the monsoon system over the region. The model performances are compared to that of the driving reanalysis—ERAIN. Observations show that the mean RODs in West Africa have a zonal distribution, and the dates increase from the Guinea coast northward. ERAIN fails to reproduce the spatial distribution of the RODs as observed. The performance of some RCMs in simulating the RODs depends on the ROD definition used. For instance, ARPEGE, RACMO, PRECIS and CCLM produce a better ROD distribution than that of ERAIN when three of the ROD definitions are used, but give a worse ROD distribution than that of ERAIN when the fourth definition is used. However, regardless of the definition used, CRCM5, RCA35, REMO, RegCM3 and WRF show a remarkable improvement over ERAIN. The study shows that the ability of the RCMs in simulating RODs over West Africa strongly depends on how well the models reproduce the northward movement of the monsoon system and the associated features. The results show that there are some differences in the RODs obtained between the two observation datasets and RCMs, and the differences are magnified by differences in the ROD definitions. However, the study shows that most CORDEX RCMs have remarkable skills in predicting the RODs in West Africa.

  7. A triple helix model of medical innovation: Supply, demand, and technological capabilities in terms of Medical Subject Headings

    NARCIS (Netherlands)

    Petersen, A.M.; Rotolo, D.; Leydesdorff, L.

    We develop a model of innovation that enables us to trace the interplay among three key dimensions of the innovation process: (i) demand for and (ii) supply of innovation, and (iii) technological capabilities available to generate innovation in the forms of products, processes, and services.

  8. Remote Sensing of Seagrass Leaf Area Index and Species: The Capability of a Model Inversion Method Assessed by Sensitivity Analysis and Hyperspectral Data of Florida Bay

    Directory of Open Access Journals (Sweden)

    John D. Hedley

    2017-11-01

    Full Text Available The capability for mapping two species of seagrass, Thalassia testudinium and Syringodium filiforme, by remote sensing using a physics-based model inversion method was investigated. The model was based on a three-dimensional canopy model combined with a model for the overlying water column. The model included uncertainty propagation based on variation in leaf reflectances, canopy structure, water column properties, and the air-water interface. The uncertainty propagation enabled both a-priori predictive sensitivity analysis of potential capability and the generation of per-pixel error bars when applied to imagery. A primary aim of the work was to compare the sensitivity analysis to results achieved in a practical application using airborne hyperspectral data, to gain insight on the validity of sensitivity analyses in general. Results showed that while the sensitivity analysis predicted a weak but positive discrimination capability for species, in a practical application the relevant spectral differences were extremely small compared to discrepancies in the radiometric alignment of the model with the imagery—even though this alignment was very good. Complex interactions between spectral matching and uncertainty propagation also introduced biases. Ability to discriminate LAI was good, and comparable to previously published methods using different approaches. The main limitation in this respect was the spatial alignment of the in situ data with the imagery, which was heterogeneous on scales of a few meters. The results provide insight on the limitations of physics-based inversion methods and seagrass mapping in general. Complex models can degrade unpredictably when radiometric alignment of the model and imagery is not perfect and incorporating uncertainties can have non-intuitive impacts on method performance. Sensitivity analyses are upper bounds to practical capability; incorporating a term for potential systematic errors in radiometric alignment may

  9. Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis

    Science.gov (United States)

    Kopp, H.; Trettau, R.; Zolotar, B.

    1984-01-01

    The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.

  10. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Barahona, B.; Jonkman, J.; Damiani, R.; Robertson, A.; Hayman, G.

    2014-12-01

    Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
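
    The substructure reduction mentioned above (a linear finite-element model condensed with the Craig-Bampton method) can be illustrated in generic form: boundary DOFs are retained, and interior DOFs are replaced by static constraint modes plus a truncated set of fixed-interface normal modes. The numpy sketch below shows that textbook reduction on arbitrary mass and stiffness matrices; it is a generic illustration under those assumptions, not SubDyn's actual implementation.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary_dofs, n_modes):
    """Craig-Bampton reduction: keep boundary DOFs, add fixed-interface modes."""
    n = M.shape[0]
    b = np.asarray(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Static constraint modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes (boundary clamped); keep the lowest n_modes.
    _, Phi = eigh(Kii, M[np.ix_(i, i)])
    Phi = Phi[:, :n_modes]
    # Assemble the transformation u_full = T @ [u_boundary; q_modal].
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[i, :len(b)] = Psi
    T[i, len(b):] = Phi
    return T.T @ M @ T, T.T @ K @ T, T

# Toy chain of 6 masses and springs, reduced to 2 boundary DOFs + 2 modes.
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Mr, Kr, T = craig_bampton(M, K, boundary_dofs=[0, n - 1], n_modes=2)
print(Mr.shape, Kr.shape)  # (4, 4) reduced mass and stiffness matrices
```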

  11. Capability ethics

    OpenAIRE

    Robeyns, Ingrid

    2012-01-01

    The capability approach is one of the most recent additions to the landscape of normative theories in ethics and political philosophy. Yet in its present stage of development, the capability approach is not a full-blown normative theory, in contrast to utilitarianism, deontological theories, virtue ethics, or pragmatism. As I will argue in this chapter, at present the core of the capability approach is an account of value, which together with some other (more minor) normative comm...

  12. An assessment system for the system safety engineering capability maturity model in the case of spent fuel reprocessing

    International Nuclear Information System (INIS)

    Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Bai Xiaofeng

    2012-01-01

    Using the systems security engineering capability maturity model (SSE-CMM), we can improve processes, support the evaluation of capability, and promote user trust. SSE-CMM is a common method for organizing and implementing safety engineering, and it is a mature method for system safety engineering. Combining the capability maturity model (CMM) with total quality management and statistical theory, SSE-CMM turns systems security engineering into a well-defined, mature, measurable, advanced engineering discipline. Lack of domain knowledge, the size of the data, the diversity of evidence, the cumbersomeness of processes, and the complexity of matching evidence with problems are the main issues that SSE-CMM assessment has to face. To effectively improve the efficiency of assessment for the spent fuel reprocessing system security engineering capability maturity model (SFR-SSE-CMM), in this paper we designed intelligent assessment software based on a domain ontology that uses methods such as evidence theory, semantic web, intelligent information retrieval, and intelligent auto-matching techniques. The software includes four subsystems, including a domain ontology creation and management system, an automatic evidence collection system, and a problem and evidence matching system. The architecture of the software is divided into five layers: a data layer, an ontology layer, a knowledge layer, a service layer and a presentation layer. (authors)

  13. Discussion of gas trade model (GTM) results

    International Nuclear Information System (INIS)

    Manne, A.

    1989-01-01

    This is in response to your invitation to comment on the structure of GTM and also upon the differences between its results and those of other models participating in EMF9. First a word upon the structure. GTM was originally designed to provide both regional and sectoral detail within the North American market for natural gas at a single point in time, e.g. the year 2000. It is a spatial equilibrium model in which a solution is obtained by maximizing a nonlinear function, the sum of consumers' and producers' surplus. Since transport costs are included in producers' cost, this formulation automatically ensures that geographical price differentials will not exceed transport costs. For purposes of EMF9, GTM was modified to allow for resource development and depletion over time
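
    To make the surplus-maximization formulation concrete, the sketch below sets up a tiny two-producer, two-market spatial equilibrium and solves it numerically. All coefficients are invented for illustration and are not GTM data; the point is only that transport costs enter producers' cost, so equilibrium price gaps cannot exceed them.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative two-producer, two-market data (placeholders, not GTM inputs)
a = np.array([10.0, 12.0])     # demand intercepts (price at zero consumption)
b = np.array([0.5, 0.6])       # demand slopes
c0 = np.array([2.0, 3.0])      # producers' marginal-cost intercepts
c1 = np.array([0.1, 0.2])      # marginal-cost slopes (depletion/expansion)
t = np.array([[0.5, 1.5],      # unit transport cost, producer i -> market j
              [1.2, 0.4]])

def neg_surplus(x_flat):
    x = x_flat.reshape(2, 2)          # shipments from producer i to market j
    Q = x.sum(axis=0)                 # consumption per market
    S = x.sum(axis=1)                 # production per producer
    consumer = np.sum(a * Q - 0.5 * b * Q**2)        # area under inverse demand
    producer_cost = np.sum(c0 * S + 0.5 * c1 * S**2)
    transport = np.sum(t * x)
    return -(consumer - producer_cost - transport)   # minimize the negative

res = minimize(neg_surplus, x0=np.ones(4),
               bounds=[(0, None)] * 4, method="L-BFGS-B")
print("optimal shipments:\n", res.x.reshape(2, 2))
```

    At the optimum, prices in connected markets differ by at most the unit transport cost, which is the property noted in the abstract.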

  14. Bumetanide is not capable of terminating status epilepticus but enhances phenobarbital efficacy in different rat models.

    Science.gov (United States)

    Töllner, Kathrin; Brandt, Claudia; Erker, Thomas; Löscher, Wolfgang

    2015-01-05

    In about 20-40% of patients, status epilepticus (SE) is refractory to standard treatment with benzodiazepines, necessitating second- and third-line treatments that are not always successful, resulting in increased mortality. Rat models of refractory SE are instrumental in studying the changes underlying refractoriness and to develop more effective treatments for this severe medical emergency. Failure of GABAergic inhibition is a likely cause of the development of benzodiazepine resistance during SE. In addition to changes in GABAA receptor expression, trafficking, and function, alterations in Cl(-) homeostasis with increased intraneuronal Cl(-) levels may be involved. Bumetanide, which reduces intraneuronal Cl(-) by inhibiting the Cl(-) intruding Na(+), K(+), Cl(-) cotransporter NKCC1, has been reported to interrupt SE induced by kainate in urethane-anesthetized rats, indicating that this diuretic drug may be an interesting candidate for treatment of refractory SE. In this study, we evaluated the effects of bumetanide in the kainate and lithium-pilocarpine models of SE as well as a model in which SE is induced by sustained electrical stimulation of the basolateral amygdala. Unexpectedly, bumetanide alone was ineffective to terminate SE in both conscious and anesthetized adult rats. However, it potentiated the anticonvulsant effect of low doses of phenobarbital, although this was only seen in part of the animals; higher doses of phenobarbital, particularly in combination with diazepam, were more effective to terminate SE than bumetanide/phenobarbital combinations. These data do not suggest that bumetanide, alone or in combination with phenobarbital, is a valuable option in the treatment of refractory SE in adult patients. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Summary of Recent Results from NASA's Space Solar Power (SSP) Programs and the Current Capabilities of Microwave WPT Technology

    Science.gov (United States)

    McSpadden, James; Mankins, John C.; Howell, Joe T. (Technical Monitor)

    2002-01-01

    The concept of placing enormous solar power satellite (SPS) systems in space represents one of a handful of new technological options that might provide large-scale, environmentally clean base load power into terrestrial markets. In the US, the SPS concept was examined extensively during the late 1970s by the U.S. Department of Energy (DOE) and the National Aeronautics and Space Administration (NASA). More recently, the subject of space solar power (SSP) was reexamined by NASA from 1995-1997 in the "fresh look" study, and during 1998 in an SSP "concept definition study". As a result of these efforts, in 1999-2000, NASA undertook the SSP Exploratory Research and Technology (SERT) program which pursued preliminary strategic technology research and development to enable large, multi-megawatt SSP systems and wireless power transmission (WPT) for government missions and commercial markets (in-space and terrestrial). During 2001-2002, NASA has been pursuing an SSP Concept and Technology Maturation (SCTM) program follow-on to the SERT, with special emphasis on identifying new, high-leverage technologies that might advance the feasibility of future SSP systems. In addition, in 2001, the U.S. National Research Council (NRC) released a major report providing the results of a peer review of NASA's SSP strategic research and technology (R&T) road maps. One of the key technologies needed to enable the future feasibility of SSP/SPS is that of wireless power transmission. Advances in phased array antennas and rectennas have provided the building blocks for a realizable WPT system. These key components include the dc-RF converters in the transmitter, the retrodirective beam control system, and the receiving rectenna. Each subject is briefly covered, and results from the SERT program that studied a 5.8 GHz SPS system are presented. This paper presents a summary of results from NASA's SSP efforts, along with a summary of the status of microwave WPT technology development.

  16. Dynamic Capabilities

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe; Stenger, Marianne

    2013-01-01

    The findings reveal a positive relationship between dynamic capabilities and innovation performance in the case enterprises, as we would expect. It was, however, not possible to establish a positive relationship between innovation performance and profitability. Nor was there any positive...... relationship between dynamic capabilities and profitability....

  17. Capability ethics

    NARCIS (Netherlands)

    I.A.M. Robeyns (Ingrid)

    2012-01-01

    The capability approach is one of the most recent additions to the landscape of normative theories in ethics and political philosophy. Yet in its present stage of development, the capability approach is not a full-blown normative theory, in contrast to utilitarianism, deontological

  18. Sustainable solar energy capability studies by using S2H model in treating groundwater supply

    Science.gov (United States)

    Musa, S.; Anuar, M. F.; Shahabuddin, M. M.; Ridzuan, M. B.; Radin Mohamed, R. M. S.; Madun, M. A.

    2018-04-01

    Groundwater extracted in the Research Centre for Soft Soil Malaysia (RECESS) contains a number of pollutants that exceed the safe level for consumption. A Solar-Hydro (S2H) model, which is a practical prototype, has been introduced to treat the groundwater sustainably by a solar energy process (evaporation method). Selected parameters were tested: sulphate, nitrate, chloride, fluoride, pH and dissolved oxygen. The water quality results show that all parameters achieved 100% of the drinking water quality standard issued by the Ministry of Health Malaysia. The evaporation method proved that solar energy can be applied to treat groundwater quality sustainably with up to 90% effectiveness. On the other hand, the quantitative analysis has shown that the production of clean water is below 2% owing to time constraints and design factors. Thus, this study shows that clean and fresh water can be generated from groundwater by using a simplified model, and it has huge potential to be implemented by local communities at a larger scale and with an affordable design.

  19. Investigating Integration Capabilities Between Ifc and Citygml LOD3 for 3d City Modelling

    Science.gov (United States)

    Floros, G.; Pispidikis, I.; Dimopoulou, E.

    2017-10-01

    Smart cities are applied to an increasing number of application fields. This evolution, though, urges data collection and integration, hence major issues arise that need to be tackled. One of the most important challenges is the heterogeneity of collected data, especially if those data derive from different standards and vary in terms of geometry, topology and semantics. Another key challenge is the efficient analysis and visualization of spatial data, which, due to the complexity of the physical reality of the modern world, 2D GIS struggles to cope with. So, in order to facilitate data analysis and enhance the role of smart cities, the 3rd dimension needs to be implemented. Standards such as CityGML and IFC fulfill that necessity, but they present major differences in their schemas that render their integration a challenging task. This paper focuses on addressing those differences, examines the up-to-date research work and investigates an alternative methodology in order to bridge the gap between those standards. Within this framework, a generic IFC model is generated and converted to a CityGML model, which is validated and evaluated on its geometrical correctness and semantic coherence. General results as well as future research considerations are presented.

  20. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...
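
    As an illustration of the cost-damping idea mentioned above, the sketch below evaluates a piecewise-linear ("spline") disutility in the logarithm of generalized cost, so that the marginal disutility of cost falls for long trips. The knots, slopes and function name are invented for illustration and are not the Danish National Model's actual specification.

```python
import numpy as np

def log_spline_cost(cost, knots, slopes, beta0=-1.0):
    """Piecewise-linear disutility in log(cost): slope beta0 below the first
    knot, then progressively flatter slopes beyond each knot (cost damping)."""
    x = np.log(cost)
    u = beta0 * np.minimum(x, knots[0])                 # first segment
    for k, (lo, slope) in enumerate(zip(knots, slopes)):
        hi = knots[k + 1] if k + 1 < len(knots) else np.inf
        u += slope * np.clip(x - lo, 0.0, hi - lo)      # add each later segment
    return u

cost = np.array([10.0, 50.0, 200.0, 800.0])             # generalized trip costs
print(log_spline_cost(cost, knots=[np.log(30), np.log(150)],
                      slopes=[-0.6, -0.3]))
```

    The damped form keeps long-distance demand from being suppressed too strongly, which is the usual motivation for replacing a purely linear cost term.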

  1. Superconducting solenoid model magnet test results

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab

    2006-08-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.

  2. Superconducting solenoid model magnet test results

    International Nuclear Information System (INIS)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab

    2006-01-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests

  3. Developing the Practising Model in Physical Education: An Expository Outline Focusing on Movement Capability

    Science.gov (United States)

    Barker, D. M.; Aggerholm, K.; Standal, O.; Larsson, H.

    2018-01-01

    Background: Physical educators currently have a number of pedagogical (or curricular) models at their disposal. While existing models have been well-received in educational contexts, these models seek to extend students' capacities within a limited number of "human activities" (Arendt, 1958). The activity of "human practising,"…

  4. Designing and Validating a Model for Measuring Sustainability of Overall Innovation Capability of Small and Medium-Sized Enterprises

    Directory of Open Access Journals (Sweden)

    Mohd Nizam Ab Rahman

    2015-01-01

    Full Text Available The business environment is currently characterized by intensified competition at both the national and firm levels. Many studies have shown that innovation positively affects firms in enhancing their competitiveness. Innovation is a dynamic process that requires a continuous, evolving, and mastered management. Evaluating the sustainability of overall innovation capability of a business is a major means of determining how well this firm effectively and efficiently manages its innovation process. A psychometrically valid scale for evaluating the sustainability of overall innovation capability of a firm is still lacking in the current innovation literature. Thus, this study developed a reliable and valid scale of measuring the sustainability of overall innovation capability construct. The unidimensionality, reliability, and several validity components of the developed scale were tested using the data collected from 175 small and medium-sized enterprises in Iran. A series of systematic statistical analyses were performed. Results of the reliability measures, exploratory and confirmatory factor analyses, and several components of validity tests strongly supported an eight-dimensional (8D) scale of measuring the sustainability of overall innovation capability construct. The dimensions of the scale were strategic management, supportive culture and structure, resource allocation, communication and networking, knowledge and technology management, idea management, project development, and commercialization capabilities.
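
    A reliability check of the kind reported above can be reproduced generically; the sketch below computes Cronbach's alpha for the items of one hypothetical scale dimension on simulated responses from 175 firms. It illustrates the reliability step only and is not the authors' analysis pipeline or data.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability for one scale dimension.
    `items` is an (n_respondents, n_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(175, 1))                  # 175 firms, one latent factor
scores = latent + 0.5 * rng.normal(size=(175, 4))   # four items of one dimension
print(round(cronbach_alpha(scores), 2))             # high alpha -> reliable items
```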

  5. TRANSFORMATION OF THE STUDENTS’ INQUIRY CAPABILITY THROUGH MINDMAP EDUCATIVE BY USING GAME OBSERVATION NORMATIVELY (MEGONO LEARNING MODEL

    Directory of Open Access Journals (Sweden)

    Tasiwan Tasiwan

    2016-04-01

    Full Text Available This classroom action research was conducted to analyze the development of the students' inquiry abilities in science learning with a learning model of mindmap educative by using game observation normatively (Megono). The study was conducted in three cycles. In each cycle, the students were divided into five groups, each consisting of seven students. Each group was mandated to observe and to analyze the images/photos. After the image observations, they were asked to discuss, write and compile the information into a concept map. One of the students acted as a representative of the group in a game of observation. Data were obtained through the pre-test, post-test, and observation by the observers, as well as from the photo and video recordings. The results showed that the students' inquiry ability increased by 63.27% at the end of the cycle. At the initial conditions, the students' ability was low (0.49). After the first cycle, it increased to 0.63 (medium), then to 0.68 (moderate) in the second cycle, and finally to 0.80 (high) in the third cycle. The average increase in every aspect was 68.59%. The highest inquiry capability was achieved in the reasoning aspect, at 89.29 (very high). It is suggested that the observation games be used fairly and that more time adjustment is needed to obtain higher learning outcomes.

  6. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 Gas Hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  7. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirm the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  8. Modelling Extortion Racket Systems: Preliminary Results

    Science.gov (United States)

    Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria

    Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which makes understanding their dynamics an objective of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how independent and combined effects of legal and social norm-based processes help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are not able to change people's mind-set, which renders the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.
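
    The interplay of legal and norm-based processes described above can be illustrated with a toy agent-based sketch: enforcement raises the immediate probability of reporting extortion, while a norm campaign slowly shifts agents' internalised mindsets, which is what makes the behaviour resilient. The rules, parameters and class names below are invented and far simpler than the published model.

```python
import random

random.seed(1)

class Citizen:
    def __init__(self):
        self.norm = random.random()   # 0 = culture of fear, 1 = culture of legality

    def reports_extortion(self, legal_pressure):
        # Behaviour depends on both enforcement and the internalised norm
        return random.random() < 0.5 * legal_pressure + 0.5 * self.norm

def step(population, legal_pressure, norm_campaign):
    reports = sum(c.reports_extortion(legal_pressure) for c in population)
    if norm_campaign:                 # social norm-based process: mindsets shift
        for c in population:
            c.norm = min(1.0, c.norm + 0.01 * reports / len(population))
    return reports / len(population)

pop = [Citizen() for _ in range(500)]
for _ in range(50):
    share = step(pop, legal_pressure=0.6, norm_campaign=True)

print(f"share reporting after 50 steps: {share:.2f}")
print(f"mean legality norm:             {sum(c.norm for c in pop)/len(pop):.2f}")
```

    Switching `norm_campaign` off leaves reporting driven by enforcement alone, which collapses once `legal_pressure` is reduced; with the campaign on, the shifted norms persist.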

  9. New results in the Dual Parton Model

    International Nuclear Information System (INIS)

    Van, J.T.T.; Capella, A.

    1984-01-01

    In this paper, the similarities between the x distributions for particle production and the fragmentation functions observed in e+e- collisions and in deep inelastic scattering are presented. Based on this observation, the authors develop a complete approach to multiparticle production which incorporates the most important features and concepts learned about high energy collisions. 1. Topological expansion: the dominant diagram at high energy corresponds to the simplest topology. 2. Unitarity: diagrams of various topology contribute to the cross sections in a way that unitarity is preserved. 3. Regge behaviour and Duality. 4. Partonic structure of hadrons. These general theoretical ideas result from many joint experimental and theoretical efforts on the study of soft hadron physics. The dual parton model is able to explain all the experimental features from FNAL to SPS collider energies. It has all the properties of an S-matrix theory and provides a unified description of hadron-hadron, hadron-nucleus and nucleus-nucleus collisions

  10. Validation and comparison of two-phase flow modeling capabilities of CFD, sub channel and system codes by means of post-test calculations of BFBT transient tests

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez, E-mail: victor.sanchez@kit.edu

    2013-10-15

    Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify differences between the code predictions. • All three scales predict results in good agreement with experiment. • Subcooled boiling models are identified as a field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermal-hydraulic simulation tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales – NEPTUNE_CFD as a fine-mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub-channel codes and TRACE as a system code – are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigations is the well-known and widely used database provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvements.

  11. SCANAIR a transient fuel performance code Part two: Assessment of modelling capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Georgenthum, Vincent, E-mail: vincent.georgenthum@irsn.fr; Moal, Alain; Marchand, Olivier

    2014-12-15

    Highlights: • The SCANAIR code is devoted to the study of irradiated fuel rod behaviour during RIA. • The paper deals with the status of the code validation for PWR rods. • During the PCMI stage there is a good agreement between calculations and experiments. • The boiling crisis occurrence is rather well predicted. • The code assessment during the boiling crisis has still to be improved. - Abstract: In the frame of their research programmes on fuel safety, the French Institut de Radioprotection et de Sûreté Nucléaire develops the SCANAIR code devoted to the study of irradiated fuel rod behaviour during reactivity-initiated accidents. A first paper focused on the detailed modelling and code description. This second paper deals with the status of the code validation for pressurised water reactor rods performed thanks to the available experimental results. About 60 integral tests carried out in the CABRI and NSRR experimental reactors and 24 separate tests performed in the PATRICIA facility (devoted to the thermal-hydraulics study) have been recalculated and compared to experimental data. During the first stage of the transient, the pellet-clad mechanical interaction phase, there is a good agreement between calculations and experiments: the clad residual elongation and hoop strain of non-failed tests but also the failure occurrence and failure enthalpy of failed tests are correctly calculated. After this first stage, the increase of cladding temperature can lead to the Departure from Nucleate Boiling. During the film boiling regime, the clad temperature can reach a very high temperature (>700 °C). While the boiling crisis occurrence is rather well predicted, the calculation of the clad temperature and the clad hoop strain during this stage still has to be improved.

  12. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    Directory of Open Access Journals (Sweden)

    F. Løvholt

    2013-06-01

    Full Text Available Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered, inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases with solitary waves with amplitudes ranging from 0.1 to 0.5 were applied, and slopes ranged from 10 to 50°. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to that of the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear on fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.
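
    For context on the test cases, the sketch below builds the standard first-order solitary wave profile commonly used to initialise such run-up benchmarks, with the amplitude given as a fraction of depth as in the 0.1-0.5 range quoted. It is a generic illustration under stated assumptions, not the exact initial condition of the paper.

```python
import numpy as np

g = 9.81
d = 1.0                      # still-water depth (m)
eps = 0.3                    # non-dimensional amplitude A/d, within the 0.1-0.5 range
A = eps * d

x = np.linspace(-30, 30, 601)
k = np.sqrt(3.0 * A / (4.0 * d**3))      # effective wavenumber of the solitary wave
eta = A / np.cosh(k * x)**2              # first-order free-surface profile
c = np.sqrt(g * (d + A))                 # celerity used to prescribe the velocity field

print(f"crest height {eta.max():.2f} m, celerity {c:.2f} m/s")
```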

  13. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2015-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  14. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2012-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  15. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2015-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  16. Using Marketing Capability Maturity Model to Measure Marketing Processes at Iran Transfo Corporation

    OpenAIRE

    Arman Ahmadizad; Seyyed Mojtaba Akhavan Hejazi; Amirhossein Sabourtinat

    2011-01-01

    Abstract In this study, the marketing maturity model has been used at Iran Transfo Corporation. For this purpose, the five-level process maturity model has been applied. The statistical population includes managers, supervisors and experts of marketing and sales at Iran Transfo Corporation and, due to its small size, the entire population has been studied as the research sample. 11 questionnaires have been used for data collection; their validity has been confirmed by content validity analysis ...

  17. Becker meets Ricardo: A social and cognitive skills model of human capabilities

    OpenAIRE

    Xianwen Shi; Ronald Wolthoff; Aloysius Siow; Robert McCann

    2012-01-01

    This paper studies an equilibrium model of social and cognitive skills interactions in school, work and marriage. The model uses a common team production function in each sector which integrates the complementarity concerns of Becker with the task assignment and comparative advantage concerns of Ricardo. The theory delivers full task specialization in the labor and education markets, incomplete task specialization in marriage. It rationalizes many-to-one matching, a common feature in labor mar...

  18. Results of the eruptive column model inter-comparison study

    Science.gov (United States)

    Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza

    2016-01-01

    This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.
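
    The variability figures quoted above are essentially the standard deviation of the model predictions expressed as a percentage of their mean; a minimal illustration with hypothetical plume-height predictions is sketched below (the numbers are not the study's results).

```python
import numpy as np

# Hypothetical plume-height predictions (km) from several models for one exercise
heights = np.array([31.0, 34.5, 36.0, 33.2, 35.1, 32.4])

mean_h = heights.mean()
spread = heights.std(ddof=1) / mean_h * 100.0    # sample std. dev. as % of the mean
print(f"mean height {mean_h:.1f} km, inter-model variability {spread:.0f}%")
```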

  19. Immersive visualization of dynamic CFD model results

    International Nuclear Information System (INIS)

    Comparato, J.R.; Ringel, K.L.; Heath, D.J.

    2004-01-01

    With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)

  20. Transient Mathematical Modeling for Liquid Rocket Engine Systems: Methods, Capabilities, and Experience

    Science.gov (United States)

    Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.

    2005-01-01

    The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
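
    In lumped form, the governing equations referred to above reduce to mass and energy balances integrated in time; the sketch below integrates a toy single-volume start transient (one line feeding a chamber) to show that structure. All coefficients are invented placeholders, and the model is far simpler than the engine-system codes discussed.

```python
from scipy.integrate import solve_ivp

# Toy lumped-parameter start transient: one propellant line feeding a chamber.
a2_over_V = 4.0e5     # (speed of sound)^2 / volume; collapses the fluid properties
K_line = 2.0e-4       # line conductance, kg/s per Pa of pressure drop
P_tank = 3.0e6        # upstream (tank) pressure, Pa
C_star = 1.5e-3       # chamber discharge coefficient, kg/s per Pa

def rhs(t, y):
    P_c = y[0]
    m_in = K_line * max(P_tank - P_c, 0.0)   # flow into the chamber
    m_out = C_star * P_c                     # flow out through the throat
    return [a2_over_V * (m_in - m_out)]      # mass balance drives the pressure rise

sol = solve_ivp(rhs, (0.0, 2.0), [1.0e5], max_step=1e-3)
print(f"chamber pressure settles near {sol.y[0, -1] / 1e6:.2f} MPa")
```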

  1. Linkage of PRA models. Phase 1, Results

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the ``Linkage of PRA Models`` project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining ``linking`` analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a ``generic`` classification scheme to group plants based upon a particular plant attribute.

  2. Linkage of PRA models. Phase 1, Results

    International Nuclear Information System (INIS)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the ''Linkage of PRA Models'' project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining ''linking'' analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a ''generic'' classification scheme to group plants based upon a particular plant attribute

  3. Engineering Glass Passivation Layers -Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate-, sulfate-, and phosphate-rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solutions vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  4. CIEMAT model results for Esthwaite Water

    International Nuclear Information System (INIS)

    Aguero, A.; Garcia-Olivares, A.

    2000-01-01

    This study used the transfer model PRYMA-LO, developed by CIEMAT-IMA, Madrid, Spain, to simulate the transfer of Cs-137 in watershed scenarios. The main processes considered by the model include: transfer of the fallout to the ground, incorporation of the fallout radioisotopes into the water flow, and their removal from the system. The model was tested against observation data obtained in water and sediments of Esthwaite Water, Lake District, UK. This comparison made it possible to calibrate the parameters of the model to the specific scenario
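
    A watershed-to-lake transfer model of the kind described can be illustrated with a two-compartment, first-order sketch: deposited activity washes off the catchment into the lake, where it is removed by outflow/sedimentation and radioactive decay. The parameter values below are invented and do not correspond to PRYMA-LO or to Esthwaite Water.

```python
import numpy as np

# Illustrative catchment-to-lake transfer of Cs-137 (all parameters are made up)
lam = np.log(2) / (30.17 * 365.25)   # radioactive decay constant, 1/day
k_runoff = 2.0e-4                    # washoff of deposited activity to the lake, 1/day
k_removal = 5.0e-3                   # removal from the water column, 1/day
V = 6.4e6                            # lake volume, m^3

deposit = 1.0e12                     # Bq deposited on the catchment by the fallout
conc = 0.0                           # Bq/m^3 in lake water
dt = 1.0                             # time step, days

history = []
for day in range(3650):
    inflow = k_runoff * deposit                   # Bq/day washed into the lake
    deposit -= (k_runoff + lam) * deposit * dt    # catchment inventory declines
    conc += (inflow / V - (k_removal + lam) * conc) * dt
    history.append(conc)

peak = max(history)
print(f"peak concentration {peak:.0f} Bq/m^3 after {history.index(peak)} days")
```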

  5. Capability approach

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal; Kjeldsen, Christian Christrup

    The textbook is the first comprehensive Danish presentation of the Capability Approach developed by Amartya Sen and Martha Nussbaum. The book contains a presentation and discussion of Sen and Nussbaum's theoretical platform. The book includes examples from education/education policy, pedagogy and care....

  6. The Cyber Defense (CyDef) Model for Assessing Countermeasure Capabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Margot [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeVries, Troy Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Susanna P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-06-01

    Cybersecurity is essential to maintaining operations, and is now a de facto cost of business. Despite this, there is little consensus on how to systematically make decisions about cyber countermeasures investments. Identifying gaps and determining the expected return on investment (ROI) of adding a new cybersecurity countermeasure is frequently a hand-waving exercise at best. Worse, cybersecurity nomenclature is murky and frequently over-loaded, which further complicates issues by inhibiting clear communication. This paper presents a series of foundational models and nomenclature for discussing cybersecurity countermeasures, and then introduces the Cyber Defense (CyDef) model, which provides a systematic and intuitive way for decision-makers to effectively communicate with operations and device experts.

  7. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    Science.gov (United States)

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
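
    The baseline comparison described above (spam classification with small, deliberately biased samples) can be reproduced in outline with scikit-learn; the sketch below evaluates the conventional methods listed in the abstract on a tiny invented text sample. The authors' cognitive-bias model itself is not reproduced here, only the baselines it is compared against.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score

# Tiny, invented "spam" sample to mimic the small-data setting
texts = ["win cash now", "free prize claim", "cheap meds online", "win win win",
         "meeting at noon", "lunch tomorrow?", "project status update",
         "free lunch for the team", "claim your invoice", "cash flow report"]
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])   # 1 = spam

X = CountVectorizer().fit_transform(texts)

for name, clf in [("naive Bayes", MultinomialNB()),
                  ("SVM", SVC()),
                  ("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name:20s} 5-fold accuracy: {acc:.2f}")
```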

  8. Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management

    Science.gov (United States)

    2016-11-16

    through support by a prior DOD grant, and in this project, we focused on how to effectively adapt this for the cloud catastrophe environment. The...the effects of varying cloud resources and the cloud architecture on L, o, and g values, we will be able to formulate realistic analytical models of...variation in computing and communication costs of test problems due to varying loads in the cloud environment. We used the parallel matrix multiplication

  9. Preliminary Modeling of Acoustic Detection Capability for the Drifting Arctic Monitoring System

    Science.gov (United States)

    2015-02-01


  10. The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology: Capabilities and Applications

    Science.gov (United States)

    Evers, Ken H.; Bachert, Robert F.

    1987-01-01

    The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology has been formulated and applied over a five-year period. It has proven to be a unique, integrated approach utilizing a top-down, structured technique to define and document the system of interest; a knowledge engineering technique to collect and organize system descriptive information; a rapid prototyping technique to perform preliminary system performance analysis; and a sophisticated simulation technique to perform in-depth system performance analysis.

  11. A hybrid model of QFD, SERVQUAL and KANO to increase bank's capabilities

    Directory of Open Access Journals (Sweden)

    Hasan Rajabi

    2012-10-01

    Full Text Available In the global market, factors such as pre-empting competitors in extending market share, promoting the quality of services and identifying customers' needs are important. This paper attempts to identify strategic services in one of the biggest governmental banks in Iran, called Melli bank, for gaining competitive merit using the combined Kano and SERVQUAL models, to improve operational quality and to provide suitable strategies. The primary question of this paper is how to introduce high quality services in this bank. The proposed model of this paper uses a hybrid of three quality-based methods, including the SERVQUAL, QFD and Kano models. The statistical population in this article is all clients and customers of Melli bank who use the bank's services; based on a random sampling method, 170 customers were selected. The study was held in Semnan, one of the provinces located in the western part of Iran. Research findings show that Melli bank's customers are dissatisfied with the quality of services; to solve this problem, the bank should restructure and put certain service characteristics at the head of its affairs to reach better operation. The characteristics include, in order of priority: the possibility of transferring money via point-of-sale terminals, the possibility of wireless POS, faster banking transactions, special benefits for customers who use electronic services, eliminating certain bank commissions, resolving problems such as system outages in the least time, the possibility of receiving foreign exchange from ATMs, and suitable parking in the city.
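
    The SERVQUAL part of such a hybrid reduces to gap scores (perception minus expectation) per service attribute, which are then prioritised; a minimal sketch with invented ratings follows. It illustrates the gap calculation only and is not the study's questionnaire or data.

```python
import numpy as np

# SERVQUAL-style gap analysis: perception minus expectation per attribute.
attributes = ["reliability", "responsiveness", "assurance", "empathy", "tangibles"]
expectation = np.array([6.5, 6.2, 6.0, 5.8, 5.5])   # illustrative 7-point ratings
perception  = np.array([5.1, 4.8, 5.4, 4.9, 5.2])

gap = perception - expectation                       # negative gap = dissatisfaction
for name, g in sorted(zip(attributes, gap), key=lambda p: p[1]):
    print(f"{name:15s} gap = {g:+.1f}")

# The most negative gaps would be carried into the QFD "house of quality"
# and weighted by Kano category to prioritise improvements.
```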

  12. DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-09-01

    Full Text Available To enable the design of large capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g., 3D stacking and multi-level cell (MLC) design, have been explored. The existing modeling tools, however, cover only a few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM) and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all of these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g., latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a given optimization target, etc. We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers. The latest source-code of DESTINY is available from the following git repository: https://bitbucket.org/sparshmittal/destinyv2.
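
    A typical use of such a tool is a design-space sweep that filters candidates by a constraint and ranks the rest by a target metric such as energy-delay product; the sketch below shows that pattern with made-up numbers. It does not call DESTINY, and the candidate figures are not its outputs.

```python
# Hypothetical candidate designs: (technology, latency_ns, energy_nJ, area_mm2).
candidates = [
    ("SRAM",    1.2, 0.8, 4.0),
    ("eDRAM",   2.0, 0.5, 2.5),
    ("STT-RAM", 3.5, 0.3, 1.8),
    ("ReRAM",   4.0, 0.2, 1.2),
    ("PCM",     6.0, 0.4, 1.0),
]

def edp(design):
    _, latency, energy, _ = design
    return energy * latency              # energy-delay product (lower is better)

area_budget = 2.0                        # mm^2, an assumed constraint
feasible = [d for d in candidates if d[3] <= area_budget]
best = min(feasible, key=edp)
print(f"best technology under {area_budget} mm^2 for EDP: {best[0]}")
```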

  13. Linear collider capabilities for supersymmetry in dark matter allowed regions of the mSUGRA model

    International Nuclear Information System (INIS)

    Baer, Howard; Belyaev, Alexander; Krupovnickas, Tadas; Tata, Xerxes

    2004-01-01

    Recent comparisons of minimal supergravity (mSUGRA) model predictions with WMAP measurements of the neutralino relic density point to preferred regions of model parameter space. We investigate the reach of linear colliders (LC) with √s = 0.5 and 1 TeV for SUSY in the framework of the mSUGRA model. We find that LCs can cover the entire stau co-annihilation region provided tan β ≲ 30. In the hyperbolic branch/focus point (HB/FP) region of parameter space, specialized cuts are suggested to increase the reach in this important 'dark matter allowed' area. In the case of the HB/FP region, the reach of a LC extends well past the reach of the CERN LHC. We examine a case study in the HB/FP region, and show that the MSSM parameters μ and M2 can be sufficiently well-measured to demonstrate that one would indeed be in the HB/FP region, where the lightest chargino and neutralino have a substantial higgsino component. (author)

  14. Flying Training Capacity Model: Initial Results

    National Research Council Canada - National Science Library

    Lynch, Susan

    2005-01-01

    OBJECTIVE: (1) Determine the flying training capacity for 6 bases: Sheppard AFB, Randolph AFB, Moody AFB, Columbus AFB, Laughlin AFB and Vance AFB. (2) Develop a versatile flying training capacity simulation model for AETC...

  15. A Performance Evaluation for IT/IS Implementation in Organisation: Preliminary New IT/IS Capability Evaluation (NICE Model

    Directory of Open Access Journals (Sweden)

    Hafez Salleh

    2011-12-01

    Full Text Available Most of the traditional IT/IS performance measures are based on productivity and process, which mainly focus on methods of investment appraisal. There is a need to produce alternative holistic measurement models that enable soft and hard issues to be measured qualitatively. A New IT/IS Capability Evaluation (NICE) framework has been designed to measure the capability of organisations to 'successfully implement IT systems', and it is applicable across industries. The idea is to provide managers with measurement tools to enable them to identify where improvements are required within their organisations and to indicate their readiness prior to IT investment. The NICE framework investigates four key organisational elements: IT, Environment, Process and People, and is composed of six progressive maturity stages through which a company can develop its IT/IS capabilities. For each maturity stage, the NICE framework describes a set of critical success factors that must be in place for the company to achieve that stage.

  16. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    Science.gov (United States)

    Iacobucci, Joseph V.

    problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of system architecture studies of different sizes. This was necessary since system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. The SEAD capability model outputs prompted the discovery a new issue, the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes, the use

  17. 3CE Methodology for Conducting a Modeling, Simulation, and Instrumentation Tool Capability Analysis

    Science.gov (United States)

    2010-05-01

    ...a modeling, simulation, and instrumentation (MS&I) environment. This methodology uses the DoDAF product set to document operational and systems...engineering process were identified and resolved, such as duplication of data elements derived from DoDAF operational and system views used to

  18. QMU as an approach to strengthening the predictive capabilities of complex models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne.; Boggs, Paul T.; Grace, Matthew D.

    2010-09-01

    Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and relative departure from the classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a sort of check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems, (i.e. the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third
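
    At its core, QMU compares a performance margin M against its uncertainty U; the sketch below shows the usual confidence-ratio check. The function name and numbers are illustrative assumptions, unrelated to the cyber systems studied in the report.

```python
def qmu_confidence(requirement, best_estimate, uncertainty):
    """QMU-style check: the margin M is the distance between the best-estimate
    performance and the requirement; U is the uncertainty in that estimate.
    A confidence ratio M/U > 1 indicates the requirement is met with margin."""
    margin = best_estimate - requirement
    return margin / uncertainty

# Illustrative numbers for a throughput-like requirement (not from the report)
ratio = qmu_confidence(requirement=100.0, best_estimate=130.0, uncertainty=20.0)
print(f"M/U = {ratio:.1f} -> {'met with margin' if ratio > 1 else 'at risk'}")
```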

  19. Dynamic capabilities and innovation: a Multiple-Case Study

    OpenAIRE

    Bravo Ibarra, Edna Rocío; Mundet Hiern, Joan; Suñé Torrents, Albert

    2009-01-01

    After a detailed survey of the scientific literature, it was found that several characteristics of dynamic capabilities were similar to those of innovation capability. Therefore, with a deeper study of the first ones, it could be possible to design a model aimed to structure innovation capability. Thus, this work presents a conceptual model, where the innovation capability is shown as result of three processes: knowledge absorption and creation capability, knowledge integration and knowledge ...

  20. BUSINESS MODELS FOR EXTENDING OF 112 EMERGENCY CALL CENTER CAPABILITIES WITH E-CALL FUNCTION INSERTION

    Directory of Open Access Journals (Sweden)

    Pop Dragos Paul

    2010-12-01

    Full Text Available The present article concerns the present status of implementation in Romania and Europe of the eCall service and the proposed business models regarding eCall function implementation in Romania. The eCall system is used for reliable transmission, in case of a crash, between the In-Vehicle System and the Public Service Answering Point, via the voice channel of cellular networks and the Public Switched Telephone Network (PSTN). The eCall service can be initiated automatically or manually by the driver. All data presented in this article are part of research carried out by the authors in the Sectorial Contract implementation study regarding the eCall system, having as partners ITS Romania and Electronic Solution, with the Romanian Ministry of Communication and Information Technology as beneficiary.

  1. Evaluation of remote-sensing-based rainfall products through predictive capability in hydrological runoff modelling

    DEFF Research Database (Denmark)

    Stisen, Simon; Sandholt, Inge

    2010-01-01

    SRFEs, Climate Prediction Center MORPHing technique (CMORPH), Tropical Rainfall Measuring Mission (TRMM) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). The best performing SRFE, CPC-FEWS, produced good results with values of R2NS between 0...

  2. Defining a Simulation Capability Hierarchy for the Modeling of a SeaBase Enabler (SBE)

    Science.gov (United States)

    2010-09-01

    ability to maintain the sea lanes of communication. Relief efforts in crisis-stricken countries like India in 2007, Aceh Indonesia and Sri Lanka in...the number of entities that were built into the scenario run for each category. [Table: Advanced Scenario Results (Speed, Cargo Rate, Escorts, SURF)]

  3. ENTREPRENEURIAL CAPABILITIES

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard; Nielsen, Thorkild

    2003-01-01

    The aim of this article is to analyse entrepreneurship from an action research perspective. What is entrepreneurship about? What are the fundamental capabilities and processes of entrepreneurship? To answer these questions the article includes a case study of a Danish entrepreneur and his networ....... Finally, the article discusses how more long-term action research methods could be integrated into the entrepreneurial processes and the possible impacts of such an implementation...

  4. Validation of foF2 and TEC Modeling During Geomagnetic Disturbed Times: Preliminary Outcomes of International Forum for Space Weather Modeling Capabilities Assessment

    Science.gov (United States)

    Shim, J. S.; Tsagouri, I.; Goncharenko, L. P.; Kuznetsova, M. M.

    2017-12-01

    To address challenges of assessment of space weather modeling capabilities, the CCMC (Community Coordinated Modeling Center) is leading the newly established "International Forum for Space Weather Modeling Capabilities Assessment." This presentation will focus on preliminary outcomes of the International Forum on validation of modeled foF2 and TEC during geomagnetic storms. We investigate the ionospheric response to the March 2013 geomagnetic storm event using ionosonde and GPS TEC observations in the North American and European sectors. To quantify storm impacts on foF2 and TEC, we first quantify quiet-time variations of foF2 and TEC (e.g., the median and the average of the five quietest days for the 30 days during quiet conditions). It appears that the quiet-time variations of foF2 and TEC are about 10% and 20-30%, respectively. Therefore, to quantify storm impact, we focus on foF2 and TEC changes during the storm main phase larger than 20% and 50%, respectively, compared to the 30-day median. We find that in the European sector, the response of both foF2 and TEC to the storm is mainly a positive phase, with foF2 increases of up to 100% and TEC increases of up to 150%. In the North American sector, however, foF2 shows negative effects (up to about a 50% decrease), while TEC shows a positive response (the largest increase is about 200%). To assess the capability of models to reproduce the changes of foF2 and TEC due to the storm, we use various model simulations obtained from empirical, physics-based, and data assimilation models. The performance of each model depends on the selected metric; therefore, a single metric is not enough to evaluate the models' predictive capabilities in capturing the storm impact. The performance of the models also varies with latitude and longitude.
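
    As a rough illustration of the thresholding described above, the short Python sketch below computes the percentage change of TEC relative to a 30-day quiet-time median and flags changes exceeding the 50% criterion; the array values and the median are hypothetical, not data from the study.

      import numpy as np

      def storm_change(series, quiet_median, threshold_pct):
          """Percent change relative to a quiet-time median, plus a flag for
          changes whose magnitude exceeds the chosen threshold."""
          change_pct = 100.0 * (series - quiet_median) / quiet_median
          return change_pct, np.abs(change_pct) > threshold_pct

      tec_storm = np.array([18.0, 22.5, 30.1, 41.3, 25.7])   # hypothetical TEC values (TECU)
      tec_quiet_median = 20.0                                 # assumed 30-day quiet-time median

      change, flagged = storm_change(tec_storm, tec_quiet_median, threshold_pct=50.0)
      print(np.round(change, 1))   # roughly [-10, 12.5, 50.5, 106.5, 28.5] percent
      print(flagged)               # only the changes beyond 50 % count as storm impact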

  5. Graphical interpretation of numerical model results

    International Nuclear Information System (INIS)

    Drewes, D.R.

    1979-01-01

    Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements

  6. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, {theta}, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for {theta} may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of {theta} and, in particular, how there will always be a region around {theta} = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that {theta} is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
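
    A minimal sketch of the Beta-binomial updating this abstract describes is given below; the counts, the uniform prior, and the 0.5 decision threshold are illustrative assumptions rather than the paper's actual choices.

      from scipy import stats

      n, k = 20, 14                    # hypothetical: new code wins 14 of 20 paired comparisons
      alpha0, beta0 = 1.0, 1.0         # uniform Beta(1, 1) prior on theta

      posterior = stats.beta(alpha0 + k, beta0 + (n - k))

      # Posterior probability that the new code is better more often than not
      p_better = 1.0 - posterior.cdf(0.5)
      print(f"P(theta > 1/2 | data) = {p_better:.3f}")
      print(f"posterior standard deviation (plan-B metric) = {posterior.std():.3f}")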

  7. Ignalina NPP Safety Analysis: Models and Results

    International Nuclear Information System (INIS)

    Uspuras, E.

    1999-01-01

    Research directions of the scientific safety analysis group, linked to safety assessment of the Ignalina NPP, are presented: thermal-hydraulic analysis of accidents and operational transients; thermal-hydraulic assessment of the Ignalina NPP Accident Localization System and other compartments; structural analysis of plant components, piping and other parts of the Main Circulation Circuit; assessment of the RBMK-1500 reactor core; and others. Models and the main work carried out last year are described. (author)

  8. Modelling and Assessment of the Capabilities of a Supermarket Refrigeration System for the Provision of Regulating Power

    DEFF Research Database (Denmark)

    O'Connell, Niamh; Madsen, Henrik; Pinson, Pierre

    is found to have time constants at 10 and 0.12 hours, indicating the potential for the system to provide flexibility in both the long- and short-term. Direct- and indirect-control architectures are employed to simulate the demand response attainable from the refrigeration system. A number of complexities......This report presents an analysis of the demand response capabilities of a supermarket refrigeration system, with a particular focus on the suitability of this resource for participation in the regulating power market. An ARMAX model of the system is identified from experimental data, and the model...... are revealed that would complicate the task of devising bids on a conventional power market. These complexities are incurred due to the physical characteristics and constraints of the system as well as the particular characteristics of the control frameworks employed. Simulations considering the provision

  9. Performance evaluation of the technical capabilities of DOE sites for disposal of mixed low-level waste. Volume 2: Technical basis and discussion of results

    International Nuclear Information System (INIS)

    Waters, R.D.; Gruebel, M.M.; Hospelhorn, M.B.

    1996-03-01

    A team of analysts designed and conducted a performance evaluation to estimate the technical capabilities of fifteen Department of Energy sites for disposal of mixed low-level waste (i.e., waste that contains both low-level radioactive materials and hazardous constituents). Volume 1 summarizes the process for selecting the fifteen sites, the methodology used in the evaluation, and the conclusions derived from the evaluation. Volume 2 first describes the screening process used to determine the sites to be considered in the PEs. This volume then provides the technical details of the methodology for conducting the performance evaluations. It also provides a comparison and analysis of the overall results for all sites that were evaluated. Volume 3 contains detailed evaluations of the fifteen sites and discussions of the results for each site

  10. Microplasticity of MMC. Experimental results and modelling

    International Nuclear Information System (INIS)

    Maire, E.; Lormand, G.; Gobin, P.F.; Fougeres, R.

    1993-01-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of Tresca's shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: inclusion, unaffected matrix, and matrix surrounding the inclusion having a gradient in the density of the thermally induced dislocations. (orig.)

  11. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of Tresca's shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: inclusion, unaffected matrix, and matrix surrounding the inclusion having a gradient in the density of the thermally induced dislocations. (orig.).

  12. Nitrate reduction in geologically heterogeneous catchments — A framework for assessing the scale of predictive capability of hydrological models

    International Nuclear Information System (INIS)

    Refsgaard, Jens Christian; Auken, Esben; Bamberg, Charlotte A.; Christensen, Britt S.B.; Clausen, Thomas; Dalgaard, Esben; Effersø, Flemming; Ernstsen, Vibeke; Gertz, Flemming; Hansen, Anne Lausten; He, Xin; Jacobsen, Brian H.; Jensen, Karsten Høgh; Jørgensen, Flemming; Jørgensen, Lisbeth Flindt; Koch, Julian; Nilsson, Bertel; Petersen, Christian; De Schepper, Guillaume; Schamper, Cyril

    2014-01-01

    In order to fulfil the requirements of the EU Water Framework Directive nitrate load from agricultural areas to surface water in Denmark needs to be reduced by about 40%. The regulations imposed until now have been uniform, i.e. the same restrictions for all areas independent of the subsurface conditions. Studies have shown that on a national basis about 2/3 of the nitrate leaching from the root zone is reduced naturally, through denitrification, in the subsurface before reaching the streams. Therefore, it is more cost-effective to identify robust areas, where nitrate leaching through the root zone is reduced in the saturated zone before reaching the streams, and vulnerable areas, where no subsurface reduction takes place, and then only impose regulations/restrictions on the vulnerable areas. Distributed hydrological models can make predictions at grid scale, i.e. at much smaller scale than the entire catchment. However, as distributed models often do not include local scale hydrogeological heterogeneities, they are typically not able to make accurate predictions at scales smaller than they are calibrated. We present a framework for assessing nitrate reduction in the subsurface and for assessing at which spatial scales modelling tools have predictive capabilities. A new instrument has been developed for airborne geophysical measurements, Mini-SkyTEM, dedicated to identifying geological structures and heterogeneities with horizontal and lateral resolutions of 30–50 m and 2 m, respectively, in the upper 30 m. The geological heterogeneity and uncertainty are further analysed by use of the geostatistical software TProGS by generating stochastic geological realisations that are soft conditioned against the geophysical data. Finally, the flow paths within the catchment are simulated by use of the MIKE SHE hydrological modelling system for each of the geological models generated by TProGS and the prediction uncertainty is characterised by the variance between the
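
    The final step, characterising prediction uncertainty by the spread between simulations run on different stochastic geological realisations, can be pictured with the small sketch below; the arrays are synthetic stand-ins for TProGS realisations fed through MIKE SHE, not results from the study.

      import numpy as np

      # Hypothetical nitrate-reduction fractions predicted on the same grid for
      # 20 stochastic geological realisations (rows) and 500 grid cells (columns)
      rng = np.random.default_rng(1)
      predictions = rng.uniform(0.3, 0.9, size=(20, 500))

      cell_mean = predictions.mean(axis=0)    # best estimate per grid cell
      cell_var = predictions.var(axis=0)      # prediction uncertainty per grid cell

      # Cells where the realisations disagree most are the least robust for regulation
      least_robust = np.argsort(cell_var)[-10:]
      print(cell_mean[:3], cell_var[:3], least_robust)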

  13. Nitrate reduction in geologically heterogeneous catchments — A framework for assessing the scale of predictive capability of hydrological models

    Energy Technology Data Exchange (ETDEWEB)

    Refsgaard, Jens Christian, E-mail: jcr@geus.dk [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Auken, Esben [Department of Earth Sciences, Aarhus University (Denmark); Bamberg, Charlotte A. [City of Aarhus (Denmark); Christensen, Britt S.B. [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Clausen, Thomas [DHI, Hørsholm (Denmark); Dalgaard, Esben [Department of Earth Sciences, Aarhus University (Denmark); Effersø, Flemming [SkyTEM Aps, Beder (Denmark); Ernstsen, Vibeke [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Gertz, Flemming [Knowledge Center for Agriculture, Skejby (Denmark); Hansen, Anne Lausten [Department of Geosciences and Natural Resource Management, University of Copenhagen (Denmark); He, Xin [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Jacobsen, Brian H. [Department of Food and Resource Economics, University of Copenhagen (Denmark); Jensen, Karsten Høgh [Department of Geosciences and Natural Resource Management, University of Copenhagen (Denmark); Jørgensen, Flemming; Jørgensen, Lisbeth Flindt [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Koch, Julian [Department of Geosciences and Natural Resource Management, University of Copenhagen (Denmark); Nilsson, Bertel [Geological Survey of Denmark and Greenland (GEUS) (Denmark); Petersen, Christian [City of Odder (Denmark); De Schepper, Guillaume [Université Laval, Québec (Canada); Schamper, Cyril [Department of Earth Sciences, Aarhus University (Denmark); and others

    2014-01-01

    In order to fulfil the requirements of the EU Water Framework Directive nitrate load from agricultural areas to surface water in Denmark needs to be reduced by about 40%. The regulations imposed until now have been uniform, i.e. the same restrictions for all areas independent of the subsurface conditions. Studies have shown that on a national basis about 2/3 of the nitrate leaching from the root zone is reduced naturally, through denitrification, in the subsurface before reaching the streams. Therefore, it is more cost-effective to identify robust areas, where nitrate leaching through the root zone is reduced in the saturated zone before reaching the streams, and vulnerable areas, where no subsurface reduction takes place, and then only impose regulations/restrictions on the vulnerable areas. Distributed hydrological models can make predictions at grid scale, i.e. at much smaller scale than the entire catchment. However, as distributed models often do not include local scale hydrogeological heterogeneities, they are typically not able to make accurate predictions at scales smaller than they are calibrated. We present a framework for assessing nitrate reduction in the subsurface and for assessing at which spatial scales modelling tools have predictive capabilities. A new instrument has been developed for airborne geophysical measurements, Mini-SkyTEM, dedicated to identifying geological structures and heterogeneities with horizontal and lateral resolutions of 30–50 m and 2 m, respectively, in the upper 30 m. The geological heterogeneity and uncertainty are further analysed by use of the geostatistical software TProGS by generating stochastic geological realisations that are soft conditioned against the geophysical data. Finally, the flow paths within the catchment are simulated by use of the MIKE SHE hydrological modelling system for each of the geological models generated by TProGS and the prediction uncertainty is characterised by the variance between the

  14. Application of data assimilation to improve the forecasting capability of an atmospheric dispersion model for a radioactive plume

    International Nuclear Information System (INIS)

    Jeong, H.J.; Han, M.H.; Hwang, W.T.; Kim, E.H.

    2008-01-01

    Modeling the atmospheric dispersion of a radioactive plume plays an influential role in assessing the environmental impacts caused by nuclear accidents. The performance of data assimilation techniques that combine Gaussian model outputs and measurements to improve forecasting ability is investigated in this study. Tracer dispersion experiments are performed to produce field data by assuming a radiological emergency. An adaptive neuro-fuzzy inference system (ANFIS) and a linear regression filter are considered to assimilate the Gaussian model outputs with measurements. ANFIS is trained so that the model outputs are likely to be more accurate for the experimental data. The linear regression filter is designed to assimilate measurements similarly to the ANFIS. It is confirmed that ANFIS could be an appropriate method for improving the forecasting capability of an atmospheric dispersion model in the case of a radiological emergency, judging from the higher correlation coefficients between the measured and assimilated values than those obtained with the linear regression filter. This kind of data assimilation method could support a decision-making system when deciding on the best available countermeasures for public health from among emergency preparedness alternatives.
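
    The study compares correlation coefficients obtained with ANFIS and with a linear regression filter; the sketch below only illustrates the regression-filter correction step itself, using synthetic numbers in place of the Gaussian-model outputs and tracer measurements.

      import numpy as np

      rng = np.random.default_rng(0)
      model_out = rng.uniform(1.0, 10.0, 50)                       # stand-in Gaussian-model outputs
      measured = 0.6 * model_out + 2.0 + rng.normal(0.0, 0.8, 50)  # synthetic "measurements"

      # Linear-regression filter: correct the model output toward the measurements
      slope, intercept = np.polyfit(model_out, measured, deg=1)
      assimilated = slope * model_out + intercept

      rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
      print(f"RMSE, raw model    : {rmse(model_out, measured):.2f}")
      print(f"RMSE, after filter : {rmse(assimilated, measured):.2f}")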

  15. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    Science.gov (United States)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy between the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data for 2010 were used for model validation, utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps for 2020 and 2030 were created. The validation findings confirm that the integration of the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process have resulted in the improved simulation capability of the CA-MC model. This study has provided a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
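
    The Kappa-based validation step mentioned above compares a simulated map against the actual 2010 map; a compact illustration with toy two-class pixel arrays (not the Seremban data) could look like this:

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      # Toy categorical maps flattened to 1-D: 0 = non-urban, 1 = urban
      actual_2010 = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
      simulated_2010 = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])

      kappa = cohen_kappa_score(actual_2010, simulated_2010)
      overall_agreement = float(np.mean(actual_2010 == simulated_2010))
      print(f"overall agreement = {overall_agreement:.2f}, Kappa = {kappa:.2f}")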

  16. Modeling resources and capabilities in enterprise architecture: A well-founded ontology-based proposal for ArchiMate

    NARCIS (Netherlands)

    Azevedo, Carlos; Azevedo, Carlos L.B.; Iacob, Maria Eugenia; Andrade Almeida, João; van Sinderen, Marten J.; Ferreira Pires, Luis; Guizzardi, Giancarlo

    2015-01-01

    The importance of capabilities and resources for portfolio management and business strategy has been recognized in the management literature. Despite that, little attention has been given to integrate the notions of capabilities and resources in enterprise architecture descriptions. One notable

  17. Modeling and Control System Design for an Integrated Solar Generation and Energy Storage System with a Ride-Through Capability: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.; Yue, M.; Muljadi, E.

    2012-09-01

    This paper presents a generic approach to PV panel modeling. Data for this modeling can be easily obtained from the manufacturer's datasheet, which provides a convenient way for researchers and engineers to investigate PV integration issues. A two-stage power conversion system (PCS) is adopted in this paper for the PV generation system, and a Battery Energy Storage System (BESS) can be connected to the dc link through a bi-directional dc/dc converter. In this way, the BESS can provide some of the ancillary services that may be required in a high-penetration PV generation scenario. This paper focuses specifically on the fault ride-through (FRT) capability. The integrated BESS and PV generation system, together with the associated control systems, is modeled on the PSCAD and Matlab platforms, and the effectiveness of the controller is validated by the simulation results.

  18. A JOINT VENTURE MODEL FOR ASSESSMENT OF PARTNER CAPABILITIES: THE CASE OF ESKOM ENTERPRISES AND THE AFRICAN POWER SECTOR

    Directory of Open Access Journals (Sweden)

    Y.V. Soni

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This article investigates the concept of joint ventures in the international energy sector and develops a joint venture model, as a business development and assessment tool. The joint venture model presents a systematic method that relies on modern business intelligence to assess a potential business venture by using a balanced score card technique to screen potential partners, based on their technological and financial core capabilities. The model can be used by business development managers to harness the potential of joint ventures to create economic growth and sustainable business expansion. Furthermore, partnerships with local companies can help to mitigate econo-political risk, and facilitate buy-in from the national governments that are normally the primary stakeholders in the energy sector ventures (directly or indirectly). The particular case of Eskom Enterprises (Pty) Ltd, a wholly owned subsidiary of Eskom, is highlighted.

    AFRIKAANSE OPSOMMING (English translation): This article investigates the concept of the joint venture in the international energy sector and develops a joint-venture model as a business development and assessment tool. The joint-venture model offers a systematic method that relies on modern business intelligence to assess a potential business venture on the basis of its technological and financial core capabilities, using a balanced scorecard technique. The model can be used by business development managers to harness the potential of joint ventures to create economic growth and sustainable business expansion. Furthermore, partnerships with local companies can help to reduce economic risk and facilitate buy-in from the national governments that are normally the primary stakeholders in energy-sector ventures (whether directly or indirectly). The particular case of Eskom Enterprises (Pty) Ltd, a wholly owned subsidiary of Eskom

  19. Demonstration of the Recent Additions in Modeling Capabilities for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-03-01

    WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion (WEC) devices. The code uses the MATLAB SimMechanics package to solve the multi-body dynamics and models the wave interactions using hydrodynamic coefficients derived from frequency domain boundary element methods. In this paper, the new modeling features introduced in the latest release of WEC-Sim will be presented. The first feature discussed is the conversion of the fluid memory kernel to a state-space approximation that provides significant gains in computational speed. The benefit of the state-space calculation becomes even greater after the hydrodynamic body-to-body coefficients are introduced, as the number of interactions increases exponentially with the number of floating bodies. The final feature discussed is the capability to add Morison elements to provide additional hydrodynamic damping and inertia. This is generally used as a tuning feature, because performance is highly dependent on the chosen coefficients. In this paper, a review of the hydrodynamic theory for each of the features is provided and successful implementation is verified using test cases.
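
    The state-space replacement of the fluid-memory (radiation) convolution can be sketched as below; the A, B, C matrices and time step are placeholders standing in for a fitted realisation, not WEC-Sim output.

      import numpy as np

      # Placeholder state-space realisation (A, B, C) standing in for a fitted
      # approximation of the radiation impulse-response kernel K(t)
      A = np.array([[-2.0, -1.5],
                    [ 1.0,  0.0]])
      B = np.array([[1.0],
                    [0.0]])
      C = np.array([[0.5, 0.3]])
      dt = 0.01   # simulation time step (s)

      def radiation_force(velocities):
          """March the state-space system once per time step instead of
          re-evaluating the convolution of K(t) with the velocity history."""
          x = np.zeros((2, 1))
          out = []
          for v in velocities:
              x = x + dt * (A @ x + B * v)    # explicit Euler update of the states
              out.append((C @ x).item())
          return out

      print(radiation_force([0.0, 0.1, 0.2, 0.15]))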

  20. BWR modeling capability and Scale/Triton lattice-to-core integration of the Nestle nodal simulator - 331

    International Nuclear Information System (INIS)

    Galloway, J.; Hernandez, H.; Maldonado, G.I.; Jessee, M.; Popov, E.; Clarno, K.

    2010-01-01

    This article reports the status of recent and substantial enhancements made to the NESTLE nodal core simulator, a code originally developed at North Carolina State University (NCSU), of which version 5.2.1 has been available for several years through the Oak Ridge National Laboratory (ORNL) Radiation Safety Information Computational Center (RSICC) software repository. In its released and available form, NESTLE is a seasoned, well-developed and extensively tested code system particularly useful for modeling PWRs. In collaboration with NCSU, University of Tennessee (UT) and ORNL researchers have recently developed new enhancements for the NESTLE code, including the implementation of a two-phase drift-flux thermal hydraulic and flow redistribution model to facilitate modeling of Boiling Water Reactors (BWRs), as well as the development of an integrated coupling of SCALE/TRITON lattice physics to NESTLE so as to produce an end-to-end capability for reactor simulations. These latest advancements implemented in NESTLE, as well as an update on other ongoing efforts of this project, are reported herein. (authors)

  1. Design and modeling of an autonomous multi-link snake robot, capable of 3D-motion

    Directory of Open Access Journals (Sweden)

    Rizkallah Rabel

    2016-01-01

    Full Text Available The paper presents the design of an autonomous, wheelless, mechanical snake robot that was modeled and built at Notre Dame University – Louaize. The robot is also capable of 3D motion, with an ability to climb in the z-direction. The snake is made of a series of links, each containing one to three high-torque DC motors and a gearing system. They are connected to each other through hollow aluminum rods that can be rotated through a 180° span. This allows the snake to move in various environments, including unfriendly and cluttered ones. The front link has a proximity sensor used to map the environment. This mapping is sent to a microcontroller which controls and adapts the motion pattern of the snake. The snake can therefore choose to avoid obstacles, or climb over them if their height is within its range. The presented model is made of five links, but this number can be increased as their role is repetitive. The novel design is meant to overcome previous limitations by allowing 3D motion through electric actuators and low energy consumption.

  2. CASL Dakota Capabilities Summary

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Simmons, Chris [Univ. of Texas, Austin, TX (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-10

    The Dakota software project serves the mission of Sandia National Laboratories and supports a worldwide user community by delivering state-of-the-art research and robust, usable software for optimization and uncertainty quantification. These capabilities enable advanced exploration and risk-informed prediction with a wide range of computational science and engineering models. Dakota is the verification and validation (V&V) / uncertainty quantification (UQ) software delivery vehicle for CASL, allowing analysts across focus areas to apply these capabilities to myriad nuclear engineering analyses.

  3. Career Progression Impact on Active and Reserve Component Civil Affairs Officer and Enlisted Soldiers as a Result of a Civil Affairs Capabilities Restructuring within the Army

    National Research Council Canada - National Science Library

    Edmonds, Johnnie

    2004-01-01

    .... Since 96 percent of the Army's Civil Affairs structure and capabilities reside in the Army Reserve, the problems of access, operational tempo, and responsiveness have created a new requirement...

  4. Engineering Capabilities and Partnerships

    Science.gov (United States)

    Poulos, Steve

    2010-01-01

    This slide presentation reviews the engineering capabilities at Johnson Space Center. The presentation also reviews the partnerships that have resulted in successfully designed and developed projects that involved commercial and educational institutions.

  5. Results of AN Evaluation of the Orchestration Capabilities of the Zoo Project and the 52° North Framework for AN Intelligent Geoportal

    Science.gov (United States)

    Rautenbach, V.; Coetzee, S.; Strzelecki, M.; Iwaniak, A.

    2012-07-01

    The aim of a spatial data infrastructure (SDI) is to make data available for the economic and societal benefit of a wide audience. A geoportal typically provides access to spatial data and associated web services in an SDI, facilitating the discovery, display, editing and analysis of data. In contrast, a spatial information infrastructure (SII) should provide access to information, i.e. data that has been processed, organized and presented so as to be useful. Thematic maps are an example of the representation of spatial information. An SII geoportal requires intelligence to orchestrate (automatically coordinate) web services that prepare, discover and present information, instead of data, to the user. We call this an intelligent geoportal. The Open Geospatial Consortium's Web Processing Service (WPS) standard provides the rules for describing the input and output of any type of spatial process. In this paper we present the results of an evaluation of two orchestration platforms: the 52° North framework and the ZOO project. We evaluated the frameworks' orchestration capabilities for producing thematic maps. Results of the evaluation show that both frameworks have the potential to facilitate orchestration in an intelligent geoportal, but that some functionality is still lacking, notably semantic information and framework usability; these limitations create barriers to the widespread use of the frameworks. Before the frameworks can be used for advanced orchestration, these limitations need to be addressed. The results of our evaluation of these frameworks, both with their respective strengths and weaknesses, can guide developers to choose the framework best suited to their specific needs.

  6. Effects of nursing intervention models on social adaption capability development in preschool children with malignant tumors: a randomized control trial.

    Science.gov (United States)

    Yu, Lu; Mo, Lin; Tang, Yan; Huang, Xiaoyan; Tan, Juan

    2014-06-01

    The objectives of this study are to compare the effects of two nursing intervention models on the ability of preschool children with malignant tumors to socialize and to determine if these interventions improved their social adaption capability (SAC) and quality of life. Inpatient preschool children with malignant tumors admitted to the hospital between December 2009 and March 2012 were recruited and randomized into either the experimental or the control group. The control group received routine nursing care, and the experimental group received family-centered nursing care, including physical, psychological, and social interventions. The Infants-Junior Middle School Student's Social-Life Abilities Scale was used to evaluate the SAC development of participants. Participants (n = 240) were recruited and randomized into two groups. After the intervention, the excellent and normal SAC rates were 27.5% and 55% in the experimental group, respectively, compared with 2.5% and 32.5% in the control group. After the intervention, SAC in the experimental group was improved compared with before the intervention (54.68 ± 10.85 vs 79.9 ± 22.3), whereas no significant change was found in the control group (54.70 ± 11.47 vs. 52 ± 15.8, p = 0.38). The family-centered nursing care model that included physical, psychological, and social interventions improved the SAC of children with malignancies compared with children receiving routine nursing care. Establishing a standardized family-school-community-hospital hierarchical multi-management intervention model for children is important to the efficacy of long-term interventions and to the improvement of SAC of children with malignancies. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 1: Concepts and methodology

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available A comprehensive data driven modeling experiment is presented in a two-part paper. In this first part, an extensive data-driven modeling experiment is proposed. The most important concerns regarding the way data driven modeling (DDM) techniques and data were handled, compared, and evaluated, and the basis on which findings and conclusions were drawn are discussed. A concise review of key articles that presented comparisons among various DDM techniques is presented. Six DDM techniques, namely, neural networks, genetic programming, evolutionary polynomial regression, support vector machines, M5 model trees, and K-nearest neighbors are proposed and explained. Multiple linear regression and naïve models are also suggested as baseline for comparison with the various techniques. Five datasets from Canada and Europe representing evapotranspiration, upper and lower layer soil moisture content, and rainfall-runoff process are described and proposed, in the second paper, for the modeling experiment. Twelve different realizations (groups) from each dataset are created by a procedure involving random sampling. Each group contains three subsets: training, cross-validation, and testing. Each modeling technique is proposed to be applied to each of the 12 groups of each dataset. This way, both prediction accuracy and uncertainty of the modeling techniques can be evaluated. The description of the datasets, the implementation of the modeling techniques, results and analysis, and the findings of the modeling experiment are deferred to the second part of this paper.
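
    A schematic of the resampling design described above (12 random groups, each split into training, cross-validation, and testing subsets) is sketched below; the 50/25/25 split fractions and the record count are assumptions for illustration, not the paper's settings.

      import numpy as np

      def make_groups(n_records, n_groups=12, fractions=(0.5, 0.25, 0.25), seed=0):
          """Return (train, cross-validation, test) index arrays for each random
          group; the split fractions here are illustrative assumptions."""
          rng = np.random.default_rng(seed)
          groups = []
          for _ in range(n_groups):
              idx = rng.permutation(n_records)
              n_train = int(fractions[0] * n_records)
              n_cv = int(fractions[1] * n_records)
              groups.append((idx[:n_train], idx[n_train:n_train + n_cv], idx[n_train + n_cv:]))
          return groups

      groups = make_groups(n_records=1000)
      print(len(groups), [len(part) for part in groups[0]])   # 12 groups, 500/250/250 records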

  8. A Model of Entrepreneurial Capability Based on a Holistic Review of the Literature from Three Academic Domains

    Science.gov (United States)

    Lewis, Hilary

    2011-01-01

    While there has been a noted variation in the "species" of entrepreneur so that no single list of traits, characteristics or attributes is definitive, it is posited that to be an entrepreneur a certain amount of entrepreneurial capability is required. "Entrepreneurial capability" is a concept developed to place some form of…

  9. SU-E-T-134: Assessing the Capabilities of An MU Model for Fields as Small as 2cm in a Passively Scattered Proton Beam

    International Nuclear Information System (INIS)

    Simpson, R; Ghebremedhin, A; Gordon, I; Patyal, B

    2015-01-01

    Purpose: To assess and expand the capabilities of the current MU model for a passively scattered proton beam. The expanded MU model can potentially be used to predict the dose/MU for fields smaller than 2cm in diameter and reduce time needed for physical calibrations. Methods: The current MU model accurately predicted the dose/MU for more than 800 fields when compared to physical patient calibrations. Three different ion chambers were used in a Plastic Water phantom for physical measurements: T1, PIN, and A-16. The original MU model predicted output for fields that were affected by the bolus gap factor (BGF) and nozzle extension factor (NEF). As the system was tested for smaller treatment fields, the mod wheel dependent field size factor (MWDFSF) had to be included to describe the changes observed in treatment fields smaller than 3cm. The expanded model used Clarkson integration to determine the appropriate value for each factor (field size factor (FSF), BGF, NEF, and MWDFSF), to accurately predict the dose/MU for fields smaller than 2.5cm in effective diameter. Results: The expanded MU model demonstrated agreement better than 2% for more than 800 physical calibrations that were tested. The minimum tested fields were 1.7cm effective diameter for 149MeV and 2.4cm effective diameter for 186MeV. The inclusion of Clarkson integration into the MU model enabled accurate prediction of the dose/MU for very small and irregularly shaped treatment fields. Conclusion: The MU model accurately predicted the dose/MU for a wide range of treatment fields used in the clinic. The original MU model has been refined using factors that were problematic to accurately predict the dose/MU: the BGF, NEF, and MWDFSF. The MU model has minimized the time for determining dose/MU and reduced the time needed for physical calibrations, improving the efficiency of the patient treatment process

  10. Power Capability Investigation Based on Electrothermal Models of Press-pack IGBT Three-Level NPC and ANPC VSCs for Multimegawatt Wind Turbines

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Helle, Lars; Munk-Nielsen, Stig

    2012-01-01

    to provide reactive power support as an ancillary service. For multimegawatt full-scale wind turbines, power capability depends on converter topology and semiconductor switch technology. As power capability limiting factors, switch current, semiconductor junction temperature, and converter output voltage...... are addressed in this study for the three-level neutral-point-clamped voltage source converter (3L-NPC-VSC) and 3L Active NPC VSC (3L-ANPC-VSC) with press-pack insulated gate bipolar transistors employed as a grid-side converter. In order to investigate these VSCs' power capabilities under various operating...... conditions with respect to these limiting factors, a power capability generation algorithm based on the converter electrothermal model is developed. Built considering the VSCs' operation principles and physical structure, the model is validated by a 2 MV·A single-phase 3L-ANPC-VSC test setup. The power...

  11. Comparison of model results obtained with several European regional air quality models

    NARCIS (Netherlands)

    Hass, H.; Builtjes, P.J.H.; Simpson, D.; Stern, R.

    1997-01-01

    An intercomparison study has been performed with four photo-oxidant dispersion models (EMEP, EURAD, LOTOS and REM3) which are currently capable of performing photo-oxidant formation calculations over larger parts of Europe. The models, in principle, were run in the mode in which they are normally

  12. State, space relay modeling and simulation using the electromagnetic Transients Program and its transient analysis of control systems capability

    International Nuclear Information System (INIS)

    Domijan, A.D. Jr.; Emami, M.V.

    1990-01-01

    This paper reports on a simulation of an MHO distance relay developed to study the effect of its operation under various system conditions. The simulation is accomplished using a state-space approach and a modeling technique based on the ElectroMagnetic Transients Program's Transient Analysis of Control Systems capability. Furthermore, simulation results are compared with those obtained in another independent study, as a control, to validate the results. A data code for the practical utilization of this simulation is given.

  13. Capability and deficiency of the simplified model for energy calculation of commercial buildings in the Brazilian regulation

    NARCIS (Netherlands)

    Melo, A.P.; Lamberts, R.; Costola, D.; Hensen, J.L.M.

    2011-01-01

    This paper provides a preliminary assessment on the accuracy of the Brazilian regulation simplified model for commercial buildings. The first step was to compare its results with BESTEST. The study presents a straightforward approach to apply the BESTEST in other climates than the original one

  14. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  15. The influence of generation mix on the wind integrating capability of North China power grids: A modeling interpretation and potential solutions

    International Nuclear Information System (INIS)

    Yu Dayang; Zhang Bo; Liang Jun; Han Xueshan

    2011-01-01

    The large-scale wind power development in China has reached a bottleneck of grid integrating capability. As a result, excess wind electricity has to be rejected in the nighttime low demand hours, when the wind power is ramping up. To compensate for the fluctuation of wind power, new coal-fired power plants are being constructed along with the big wind projects in the North China grids. This study analyzed why adding coal-fired generation cannot remove the bottleneck of wind integration by modeling the operating problem of the wind integration. The peak-load adjusting factor of the regional grid is defined. Building more coal-fired power plants will not increase the adjusting factor of the current grid. Although it does help to increase the total integrated wind power in the short term, it will add difficulties to the long-term wind integration. Alternatively, the coordinated resource utilization is then suggested with the discussion of both the effective pumped hydro storage and the potential electric vehicle storage. - Highlights: → Adjusting factors indicate the grid wind integrating capability. → Building coal-fired generation restrains long-term wind integration. → HVDC and nuclear projects should be planned integrated with the wind. → Pumped storage and electric vehicles provide potential solutions.
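
    The bottleneck argument above can be pictured with a toy night-time balance: the wind the grid can absorb is the low demand minus the coal fleet's must-run output, so adding coal capacity shrinks rather than widens the margin. The numbers and the minimum-stable fraction below are purely illustrative.

      def absorbable_wind(min_demand_mw, coal_capacity_mw, min_stable_fraction=0.5):
          """Simplified night-time balance: wind that can be taken equals the low
          demand minus the coal fleet's must-run (minimum stable) output."""
          must_run = coal_capacity_mw * min_stable_fraction
          return max(min_demand_mw - must_run, 0.0)

      # Illustrative numbers only (MW)
      print(absorbable_wind(30_000, 40_000))   # 10000.0 MW of wind can be absorbed
      print(absorbable_wind(30_000, 48_000))   # 6000.0 MW: adding coal shrinks the margin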

  16. Evaluating the capability of regional-scale air quality models to capture the vertical distribution of pollutants

    Directory of Open Access Journals (Sweden)

    E. Solazzo

    2013-06-01

    Full Text Available This study is conducted in the framework of the Air Quality Modelling Evaluation International Initiative (AQMEII) and aims at the operational evaluation of an ensemble of 12 regional-scale chemical transport models used to predict air quality over the North American (NA) and European (EU) continents for 2006. The modelled concentrations of ozone and CO, along with the meteorological fields of wind speed (WS) and direction (WD), temperature (T), and relative humidity (RH), are compared against high-quality in-flight measurements collected by instrumented commercial aircraft as part of the Measurements of OZone, water vapour, carbon monoxide and nitrogen oxides by Airbus In-service airCraft (MOZAIC) programme. The evaluation is carried out for five model domains positioned around four major airports in NA (Portland, Philadelphia, Atlanta, and Dallas) and one in Europe (Frankfurt), from the surface to 8.5 km. We compare mean vertical profiles of modelled and measured variables for all airports to compute error and variability statistics, perform analysis of altitudinal error correlation, and examine the seasonal error distribution for ozone, including an estimation of the bias introduced by the lateral boundary conditions (BCs). The results indicate that model performance is highly dependent on the variable, location, season, and height (e.g. surface, planetary boundary layer (PBL) or free troposphere) being analysed. While model performance for T is satisfactory at all sites (correlation coefficient in excess of 0.90 and fractional bias ≤ 0.01 K), WS is not replicated as well within the PBL (exhibiting a positive bias in the first 100 m and also underestimating observed variability), while above 1000 m the model performance improves (correlation coefficient often above 0.9). The WD at NA airports is found to be biased in the PBL, primarily due to an overestimation of westerly winds. RH is modelled well within the PBL, but in the free troposphere large
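
    The error statistics quoted in this kind of operational evaluation (correlation coefficient and fractional bias between modelled and observed profiles) can be computed as below; the temperature profile is a synthetic placeholder, and the fractional-bias formula is one common definition, not necessarily the exact one used in the study.

      import numpy as np

      def correlation(model, obs):
          return float(np.corrcoef(model, obs)[0, 1])

      def fractional_bias(model, obs):
          # FB = 2 * (mean_model - mean_obs) / (mean_model + mean_obs); one common definition
          return 2.0 * (model.mean() - obs.mean()) / (model.mean() + obs.mean())

      obs_t = np.array([288.0, 284.5, 279.0, 272.3, 265.1])    # synthetic temperature profile (K)
      mod_t = obs_t + np.array([0.3, -0.2, 0.4, 0.1, -0.3])    # synthetic model values

      print(f"r = {correlation(mod_t, obs_t):.3f}, FB = {fractional_bias(mod_t, obs_t):+.5f}")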

  17. Use of results from microscopic methods in optical model calculations

    International Nuclear Information System (INIS)

    Lagrange, C.

    1985-11-01

    A concept of vectorization for coupled-channel programs based upon conventional methods is first presented. This has been implemented in our program for use on the CRAY-1 computer. In the second part, we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels [fr

  18. High Altitude Long Endurance UAV Analysis Model Development and Application Study Comparing Solar Powered Airplane and Airship Station-Keeping Capabilities

    Science.gov (United States)

    Ozoroski, Thomas A.; Nickol, Craig L.; Guynn, Mark D.

    2015-01-01

    There have been ongoing efforts in the Aeronautics Systems Analysis Branch at NASA Langley Research Center to develop a suite of integrated physics-based computational utilities suitable for modeling and analyzing extended-duration missions carried out using solar powered aircraft. From these efforts, SolFlyte has emerged as a state-of-the-art vehicle analysis and mission simulation tool capable of modeling both heavier-than-air (HTA) and lighter-than-air (LTA) vehicle concepts. This study compares solar powered airplane and airship station-keeping capability during a variety of high altitude missions, using SolFlyte as the primary analysis component. Three Unmanned Aerial Vehicle (UAV) concepts were designed for this study: an airplane (Operating Empty Weight (OEW) = 3285 kilograms, span = 127 meters, array area = 450 square meters), a small airship (OEW = 3790 kilograms, length = 115 meters, array area = 570 square meters), and a large airship (OEW = 6250 kilograms, length = 135 meters, array area = 1080 square meters). All the vehicles were sized for payload weight and power requirements of 454 kilograms and 5 kilowatts, respectively. Seven mission sites distributed throughout the United States were selected to provide a basis for assessing the vehicle energy budgets and site-persistent operational availability. Seasonal, 30-day duration missions were simulated at each of the sites during March, June, September, and December; one-year duration missions were simulated at three of the sites. Atmospheric conditions during the simulated missions were correlated to National Climatic Data Center (NCDC) historical data measurements at each mission site, at four flight levels. Unique features of the SolFlyte model are described, including methods for calculating recoverable and energy-optimal flight trajectories and the effects of shadows on solar energy collection. Results of this study indicate that: 1) the airplane concept attained longer periods of on

  19. The uncertainty analysis of model results a practical guide

    CERN Document Server

    Hofer, Eduard

    2018-01-01

    This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.

  20. V and V Efforts of Auroral Precipitation Models: Preliminary Results

    Science.gov (United States)

    Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael

    2011-01-01

    Auroral precipitation models have been valuable both in terms of space weather applications and space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, or those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts of some of the models.

  1. Capabilities for innovation

    DEFF Research Database (Denmark)

    Nielsen, Peter; Nielsen, Rene Nesgaard; Bamberger, Simon Grandjean

    2012-01-01

    is a survey that collected information from 601 firms belonging to the private urban sector in Denmark. The survey was carried out in late 2010. Keywords: dynamic capabilities/innovation/globalization/employee/employer cooperation/Nordic model Acknowledgment: The GOPA study was financed by grant 20080053113......Technological developments combined with increasing levels of competition related to the ongoing globalization imply that firms find themselves in dynamic, changing environments that call for dynamic capabilities. This challenges the internal human and organizational resources of firms in general...

  2. Human push capability.

    Science.gov (United States)

    Barnett, Ralph L; Liber, Theodore

    2006-02-22

    Use of unassisted human push capability arises from time to time in the areas of crowd and animal control, the security of locked doors, the integrity of railings, the removal of tree stumps and entrenched vehicles, the manoeuvring of furniture, and athletic pursuits such as US football or wrestling. Depending on the scenario, human push capability involves strength, weight, weight distribution, push angle, footwear/floor friction, and the friction between the upper body and the pushed object. Simple models are used to establish the relationships among these factors.
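
    In the spirit of the simple models mentioned above, a friction-limited horizontal push can be estimated from body weight, push angle, and the shoe/floor friction coefficient; the formula and the numbers below are an illustrative quasi-static sketch, not the paper's model.

      import math

      def max_push_force(weight_n, mu_floor, push_angle_deg=0.0):
          """Friction-limited horizontal push for a quasi-static, rigid-body sketch.
          push_angle_deg is the push line's angle below horizontal; pushing downward
          on the object unloads the feet and lowers the available friction."""
          theta = math.radians(push_angle_deg)
          return mu_floor * weight_n / (math.cos(theta) + mu_floor * math.sin(theta))

      # Illustrative case: 800 N person, rubber sole on dry concrete (mu taken as 0.8)
      print(f"{max_push_force(800.0, 0.8):.0f} N for a level push")
      print(f"{max_push_force(800.0, 0.8, push_angle_deg=15.0):.0f} N when pushing 15 deg downward")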

  3. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil,Benny Manuel [Los Alamos National Laboratory; Ballance, Robert [SNL; Haskell, Karen [SNL

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  4. Evaluating Internal Technological Capabilities in Energy Companies

    Directory of Open Access Journals (Sweden)

    Mingook Lee

    2016-03-01

    Full Text Available As global competition increases, technological capability must be evaluated objectively as one of the most important factors for predominance in technological competition and to ensure sustainable business excellence. Most existing capability evaluation models utilize either quantitative methods, such as patent analysis, or qualitative methods, such as expert panels. Accordingly, they may be in danger of reflecting only fragmentary aspects of technological capabilities, and produce inconsistent results when different models are used. To solve these problems, this paper proposes a comprehensive framework for evaluating technological capabilities in energy companies by considering the complex properties of technological knowledge. For this purpose, we first explored various factors affecting technological capabilities and divided the factors into three categories: individual, organizational, and technology competitiveness. Second, we identified appropriate evaluation items for each category to measure the technological capability. Finally, by using a hybrid approach of qualitative and quantitative methods, we developed an evaluation method for each item and suggested a method to combine the results. The proposed framework was then verified with an energy generation and supply company to investigate its practicality. As one of the earliest attempts to evaluate multi-faceted technological capabilities, the suggested model can support technology and strategic planning.
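
    One way to picture the final step of combining item-level results is a weighted aggregation of normalised scores across the three categories; the item names, scores, and weights below are purely illustrative and not taken from the proposed framework.

      # Hypothetical normalised scores (0-1) per evaluation item, grouped by category
      scores = {
          "individual":     {"expert_competence": 0.70, "training_intensity": 0.60},
          "organizational": {"rnd_processes": 0.80, "knowledge_sharing": 0.50},
          "technology":     {"patent_quality": 0.90, "benchmark_tests": 0.65},
      }
      weights = {"individual": 0.3, "organizational": 0.3, "technology": 0.4}

      def capability_index(scores, weights):
          total = 0.0
          for category, items in scores.items():
              category_score = sum(items.values()) / len(items)   # simple average within a category
              total += weights[category] * category_score
          return total

      print(f"combined technological capability index = {capability_index(scores, weights):.2f}")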

  5. A novel model of motor learning capable of developing an optimal movement control law online from scratch.

    Science.gov (United States)

    Shimansky, Yury P; Kang, Tao; He, Jiping

    2004-02-01

    A computational model of a learning system (LS) is described that acquires knowledge and skill necessary for optimal control of a multisegmental limb dynamics (controlled object or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of CO. The system's functional architecture comprises several adaptive components, each of which incorporates a number of mapping functions approximated based on artificial neural nets. Besides the internal model of the CO's dynamics and adaptive controller that computes the control law, the LS includes a new type of internal model, the minimal cost (IM(mc)) of moving the controlled object between a pair of states. That internal model appears critical for the LS's capacity to develop an optimal movement trajectory. The IM(mc) interacts with the adaptive controller in a cooperative manner. The controller provides an initial approximation of an optimal control action, which is further optimized in real time based on the IM(mc). The IM(mc) in turn provides information for updating the controller. The LS's performance was tested on the task of center-out reaching to eight randomly selected targets with a 2DOF limb model. The LS reached an optimal level of performance in a few tens of trials. It also quickly adapted to movement perturbations produced by two different types of external force field. The results suggest that the proposed design of a self-optimized control system can serve as a basis for the modeling of motor learning that includes the formation and adaptive modification of the plan of a goal-directed movement.

  6. Development and Performance of the Modularized, High-performance Computing and Hybrid-architecture Capable GEOS-Chem Chemical Transport Model

    Science.gov (United States)

    Long, M. S.; Yantosca, R.; Nielsen, J.; Linford, J. C.; Keller, C. A.; Payer Sulprizio, M.; Jacob, D. J.

    2014-12-01

    The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been reengineered to serve as a platform for a range of computational atmospheric chemistry science foci and applications. Development included modularization for coupling to general circulation and Earth system models (ESMs) and the adoption of co-processor capable atmospheric chemistry solvers. This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of GEOS-Chem scientific code to permit seamless transition from the GEOS-Chem stand-alone serial CTM to deployment as a coupled ESM module. In this manner, the continual stream of updates contributed by the CTM user community is automatically available for broader applications, which remain state-of-science and directly referenceable to the latest version of the standard GEOS-Chem CTM. These developments are now available as part of the standard version of the GEOS-Chem CTM. The system has been implemented as an atmospheric chemistry module within the NASA GEOS-5 ESM. The coupled GEOS-5/GEOS-Chem system was tested for weak and strong scalability and performance with a tropospheric oxidant-aerosol simulation. Results confirm that the GEOS-Chem chemical operator scales efficiently for any number of processes. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemical operator means that the relative cost goes down with increasing number of processes, making fine-scale resolution simulations possible.

  7. An Enhanced Capability to Model How Compensation Policy Affects U.S. Department of Defense Civil Service Retention and Cost

    Science.gov (United States)

    2016-01-01

bill for the force, consistently with OPM's actuarial practice. This gives an amount—an accrual charge—sufficient to cover the retirement...capability could also be extended to other occupational areas within DoD, including the cyber workforce; to other pay systems, such as the science

  8. Establishing Viable and Effective Information-Warfare Capability in Developing Nations Based on the U.S. Model

    Science.gov (United States)

    2012-12-01

include law enforcement and intelligence capabilities in the lineup. However, national security strategy reflects the first four only. Figure 1... (the remainder of the excerpt is a table of joint, Air Force, Army, and Navy doctrine identifications of terms such as EW)

  9. Equivalent modeling of PMSG-based wind power plants considering LVRT capabilities: electromechanical transients in power systems.

    Science.gov (United States)

    Ding, Ming; Zhu, Qianlong

    2016-01-01

Hardware protection and control action are two kinds of low voltage ride-through (LVRT) technical proposals widely used in permanent magnet synchronous generators (PMSGs). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive power are taken into account. First, the post-fault DC-link voltage is selected as a concentrated expression of unit parameters, incoming wind, and electrical distance to the fault point, in order to reflect the transient characteristics of PMSGs. Next, we provide an effective method for calculating the post-fault DC-link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC-link voltages, without any clustering algorithm. Finally, the PMSGs of each group are aggregated into one rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show that the proposed equivalent model can be used to analyze offline electromechanical transients in power systems.
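
The clustering step described above can be pictured with the minimal sketch below, which bins units by their calculated post-fault DC-link voltage and aggregates each bin into one rescaled equivalent machine. The voltage thresholds, unit names, and ratings are illustrative assumptions, not values from the paper.

```python
# Sketch of grouping PMSGs by post-fault DC-link voltage (p.u.) and aggregating
# each group into one rescaled equivalent unit. Thresholds and ratings are
# illustrative assumptions, not values from the paper.
from collections import defaultdict

def group_by_dc_voltage(units, thresholds=(1.05, 1.15)):
    """units: iterable of (name, post-fault DC-link voltage in p.u., rated MW)."""
    groups = defaultdict(list)
    for name, v_dc, p_rated in units:
        if v_dc < thresholds[0]:
            label = "low"
        elif v_dc < thresholds[1]:
            label = "medium"
        else:
            label = "high"      # chopper protection would typically act here
        groups[label].append((name, p_rated))
    return groups

units = [("WT1", 1.02, 2.0), ("WT2", 1.12, 2.0), ("WT3", 1.20, 2.0), ("WT4", 1.03, 2.0)]
for label, members in group_by_dc_voltage(units).items():
    total_mw = sum(p for _, p in members)
    print(f"{label}: {len(members)} units -> one equivalent PMSG rated {total_mw} MW")
```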

  10. Critical Directed Energy Test and Evaluation Infrastructure Shortfalls: Results of the Directed Energy Test and Evaluation Capability Tri-Service Study Update

    Science.gov (United States)

    2009-06-01

Sensor... H11 (HPM chamber test capability—explosive equivalent substitute), H12 (HEL irradiance and temperature), H13 (HEL near/in-beam path quality), H14 (HPM sensor)...such things as artillery shells or UAVs and may impact the earth. Possible targets include missiles in flight or a relatively close command, control...capability is a synergy of four high-priority shortfalls identified by the T-SS Update. H13, HEL near/in-beam path quality, is the need for a

  11. Verification of aseismic design model by using experimental results

    International Nuclear Information System (INIS)

    Mizuno, N.; Sugiyama, N.; Suzuki, T.; Shibata, Y.; Miura, K.; Miyagawa, N.

    1985-01-01

A lattice model is applied as the analysis model for the aseismic design of the Hamaoka nuclear reactor building. In order to verify the applicability of this design model, two reinforced concrete blocks were constructed on the ground and forced vibration tests were carried out. The test results are well reproduced by simulation analysis using the lattice model. The damping value of the ground obtained from the test is more conservative than the design value. (orig.)

  12. Impact of Personnel Capabilities on Organizational Innovation Capability

    DEFF Research Database (Denmark)

    Nielsen, Susanne Balslev; Momeni, Mostafa

    2016-01-01

in this rapidly changing world. This research focuses on the definition of the personnel aspect of innovation capability, and proposes a conceptual model based on the scientific articles of the academic literature on organisations' innovation capability. The paper includes an expert-based validation in three rounds... of the Delphi method. For a better appreciation of the relationships dominating the factors of the model, the questionnaire was distributed to Iranian companies in the food industry. This research proposed a direct relationship between Innovation Capability and Personnel Capability...

  13. Identifiability Results for Several Classes of Linear Compartment Models.

    Science.gov (United States)

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
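
As a concrete, textbook-style instance of the model class (not one of the paper's examples), a two-compartment model with an input to and an output from compartment 1 and leaks from both compartments can be written as

$$ \dot{x}(t) = \begin{pmatrix} -(a_{01}+a_{21}) & a_{12} \\ a_{21} & -(a_{02}+a_{12}) \end{pmatrix} x(t) + \begin{pmatrix} u(t) \\ 0 \end{pmatrix}, \qquad y(t) = x_1(t), $$

and identifiability asks which of the rate constants $a_{ij}$ can be recovered, uniquely or otherwise, from the input-output pair $(u, y)$.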

  14. Designing and Validating a Model for Measuring Sustainability of Overall Innovation Capability of Small and Medium-Sized Enterprises

    OpenAIRE

    Rahman, Mohd; Doroodian, Mahmood; Kamarulzaman, Yusniza; Muhamad, Norhamidi

    2015-01-01

The business environment is currently characterized by intensified competition at both the national and firm levels. Many studies have shown that innovation positively affects firms by enhancing their competitiveness. Innovation is a dynamic process that requires continuous, evolving, and mastered management. Evaluating the sustainability of the overall innovation capability of a business is a major means of determining how well this firm effectively and efficiently manages its innovation process...

  15. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models that generally perform poorly at coarse resolution. Fixed grid models, with nested grid representations of the grounding line, are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.

  16. A Module for Graphical Display of Model Results with the CBP Toolbox

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-04-21

    This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to add enhanced graphical capabilities to display model results in the Cementitious Barriers Project (CBP) Toolbox. Because Version 2.0 of the CBP Toolbox has just been released, the graphing enhancements described in this report have not yet been integrated into a new version of the Toolbox. Instead they have been tested using a standalone GoldSim model and, while they are substantially complete, may undergo further refinement before full implementation. Nevertheless, this report is issued to document the FY14 development efforts which will provide a basis for further development of the CBP Toolbox.

  17. Atmospheric release advisory capability

    International Nuclear Information System (INIS)

    Sullivan, T.J.

    1981-01-01

    The ARAC system (Atmospheric Release Advisory Capability) is described. The system is a collection of people, computers, computer models, topographic data and meteorological input data that together permits a calculation of, in a quasi-predictive sense, where effluent from an accident will migrate through the atmosphere, where it will be deposited on the ground, and what instantaneous and integrated dose an exposed individual would receive

  18. Sensor Alerting Capability

    Science.gov (United States)

    Henriksson, Jakob; Bermudez, Luis; Satapathy, Goutam

    2013-04-01

There is a large amount of sensor data generated today by various sensors, from in-situ buoys to mobile underwater gliders. Providing sensor data to users through standardized services, language and data models is the promise of OGC's Sensor Web Enablement (SWE) initiative. As the amount of data grows, it is becoming difficult for data providers, planners and managers to ensure reliability of data and services and to monitor critical data changes. Intelligent Automation Inc. (IAI) is developing a net-centric alerting capability to address these issues. The capability is built on Sensor Observation Services (SOSs), which are used to collect and monitor sensor data. The alerts can be configured at the service level and at the sensor data level. For example, it can alert on irregular data delivery events or on a geo-temporal statistic of sensor data crossing a preset threshold. The capability provides multiple delivery mechanisms and protocols, including traditional techniques such as email and RSS. With this capability, decision makers can monitor their assets and data streams, correct failures, or be alerted about coming phenomena.
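
The kind of rule described above can be sketched generically as follows; this is not the IAI/SOS implementation, only an illustration of checking for irregular data delivery and for a statistic of recent observations crossing a preset threshold.

```python
# Generic sketch of the two alerting rules mentioned above. This is not the
# IAI/SOS implementation, only an illustration of the logic.
import time
import statistics

def check_alerts(observations, threshold, max_gap_s=3600):
    """observations: list of (unix timestamp, value) tuples, newest last."""
    alerts = []
    # Rule 1: irregular data delivery (no observation within max_gap_s).
    if time.time() - observations[-1][0] > max_gap_s:
        alerts.append("ALERT: irregular data delivery (stale observations)")
    # Rule 2: mean of the most recent values crosses a preset threshold.
    recent = [value for _, value in observations[-10:]]
    if statistics.mean(recent) > threshold:
        alerts.append(f"ALERT: mean of last {len(recent)} values exceeds {threshold}")
    return alerts

# Synthetic observations: one reading per minute over the last 20 minutes.
obs = [(time.time() - 60 * i, 20.0 + 0.1 * i) for i in range(20)][::-1]
for alert in check_alerts(obs, threshold=20.0):
    print(alert)
```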

  19. Improving Performance of LVRT Capability in Single-phase Grid-tied PV Inverters by a Model Predictive Controller

    DEFF Research Database (Denmark)

    Zangeneh Bighash, Esmaeil; Sadeghzadeh, Seyed Mohammad; Ebrahimzadeh, Esmaeil

    2018-01-01

The voltage sag period is short, so a fast dynamic performance along with a soft behavior of the controller is the most important issue in the LVRT duration. Recently, some methods, like Proportional Resonant (PR) controllers, have been presented to control single-phase PV systems in LVRT mode. However, these methods have had uncertainties with respect to their contribution in LVRT mode. In PR controllers, a fast dynamic response can be obtained by tuning the gains for a high bandwidth, but typically the phase margin is decreased. Therefore, the design of PR controllers needs a trade-off between dynamic response and stability. To fill in this gap, this paper presents a fast and robust current controller based on Model-Predictive Control (MPC) for single-phase PV inverters in order to deal with LVRT operation. In order to confirm the effectiveness of the proposed controller, results...
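
For orientation, a finite-control-set MPC current loop of the general kind referred to above can be sketched as below. The discrete R-L prediction model, sampling time, and candidate voltage levels are generic assumptions for illustration, not the controller proposed in the paper.

```python
# Minimal sketch of finite-control-set MPC current control for a single-phase
# inverter with an R-L filter. All parameter values and the candidate voltage
# set are generic assumptions, not the controller proposed in the paper.

def predict_current(i_k, v_inv, v_grid, R=0.1, L=2e-3, Ts=50e-6):
    """One-step Euler prediction of L*di/dt = v_inv - v_grid - R*i."""
    return i_k + (Ts / L) * (v_inv - v_grid - R * i_k)

def fcs_mpc_step(i_k, i_ref, v_grid, v_dc=400.0):
    """Pick the full-bridge voltage level that minimizes the predicted current error."""
    candidates = (-v_dc, 0.0, +v_dc)
    costs = {v: abs(i_ref - predict_current(i_k, v, v_grid)) for v in candidates}
    return min(costs, key=costs.get)

# Example: one control step during a voltage sag (reduced grid voltage).
best_v = fcs_mpc_step(i_k=5.0, i_ref=8.0, v_grid=120.0)
print(f"Selected inverter voltage level: {best_v:+.0f} V")
```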

  20. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1995-09-01

It is shown that the most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author). 16 refs, 2 figs

  1. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and {rho} together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  2. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1996-01-01

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author)

  3. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Bremerhaven (Germany); Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2000-07-01

A dynamical downscaling for the North Sea is presented. The numerical model used for the study is the coupled ice-ocean model OPYC. In a hindcast of the years 1979 to 1993 it was forced with atmospheric forcing from the ECMWF reanalysis. The model's capability in simulating the observed mean state and variability in the North Sea is demonstrated by the hindcast. Two time scale ranges, from weekly to seasonal and the longer-than-seasonal time scales, are investigated. Shorter time scales, for storm surges, are not captured by the model formulation. The main modes of variability of sea level, sea-surface circulation, sea-surface temperature, and sea-surface salinity are described and connections to atmospheric phenomena, like the NAO, are discussed. T106 "time-slice" simulations with a "2 x CO2" horizon are used to estimate the effects of a changing climate on the shelf sea "North Sea". The "2 x CO2" changes in the surface forcing are accompanied by changes in the lateral oceanic boundary conditions taken from a global coupled climate model. For "2 x CO2" the time mean sea level increases up to 25 cm in the German Bight in the winter, where 15 cm are due to the surface forcing and 10 cm due to thermal expansion. This change is compared to the "natural" variability as simulated in the ECMWF integration and found to be not outside the range spanned by it. The variability of sea level on the weekly-to-seasonal time scales is significantly reduced in the scenario integration. The variability on the longer-than-seasonal time scales in the control and scenario runs is much smaller than in the ECMWF integration. This is traced back to the use of "time-slice" experiments. Discriminating between locally forced changes and changes induced at the lateral oceanic boundaries of the model in the circulation and

  4. Capability of the "Ball-Berry" model for predicting stomatal conductance and water use efficiency of potato leaves under different irrigation regimes

    DEFF Research Database (Denmark)

    Liu, Fulai; Andersen, Mathias N.; Jensen, Christian Richardt

    2009-01-01

The capability of the 'Ball-Berry' model (BB-model) in predicting stomatal conductance (gs) and water use efficiency (WUE) of potato (Solanum tuberosum L.) leaves under different irrigation regimes was tested using data from two independent pot experiments in 2004 and 2007. Data obtained from 2004 was used for model parameterization, where measurements of midday leaf gas exchange of potted potatoes were done during progressive soil drying for 2 weeks at the tuber initiation and early bulking stages. The measured photosynthetic rate (An) was used as an input for the model. To account for the effects of soil water deficits on gs, a simple equation modifying the slope (m) based on the mean soil water potential (Ψs) in the soil columns was incorporated into the original BB-model. Compared with the original BB-model, the modified BB-model showed better predictability for both gs and WUE of potato leaves...
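
For reference, the unmodified Ball-Berry relation, in its standard textbook form (which may differ in detail from the parameterization used in the paper), is

$$ g_s = g_0 + m \, \frac{A_n \, h_s}{C_s}, $$

where $h_s$ and $C_s$ are the relative humidity and CO2 concentration at the leaf surface; the modification described above replaces the fixed slope $m$ with a function of the mean soil water potential $\Psi_s$.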

  5. Development of loca calculation capability with relap5-3D in accordance with the evaluation model methodology

    International Nuclear Information System (INIS)

    Liang, T.K.S.; Huan-Jen, Hung; Chin-Jang, Chang; Lance, Wang

    2001-01-01

In light water reactors, particularly the pressurized water reactor (PWR), the severity of a LOCA (loss of coolant accident) limits the power level at which the reactor can operate. Although the best-estimate LOCA licensing methodology can provide the greatest margin in the PCT (peak cladding temperature) evaluation during a LOCA, it generally takes more resources to develop. Instead, implementing the evaluation models required by Appendix K of 10 CFR 50 on an advanced thermal-hydraulic platform can also provide a significant margin between the highest calculated PCT and the safety limit of 2200 °F. The compliance of the current RELAP5-3D code with Appendix K of 10 CFR 50 has been evaluated, and it was found that there are ten areas where code assessment and/or further modification is required to satisfy the requirements set forth in Appendix K of 10 CFR 50. The associated models for the analysis of LOCA consequent phenomena should follow the major concerns of the regulation and are expected to give more conservative results than those of the best-estimate methodology. They are required to predict the decay power level, the blowdown hydraulics, the blowdown heat transfer, the flooding rate, and the flooding heat transfer. All ten areas included in the simulations classified above have been further evaluated, and RELAP5-3D has been successfully modified to fulfill the associated requirements. In addition, to verify and assess the development of the Appendix K version of RELAP5-3D, nine separate-effect experiments were adopted. Through the assessments against these separate-effect experiments, the success of the code modification in accordance with Appendix K of 10 CFR 50 was demonstrated. We will apply another six sets of integral-effect experiments in the next step to assure the integral conservatism of the Appendix K version of RELAP5-3D for LOCA licensing evaluation. (authors)

  6. Genome-enabled Modeling of Microbial Biogeochemistry using a Trait-based Approach. Does Increasing Metabolic Complexity Increase Predictive Capabilities?

    Science.gov (United States)

    King, E.; Karaoz, U.; Molins, S.; Bouskill, N.; Anantharaman, K.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.; Brodie, E.

    2015-12-01

    The biogeochemical functioning of ecosystems is shaped in part by genomic information stored in the subsurface microbiome. Cultivation-independent approaches allow us to extract this information through reconstruction of thousands of genomes from a microbial community. Analysis of these genomes, in turn, gives an indication of the organisms present and their functional roles. However, metagenomic analyses can currently deliver thousands of different genomes that range in abundance/importance, requiring the identification and assimilation of key physiologies and metabolisms to be represented as traits for successful simulation of subsurface processes. Here we focus on incorporating -omics information into BioCrunch, a genome-informed trait-based model that represents the diversity of microbial functional processes within a reactive transport framework. This approach models the rate of nutrient uptake and the thermodynamics of coupled electron donors and acceptors for a range of microbial metabolisms including heterotrophs and chemolithotrophs. Metabolism of exogenous substrates fuels catabolic and anabolic processes, with the proportion of energy used for cellular maintenance, respiration, biomass development, and enzyme production based upon dynamic intracellular and environmental conditions. This internal resource partitioning represents a trade-off against biomass formation and results in microbial community emergence across a fitness landscape. Biocrunch was used here in simulations that included organisms and metabolic pathways derived from a dataset of ~1200 non-redundant genomes reflecting a microbial community in a floodplain aquifer. Metagenomic data was directly used to parameterize trait values related to growth and to identify trait linkages associated with respiration, fermentation, and key enzymatic functions such as plant polymer degradation. Simulations spanned a range of metabolic complexities and highlight benefits originating from simulations

  7. Improving wheat simulation capabilities in Australia from a cropping systems perspective. III. The integrated wheat model (I-WHEAT).

    NARCIS (Netherlands)

    Meinke, H.; Hammer, G.L.; Keulen, van H.; Rabbinge, R.

    1998-01-01

    Previous work has identified several short-comings in the ability of four spring wheat and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models where final soil water and nitrogen conditions of one

  8. The effect of bathymetric filtering on nearshore process model results

    Science.gov (United States)

    Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.

    2009-01-01

    Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest resolution simulation. The damage done by over smoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
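
A minimal sketch of how such progressively filtered bathymetric grids can be produced is given below; the synthetic barred bathymetry and the filter length scales are illustrative assumptions, not the survey data or filters used in the study.

```python
# Sketch of producing progressively smoothed versions of a bathymetric grid.
# The synthetic sandbar bathymetry and filter scales are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic bathymetry: planar beach slope plus an alongshore-rhythmic sandbar.
x = np.linspace(0, 500, 251)          # cross-shore distance (m), dx = 2 m
y = np.linspace(0, 1000, 501)         # alongshore distance (m)
X, Y = np.meshgrid(x, y)
bar = np.exp(-((X - 150.0) / 40.0) ** 2) * (1 + 0.3 * np.sin(2 * np.pi * Y / 250.0))
depth = 0.02 * X - 1.0 * bar          # depth (m), reduced over the sandbar

dx = x[1] - x[0]
for scale_m in (10, 50, 200):
    smoothed = gaussian_filter(depth, sigma=scale_m / dx)
    lost = 1.0 - smoothed.var() / depth.var()
    print(f"{scale_m:4d} m filter: {100 * lost:5.1f}% of bathymetric variance removed")
```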

  9. Capitalizing on capabilities.

    Science.gov (United States)

    Ulrich, Dave; Smallwood, Norm

    2004-06-01

    By making the most of organizational capabilities--employees' collective skills and fields of expertise--you can dramatically improve your company's market value. Although there is no magic list of proficiencies that every organization needs in order to succeed, the authors identify 11 intangible assets that well-managed companies tend to have: talent, speed, shared mind-set and coherent brand identity, accountability, collaboration, learning, leadership, customer connectivity, strategic unity, innovation, and efficiency. Such companies typically excel in only three of these capabilities while maintaining industry parity in the other areas. Organizations that fall below the norm in any of the 11 are likely candidates for dysfunction and competitive disadvantage. So you can determine how your company fares in these categories (or others, if the generic list doesn't suit your needs), the authors explain how to conduct a "capabilities audit," describing in particular the experiences and findings of two companies that recently performed such audits. In addition to highlighting which intangible assets are most important given the organization's history and strategy, this exercise will gauge how well your company delivers on its capabilities and will guide you in developing an action plan for improvement. A capabilities audit can work for an entire organization, a business unit, or a region--indeed, for any part of a company that has a strategy to generate financial or customer-related results. It enables executives to assess overall company strengths and weaknesses, senior leaders to define strategy, midlevel managers to execute strategy, and frontline leaders to achieve tactical results. In short, it helps turn intangible assets into concrete strengths.

  10. Rights, goals, and capabilities

    NARCIS (Netherlands)

    van Hees, M.V.B.P.M

    This article analyses the relationship between rights and capabilities in order to get a better grasp of the kind of consequentialism that the capability theory represents. Capability rights have been defined as rights that have a capability as their object (rights to capabilities). Such a

  11. A Hybrid Multi-Criteria Decision Model for Technological Innovation Capability Assessment: Research on Thai Automotive Parts Firms

    Directory of Open Access Journals (Sweden)

    Sumrit Detcharat

    2013-01-01

Full Text Available The efficient appraisal of the technological innovation capabilities (TICs) of enterprises is an important factor in enhancing competitiveness. This study aims to evaluate and rank TIC evaluation criteria in order to provide practical insight for systematic analysis by gathering qualified experts' opinions and combining three multi-criteria decision-making methods. First, the Fuzzy Delphi method is used to screen TIC evaluation criteria from recently published research. Second, the Analytic Hierarchy Process is utilized to compute the relative importance weights. Lastly, the VIKOR method is used to rank the enterprises based on the TIC evaluation criteria. An empirical study of Thai automotive parts firms is used to illustrate the proposed methods. The study found that the interaction between criteria is essential and influences TICs; furthermore, this ranking-based development of TIC assessment is also a key management tool that can facilitate and offer a new mindset for the management of other related industries.
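
The AHP weighting step can be sketched as follows: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, followed by a consistency check. The 3x3 comparison matrix below is a hypothetical example, not the experts' judgments from the study.

```python
# Sketch of the AHP step: derive criterion weights from a pairwise comparison
# matrix via its principal eigenvector. The 3x3 matrix is hypothetical.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # criterion 1 compared against criteria 1, 2, 3
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("weights:", np.round(weights, 3), " consistency ratio:", round(ci / 0.58, 3))
```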

  12. Platelet autologous growth factors decrease the osteochondral regeneration capability of a collagen-hydroxyapatite scaffold in a sheep model

    Directory of Open Access Journals (Sweden)

    Giavaresi Gianluca

    2010-09-01

Full Text Available Background: Current research aims to develop innovative approaches to improve chondral and osteochondral regeneration. The objective of this study was to investigate the regenerative potential of platelet-rich plasma (PRP) to enhance the repair process of a collagen-hydroxyapatite scaffold in osteochondral defects in a sheep model. Methods: PRP was added to a new, multi-layer gradient, nanocomposite scaffold that was obtained by nucleating collagen fibrils with hydroxyapatite nanoparticles. Twenty-four osteochondral lesions were created in sheep femoral condyles. The animals were randomised to three treatment groups: scaffold, scaffold loaded with autologous PRP, and empty defect (control). The animals were sacrificed and evaluated six months after surgery. Results: Gross evaluation and histology of the specimens showed good integration of the chondral surface in both treatment groups. Significantly better bone regeneration and cartilage surface reconstruction were observed in the group treated with the scaffold alone. Incomplete bone regeneration and irregular cartilage surface integration were observed in the group treated with the scaffold where PRP was added. In the control group, no bone and cartilage defect healing occurred; defects were filled with fibrous tissue. Quantitative macroscopic and histological score evaluations confirmed the qualitative trends observed. Conclusions: The hydroxyapatite-collagen scaffold enhanced osteochondral lesion repair, but the combination with platelet growth factors did not have an additive effect; on the contrary, PRP administration had a negative effect on the results obtained by disturbing the regenerative process. In the scaffold + PRP group, highly amorphous cartilaginous repair tissue and poorly spatially organised underlying bone tissue were found.

  13. Assesment of Innovation Process Capability-Based on Innovation Value Chain Model in East Java Footwear Industry

    Directory of Open Access Journals (Sweden)

    Benny Lianto

    2015-12-01

Full Text Available This study attempts to assess the innovation process based on the innovation value chain model in the footwear industry in East Java, Indonesia. A strength and weakness mapping analysis was performed, and it included three factors related to company characteristics: operation scale based on the number of employees, operational period, and market orientation. The sample comprised 62 footwear companies, members of the East Java Indonesian Footwear Association (Aprisindo). The questionnaire was sent via email; thirty companies (48.38%) sent it back. A focus group discussion (FGD) was conducted with several representatives from the footwear industry before the questionnaire was sent. The study found that companies are relatively good at idea conversion (42.30%) but have some difficulty with diffusion (50.80%) and idea generation (55.80%). The responses (see Table 2) show that the weakest link (the innovation process bottleneck) is the cross-pollination activity [people typically do not collaborate on projects across units, businesses, or subsidiaries (88.6%)], while the strongest link is the selection activity [companies have a risk-averse attitude toward investing in novel ideas (39.3%)]. Based on p-values, the study found that the company characteristics significantly influencing a certain phase of the innovation value chain were company period (age of the company) and market orientation; specifically, both of them influenced the idea generation phase.

  14. Assessment of the Orion-SLS Interface Management Process in Achieving the EIA 731.1 Systems Engineering Capability Model Generic Practices Level 3 Criteria

    Science.gov (United States)

    Jellicorse, John J.; Rahman, Shamin A.

    2016-01-01

NASA is currently developing the next generation crewed spacecraft and launch vehicle for exploration beyond Earth orbit, including returning to the Moon and making the transit to Mars. Managing the design integration of major hardware elements of a space transportation system is critical for overcoming both the technical and programmatic challenges in taking a complex system from concept to space operations. An established method of accomplishing this is formal interface management. In this paper we set forth an argument that the interface management process implemented by NASA between the Orion Multi-Purpose Crew Vehicle (MPCV) and the Space Launch System (SLS) achieves the Level 3 tier of the EIA 731.1 Systems Engineering Capability Model (SECM) for Generic Practices. We describe the relevant NASA systems and associated organizations, and define the EIA SECM Level 3 Generic Practices. We then provide evidence for our compliance with those practices. This evidence includes discussions of: the NASA Systems Engineering (SE) Interface Management standard process and best practices; the tailoring of that process for implementation on the Orion-to-SLS interface; changes made over time to improve the tailored process; and the opportunities to take the resulting lessons learned and propose improvements to our institutional processes and best practices. We compare this evidence against the practices to form the rationale for the declared SECM maturity level.

  15. Circulation in the Gulf of Trieste: measurements and model results

    International Nuclear Information System (INIS)

    Bogunovici, B.; Malacic, V.

    2008-01-01

The study presents the seasonal variability of currents in the southern part of the Gulf of Trieste. A time series analysis of currents and wind stress for the period 2003-2006, measured by the coastal oceanographic buoy, was conducted. A comparison between these data and results obtained from a numerical model of circulation in the Gulf was performed to validate the model results. Three different approaches were applied to the wind data to determine the wind stress. Similarities were found between the Kondo and Smith approaches, while the method of Vera shows differences that were particularly noticeable for lower (≤1 m/s) and higher wind speeds (≥15 m/s). Mean currents in the surface layer are generally outflow currents from the Gulf due to wind forcing (bora). However, in all other depth layers, inflow currents are dominant. With principal component analysis (PCA), major and minor axes were determined for all seasons. The major axis of maximum variance in the years between 2003 and 2006 prevails in the NE-SW direction, which is parallel to the coastline. Comparison of observations and model results shows that currents are similar (in direction) for the surface and bottom layers but are significantly different for the middle layer (5-13 m). At depths between 14-21 m, velocities are comparable in direction as well as in magnitude, even though model values are higher. Higher values of modelled currents at the surface and near the bottom are explained by higher values of wind stress that were used in the model as driving input with respect to the stress calculated from the measured winds. Larger values of modelled currents near the bottom are related to the larger inflow that needs to compensate for the larger modelled outflow at the surface. However, inspection of the vertical structure of temperature, salinity and density shows that the model reproduces a weaker density gradient, which enables the penetration of the outflow surface currents to larger depths.
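
The principal-axis step can be sketched as below: the major and minor axes of current variance follow from an eigen-decomposition of the covariance of the (u, v) velocity components. The synthetic current series is purely illustrative.

```python
# Sketch of finding major/minor axes of current variance from (u, v) velocity
# components via the covariance eigen-decomposition used in a PCA.
# The synthetic current series is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
u = 0.10 * np.sin(2 * np.pi * t / 300) + 0.02 * rng.standard_normal(t.size)  # east (m/s)
v = 0.06 * np.sin(2 * np.pi * t / 300) + 0.02 * rng.standard_normal(t.size)  # north (m/s)

cov = np.cov(np.vstack((u, v)))
eigvals, eigvecs = np.linalg.eigh(cov)                        # ascending order
major, minor = eigvals[1], eigvals[0]
angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))  # major-axis heading from east

print(f"major-axis variance {major:.4f} (m/s)^2, minor {minor:.4f}, orientation {angle:.1f} deg")
```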

  16. Melt coolability modeling and comparison to MACE test results

    International Nuclear Information System (INIS)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1992-01-01

    An important question in the assessment of severe accidents in light water nuclear reactors is the ability of water to quench a molten corium-concrete interaction and thereby terminate the accident progression. As part of the Melt Attack and Coolability Experiment (MACE) Program, phenomenological models of the corium quenching process are under development. The modeling approach considers both bulk cooldown and crust-limited heat transfer regimes, as well as criteria for the pool thermal hydraulic conditions which separate the two regimes. The model is then compared with results of the MACE experiments

  17. NASA GISS Climate Change Research Initiative: A Multidisciplinary Vertical Team Model for Improving STEM Education by Using NASA's Unique Capabilities.

    Science.gov (United States)

    Pearce, M. D.

    2017-12-01

CCRI is a year-long STEM education program designed to bring together teams of NASA scientists, graduate, undergraduate and high school interns, and high school STEM educators to become immersed in NASA research focused on atmospheric and climate change in the 21st century. GISS climate research combines analysis of global datasets with global models of atmospheric, land surface, and oceanic processes to study climate change on Earth and other planetary atmospheres as a useful tool in assessing our general understanding of climate change. CCRI interns conduct research, gain knowledge in their assigned research discipline, and develop and present scientific presentations summarizing their research experience. Specifically, CCRI interns write a scientific research paper covering basic ideas, research protocols, abstract, results, conclusions and experimental design; prepare and present a professional presentation of their research project at NASA GISS; and prepare and present a scientific poster of their research project at local and national research symposiums along with other federal agencies. CCRI educators lead research teams under the direction of a NASA GISS scientist, conduct research, develop research-based learning units, and assist NASA scientists with the mentoring of interns. Educators create an Applied Research STEM Curriculum Unit Portfolio based on their research experience, integrating NASA-unique resources, tools and content into a teacher-developed unit plan aligned with the state and NGSS standards. STEM educators also integrate and implement NASA-unique units and content into their STEM courses during the academic year, perform community education STEM engagement events, and mentor interns in writing a research paper, oral research reporting, presentation design and scientific poster design for presentation to local and national audiences. The CCRI program contributes to the Federal STEM Co-STEM initiatives by providing opportunities, NASA education resources and

  18. An experimental evaluation of the generalizing capabilities of process discovery techniques and black-box sequence models

    NARCIS (Netherlands)

    Tax, N.; van Zelst, S.J.; Teinemaa, I.; Gulden, Jens; Reinhartz-Berger, Iris; Schmidt, Rainer; Guerreiro, Sérgio; Guédria, Wided; Bera, Palash

    2018-01-01

    A plethora of automated process discovery techniques have been developed which aim to discover a process model based on event data originating from the execution of business processes. The aim of the discovered process models is to describe the control-flow of the underlying business process. At the

  19. An experimentally verified model for estimating the distance resolution capability of direct time of flight 3D optical imaging systems

    International Nuclear Information System (INIS)

    Nguyen, K Q K; Fisher, E M D; Walton, A J; Underwood, I

    2013-01-01

    This report introduces a new statistical model for time-resolved photon detection in a generic single-photon-sensitive sensor array. The model is validated by comparing modelled data with experimental data collected on a single-photon avalanche diode sensor array. Data produced by the model are used alongside corresponding experimental data to calculate, for the first time, the effective distance resolution of a pulsed direct time of flight 3D optical imaging system over a range of conditions using four peak-detection algorithms. The relative performance of the algorithms is compared. The model can be used to improve the system design process and inform selection of the optimal peak-detection algorithm. (paper)

  20. SORTING CAPABILITIES OF CASTINGS FROM NODULAR AND GRAY IRON BY THE STRUCTURE BY THE RESULT OF THE MEASUREMENT OF THE MAGNETIC PARAMETERS AND THE SPEED OF SOUND

    Directory of Open Access Journals (Sweden)

    S. G. Sandomirskiy

    2013-01-01

Full Text Available The results of an analysis of the influence of changes in the structure of the metal matrix and the form of graphite inclusions in cast iron on a structure-sensitive magnetic (coercive) parameter and on the speed of sound are given. The efficiency of the combined use of magnetic and ultrasonic measurement results for controlling the shape of graphite inclusions in ductile iron and the pearlite content in its metal matrix is shown.

  1. Capability Development in an Offshoring Context

    DEFF Research Database (Denmark)

    Jaura, Manya

    Capability development can be defined as deliberate firm-level investment involving a search and learning process aimed at modifying or enhancing existing capabilities. Increasingly, firms are relocating advanced services to offshore locations resulting in the challenge of capability development ...

  2. Mobile Test Capabilities

    Data.gov (United States)

    Federal Laboratory Consortium — The Electrical Power Mobile Test capabilities are utilized to conduct electrical power quality testing on aircraft and helicopters. This capability allows that the...

  3. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

Full Text Available Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationships between the constructs that define relationship marketing (RM). It also sought to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach - Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation - The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings - We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  4. Functional results-oriented healthcare leadership: a novel leadership model.

    Science.gov (United States)

    Al-Touby, Salem Said

    2012-03-01

This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that ideal healthcare leadership emphasizes the outcomes of patient care more than the processes and structures used to deliver such care; and second, that leadership must strive to attain effectiveness in care provision and not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended Functional Results-Oriented leadership model places the results element on top of the other three elements so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.

  5. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. Obtained analytical results allowing simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
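
In general terms, the object of interest is the expected (certainty-equivalent) discount factor for a stochastic short rate $r_s$,

$$ D(t) = \mathbb{E}\!\left[ \exp\!\left( -\int_0^t r_s \, ds \right) \right], $$

and because the exponential is convex, persistent (positively autocorrelated) fluctuations in $r_s$ raise $D(t)$ above $e^{-\bar{r} t}$ at long horizons, which is equivalent to a declining effective discount rate $-\ln D(t)/t$; the specific closed form obtained in the paper under exponentially decaying memory is not reproduced here.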

  6. Storm-time ring current: model-dependent results

    Directory of Open Access Journals (Sweden)

    N. Yu. Ganushkina

    2012-01-01

Full Text Available The main point of the paper is to investigate how much the modeled ring current depends on the representations of the magnetic and electric fields and the boundary conditions used in simulations. Two storm events, one moderate (SymH minimum of −120 nT, on 6–7 November 1997) and one intense (SymH minimum of −230 nT, on 21–22 October 1999), are modeled. A rather simple ring current model is employed, namely the Inner Magnetosphere Particle Transport and Acceleration model (IMPTAM), in order to make the results most evident. Four different magnetic field and two electric field representations and four boundary conditions are used. We find that different combinations of the magnetic and electric field configurations and boundary conditions result in very different modeled ring currents, and, therefore, the physical conclusions based on simulation results can differ significantly. A time-dependent boundary outside of 6.6 RE makes it possible to take into account the particles in the transition region (between dipole and stretched field lines) that form the partial ring current and the near-Earth tail current in that region. Calculating the model SymH* by Biot-Savart's law instead of the widely used Dessler-Parker-Sckopke (DPS) relation gives larger and more realistic values, since the currents are calculated in regions with a nondipolar magnetic field. Therefore, the boundary location and the method of SymH* calculation are of key importance for ring current data-model comparisons to be correctly interpreted.
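
For context, the Dessler-Parker-Sckopke relation referred to above is commonly quoted in the form

$$ \frac{\Delta B(0)}{B_0} \simeq -\frac{2\, E_{RC}}{3\, E_m}, $$

where $E_{RC}$ is the total kinetic energy of the ring current particles and $E_m \approx 8 \times 10^{17}$ J is the energy of the Earth's dipole field above the surface; computing SymH* from Biot-Savart's law instead accounts for currents flowing in regions where the field is nondipolar, which is the point made above.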

  7. Analysis of the 314th Contracting Squadrons Contract Management Capability Using the Contract Management Maturity Model (CMMM)

    National Research Council Canada - National Science Library

    Jackson, Jr, Carl J

    2007-01-01

    .... The purpose of this research project is to analyze the 314th Contracting Squadron contracting processes and requirement target areas for improvement efforts by the application of the Contract Management Maturity Model (CMMM...

  8. Modeling the effects of land cover and use on landscape capability for urban ungulate populations: Chapter 11

    Science.gov (United States)

    Underwood, Harold; Kilheffer, Chellby R.; Francis, Robert A.; Millington, James D. A.; Chadwick, Michael A.

    2016-01-01

    Expanding ungulate populations are causing concerns for wildlife professionals and residents in many urban areas worldwide. Nowhere is the phenomenon more apparent than in the eastern US, where urban white-tailed deer (Odocoileus virginianus) populations are increasing. Most habitat suitability models for deer have been developed in rural areas and across large (>1000 km2) spatial extents. Only recently have we begun to understand the factors that contribute to space use by deer over much smaller spatial extents. In this study, we explore the concepts, terminology, methodology and state-of-the-science in wildlife abundance modeling as applied to overabundant deer populations across heterogeneous urban landscapes. We used classified, high-resolution digital orthoimagery to extract landscape characteristics in several urban areas of upstate New York. In addition, we assessed deer abundance and distribution in 1-km2 blocks across each study area from either aerial surveys or ground-based distance sampling. We recorded the number of detections in each block and used binomial mixture models to explore important relationships between abundance and key landscape features. Finally, we cross-validated statistical models of abundance and compared covariate relationships across study sites. Study areas were characterized along a gradient of urbanization based on the proportions of impervious surfaces and natural vegetation which, based on the best-supported models, also distinguished blocks potentially occupied by deer. Models performed better at identifying occurrence of deer and worse at predicting abundance in cross-validation comparisons. We attribute poor predictive performance to differences in deer population trajectories over time. The proportion of impervious surfaces often yielded better predictions of abundance and occurrence than did the proportion of natural vegetation, which we attribute to a lack of certain land cover classes during cold and snowy winters
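
A binomial (N-)mixture likelihood of the general kind used for such repeated block counts can be sketched as below, marginalizing the latent abundance over a Poisson prior; the counts and parameter values are illustrative, not the study's data.

```python
# Sketch of a binomial mixture (N-mixture) likelihood for repeated counts at
# one block: latent abundance N ~ Poisson(lam), counts y_j ~ Binomial(N, p).
# Counts and parameter values are illustrative, not the study's data.
from math import comb, exp, factorial

def site_likelihood(counts, lam, p, n_max=100):
    """Marginal likelihood of repeated counts at one site (block)."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        prior = exp(-lam) * lam ** n / factorial(n)                # Poisson(N = n)
        detection = 1.0
        for y in counts:
            detection *= comb(n, y) * p ** y * (1 - p) ** (n - y)  # Binomial(y | n, p)
        total += prior * detection
    return total

counts = [4, 6, 5]   # deer detected on three repeat surveys of one block
for lam in (5, 10, 20):
    print(f"lambda={lam:2d}: likelihood {site_likelihood(counts, lam, p=0.5):.2e}")
```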

  9. Low Birth Weight Is Associated with a Decreased Overall Adult Health Status and Reproductive Capability - Results of a Cross-Sectional Study in Primary Infertile Patients.

    Directory of Open Access Journals (Sweden)

    Luca Boeri

Full Text Available Individuals born with low birth weight (LBW) risk cardiometabolic complications later in life. However, the impact of LBW on general health status and male reproductive function has been scantily analysed. We investigated the clinical and seminal impact of different birth weights (BW) in white-European men presenting for primary couple's infertility. Demographic, clinical, and laboratory data from 827 primary infertile men were compared with those of 373 consecutive fertile men. Patients with BW ≤2500, 2500-4200, and ≥4200 g were classified as having LBW, normal BW (NBW), and high BW (HBW), respectively. Health-significant comorbidities were scored with the Charlson Comorbidity Index (CCI). Testicular volume was assessed with a Prader orchidometer. Semen analysis values were assessed based on the 2010 WHO reference criteria. Descriptive statistics and regression models tested associations between semen parameters, clinical characteristics and BW categories. LBW, NBW and HBW were found in 71 (8.6%), 651 (78.7%) and 105 (12.7%) infertile men, respectively. LBW was more frequent in infertile patients than in fertile men (p = 0.002). Infertile patients with LBW had a higher rate of comorbidities (p = 0.003), lower mean testicular volume (p = 0.007), higher FSH (p = 0.02) and lower tT levels (p = 0.04) compared to the other BW groups. Higher rates of asthenozoospermia (p = 0.02) and teratozoospermia (p = 0.03) were also found in LBW men. In logistic regression models, LBW was univariably associated with pathologic progressive motility (p ≤ 0.02) and pathologic sperm morphology (p < 0.005). In multivariable logistic regression analysis, LBW achieved independent predictor status for both lower sperm motility and pathologic sperm morphology (all p ≤ 0.04). Only LBW independently predicted higher CCI values (p < 0.001). In conclusion, we found that LBW was more frequent in infertile than in fertile men. Infertile individuals with LBW showed a higher rate of comorbidities and significantly

  10. Test results of the SMES model coil. Pulse performance

    International Nuclear Information System (INIS)

    Hamajima, Takataro; Shimada, Mamoru; Ono, Michitaka

    1998-01-01

A model coil for superconducting magnetic energy storage (SMES model coil) has been developed to establish the component technologies needed for a small-scale 100 kWh SMES device. The SMES model coil was fabricated, and performance tests were carried out in 1996. The coil was successfully charged up to around 30 kA and down to zero at the same ramp rate of magnetic field experienced in a 100 kWh SMES device. AC loss in the coil was measured by an enthalpy method as a function of ramp rate and flat-top current. The results were evaluated by analysis and compared with short-sample test results. The measured hysteresis loss is in good agreement with that estimated from the short-sample results. It was found that the coupling loss of the coil consists of two major coupling time constants. One is a short time constant of about 200 ms, which is in agreement with the test results of a short real conductor. The other is a long time constant of about 30 s, which could not be expected from the short-sample test results. (author)

  11. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  12. How does a cadaver model work for testing ultrasound diagnostic capability for rheumatic-like tendon damage?

    DEFF Research Database (Denmark)

    Janta, Iustina; Morán, Julio; Naredo, Esperanza

    2016-01-01

    To establish whether a cadaver model can serve as an effective surrogate for the detection of tendon damage characteristic of rheumatoid arthritis (RA). In addition, we evaluated intraobserver and interobserver agreement in the grading of RA-like tendon tears shown by US, as well as the concordance between the US findings and the surgically induced lesions in the cadaver model. RA-like tendon damage was surgically induced in the tibialis anterior tendon (TAT) and tibialis posterior tendon (TPT) of ten ankle/foot fresh-frozen cadaveric specimens. Of the 20 tendons examined, six were randomly assigned a surgically induced partial tear; six a complete tear; and eight left undamaged. Three rheumatologists, experts in musculoskeletal US, assessed the quality of US imaging of the cadaveric models on a 1-to-5 Likert scale. Tendons were then categorized as having either no damage (0); partial tear (1)…

  13. Recent Additions in the Modeling Capabilities of an Open-Source Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-04-20

    WEC-Sim is a midfidelity numerical tool for modeling wave energy conversion devices. The code uses the MATLAB SimMechanics package to solve multibody dynamics and models wave interactions using hydrodynamic coefficients derived from frequency-domain boundary-element methods. This paper presents the new modeling features introduced in the latest release of WEC-Sim. The first feature discussed is the conversion of the fluid memory kernel to a state-space form. This enhancement offers a substantial computational benefit once the hydrodynamic body-to-body coefficients are introduced and the number of interactions increases exponentially with each additional body. Additional features include the ability to calculate the wave-excitation forces based on the instantaneous incident wave angle, allowing the device to weathervane, as well as the ability to import a user-defined wave elevation time series. A review of the hydrodynamic theory for each feature is provided and the successful implementation is verified using test cases.
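
    As a rough illustration of the state-space idea described above (a sketch, not WEC-Sim's actual implementation), the snippet below replaces the radiation-force convolution integral with a small linear state-space system whose matrices A, B, C are assumed to have been fitted to the impulse-response kernel beforehand; all numerical values are illustrative placeholders.

      import numpy as np

      # Placeholder 2nd-order state-space realisation assumed to approximate the
      # radiation impulse-response kernel; A, B, C are illustrative values only.
      A = np.array([[0.0, 1.0], [-4.0, -0.8]])
      B = np.array([0.0, 1.0])
      C = np.array([2.0, 0.0])

      def radiation_force_ss(velocity, dt):
          """Propagate x' = A x + B v(t) with forward Euler and return F_r(t) = C x(t)."""
          x = np.zeros(2)
          force = np.zeros_like(velocity)
          for k, v in enumerate(velocity):
              force[k] = C @ x
              x = x + dt * (A @ x + B * v)
          return force

      dt = 0.01
      t = np.arange(0.0, 20.0, dt)
      vel = np.sin(0.5 * t)             # body velocity time series (illustrative)
      Fr = radiation_force_ss(vel, dt)  # avoids evaluating a convolution at every time step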

  14. Capabilities of wind tunnels with two-adaptive walls to minimize boundary interference in 3-D model testing

    Science.gov (United States)

    Rebstock, Rainer; Lee, Edwin E., Jr.

    1989-01-01

    An initial wind tunnel test was made to validate a new wall adaptation method for 3-D models in test sections with two adaptive walls. The first part of the adaptation strategy is an on-line assessment of wall interference at the model position. The wall-induced blockage was very small at all test conditions. Lift interference occurred at higher angles of attack with the walls set aerodynamically straight. The adaptation of the top and bottom tunnel walls is aimed at achieving a correctable flow condition. The blockage was virtually zero throughout the wing planform after the wall adjustment. The lift curve measured with the walls adapted agreed very well with interference-free data for Mach 0.7, regardless of the vertical position of the wing in the test section. The 2-D wall adaptation can significantly improve the correctability of 3-D model data. Nevertheless, residual spanwise variations of wall interference are inevitable.

  15. HYSPLIT's Capability for Radiological Aerial Monitoring in Nuclear Emergencies: Model Validation and Assessment on the Chernobyl Accident

    International Nuclear Information System (INIS)

    Jung, Gunhyo; Kim, Juyoul; Shin, Hyeongki

    2007-01-01

    The Chernobyl accident took place on 26 April 1986 in Ukraine. Consequently, large amounts of radionuclides were released into the atmosphere, resulting in a widespread distribution of radioactivity throughout the northern hemisphere, mainly across Europe. A total of 31 persons died as a consequence of the accident, and about 140 persons suffered various degrees of radiation sickness and health impairment as acute health effects. As a late health effect, a real and significant increase in thyroid carcinomas has been observed among children living in the contaminated regions. Recently, a variety of atmospheric dispersion models have been developed and used around the world. Among them, the HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model developed by NOAA (National Oceanic and Atmospheric Administration)/ARL (Air Resources Laboratory) is widely used. To verify the HYSPLIT model for radiological aerial monitoring in nuclear emergencies, a case study on the Chernobyl accident is performed.

  16. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product is confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal quenching mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
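
    As a generic illustration of the kind of numerical integration mentioned above (the actual plasma-chemical kinetics are not given in the abstract), the sketch below integrates a hypothetical two-species system with an exponential temperature quench; the species, rate constants and quench law are all assumptions made for the example.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, k1, k2, T0, quench_rate):
          """Hypothetical kinetics: A -> B with an Arrhenius-like rate, B -> loss, T quenched exponentially."""
          a, b = y
          T = T0 * np.exp(-quench_rate * t)          # assumed quench law
          r1 = k1 * np.exp(-1.0 / max(T, 1e-6)) * a  # reagent consumption rate
          return [-r1, r1 - k2 * b]                  # d[A]/dt, d[B]/dt

      sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0],
                      args=(10.0, 0.5, 2.0, 1.5), dense_output=True)
      t = np.linspace(0.0, 5.0, 200)
      a, b = sol.sol(t)                              # reagent and target-product concentrations
      print("maximum target product:", b.max())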

  17. Modeling vertical loads in pools resulting from fluid injection

    International Nuclear Information System (INIS)

    Lai, W.; McCauley, E.W.

    1978-01-01

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peachbottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  18. Capabilities for Strategic Adaptation

    DEFF Research Database (Denmark)

    Distel, Andreas Philipp

    This dissertation explores capabilities that enable firms to strategically adapt to environmental changes and preserve competitiveness over time – often referred to as dynamic capabilities. While dynamic capabilities are a popular research domain, too little is known about what these capabiliti...

  19. An Interdisciplinary Approach to Designing Online Learning: Fostering Pre-Service Mathematics Teachers' Capabilities in Mathematical Modelling

    Science.gov (United States)

    Geiger, Vince; Mulligan, Joanne; Date-Huxtable, Liz; Ahlip, Rehez; Jones, D. Heath; May, E. Julian; Rylands, Leanne; Wright, Ian

    2018-01-01

    In this article we describe and evaluate processes utilized to develop an online learning module on mathematical modelling for pre-service teachers. The module development process involved a range of professionals working within the STEM disciplines including mathematics and science educators, mathematicians, scientists, in-service and pre-service…

  20. A Modified Multifrequency Passivity-Based Control for Shunt Active Power Filter With Model-Parameter-Adaptive Capability

    DEFF Research Database (Denmark)

    Mu, Xiaobin; Wang, Jiuhe; Wu, Weimin

    2018-01-01

    Passivity-based control (PBC) achieves better control performance when an accurate mathematical model of the controlled object is available. It can offer an alternative tracking control scheme for the shunt active power filter (SAPF). However, the conventional PBC-based SAPF cannot achieve zero steady...

  1. Intercultural team maturity model: Unity, diversity, capability. Achieving optimal performance when leading a multicultural project team

    OpenAIRE

    Prabhakar, G. P.; Walker, S.

    2005-01-01

    Our research helps to judge ‘maturity’ as an asset to projects and heightens awareness of situational leadership, using intercultural team maturity levels as a tool for optimal project leadership success. This study focuses on exactly how to analyse the team members’ ability to adapt to complex intercultural project environments, using an intercultural team maturity model.

  2. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change (an increase or decrease in other model variables) is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve the model’s behavior qualitatively and quantitatively.

  3. How does a cadaver model work for testing ultrasound diagnostic capability for rheumatic-like tendon damage?

    Science.gov (United States)

    Janta, Iustina; Morán, Julio; Naredo, Esperanza; Nieto, Juan Carlos; Uson, Jacqueline; Möller, Ingrid; Bong, David; Bruyn, George A W; D Agostino, Maria Antonietta; Filippucci, Emilio; Hammer, Hilde Berner; Iagnocco, Annamaria; Terslev, Lene; González, Jorge Murillo; Mérida, José Ramón; Carreño, Luis

    2016-06-01

    To establish whether a cadaver model can serve as an effective surrogate for the detection of tendon damage characteristic of rheumatoid arthritis (RA). In addition, we evaluated intraobserver and interobserver agreement in the grading of RA-like tendon tears shown by US, as well as the concordance between the US findings and the surgically induced lesions in the cadaver model. RA-like tendon damage was surgically induced in the tibialis anterior tendon (TAT) and tibialis posterior tendon (TPT) of ten ankle/foot fresh-frozen cadaveric specimens. Of the 20 tendons examined, six were randomly assigned a surgically induced partial tear; six a complete tear; and eight were left undamaged. Three rheumatologists, experts in musculoskeletal US, assessed the quality of US imaging of the cadaveric models on a 1-to-5 Likert scale. Tendons were then categorized as having either no damage (0); partial tear (1); or complete tear (2). All 20 tendons were blindly and independently evaluated twice, over two rounds, by each of the three observers. Overall, technical performance was satisfactory for all items in the two rounds (all values over 2.9 on the 1-5 Likert scale). Intraobserver and interobserver agreement for US grading of tendon damage was good (mean κ values 0.62 and 0.71, respectively), with greater reliability found for the TAT than the TPT. Concordance between US findings and experimental tendon lesions was acceptable (70-100 %), again greater for the TAT than for the TPT. A cadaver model with surgically created tendon damage can be useful in evaluating the US metric properties of RA tendon lesions.

  4. First experiments results about the engineering model of Rapsodie

    International Nuclear Information System (INIS)

    Chalot, A.; Ginier, R.; Sauvage, M.

    1964-01-01

    This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set up in a laboratory hall at Cadarache. More precisely, it covers: 1/ - The difficulties encountered during the erection and assembly of the engineering model, and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies, preheating, thermal shocks...). 2/ - The experiments and tests carried out on the two prototype control rod drive mechanisms, which led to the choice of the design of the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr]

  5. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    Energy Technology Data Exchange (ETDEWEB)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H. [Danish Meteorological Institute, Copenhagen (Denmark)] [and others

    2013-08-15

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
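
    As a minimal sketch of how ensemble dispersion results can be turned into the probabilistic statements described above (assuming one concentration or deposition field per perturbed NWP member; the field shapes and the threshold are illustrative), one can simply count exceedances across members:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical ensemble: 50 members, each a 100 x 100 deposition/concentration field.
      ensemble = rng.lognormal(mean=0.0, sigma=1.0, size=(50, 100, 100))

      threshold = 2.0  # illustrative intervention level
      # Probability map: fraction of members exceeding the threshold in each grid cell.
      p_exceed = (ensemble > threshold).mean(axis=0)

      print("cells with >50% exceedance probability:", int((p_exceed > 0.5).sum()))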

  6. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    International Nuclear Information System (INIS)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H.

    2013-08-01

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)

  7. Acoustic results of the Boeing model 360 whirl tower test

    Science.gov (United States)

    Watts, Michael E.; Jordan, David

    1990-09-01

    An evaluation is presented of whirl tower test results for the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system, intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of the whirl-tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.

  8. Exact results for the one dimensional asymmetric exclusion model

    International Nuclear Information System (INIS)

    Derrida, B.; Evans, M.R.; Pasquier, V.

    1993-01-01

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author)
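
    For reference, the matrix-product representation referred to above (for the open-boundary, totally asymmetric case with injection rate $\alpha$ and extraction rate $\beta$, in one common normalisation) writes the steady-state weight of a configuration $(\tau_1,\dots,\tau_N)$, $\tau_i \in \{0,1\}$, as

      $$ P(\tau_1,\dots,\tau_N) \propto \langle W | \prod_{i=1}^{N} \bigl( \tau_i D + (1-\tau_i) E \bigr) | V \rangle , $$

    where the non-commuting matrices $D$, $E$ and the boundary vectors satisfy

      $$ D E = D + E , \qquad \langle W | E = \tfrac{1}{\alpha} \langle W | , \qquad D | V \rangle = \tfrac{1}{\beta} | V \rangle . $$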

  9. Review of Current Standard Model Results in ATLAS

    CERN Document Server

    Brandt, Gerhard; The ATLAS collaboration

    2018-01-01

    This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from $p$-$p$ collisions at $\sqrt{s}=7,8$ TeV in LHC Run-1 as well as results using data at $\sqrt{s}=13$ TeV in LHC Run-2 are covered. The status of cross section measurements from soft QCD processes, jet production and photon production is presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of $W$ and $Z$ bosons, including a first measurement of the mass of the $W$ boson, $m_W$, are discussed. The programme to measure electroweak processes with di-boson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model descriptions and allow it to be further constrained. In addition, they allow probing of new physics that would manifest itself through extra gauge couplings, or through Standard Model gauge couplings deviating from their predicted values.

  10. MELMRK 2.0: A description of computer models and results of code testing

    International Nuclear Information System (INIS)

    Wittman, R.S.; Denny, V.; Mertol, A.

    1992-01-01

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly
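
    As a generic sketch of the multi-dimensional Newton-Raphson iteration with direct matrix solution mentioned above (not the actual MELMRK 2.0 numerics; the residual system below is a stand-in), each step solves the linearised system J dx = -F and advances the variables:

      import numpy as np

      def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
          """Solve residual(x) = 0 for a vector x by Newton-Raphson with a direct linear solve."""
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              F = residual(x)
              if np.linalg.norm(F) < tol:
                  break
              J = jacobian(x)
              x = x + np.linalg.solve(J, -F)   # direct solution of J dx = -F
          return x

      # Stand-in 2x2 nonlinear system (illustrative only): x^2 + y = 3, x + y^2 = 5.
      residual = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
      jacobian = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
      print(newton_raphson(residual, jacobian, [1.0, 1.0]))   # converges to (1, 2)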

  11. Technological Dynamics and Social Capability

    DEFF Research Database (Denmark)

    Fagerberg, Jan; Feldman, Maryann; Srholec, Martin

    2014-01-01

    for the sample as a whole between 1998 and 2008. The results indicate that social capabilities, such as well-developed public knowledge infrastructure, an egalitarian distribution of income, a participatory democracy and prevalence of public safety condition the growth of technological capabilities. Possible...

  12. Identifying 21st Century Capabilities

    Science.gov (United States)

    Stevens, Robert

    2012-01-01

    What are the capabilities necessary to meet 21st century challenges? Much of the literature on 21st century skills focuses on skills necessary to meet those challenges associated with future work in a globalised world. The result is a limited characterisation of those capabilities necessary to address 21st century social, health and particularly…

  13. Evaluating Higher Education Institutions through Agency and Resources-Capabilities Theories. A Model for Measuring the Perceived Quality of Service

    Directory of Open Access Journals (Sweden)

    José Guadalupe Vargas-Hernández

    2016-08-01

    Full Text Available The objective of this paper is to explain, through agency theory and the theory of resources and capabilities, how the assessment process works in higher education institutions. The actors involved in decision-making, and the way resources are being used, repeatedly give rise to opportunistic practices that diminish the value attributed to the evaluation, in addition to a decrease in teamwork. A model is presented to measure the perception of service quality by students of the Technological Institute of Celaya, as part of the quality control system; based on the theoretical support of several authors who have developed this topic (SERVQUAL and SERVPERF), an instrument adapted to the student area of the institution, called SERQUALITC, is generated. The paper presents the areas or departments to assess and the appropriate sample size, the number of items used per area and the Likert scale, and mentions the validation study of the instrument. Finally, a model is presented that offers a global vision of the quality measurement process, including corrective actions for services that enable continuous improvement.

  14. Evaluating Higher Education Institutions through Agency and Resources-Capabilities Theories. A Model for Measuring the Perceived Quality of Service

    Directory of Open Access Journals (Sweden)

    José G. Vargas-Hernández

    2016-12-01

    Full Text Available The objective of this paper is to explain, through agency theory and the theory of resources and capabilities, how the assessment process works in higher education institutions. The actors involved in decision-making, and the way resources are being used, repeatedly give rise to opportunistic practices that diminish the value attributed to the evaluation, in addition to a decrease in teamwork. A model is presented to measure the perception of service quality by students of the Technological Institute of Celaya, as part of the quality control system; based on the theoretical support of several authors who have developed this topic (SERVQUAL and SERVPERF), an instrument adapted to the student area of the institution, called SERQUALITC, is generated. The paper presents the areas or departments to assess and the appropriate sample size, the number of items used per area and the Likert scale, and mentions the validation study of the instrument. Finally, a model is presented that offers a global vision of the quality measurement process, including corrective actions for services that enable continuous improvement.

  15. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium-cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The load history was obtained from an ICECO analysis, using the equations of state for the source and the water. A detailed check of this solution was made to assure that the derived loading did provide the correct input. The DYNAPCON code was then used for the analysis of the prestressed concrete containment model. This analysis required the simulation of prestressing and of the response of the model to the applied transient load. The calculations correctly predict the magnitudes of displacements of the PCRV model. In addition, the displacement time histories obtained from the calculations reproduce the general features of the experimental records: the period elongation and amplitude increase as compared to an elastic solution, and also the absence of permanent displacement. However, the calculated period still underestimates that of the experiment, while the amplitude is generally somewhat too large.

  16. Thermal-Chemical Model Of Subduction: Results And Tests

    Science.gov (United States)

    Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.

    2005-12-01

    Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations have been calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples a numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and the structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence depends on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin to mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations of seismic velocity existing beneath volcanic arcs which are associated with the development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifest by > 20% variations in the local Poisson ratio, as compared to variations of ~ 2% expected as a consequence of temperature variation within the mantle wedge.

  17. Reliability–based economic model predictive control for generalised flow–based networks including actuators’ health–aware capabilities

    Directory of Open Access Journals (Sweden)

    Grosso Juan M.

    2016-09-01

    Full Text Available This paper proposes a reliability-based economic model predictive control (MPC) strategy for the management of generalised flow-based networks, integrating some ideas on network service reliability, dynamic safety stock planning, and degradation of equipment health. The proposed strategy is based on a single-layer economic optimisation problem with dynamic constraints, which includes two enhancements with respect to existing approaches. The first enhancement considers chance-constraint programming to compute an optimal inventory replenishment policy based on a desired risk acceptability level, leading to dynamic allocation of safety stocks in flow-based networks to satisfy non-stationary flow demands. The second enhancement computes a smart distribution of the control effort and maximises the actuators’ availability by estimating their degradation and reliability. The proposed approach is illustrated with an application to water transport networks, using the Barcelona network as the case study.
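
    To illustrate the chance-constraint idea mentioned above (a standard reformulation under a Gaussian demand assumption, not necessarily the exact one used in the paper), a storage-level constraint $\Pr\{x_k \ge d_k\} \ge 1-\delta_x$ with $d_k \sim \mathcal{N}(\mu_k, \sigma_k^2)$ can be written deterministically as

      $$ x_k \;\ge\; \mu_k + \Phi^{-1}(1-\delta_x)\,\sigma_k , $$

    where $\Phi^{-1}$ is the standard normal quantile function; the resulting margin plays the role of a dynamically allocated safety stock that grows with the demand uncertainty and with the desired service level.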

  18. Measurement model choice influenced randomized controlled trial results.

    Science.gov (United States)

    Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

    2016-11-01

    In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and of group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented: a real-life RCT and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in the estimated trend of around one standard deviation was found when sum scores were used, whereas IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
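
    As a minimal sketch of the difference between the two measurement models discussed above (a 2PL IRT model is assumed; the item parameters and response patterns are illustrative only), two response patterns with the same sum score can yield different IRT ability estimates:

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Assumed 2PL item parameters: discriminations a and difficulties b for 5 items.
      a = np.array([0.5, 0.8, 1.0, 1.5, 2.0])
      b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

      def theta_ml(responses):
          """Maximum-likelihood ability estimate under the 2PL model."""
          y = np.asarray(responses, dtype=float)
          def neg_loglik(theta):
              p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
              return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
          return minimize_scalar(neg_loglik, bounds=(-4.0, 4.0), method="bounded").x

      pattern_1 = [1, 1, 1, 0, 0]   # sum score 3, easy items correct
      pattern_2 = [0, 0, 1, 1, 1]   # sum score 3, hard items correct
      print(theta_ml(pattern_1), theta_ml(pattern_2))  # same sum score, different IRT scores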

  19. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  20. The New MCNP6 Depletion Capability

    International Nuclear Information System (INIS)

    Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.

    2012-01-01

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

  1. SR-Site groundwater flow modelling methodology, setup and results

    International Nuclear Information System (INIS)

    Selroos, Jan-Olof; Follin, Sven

    2010-12-01

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed; the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report

  2. Geochemical controls on shale groundwaters: Results of reaction path modeling

    International Nuclear Information System (INIS)

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250/degree/C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs

  3. Loss of spent fuel pool cooling PRA: Model and results

    International Nuclear Information System (INIS)

    Siu, N.; Khericha, S.; Conroy, S.; Beck, S.; Blackman, H.

    1996-09-01

    This letter report documents models for quantifying the likelihood of loss of spent fuel pool cooling; models for identifying post-boiling scenarios that lead to core damage; qualitative and quantitative results generated for a selected plant that account for plant design and operational practices; a comparison of these results and those generated from earlier studies; and a review of available data on spent fuel pool accidents. The results of this study show that for a representative two-unit boiling water reactor, the annual probability of spent fuel pool boiling is 5 x 10^-5 and the annual probability of flooding associated with loss of spent fuel pool cooling scenarios is 1 x 10^-3. Qualitative arguments are provided to show that the likelihood of core damage due to spent fuel pool boiling accidents is low for most US commercial nuclear power plants. It is also shown that, depending on the design characteristics of a given plant, the likelihood of either: (a) core damage due to spent fuel pool-associated flooding, or (b) spent fuel damage due to pool dryout, may not be negligible.

  4. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed; the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  5. Present capabilities and future requirements for computer-aided geometric modeling in the design and manufacture of gas turbine

    Science.gov (United States)

    Caille, E.; Propen, M.; Hoffman, A.

    1984-01-01

    Gas turbine engine design requires the ability to rapidly develop complex structures which are subject to severe thermal and mechanical operating loads. As in all facets of the aerospace industry, engine designs are constantly driving towards increased performance, higher temperatures, higher speeds, and lower weight. The ability to address such requirements in a relatively short time frame has resulted in a major thrust towards integrated design/analysis/manufacturing systems. These computer driven graphics systems represent a unique challenge, with major payback opportunities if properly conceived, implemented, and applied.

  6. Knowledge Operation Capability Evaluation Model and Strategic Orientation of Supply Chain: Exploratory Research Based on View of Ecology

    Science.gov (United States)

    Zhou, Wen-Yong; Song, Ze-Qian

    The competitiveness of a supply chain (SC) correlates intimately with its knowledge operation (KO). In order to realize better assessment value, this paper constructs an evaluation framework for the knowledge operation of an SC, together with a detailed index system. Drawing on the theory of ecology, it expounds the evaluation orientation and future research directions from the viewpoints of comprehensiveness and adaptability. Additionally, a case about the Toyota recall-gate is analyzed. The research provides a two-dimensional results-evaluation orientation which may help enterprises make the right decisions about their supply chains.

  7. Lightweighting Automotive Materials for Increased Fuel Efficiency and Delivering Advanced Modeling and Simulation Capabilities to U.S. Manufacturers

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Steve

    2013-09-11

    Abstract The National Center for Manufacturing Sciences (NCMS) worked with the U.S. Department of Energy (DOE), National Energy Technology Laboratory (NETL), to bring together research and development (R&D) collaborations to develop and accelerate the knowledge base and infrastructure for lightweighting materials and manufacturing processes for use in structural applications in the automotive sector. The purpose/importance of this DOE program: • 2016 CAFE standards. • Automotive industry adoption of lightweighting material concepts in the manufacture of production vehicles. • Development and manufacture of advanced research tools for modeling and simulation (M&S) applications to reduce manufacturing and material costs. • U.S. competitiveness that will help drive the development and manufacture of the next generation of materials. NCMS established a focused portfolio of applied R&D projects utilizing lightweighting materials for manufacture into automotive structures and components. Areas that were targeted in this program: • Functionality of new lightweighting materials to meet present safety requirements. • Manufacturability using new lightweighting materials. • Cost reduction for the development and use of new lightweighting materials. The automotive industry’s future continuously evolves through innovation, and lightweight materials are key to achieving a new era of lighter, more efficient vehicles. Lightweight materials are among the technical advances needed to achieve fuel/energy efficiency and reduce carbon dioxide (CO2) emissions: • Establish design criteria methodology to identify the best materials for lightweighting. • Employ state-of-the-art design tools for optimum material development for their specific applications. • Match new manufacturing technology to production volume. • Address new process variability with new production-ready processes.

  8. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade...

  9. Preliminary results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Matsumoto, T.; Komine, K.; Arai, S.

    1997-01-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented

  10. Results of the ITER toroidal field model coil project

    International Nuclear Information System (INIS)

    Salpietro, E.; Maix, R.

    2001-01-01

    Within the scope of the ITER EDA, one of the seven largest projects was devoted to the development, manufacture and testing of a Toroidal Field Model Coil (TFMC). The industry consortium AGAN manufactured the TFMC based on a conceptual design developed by the ITER EDA EU Home Team. The TFMC was completed and assembled in the test facility TOSKA of the Forschungszentrum Karlsruhe in the first half of 2001. The first testing phase started in June 2001 and lasted till October 2001. The first results have shown that the main goals of the project have been achieved.

  11. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The tests are very similar to the series of tests made for the COVA experimental program, but the vessel here is the prestressed concrete container. (orig.)

  12. INTRAVAL Finnsjoen Test - modelling results for some tracer experiments

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1994-09-01

    This report presents the results within Phase II of the INTRAVAL study. Migration experiments performed at the Finnsjoen test site were investigated. The study was done to gain an improved understanding not only of the mechanisms of tracer transport, but also of the accuracy and limitations of the model used. The model is based on the concept of a dual porosity medium, taking into account one-dimensional advection, longitudinal dispersion, sorption onto the fracture surfaces, diffusion into connected pores of the matrix rock, and sorption onto matrix surfaces. The number of independent water-carrying zones, represented either as planar fractures or tube-like veins, may be greater than one, and the sorption processes are described either by linear or non-linear Freundlich isotherms assuming instantaneous sorption equilibrium. The diffusion of the tracer out of the water-carrying zones into the connected pore space of the adjacent rock is calculated perpendicular to the direction of the advective/dispersive flow. In the analysis, the fluid flow parameters are calibrated against the measured breakthrough curves for the conservative tracer (iodide). Subsequent fits to the experimental data for the two sorbing tracers, strontium and cesium, then involve element-dependent parameters providing information on the sorption processes and on their representation in the model. The methodology of fixing all parameters except those for sorption with breakthrough curves for non-sorbing tracers generally worked well. The investigation clearly demonstrates the necessity of taking into account pump flow rate variations at both boundaries. If this is not done, reliable conclusions on transport mechanisms or geometrical factors cannot be achieved. A two-flow-path model reproduces the measured data much better than a single-flow-path concept. (author) figs., tabs., 26 refs
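
    For orientation, a common way to write the dual-porosity transport concept described above (planar-fracture variant, in generic notation; not necessarily the exact formulation used in the study) couples advection-dispersion along the fracture coordinate $x$ with perpendicular matrix diffusion along $z$, measured from the fracture surface into the matrix:

      $$ R_f \frac{\partial c_f}{\partial t} = -v \frac{\partial c_f}{\partial x} + D_L \frac{\partial^2 c_f}{\partial x^2} + \frac{\varepsilon_p D_p}{b}\left.\frac{\partial c_m}{\partial z}\right|_{z=0}, \qquad R_m \frac{\partial c_m}{\partial t} = D_p \frac{\partial^2 c_m}{\partial z^2}, $$

    with $c_f$ and $c_m$ the concentrations in the water-carrying zone (half-aperture $b$) and in the matrix pore water, $R_f$ and $R_m$ retardation factors accounting for surface and matrix sorption, $D_L$ the longitudinal dispersion coefficient, and $\varepsilon_p D_p$ the matrix porosity times the pore diffusivity.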

  13. Portfolio Effects of Renewable Energies - Basics, Models, Exemplary Results

    Energy Technology Data Exchange (ETDEWEB)

    Wiese, Andreas; Herrmann, Matthias

    2007-07-01

    The combination of sites and technologies into so-called renewable energy portfolios, which are being developed and implemented under the same financing umbrella, is currently the subject of intense discussion in the finance world. The resulting portfolio effect may allow the prediction of a higher return with the same risk, or the same return with a lower risk - always in comparison with the investment in a single project. Models are currently being developed to analyse this subject and derive the portfolio effect. In particular, the effect of the spatial distribution, as well as the effects of using different technologies, suppliers and cost assumptions with different levels of uncertainty, are of importance. Wind parks, photovoltaics, biomass, biogas and hydropower are being considered. The status of the model development and first results are presented in the current paper. In a first example, the portfolio effect has been calculated and analysed using selected parameters for a wind energy portfolio of 39 sites distributed over Europe. It has been shown that the predicted yield, at predetermined exceedance probabilities between 75 and 90%, is 3 - 8% higher than the sum of the yields for the individual wind parks at the same probabilities. (auth)
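
    A minimal sketch of the portfolio effect described above (illustrative numbers only: independent production uncertainties and normal distributions are assumed): the exceedance-probability yield of the pooled portfolio exceeds the sum of the individual parks' exceedance-probability yields, because the uncertainties partially cancel.

      import numpy as np

      rng = np.random.default_rng(1)

      n_parks, n_sims = 39, 100_000
      mean_yield = rng.uniform(40.0, 120.0, n_parks)   # GWh/a, illustrative
      sigma = 0.15 * mean_yield                        # 15% production uncertainty

      # Independent annual yields per park (correlation between sites would weaken the effect).
      yields = rng.normal(mean_yield, sigma, size=(n_sims, n_parks))

      p = 75  # exceedance probability in percent (P75)
      p75_individual = np.percentile(yields, 100 - p, axis=0).sum()   # sum of per-park P75 yields
      p75_portfolio = np.percentile(yields.sum(axis=1), 100 - p)      # P75 of the pooled portfolio

      print(f"portfolio P75 exceeds sum of individual P75s by "
            f"{100 * (p75_portfolio / p75_individual - 1):.1f}%")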

  14. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However, when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection, and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic Peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
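
    As a generic sketch of the weighted least squares projection described above (not the authors' actual implementation; the design matrix, weights, and damping constant are placeholders), the basin mass estimates m solve a damped normal-equation system:

      import numpy as np

      rng = np.random.default_rng(2)

      n_obs, n_basins = 500, 12
      G = rng.normal(size=(n_obs, n_basins))   # placeholder design matrix mapping basin masses to observations
      W = np.eye(n_obs)                        # observation weights (identity here for simplicity)
      alpha = 0.1                              # process-noise / damping constant (acts as a constraint)

      m_true = rng.normal(size=n_basins)
      d = G @ m_true + 0.05 * rng.normal(size=n_obs)   # synthetic "GRACE-like" observations

      # Damped weighted least squares: (G^T W G + alpha I) m = G^T W d
      lhs = G.T @ W @ G + alpha * np.eye(n_basins)
      rhs = G.T @ W @ d
      m_est = np.linalg.solve(lhs, rhs)
      print(np.round(m_est - m_true, 2))       # residual errors of the basin estimates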

  15. Some exact results for the three-layer Zamolodchikov model

    International Nuclear Information System (INIS)

    Boos, H.E.; Mangazeev, V.V.

    2001-01-01

    In this paper we continue the study of the three-layer Zamolodchikov model started in our previous works (H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 3041-3054 and H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298). We analyse numerically the solutions to the Bethe ansatz equations obtained in H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298. We consider two regimes, I and II, which differ by the signs of the spherical sides, $(a_1, a_2, a_3) \to (-a_1, -a_2, -a_3)$. We accept the two-line hypothesis for regime I and the one-line hypothesis for regime II. In the thermodynamic limit we derive integral equations for the distribution densities and solve them exactly. We calculate the partition function for the three-layer Zamolodchikov model and check the compatibility of this result with the functional relations obtained in H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298. We also perform some numerical checks of our results.

  16. Preliminary time-phased TWRS process model results

    International Nuclear Information System (INIS)

    Orme, R.M.

    1995-01-01

    This report documents the first phase of efforts to model the retrieval and processing of Hanford tank waste within the constraints of an assumed tank farm configuration. This time-phased approach simulates a first try at a retrieval sequence, the batching of waste through retrieval facilities, the batching of retrieved waste through enhanced sludge washing, the batching of liquids through pretreatment and low-level waste (LLW) vitrification, and the batching of pretreated solids through high-level waste (HLW) vitrification. The results reflect the outcome of an assumed retrieval sequence that has not been tailored with respect to accepted measures of performance. The batch data, composition variability, and final waste volume projections in this report should be regarded as tentative. Nevertheless, the results provide interesting insights into time-phased processing of the tank waste. Inspection of the composition variability, for example, suggests modifications to the retrieval sequence that would further improve the uniformity of feed to the vitrification facilities. This model will be a valuable tool for evaluating suggested retrieval sequences and establishing a time-phased processing baseline. An official recommendation on tank retrieval sequence will be made in September 1995.

  17. RELAP5 nuclear plant analyzer capabilities

    International Nuclear Information System (INIS)

    Wagner, R.J.; Ransom, V.H.

    1982-01-01

    An interactive execution capability has been developed for the RELAP5 code which permits it to be used as a Nuclear Plant Analyzer. This capability has been demonstrated using a simplified primary and secondary loop model of a PWR. A variety of loss-of-feed-water accidents have been simulated using this model. The computer execution time on a CDC Cyber 176 is one half of the transient simulation time so that the results can be displayed in real time. The results of the demonstration problems are displayed in digital form on a color schematic of the plant model using a Textronics 4027 CRT terminal. The interactive feature allows the user to enter commands in much the same manner as a reactor operator

  18. Assessment of capability for modeling the core degradation in 2D geometry with ASTEC V2 integral code for VVER type of reactor

    International Nuclear Information System (INIS)

    Dimov, D.

    2011-01-01

    The ASTEC code is progressively becoming the reference European severe accident integral code, in particular through the intensification of research activities carried out since 2004. The purpose of this analysis is to assess ASTEC code modelling of the main phenomena arising during hypothetical severe accidents, and particularly in-vessel degradation in 2D geometry. The investigation covers both the early and late phases of degradation of the reactor core, as well as the determination of the corium that will enter the reactor cavity. The initiating event is a station blackout. In order to obtain severe accident conditions, failure of all active components of the emergency core cooling system is assumed. The analysis focuses on the ICARE module of the ASTEC code and particularly on the so-called MAGMA model. The aim of the study is to determine the capability of the integral code to simulate core degradation and to determine the corium composition entering the reactor cavity. (author)

  19. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    Science.gov (United States)

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies using experimental data would benefit from a fast large scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking Hopf bifurcation phenomenon and various nonlinear responses of the biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic the biological calcium behaviors with considerably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real-time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.
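
    The paper's specific calcium equations are not reproduced here; purely as an illustration of the synchronous cellular update scheme the abstract describes, the sketch below advances a large array of identical two-variable nonlinear oscillator units in lock-step with a forward-Euler rule. A FitzHugh-Nagumo oscillator stands in for the calcium dynamics, and all parameters are illustrative.

```python
import numpy as np

# Stand-in two-variable nonlinear oscillator (FitzHugh-Nagumo), used here only
# to illustrate a synchronous ("all cells update together") Euler scheme.
A, B, EPS, I_EXT = 0.7, 0.8, 0.08, 0.5
DT = 0.05          # time step
N_CELLS = 10_000   # number of cellular units updated in lock-step
N_STEPS = 4_000

v = np.random.uniform(-1.0, 1.0, N_CELLS)   # fast variable
w = np.zeros(N_CELLS)                       # slow recovery variable

for _ in range(N_STEPS):
    # every unit is advanced with the same update rule, in parallel
    dv = v - v**3 / 3.0 - w + I_EXT
    dw = EPS * (v + A - B * w)
    v, w = v + DT * dv, w + DT * dw

print("mean fast variable after run:", v.mean())
```

    On hardware, each loop iteration would correspond to one clocked update of every pipelined unit, which is the sharing idea the abstract refers to.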

  20. Developing Collaborative Product Development Capabilities

    DEFF Research Database (Denmark)

    Mahnke, Volker; Tran, Yen

    2012-01-01

    Collaborative product development capabilities support a company’s product innovation activities. In the context of the fast fashion sector, this paper examines the development of the product development capabilities (PDC) that align product development capabilities in a dual innovation context, one, slow paced, where the firm is well established and the other, fast paced, which represents a new competitive arena in which the company competes. To understand the process associated with collaborative capability development, we studied three Scandinavian fashion companies pursuing ‘dual innovation strategies’. Our analyses suggest that developing such collaboration capabilities benefits from the search for complementary practices, the combination of learning styles, and the development of weak and strong ties. Results also underscore the crucial importance of co-evolution of multi...

  1. A business analytics capability framework

    Directory of Open Access Journals (Sweden)

    Ranko Cosic

    2015-09-01

    Full Text Available Business analytics (BA capabilities can potentially provide value and lead to better organisational performance. This paper develops a holistic, theoretically-grounded and practically relevant business analytics capability framework (BACF that specifies, defines and ranks the capabilities that constitute an organisational BA initiative. The BACF was developed in two phases. First, an a priori conceptual framework was developed based on the Resource-Based View theory of the firm and a thematic content analysis of the BA literature. Second, the conceptual framework was further developed and refined using a three round Delphi study involving 16 BA experts. Changes from the Delphi study resulted in a refined and confirmed framework including detailed capability definitions, together with a ranking of the capabilities based on importance. The BACF will help academic researchers and industry practitioners to better understand the capabilities that constitute an organisational BA initiative and their relative importance. In future work, the capabilities in the BACF will be operationalised to measure their as-is status, thus enabling organisations to identify key areas of strength and weakness and prioritise future capability improvement efforts.

  2. Graph and model transformation tools for model migration : empirical results from the transformation tool contest

    NARCIS (Netherlands)

    Rose, L.M.; Herrmannsdoerfer, M.; Mazanek, S.; Van Gorp, P.M.E.; Buchwald, S.; Horn, T.; Kalnina, E.; Koch, A.; Lano, K.; Schätz, B.; Wimmer, M.

    2014-01-01

    We describe the results of the Transformation Tool Contest 2010 workshop, in which nine graph and model transformation tools were compared for specifying model migration. The model migration problem—migration of UML activity diagrams from version 1.4 to version 2.2—is non-trivial and practically

  3. Laboratory microfusion capability study

    International Nuclear Information System (INIS)

    1993-05-01

    The purpose of this study is to elucidate the issues involved in developing a Laboratory Microfusion Capability (LMC) which is the major objective of the Inertial Confinement Fusion (ICF) program within the purview of the Department of Energy's Defense Programs. The study was initiated to support a number of DOE management needs: to provide insight for the evolution of the ICF program; to afford guidance to the ICF laboratories in planning their research and development programs; to inform Congress and others of the details and implications of the LMC; to identify criteria for selection of a concept for the Laboratory Microfusion Facility and to develop a coordinated plan for the realization of an LMC. As originally proposed, the LMC study was divided into two phases. The first phase identifies the purpose and potential utility of the LMC, the regime of its performance parameters, driver independent design issues and requirements, its development goals and requirements, and associated technical, management, staffing, environmental, and other developmental and operational issues. The second phase addresses driver-dependent issues such as specific design, range of performance capabilities, and cost. The study includes four driver options; the neodymium-glass solid state laser, the krypton fluoride excimer gas laser, the light-ion accelerator, and the heavy-ion induction linear accelerator. The results of the Phase II study are described in the present report

  4. MHTGR inherent heat transfer capability

    International Nuclear Information System (INIS)

    Berkoe, J.M.

    1992-01-01

    This paper reports on the Commercial Modular High Temperature Gas-Cooled Reactor (MHTGR) which achieves improved reactor safety performance and reliability by utilizing a completely passive natural convection cooling system called the RCCS to remove decay heat in the event that all active cooling systems fail to operate. For the highly improbable condition that the RCCS were to become non-functional following a reactor depressurization event, the plant would be forced to rely upon its inherent thermo-physical characteristics to reject decay heat to the surrounding earth and ambient environment. A computational heat transfer model was created to simulate such a scenario. Plant component temperature histories were computed over a period of 20 days into the event. The results clearly demonstrate the capability of the MHTGR to maintain core integrity and provide substantial lead time for taking corrective measures

  5. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
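
    The abstract notes that the analytic grid sensitivities were validated against an overall finite difference method. The sketch below illustrates that cross-check on a toy response function; the function and the perturbation size are hypothetical and stand in for an actual finite element solve.

```python
import numpy as np

def response(x):
    """Toy structural response (e.g., a displacement) as a function of a
    shape/grid design variable x. Stands in for a finite element solve."""
    return x**3 + 2.0 * x

def analytic_sensitivity(x):
    """Closed-form derivative of the toy response."""
    return 3.0 * x**2 + 2.0

def central_difference(f, x, h=1e-4):
    """Overall finite-difference estimate used to validate the analytic value."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

x0 = 1.7
print("analytic sensitivity   :", analytic_sensitivity(x0))
print("finite-difference check:", central_difference(response, x0))
```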

  6. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    Full Text Available In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model, and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are also given to illustrate different aging properties and stochastic comparisons of the model.

  7. VNIR spectral modeling of Mars analogue rocks: first results

    Science.gov (United States)

    Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.

    Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding of their origin, evolution, and internal structures. Technological improvements of remote sensors and associated implications for planetary studies have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues for planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and petrologic studies of Martian SNC meteorites. Thermal emission data were used to suggest two lithologies occurring on Mars surface: basalt with abundant plagioclase and clinopyroxene and andesite, dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene, and have been interpreted as cumulates [6]. Based upon these orbital observations future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to qualitative and quantitative analysis of remotely-acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, to chiefly particulate samples and as a result cannot be considered as objective techniques for predicting the compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks and developing objective criteria in the modeling are the next required steps. This is the focus of the present investigation. We present results of

  8. Tapering of the CHESS-APS undulator: Results and modelling

    International Nuclear Information System (INIS)

    Lai, B.; Viccaro, P.J.; Dejus, R.; Gluskin, E.; Yun, W.B.; McNulty, I.; Henderson, C.; White, J.; Shen, Q.; Finkelstein, K.

    1992-01-01

    When the magnetic gap of an undulator is tapered along the beam direction, the slowly varying peak field B0 introduces a spread in the value of the deflection parameter K. The result is a broad energy-band undulator that still maintains a high degree of spatial collimation. These properties are very useful for EXAFS and energy dispersive techniques. We have characterized the CHESS-APS undulator (period λu = 3.3 cm) at one tapered configuration (10% change of the magnetic gap from one end of the undulator to the other). Spatial distribution and energy spectra of the first three harmonics through a pinhole were measured. The on-axis first harmonic width increased from 0.27 keV to 0.61 keV (FWHM) at the central energy of E1 = 6.6 keV (average K = 0.69). Broadening in the angular distribution due to tapering was minimal. These results will be compared with computer modelling which simulates the actual electron trajectory in the tapered case

  9. Building Service Provider Capabilities

    DEFF Research Database (Denmark)

    Brandl, Kristin; Jaura, Manya; Ørberg Jensen, Peter D.

    2015-01-01

    In this paper we study whether and how the interaction between clients and the service providers contributes to the development of capabilities in service provider firms. In situations where such a contribution occurs, we analyze how different types of activities in the production process of the services, such as sequential or reciprocal task activities, influence the development of different types of capabilities. We study five cases of offshore-outsourced knowledge-intensive business services that are distinguished according to their reciprocal or sequential task activities in their production process. We find that clients influence the development of human capital capabilities and management capabilities in reciprocally produced services, while in sequentially produced services clients influence the development of organizational capital capabilities and management capital capabilities.

  10. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Energy Technology Data Exchange (ETDEWEB)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E. [Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States); Hill, Patrick M. [Department of Human Oncology, University of Wisconsin, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Gao, Mingcheng; Laub, Steve; Pankuch, Mark [Division of Medical Physics, CDH Proton Center, 4455 Weaver Parkway, Warrenville, Illinois 60555 (United States)

    2015-03-15

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
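
    As a rough illustration of the fluence parameterisation described above (an asymmetric Gaussian in the beam's eye view with a separate sigma on either side of the dose maximum along each axis), the sketch below evaluates such a profile on a grid. The numerical values are placeholders, not fitted DCS parameters.

```python
import numpy as np

def asymmetric_gaussian_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Asymmetric 2D Gaussian: a different sigma is used on each side of the
    peak along each BEV axis (sx1 for x < mu_x, sx2 for x >= mu_x, etc.)."""
    sig_x = np.where(x < mu_x, sx1, sx2)
    sig_y = np.where(y < mu_y, sy1, sy2)
    return np.exp(-0.5 * ((x - mu_x) / sig_x) ** 2) * \
           np.exp(-0.5 * ((y - mu_y) / sig_y) ** 2)

# Placeholder beamlet: trimmed (sharper) on the +x and +y sides.
xx, yy = np.meshgrid(np.linspace(-15, 15, 121), np.linspace(-15, 15, 121))
f = asymmetric_gaussian_fluence(xx, yy, mu_x=0.0, mu_y=0.0,
                                sx1=4.0, sx2=2.0, sy1=4.0, sy2=2.5)
print("peak relative fluence:", f.max(), "at the grid centre")
```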

  11. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    International Nuclear Information System (INIS)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E.; Hill, Patrick M.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ x1 ,σ x2 ,σ y1 ,σ y2 ) together with the spatial location of the maximum dose (μ x ,μ y ). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets

  12. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Science.gov (United States)

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287

  13. Demixing in a metal halide lamp, results from modelling

    NARCIS (Netherlands)

    Beks, M.L.; Hartgers, A.; Mullen, van der J.J.A.M.

    2006-01-01

    Convection and diffusion in the discharge region of a metal halide lamp are studied using a computer model built with the plasma modeling package Plasimo. A model lamp containing mercury and sodium iodide is studied. The effects of the total lamp pressure on the degree of segregation of the light

  14. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    Full Text Available In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.

  15. Exotic B=2 states in the SU(2) Skyrme model and other recent results in the B=1 sector

    International Nuclear Information System (INIS)

    Schwesinger, B.

    1986-01-01

    Effective theories with surprising phenomenological success immediately prompt the suspicion that they are intimately connected to a more fundamental theory. In the case of the Skyrme model things have gone the other way round: first there was the finding that the large-Nc limit of QCD results in an effective theory of free mesons where baryons emerge as solitons from meson fields. Subsequently the long forgotten Skyrme model was unearthed by Witten as a possible candidate for such a theory. Examined in the light of its phenomenological capabilities, the Skyrme model led to the surprising success it enjoys until now. (orig./BBOE)

  16. Waste glass corrosion modeling: Comparison with experimental results

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-01-01

    Models for borosilicate glass dissolution must account for the processes of (1) kinetically-controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models are necessary before the models can be used in repository performance assessment calculations

  17. Channel flow and trichloroethylene treatment in a partly iron-filled fracture: Experimental and model results

    Science.gov (United States)

    Cai, Zuansi; Merly, Corrine; Thomson, Neil R.; Wilson, Ryan D.; Lerner, David N.

    2007-08-01

    Technical developments have now made it possible to emplace granular zero-valent iron (Fe 0) in fractured media to create a Fe 0 fracture reactive barrier (Fe 0 FRB) for the treatment of contaminated groundwater. To evaluate this concept, we conducted a laboratory experiment in which trichloroethylene (TCE) contaminated water was flushed through a single uniform fracture created between two sandstone blocks. This fracture was partly filled with what was intended to be a uniform thickness of iron. Partial treatment of TCE by iron demonstrated that the concept of a Fe 0 FRB is practical, but was less than anticipated for an iron layer of uniform thickness. When the experiment was disassembled, evidence of discrete channelised flow was noted and attributed to imperfect placement of the iron. To evaluate the effect of the channel flow, an explicit Channel Model was developed that simplifies this complex flow regime into a conceptualised set of uniform and parallel channels. The mathematical representation of this conceptualisation directly accounts for (i) flow channels and immobile fluid arising from the non-uniform iron placement, (ii) mass transfer from the open fracture to iron and immobile fluid regions, and (iii) degradation in the iron regions. A favourable comparison between laboratory data and the results from the developed mathematical model suggests that the model is capable of representing TCE degradation in fractures with non-uniform iron placement. In order to apply this Channel Model concept to a Fe 0 FRB system, a simplified, or implicit, Lumped Channel Model was developed where the physical and chemical processes in the iron layer and immobile fluid regions are captured by a first-order lumped rate parameter. The performance of this Lumped Channel Model was compared to laboratory data, and benchmarked against the Channel Model. The advantages of the Lumped Channel Model are that the degradation of TCE in the system is represented by a first
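
    The Lumped Channel Model described above collapses mass transfer into the iron and reaction there into a single first-order rate parameter. Assuming steady plug flow through a channel of length L at velocity v, that lumping gives C_out = C_in * exp(-k_lumped * L / v); the sketch below evaluates this expression with placeholder values, not the experimental parameters of the study.

```python
import math

def lumped_channel_outlet(c_in, k_lumped, length, velocity):
    """Steady-state outlet concentration for plug flow through one channel with
    a lumped first-order degradation rate applied over the residence time."""
    residence_time = length / velocity          # s
    return c_in * math.exp(-k_lumped * residence_time)

# Placeholder values only (normalised TCE concentration at the inlet = 1.0).
c_out = lumped_channel_outlet(c_in=1.0,
                              k_lumped=2.0e-4,  # 1/s, lumped rate parameter
                              length=0.5,       # m, channel length
                              velocity=1.0e-4)  # m/s, channel velocity
print("fraction of TCE remaining at the outlet:", round(c_out, 3))
```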

  18. Phase separated membrane bioreactor - Results from model system studies

    Science.gov (United States)

    Petersen, G. R.; Seshan, P. K.; Dunlop, E. H.

    1989-01-01

    The operation and evaluation of a bioreactor designed for high intensity oxygen transfer in a microgravity environment is described. The reactor itself consists of a zero headspace liquid phase separated from the air supply by a long length of silicone rubber tubing through which the oxygen diffuses in and the carbon dioxide diffuses out. Mass transfer studies show that the oxygen is film diffusion controlled both externally and internally to the tubing and not by diffusion across the tube walls. Methods of upgrading the design to eliminate these resistances are proposed. Cell growth was obtained in the fermenter using Saccharomyces cerevisiae showing that this concept is capable of sustaining cell growth in the terrestrial simulation.

  19. Phase separated membrane bioreactor: Results from model system studies

    Science.gov (United States)

    Petersen, G. R.; Seshan, P. K.; Dunlop, E. H.

    The operation and evaluation of a bioreactor designed for high intensity oxygen transfer in a microgravity environment is described. The reactor itself consists of a zero headspace liquid phase separated from the air supply by a long length of silicone rubber tubing through which the oxygen diffuses in and the carbon dioxide diffuses out. Mass transfer studies show that the oxygen is film diffusion controlled both externally and internally to the tubing and not by diffusion across the tube walls. Methods of upgrading the design to eliminate these resistances are proposed. Cell growth was obtained in the fermenter using Saccharomyces cerevisiae showing that this concept is capable of sustaining cell growth in the terrestrial simulation.

  20. Argonne Fuel Cycle Facility ventilation system -- modeling and results

    International Nuclear Information System (INIS)

    Mohr, D.; Feldman, E.E.; Danielson, W.F.

    1995-01-01

    This paper describes an integrated study of the Argonne-West Fuel Cycle Facility (FCF) interconnected ventilation systems during various operations. Analyses and test results include first a nominal condition reflecting balanced pressures and flows followed by several infrequent and off-normal scenarios. This effort is the first study of the FCF ventilation systems as an integrated network wherein the hydraulic effects of all major air systems have been analyzed and tested. The FCF building consists of many interconnected regions in which nuclear fuel is handled, transported and reprocessed. The ventilation systems comprise a large number of ducts, fans, dampers, and filters which together must provide clean, properly conditioned air to the worker occupied spaces of the facility while preventing the spread of airborne radioactive materials to clean areas or the atmosphere. This objective is achieved by keeping the FCF building at a partial vacuum in which the contaminated areas are kept at lower pressures than the other worker occupied spaces. The ventilation systems of FCF and the EBR-II reactor are analyzed as an integrated totality, as demonstrated. We then developed the network model shown in Fig. 2 for the TORAC code. The scope of this study was to assess the measured results from the acceptance/flow balancing testing and to predict the effects of power failures, hatch and door openings, single-failure faulted conditions, EBR-II isolation, and other infrequent operations. The studies show that the FCF ventilation systems are very controllable and remain stable following off-normal events. In addition, the FCF ventilation system complex is essentially immune to reverse flows and spread of contamination to clean areas during normal and off-normal operation

  1. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Programs Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end crossvalidation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required the interpretation of a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  2. Final model independent result of DAMA/LIBRA-phase1

    Energy Technology Data Exchange (ETDEWEB)

    Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma ''Tor Vergata'', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma ''Tor Vergata'', Rome (Italy); Belli, P. [INFN, sez. Roma ''Tor Vergata'', Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma ''La Sapienza'', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma ''Tor Vergata'', Rome (Italy); Universita di Roma ''Tor Vergata'', Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)

    2013-12-15

    The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model independent DM annual modulation signature by using highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is: (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
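
    As an illustration of the model-independent annual-modulation analysis referred to above, the sketch below fits S(t) = S0 + Sm*cos(2*pi*(t - t0)/T) to a synthetic single-hit rate series; the data are generated for the example only and are not DAMA/LIBRA data.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulated_rate(t, s0, sm, t0, period):
    """Constant rate plus an annual cosine modulation (t in days)."""
    return s0 + sm * np.cos(2.0 * np.pi * (t - t0) / period)

# Synthetic data only: one point every ~15 days over seven "annual cycles".
rng = np.random.default_rng(0)
t = np.arange(0.0, 7 * 365.25, 15.0)
truth = modulated_rate(t, s0=1.0, sm=0.011, t0=144.0, period=365.25)
rate = truth + rng.normal(0.0, 0.004, t.size)

p0 = (1.0, 0.01, 150.0, 365.0)                       # initial guess
popt, pcov = curve_fit(modulated_rate, t, rate, p0=p0)
s0, sm, t0, period = popt
print(f"fitted amplitude {sm:.4f}, phase {t0:.1f} d, period {period / 365.25:.3f} yr")
```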

  3. Innovation ecosystem model for commercialization of research results

    Directory of Open Access Journals (Sweden)

    Vlăduţ Gabriel

    2017-07-01

    Full Text Available Innovation means creativity and added value recognised by the market. The first step in creating a sustainable commercialization of research results through a technology transfer (TT) mechanism is, on the one hand, to define the "technology" which will be transferred and, on the other hand, to define the context in which the TT mechanism works: the ecosystem. The focus must be set on technology as an entity, not as a science or a study of the practical industrial arts, and certainly not any specific applied science. The transfer object, the technology, must rely on a subjectively determined but specifiable set of processes and products. Focusing on the product alone is not sufficient for the transfer and diffusion of technology: it is not merely the product that is transferred but also knowledge of its use and application. The innovation ecosystem model brings together new companies, experienced business leaders, researchers, government officials, established technology companies, and investors. This environment provides those new companies with a wealth of technical expertise, business experience, and access to capital that supports innovation in the early stages of growth.

  4. The necessity of environmental capability evaluation in physical planning

    International Nuclear Information System (INIS)

    Tavakol, M.

    1997-01-01

    For the physical planning of the Vaz Research Forest, the necessity of site selection in the context of land evaluation has been discussed. The project has studied the evaluation of the ecological capability of the land in the following stages: 1- Studying the physical and biological resources in the context of a GIS system. 2- Analysis and integration of data. 3- Evaluation of the ecological capability of the land by employing a suitable ecological model. 4- Site selection by comparison and coordination of the principles used in the model with the results of the ecological capability evaluation in the GIS system. 5- Site and environmental design based on ecological principles

  5. Energetic neutral atom imaging with the Polar CEPPAD/IPS instrument: Initial forward modeling results

    International Nuclear Information System (INIS)

    Henderson, M.G.; Reeves, G.D.; Moore, K.R.; Spence, H.E.; Jorgensen, A.M.; Roelof, E.C.

    1997-01-01

    Although the primary function of the CEP-PAD/IPS instrument on Polar is the measurement of energetic ions in-situ, it has also proven to be a very capable Energetic neutral Atom (ENA) imager. Raw ENA images are currently being constructed on a routine basis with a temporal resolution of minutes during both active and quiet times. However, while analyses of these images by themselves provide much information on the spatial distribution and dynamics of the energetic ion population in the ring current, detailed modeling is required to extract the actual ion distributions. In this paper, the authors present the initial results of forward modeling an IPS ENA image obtained during a small geo-magnetic storm on June 9, 1997. The equatorial ion distribution inferred with this technique reproduces the expected large noon/midnight and dawn/dusk asymmetries. The limitations of the model are discussed and a number of modifications to the basic forward modeling technique are proposed which should significantly improve its performance in future studies

  6. Results from the Savannah River Laboratory model validation workshop

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1981-01-01

    To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorology data obtained at SRL during 1975 to 1977. Individual laboratories used models to calculate daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations

  7. Waste glass corrosion modeling: Comparison with experimental results

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1993-11-01

    A chemical model of glass corrosion will be used to predict the rates of release of radionuclides from borosilicate glass waste forms in high-level waste repositories. The model will be used both to calculate the rate of degradation of the glass, and also to predict the effects of chemical interactions between the glass and repository materials such as spent fuel, canister and container materials, backfill, cements, grouts, and others. Coupling between the degradation processes affecting all these materials is expected. Models for borosilicate glass dissolution must account for the processes of (1) kinetically-controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models are necessary before the models can be used in repository performance assessment calculations
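
    A rate law consistent with transition state theory, of the kind referred to above, is commonly written as rate = k0*S*exp(-Ea/RT)*(1 - Q/K), where S is the reactive surface area and Q/K is the saturation state of the solution with respect to the rate-controlling phase. The sketch below evaluates this general affinity form with illustrative constants, not fitted glass parameters.

```python
import math

R_GAS = 8.314  # J/(mol K)

def dissolution_rate(k0, surface_area, ea, temp_k, q_over_k):
    """Generic TST-style affinity rate law: the rate goes to zero as the
    solution approaches saturation (Q/K -> 1)."""
    arrhenius = k0 * math.exp(-ea / (R_GAS * temp_k))
    return surface_area * arrhenius * (1.0 - q_over_k)

# Illustrative values only: 90 degC leach test, increasing silica saturation.
for q_over_k in (0.0, 0.5, 0.9, 0.99):
    r = dissolution_rate(k0=1.0e6, surface_area=10.0, ea=7.5e4,
                         temp_k=363.15, q_over_k=q_over_k)
    print(f"Q/K = {q_over_k:4.2f}  ->  relative rate {r:.3e}")
```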

  8. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F.

    1999-07-01

    A dynamical downscaling is presented that allows an estimation of potential effects of climate change on the North Sea. Therefore, the ocean general circulation model OPYC is adapted for application on a shelf by adding a lateral boundary formulation and a tide model. In this set-up the model is forced, first, with data from the ECMWF reanalysis for model validation and the study of the natural variability, and, second, with data from climate change experiments to estimate the effects of climate change on the North Sea. (orig.)

  9. Evaluation of land capability and suitability for irrigated agriculture in the Emirate of Abu Dhabi, UAE, using an integrated AHP-GIS model

    Science.gov (United States)

    Aldababseh, A.; Temimi, M.; Maghelal, P.; Branch, O.; Wulfmeyer, V.

    2017-12-01

    The rapid economic development and high population growth in the United Arab Emirates (UAE) have impacted utilization and management of agricultural land. The development of large-scale agriculture in unsuitable areas can severely impact groundwater resources in the UAE. More than 60% of UAE's water resources are being utilized by the agriculture, forestry, and urban greenery sectors. However, the contribution of the agricultural sector to the national GDP is negligible. Several programs have been introduced by the government aimed at achieving sustainable agriculture whilst preserving valuable water resources. Local subsistence farming has declined considerably during the past few years, due to low soil moisture content, sandy soil texture, lack of arable land, natural climatic disruptions, water shortages, and declining rainfall. The limited production of food and the continuing rise in food prices on a global and local level are expected to increase low-income households' vulnerability to food insecurity. This research aims at developing a suitability index for the evaluation and prioritization of areas in the UAE for large-scale agriculture. The AHP-GIS integrated model developed in this study facilitates a step-by-step aggregation of a large number of datasets representing the most important criteria, and the generation of agricultural suitability and land capability maps. To provide the necessary criteria to run the model, a comprehensive geospatial database was built, including climate conditions, water potential, soil capabilities, topography, and land management. A hierarchical structure is built as a decomposition structure that includes all criteria and sub-criteria used to define land suitability based on literature review and experts' opinions. Pairwise comparison matrices are used to calculate the criteria weights. The GIS Model Builder function is used to integrate all spatial processes to model land suitability. In order to preserve some flexibility
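
    To make the AHP step concrete, the sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio; the example matrix is hypothetical and does not reproduce the study's expert judgements.

```python
import numpy as np

# Saaty's random consistency index by matrix size (n = 1..9).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal pairwise matrix."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)                  # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalised criterion weights
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    cr = ci / RANDOM_INDEX[n]                    # consistency ratio (< 0.1 is acceptable)
    return w, cr

# Hypothetical comparison of four criteria: soil, water, climate, topography.
matrix = [[1,   3,   5,   7],
          [1/3, 1,   3,   5],
          [1/5, 1/3, 1,   3],
          [1/7, 1/5, 1/3, 1]]
weights, cr = ahp_weights(matrix)
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```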

  10. Capability Handbook- offline metrology

    DEFF Research Database (Denmark)

    Islam, Aminul; Marhöfer, David Maximilian; Tosello, Guido

    This offline metrological capability handbook has been made in relation to HiMicro Task 3.3. The purpose of this document is to assess the metrological capability of the HiMicro partners and to gather the information on all available metrological instruments in one single document. It provides...

  11. Dynamic Capabilities and Performance

    DEFF Research Database (Denmark)

    Wilden, Ralf; Gudergan, Siegfried P.; Nielsen, Bo Bernhard

    2013-01-01

    are contingent on the competitive intensity faced by firms. Our findings demonstrate the performance effects of internal alignment between organizational structure and dynamic capabilities, as well as the external fit of dynamic capabilities with competitive intensity. We outline the advantages of PLS...

  12. Developing Alliance Capabilities

    DEFF Research Database (Denmark)

    Heimeriks, Koen H.; Duysters, Geert; Vanhaverbeke, Wim

    This paper assesses the differential performance effects of learning mechanisms on the development of alliance capabilities. Prior research has suggested that different capability levels could be identified in which specific intra-firm learning mechanisms are used to enhance a firm's alliance...

  13. Telematics Options and Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Cabell [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-05

    This presentation describes the data tracking and analytical capabilities of telematics devices. Federal fleet managers can use the systems to keep their drivers safe, maintain a fuel efficient fleet, ease their reporting burden, and save money. The presentation includes an example of how much these capabilities can save fleets.

  14. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    Full Text Available HADES experiment at GSI is the only high precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  15. Spinal cord stimulation: modeling results and clinical data

    NARCIS (Netherlands)

    Struijk, Johannes J.; Struijk, J.J.; Holsheimer, J.; Barolat, Giancarlo; He, Jiping

    1992-01-01

    The potential distribution in volume conductor models of the spinal cord at cervical, midthoracic and low-thoracic levels, due to epidural stimulation, was calculated. Threshold stimuli of modeled myelinated dorsal column and dorsal root fibers were calculated and were compared with perception

  16. How to: understanding SWAT model uncertainty relative to measured results

    Science.gov (United States)

    Watershed models are being relied upon to contribute to most policy-making decisions of watershed management, and the demand for an accurate accounting of complete model uncertainty is rising. Generalized likelihood uncertainty estimation (GLUE) is a widely used method for quantifying uncertainty i...
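
    As a compact illustration of the GLUE procedure mentioned above, the sketch below samples the single parameter of a toy linear-reservoir runoff model, scores each sample with the Nash-Sutcliffe efficiency, and retains only the 'behavioural' sets above a threshold. The model and data are synthetic stand-ins, not SWAT.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_reservoir(rain, k):
    """Toy rainfall-runoff model: storage S drains as Q = k * S each step."""
    storage, flow = 0.0, []
    for p in rain:
        storage += p
        q = k * storage
        storage -= q
        flow.append(q)
    return np.array(flow)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 is perfect, <= 0 is poor)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = rng.exponential(2.0, 200)
obs = linear_reservoir(rain, k=0.3) + rng.normal(0.0, 0.05, 200)  # "observed" flow

# GLUE: Monte Carlo sampling, behavioural threshold, retained parameter range.
samples = rng.uniform(0.05, 0.8, 2000)
scores = np.array([nse(obs, linear_reservoir(rain, k)) for k in samples])
behavioural = samples[scores > 0.7]
print(f"{behavioural.size} behavioural sets; k range "
      f"{behavioural.min():.2f}-{behavioural.max():.2f}")
```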

  17. Urban traffic noise assessment by combining measurement and model results

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Graafland, F.; Wessels, P.W.; Basten, T.G.H.

    2013-01-01

    A model based monitoring system is applied on a local scale in an urban area to obtain a better understanding of the traffic noise situation. The system consists of a scalable sensor network and an engineering model. A better understanding is needed to take appropriate and cost efficient measures,

  18. Noise and dose modeling for pediatric CT optimization: preliminary results

    International Nuclear Information System (INIS)

    Miller Clemente, Rafael A.; Perez Diaz, Marlen; Mora Reyes, Yudel; Rodriguez Garlobo, Maikel; Castillo Salazar, Rafael

    2008-01-01

    Full text: A multiple linear regression model was developed to predict noise and dose in pediatric computed tomography imaging for head and abdominal examinations. Relative values of noise and of the volumetric computed tomography dose index were used to estimate the respective models. 54 images of physical phantoms were acquired. Independent variables considered included: phantom diameter, tube current and kilovoltage, x-ray beam collimation, reconstruction diameter and the equipment's post-processing filters. Predicted values show good agreement with measurements, the agreement being better for the noise model (adjusted R² = 0.953) than for the dose model (adjusted R² = 0.744). Tube current, object diameter, beam collimation and reconstruction filter were identified as the most influential factors in the models. (author)
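
    To make the regression step concrete, the sketch below fits a multiple linear regression of relative noise on acquisition factors and reports the adjusted R²; the data are simulated for illustration and are not the 54 phantom images of the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 54

# Simulated predictors: phantom diameter (cm), tube current (mA), kV, collimation (mm).
diameter = rng.uniform(10, 32, n)
current = rng.uniform(50, 300, n)
kv = rng.choice([80, 100, 120], n)
collimation = rng.choice([12, 24], n)

# Simulated response: relative noise rises with diameter, falls with mA and kV.
noise = 2.0 + 0.08 * diameter - 0.004 * current - 0.01 * kv \
        - 0.005 * collimation + rng.normal(0.0, 0.05, n)

X = np.column_stack([np.ones(n), diameter, current, kv, collimation])
beta, *_ = np.linalg.lstsq(X, noise, rcond=None)

pred = X @ beta
ss_res = np.sum((noise - pred) ** 2)
ss_tot = np.sum((noise - noise.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - X.shape[1])
print("coefficients:", np.round(beta, 4))
print("adjusted R^2:", round(r2_adj, 3))
```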

  19. The value of oxygen-isotope data and multiple discharge records in calibrating a fully-distributed, physically-based rainfall-runoff model (CRUM3) to improve predictive capability

    Science.gov (United States)

    Neill, Aaron; Reaney, Sim

    2015-04-01

    Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the of the River Eden demonstration test catchment project. The oxygen-18 data set was firstly used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to
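
    One common lumped-parameter treatment of the tracer data described above is to convolve the oxygen-18 input signal with a candidate transit-time distribution (here an exponential with mean residence time T) and compare the damped, lagged result with the stream signal. The sketch below shows that convolution on synthetic seasonal data, not the Eden catchment records.

```python
import numpy as np

def exponential_ttd(tau, mean_tt):
    """Exponential transit-time distribution g(tau) with the given mean residence time."""
    return np.exp(-tau / mean_tt) / mean_tt

def convolve_tracer(c_in, mean_tt, dt=1.0):
    """Predicted stream tracer signal: discrete convolution of the input signal
    with the transit-time distribution."""
    tau = np.arange(0.0, 10.0 * mean_tt, dt)
    g = exponential_ttd(tau, mean_tt) * dt
    return np.convolve(c_in, g)[: len(c_in)]

# Synthetic seasonal delta-18O input (per mil), weekly steps over three years.
t = np.arange(0, 3 * 52)
c_in = -8.0 + 2.0 * np.sin(2.0 * np.pi * t / 52.0)

c_stream = convolve_tracer(c_in, mean_tt=26.0, dt=1.0)   # ~6-month mean transit time
last_year = c_stream[104:]                                # skip the spin-up period
print("input amplitude: 2.0 per mil, simulated stream amplitude:",
      round((last_year.max() - last_year.min()) / 2.0, 2))
```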

  20. Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network

    Directory of Open Access Journals (Sweden)

    Viswanathan Arunachalam

    2013-01-01

    Full Text Available The classical models of a single neuron, like the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume the influence of postsynaptic potentials to last until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, like the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
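
    To make the binding-neuron idea concrete, the sketch below simulates a neuron that stores each Poisson input impulse for a fixed memory time tau, fires as soon as a threshold number of impulses are stored simultaneously, and then clears its memory; the collected interspike intervals approximate the firing time distribution discussed above. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def binding_neuron_isi(rate, tau, threshold, n_inputs=200_000):
    """Monte Carlo sketch of a binding neuron driven by a Poisson input stream.

    Each input is remembered for `tau` time units; the neuron fires as soon as
    `threshold` inputs are simultaneously stored, then forgets everything.
    Returns the array of interspike intervals."""
    arrivals = np.cumsum(rng.exponential(1.0 / rate, n_inputs))
    stored, last_spike, isis = [], 0.0, []
    for t in arrivals:
        stored = [s for s in stored if t - s < tau]   # drop forgotten impulses
        stored.append(t)
        if len(stored) >= threshold:                  # binding achieved: fire
            isis.append(t - last_spike)
            last_spike = t
            stored = []                               # memory cleared by firing
    return np.array(isis)

isi = binding_neuron_isi(rate=1.0, tau=2.0, threshold=3)
print(f"{isi.size} spikes, mean ISI = {isi.mean():.2f}, CV = {isi.std() / isi.mean():.2f}")
```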

  1. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario....

  2. Verification of Simulation Results Using Scale Model Flight Test Trajectories

    National Research Council Canada - National Science Library

    Obermark, Jeff

    2004-01-01

    .... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...

  3. Box photosynthesis modeling results for WRF/CMAQ LSM

    Data.gov (United States)

    U.S. Environmental Protection Agency — Box Photosynthesis model simulations for latent heat and ozone at 6 different FLUXNET sites. This dataset is associated with the following publication: Ran, L., J....

  4. Some Econometric Results for the Blanchard-Watson Bubble Model

    DEFF Research Database (Denmark)

    Johansen, Soren; Lange, Theis

    The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)ρy(t-1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take ρ > 1 so...
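
    A minimal simulation of the bubble process defined above, with s(t) drawn as an i.i.d. Bernoulli(p) variable and ρ > 1, is sketched below; the parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_bubble(n, rho, p, sigma=1.0, y0=0.0):
    """Blanchard-Watson type bubble: y(t) = s(t) * rho * y(t-1) + e(t),
    where s(t) ~ Bernoulli(p) (the bubble survives) and e(t) is i.i.d. noise."""
    y = np.empty(n)
    prev = y0
    for t in range(n):
        survives = rng.random() < p          # bubble survives with probability p
        e = rng.normal(0.0, sigma)
        prev = (rho * prev if survives else 0.0) + e
        y[t] = prev
    return y

path = simulate_bubble(n=1000, rho=1.05, p=0.98)
print("largest |y| on the simulated path:", round(np.abs(path).max(), 1))
```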

  5. The 3D Reference Earth Model: Status and Preliminary Results

    Science.gov (United States)

    Moulik, P.; Lekic, V.; Romanowicz, B. A.

    2017-12-01

    In the 20th century, seismologists constructed models of how average physical properties (e.g. density, rigidity, compressibility, anisotropy) vary with depth in the Earth's interior. These one-dimensional (1D) reference Earth models (e.g. PREM) have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, new datasets motivated more sophisticated efforts that yielded models of how properties vary both laterally and with depth in the Earth's interior. Though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and dataset upon which they are based, have prevented the creation of 3D community reference models. As part of the REM-3D project, we are compiling and reconciling reference seismic datasets of body wave travel-time measurements, fundamental mode and overtone surface wave dispersion measurements, and normal mode frequencies and splitting functions. These reference datasets are being inverted for a long-wavelength, 3D reference Earth model that describes the robust long-wavelength features of mantle heterogeneity. As a community reference model with fully quantified uncertainties and tradeoffs and an associated publically available dataset, REM-3D will facilitate Earth imaging studies, earthquake characterization, inferences on temperature and composition in the deep interior, and be of improved utility to emerging scientific endeavors, such as neutrino geoscience. Here, we summarize progress made in the construction of the reference long period dataset and present a preliminary version of REM-3D in the upper-mantle. In order to determine the level of detail warranted for inclusion in REM-3D, we analyze the spectrum of discrepancies between models inverted with different subsets of the

  6. A Comparative Investigation on the Capability of Modified Zerilli-Armstrong and Arrhenius-Type Constitutive Models to Describe Flow Behavior of BFe10-1-2 Cupronickel Alloy at Elevated Temperature

    Science.gov (United States)

    Cai, Jun; Lei, Ying; Wang, Kuaishe; Zhang, Xiaolu; Miao, Chengpeng; Li, Wenbing

    2016-05-01

    True stress and true strain data obtained from isothermal compression tests on a Gleeble-3800 thermo-mechanical simulator, in a wide range of temperatures (1073-1323 K) and strain rates (0.001-10 s-1), has been used to evaluate the material constants of two constitutive models: the modified Zerilli-Armstrong and the strain compensation Arrhenius-type models. Furthermore, a comparative study was conducted on the capabilities of the two models in order to represent the elevated temperature flow behavior of BFe10-1-2 cupronickel alloy. The suitability levels of these two models were evaluated by comparing the accuracy of their predictions of deformation behavior, correlation coefficient ( R), average absolute relative error ( AARE), relative errors of prediction, and the number of material constants. The results show that the predicted values of these two models agree well with the experimental values of BFe10-1-2 cupronickel alloy except at the temperature of 1123 K and the strain rate of 1 s-1. Meanwhile, the strain compensated Arrhenius-type model can track the deformation behavior of BFe10-1-2 cupronickel alloy more accurately throughout the entire temperature and strain rate range, while fewer material constants are involved in the modified Zerilli-Armstrong model.
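
    For reference, the Arrhenius-type description evaluated above is usually written through the Zener-Hollomon parameter Z = strain_rate*exp(Q/RT), with the flow stress recovered as sigma = (1/alpha)*ln{(Z/A)^(1/n) + [(Z/A)^(2/n) + 1]^(1/2)}. The sketch below evaluates this form and the R/AARE statistics with placeholder material constants, not the fitted values for BFe10-1-2 cupronickel alloy.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius_flow_stress(strain_rate, temp_k, a, alpha, n, q):
    """Flow stress from the hyperbolic-sine Arrhenius law via the
    Zener-Hollomon parameter Z = strain_rate * exp(Q / (R T))."""
    z = strain_rate * np.exp(q / (R_GAS * temp_k))
    term = (z / a) ** (1.0 / n)
    return (1.0 / alpha) * np.log(term + np.sqrt(term**2 + 1.0))

def r_and_aare(measured, predicted):
    """Correlation coefficient R and average absolute relative error (%)."""
    r = np.corrcoef(measured, predicted)[0, 1]
    aare = 100.0 * np.mean(np.abs((measured - predicted) / measured))
    return r, aare

# Placeholder material constants (illustrative only).
A, ALPHA, N, Q = 1.0e12, 0.012, 5.0, 3.0e5
rates = np.array([0.001, 0.01, 0.1, 1.0, 10.0])          # 1/s
pred = arrhenius_flow_stress(rates, temp_k=1173.0, a=A, alpha=ALPHA, n=N, q=Q)
meas = pred * (1.0 + np.random.default_rng(5).normal(0.0, 0.03, pred.size))
print("predicted flow stress (MPa):", np.round(pred, 1))
print("R = %.3f, AARE = %.2f%%" % r_and_aare(meas, pred))
```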

  7. A Mathematical Model for Reactions During Top-Blowing in the AOD Process: Validation and Results

    Science.gov (United States)

    Visuri, Ville-Valtteri; Järvinen, Mika; Kärnä, Aki; Sulasalmi, Petri; Heikkinen, Eetu-Pekka; Kupari, Pentti; Fabritius, Timo

    2017-06-01

    In earlier work, a fundamental mathematical model was proposed for side-blowing operation in the argon oxygen decarburization (AOD) process. In the preceding part "Derivation of the Model," a new mathematical model was proposed for reactions during top-blowing in the AOD process. In this model it was assumed that reactions occur simultaneously at the surface of the cavity caused by the gas jet and at the surface of the metal droplets ejected from the metal bath. This paper presents validation and preliminary results with twelve industrial heats. In the studied heats, the last combined-blowing stage was altered so that oxygen was introduced from the top lance only. Four heats were conducted using an oxygen-nitrogen mixture (1:1), while eight heats were conducted with pure oxygen. Simultaneously, nitrogen or argon gas was blown via tuyères in order to provide mixing that is comparable to regular practice. The measured carbon content varied from 0.4 to 0.5 wt pct before the studied stage to 0.1 to 0.2 wt pct after the studied stage. The results suggest that the model is capable of predicting changes in metal bath composition and temperature with a reasonably high degree of accuracy. The calculations indicate that the top slag may supply oxygen for decarburization during top-blowing. Furthermore, it is postulated that the metal droplets generated by the shear stress of top-blowing create a large mass exchange area, which plays an important role in enabling the high decarburization rates observed during top-blowing in the AOD process. The overall rate of decarburization attributable to top-blowing in the last combined-blowing stage was found to be limited by the mass transfer of dissolved carbon.

  8. The animal model determines the results of Aeromonas virulence factors

    Directory of Open Access Journals (Sweden)

    Alejandro Romero

    2016-10-01

    Full Text Available The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, a bath infection of zebrafish larvae is proposed as an alternative model to study the virulence factors of A. hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The implication of T3SS in the virulence of A. hydrophila was analysed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics only observed using invertebrate models. Infections using the mutant AH-1∆vapA lacking the gene coding for the surface S-layer suggested that this protein was not totally necessary to the bacteria once it was inside the host, but it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild type. Variations between models were evidenced using the AH-1∆rmlB, which lacks the O-antigen lipopolysaccharide (LPS, and the AH-1∆wahD, which lacks the O-antigen LPS and part of the LPS outer-core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using the AH-1ΔFlaB-J (lacking polar flagella and unable to swim and the AH-1::motX (non-motile but producing flagella. They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study

  9. FMEF/experimental capabilities

    International Nuclear Information System (INIS)

    Burgess, C.A.; Dronen, V.R.

    1981-01-01

    The Fuels and Materials Examination Facility (FMEF), under construction at the Hanford site north of Richland, Washington, will be one of the most modern facilities offering irradiated fuels and materials examination capabilities and fuel fabrication development technologies. Scheduled for completion in 1984, the FMEF will provide examination capability for fuel assemblies, fuel pins and test pins irradiated in the FFTF. Various functions of the FMEF are described, with emphasis on experimental data-gathering capabilities in the facility's Nondestructive and Destructive examination cell complex

  10. KSC Technical Capabilities Website

    Science.gov (United States)

    Nufer, Brian; Bursian, Henry; Brown, Laurette L.

    2010-01-01

    This document is the website pages that review the technical capabilities that the Kennedy Space Center (KSC) has for partnership opportunities. The purpose of this information is to make prospective customers aware of the capabilities and provide an opportunity to form relationships with the experts at KSC. The technical capabilities fall into these areas: (1) Ground Operations and Processing Services, (2) Design and Analysis Solutions, (3) Command and Control Systems / Services, (4) Materials and Processes, (5) Research and Technology Development and (6) Laboratories, Shops and Test Facilities.

  11. Judgmental Forecasting of Operational Capabilities

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Tveterås, Sigbjørn; Andersen, Torben Juul

    This paper explores a new judgmental forecasting indicator, the Employee Sensed Operational Capabilities (ESOC). The purpose of the ESOC is to establish a practical prediction tool that can provide early signals about changes in financial performance by gauging frontline employees’ sensing...... of changes in the firm’s operational capabilities. We present the first stage of the development of ESOC by applying a formative measurement approach to test the index in relation to financial performance and against an organizational commitment scale. We use distributed lag models to test whether the ESOC...
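
    Purely as an illustrative sketch of the distributed lag idea mentioned above (the variable names, lag structure and simulated data are assumptions, not the authors' specification), a regression of financial performance on current and lagged values of an employee-sensed index could look as follows.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Hypothetical quarterly data: esoc = employee-sensed capability index,
        # perf = a financial performance measure for the same firm.
        rng = np.random.default_rng(0)
        df = pd.DataFrame({"esoc": rng.normal(size=40)})
        df["perf"] = 0.5 * df["esoc"].shift(1).fillna(0) + rng.normal(scale=0.3, size=40)

        # Distributed lag model: perf_t = a + b0*esoc_t + b1*esoc_{t-1} + b2*esoc_{t-2} + e_t
        for lag in (1, 2):
            df[f"esoc_l{lag}"] = df["esoc"].shift(lag)
        df = df.dropna()
        X = sm.add_constant(df[["esoc", "esoc_l1", "esoc_l2"]])
        print(sm.OLS(df["perf"], X).fit().summary())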

  12. Recent shell-model results for exotic nuclei

    Directory of Open Access Journals (Sweden)

    Utsuno Yusuke

    2014-03-01

    Full Text Available We report on our recent advancement in the shell model and its applications to exotic nuclei, focusing on the shell evolution and large-scale calculations with the Monte Carlo shell model (MCSM. First, we test the validity of the monopole-based universal interaction (VMU as a shell-model interaction by performing large-scale shell-model calculations in two different mass regions using effective interactions which partly comprise VMU. Those calculations are successful and provide a deeper insight into the shell evolution beyond the single-particle model, in particular showing that the evolution of the spin-orbit splitting due to the tensor force plays a decisive role in the structure of the neutron-rich N ∼ 28 region and antimony isotopes. Next, we give a brief overview of recent developments in MCSM, and show that it is applicable to exotic nuclei that involve many valence orbits. As an example of its applications to exotic nuclei, shape coexistence in 32Mg is examined.

  13. Sensing capabilities of graphite based MR elastomers

    International Nuclear Information System (INIS)

    Tian, T F; Li, W H; Deng, Y M

    2011-01-01

    This paper presents both experimental and theoretical investigations of the sensing capabilities of graphite based magnetorheological elastomers (MREs). In this study, eight MRE samples with varying graphite weight fractions were fabricated, and their resistance under different magnetic fields and external loadings was measured with a multi-meter. As the graphite weight fraction increases, the resistance of the MRE samples decreases steadily. Higher magnetic fields result in a resistance increase. Based on the idealised assumption of a perfect chain structure, a mathematical model was developed to investigate the relationship between the MRE resistance and the external loading. In this model, the current flowing through the chain structure consists of both a tunnel current and a conductivity current, both of which depend on the external loading. The model parameters were identified and reconstructed from comparison with experimental results. The comparison indicates that the experimental results and the modelling predictions agree well
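
    As a toy illustration of the resistance mechanism described above (the functional forms are simplified and all parameter values are hypothetical, not the paper's equations), one can model a particle chain as a series of junctions, each carrying two parallel currents: a tunnel current whose conductance grows exponentially as the inter-particle gap closes under load, and a small ohmic conduction current.

        import numpy as np

        def chain_resistance(strain, n_junctions=100, g0=5e-9, beta=2e9, gt0=1.0, gc=1e-6):
            """Toy resistance (ohm) of one particle chain under compressive strain.

            g0   : zero-load inter-particle gap (m), hypothetical
            beta : tunnelling decay constant (1/m), hypothetical
            gt0  : tunnel conductance prefactor (S), hypothetical
            gc   : conduction-path conductance per junction (S), hypothetical
            """
            gap = g0 * (1.0 - strain)             # compression closes the gap
            g_tunnel = gt0 * np.exp(-beta * gap)  # tunnel conductance of one junction
            r_junction = 1.0 / (g_tunnel + gc)    # two parallel current paths per junction
            return n_junctions * r_junction       # junctions in series along the chain

        for eps in (0.0, 0.1, 0.2, 0.3):
            print(f"strain = {eps:.1f}  R = {chain_resistance(eps):.3e} ohm")

    With these hypothetical numbers the chain resistance drops by roughly an order of magnitude over 30% compressive strain, which is the qualitative trend the abstract describes.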

  14. Metal fires and their implications for advanced reactors. Part 3: Experimental and modeling results

    International Nuclear Information System (INIS)

    Nowlen, Steven Patrick; Figueroa, Victor G.; Olivier, Tara Jean; Hewson, John C.; Blanchat, Thomas K.

    2010-01-01

    This report details the primary results of the Laboratory Directed Research and Development project (LDRD 08-0857) Metal Fires and Their Implications for Advance Reactors. Advanced reactors may employ liquid metal coolants, typically sodium, because of their many desirable qualities. This project addressed some of the significant challenges associated with the use of liquid metal coolants, primary among these being the extremely rapid oxidation (combustion) that occurs at the high operating temperatures in reactors. The project has identified a number of areas for which gaps existed in knowledge pertinent to reactor safety analyses. Experimental and analysis capabilities were developed in these areas to varying degrees. In conjunction with team participation in a DOE gap analysis panel, focus was on the oxidation of spilled sodium on thermally massive surfaces. These are spills onto surfaces that substantially cool the sodium during the oxidation process, and they are relevant because standard risk mitigation procedures seek to move spill environments into this regime through rapid draining of spilled sodium. While the spilled sodium is not quenched, the burning mode is different in that there is a transition to a smoldering mode that has not been comprehensively described previously. Prior work has described spilled sodium as a pool fire, but there is a crucial, experimentally-observed transition to a smoldering mode of oxidation. A series of experimental measurements have comprehensively described the thermal evolution of this type of sodium fire for the first time. A new physics-based model has been developed that also predicts the thermal evolution of this type of sodium fire for the first time. The model introduces smoldering oxidation through porous oxide layers to go beyond traditional pool fire analyses that have been carried out previously in order to predict experimentally observed trends. Combined, these developments add significantly to the safety

  15. Evolution of Soybean mosaic virus-G7 molecularly cloned genome in Rsv1-genotype soybean results in emergence of a mutant capable of evading Rsv1-mediated recognition

    International Nuclear Information System (INIS)

    Hajimorad, M.R.; Eggenberger, A.L.; Hill, J.H.

    2003-01-01

    Plant resistance (R) genes direct recognition of pathogens harboring matching avirulent signals, leading to activation of defense responses. It has long been hypothesized that under selection pressure the infidelity of RNA virus replication, together with large population size and short generation times, results in emergence of mutants capable of evading R-mediated recognition. In this study, the Rsv1/Soybean mosaic virus (SMV) pathosystem was used to investigate this hypothesis. In soybean line PI 96983 (Rsv1), the progeny of molecularly cloned SMV strain G7 (pSMV-G7) provokes a lethal systemic hypersensitive response (LSHR) with up regulation of a defense-associated gene transcript (PR-1). Serial passages of a large population of the progeny in PI 96983 resulted in emergence of a mutant population (vSMV-G7d), incapable of provoking either Rsv1-mediated LSHR or PR-1 gene transcript up regulation. An infectious clone of the mutant (pSMV-G7d) was synthesized whose sequences were very similar but not identical to the vSMV-G7d population; however, it displayed a similar phenotype. The genome of pSMV-G7d differs from parental pSMV-G7 by 17 substitutions, of which 10 are translationally silent. The seven amino acid substitutions in the deduced sequences of pSMV-G7d differ from those of pSMV-G7 by one each in P1 proteinase, helper component-proteinase, and coat protein, respectively, and by four in P3. To the best of our knowledge, this is the first demonstration in which experimental evolution of a molecularly cloned plant RNA virus resulted in emergence of a mutant capable of evading R-mediated recognition

  16. Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges

    Science.gov (United States)

    Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam

    2014-01-01

    into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.

  17. A model for hot electron phenomena: Theory and general results

    International Nuclear Information System (INIS)

    Carrillo, J.L.; Rodriquez, M.A.

    1988-10-01

    We propose a model for the description of hot electron phenomena in semiconductors. Based on this model we are able to reproduce accurately the main characteristics observed in experiments on electric field transport, optical absorption, steady-state photoluminescence and relaxation processes. Our theory contains neither free nor adjustable parameters, is computationally very fast, and incorporates the main collision mechanisms, including screening and phonon heating effects. Our description rests on a set of nonlinear rate equations in which the interactions are represented by coupling coefficients or effective frequencies. We calculate three coefficients from the characteristic constants and the band structure of the material. (author). 22 refs, 5 figs, 1 tab
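
    Purely as a schematic illustration of this kind of formulation (these are not the authors' equations; the choice of variables and terms is an assumption), a pair of coupled nonlinear rate equations for the carrier and phonon energy densities with effective coupling frequencies might read

        \frac{dE_e}{dt} = P(t) - \nu_{e\text{-}ph}\,\bigl(E_e - E_{ph}\bigr) - \nu_{\mathrm{loss}}\,E_e ,
        \qquad
        \frac{dE_{ph}}{dt} = \nu_{e\text{-}ph}\,\bigl(E_e - E_{ph}\bigr) - \nu_{ph\text{-}bath}\,\bigl(E_{ph} - E_0\bigr),

    where P(t) is the power pumped into the carriers by the field or the light, E_e and E_ph are the carrier and phonon energy densities, E_0 is the equilibrium lattice value, and the \nu's play the role of the effective frequencies (coupling coefficients) mentioned in the abstract.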

  18. Results from Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, K.

    2012-08-01

    Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in the execution of energy efficiency retrofit measures across the multiple regions in which a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  19. Results From Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, Kevin [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-08-01

    Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in the execution of energy efficiency retrofit measures across the multiple regions in which a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  20. Resources, constraints and capabilities

    NARCIS (Netherlands)

    Dhondt, S.; Oeij, P.R.A.; Schröder, A.

    2018-01-01

    Human and financial resources as well as organisational capabilities are needed to overcome the manifold constraints social innovators are facing. To unlock the potential of social innovation for the whole society new (social) innovation friendly environments and new governance structures

  1. a Capability approach

    African Journals Online (AJOL)

    efforts towards gender equality in education as a means of achieving social justice. ... should mean that a lot of capability approach-oriented commentators are ... processes, their forms of exercising power, and their rules, unwritten cultures, ...

  2. Brandishing Cyberattack Capabilities

    Science.gov (United States)

    2013-01-01

    Advertising cyberwar capabilities may be helpful. It may back up a deterrence strategy. It might dissuade other states from conventional mischief or...to enable the attack. Many of the instruments of the attack remain with the target system, nestled in its log files, or even in the malware itself...debatable. Even if demonstrated, what worked yesterday may not work today. But difficult does not mean impossible. Advertising cyberwar capabilities

  3. Production capability and supply

    International Nuclear Information System (INIS)

    Klemenic, J.

    1977-01-01

    The strong market for uranium of recent years is about to usher in a new era in domestic uranium production. The spot market price of uranium has remained relatively stable at a little over $40/lb for more than 18 months. Many of the recent contracts for delivery in the early 1980s are calling for prices in the range of $40 to $65 per lb in year-of-delivery dollars. Low-grade, high-cost projects, such as uranium recovery from mill tailings and the reopening of ''mined-out'' ore bodies, have already been initiated. New underground mines to produce at greater depths, and new surface mines to recover lower grade ores, are being developed or seriously planned. In keeping with this movement to recover uranium from low-grade ore and other high cost materials, the Grand Junction Office has examined, for the first time, the production capability of the domestic industry assuming a $30/lb (or less) ''forward cost'' resource base. As in the past, keep in mind that the market price needed to stimulate full production of a given resource base may be significantly higher than the estimated forward cost of producing that resource. Results of the $30/lb study are presented

  4. Strength capability while kneeling.

    Science.gov (United States)

    Haslegrave, C M; Tracy, M F; Corlett, E N

    1997-12-01

    Work sometimes has to be carried out kneeling, particularly where jobs are performed in confined spaces as is common for miners, aircraft baggage handlers and maintenance workers. In order to assess the risks in performing forceful tasks under such conditions, data is needed on strength capabilities of kneeling subjects. A study was undertaken to measure isometric strength in single-handed exertions for male subjects and to investigate the effects on this of task layout factors (direction of force exertion, reach distance, height of the workpiece and orientation relative to the subject's sagittal plane). The data has been tabulated to show the degree to which strength may be reduced in different situations and analysis of the task factors showed their influence to be complex with direction of exertion and reach distance having the greatest effect. The results also suggest that exertions are weaker when subjects are kneeling on two knees than when kneeling on one knee, although this needs to be confirmed by direct experimental comparison.

  5. Analytical results for the Sznajd model of opinion formation

    Czech Academy of Sciences Publication Activity Database

    Slanina, František; Lavička, H.

    2003-01-01

    Roč. 35, - (2003), s. 279-288 ISSN 1434-6028 R&D Projects: GA ČR GA202/01/1091 Institutional research plan: CEZ:AV0Z1010914 Keywords : agent models * sociophysics Subject RIV: BE - Theoretical Physics Impact factor: 1.457, year: 2003

  6. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the ‘most likely’ di...

  7. Some Results On The Modelling Of TSS Manufacturing Lines

    Directory of Open Access Journals (Sweden)

    Viorel MÎNZU

    2000-12-01

    Full Text Available This paper deals with the modelling of a particular class of manufacturing lines, governed by a decentralised control strategy so that they balance themselves. Such lines are known as “bucket brigades” and also as “TSS lines”, after their first implementation at Toyota in the 1970s. A first study of their behaviour was based upon modelling them as stochastic dynamic systems, which emphasised, in the frame of the so-called “Normative Model”, a sufficient condition for self-balancing, that is, for autonomous functioning at a steady production rate (stationary behaviour). Under some particular conditions, a simulation analysis of TSS lines could be made on non-linear block diagrams, showing that the state trajectories are piecewise continuous in between occurrences of certain discrete events, which determine their discontinuity. TSS lines may therefore be modelled as hybrid dynamic systems, more specifically, with autonomous switching and autonomous impulses (jumps). A stability analysis of such manufacturing lines is made possible by modelling them as hybrid dynamic systems with discontinuous motions.
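
    The self-balancing property referred to above can be illustrated with a minimal deterministic simulation of a TSS ("bucket brigade") line: each worker carries an item forward at his own speed; when the last worker completes an item, every worker walks back, takes over the predecessor's item, and the first worker starts a new one. The sketch below is a simplified illustration with hypothetical speeds (blocking and walk-back time are neglected), not a reproduction of the paper's stochastic or hybrid models; with workers ordered slowest to fastest, the hand-off positions converge to fixed points, i.e. the line balances itself.

        def simulate_tss(speeds, n_items=30):
            """Deterministic TSS / bucket-brigade line on a unit-length work content.

            speeds : forward work speeds of the workers, from first to last station.
            Blocking and walk-back time are neglected, which is harmless when the
            workers are ordered slowest to fastest (the self-balancing case).
            Returns the hand-off positions recorded after each completed item.
            """
            n = len(speeds)
            positions = [i / n for i in range(n)]       # where each worker currently is
            handoffs = []
            for _ in range(n_items):
                t = (1.0 - positions[-1]) / speeds[-1]  # time for the last worker to finish
                advanced = [min(positions[i] + speeds[i] * t, 1.0) for i in range(n - 1)]
                handoffs.append([round(p, 3) for p in advanced])
                positions = [0.0] + advanced            # walk back, take over predecessor's item
            return handoffs

        # Hypothetical speeds, slowest to fastest: the hand-off points converge to fixed values.
        print(simulate_tss([0.7, 1.0, 1.3])[-3:])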

  8. Recent numerical results on the two dimensional Hubbard model

    Energy Technology Data Exchange (ETDEWEB)

    Parola, A.; Sorella, S.; Baroni, S.; Car, R.; Parrinello, M.; Tosatti, E. (SISSA, Trieste (Italy))

    1989-12-01

    A new method for simulating strongly correlated fermionic systems, has been applied to the study of the ground state properties of the 2D Hubbard model at various fillings. Comparison has been made with exact diagonalizations in the 4 x 4 lattices where very good agreement has been verified in all the correlation functions which have been studied: charge, magnetization and momentum distribution. (orig.).

  9. Recent numerical results on the two dimensional Hubbard model

    International Nuclear Information System (INIS)

    Parola, A.; Sorella, S.; Baroni, S.; Car, R.; Parrinello, M.; Tosatti, E.

    1989-01-01

    This paper reports a new method for simulating strongly correlated fermionic systems applied to the study of the ground state properties of the 2D Hubbard model at various fillings. Comparison has been made with exact diagonalizations in the 4 x 4 lattices where very good agreement has been verified in all the correlation functions which have been studied: charge, magnetization and momentum distribution

  10. Some rigorous results on the Hopfield neural network model

    International Nuclear Information System (INIS)

    Koch, H.; Piasko, J.

    1989-01-01

    The authors analyze the thermal equilibrium distribution of 2 p mean field variables for the Hopfield model with p stored patterns, in the case where 2 p is small compared to the number of spins. In particular, they give a full description of the free energy density in the thermodynamic limit, and of the so-called symmetric solutions for the mean field equations

  11. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Part I: Theory and description of model capabilities

    Science.gov (United States)

    Raffray, A. René; Federici, Gianfranco

    1997-04-01

    RACLETTE (Rate Analysis Code for pLasma Energy Transfer Transient Evaluation), a comprehensive but relatively simple and versatile model, was developed to help in the design analysis of plasma facing components (PFCs) under 'slow' high power transients, such as those associated with plasma vertical displacement events. The model includes all the key surface heat transfer processes such as evaporation, melting, and radiation, and their interaction with the PFC block thermal response and the coolant behaviour. This paper represents part I of two sister and complementary papers. It covers the model description, calibration and validation, and presents a number of parametric analyses shedding light on and identifying trends in the PFC armour block response to high plasma energy deposition transients. Parameters investigated include the plasma energy density and deposition time, the armour thickness and the presence of vapour shielding effects. Part II of the paper focuses on specific design analyses of ITER plasma facing components (divertor, limiter, primary first wall and baffle), including improvements in the thermal-hydraulic modeling required for better understanding the consequences of high energy deposition transients in particular for the ITER limiter case.
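
    Only as a schematic illustration of the kind of surface energy balance such a code resolves (the notation and the selection of terms here are assumptions, not the published RACLETTE equations), the net flux conducted into the armour surface during a transient can be written as

        q_{\mathrm{cond}}(t) = q_{\mathrm{plasma}}(t) - \varepsilon\,\sigma\,\bigl(T_s^4 - T_{\mathrm{env}}^4\bigr) - L_v\,\Gamma_{\mathrm{evap}}(T_s),

    where T_s is the instantaneous surface temperature, \varepsilon the emissivity, \sigma the Stefan-Boltzmann constant, L_v the latent heat of vaporization and \Gamma_{\mathrm{evap}}(T_s) the evaporation mass flux; melting enters the conduction problem as a moving phase front, and vapour shielding acts by reducing the incident q_{\mathrm{plasma}} itself.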

  12. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Pt. I. Theory and description of model capabilities

    International Nuclear Information System (INIS)

    Raffray, A.R.; Federici, G.

    1997-01-01

    For pt.II see ibid., p.101-30, 1997. RACLETTE (Rate Analysis Code for pLasma Energy Transfer Transient Evaluation), a comprehensive but relatively simple and versatile model, was developed to help in the design analysis of plasma facing components (PFCs) under 'slow' high power transients, such as those associated with plasma vertical displacement events. The model includes all the key surface heat transfer processes such as evaporation, melting, and radiation, and their interaction with the PFC block thermal response and the coolant behaviour. This paper represents part I of two sister and complementary papers. It covers the model description, calibration and validation, and presents a number of parametric analyses shedding light on and identifying trends in the PFC armour block response to high plasma energy deposition transients. Parameters investigated include the plasma energy density and deposition time, the armour thickness and the presence of vapour shielding effects. Part II of the paper focuses on specific design analyses of ITER plasma facing components (divertor, limiter, primary first wall and baffle), including improvements in the thermal-hydraulic modeling required for better understanding the consequences of high energy deposition transients in particular for the ITER limiter case. (orig.)

  13. Guiding center model to interpret neutral particle analyzer results

    Science.gov (United States)

    Englert, G. W.; Reinmann, J. J.; Lauver, M. R.

    1974-01-01

    The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge exchange neutrals in phase space which are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about a mean azimuthal drift yields energy distributions which compare well with experiment. Use of data obtained with a bending magnet on the NPA showed that the separation between energy distribution curves of various mass species correlates well with the drift divided by mean cyclotron energy parameter of the theory. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimating the ion density and E field variation with plasma radius.

  14. 1-g model loading tests: methods and results

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav

    1999-01-01

    Roč. 2, č. 4 (1999), s. 371-381 ISSN 1436-6517. [Int.Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords : shallow foundation * model tests * sandy subsoil * bearing capacity * subsoil failure * volume deformation Subject RIV: JM - Building Engineering

  15. Considerations on Modeling Strategies of the Financial Result

    Directory of Open Access Journals (Sweden)

    Lucian Cernuşca

    2012-12-01

    Full Text Available This study's objective is to highlight some of the strategies used to maximize or minimize the accounting result under the impulse of bad accounting. Although manipulation of the accounting result can be observed, the procedure is carried out within the law and is exploited by some entities that are aware of the shortcomings of accounting regulation and its enforcement.

  16. The physical model of a terraced plot: first results

    Science.gov (United States)

    Perlotto, Chiara; D'Agostino, Vincenzo; Buzzanca, Giacomo

    2017-04-01

    Terrace building expanded in the 19th century because of increased demographic pressure and the need to crop additional areas on steeper slopes. Terraces are also important for regulating the hydrological behavior of the hillslope. Few studies are available in the literature on rainfall-runoff processes and flood risk mitigation in terraced areas. Bench terraces, by reducing the terrain slope and the length of the overland flow, control the runoff flow velocity, facilitate drainage and thus lead to a reduction of soil erosion. The study of the hydrologic-hydraulic function of terraced slopes is essential in order to evaluate their possible use in flood-risk mitigation while also preserving the landscape value. This research aims to better characterize the times of the hydrological response of a hillslope plot bounded by a dry-stone wall, considering both the overland flow and the groundwater. A physical model, characterized by a quasi-real scale, has been built to reproduce the behavior of a 3% outward-sloped terrace under bare-soil conditions. The model consists of a steel box (1 m wide, 3.3 m long, 2 m high) containing the hillslope terrain. The terrain is equipped with two piezometers, 9 TDR sensors measuring the volumetric water content, a surface spillway at the head releasing the steady discharge under test, and a scale at the wall base to measure the outflowing discharge. The experiments deal with different initial moisture conditions (non-saturated and saturated) and discharges of 19.5, 12.0 and 5.0 l/min. Each experiment has been replicated, for a total of 12 tests. The volumetric water content analysis produced by the 9 TDR sensors provided a quite satisfactory representation of the soil moisture during the runs. Different lag times at the outlet since the inflow initiation were then measured for both runoff and groundwater. Moreover, the time of depletion and the piezometer

  17. DISCRETE DEFORMATION WAVE DYNAMICS IN SHEAR ZONES: PHYSICAL MODELLING RESULTS

    Directory of Open Access Journals (Sweden)

    S. A. Bornyakov

    2016-01-01

    Full Text Available Observations of earthquake migration along active fault zones [Richter, 1958; Mogi, 1968] and related theoretical concepts [Elsasser, 1969] have laid the foundation for studying the problem of slow deformation waves in the lithosphere. Despite the fact that this problem has been under study for several decades and discussed in numerous publications, convincing evidence for the existence of deformation waves is still lacking. One of the causes is that comprehensive field studies to register such waves with special tools and equipment, which require sufficient organizational and technical resources, have not been conducted yet. The authors attempted to find a solution to this problem by physical simulation of a major shear zone in an elastic-viscous-plastic model of the lithosphere. The experiment setup is shown in Figure 1 (A). The model material and boundary conditions were specified in accordance with the similarity criteria (described in detail in [Sherman, 1984; Sherman et al., 1991; Bornyakov et al., 2014]). The montmorillonite clay-and-water paste was placed evenly on two stamps of the installation and subjected to deformation as the active stamp (1) moved relative to the passive stamp (2) at a constant speed. The upper model surface was covered with fine sand in order to get high-contrast photos. Photos of the emerging shear zone were taken every second by a Basler acA2000-50gm digital camera. Figure 1 (B) shows an optical image of a fragment of the shear zone. The photos were processed by the digital image correlation method described in [Sutton et al., 2009]. This method estimates the distribution of components of displacement vectors and strain tensors on the model surface and their evolution over time [Panteleev et al., 2014, 2015]. Strain fields and displacements recorded in the optical images of the model surface were estimated in a rectangular box (220.00×72.17 mm) shown by a dot-and-dash line in Fig. 1, A. To ensure a sufficient level of

  18. Space Logistics: Launch Capabilities

    Science.gov (United States)

    Furnas, Randall B.

    1989-01-01

    The current maximum launch capability for the United States is shown. The predicted Earth-to-orbit requirements for the United States are presented. Contrasting the two indicates the strong national need for a major increase in Earth-to-orbit lift capability. Approximate weights for planned payloads are shown. NASA is studying the following options to meet the need for a new heavy-lift capability by the mid-to-late 1990s: (1) Shuttle-C for the near term (including growth versions); and (2) the Advanced Launch System (ALS) for the long term. The current baseline two-engine Shuttle-C has a 15 x 82 ft payload bay and an expected lift capability of 82,000 lb to Low Earth Orbit. Several options are being considered which have expanded-diameter payload bays. A three-engine Shuttle-C with an expected lift of 145,000 lb to LEO is being evaluated as well. The Advanced Launch System (ALS) is a potential joint development between the Air Force and NASA. This program is focused toward long-term launch requirements, specifically beyond the year 2000. The basic approach is to develop a family of vehicles with the same high reliability as the Shuttle system, yet offering a much greater lift capability at a greatly reduced cost (per pound of payload). The ALS unmanned family of vehicles will provide a low-end lift capability equivalent to Titan IV, and a high-end lift capability greater than the Soviet Energia if requirements for such a high-end vehicle are defined. In conclusion, the planning of the next generation space telescope should not be constrained to the current launch vehicles. New vehicle designs will be driven by the needs of anticipated heavy users.

  19. Analysis of tracer tests with multirate diffusion models: recent results and future directions within the WIPP project

    International Nuclear Information System (INIS)

    McKenna, S.A.; Meigs, L.C.; Altman, S.J.; Haggerty, R.

    1998-01-01

    A series of single-well injection-withdrawal (SWIW) and two-well convergent-flow (TWCF) tracer tests were conducted in the Culebra dolomite at the WIPP site in late 1995 and early 1996. Modeling analyses over the past year have focused on reproducing the observed mass-recovery curves and understanding the basic physical processes controlling tracer transport in SWIW and TWCF tests. To date, specific modeling efforts have focused on five SWIW tests and one TWCF pathway at each of two different locations. An inverse parameter-estimation procedure was implemented to model the SWIW and TWCF tests with both traditional and multirate double-porosity formulations. The traditional model assumes a single diffusion rate while the multirate model uses a first-order approximation to model a continuous distribution of diffusion coefficients. Conceptually, the multirate model represents variable matrix block sizes within the Culebra as observed in geologic investigations and also variability in diffusion rates within the matrix blocks as observed with X-ray imaging in the laboratory. Single-rate double-porosity models cannot provide an adequate match to the SWIW data. Multirate double-porosity models provide excellent fits to all five SWIW mass-recovery curves. Models of the TWCF tests show that, at one location, the tracer test can be modeled with both single-rate and multirate double-porosity models. At the other location, only the multirate double-porosity model is capable of explaining the test results
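
    For orientation, the multirate double-porosity (multirate mass transfer) formulation referred to here is commonly written, in a schematic form that may differ in detail from the WIPP implementation, as advective-dispersive transport in the mobile porosity coupled to a continuum of first-order immobile zones:

        \frac{\partial c_m}{\partial t} + \int_0^{\infty} \beta(\alpha)\,\frac{\partial c_{im}(\alpha)}{\partial t}\,d\alpha
            = -v\,\frac{\partial c_m}{\partial x} + D\,\frac{\partial^2 c_m}{\partial x^2},
        \qquad
        \frac{\partial c_{im}(\alpha)}{\partial t} = \alpha\,\bigl(c_m - c_{im}(\alpha)\bigr),

    where c_m and c_{im} are the mobile and immobile concentrations, \alpha is the first-order rate coefficient (related to matrix-block size and matrix diffusivity), and \beta(\alpha) is the capacity distribution; the traditional single-rate double-porosity model is recovered when \beta(\alpha) collapses onto a single rate.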

  20. Analysis of inelastic neutron scattering results on model compounds ...

    Indian Academy of Sciences (India)

    Vibrational spectroscopy; nitrogenous bases; inelastic neutron scattering. PACS No. ... obtain good quality, high resolution results in this region. Here the .... knowledge of the character of each molecular transition as well as the calculated.

  1. Modelling of wind power plant controller, wind speed time series, aggregation and sample results

    DEFF Research Database (Denmark)

    Hansen, Anca Daniela; Altin, Müfit; Cutululis, Nicolaos Antonio

    This report describes the modelling of a wind power plant (WPP) including its controller. Several ancillary services like inertial response (IR), power oscillation damping (POD) and synchronising power (SP) are implemented. The focus in this document is on the performance of the WPP output...... and not the impact of the WPP on the power system. By means of simulation tests, the capability of the implemented wind power plant model to deliver ancillary services is investigated....

  2. Comparison of capabilities of reluctance synchronous motor and induction motor

    International Nuclear Information System (INIS)

    Stumberger, Gorazd; Hadziselimovic, Miralem; Stumberger, Bojan; Miljavec, Damijan; Dolinar, Drago; Zagradisnik, Ivan

    2006-01-01

    This paper compares the capabilities of a reluctance synchronous motor (RSM) with those of an induction motor (IM). An RSM and IM were designed and made, with the same rated power and speed. They differ only in the rotor portion while their stators, housings and cooling systems are identical. The capabilities of both motors in a variable speed drive are evaluated by comparison of the results obtained by magnetically nonlinear models and by measurements

  3. MCNP Modeling Results for Location of Buried TRU Waste Drums

    International Nuclear Information System (INIS)

    Steinman, D K; Schweitzer, J S

    2006-01-01

    In the 1960s, fifty-five gallon drums of TRU waste were buried in shallow pits on remote U.S. Government facilities such as the Idaho National Engineering Laboratory (now split into the Idaho National Laboratory and the Idaho Completion Project [ICP]). Subsequently, it was decided to remove the drums and the material that was in them from the burial pits and send the material to the Waste Isolation Pilot Plant in New Mexico. Several technologies have been tried to locate the drums non-intrusively with enough precision to minimize the chance for material to be spread into the environment. One of these technologies is the placement of steel probe holes in the pits into which wireline logging probes can be lowered to measure properties and concentrations of material surrounding the probe holes for evidence of TRU material. There is also a concern that large quantities of volatile organic compounds (VOC) are present that would contaminate the environment during removal. In 2001, the Idaho National Engineering and Environmental Laboratory (INEEL) built two pulsed neutron wireline logging tools to measure TRU and VOC around the probe holes. The tools are the Prompt Fission Neutron (PFN) and the Pulsed Neutron Gamma (PNG), respectively. They were tested experimentally in surrogate test holes in 2003. The work reported here estimates the performance of the tools using Monte-Carlo modelling prior to field deployment. An MCNP model was constructed by INEEL personnel. It was modified by the authors to assess the ability of the tools to predict quantitatively the position and concentration of TRU and VOC materials disposed around the probe holes. The model was used to simulate the tools scanning the probe holes vertically in five centimetre increments. A drum was included in the model that could be placed near the probe hole and at other locations out to forty-five centimetres from the probe hole in five centimetre increments. Scans were performed with no chlorine in the
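
    As a toy illustration only (this is not the MCNP model, and the attenuation coefficient, geometry and source treatment are entirely hypothetical), the basic idea of locating a buried drum from a vertical probe-hole scan can be shown with a simple point-source response: the logged signal peaks at the probe depth closest to the drum and falls off with distance through the attenuating soil.

        import math

        def relative_response(z_probe, z_drum, standoff, mu=0.08):
            """Toy detector response for a probe at depth z_probe (cm).

            z_drum   : drum depth (cm); standoff : horizontal drum-to-hole distance (cm)
            mu       : effective attenuation coefficient of the soil (1/cm), hypothetical
            Response ~ exp(-mu * r) / r**2 for a point-like source at distance r.
            """
            r = math.hypot(z_probe - z_drum, standoff)
            return math.exp(-mu * r) / r**2

        # Scan in 5 cm increments past a hypothetical drum at 150 cm depth, 30 cm from the hole.
        for z in range(120, 185, 5):
            print(z, round(relative_response(z, 150.0, 30.0), 6))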

  4. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...

  5. Solar activity variations of ionosonde measurements and modeling results

    Czech Academy of Sciences Publication Activity Database

    Altadill, D.; Arrazola, D.; Blanch, E.; Burešová, Dalia

    2008-01-01

    Roč. 42, č. 4 (2008), s. 610-616 ISSN 0273-1177 R&D Projects: GA AV ČR 1QS300120506 Grant - others:MCYT(ES) REN2003-08376-C02-02; CSIC(XE) 2004CZ0002; AGAUR(XE) 2006BE00112; AF Research Laboratory(XE) FA8718-L-0072 Institutional research plan: CEZ:AV0Z30420517 Keywords : mid-latitude ionosphere * bottomside modeling * ionospheric variability Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.860, year: 2008 http://www.sciencedirect.com/science/journal/02731177

  6. The calculation of exchange forces: General results and specific models

    International Nuclear Information System (INIS)

    Scott, T.C.; Babb, J.F.; Dalgarno, A.; Morgan, J.D. III

    1993-01-01

    In order to clarify questions about the calculation of the exchange energy of a homonuclear molecular ion, an analysis is carried out of a model problem consisting of the one-dimensional limit of H2+. It is demonstrated that the use of the infinite polarization expansion for the localized wave function in the Holstein-Herring formula yields an approximate exchange energy which at large internuclear distances R has the correct leading behavior to O(e^-R) and is close to but not equal to the exact exchange energy. The extension to the n-dimensional double-well problem is presented
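
    For context, and purely as a schematic reminder rather than the paper's derivation: in a symmetric double-well or homonuclear one-electron problem the exact eigenstates occur in gerade/ungerade pairs, and the exchange energy is conventionally identified with (half) their splitting,

        \Delta E_{\mathrm{exch}}(R) = \tfrac{1}{2}\,\bigl(E_u(R) - E_g(R)\bigr),

    which for the one-dimensional model discussed decays as e^{-R} at large separation. The Holstein-Herring approach evaluates this splitting from a surface integral of the localized wave function over the median plane between the nuclei, which is where the polarization expansion of that wave function enters.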

  7. Modelling the effects of phase change materials on the energy use in buildings. Results of Experiments and System Dynamics Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Prins, J.

    2012-02-15

    The current era is in need of more and more sustainable energy solutions. Phase Change Materials (PCMs) are a solution for a more sustainable built environment because they can help to reduce the energy use of buildings during heating and cooling of the indoor air. This paper presents the results of recent experiments that have been executed with test boxes. In addition, a System Dynamics model has been developed to find out how PCMs can be used efficiently without testing in reality. The first experiment, in which PCMs were applied in a concrete floor, shows a reduction of peak temperatures of 4 ± 0.7 °C on maximum temperatures and over 1.5 ± 0.7 °C on minimum temperatures during warm periods. The model confirmed these findings, although the predicted reductions differed slightly. During the second experiment more PCMs were applied by mounting them into the walls using gypsum plasterboard to increase the latent heat capacity. Remarkably, both the experimental set-up and the model showed that this increase in PCM quantity (of almost 98%) causes hardly any difference compared to the first situation. Adapting the exterior to absorb more solar energy increases the average indoor temperature but decreases the reduction of peak temperatures. Again the model confirmed these findings of the experiment. These results show that the effect of PCMs varies with the climatological context and with the physics of the construction components. This means that no straightforward advice on the use of PCMs in a building design can be given. The solution to this problem is provided by the model, which shows that the effects of PCMs can be modelled in order to use PCMs effectively in different climatological contexts and with different characteristics of construction components. The research shows that a simple model is already capable of predicting PCM performance in test boxes with reasonable accuracy. Therefore it can be
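
    A minimal sketch of how PCM behaviour is often represented in lumped models of this kind is the effective-heat-capacity method: within the melting range the node's heat capacity is augmented so that the latent heat is absorbed over that range. The code below is an illustration with hypothetical material and building parameters, not the System Dynamics model from the paper.

        import math

        def effective_heat_capacity(T, cp=1500.0, latent=180e3, T_melt=23.0, dT=2.0):
            """J/(kg K): sensible cp plus latent heat spread over the melting range T_melt +/- dT."""
            if latent > 0.0 and T_melt - dT <= T <= T_melt + dT:
                return cp + latent / (2.0 * dT)
            return cp

        def peak_indoor_temperature(latent, hours=72, dt_s=60.0, mass=50.0, ua=15.0, t0=21.0):
            """Single thermal node coupled to a sinusoidal outdoor temperature.

            mass : thermal mass of the PCM/plaster layer (kg), ua : envelope conductance (W/K).
            All values are hypothetical and only meant to show the peak-shaving effect.
            """
            T, peak = t0, t0
            for step in range(int(hours * 3600 / dt_s)):
                t_h = step * dt_s / 3600.0
                T_out = 24.0 + 8.0 * math.sin(2.0 * math.pi * (t_h - 9.0) / 24.0)
                T += ua * (T_out - T) * dt_s / (mass * effective_heat_capacity(T, latent=latent))
                peak = max(peak, T)
            return round(peak, 2)

        print("peak indoor T with PCM   :", peak_indoor_temperature(latent=180e3))
        print("peak indoor T without PCM:", peak_indoor_temperature(latent=0.0))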

  8. Guiding center model to interpret neutral particle analyzer results

    International Nuclear Information System (INIS)

    Englert, G.W.; Reinmann, J.J.; Lauver, M.R.

    1974-01-01

    The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge exchange neutrals in phase space which are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about a mean azimuthal drift yields energy distributions which compare well with experiment. Use of data obtained with a bending magnet on the NPA showed that the separation between energy distribution curves of various mass species correlates well with the drift divided by mean cyclotron energy parameter of the theory. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimating the ion density and E field variation with plasma radius. (U.S.)

  9. Developing maturity grids for assessing organisational capabilities

    DEFF Research Database (Denmark)

    Maier, Anja; Moultrie, James; Clarkson, P John

    2009-01-01

    Keywords: Maturity Model, Maturity Grid, Maturity Matrix, Organisational Capabilities, Benchmarking, New Product Development, Performance Assessment

  10. First Results of Modeling Radiation Belt Electron Dynamics with the SAMI3 Plasmasphere Model

    Science.gov (United States)

    Komar, C. M.; Glocer, A.; Huba, J.; Fok, M. C. H.; Kang, S. B.; Buzulukova, N.

    2017-12-01

    The radiation belts were one of the first discoveries of the Space Age some sixty years ago, and radiation belt models have been improving ever since. The plasmasphere is one region that has been critically important to determining the dynamics of radiation belt populations. This region of space plays a critical role in describing the distribution of chorus and magnetospheric hiss waves throughout the inner magnetosphere. Both of these waves have been shown to interact with energetic electrons in the radiation belts and can result in the energization or loss of radiation belt electrons. However, radiation belt models have historically been limited in describing the distribution of cold plasmaspheric plasma and have relied on empirically determined plasmasphere models. Some plasmasphere models use an azimuthally symmetric distribution of the plasmasphere which can fail to capture important plasmaspheric dynamics such as the development of plasmaspheric drainage plumes. Previous work has coupled the kinetic bounce-averaged Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model, used to model ring current and radiation belt populations, with the Block-adaptive Tree Solar wind Roe-type Upwind Scheme (BATSRUS) global magnetohydrodynamic model to self-consistently obtain the magnetospheric magnetic field and ionospheric potential. The present work will utilize this previous coupling and will additionally couple the SAMI3 plasmasphere model to better represent the dynamics of the plasmasphere and its role in determining the distribution of waves throughout the inner magnetosphere. First results on the relevance of chorus, hiss, and ultralow frequency waves to radiation belt electron dynamics will be discussed in the context of the June 1st, 2013 storm-time dropout event.

  11. Mechanism of supply chain coordination based on dynamic capability framework - the mediating role of manufacturing capabilities

    Directory of Open Access Journals (Sweden)

    Tiantian Gao

    2014-10-01

    Full Text Available Purpose: A critical issue has been absent from the conversation on supply chain coordination: how supply chain coordination influences enterprise performance. This research proposes a new perspective for studying the performance mechanism of supply chain coordination as a dynamic capability, with manufacturing capabilities playing a mediating role. Design/methodology/approach: Data from the International Manufacturing Strategy Survey of 2009 are used to verify the mediation model by hierarchical regression analysis. Findings: The results show that supply chain coordination impacts enterprise performance positively and indirectly impacts enterprise performance through quality, cost and flexibility. Research implications: This study presents an overview of the impact of supply chain coordination and manufacturing capabilities on enterprise performance, providing a basis for further research on the relationships that exist between them. Originality/value: The findings integrate insights from previous research in the dynamic capability framework and supply chain management into a generalization and extension of the performance mechanism in manufacturing enterprises.
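
    A minimal sketch of the hierarchical (mediation-style) regression logic described in this record, using simulated data and hypothetical variable names rather than the IMSS survey items: performance is first regressed on supply chain coordination alone, then the manufacturing-capability measures are added, and attenuation of the coordination coefficient indicates mediation.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 200
        scc = rng.normal(size=n)                           # supply chain coordination (hypothetical)
        quality = 0.6 * scc + rng.normal(scale=0.8, size=n)
        cost = 0.5 * scc + rng.normal(scale=0.8, size=n)
        flexibility = 0.4 * scc + rng.normal(scale=0.8, size=n)
        perf = 0.5 * quality + 0.3 * cost + 0.2 * flexibility + rng.normal(scale=0.8, size=n)
        df = pd.DataFrame(dict(scc=scc, quality=quality, cost=cost,
                               flexibility=flexibility, perf=perf))

        # Step 1: total effect of coordination on performance.
        m1 = sm.OLS(df["perf"], sm.add_constant(df[["scc"]])).fit()
        # Step 2: add the manufacturing capabilities as mediators.
        m2 = sm.OLS(df["perf"], sm.add_constant(df[["scc", "quality", "cost", "flexibility"]])).fit()

        print("total effect of scc :", round(m1.params["scc"], 3))
        print("direct effect of scc:", round(m2.params["scc"], 3), "(smaller if mediated)")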

  12. Experimental results and modeling of a dynamic hohlraum on SATURN

    International Nuclear Information System (INIS)

    Derzon, M.S.; Allshouse, G.O.; Deeney, C.; Leeper, R.J.; Nash, T.J.; Matuska, W.; Peterson, D.L.; MacFarlane, J.J.; Ryutov, D.D.

    1998-06-01

    Experiments were performed at SATURN, a high current z-pinch, to explore the feasibility of creating a hohlraum by imploding a tungsten wire array onto a low-density foam. Emission measurements in the 200--280 eV energy band were consistent with a 110--135 eV Planckian before the target shock heated, or stagnated, on-axis. Peak pinch radiation temperatures of nominally 160 eV were obtained. Measured early time x-ray emission histories and temperature estimates agree well with modeled performance in the 200--280 eV band using a 2D radiation magneto-hydrodynamics code. However, significant differences are observed in comparisons of the x-ray images and 2D simulations

  13. Campus Capability Plan

    Energy Technology Data Exchange (ETDEWEB)

    Adams, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Arsenlis, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bailey, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bergman, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brase, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brenner, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Camara, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carlton, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cheng, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chrzanowski, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Colson, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); East, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Farrell, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferranti, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gursahani, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hansen, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Helms, L. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hernandez, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jeffries, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Larson, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lu, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McNabb, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mercer, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Skeate, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sueksdorf, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zucca, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Le, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ancria, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Scott, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leininger, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gagliardi, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gash, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronson, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chung, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hobson, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Meeker, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sanchez, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zagar, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Quivey, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sommer, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Atherton, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-06

    Lawrence Livermore National Laboratory Campus Capability Plan for 2018-2028. Lawrence Livermore National Laboratory (LLNL) is one of three national laboratories that are part of the National Nuclear Security Administration. LLNL provides critical expertise to strengthen U.S. security through development and application of world-class science and technology that: Ensures the safety, reliability, and performance of the U.S. nuclear weapons stockpile; Promotes international nuclear safety and nonproliferation; Reduces global danger from weapons of mass destruction; Supports U.S. leadership in science and technology. Essential to the execution and continued advancement of these mission areas are responsive infrastructure capabilities. This report showcases each LLNL capability area and describes the mission, science, and technology efforts enabled by LLNL infrastructure, as well as future infrastructure plans.

  14. Technological Capability's Predictor Variables

    Directory of Open Access Journals (Sweden)

    Fernanda Maciel Reichert

    2011-03-01

    Full Text Available The aim of this study was to identify the factors that influence the configuration of the technological capability of companies in sectors with medium-low technological intensity. To achieve this goal, a survey was carried out. Based on the framework developed by Lall (1992), which classifies firms into basic, intermediate and advanced levels of technological capability, it was found that the predominant technological capability is intermediate, with 83.7% of respondent companies (plastics companies in Brazil). It is believed that the main contribution of this study is the finding that the dependent variable named “Technological Capability” can be explained at a rate of 65% by six variables: development of new processes; selection of the best equipment supplier; sales of internally developed new technology to third parties; design and manufacture of equipment; study of work methods and inventory control; and improvement of product quality.

  15. Technological Capability and Firm Performance

    Directory of Open Access Journals (Sweden)

    Fernanda Maciel Reichert

    2014-08-01

    Full Text Available This research aims to investigate the relationship between investments in technological capability and economic performance in Brazilian firms. Based on economic development theory and on the history of developed countries, it is assumed that this relationship is positive. Through key indicators, 133 Brazilian firms have been analyzed. Given the economic circumstances of an emerging economy, in which the majority of businesses are primarily based on low and medium-low-technology industries, it is not possible to affirm the existence of a positive relation between technological capability and firm performance. There are other elements that allow firms to achieve such results. Firms in lower technological intensity industries performed above average on the economic performance indicators; conversely, they invested below average in technological capability. These findings do not diminish the merit of the firms’ and the country’s success. They in fact confirm a historical tradition of a country that concentrates its efforts on basic industries.

  16. Models of cognitive behavior in nuclear power plant personnel. A feasibility study: summary of results. Volume 1

    International Nuclear Information System (INIS)

    Woods, D.D.; Roth, E.M.; Hanes, L.F.

    1986-07-01

    This report summarizes the results of a feasibility study to determine whether the current state of models of human cognitive activities can serve as the basis for improved techniques for predicting human error in nuclear power plant emergency operations. Based on the answer to this question, two subsequent phases of research are planned: Phase II is to develop a model of cognitive activities, and Phase III is to test the model. The feasibility study included an analysis of the cognitive activities that occur in emergency operations and an assessment of the modeling concepts and tools available to capture these cognitive activities. The results indicated that a symbolic processing (or artificial intelligence) model of cognitive activities in nuclear power plants is both desirable and feasible. This cognitive model can be built upon the computational framework provided by an existing artificial intelligence system for medical problem solving, called Caduceus. The resulting cognitive model will increase the capability to capture the human contribution to risk in probabilistic risk assessment studies. Volume 1 summarizes the major findings and conclusions of the study. Volume 2 provides a complete description of the methods and results, including a synthesis of the cognitive activities that occur during emergency operations and a literature review on cognitive modeling relevant to nuclear power plants. 19 refs
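
    The "symbolic processing" modeling direction endorsed by the study can be illustrated with a toy forward-chaining rule system, the basic mechanism behind such knowledge-based models. The rules and facts below are invented for illustration and bear no relation to the Caduceus knowledge base or to any plant procedure.

      # Hedged sketch: a toy forward-chaining production system, illustrating the kind
      # of symbolic (rule-based) model of operator cognition the study found feasible.
      RULES = [
          ({"low steam generator level", "feedwater pump tripped"}, "loss of feedwater"),
          ({"loss of feedwater", "rising primary temperature"}, "initiate auxiliary feedwater"),
      ]

      def forward_chain(facts: set) -> set:
          """Repeatedly fire rules whose conditions are satisfied until no new facts appear."""
          derived = set(facts)
          changed = True
          while changed:
              changed = False
              for conditions, conclusion in RULES:
                  if conditions <= derived and conclusion not in derived:
                      derived.add(conclusion)
                      changed = True
          return derived

      observations = {"low steam generator level", "feedwater pump tripped", "rising primary temperature"}
      print(forward_chain(observations) - observations)  # diagnosed condition and recommended action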

  17. The Capability Approach

    OpenAIRE

    Robeyns, Ingrid

    2011-01-01

    In its most general description, the capability approach is a flexible and multi-purpose normative framework, rather than a precise theory of well-being, freedom or justice. At its core are two normative claims: first, the claim that the freedom to achieve well-being is of primary moral importance, and second, that freedom to achieve well-being is to be understood in terms of people’s capabilities, that is, their real opportunities to do and be what they have reason to value. Thi...

  18. Sandia QIS Capabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Muller, Richard P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  19. Results of modeling advanced BWR fuel designs using CASMO-4

    International Nuclear Information System (INIS)

    Knott, D.; Edenius, M.

    1996-01-01

    Advanced BWR fuel designs from General Electric, Siemens and ABB-Atom have been analyzed using CASMO-4 and compared against fission rate distributions and control rod worths from MCNP. Included in the analysis were fuel storage rack configurations and proposed mixed oxide (MOX) designs. Results are also presented from several cycles of SIMULATE-3 core follow analysis, using nodal data generated by CASMO-4, for cycles in transition from 8x8 designs to advanced fuel designs. (author)
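
    A minimal sketch of the kind of code-to-reference comparison described here: a root-mean-square relative difference in pin fission rates and a control rod worth expressed in pcm. All numbers and the 3x3 map below are placeholders, not CASMO-4 or MCNP output.

      # Hedged sketch: comparing a lattice-code pin fission-rate distribution against
      # a Monte Carlo reference, plus a rod worth from unrodded/rodded k-effective.
      import numpy as np

      casmo_fission = np.array([[1.02, 0.98, 1.01],
                                [0.97, 1.03, 0.99],
                                [1.00, 0.96, 1.04]])
      mcnp_fission  = np.array([[1.01, 0.99, 1.00],
                                [0.98, 1.02, 1.00],
                                [1.01, 0.97, 1.02]])

      rel_diff = (casmo_fission - mcnp_fission) / mcnp_fission
      rms = np.sqrt(np.mean(rel_diff ** 2))
      print(f"RMS pin fission-rate difference: {100 * rms:.2f}%")

      def rod_worth_pcm(k_unrodded: float, k_rodded: float) -> float:
          """Control rod worth as a reactivity difference (rho = 1 - 1/k), in pcm."""
          return 1.0e5 * (1.0 / k_rodded - 1.0 / k_unrodded)

      print(f"Rod worth: {rod_worth_pcm(1.1000, 1.0500):.0f} pcm")  # placeholder k-eff values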

  20. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets, but to date no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and in formulating research priorities for future observational campaigns within urban areas. In this presentation we summarise the initial results of this international urban energy balance model comparison; in particular, the relative performance of the models involved is compared based on their degree of complexity. These results will inform ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run
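
    Common to the schemes being compared, whatever their complexity, is the surface energy balance bookkeeping Q* + QF = QH + QE + ΔQS (net radiation plus anthropogenic heat balanced by sensible, latent and storage heat fluxes). The sketch below solves this balance for the storage term of a single slab surface; the flux values are illustrative, not observations from the comparison datasets.

      # Hedged sketch: residual storage heat flux from the urban surface energy balance,
      # Q* + QF = QH + QE + dQS, for a single "slab" surface. Fluxes in W m^-2.
      def storage_heat_flux(q_star: float, q_f: float, q_h: float, q_e: float) -> float:
          """Storage term dQS = Q* + QF - QH - QE."""
          return q_star + q_f - q_h - q_e

      # Midday example for a dense urban site (made-up numbers).
      dqs = storage_heat_flux(q_star=450.0, q_f=60.0, q_h=220.0, q_e=90.0)
      print(f"Storage heat flux dQS = {dqs:.0f} W m^-2")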