WorldWideScience

Sample records for core component-based modelling

  1. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, having different semantics … constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation … constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has …

  2. Component-based event composition modeling for CPS

    Science.gov (United States)

    Yin, Zhonghai; Chu, Yanan

    2017-06-01

In order to combine an event-driven model with component-based architecture design, this paper proposes a component-based event composition model to realize event processing for cyber-physical systems (CPS). Firstly, formal representations of components and attribute-oriented events are defined. Every component consists of subcomponents and the corresponding event sets. The attribute “type” is added to the attribute-oriented event definition so as to describe the responsiveness to the component. Secondly, the component-based event composition model is constructed. A concept-lattice-based event algebra system is built to describe the relations between events, and the rules for drawing the Hasse diagram are discussed. Thirdly, as there are redundancies among composite events, two simplification methods are proposed. Finally, a communication-based train control system is simulated to verify the event composition model. Results show that the event composition model we have constructed can express composite events correctly and effectively.
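
The inclusion ordering behind such a Hasse diagram can be sketched in a few lines; the event names and the subset-based order below are invented for illustration and are not taken from the paper:

```python
# Sketch (not the paper's code): composite events modelled as frozensets of
# primitive events, ordered by inclusion; the Hasse diagram keeps only the
# covering edges, i.e. pairs a < b with no event strictly between them.

def hasse_edges(events):
    """Return covering pairs (a, b) with a < b and nothing strictly between."""
    edges = []
    for a in events:
        for b in events:
            if a < b:  # strict subset
                between = any(a < c < b for c in events if c not in (a, b))
                if not between:
                    edges.append((a, b))
    return edges

evts = [frozenset(s) for s in [{"e1"}, {"e2"}, {"e1", "e2"}, {"e1", "e2", "e3"}]]
for lo, hi in hasse_edges(evts):
    print(sorted(lo), "->", sorted(hi))
```

Drawing only covering edges is what makes the diagram readable: the transitive edges (e.g. {e1} to {e1, e2, e3}) are implied and omitted.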

  3. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

… of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation … of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with the focus on the relation …

  4. Optimization of Component Based Software Engineering Model Using Neural Network

    Directory of Open Access Journals (Sweden)

    Gaurav Kumar

    2014-10-01

Full Text Available The goal of Component-Based Software Engineering (CBSE) is to deliver high-quality, more reliable and more maintainable software systems in a shorter time and within a limited budget by reusing and combining existing quality components. A high-quality system can be achieved by using quality components, a framework, and an integration process, each of which plays a significant role. Consequently, the techniques and methods used for quality assurance and assessment of a component-based system differ from those of traditional software engineering methodology. In this paper, we present a model for optimizing Chidamber and Kemerer (CK) metric values of component-based software. A deep analysis of a series of CK metrics of software component design patterns is performed and metric values are drawn from them. Using an unsupervised neural network, the Self-Organizing Map, we propose a model that provides an optimized component-based software engineering model based on reusability that depends on CK metric values. Average, standard-deviation and optimized values of the CK metrics are compared and evaluated to show the optimized reusability of the component-based model.
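
The clustering step can be illustrated with a minimal one-dimensional Self-Organizing Map in plain Python; the metric names, normalized values and SOM parameters below are invented for the example and are not the paper's data or configuration:

```python
# Illustrative sketch only: a tiny 1-D SOM over normalized CK metric vectors
# (here pretend pairs of WMC and CBO values); nodes self-organize so that
# similar components map to nearby nodes.
import math, random

def train_som(data, n_nodes=4, epochs=200, lr0=0.5, radius0=2.0, seed=1):
    rng = random.Random(seed)
    dim = len(data[0])
    # weight vectors start at random positions in the unit cube
    nodes = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        radius = max(radius0 * (1.0 - t / epochs), 0.5)  # shrinking neighbourhood
        x = rng.choice(data)
        # best-matching unit = node closest to the sample
        bmu = min(range(n_nodes),
                  key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(dim)))
        for i in range(n_nodes):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))  # neighbourhood weight
            for d in range(dim):
                nodes[i][d] += lr * h * (x[d] - nodes[i][d])
    return nodes

# two loose clusters of made-up normalized (WMC, CBO) values
data = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.85, 0.95)]
nodes = train_som(data)
```

After training, each component's BMU serves as its cluster label, which is the kind of grouping the optimization in the paper builds on.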

  5. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  6. A Component-Based Conference Control Model and Implementation for Loosely Coupled Sessions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

Conference control is a core part of a complete Internet multimedia conferencing system and has been a hot research area over the years, but there are currently no widely accepted, robust and scalable solutions or standards. This paper proposes a component-based conference control model for loosely coupled sessions, in which media applications can collaborate with a Session Controller (SC) to provide conference control. An SC prototype has been built.

  7. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

Due to their effectiveness in the design and development of software applications, and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems: logical partitions of the schema. In this context, scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspectives. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of composability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3].
[1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004.
[2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT …

  8. PyCatch: Component based hydrological catchment modelling

    NARCIS (Netherlands)

    Lana-Renault, N.; Karssenberg, D.J.

    2013-01-01

Dynamic numerical models are powerful tools for representing and studying environmental processes through time. Usually they are constructed with environmental modelling languages, which are high-level programming languages that operate at the level of thinking of the scientists. In this paper we pr...

  9. Component-based Discrete Event Simulation Using the Fractal Component Model

    OpenAIRE

    Dalle, Olivier

    2007-01-01

In this paper we show that Fractal, a generic component model coming from the Component-Based Software Engineering (CBSE) community, meets most of the functional expectations identified so far in the simulation community for component-based modeling and simulation. We also demonstrate that Fractal offers additional features that have not yet been identified in the simulation community despite their potential usefulness. Finally, we describe our ongoing work on such a new simulation architec...

  10. Mapping of Core Components Based e-Business Standards into Ontology

    Science.gov (United States)

    Magdalenić, Ivan; Vrdoljak, Boris; Schatten, Markus

    A mapping of Core Components specification based e-business standards to an ontology is presented. The Web Ontology Language (OWL) is used for ontology development. In order to preserve the existing hierarchy of the standards, an emphasis is put on the mapping of Core Components elements to specific constructs in OWL. The main purpose of developing an e-business standards' ontology is to create a foundation for an automated mapping system that would be able to convert concepts from various standards in an independent fashion. The practical applicability and verification of the presented mappings is tested on the mapping of Universal Business Language version 2.0 and Cross Industry Invoice version 2.0 to OWL.

  11. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Science.gov (United States)

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...

  12. Component-Based Model for Single-Plate Shear Connections with Pretension and Pinched Hysteresis.

    Science.gov (United States)

    Weigand, Jonathan M

    2017-02-01

Component-based connection models provide a natural framework for modeling the complex behaviors of connections under extreme loads by capturing both the individual behaviors of the connection components, such as the bolt, shear plate, and beam web, and the complex interactions between those components. Component-based models also provide automatic coupling between the in-plane flexural and axial connection behaviors, a feature that is essential for modeling the behavior of connections under column removal. This paper presents a new component-based model for single-plate shear connections that includes the effects of pretension in the bolts and provides the capability to model standard and slotted holes. The component-based models are exercised under component-level deformations calculated from the connection demands via a practical rigid-body displacement model, so that the results of the presented modeling approach remain hand-calculable. Validation cases are presented for connections subjected to both seismic and column removal loading. These validation cases show that the component-based model is capable of predicting the response of single-plate shear connections under both seismic and column removal loads.
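
The rigid-body-displacement idea can be sketched as follows; the bilinear spring constants and bolt-row geometry are illustrative assumptions, not the paper's calibrated component models:

```python
# Hedged sketch of the general technique (not the paper's model): each bolt
# row is a spring; a rigid rotation of the beam end gives each row a
# deformation proportional to its lever arm, and the connection moment is
# the sum of row force times lever arm. Parameters below are invented.

def row_force(delta, k=200.0, f_yield=50.0):
    """Bilinear spring: elastic (stiffness k, kN/mm) up to f_yield (kN)."""
    f = k * delta
    return max(min(f, f_yield), -f_yield)

def connection_moment(rotation, row_arms):
    """Moment (kN*mm) from a rigid rotation (rad) about the bottom row."""
    return sum(row_force(rotation * z) * z for z in row_arms)

arms = [0.0, 75.0, 150.0, 225.0]  # bolt-row lever arms (mm)
print(connection_moment(0.002, arms))
```

Because each row's deformation follows directly from the imposed rotation, the whole evaluation stays hand-calculable, which is the point the abstract makes.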

  13. A Component-Based Software Configuration Management Model and Its Supporting System

    Institute of Scientific and Technical Information of China (English)

    梅宏; 张路; 杨芙清

    2002-01-01

Software configuration management (SCM) is a key technology in software development, and component-based software development (CBSD) is an emerging paradigm. However, to apply CBSD effectively in real-world practice, supporting SCM in CBSD needs to be further investigated. In this paper, the objects that need to be managed in CBSD are analyzed and a component-based SCM model is presented. In this model, components, as the integral logical constituents in a system, are managed as the basic configuration items in SCM, and the relationships between/among components are defined and maintained. Based on this model, a configuration management system is implemented.

  14. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
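
A minimal sketch of the coupling experiment, under stated assumptions (two well-mixed boxes instead of the paper's discretized domains, and linear interpolation between sediment updates; this is not the OpenMI implementation):

```python
# Sketch only: two well-mixed boxes exchange mass across an interface by
# first-order diffusion. The sediment component advances with a coarser time
# step, and the water component linearly interpolates the sediment state
# between sediment updates, mimicking temporally misaligned components.

def run(coupled_steps=100, dt=0.01, k=0.5, sediment_every=5):
    c_w, c_s = 1.0, 0.0          # initial concentrations (water, sediment)
    s_prev, s_next = c_s, c_s    # sediment states bracketing the water time
    for n in range(coupled_steps):
        if n % sediment_every == 0:
            # advance sediment one coarse step using the current water state
            s_prev = s_next
            s_next = s_next + sediment_every * dt * k * (c_w - s_next)
        # interpolate the sediment concentration at the water component's time
        frac = (n % sediment_every) / sediment_every
        c_s_interp = s_prev + frac * (s_next - s_prev)
        c_w = c_w + dt * k * (c_s_interp - c_w)
    return c_w, s_next

w_end, s_end = run()
print(w_end, s_end)
```

Because the fine-step water component integrates an interpolated sediment state while the coarse-step sediment component sees only snapshots of the water, the exchanged mass is not exactly antisymmetric, which is precisely the source of the mass balance error the paper investigates.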

  15. Embedded System Construction: Evaluation of a Model-Driven and Component-Based Development Approach

    OpenAIRE

    Bunse, C.; Gross, H.G.; Peper, C. (Claudia)

    2008-01-01

    Preprint of paper published in: Models in Software Engineering, Lecture Notes in Computer Science 5421, 2009; doi:10.1007/978-3-642-01648-6_8 Model-driven development has become an important engineering paradigm. It is said to have many advantages over traditional approaches, such as reuse or quality improvement, also for embedded systems. Along a similar line of argumentation, component-based software engineering is advocated. In order to investigate these claims, the MARMOT method was appli...

  16. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object … of Refinement of Component and Object Systems (rCOS) and illustrates it with experiences from the work on the Common Component Modelling Example (CoCoME). This gives evidence that the formal techniques developed in rCOS can be integrated into a model-driven development process and shows where it may …

  17. Models and frameworks: a synergistic association for developing component-based applications.

    Science.gov (United States)

    Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara

    2014-01-01

The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability, either of designs or of component-based implementations. This paper, which is based on the model-driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications at a higher level of abstraction than objects with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies, one using an existing framework and the other an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.

  18. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a class of device control systems. The control software and the hardware of its environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic industrial example, a gates control system, is employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us improve the dependability of the original system. The experimental results demonstrate the effectiveness of our approach.

  19. A Component-Based Debugging Approach for Detecting Structural Inconsistencies in Declarative Equation Based Models

    Institute of Scientific and Technical Information of China (English)

    Jian-Wan Ding; Li-Ping Chen; Fan-Li Zhou

    2006-01-01

    Object-oriented modeling with declarative equation based languages often unconsciously leads to structural inconsistencies. Component-based debugging is a new structural analysis approach that addresses this problem by analyzing the structure of each component in a model to separately locate faulty components. The analysis procedure is performed recursively based on the depth-first rule. It first generates fictitious equations for a component to establish a debugging environment, and then detects structural defects by using graph theoretical approaches to analyzing the structure of the system of equations resulting from the component. The proposed method can automatically locate components that cause the structural inconsistencies, and show the user detailed error messages. This information can be a great help in finding and localizing structural inconsistencies, and in some cases pinpoints them immediately.
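
The structural check behind such debugging can be sketched as a bipartite matching between equations and the unknowns they mention; this is an assumed formulation of the underlying idea, not the authors' algorithm:

```python
# Sketch (assumption about the technique, not the paper's code): a system of
# equations is structurally consistent only if every equation can be matched
# to a distinct unknown it mentions. Kuhn's augmenting-path matching finds
# how many equations are matchable; fewer than len(eqs) signals a defect.

def max_matching(eqs):
    """eqs: list of sets of variable names. Returns number of matched equations."""
    match = {}  # variable -> index of the equation it is assigned to

    def try_assign(i, seen):
        for v in eqs[i]:
            if v in seen:
                continue
            seen.add(v)
            # take v if free, or re-route the equation currently holding it
            if v not in match or try_assign(match[v], seen):
                match[v] = i
                return True
        return False

    return sum(try_assign(i, set()) for i in range(len(eqs)))

ok = [{"x", "y"}, {"y"}]          # 2 equations, matchable to x and y
bad = [{"x"}, {"x"}, {"x", "y"}]  # x is mentioned alone twice: over-constrained
print(max_matching(ok), len(ok))  # consistent: all equations matched
print(max_matching(bad), len(bad))  # only 2 of 3 matched: structural defect
```

An unmatched equation (or unknown) localizes the over- or under-determined part of the component, which is the kind of detailed error message the abstract describes.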

  20. Towards uncertainty quantification and parameter estimation for Earth system models in a component-based modeling framework

    Science.gov (United States)

    Peckham, Scott D.; Kelbert, Anna; Hill, Mary C.; Hutton, Eric W. H.

    2016-05-01

    Component-based modeling frameworks make it easier for users to access, configure, couple, run and test numerical models. However, they do not typically provide tools for uncertainty quantification or data-based model verification and calibration. To better address these important issues, modeling frameworks should be integrated with existing, general-purpose toolkits for optimization, parameter estimation and uncertainty quantification. This paper identifies and then examines the key issues that must be addressed in order to make a component-based modeling framework interoperable with general-purpose packages for model analysis. As a motivating example, one of these packages, DAKOTA, is applied to a representative but nontrivial surface process problem of comparing two models for the longitudinal elevation profile of a river to observational data. Results from a new mathematical analysis of the resulting nonlinear least squares problem are given and then compared to results from several different optimization algorithms in DAKOTA.
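
The calibration task can be caricatured with a brute-force least-squares fit; the exponential profile form and the parameter grid below are assumptions chosen for illustration, and DAKOTA of course applies real optimization algorithms rather than grid search:

```python
# Toy version of the model-to-observation comparison: fit a longitudinal
# elevation profile z(x) = z0 * exp(-k * x) (an assumed model form) to
# observations by minimizing the sum of squared residuals over a grid.
import math

def sse(z0, k, xs, zs):
    return sum((z0 * math.exp(-k * x) - z) ** 2 for x, z in zip(xs, zs))

def grid_fit(xs, zs):
    best = None
    for z0 in [i * 10.0 for i in range(1, 21)]:       # 10..200 m
        for k in [j * 0.001 for j in range(1, 101)]:  # 0.001..0.1 per km
            err = sse(z0, k, xs, zs)
            if best is None or err < best[0]:
                best = (err, z0, k)
    return best

xs = [0.0, 10.0, 20.0, 40.0]            # distance downstream (km)
zs = [100.0, 60.7, 36.8, 13.5]          # invented elevations, ~100*exp(-0.05x)
err, z0, k = grid_fit(xs, zs)
print(z0, k)
```

Replacing the grid with a nonlinear least squares solver is exactly the interoperability step the paper argues frameworks should provide.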

  1. Component-based model to predict aerodynamic noise from high-speed train pantographs

    Science.gov (United States)

    Latorre Iglesias, E.; Thompson, D. J.; Smith, M. G.

    2017-04-01

At the typical speeds of modern high-speed trains, the aerodynamic noise produced by the airflow over the pantograph is a significant source of noise. Although numerical models can be used to predict this noise, they are still very computationally intensive. A semi-empirical component-based prediction model is therefore proposed for the aerodynamic noise from train pantographs. The pantograph is approximated as an assembly of cylinders and bars with particular cross-sections. An empirical database is used to obtain the coefficients of the model, accounting for various factors: incident flow speed, diameter, cross-sectional shape, yaw angle, rounded edges, length-to-width ratio, incoming turbulence and directivity. The overall noise from the pantograph is obtained as the incoherent sum of the predicted noise from the different pantograph struts. The model is validated using available wind tunnel noise measurements of two full-size pantographs. The results show the potential of the semi-empirical model as a rapid tool for predicting aerodynamic noise from train pantographs.
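
The "incoherent sum" of strut contributions is the standard energy sum of decibel levels (this is textbook acoustics, not code from the paper):

```python
# Uncorrelated sources combine on an energy basis:
# L_total = 10 * log10( sum_i 10^(L_i / 10) )
import math

def incoherent_sum_db(levels):
    """Combine uncorrelated source levels given in dB."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels))

print(round(incoherent_sum_db([60.0, 60.0]), 2))  # -> 63.01 (doubling adds ~3 dB)
```

So the loudest one or two struts dominate the predicted total, which is why a per-component empirical database is enough for a rapid overall estimate.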

  2. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue-code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable, and a number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, an adaptive neuro-fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.
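
The fuzzy-inference side of such a model can be illustrated with a tiny zero-order Sugeno-style system; the membership functions and rule consequents below are invented, not the trained ANFIS from the paper:

```python
# Sketch of the model class only: inputs are component reliability and
# glue-code reliability in [0, 1]; four hand-written rules with constant
# consequents are combined by a weighted average (zero-order Sugeno form).

def mu_low(x):   # triangular membership, peak at 0
    return max(0.0, 1.0 - x / 0.6)

def mu_high(x):  # triangular membership, peak at 1
    return max(0.0, (x - 0.4) / 0.6)

def cbss_reliability(comp_rel, glue_rel):
    # rule firing strengths (product t-norm) with constant consequents
    rules = [
        (mu_high(comp_rel) * mu_high(glue_rel), 0.95),  # both high -> reliable
        (mu_high(comp_rel) * mu_low(glue_rel), 0.60),
        (mu_low(comp_rel) * mu_high(glue_rel), 0.50),
        (mu_low(comp_rel) * mu_low(glue_rel), 0.10),    # both low -> unreliable
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(cbss_reliability(0.9, 0.9), cbss_reliability(0.2, 0.2))
```

In an ANFIS, the membership parameters and consequents are not hand-written as above but learned from data by the neural network layer.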

  3. Component-based Topological Data Model for Three-dimensional Geology Modeling

    Institute of Scientific and Technical Information of China (English)

    HOU Enke; WU Lixin; WU Yuhua; JU Tianyi

    2005-01-01

Based on a study of the basic characteristics of geological objects and the special requirements of computing 3D geological models, this paper presents an object-oriented 3D topological data model. In this model, geological objects are divided into four object classes: point, line, area and volume. The volume class is further divided into four subclasses: the composite volume, the complex volume, the simple volume and the component. Twelve kinds of topological relations and the related data structures are designed for the geological objects.
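
The four object classes and the volume subclasses can be sketched as a class hierarchy; this is an illustrative rendering of the abstract's taxonomy, not the paper's data structures (the twelve topological relations are not reproduced here):

```python
# Class taxonomy from the abstract, sketched as Python classes.
class GeoObject: ...
class Point(GeoObject): ...
class Line(GeoObject): ...
class Area(GeoObject): ...
class Volume(GeoObject): ...

class Component(Volume): ...
class SimpleVolume(Volume): ...
class ComplexVolume(Volume): ...

class CompositeVolume(Volume):
    def __init__(self, parts):
        self.parts = list(parts)  # aggregation of other volume objects

stratum = CompositeVolume([SimpleVolume(), Component()])
print(len(stratum.parts))
```

The composite/complex/simple/component split lets a composite geological body aggregate simpler volumes while all four remain substitutable as volumes.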

  4. Modeling Core Collapse Supernovae

    Science.gov (United States)

    Mezzacappa, Anthony

    2017-01-01

Core collapse supernovae, or the death throes of massive stars, are general relativistic, neutrino-magneto-hydrodynamic events. The core collapse supernova mechanism is still not in hand, though key components have been illuminated, and the potential for multiple mechanisms for different progenitors exists. Core collapse supernovae are the single most important source of elements in the Universe, and serve other critical roles in galactic chemical and thermal evolution, the birth of neutron stars, pulsars, and stellar mass black holes, the production of a subclass of gamma-ray bursts, and as potential cosmic laboratories for fundamental nuclear and particle physics. Given this, the so-called "supernova problem" is one of the most important unsolved problems in astrophysics. It has been fifty years since the first numerical simulations of core collapse supernovae were performed. Progress in the past decade, and especially within the past five years, has been exponential, yet much work remains. Spherically symmetric simulations over nearly four decades laid the foundation for this progress. Two-dimensional modeling that assumes axial symmetry is maturing. And three-dimensional modeling, while in its infancy, has begun in earnest. I will present some of the recent work from the "Oak Ridge" group, and will discuss this work in the context of the broader work by other researchers in the field. I will then point to future requirements and challenges. Connections with other experimental, observational, and theoretical efforts will be discussed, as well.

  5. PARAMETRIC STUDIES ON THE COMPONENT-BASED APPROACH TO MODELLING BEAM BOTTOM FLANGE BUCKLING AT ELEVATED TEMPERATURES

    Directory of Open Access Journals (Sweden)

    Guan Quan

    2016-04-01

In this study, an analytical model of the combination of beam-web shear buckling and bottom-flange buckling at elevated temperatures is introduced. This analytical model is able to track the force-deflection path during post-buckling. A range of 3D finite element models has been created using the ABAQUS software. Comparisons have been carried out between the proposed analytical model, finite element modelling and an existing theoretical model by Dharma (2007). The comparisons indicate that the proposed method provides accurate predictions for Class 1 and Class 2 beams, and performs better than the existing Dharma model, especially for beams with high flange-to-web thickness ratios. A component-based model has been created on the basis of the analytical model, and will in due course be implemented in the software Vulcan for global structural fire analysis.

  6. ROSMOD: A Toolsuite for Modeling, Generating, Deploying, and Managing Distributed Real-time Component-based Software using ROS

    Directory of Open Access Journals (Sweden)

    Pranav Srinivas Kumar

    2016-09-01

This paper presents the Robot Operating System Model-driven development tool suite (ROSMOD), an integrated development environment for rapid prototyping of component-based software for the Robot Operating System (ROS) middleware. ROSMOD is well suited to the design, development and deployment of large-scale distributed applications on embedded devices. We present the various features of ROSMOD, including the modeling language, the graphical user interface, the code generators, and the deployment infrastructure. We demonstrate the utility of this tool with a real-world case study: an Autonomous Ground Support Equipment (AGSE) robot that was designed and prototyped using ROSMOD for the NASA Student Launch competition, 2014–2015.

  7. Embedded System Construction: Evaluation of a Model-Driven and Component-Based Development Approach

    NARCIS (Netherlands)

    Bunse, C.; Gross, H.G.; Peper, C.

    2008-01-01

    Preprint of paper published in: Models in Software Engineering, Lecture Notes in Computer Science 5421, 2009; doi:10.1007/978-3-642-01648-6_8 Model-driven development has become an important engineering paradigm. It is said to have many advantages over traditional approaches, such as reuse or quali


  9. A blind separation method of overlapped multi-components based on time varying AR model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

A method utilizing single-channel recordings to blindly separate multiple components overlapped in the time and frequency domains is proposed in this paper. Based on a time-varying AR model, the instantaneous frequency and amplitude of each signal component are estimated, and the signal components are thereby separated. By using prolate spheroidal sequences as basis functions to expand the time-varying parameters of the AR model, the method turns the problem of linear time-varying parameter estimation into a linear time-invariant parameter estimation problem, and the parameters are then estimated by a recursive algorithm. The computation of this method is simple, and no prior knowledge of the signals is needed. Simulation results demonstrate the validity and excellent performance of this method.
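
The basis-expansion trick can be sketched with a simpler setup: a linear basis is substituted for the paper's prolate spheroidal sequences, and batch least squares for its recursive algorithm, so this is only an illustration of the reparameterization, not the paper's method:

```python
# A TVAR(1) model x[n] = a(n) * x[n-1] + e[n] with a(n) = c0 + c1 * (n/N)
# becomes LINEAR in the time-invariant unknowns c0, c1: the regressors are
# x[n-1] and (n/N) * x[n-1], so a closed-form 2x2 least-squares solve suffices.

def fit_tvar1(x):
    n_total = len(x)
    s00 = s01 = s11 = b0 = b1 = 0.0
    for n in range(1, n_total):
        r0 = x[n - 1]                   # basis function 1 (constant) regressor
        r1 = (n / n_total) * x[n - 1]   # basis function 2 (linear) regressor
        s00 += r0 * r0; s01 += r0 * r1; s11 += r1 * r1
        b0 += r0 * x[n]; b1 += r1 * x[n]
    det = s00 * s11 - s01 * s01         # normal equations, solved directly
    c0 = (b0 * s11 - b1 * s01) / det
    c1 = (s00 * b1 - s01 * b0) / det
    return c0, c1

# synthesize a noiseless signal with a(n) = 0.9 - 0.4 * (n/N)
N = 200
x = [1.0]
for n in range(1, N):
    x.append((0.9 - 0.4 * (n / N)) * x[-1])

c0, c1 = fit_tvar1(x)
print(c0, c1)
```

With a richer basis (such as the prolate spheroidal sequences of the paper) the same reduction applies: more coefficients, but still a linear time-invariant estimation problem.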

  10. Models of the earth's core

    Science.gov (United States)

    Stevenson, D. J.

    1981-01-01

Combined inferences from seismology, high-pressure experiment and theory, geomagnetism, fluid dynamics, and current views of terrestrial planetary evolution lead to models of the earth's core with five basic properties. These are that core formation was contemporaneous with earth accretion; the core is not in chemical equilibrium with the mantle; the outer core is a fluid iron alloy containing significant quantities of lighter elements and is probably almost adiabatic and compositionally uniform; the more iron-rich inner solid core is a consequence of partial freezing of the outer core, and the energy release from this process sustains the earth's magnetic field; and the thermodynamic properties of the core are well constrained by the application of liquid-state theory to seismic and laboratory data.

  11. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability and correctness of component cooperation.

  12. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-01-01

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability and correctness of component cooperation. PMID:27801829

  13. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    Directory of Open Access Journals (Sweden)

    Esther Palomar

    2016-10-01

    Full Text Available Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability and correctness of component cooperation.

  14. Application of Core Dynamics Modeling to Core-Mantle Interactions

    Science.gov (United States)

    Kuang, Weijia

    2003-01-01

    Observations have demonstrated that length of day (LOD) variation on decadal time scales results from the exchange of axial angular momentum between the solid mantle and the core. There are in general four core-mantle interaction mechanisms that couple the core and the mantle. Of these, three have been suggested as likely dominant coupling mechanisms for the decadal core-mantle angular momentum exchange: gravitational core-mantle coupling arising from density anomalies in the mantle and in the core (including the inner core), electromagnetic coupling arising from the Lorentz force in the electrically conducting lower mantle (e.g. the D-layer), and topographic coupling arising from non-hydrostatic pressure acting on the core-mantle boundary (CMB) topography. In the past decades, most effort has gone into estimating the coupling torques from surface geomagnetic observations (the kinematic approach), which has provided insights into the core dynamical processes. In the meantime, it has also created questions and concerns about approximations in those studies that may invalidate the corresponding conclusions. The most serious problem is perhaps the set of approximations that are inconsistent with dynamical processes in the core, such as the inconsistency between the core surface flow beneath the CMB and the CMB topography, and that between the D-layer electric conductivity and the approximations on the toroidal field at the CMB. These inconsistencies can only be addressed with numerical core dynamics modeling. In the past few years, we applied our MoSST (Modular, Scalable, Self-consistent and Three-dimensional) core dynamics model to study core-mantle interactions together with geodynamo simulation, aiming at assessing the effect of the dynamical inconsistencies in the kinematic studies on core-mantle coupling torques. We focus on topographic and electromagnetic core-mantle couplings and find that, for the topographic coupling, the consistency between the core flow and the CMB topography is

  15. Spatially-Distributed Stream Flow and Nutrient Dynamics Simulations Using the Component-Based AgroEcoSystem-Watershed (AgES-W) Model

    Science.gov (United States)

    Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.

    2010-12-01

    The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. The

  16. DEVELOPMENT OF A GIS DATA MODEL WITH SPATIAL, TEMPORAL AND ATTRIBUTE COMPONENTS BASED ON OBJECT-ORIENTED APPROACH

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper presents a conceptual data model, the STA-model, for handling the spatial, temporal and attribute aspects of objects in GIS. The model is developed on the basis of the object-oriented modeling approach. It includes two major parts: (a) modeling the single objects by STA-object elements, and (b) modeling the relationships between STA-objects. As an example, the STA-model is applied to modeling land cover change data with spatial, temporal and attribute components.
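The two-part structure described above, STA-objects plus relationships between them, might be sketched as a pair of record types. All names here are hypothetical illustrations, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class STAObject:
    """Sketch of an STA-object holding the three aspects of one object."""
    oid: str
    spatial: tuple             # e.g. a simple geometry (here a bounding box)
    temporal: tuple            # (valid_from, valid_to)
    attributes: dict = field(default_factory=dict)

@dataclass
class STARelation:
    """Relationship between two STA-objects, e.g. 'changed_into'."""
    kind: str
    source: str
    target: str

# Land-cover change example: a forest parcel becomes cropland in 1995.
forest = STAObject("p1", ((0, 0), (1, 1)), (1980, 1995), {"cover": "forest"})
crop = STAObject("p2", ((0, 0), (1, 1)), (1995, 2000), {"cover": "cropland"})
change = STARelation("changed_into", forest.oid, crop.oid)
```

Queries over such records can then combine all three components, for instance selecting relations whose source object was valid during a given year.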

  17. Modeling of Pulsed Transformer with Nanocrystalline Cores

    Directory of Open Access Journals (Sweden)

    Amir Baktash

    2014-07-01

    Full Text Available Recently, tape-wound cores, due to their excellent properties, have become widely used in transformers for pulsed or high-frequency applications. The spiral structure of these cores affects the flux distribution inside the core and complicates the magnetic analysis and, consequently, the circuit analysis. In this paper, a model based on the reluctance network method is used to analyze the magnetic flux in toroidal wound cores and to calculate losses. A Preisach-based hysteresis model is included to account for the nonlinear characteristic of the core. Magnetic losses are calculated from the flux density at different points of the core using the hysteresis model. A transformer for use in a series resonant converter is modeled and implemented. The modeling results are compared with experimental measurements and FEM results to evaluate the validity of the model. Comparisons show the accuracy of the model as well as its simplicity and fast convergence.
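The reluctance-network idea can be illustrated with a toy calculation: each concentric layer of the wound core is treated as a parallel magnetic path with reluctance R = l/(μA), so inner layers, having shorter mean paths, carry more flux. The geometry, permeability and function name below are illustrative; the paper's actual network and Preisach model are far more detailed.

```python
import numpy as np

def layer_fluxes(ni, r_inner, r_outer, height, n_layers, mu_r=10000.0):
    """Split a toroidal wound core into concentric layers, model each as a
    reluctance R = l/(mu*A), and apply the total MMF (N*I, in ampere-turns)
    to each parallel magnetic path; returns the flux per layer in webers."""
    mu0 = 4e-7 * np.pi
    edges = np.linspace(r_inner, r_outer, n_layers + 1)
    fluxes = []
    for a, b in zip(edges[:-1], edges[1:]):
        r_mean = 0.5 * (a + b)
        length = 2 * np.pi * r_mean      # mean magnetic path of the layer
        area = (b - a) * height          # radial width times core height
        reluctance = length / (mu0 * mu_r * area)
        fluxes.append(ni / reluctance)   # phi = MMF / R for each path
    return np.array(fluxes)
```

For a core with inner radius 10 mm and outer radius 20 mm, the innermost layer carries visibly more flux than the outermost, which is the non-uniformity the abstract refers to.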

  18. Computer-aided process planning in prismatic shape die components based on Standard for the Exchange of Product model data

    Directory of Open Access Journals (Sweden)

    Awais Ahmad Khan

    2015-11-01

    Full Text Available In the past, insufficient technologies made good integration between die components in design, process planning, and manufacturing impossible. Nowadays, advanced technologies based on the Standard for the Exchange of Product model data are making it possible. This article discusses the three main steps for achieving complete process planning for prismatic parts of die components: data extraction, feature recognition, and process planning. The proposed computer-aided process planning system works as part of an integrated system to cover the process planning of any prismatic die component. The system is built using Visual Basic with the EWDraw system for visualizing the Standard for the Exchange of Product model data file. The system works successfully and can cover any type of sheet metal die component. The case study discussed in this article is taken from a large design of a progressive die.

  19. Identification and Analysis of Labor Productivity Components Based on ACHIEVE Model (Case Study: Staff of Kermanshah University of Medical Sciences)

    Science.gov (United States)

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study in which the population consisted of 872 working personnel in different administrative groups (contractual, fixed-term and regular) at Kermanshah University of Medical Sciences, from whom a sample of 270 was selected through the stratified random sampling method based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire. The questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach’s alpha coefficient. The data were analyzed by SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions of the employees, including environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), ability (knowledge and skills) variables and total labor productivity were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. Also, the results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees’ viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the viewpoint of the staff, were in a favorable condition and needed to be further taken into consideration by authorities. PMID:25560364
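Cronbach's alpha, used above to establish questionnaire reliability, is simple to compute from a respondents-by-items score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score). A minimal sketch:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly correlated items give alpha of 1; values around 0.7 or higher are conventionally taken as acceptable reliability for instruments like the one above.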

  20. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

    The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis of this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core.
The factors affecting the calculation times of various three-dimensional BWR, PWR and WWER-type APROS core models have been
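A full 3-D two-group core model of the kind described above is far beyond a snippet, but the role of the six delayed neutron groups can be illustrated with the point-kinetics equations. The group constants below are typical U-235 thermal values and the neutron generation time is an assumption, not taken from the thesis.

```python
import numpy as np

# Illustrative six-group delayed-neutron data (typical U-235 thermal values).
BETA = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
LAM = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # decay consts, 1/s
GEN_TIME = 1e-4                                              # s, assumed

def point_kinetics(rho, t_end, dt=1e-5):
    """Integrate the point-kinetics equations with six delayed groups by
    explicit Euler, starting from equilibrium at unit power; a drastic
    simplification of a 3-D two-group core neutronics model."""
    beta = BETA.sum()
    n = 1.0
    c = BETA / (LAM * GEN_TIME)          # equilibrium precursor concentrations
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / GEN_TIME) * n + (LAM * c).sum()
        dc = BETA / GEN_TIME * n - LAM * c
        n += dt * dn
        c += dt * dc
    return n
```

At zero reactivity the equilibrium is preserved exactly; a small positive reactivity below beta produces the familiar prompt jump followed by slow delayed-neutron-controlled growth.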

  1. Component Based Testing with ioco

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.

    Component based testing concerns the integration of components which have already been tested separately. We show that, with certain restrictions, the ioco-test theory for conformance testing is suitable for component based testing, in the sense that the integration of fully conformant components is

  2. The artifacts of component-based development

    CERN Document Server

    Qureshi, M Rizwan Jameel

    2012-01-01

    The component-based development idea was floated at a conference named "Mass Produced Software Components" in 1968 [1]. Since then, engineering and scientific libraries have been developed to reuse previously developed functions. This concept is now widely used in software development as component-based development (CBD). Component-based software engineering (CBSE) is used to develop/assemble software from existing components [2]. Software developed using components is called componentware [3]. This paper presents different architectures of CBD such as ActiveX, the common object request broker architecture (CORBA), remote method invocation (RMI) and the simple object access protocol (SOAP). The overall objective of this paper is to support the practice of CBD by comparing its advantages and disadvantages. This paper also evaluates the object-oriented process model to adapt it for CBD.

  3. Formal Component-Based Semantics

    CERN Document Server

    Madlener, Ken; van Eekelen, Marko; 10.4204/EPTCS.62.2

    2011-01-01

    One of the proposed solutions for improving the scalability of semantics of programming languages is Component-Based Semantics, introduced by Peter D. Mosses. It is expected that this framework can also be used effectively for modular meta theoretic reasoning. This paper presents a formalization of Component-Based Semantics in the theorem prover Coq. It is based on Modular SOS, a variant of SOS, and makes essential use of dependent types, while profiting from type classes. This formalization constitutes a contribution towards modular meta theoretic formalizations in theorem provers. As a small example, a modular proof of determinism of a mini-language is developed.

  4. Retention Models on Core-Shell Columns.

    Science.gov (United States)

    Jandera, Pavel; Hájek, Tomáš; Růžičková, Marie

    2017-07-13

    A thin, active shell layer on core-shell columns provides high efficiency in HPLC at moderately high pressures. We revisited three models of mobile phase effects on retention for core-shell columns in mixed aqueous-organic mobile phases: the linear solvent strength and Snyder-Soczewiński two-parameter models and a three-parameter model. For some compounds, the two-parameter models show minor deviations from linearity due to neglect of possible minor retention in pure weak solvent, which is compensated for in the three-parameter model, which does not explicitly assume either the adsorption or the partition retention mechanism in normal- or reversed-phase systems. The model retention equation can be formulated as a function of solute retention factors of nonionic compounds in pure organic solvent and in pure water (or aqueous buffer) and of the volume fraction of either the aqueous or the organic solvent component in a two-component mobile phase. With core-shell columns, the impervious solid core does not participate in the retention process. Hence, the thermodynamic retention factors, defined as the ratio of the mass of the analyte contained in the stationary phase to its mass in the mobile phase in the column, should not include the particle core volume. The values of the thermodynamic factors are lower than the retention factors determined using a convention including the inert core in the stationary phase. However, both conventions produce correct results if consistently used to predict the effects of changing mobile phase composition on retention. We compared three types of core-shell columns with C18-, phenyl-hexyl-, and biphenyl-bonded phases. The core-shell columns with phenyl-hexyl- and biphenyl-bonded ligands provided lower errors in two-parameter model predictions for alkylbenzenes, phenolic acids, and flavonoid compounds in comparison with C18-bonded ligands.
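The two-parameter linear solvent strength (LSS) model mentioned above is log k = log k_w - S*phi, with phi the organic-modifier volume fraction. A minimal sketch of prediction and fitting (function names are illustrative, not from the paper):

```python
import numpy as np

def lss_log_k(phi, log_kw, S):
    """Linear solvent strength model: log k = log k_w - S * phi,
    where phi is the organic-modifier volume fraction."""
    return log_kw - S * phi

def fit_lss(phi, k):
    """Recover (log k_w, S) from measured retention factors by a
    straight-line fit of log10(k) against phi."""
    slope, intercept = np.polyfit(phi, np.log10(k), 1)
    return intercept, -slope
```

On synthetic data generated with log k_w = 2 and S = 4 the fit recovers both parameters; deviations from this line at low phi are what motivate the three-parameter model discussed in the abstract.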

  5. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object... hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules...

  6. Mechanisms and Geochemical Models of Core Formation

    CERN Document Server

    Rubie, David C

    2015-01-01

    The formation of the Earth's core is a consequence of planetary accretion and processes in the Earth's interior. The mechanical process of planetary differentiation is likely to occur in large, if not global, magma oceans created by the collisions of planetary embryos. Metal-silicate segregation in magma oceans occurs rapidly and efficiently unlike grain scale percolation according to laboratory experiments and calculations. Geochemical models of the core formation process as planetary accretion proceeds are becoming increasingly realistic. Single stage and continuous core formation models have evolved into multi-stage models that are coupled to the output of dynamical models of the giant impact phase of planet formation. The models that are most successful in matching the chemical composition of the Earth's mantle, based on experimentally-derived element partition coefficients, show that the temperature and pressure of metal-silicate equilibration must increase as a function of time and mass accreted and so m...

  7. Processor core model for quantum computing.

    Science.gov (United States)

    Yung, Man-Hong; Benjamin, Simon C; Bose, Sougato

    2006-06-09

    We describe an architecture based on a processing "core," where multiple qubits interact perpetually, and a separate "store," where qubits exist in isolation. Computation consists of single qubit operations, swaps between the store and the core, and free evolution of the core. This enables computation using physical systems where the entangling interactions are "always on." Alternatively, for switchable systems, our model constitutes a prescription for optimizing many-qubit gates. We discuss implementations of the quantum Fourier transform, Hamiltonian simulation, and quantum error correction.

  8. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stroud

    2012-09-01

    Full Text Available Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species.

    At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and for short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m−3 and 1.2 μg m−3, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights due to an overly-stable model surface layer
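The normalized mean bias quoted above (134% for CO at the Windsor site) is a standard evaluation metric, NMB = sum(model - obs) / sum(obs), usually reported in percent. A one-line sketch:

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB in percent: 100 * sum(model - obs) / sum(obs).
    Positive values indicate model over-prediction."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 100.0 * (model - obs).sum() / obs.sum()
```

A model that doubles every observation scores an NMB of exactly 100%, so the 134% figure above indicates the modelled CO was more than double the observed mean.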

  9. Formalization in Component Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Knudsen, John; Makowski, Piotr;

    2006-01-01

    We present a unifying conceptual framework for components, component interfaces, contracts and composition of components by focusing on the collection of properties or qualities that they must share. A specific property, such as signature, functionality behaviour or timing is an aspect. Each aspe...... by small examples, using UML as concrete syntax for various aspects, and is illustrated by one larger case study based on an industrial prototype of a complex component based system....

  10. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stroud

    2012-02-01

    Full Text Available Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in southern Ontario (ON), Canada, were used to evaluate Environment Canada's regional chemical transport model predictions of primary organic aerosol (POA). Environment Canada's operational numerical weather prediction model and the 2006 Canadian and 2005 US national emissions inventories were used as input to the chemical transport model (named AURAMS). Particle-component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. Co-located carbon monoxide (CO), PM2.5 black carbon (BC), and PM1 SO4 measurements were also used for evaluation and interpretation, permitting a detailed diagnostic model evaluation.

    At the urban site, good agreement was observed for the comparison of daytime campaign PM1 POA and HOA mean values: 1.1 μg m−3 vs. 1.2 μg m−3, respectively. However, a POA overprediction was evident on calm nights due to an overly-stable model surface layer. Biases in model POA predictions trended from positive to negative with increasing HOA values. This trend has several possible explanations, including (1) underweighting of urban locations in particulate matter (PM) spatial surrogate fields, (2) overly-coarse model grid spacing for resolving urban-scale sources, and (3) lack of a model particle POA evaporation process during dilution of vehicular POA tail-pipe emissions to urban scales. Furthermore, a trend in POA bias was observed at the urban site as a function of the BC/HOA ratio, suggesting a possible association of POA underprediction with diesel combustion sources. For several time periods, POA overprediction was also observed for sulphate-rich plumes, suggesting that our model POA fractions for the PM2.5 chemical

  11. Geodynamo Modeling of Core-Mantle Interactions

    Science.gov (United States)

    Kuang, Wei-Jia; Chao, Benjamin F.; Smith, David E. (Technical Monitor)

    2001-01-01

    Angular momentum exchange between the Earth's mantle and core influences the Earth's rotation on time scales of decades and longer, in particular the length of day (LOD), which has been measured with progressively increasing accuracy for the last two centuries. There are four possible coupling mechanisms for transferring axial angular momentum across the core-mantle boundary (CMB): viscous, magnetic, topographic, and gravitational torques. Here we use our scalable, modularized, fully dynamic geodynamo model for the core to assess the importance of these torques. This numerical model, an extension of the Kuang-Bloxham model that has successfully simulated the generation of the Earth's magnetic field, is used to obtain numerical results under various physical conditions in terms of specific parameterizations consistent with the dynamical processes in the fluid outer core. The results show that, depending on the electrical conductivity of the lower mantle and the amplitude of the boundary topography at the CMB, both magnetic and topographic couplings can contribute significantly to the angular momentum exchange. This implies that core-mantle interactions are far more complex than has been assumed and that there is unlikely to be a single dominant coupling mechanism for the observed decadal LOD variation.
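The angular-momentum bookkeeping behind the decadal LOD signal follows from dT = T^2 * dL / (2*pi*I_m): transferring core angular momentum of order 1e26 kg m^2/s to the mantle changes the LOD by roughly a millisecond, the size of the observed decadal variations. The mantle moment of inertia below is an approximate literature value and the helper function is illustrative.

```python
import math

I_MANTLE = 7.1e37   # kg m^2, approximate moment of inertia of the mantle
LOD = 86400.0       # s, nominal length of day

def dlod_from_core(delta_l_core):
    """LOD change (s) when the core transfers axial angular momentum
    delta_l_core (kg m^2/s) to the mantle: since Omega = 2*pi/T and
    dL = I_m * dOmega, we get dT = T**2 * dL / (2*pi*I_m)."""
    return LOD ** 2 * delta_l_core / (2 * math.pi * I_MANTLE)
```

With delta_l_core = 1e26 kg m^2/s the result is about 1.7 ms, consistent with the millisecond-scale decadal LOD variations the record above describes.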

  12. Enhanced Core Noise Modeling for Turbofan Engines

    Science.gov (United States)

    Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.

    2011-01-01

    This report describes work performed by MTC Technologies (MTCT) for NASA Glenn Research Center (GRC) under Contract NAS3-00178, Task Order No. 15. MTCT previously developed a first-generation empirical model that correlates the core/combustion noise of four GE engines, the CF6, CF34, CFM56, and GE90, for General Electric (GE) under Contract No. 200-1X-14W53048, in support of GRC Contract NAS3-01135. MTCT has demonstrated in earlier noise modeling efforts that predictive modeling is greatly improved by an iterative approach, so in support of NASA's Quiet Aircraft Technology Project, GRC sponsored this effort to improve the model. Since the noise data available for correlation are total engine noise spectra, it is total engine noise that must be predicted. Since the scope of this effort was not sufficient to explore fan and turbine noise, the most meaningful comparisons must be restricted to frequencies below the blade passage frequency. Below the blade passage frequency and at relatively high power settings, jet noise is expected to be the dominant source, and comparisons are shown that demonstrate the accuracy of the jet noise model recently developed by MTCT for NASA under Contract NAS3-00178, Task Order No. 10. At lower power settings the core noise became most apparent, and these data, corrected for the contribution of jet noise, were then used to establish the characteristics of core noise. There is clearly more than one spectral range where core noise is evident, so the spectral approach developed by von Glahn and Krejsa in 1982, wherein four spectral regions overlap, was used in the GE effort. Further analysis indicates that the two higher frequency components, which are often somewhat masked by turbomachinery noise, can be treated as one component, and it is on that basis that the current model is formulated. The frequency scaling relationships are improved and are now based on combustor and core nozzle geometries. In conjunction with the Task ...

  13. Conceptual Models Core to Good Design

    CERN Document Server

    Johnson, Jeff

    2011-01-01

    People make use of software applications in their activities, applying them as tools in carrying out tasks. That this use should be good for people--easy, effective, efficient, and enjoyable--is a principal goal of design. In this book, we present the notion of Conceptual Models, and argue that Conceptual Models are core to achieving good design. From years of helping companies create software applications, we have come to believe that building applications without Conceptual Models is just asking for designs that will be confusing and difficult to learn, remember, and use. We show how Concept

  14. From cusps to cores: a stochastic model

    Science.gov (United States)

    El-Zant, Amr A.; Freundlich, Jonathan; Combes, Françoise

    2016-09-01

    The cold dark matter model of structure formation faces apparent problems on galactic scales. Several threads point to excessive halo concentration, including central densities that rise too steeply with decreasing radius. Yet, random fluctuations in the gaseous component can `heat' the centres of haloes, decreasing their densities. We present a theoretical model deriving this effect from first principles: stochastic variations in the gas density are converted into potential fluctuations that act on the dark matter; the associated force correlation function is calculated and the corresponding stochastic equation solved. Assuming a power-law spectrum of fluctuations with maximal and minimal cutoff scales, we derive the velocity dispersion imparted to the halo particles and the relevant relaxation time. We further perform numerical simulations, with fluctuations realized as a Gaussian random field, which confirm the formation of a core within a time-scale comparable to that derived analytically. Non-radial collective modes enhance the energy transport process that erases the cusp, though the parametrizations of the analytical model persist. In our model, the dominant contribution to the dynamical coupling driving the cusp-core transformation comes from the largest scale fluctuations. Yet, the efficiency of the transformation is independent of the value of the largest scale and depends weakly (linearly) on the power-law exponent; it effectively depends on two parameters: the gas mass fraction and the normalization of the power spectrum. This suggests that cusp-core transformations observed in hydrodynamic simulations of galaxy formation may be understood and parametrized in simple terms, the physical and numerical complexities of the various implementations notwithstanding.
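    The band-limited, power-law fluctuation spectrum at the heart of this model can be illustrated with a short sketch. The grid size, spectral exponent, and cutoff wavenumbers below are arbitrary illustrative choices, not values taken from the paper:

```python
import numpy as np

def gaussian_random_field(n=128, exponent=-2.0, kmin=2, kmax=32, seed=0):
    """Sample a 2-D Gaussian random field whose power spectrum follows
    P(k) ~ k**exponent between the cutoff wavenumbers kmin and kmax,
    mimicking band-limited gas-density fluctuations."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n) * n                  # integer wavenumbers
    ky = np.fft.fftfreq(n) * n
    k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    amplitude = np.zeros_like(k)
    band = (k >= kmin) & (k <= kmax)            # maximal/minimal cutoff scales
    amplitude[band] = k[band] ** (exponent / 2.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    field = np.fft.ifft2(noise * amplitude).real
    return field / field.std()                  # normalise to unit variance

delta = gaussian_random_field()
print(delta.shape, round(float(delta.std()), 3))
```

    In the paper's setting such a realisation would stand in for the stochastic gas-density field whose induced potential fluctuations heat the dark-matter cusp.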

  15. Geomagnetic Core Field Secular Variation Models

    DEFF Research Database (Denmark)

    Gillet, N.; Lesur, V.; Olsen, Nils

    2010-01-01

    We analyse models describing time changes of the Earth’s core magnetic field (secular variation) covering the historical period (several centuries) and the more recent satellite era (previous decade), and we illustrate how both the information contained in the data and the a priori information...... highlight the difficulty of resolving the time variability of the high degree secular variation coefficients (i.e. the secular acceleration), arising for instance from the challenge to properly separate sources of internal and of external origin. In addition, the regularisation process may also result...

  16. From cusps to cores: a stochastic model

    CERN Document Server

    El-Zant, Amr; Combes, Francoise

    2016-01-01

    The cold dark matter model of structure formation faces apparent problems on galactic scales. Several threads point to excessive halo concentration, including central densities that rise too steeply with decreasing radius. Yet, random fluctuations in the gaseous component can 'heat' the centres of haloes, decreasing their densities. We present a theoretical model deriving this effect from first principles: stochastic variations in the gas density are converted into potential fluctuations that act on the dark matter; the associated force correlation function is calculated and the corresponding stochastic equation solved. Assuming a power law spectrum of fluctuations with maximal and minimal cutoff scales, we derive the velocity dispersion imparted to the halo particles and the relevant relaxation time. We further perform numerical simulations, with fluctuations realised as a Gaussian random field, which confirm the formation of a core within a timescale comparable to that derived analytically. Non-radial colle...

  17. Rotating, hydromagnetic laboratory experiment modelling planetary cores

    Science.gov (United States)

    Kelley, Douglas H.

    2009-10-01

    This dissertation describes a series of laboratory experiments motivated by planetary cores and the dynamo effect, the mechanism by which the flow of an electrically conductive fluid can give rise to a spontaneous magnetic field. Our experimental apparatus, meant to be a laboratory model of Earth's core, contains liquid sodium between an inner, solid sphere and an outer, spherical shell. The fluid is driven by the differential rotation of these two boundaries, each of which is connected to a motor. Applying an axial, DC magnetic field, we use a collection of Hall probes to measure the magnetic induction that results from interactions between the applied field and the flowing, conductive fluid. We have observed and identified inertial modes, which are bulk oscillations of the fluid restored by the Coriolis force. Over-reflection at a shear layer is one mechanism capable of exciting such modes, and we have developed predictions of both onset boundaries and mode selection from over-reflection theory which are consistent with our observations. Also, motivated by previous experimental devices that used ferromagnetic boundaries to achieve dynamo action, we have studied the effects of a soft iron (ferromagnetic) inner sphere on our apparatus, again finding inertial waves. We also find that all behaviors are more broadband and generally more nonlinear in the presence of a ferromagnetic boundary. Our results with a soft iron inner sphere have implications for other hydromagnetic experiments with ferromagnetic boundaries, and are appropriate for comparison to numerical simulations as well. From our observations we conclude that inertial modes almost certainly occur in planetary cores and will occur in future rotating experiments. In fact, the predominance of inertial modes in our experiments and in other recent work leads to a new paradigm for rotating turbulence, starkly different from turbulence theories based on assumptions of isotropy and homogeneity, starting instead

  18. Component Based Dynamic Reconfigurable Test System

    Institute of Scientific and Technical Information of China (English)

    LAI Hong; HE Lingsong; ZHANG Dengpan

    2006-01-01

    In this paper, a novel component-based framework for test systems is presented to meet new requirements for dynamic changes of test functions and reconfiguration of test resources. The complexity of dynamic reconfiguration arises from the scale, redirection, extensibility and interconnection of components in a test system. The paper first discusses the component-assembly-based framework, which provides an open platform for deploying components; a script interpreter model is then introduced to dynamically create components and build the test system by analyzing XML-based descriptions of the test system. A pipeline model is presented to provide data channels and behavior reflection among the components. Finally, a dynamic reconfigurable test system is implemented on the basis of COM and applied in the remote test and control system of a CNC machine.
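    The XML-driven instantiation described above can be sketched in miniature. The registry, component classes, and XML schema here are invented for illustration and stand in for the paper's COM-based implementation:

```python
import xml.etree.ElementTree as ET

# A registry maps component type names to classes; a script interpreter
# reads an XML system description and instantiates the named components,
# which are then chained in a simple data pipeline.
REGISTRY = {}

def component(cls):
    REGISTRY[cls.__name__] = cls
    return cls

@component
class Sensor:
    def process(self, data):
        return data + [42.0]            # pretend to acquire a sample

@component
class Filter:
    def process(self, data):
        return [x * 0.5 for x in data]  # pretend to condition the signal

def build_pipeline(xml_text):
    """Interpret the XML description and instantiate the listed components."""
    root = ET.fromstring(xml_text)
    return [REGISTRY[node.get("type")]() for node in root.findall("component")]

CONFIG = """<testsystem>
  <component type="Sensor"/>
  <component type="Filter"/>
</testsystem>"""

data = []
for stage in build_pipeline(CONFIG):    # pipeline model: chained data channels
    data = stage.process(data)
print(data)                             # prints [21.0]
```

    Reconfiguration then amounts to editing the XML and rebuilding the pipeline, without touching the component implementations.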

  19. Improvement of core degradation model in ISAAC

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Ha; Kim, See Darl; Park, Soo Yong

    2004-02-01

    If the water inventory in the fuel channels depletes and the fuel rods are exposed to steam after uncovering inside the pressure tube, the decay heat generated by the fuel rods is transferred to the pressure tube and to the calandria tube by radiation, and finally to the moderator in the calandria tank by conduction. During this process, the cladding will be heated first and ballooned when the fuel gap internal pressure exceeds the primary system pressure. The pressure tube will also balloon and will touch the calandria tube, increasing the heat transfer rate to the moderator. Although this situation is not desirable, the fuel channel is expected to maintain its integrity as long as the calandria tube is submerged in the moderator, because the decay heat can be removed to the moderator through radiation and conduction. Therefore, loss of coolant and moderator inside and outside the channel may cause severe core damage, including horizontal fuel channel sagging and finally loss of channel integrity. The sagged channels contact the channels located below and lose their heat transfer area to the moderator. As the accident progresses, the disintegrated fuel channels will be heated up and relocated onto the bottom of the calandria tank. If the temperature of these relocated materials is high enough to attack the calandria tank, the calandria tank would fail and molten material would contact the calandria vault water. Steam explosion and/or rapid steam generation from this interaction may threaten containment integrity. Though a detailed model is required to simulate a severe accident at CANDU plants, the complexity of the phenomena and of the inner structures, as well as the lack of experimental data, forces the choice of a simple but reasonable model as a first step. ISAAC 1.0 was developed to model the basic physicochemical phenomena during severe accident progression. At present, ISAAC 2.0 is being developed for accident management guide development and strategy evaluation. In ...

  20. Component Based Electronic Voting Systems

    Science.gov (United States)

    Lundin, David

    An electronic voting system may be said to be composed of a number of components, each of which has a number of properties. One of the most attractive effects of this way of thinking is that each component may have an attached in-depth threat analysis and verification strategy. Furthermore, the need to include the full system when making changes to a component is minimised and a model at this level can be turned into a lower-level implementation model where changes can cascade to as few parts of the implementation as possible.

  1. Core Collapse Supernova Models For Nucleosynthesis

    Science.gov (United States)

    Casanova, Jordi; Frohlich, C.; Perego, A.; Hempel, M.

    2014-01-01

    Type II supernova explosions are the product of the collapse of massive stars (M > 8-10 Msun), which explode with a kinetic energy release of 1e51 erg. While sophisticated multi-dimensional models can reveal details of the explosion mechanism (role of convection, fluid instabilities, etc.), they are computationally too expensive for nucleosynthesis studies. However, precise nucleosynthesis predictions are needed to understand the supernova contribution to the heavy elements and the abundances observed in metal-poor stars. We have modeled the core collapse, bounce and subsequent explosion of massive stars assuming spherical symmetry with the code Agile-IDSA (Liebendoerfer et al. 2009) combined with a novel method to artificially trigger the explosion (PUSH). The code also includes the Hempel EOS, which uses a modern non-NSE to cover the entire nucleosynthesis duration. In our simulations, based on the neutrino-delayed explosion mechanism, the explosion sets in by depositing a small amount of additional energy (from mu and tau neutrinos) to revive the stalled shock. Our results show that the code Agile-IDSA combined with PUSH is very robust and can successfully reproduce an explosion with a more reliable treatment of the crucial quantities involved in nucleosynthesis (i.e., the electron fraction). Here, we present a detailed isotopic abundance study for a wide variety of progenitors, as well as an analysis of the explosion properties, such as the explosion energies, remnant masses or compactness of the progenitor models.

  2. Structural modeling of sandwich structures with lightweight cellular cores

    Science.gov (United States)

    Liu, T.; Deng, Z. C.; Lu, T. J.

    2007-10-01

    An effective single layered finite element (FE) computational model is proposed to predict the structural behavior of lightweight sandwich panels having two dimensional (2D) prismatic or three dimensional (3D) truss cores. Three different types of cellular core topology are considered: pyramidal truss core (3D), Kagome truss core (3D) and corrugated core (2D), representing three kinds of material anisotropy: orthotropic, monoclinic and general anisotropic. A homogenization technique is developed to obtain the homogenized macroscopic stiffness properties of the cellular core. In comparison with the results obtained by using a detailed FE model, the single layered computational model gives acceptable predictions for both the static and dynamic behaviors of orthotropic truss core sandwich panels. However, for non-orthotropic 3D truss cores, the predictions are not as good. For both static and dynamic behaviors of a 2D corrugated core sandwich panel, the predictions derived from the single layered computational model are generally acceptable when the size of the unit cell varies within a certain range, with the predictions for moderately strong or strong corrugated cores more accurate than those for weak cores.

  3. Structural modeling of sandwich structures with lightweight cellular cores

    Institute of Scientific and Technical Information of China (English)

    T. Liu; Z. C. Deng; T. J. Lu

    2007-01-01

    An effective single layered finite element (FE) computational model is proposed to predict the structural behavior of lightweight sandwich panels having two dimensional (2D) prismatic or three dimensional (3D) truss cores. Three different types of cellular core topology are considered: pyramidal truss core (3D), Kagome truss core (3D) and corrugated core (2D), representing three kinds of material anisotropy: orthotropic, monoclinic and general anisotropic. A homogenization technique is developed to obtain the homogenized macroscopic stiffness properties of the cellular core. In comparison with the results obtained by using a detailed FE model, the single layered computational model gives acceptable predictions for both the static and dynamic behaviors of orthotropic truss core sandwich panels. However, for non-orthotropic 3D truss cores, the predictions are not as good. For both static and dynamic behaviors of a 2D corrugated core sandwich panel, the predictions derived from the single layered computational model are generally acceptable when the size of the unit cell varies within a certain range, with the predictions for moderately strong or strong corrugated cores more accurate than those for weak cores.

  4. Construction and utilization of linear empirical core models for PWR in-core fuel management

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.

    1988-01-01

    An empirical core-model construction procedure for pressurized water reactor (PWR) in-core fuel management is developed that allows determining the optimal BOC k∞ profiles in PWRs as a single linear-programming problem and thus facilitates the overall optimization process for in-core fuel management due to algorithmic simplification and reduction in computation time. The optimal profile is defined as one that maximizes cycle burnup. The model construction scheme treats the fuel-assembly power fractions, burnup, and leakage as state variables and BOC zone enrichments as control variables. The core model consists of linear correlations between the state and control variables that describe fuel-assembly behavior in time and space. These correlations are obtained through time-dependent two-dimensional core simulations. The core model incorporates the effects of composition changes in all the enrichment control zones on a given fuel assembly and is valid at all times during the cycle for a given range of control variables. No assumption is made on the geometry of the control zones. Either a scattered or an annular composition distribution can be considered for model construction. The application of the methodology to a typical PWR core indicates good agreement between the model and exact simulation results.
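    The reduction to a single linear program can be sketched as follows. The burnup and power-fraction coefficients, the peaking limit, and the enrichment bounds below are invented placeholders, not the correlations actually fitted from the time-dependent core simulations:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear core model: cycle burnup and assembly power fractions
# are assumed linear in the three zone enrichments e = (e1, e2, e3).
burnup_coeff = np.array([4.0, 6.0, 5.0])        # burnup gained per w/o (illustrative)
power_coeff = np.array([[0.30, 0.10, 0.03],     # power-fraction response of three
                        [0.07, 0.27, 0.10],     # representative assemblies to each
                        [0.03, 0.13, 0.33]])    # enrichment zone (illustrative)
power_limit = np.array([1.5, 1.5, 1.5])         # peaking constraint per assembly

# linprog minimises, so negate the objective to maximise cycle burnup.
res = linprog(c=-burnup_coeff, A_ub=power_coeff, b_ub=power_limit,
              bounds=[(1.5, 4.5)] * 3)          # enrichment range in w/o U-235
print(res.x, -res.fun)
```

    With real fitted correlations in place of the placeholders, the solution vector would be the optimal BOC enrichment (and hence k∞) profile.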

  5. Secular variation and core-flow modelling with stable stratification at the top of the core

    Science.gov (United States)

    Holme, Richard; Buffett, Bruce

    2015-04-01

    Observed geomagnetic secular variation has been used for many years to provide an observational constraint on the dynamics of the core through the modelling of its surface flow. Recent results in both seismology and mineral physics provide strong evidence of a stably stratified layer at the top of the core, which has substantial implications for the calculation of such flows. It has been assumed for many years that the dynamic state at the core surface is close to tangentially geostrophic, and pure stable stratification also requires a flow to be toroidal. Combining these two conditions requires variations in flow that are completely zonal toroidal, which are known not to provide an adequate explanation of the observed secular variation. However, a stably stratified layer can support flow instabilities of a more general character. Buffett (2014) has recently provided a model in which zonal toroidal motions are associated with the excitation of a zonal poloidal instability. This model is able to explain the broad variation of the axial dipole over the past 100 years, and also to explain features of geomagnetic jerks that cannot be explained by purely torsional motions. This model has inspired a new generation of core-flow models, with a substantial time-varying zonal poloidal component, something that is absent from most models of core surface flow. Here, we present these new models, and consider to what extent this flow structure can explain the details of secular variation. We also consider the implications for the connection between core-surface flow and length-of-day variation - a stably stratified layer has implications for the interpretation of core flow and the Earth's angular momentum budget. Finally, we consider the ability of core-surface flow models to probe the structure of the stably-stratified layer. Buffett (2014). Geomagnetic fluctuations reveal stable stratification at the top of the Earth's core, Nature 507, 484-487, doi:10.1038/nature13122

  6. Introducing the Core Probability Framework and Discrete-Element Core Probability Model for efficient stochastic macroscopic modelling

    NARCIS (Netherlands)

    Calvert, S.C.; Taale, H.; Hoogendoorn, S.P.

    2014-01-01

    In this contribution the Core Probability Framework (CPF) is introduced with the application of the Discrete-Element Core Probability Model (DE-CPM) as a new DNL for dynamic macroscopic modelling of stochastic traffic flow. The model is demonstrated for validation in a test case and for computational ...

  7. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

    The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity are varying in space and time. This makes it impossible to arrive at a fully...... correct scaling. The paper presents an empirical formula for the estimation of the wave induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation can be used for the estimation of pore velocities in cores. The paper proposes...... that the diameter of the core material in models is chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity of a most critical area in the core with respect to porous flow. Finally the method is demonstrated...
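    The proposed scaling rule can be sketched numerically: solve the Forchheimer equation for the characteristic pore velocity, then bisect for the model stone diameter that makes the Froude law hold for that velocity. The Forchheimer coefficients use a common engineering form with illustrative alpha/beta values, and the hydraulic gradient is simply assumed equal in model and prototype; none of these numbers are the paper's calibrated ones:

```python
import math

NU, G = 1.0e-6, 9.81          # kinematic viscosity (m^2/s), gravity (m/s^2)
ALPHA, BETA, POROSITY = 1000.0, 1.1, 0.4   # illustrative assumptions

def pore_velocity(gradient, d):
    """Solve the Forchheimer equation I = a*u + b*u**2 for u (m/s)."""
    n = POROSITY
    a = ALPHA * (1 - n) ** 2 / n ** 3 * NU / (G * d ** 2)
    b = BETA * (1 - n) / n ** 3 / (G * d)
    return (-a + math.sqrt(a * a + 4 * b * gradient)) / (2 * b)

def model_diameter(gradient_p, d_p, length_scale):
    """Bisect for the model stone size giving the Froude-scaled velocity."""
    u_target = pore_velocity(gradient_p, d_p) / math.sqrt(length_scale)
    lo, hi = 1e-4, d_p        # pore velocity grows monotonically with d
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pore_velocity(gradient_p, mid) > u_target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

d_model = model_diameter(gradient_p=0.1, d_p=0.5, length_scale=30.0)
print(round(d_model, 4))
```

    Note that the result is deliberately larger than a pure geometric (Froude) scaling of the stone size would give, which is the point of the method.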

  8. Construction of linear empirical core models for pressurized water reactor in-core fuel management

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.; Aldemir, T. (The Ohio State Univ., Dept. of Mechanical Engineering, Nuclear Engineering Program, 206 West 18th Ave., Columbus, OH (US))

    1988-06-01

    An empirical core model construction procedure for pressurized water reactor (PWR) in-core fuel management problems is presented that (a) incorporates the effect of composition changes in all the control zones in the core on a given fuel assembly, (b) is valid at all times during the cycle for a given range of control variables, (c) allows determining the optimal beginning of cycle (BOC) k∞ distribution as a single linear programming problem, and (d) provides flexibility in the choice of the material zones to describe core composition. Although the modeling procedure assumes zero BOC burnup, the predicted optimal k∞ profiles are also applicable to reload cores. In model construction, assembly power fractions and burnup increments during the cycle are regarded as the state (i.e., dependent) variables. Zone enrichments are the control (i.e., independent) variables. The model construction procedure is validated and implemented for the initial core of a PWR to determine the optimal BOC k∞ profiles for two three-zone scatter loading schemes. The predicted BOC k∞ profiles agree with the results of other investigators obtained by different modeling techniques.

  9. Toward a Standard Model of Core Collapse Supernovae

    OpenAIRE

    Mezzacappa, A.

    2000-01-01

    In this paper, we discuss the current status of core collapse supernova models and the future developments needed to achieve significant advances in understanding the supernova mechanism and supernova phenomenology, i.e., in developing a supernova standard model.

  10. Component-Based Cartoon Face Generation

    Directory of Open Access Journals (Sweden)

    Saman Sepehri Nejad

    2016-11-01

    In this paper, we present a cartoon face generation method that stands on a component-based facial feature extraction approach. Given a frontal face image as an input, our proposed system has the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermite interpolation. This modification enhances the diversity of output and accuracy of the component matching required for cartoon generation. Second, to bring cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of various face photographs and corresponding hand-drawn cartoon faces are collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate effectiveness of our approach in generating life-like cartoon faces.

  11. Modification of Core Model for KNTC 2 Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y.K.; Lee, J.G.; Park, J.E.; Bae, S.N.; Chin, H.C. [Korea Electric Power Research Institute, Taejeon (Korea, Republic of)

    1997-12-31

    The KNTC 2 simulator was developed in 1986 with YGN 1 as its reference plant. Since YGN 1 has changed to a long-term fuel cycle (cycle 9), simulator data such as rod worth, boron worth, and moderator temperature coefficient diverged from those of YGN 1. To incorporate these changes and bring the simulator closer to the reference plant, a core model upgrade became necessary. During this research, core data for the simulator were newly generated using APA of WH. A PC-based tool was also developed to ease tuning and verification of the key characteristics of the reactor model. To facilitate later core model upgrades, two procedures, 'the Procedures for core characteristic generation' and 'the Procedures for core characteristic modification', were also developed. (author). 16 refs., 22 figs., 1 tab.

  12. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

    correct scaling. The paper presents an empirical formula for the estimation of the wave induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation can be used for the estimation of pore velocities in cores. The paper proposes...

  13. Leveraging Component-Based Software Engineering with Fraclet

    OpenAIRE

    Rouvoy, Romain; Merle, Philippe

    2009-01-01

    Component-based software engineering has achieved wide acceptance in the domain of software engineering by improving productivity, reusability and composition. This success has also encouraged the emergence of a plethora of component models. Nevertheless, even if the abstract models of most lightweight component models are quite similar, their programming models can still differ a lot. This drawback limits the reuse and composition of components implemented using di...

  14. A seismologically consistent compositional model of Earth's core.

    Science.gov (United States)

    Badro, James; Côté, Alexander S; Brodholt, John P

    2014-05-27

    Earth's core is less dense than iron, and therefore it must contain "light elements," such as S, Si, O, or C. We use ab initio molecular dynamics to calculate the density and bulk sound velocity in liquid metal alloys at the pressure and temperature conditions of Earth's outer core. We compare the velocity and density for any composition in the (Fe-Ni, C, O, Si, S) system to radial seismological models and find a range of compositional models that fit the seismological data. We find no oxygen-free composition that fits the seismological data, and therefore our results indicate that oxygen is always required in the outer core. An oxygen-rich core is a strong indication of high-pressure and high-temperature conditions of core differentiation in a deep magma ocean with an FeO concentration (oxygen fugacity) higher than that of the present-day mantle.

  15. Algorithms for Synthesizing Priorities in Component-based Systems

    CERN Document Server

    Cheng, Chih-Hong; Chen, Yu-Fang; Yan, Rongjie; Jobstmann, Barbara; Ruess, Harald; Buckl, Christian; Knoll, Alois

    2011-01-01

    We present algorithms to synthesize component-based systems that are safe and deadlock-free using priorities, which define stateless-precedence between enabled actions. Our core method combines the concept of fault-localization (using safety-game) and fault-repair (using SAT for conflict resolution). For complex systems, we propose three complementary methods as preprocessing steps for priority synthesis, namely (a) data abstraction to reduce component complexities, (b) alphabet abstraction and #-deadlock to ignore components, and (c) automated assumption learning for compositional priority synthesis.

  16. Solid charged-core model of ball lightning

    Directory of Open Access Journals (Sweden)

    D. B. Muldrew

    2010-01-01

    In this study, ball lightning (BL) is assumed to have a solid, positively-charged core. According to this underlying assumption, the core is surrounded by a thin electron layer with a charge nearly equal in magnitude to that of the core. A vacuum exists between the core and the electron layer containing an intense electromagnetic (EM) field which is reflected and guided by the electron layer. The microwave EM field applies a ponderomotive force (radiation pressure) to the electrons, preventing them from falling into the core. The energetic electrons ionize the air next to the electron layer forming a neutral plasma layer. The electric-field distributions and their associated frequencies in the ball are determined by applying boundary conditions to a differential equation given by Stratton (1941). It is then shown that the electron and plasma layers are sufficiently thick and dense to completely trap and guide the EM field. This model of BL is exceptional in that it can explain all or nearly all of the peculiar characteristics of BL. The electrostatic (ES) energy associated with the core charge can be extremely large, which can explain the observations that occasionally BL contains enormous energy. The mass of the core prevents the BL from rising like a helium-filled balloon – a problem with most plasma and burning-gas models. The positively charged core keeps the negatively charged electron layer from diffusing away, i.e. it holds the ball together; other models do not have a mechanism to do this. The high electrical charges on the core and in the electron layer explain why some people have been electrocuted by BL. Experiments indicate that BL radiates microwaves upon exploding and this is consistent with the model. The fact that this novel model of BL can explain these and other observations is strong evidence that the model should be taken seriously.

  17. Modelling the core magnetic field of the earth

    Science.gov (United States)

    Harrison, C. G. A.; Carle, H. M.

    1982-01-01

    It is suggested that radial off-center dipoles located within the core of the earth be used instead of spherical harmonics of the magnetic potential in modeling the core magnetic field. The off-center dipoles, in addition to more realistically modeling the physical current systems within the core, are, if located deep within the core, more effective at removing long wavelength signals of either potential or field. Their disadvantage is that their positions and strengths are more difficult to compute, and such effects as upward and downward continuation are more difficult to manipulate. It is nevertheless agreed with Cox (1975) and Alldredge and Hurwitz (1964) that physical realism in models is more important than mathematical convenience. A radial dipole model is presented which agrees with observations of secular variation and excursions.
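    The field of a radially off-centred dipole follows from the standard point-dipole formula B = (μ0/4π)(3(m·r̂)r̂ − m)/r³ evaluated at the displaced position. The displacement and moment below are illustrative Earth-like values, not the paper's fitted parameters:

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def dipole_field(obs, pos, moment):
    """Field (tesla) of a point dipole `moment` (A m^2) at `pos`,
    evaluated at observation point `obs` (metres)."""
    r = obs - pos
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_4PI * (3.0 * np.dot(moment, rhat) * rhat - moment) / rn ** 3

# Illustrative comparison: a centred axial dipole versus the same dipole
# displaced 1500 km radially along the rotation axis.
obs = np.array([0.0, 0.0, 6.371e6])       # observation at the north pole
m = np.array([0.0, 0.0, 8.0e22])          # Earth-like dipole moment
b_centred = dipole_field(obs, np.zeros(3), m)
b_offset = dipole_field(obs, np.array([0.0, 0.0, 1.5e6]), m)
print(b_centred[2], b_offset[2])
```

    The offset dipole produces a noticeably stronger, more localised surface signal above it, which is why deep off-centre dipoles are effective at absorbing long-wavelength field structure.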

  18. Core formation, evolution, and convection - A geophysical model

    Science.gov (United States)

    Ruff, L.; Anderson, D. L.

    1980-01-01

    A model for the formation and evolution of the earth's core, which provides an adequate energy source for maintaining the geodynamo, is proposed. A modified inhomogeneous accretion model is proposed which leads to initial iron and refractory enrichment at the center of the planet. The probable heat source for melting of the core is the decay of Al-26. The refractory material is emplaced irregularly in the lowermost mantle with uranium and thorium serving as a long-lived heat source. Fluid motions in the core are driven by the differential heating from above and the resulting cyclonic motions may be the source of the geodynamo.

  19. Core formation, evolution, and convection: A geophysical model

    Science.gov (United States)

    Ruff, L.; Anderson, D. L.

    1978-01-01

    A model is proposed for the formation and evolution of the Earth's core which provides an adequate energy source for maintaining the geodynamo. A modified inhomogeneous accretion model is proposed which leads to initial iron and refractory enrichment at the center of the planet. The probable heat source for melting of the core is the decay of Al-26. The refractory material is emplaced irregularly in the lowermost mantle with uranium and thorium serving as a long-lived heat source. Fluid motions in the core are driven by the differential heating from above and the resulting cyclonic motions may be the source of the geodynamo.

  20. Component-based Systems Reconfigurations Using Graph Grammars

    Directory of Open Access Journals (Sweden)

    O. Kouchnarenko

    2016-01-01

    Full Text Available Dynamic reconfigurations can modify the architecture of component-based systems without incurring any system downtime. In this context, the main contribution of the present article is the establishment of correctness results proving component-based systems reconfigurations using graph grammars. New guarded reconfigurations allow us to build reconfigurations based on primitive reconfiguration operations using sequences of reconfigurations and the alternative and the repetitive constructs, while preserving configuration consistency. A practical contribution consists of the implementation of a component-based model using the GROOVE graph transformation tool. Then, after enriching the model with interpreted configurations and reconfigurations in a consistency compatible manner, a simulation relation is exploited to validate component systems’ implementations. This sound implementation is illustrated on a cloud-based multitier application hosting environment managed as a component-based system.
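The guarded-reconfiguration idea can be sketched in a few lines (a minimal plain-Python illustration, not the paper's GROOVE graph grammars; all names here are ours): a primitive reconfiguration operation is applied only when its guard holds on the current configuration, so consistency is preserved by construction.

```python
class Configuration:
    def __init__(self):
        self.components = set()
        self.bindings = set()    # (client, server) pairs

    def consistent(self):
        # every binding must connect two existing components
        return all(c in self.components and s in self.components
                   for c, s in self.bindings)

def add_component(cfg, name):
    cfg.components.add(name)

def remove_component(cfg, name):
    cfg.components.discard(name)

def guarded(cfg, guard, operation, *args):
    """Apply a primitive reconfiguration only when its guard holds."""
    if guard(cfg):
        operation(cfg, *args)

def unbound(name):
    # guard: the component takes part in no binding
    return lambda cfg: all(name not in pair for pair in cfg.bindings)

cfg = Configuration()
add_component(cfg, "frontend")
add_component(cfg, "backend")
cfg.bindings.add(("frontend", "backend"))

guarded(cfg, unbound("backend"), remove_component, "backend")  # guard fails: refused
cfg.bindings.discard(("frontend", "backend"))
guarded(cfg, unbound("backend"), remove_component, "backend")  # guard holds: applied
print(cfg.consistent(), sorted(cfg.components))
```

Sequencing, alternative, and repetition over such guarded primitives give the richer reconfigurations the article describes, while the guard keeps every intermediate configuration consistent.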

  1. Forward modeling of δ18O in Andean ice cores

    Science.gov (United States)

    Hurley, J. V.; Vuille, M.; Hardy, D. R.

    2016-08-01

    Tropical ice core archives are among the best dated and highest resolution from the tropics, but a thorough understanding of processes that shape their isotope signature as well as the simulation of observed variability remain incomplete. To address this, we develop a tropical Andean ice core isotope forward model from in situ hydrologic observations and satellite water vapor isotope measurements. A control simulation of snow δ18O captures the mean and seasonal trend but underestimates the observed intraseasonal variability. The simulation of observed variability is improved by including amount effects associated with South American cold air incursions, linking synoptic-scale disturbances and monsoon dynamics to tropical ice core δ18O. The forward model was calibrated with and run under present-day conditions but can also be driven with past climate forcings to reconstruct paleomonsoon variability. The model is transferable and may be used to render a (paleo)climatic context at other ice core locations.

  2. Geomagnetic core field models in the satellite era

    DEFF Research Database (Denmark)

    Lesur, Vincent; Olsen, Nils; Thomson, Alan W. P.

    2011-01-01

    After a brief review of the theoretical basis and difficulties that modelers are facing, we present three recent models of the geomagnetic field originating in the Earth’s core. All three modeling approaches are using recent observatory and near-Earth orbiting survey satellite data. In each case...... the specific aims and techniques used by the modelers are described together with a presentation of the main results achieved. The three different modeling approaches are giving similar results. For a snap shot of the core magnetic field at a given epoch and observed at the Earth’s surface, the differences...... only up to degree 8 or 9. For higher time derivatives of core field models, only the very first degrees are robustly derived....

  3. Measurement of noise associated with model transformer cores

    Energy Technology Data Exchange (ETDEWEB)

    Snell, David [Cogent Power Ltd., Development and Market Research, Orb Electrical Steels, Corporation Road, Newport, South Wales NP19 OXT (United Kingdom)], E-mail: Dave.snell@cogent-power.com

    2008-10-15

    The performance of a transformer core may be considered in terms of power loss and by the noise generated by the core, both of which should be minimised. This paper discusses the setting up of a suitable system for evaluation of noise in a large model transformer core (500 kVA) and issues associated with noise measurement. The equivalent continuous sound pressure level (LAeq) was used as a measure of the A-weighted sound level and measurements were made in the range 16 Hz-25 kHz for various step lap core configurations. The selection of optimum sound insulation materials between core and ground support and for enclosing the transformer is essential for minimisation of background noise. Core clamping pressure must be optimised in order to minimise noise. The use of two laminations per layer instead of one leads to an increase in noise arising from the core. Provided care is taken in building the core, good reproducibility of results can be obtained for analysis.

  4. Multi-core and/or symbolic model checking

    NARCIS (Netherlands)

    Dijk, van Tom; Laarman, Alfons; Pol, van de Jaco; Luettgen, G.; Merz, S.

    2012-01-01

    We review our progress in high-performance model checking. Our multi-core model checker is based on a scalable hash-table design and parallel random-walk traversal. Our symbolic model checker is based on Multiway Decision Diagrams and the saturation strategy. The LTSmin tool is based on the PINS arc
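A minimal sketch of the shared-hash-table approach (illustrative only; LTSmin itself is written in C with a far more sophisticated lockless table): several workers explore the state space concurrently, sharing one visited set, each choosing successor order at random in the spirit of random-walk traversal.

```python
import random
import threading

def successors(state):
    # toy transition relation: a 3-bit counter with two increment actions
    return [(state + 1) % 8, (state + 3) % 8]

visited = set()
lock = threading.Lock()

def worker(seed):
    rng = random.Random(seed)
    frontier = [0]                   # every worker starts at the initial state
    while frontier:
        s = frontier.pop()
        with lock:                   # stand-in for a scalable shared hash table
            if s in visited:
                continue
            visited.add(s)
        succ = successors(s)
        rng.shuffle(succ)            # random-walk-style exploration order
        frontier.extend(succ)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(visited))  # all 8 reachable states
```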

  5. Update to Core reporting practices in structural equation modeling.

    Science.gov (United States)

    Schreiber, James B

    2016-07-21

    This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling."(1) As such, the content covered in this paper includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques.

  6. A semi-analytic dynamical friction model for cored galaxies

    CERN Document Server

    Petts, James A; Gualandris, Alessia

    2016-01-01

    We present a dynamical friction model based on Chandrasekhar's formula that reproduces the fast inspiral and stalling experienced by satellites orbiting galaxies with a large constant density core. We show that the fast inspiral phase does not owe to resonance. Rather, it owes to the background velocity distribution function for the constant density cores being dissimilar from the usually-assumed Maxwellian distribution. Using the correct background velocity distribution function and the semi-analytic model from Petts et al. (2015), we are able to correctly reproduce the infall rate in both cored and cusped potentials. However, in the case of large cores, our model is no longer able to correctly capture core-stalling. We show that this stalling owes to the tidal radius of the satellite approaching the size of the core. By switching off dynamical friction when rt(r) = r (where rt is the tidal radius at the satellite's position) we arrive at a model which reproduces the N-body results remarkably well. Since the...
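For reference, Chandrasekhar's dynamical friction formula in its standard form (assuming a Maxwellian background with dispersion sigma; the abstract's point is precisely that this Maxwellian factor must be replaced for constant-density cores) reads:

\frac{d\mathbf{v}_s}{dt} = -\,\frac{4\pi G^2 M_s\,\rho\,\ln\Lambda}{v_s^3}
\left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\,e^{-X^2}\right]\mathbf{v}_s,
\qquad X \equiv \frac{v_s}{\sqrt{2}\,\sigma},

where the bracketed term is the fraction of background particles moving slower than the satellite under the Maxwellian assumption; using the true velocity distribution function of the cored profile in place of this term is what recovers the fast inspiral phase.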

  7. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  8. CORE

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Hundebøll, Martin

    2013-01-01

    different flows. Instead of maintaining these approaches separate, we propose a protocol (CORE) that brings together these coding mechanisms. Our protocol uses random linear network coding (RLNC) for intra- session coding but allows nodes in the network to setup inter- session coding regions where flows...... intersect. Routes for unicast sessions are agnostic to other sessions and setup beforehand, CORE will then discover and exploit intersecting routes. Our approach allows the inter-session regions to leverage RLNC to compensate for losses or failures in the overhearing or transmitting process. Thus, we...... increase the benefits of XORing by exploiting the underlying RLNC structure of individual flows. This goes beyond providing additional reliability to each individual session and beyond exploiting coding opportunistically. Our numerical results show that CORE outperforms both forwarding and COPE...

  10. Modelling line emission of deuterated H3+ from prestellar cores

    Science.gov (United States)

    Sipilä, O.; Hugo, E.; Harju, J.; Asvany, O.; Juvela, M.; Schlemmer, S.

    2010-01-01

    Context. The depletion of heavy elements in cold cores of interstellar molecular clouds can lead to a situation where deuterated forms of H3+ are the most useful spectroscopic probes of the physical conditions. Aims: The aim is to predict the observability of the rotational lines of H2D+ and D2H+ from prestellar cores. Methods: Recently derived rate coefficients for the H3+ + H2 isotopic system were applied to the “complete depletion” reaction scheme to calculate abundance profiles in hydrostatic core models. The ground-state lines of H2D+(o) (372 GHz) and D2H+(p) (692 GHz) arising from these cores were simulated. The excitation of the rotational levels of these molecules was approximated by using the state-to-state coefficients for collisions with H2. We also predicted line profiles from cores with a power-law density distribution advocated in some previous studies. Results: The new rate coefficients introduce some changes to the complete depletion model, but do not alter the general tendencies. One of the modifications with respect to the previous results is the increase of the D3+ abundance at the cost of other isotopologues. Furthermore, the present model predicts a lower H2D+ (o/p) ratio, and a slightly higher D2H+ (p/o) ratio in very cold, dense cores, as compared with previous modelling results. These nuclear spin ratios affect the detectability of the submm lines of H2D+(o) and D2H+(p). The previously detected H2D+ and D2H+ lines towards the core I16293E, and the H2D+ line observed towards Oph D can be reproduced using the present excitation model and the physical models suggested in the original papers.

  12. Modeling of Core Competencies in the Registrar's Office

    Science.gov (United States)

    Pikowsky, Reta

    2009-01-01

    The Office of the Registrar at the Georgia Institute of Technology, in cooperation with the Office of Human Resources, has been engaged since February 2008 in a pilot project to model core competencies for the leadership team and the staff. It is the hope of the Office of Human Resources that this pilot will result in a model that can be used…

  13. Core-oscillator model of Caulobacter crescentus

    Science.gov (United States)

    Vandecan, Yves; Biondi, Emanuele; Blossey, Ralf

    2016-06-01

    The gram-negative bacterium Caulobacter crescentus is a powerful model organism for studies of bacterial cell cycle regulation. Although the major regulators and their connections in Caulobacter have been identified, it still is a challenge to properly understand the dynamics of its circuitry which accounts for both cell cycle progression and arrest. We show that the key decision module in Caulobacter is built from a limit cycle oscillator which controls the DNA replication program. The effect of an induced cell cycle arrest is demonstrated to be a key feature to classify the underlying dynamics.

  14. Numerical models of the Earth’s thermal history: Effects of inner-core solidification and core potassium

    Science.gov (United States)

    Butler, S. L.; Peltier, W. R.; Costin, S. O.

    2005-09-01

    Recently there has been renewed interest in the evolution of the inner core and in the possibility that radioactive potassium might be found in significant quantities in the core. The arguments for core potassium come from considerations of the age of the inner core and the energy required to sustain the geodynamo [Nimmo, F., Price, G.D., Brodholt, J., Gubbins, D., 2004. The influence of potassium on core and geodynamo evolution. Geophys. J. Int. 156, 363-376; Labrosse, S., Poirier, J.-P., Le Mouël, J.-L., 2001. The age of the inner core. Earth Planet Sci. Lett. 190, 111-123; Labrosse, S., 2003. Thermal and magnetic evolution of the Earth's core. Phys. Earth Planet Int. 140, 127-143; Buffett, B.A., 2003. The thermal state of Earth's core. Science 299, 1675-1677] and from new high pressure physics analyses [Lee, K., Jeanloz, R., 2003. High-pressure alloying of potassium and iron: radioactivity in the Earth's core? Geophys. Res. Lett. 30 (23); Murthy, V.M., van Westrenen, W., Fei, Y.W., 2003. Experimental evidence that potassium is a substantial radioactive heat source in planetary cores. Nature 423, 163-165; Gessmann, C.K., Wood, B.J., 2002. Potassium in the Earth's core? Earth Planet Sci. Lett. 200, 63-78]. The Earth's core is also located at the lower boundary of the convecting mantle and the presence of radioactive heat sources in the core will affect the flux of heat between these two regions and will, as a result, have a significant impact on the Earth's thermal history. In this paper, we present Earth thermal history simulations in which we calculate fluid flow in a spherical shell representing the mantle, coupled with a core of a given heat capacity with varying degrees of internal heating in the form of K40 and varying initial core temperatures. The mantle model includes the effects of the temperature dependence of viscosity, decaying radioactive heat sources, and mantle phase transitions. The core model includes the thermal effects of inner core
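The energy balance driving such simulations can be caricatured with a single ordinary differential equation, dT/dt = (Q_K - Q_cmb)/C, trading mantle convection for a fixed heat flux across the core-mantle boundary (parameter values below are illustrative orders of magnitude, not those of the paper):

```python
import math

HALF_LIFE_K40 = 1.25e9 * 3.15e7          # half-life of 40K, s
LAMBDA = math.log(2.0) / HALF_LIFE_K40   # decay constant, 1/s

def evolve(T0, Q_K0, Q_cmb, C, t_end, dt):
    """Forward-Euler integration of the toy core energy balance."""
    T, t = T0, 0.0
    while t < t_end:
        Q_K = Q_K0 * math.exp(-LAMBDA * t)    # decaying 40K heat source, W
        T += dt * (Q_K - Q_cmb) / C
        t += dt
    return T

C = 1e27          # core heat capacity, J/K (order of magnitude)
Gyr = 3.15e16     # s
# With no potassium the core only cools; with 40K, early heating offsets the
# heat lost across the core-mantle boundary and the core stays warmer.
T_no_K = evolve(5000.0, 0.0, 5e12, C, 4.5 * Gyr, Gyr / 100)
T_with_K = evolve(5000.0, 1e13, 5e12, C, 4.5 * Gyr, Gyr / 100)
print(T_no_K, T_with_K)
```

Because the 40K half-life (~1.25 Gyr) is short compared with Earth's age, the potassium contribution is strongly front-loaded, which is why it matters so much for inner-core age estimates.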

  15. A convection model to explain anisotropy of the inner core

    Energy Technology Data Exchange (ETDEWEB)

    Wenk, H.-R. [Department of Geology and Geophysics, University of California, Berkeley (United States); Baumgardner, J. R. [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico (United States); Lebensohn, R. A. [CONICET, Consejo Nacional de Investigaciones Cientificas y Tecnicas, University of Rosario, Rosario, (Argentina); Tome, C. N. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, New Mexico (United States)

    2000-03-10

    Seismic evidence suggests that the solid inner core of the Earth may be anisotropic. Several models have been proposed to explain this anisotropy as the result of preferred orientation of crystals. They range from a large annealed single crystal, growth at the melt interface, to deformation-induced texture. In this study texture development by deformation during inner core convection is explored for {epsilon}-iron (hcp) and {gamma}-iron (fcc). Convection patterns for harmonic degree two were investigated in detail. In the model it is assumed that traces of potassium are uniformly dispersed in the inner core and act as a heat source. Both for fcc and hcp iron, crystal rotations associated with intracrystalline slip during deformation can plausibly explain a 1-3% anisotropy in P waves with faster velocities along the N-S axis and slower ones in the equatorial plane. The effect of single crystal elastic constants is explored. (c) 2000 American Geophysical Union.

  16. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    DEFF Research Database (Denmark)

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, the frequency domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain...... model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...
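The kind of effect the paper targets can be reproduced with a few lines of plain time-domain simulation (a hedged sketch with an ideal switch and illustrative parameters, not the authors' Simulink model): when the core saturates, the inductance drops with current and the ripple grows, which a fixed-inductance frequency-domain analysis would miss.

```python
def simulate(L0, Isat, Vin=12.0, Vout=5.0, duty=5.0 / 12.0, f_sw=500e3,
             C=100e-6, R=1.0, steps_per_period=200, periods=400):
    """Forward-Euler buck converter; L(i) = L0 / (1 + (i/Isat)^2) models saturation."""
    dt = 1.0 / (f_sw * steps_per_period)
    iL, vC = 0.0, Vout
    i_min, i_max = float("inf"), float("-inf")
    for n in range(periods * steps_per_period):
        phase = (n % steps_per_period) / steps_per_period
        v_sw = Vin if phase < duty else 0.0        # ideal switch + diode
        L = L0 / (1.0 + (iL / Isat) ** 2)          # current-dependent inductance
        iL += dt * (v_sw - vC) / L
        vC += dt * (iL - vC / R) / C
        if n >= (periods - 1) * steps_per_period:  # measure the last period only
            i_min, i_max = min(i_min, iL), max(i_max, iL)
    return i_max - i_min

ripple_linear = simulate(L0=10e-6, Isat=1e9)   # effectively unsaturated core
ripple_sat = simulate(L0=10e-6, Isat=5.0)      # core saturates near the 5 A load
print(ripple_linear, ripple_sat)  # saturation increases the current ripple
```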

  17. Testing the HTA core model: experiences from two pilot projects

    DEFF Research Database (Denmark)

    Pasternack, Iris; Anttila, Heidi; Mäkelä, Marjukka

    2009-01-01

    coordination in timing and distribution of work would probably help improve applicability and avoid duplication of work. CONCLUSIONS: The HTA Core Model can be developed into a platform that enables and encourages true HTA collaboration in terms of distribution of work and maximum utilization of a common pool...

  18. Gray Models of convection in core collapse supernovae

    CERN Document Server

    Swesty, F D

    1998-01-01

    One of the major difficulties encountered in modeling core collapse supernovae is obtaining an accurate description of the transport of neutrinos through the collapsed stellar core. The behavior of the neutrino distribution function transitions from an LTE distribution in the center of the core to a non-LTE distribution in the outer regions of the core. One method that has been recently employed in order to model the flow of neutrinos in 2-D models is the gray approximation. This approximation assumes that the neutrino distribution can be described by a function that is parameterized in terms of a neutrino temperature and a neutrino chemical potential. However, these parameters must be assumed. Furthermore, the parameters will also differ between the LTE and NLTE regions. Additionally, within the gray approximation the location at which the neutrino distribution function transitions from LTE to NLTE must be assumed. By considering a series of models where the LTE/NLTE decoupling point is varied we show that t...

  19. Nonlinear Dynamic Model of PMBLDC Motor Considering Core Losses

    DEFF Research Database (Denmark)

    Fasil, Muhammed; Mijatovic, Nenad; Jensen, Bogi Bech

    2017-01-01

    The phase variable model is used commonly when simulating a motor drive system with a three-phase permanent magnet brushless DC (PMBLDC) motor. The phase variable model neglects core losses and this affects its accuracy when modelling fractional-slot machines. The inaccuracy of phase variable model...... on the detailed analysis of the flux path and the variation of flux in different components of the machine. A prototype of fractional slot axial flux PMBLDC in-wheel motor is used to assess the proposed nonlinear dynamic model....

  20. Accurate modelling of fabricated hollow-core photonic bandgap fibers.

    Science.gov (United States)

    Fokoua, Eric Numkam; Sandoghchi, Seyed Reza; Chen, Yong; Jasion, Gregory T; Wheeler, Natalie V; Baddela, Naveen K; Hayes, John R; Petrovich, Marco N; Richardson, David J; Poletti, Francesco

    2015-09-07

    We report a novel approach to reconstruct the cross-sectional profile of fabricated hollow-core photonic bandgap fibers from scanning electron microscope images. Finite element simulations on the reconstructed geometries achieve a remarkable match with the measured transmission window, surface mode position and attenuation. The agreement between estimated scattering loss from surface roughness and measured loss values indicates that structural distortions, in particular the uneven distribution of glass across the thin silica struts on the core boundary, have a strong impact on the loss. This provides insight into the differences between idealized models and fabricated fibers, which could be key to further fiber loss reduction.

  1. Development of a core-stability model: a delphi approach.

    Science.gov (United States)

    Majewski-Schrage, Tricia; Evans, Todd A; Ragan, Brian

    2014-05-01

    Despite widespread acceptance, there is currently no consensus on the definition, components, and the specific techniques most appropriate to measure and quantify core stability. To develop a comprehensive core-stability model addressing its definition, components, and assessment techniques. Delphi technique. University laboratory. 15 content experts from the United States and Canada, representing a variety of disciplines. The authors distributed an open-ended questionnaire pertaining to a core-stability definition, components, and assessment techniques specific to each expert. They collected data over 2 rounds of telephone interviews. They concluded data collection once a consensus was achieved, defined as at least 51% agreement among respondents. The authors developed a working definition of core stability as the ability to achieve and sustain control of the trunk region at rest and during precise movement. Eighty-three percent of the experts considered the definition satisfactory; therefore, the definition was accepted. Furthermore, the experts agreed that muscles (14/15 = 93.3%) and neuromuscular control (8/12 = 66.7%) were components of core stability. Assessment techniques were identified and inconsistencies were highlighted; however, no consensus was established. A consensus core-stability definition was created and 2 components were identified. However, of the initial definitions provided by the experts, no 2 were identical, which revealed the inconsistencies among experts and the importance of this study. Nonetheless, the goal of obtaining a consensus definition was achieved. Although a consensus on the assessment techniques of core stability could not be reached, the process was a beneficial starting point for identifying the inconsistencies among the content experts.

  2. Accelerating Atmospheric Modeling Through Emerging Multi-core Technologies

    OpenAIRE

    Linford, John Christian

    2010-01-01

    The new generations of multi-core chipset architectures achieve unprecedented levels of computational power while respecting physical and economical constraints. The cost of this power is bewildering program complexity. Atmospheric modeling is a grand-challenge problem that could make good use of these architectures if they were more accessible to the average programmer. To that end, software tools and programming methodologies that greatly simplify the acceleration of atmospheric modeling...

  3. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  4. A New Global Core Plasma Model of the Plasmasphere

    Science.gov (United States)

    Gallagher, D. L.; Comfort, R. H.; Craven, P. D.

    2014-01-01

    The Global Core Plasma Model (GCPM) is the first empirical model for thermal inner magnetospheric plasma designed to integrate previous models and observations into a representation of typical total densities that is continuous in value and gradient. New information about the plasmasphere, in particular, makes significant improvement possible. The IMAGE Mission Radio Plasma Imager (RPI) has obtained the first observations of total plasma densities along magnetic field lines in the plasmasphere and polar cap. The Dynamics Explorer 1 Retarding Ion Mass Spectrometer (RIMS) has provided densities and temperatures in the plasmasphere for 5 ion species. These and other works enable a new, more detailed empirical model of thermal plasma in the inner magnetosphere, which will be presented.

  5. Particle-core model for transverse dynamics of beam halo

    Directory of Open Access Journals (Sweden)

    T. P. Wangler

    1998-12-01

    Full Text Available The transverse motion of beam halo particles is described by a particle-core model which uses the space-charge field of a continuous cylindrical oscillating beam core in a uniform linear focusing channel to provide the force that drives particles to large amplitudes. The model predicts a maximum amplitude for the resonantly-driven particles as a function of the initial mismatch. We have calculated these amplitude limits and have estimated the growth times for extended-halo formation as a function of both the space-charge tune-depression ratio and a mismatch parameter. We also present formulas for the scaling of the maximum amplitudes as a function of the beam parameters. The model results are compared with multiparticle simulations and we find very good agreement for a variety of initial particle distributions.

  6. A model for core formation in the early Earth

    Science.gov (United States)

    Jones, J. H.; Drake, M. J.

    1985-01-01

    Two basic types of exogenous models have been proposed to account for siderophile and chalcophile element abundances in the Earth's upper mantle. The first model requires that the Earth be depleted in volatiles and that, after a core-formation event which extracted the most siderophile elements into the core, additional noble siderophile elements (Pt, Ir, Au) were added as a late veneer and mixed into the mantle. The second model postulates a reduced Earth with approximately CI elemental abundances in which a primary core-forming event depleted all siderophile elements in the mantle. The plausibility of models which require fine-scale mixing of chondritic material into the upper mantle is analyzed. Mixing in liquids is more efficient, but large degrees of silicate partial melting will facilitate the separation of magma from residual solids. Any external events affecting the upper mantle of the Earth should also be evident in the Moon, but the siderophile and chalcophile element abundance patterns inferred for the mantles of the Earth and Moon differ. There appear to be significant physical difficulties associated with chondritic veneer models.

  7. Modelling of Permanent Magnet Synchronous Motor Incorporating Core-loss

    Directory of Open Access Journals (Sweden)

    K. Suthamno

    2012-08-01

    Full Text Available This study proposes a dq-axis model of a Permanent Magnet Synchronous Motor (PMSM) with copper loss and core loss taken into account. The proposed models can be applied to PMSM control and drives with simultaneous consideration of loss minimization. The study presents simulation results of direct drive of a PMSM under no-load and loaded conditions using the proposed models with MATLAB codes. Comparisons are made with results obtained using the PSIM and SIMULINK software packages; the comparisons indicate very good agreement.

  8. On the thermodynamic properties of the generalized Gaussian core model

    Directory of Open Access Journals (Sweden)

    B.M.Mladek

    2005-01-01

    Full Text Available We present results of a systematic investigation of the properties of the generalized Gaussian core model of index n. The potential of this system interpolates via the index n between the potential of the Gaussian core model and the penetrable sphere system, thereby varying the steepness of the repulsion. We have used both conventional and self-consistent liquid state theories to calculate the structural and thermodynamic properties of the system; reference data are provided by computer simulations. The results indicate that the concept of self-consistency becomes indispensable to guarantee excellent agreement with simulation data; in particular, structural consistency (in our approach taken into account via the zero separation theorem is obviously a very important requirement. Simulation results for the dimensionless equation of state, β P / ρ, indicate that for an index-value of 4, a clustering transition, possibly into a structurally ordered phase might set in as the system is compressed.
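The pair potential behind the model is v(r) = ε exp(−(r/σ)^n), and a few evaluations show how the index n interpolates between the Gaussian core model (n = 2) and the penetrable-sphere limit (large n), steepening the repulsion around r = σ (a trivial but concrete check):

```python
import math

def v(r, n, eps=1.0, sigma=1.0):
    """Generalized Gaussian core potential of index n."""
    return eps * math.exp(-((r / sigma) ** n))

# Inside (r = 0.5 sigma) the potential flattens toward eps as n grows;
# outside (r = 1.5 sigma) it drops toward zero: the unit-step limit.
for n in (2, 4, 100):
    print(n, round(v(0.5, n), 4), round(v(1.5, n), 4))
```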

  9. A systematic approach for component-based software development

    NARCIS (Netherlands)

    Guareis de farias, Cléver; van Sinderen, Marten J.; Ferreira Pires, Luis

    2000-01-01

    Component-based software development enables the construction of software artefacts by assembling prefabricated, configurable and independently evolving building blocks, called software components. This paper presents an approach for the development of component-based software artefacts. This

  10. Verifying Embedded Systems using Component-based Runtime Observers

    DEFF Research Database (Denmark)

    Guan, Wei; Marian, Nicolae; Angelov, Christo K.

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive yet feasible approach to monitoring system behavior...... against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components---Predicate Evaluator (PE) and Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter...... specified properties via simulation. The presented method has been experimentally validated in an industrial case study---a control system for a safety-critical medical ventilator unit....
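The PE/TE split can be illustrated with a toy monitor (our own reconstruction, not the authors' components; the property, names, and thresholds are invented for illustration): the Predicate Evaluator maps raw state to atomic propositions, and the Temporal Evaluator checks "high pressure implies valve open within 2 samples" over the resulting trace.

```python
class PredicateEvaluator:
    """Computes atomic propositions from a raw state sample."""
    def __call__(self, state):
        return {"high": state["pressure"] > 30.0,
                "open": state["valve"] == "open"}

class TemporalEvaluator:
    """Checks: whenever 'high' holds, 'open' must hold within `deadline` samples."""
    def __init__(self, deadline=2):
        self.deadline = deadline
        self.pending = None        # samples elapsed since an unanswered 'high'
        self.violated = False

    def step(self, props):
        if props["open"]:
            self.pending = None
        elif props["high"] and self.pending is None:
            self.pending = 0
        elif self.pending is not None:
            self.pending += 1
            if self.pending > self.deadline:
                self.violated = True

pe, te = PredicateEvaluator(), TemporalEvaluator()
trace = [{"pressure": 10, "valve": "closed"},
         {"pressure": 35, "valve": "closed"},   # pressure goes high...
         {"pressure": 36, "valve": "closed"},
         {"pressure": 36, "valve": "open"}]     # ...answered within the deadline
for state in trace:
    te.step(pe(state))

te2 = TemporalEvaluator()                       # same prefix, valve never opens
for state in trace[:3] + [{"pressure": 36, "valve": "closed"}] * 2:
    te2.step(pe(state))
print(te.violated, te2.violated)
```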

  11. Component-Based Software Reuse on the World Wide Web

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Component-based software reuse (CBSR) has been widely used in software development practice and has an even brighter future with the rapid growth of the Internet, because the World Wide Web (WWW) makes large-scale component resources from different vendors available to software developers. In this paper, an abstract component model suitable for representing components on the WWW is proposed, which plays important roles both in achieving interoperability among components and among reusable component libraries (RCLs). Some necessary changes that the WWW brings to many aspects of component management are also discussed, such as the classification of components and the corresponding searching methods, and the certification of components.
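To illustrate the kind of classification-and-search machinery the record discusses, here is a hypothetical sketch of faceted component retrieval; the descriptor fields and facet names are invented for illustration and are not the paper's actual model:

```python
# Hypothetical sketch: components described by facets (classification
# attributes) and retrieved by matching search criteria against them.
from dataclasses import dataclass, field

@dataclass
class ComponentDescriptor:
    name: str
    vendor: str
    facets: dict = field(default_factory=dict)  # e.g. {"domain": "GUI"}

def search(library, **criteria):
    """Return descriptors whose facets match all given criteria."""
    return [c for c in library
            if all(c.facets.get(k) == v for k, v in criteria.items())]

library = [
    ComponentDescriptor("TreeView", "VendorA", {"domain": "GUI", "lang": "Java"}),
    ComponentDescriptor("CsvParser", "VendorB", {"domain": "IO", "lang": "Java"}),
]
print([c.name for c in search(library, domain="GUI")])  # ['TreeView']
```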

  12. CCPA: Component-based communication protocol architecture for embedded systems

    Institute of Scientific and Technical Information of China (English)

    DAI Hong-jun; CHEN Tian-zhou; CHEN Chun

    2005-01-01

    For the increased and varied communication requirements of modern applications on embedded systems, general-purpose protocol stacks and protocol models are not efficient because they are fixed to execute in a static mode. We present the Component-Based Communication Protocol Architecture (CCPA) to make communication dynamic and configurable. It can develop, test and store customized components for flexible reuse. The protocols are implemented by component assembly and supported by configurable environments. This leads to smaller memory footprints, more flexibility, better reconfigurability, better concurrency, and multiple data channel support.
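The idea of assembling a protocol from stored, customized components rather than a fixed stack can be sketched as follows; the layer names and framing scheme are illustrative assumptions, not the CCPA's actual components:

```python
# Illustrative sketch: a protocol stack assembled from pluggable layer
# components, configured per application instead of fixed at build time.
class Layer:
    def encode(self, data: bytes) -> bytes: return data
    def decode(self, data: bytes) -> bytes: return data

class Framing(Layer):
    """Length-prefixed framing component (2-byte big-endian header)."""
    def encode(self, data): return len(data).to_bytes(2, "big") + data
    def decode(self, data): return data[2:2 + int.from_bytes(data[:2], "big")]

class XorScramble(Layer):
    """Stand-in for a real transform component (XOR is its own inverse)."""
    def encode(self, data): return bytes(b ^ 0x5A for b in data)
    decode = encode

class Stack:
    def __init__(self, *layers): self.layers = layers
    def send(self, data):
        for layer in self.layers: data = layer.encode(data)
        return data
    def receive(self, data):
        for layer in reversed(self.layers): data = layer.decode(data)
        return data

stack = Stack(Framing(), XorScramble())  # assembled per application needs
wire = stack.send(b"ping")
print(stack.receive(wire))  # b'ping'
```

Swapping, adding, or removing layers only changes the `Stack(...)` assembly line, which is the reconfigurability the record emphasizes.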

  13. Secure Wireless Embedded Systems Via Component-based Design

    DEFF Research Database (Denmark)

    Hjorth, Theis S.; Torbensen, R.

    2010-01-01

    This paper introduces the method secure-by-design as a way of constructing wireless embedded systems using component-based modeling frameworks. This facilitates design of secure applications through verified, reusable software. Following this method we propose a security framework with a secure...... communication component for distributed wireless embedded devices. The components communicate using the Secure Embedded Exchange Protocol (SEEP), which has been designed for flexible trust establishment so that small, resource-constrained, wireless embedded systems are able to communicate short command messages...

  14. Secure wireless embedded systems via component-based design

    DEFF Research Database (Denmark)

    Hjorth, T.; Torbensen, R.

    2010-01-01

    This paper introduces the method secure-by-design as a way of constructing wireless embedded systems using component-based modeling frameworks. This facilitates design of secure applications through verified, reusable software. Following this method we propose a security framework with a secure...... communication component for distributed wireless embedded devices. The components communicate using the Secure Embedded Exchange Protocol (SEEP), which has been designed for flexible trust establishment so that small, resource-constrained, wireless embedded systems are able to communicate short command messages...

  15. Computational Models of Stellar Collapse and Core-Collapse Supernovae

    CERN Document Server

    Ott, C D; Burrows, A; Livne, E; O'Connor, E; Löffler, F

    2009-01-01

    Core-collapse supernovae are among Nature's most energetic events. They mark the end of massive star evolution and pollute the interstellar medium with the life-enabling ashes of thermonuclear burning. Despite their importance for the evolution of galaxies and life in the universe, the details of the core-collapse supernova explosion mechanism remain in the dark and pose a daunting computational challenge. We outline the multi-dimensional, multi-scale, and multi-physics nature of the core-collapse supernova problem and discuss computational strategies and requirements for its solution. Specifically, we highlight the axisymmetric (2D) radiation-MHD code VULCAN/2D and present results obtained from the first full-2D angle-dependent neutrino radiation-hydrodynamics simulations of the post-core-bounce supernova evolution. We then go on to discuss the new code Zelmani which is based on the open-source HPC Cactus framework and provides a scalable AMR approach for 3D fully general-relativistic modeling of stellar col...

  16. The development of component-based information systems

    CERN Document Server

    Cesare, Sergio de; Macredie, Robert

    2015-01-01

    This work provides a comprehensive overview of research and practical issues relating to component-based information systems (CBIS). Spanning the organizational, developmental, and technical aspects of the subject, the original research included here provides fresh insights into successful CBIS technology and application. Part I covers component-based development methodologies and system architectures. Part II analyzes different aspects of managing component-based development. Part III investigates component-based development versus commercial off-the-shelf products (COTS), includi

  17. Component-based analysis of embedded control applications

    DEFF Research Database (Denmark)

    Angelov, Christo K.; Guan, Wei; Marian, Nicolae

    2011-01-01

    presents an analysis technique that can be used to validate COMDES design models in SIMULINK. It is based on a transformation of the COMDES design model into a SIMULINK analysis model, which preserves the functional and timing behaviour of the application. This technique has been employed to develop...... configuration of applications from validated design models and trusted components. This design philosophy has been instrumental for developing COMDES—a component-based framework for distributed embedded control systems. A COMDES application is conceived as a network of embedded actors that are configured from...... instances of reusable, executable components—function blocks (FBs). System actors operate in accordance with a timed multitasking model of computation, whereby I/O signals are exchanged with the controlled plant at precisely specified time instants, resulting in the elimination of I/O jitter. The paper...

  18. Design of homogeneous trench-assisted multi-core fibers based on analytical model

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2016-01-01

    is the quasi-optimum core layout starting from a one-ring structured 12-core fiber. Based on the analytical model, a square-lattice structured 24-core fiber and a 32-core fiber are designed both for propagation-direction interleaving (PDI) and non-PDI transmission schemes. The proposed model provides...

  19. Experimental determination of a LMFBR seismic equivalent core model

    Energy Technology Data Exchange (ETDEWEB)

    Buland, P.; Fegeant, O.; Fontaine, B.; Gantenbein, F.

    1995-12-31

    Seismic analysis of pool-type LMFBRs requires a finite element calculation of the reactor. Because of fluid-structure interaction and non-linearities due to the presence of gaps between subassemblies, it is impossible to include the real behaviour of the core in the reactor vessel finite element model. It is therefore necessary to find a linear equivalent core model (LECM) that gives the same results for the reactor vessel. The design of the LECM is based on an experimental test program conducted with the core mock-up RAPSODIE on the Vesuve shaking table located at the CEA/Saclay center. The tests made it possible to validate a linear equivalent model whose characteristics correspond to the modal parameters of the mock-up (masses, elevations, frequencies...). These characteristics were estimated in air and in water, for different levels of excitation. They allowed the added mass ratio (about 15%) to be quantified, which is in rather good agreement with the computation when the free surface effect is correctly taken into account. (authors). 2 refs., 5 figs., 1 photo.

  20. Model-driven Safety Dependence Verification for Component-based Airborne Software Supporting Airworthiness Certification

    Institute of Scientific and Technical Information of China (English)

    徐丙凤; 黄志球; 胡军; 于笑丰

    2012-01-01

    Current research on airborne software focuses on providing airworthiness certification evidence during the software development process. As modern complex airborne software architectures are component-based and distributed, this paper considers the issue of checking the safety dependence relationships of software components against the objectives stipulated by the airworthiness certification standard, one of the key problems of airborne software development in the design phase. Firstly, the static structure of a system is specified by a Systems Modeling Language (SysML) block definition diagram with a description of safety properties. Secondly, the SysML block definition diagram is transformed into a block dependence graph for precise formal description. Thirdly, a method for checking the consistency between the safety dependence relationships in the static system structure and the objectives of the airworthiness certification standard is proposed. Finally, an example of an aircraft navigation system illustrates how to use the method in the airborne software development process. Applying this method improves the integrated safety level of a system, and it can be used to provide airworthiness certification evidence.

  1. Core-Collapse Supernovae: Modeling between Pragmatism and Perfectionism

    CERN Document Server

    Janka, H T; Kitaura Joyanes, F S; Marek, A; Rampp, M

    2004-01-01

    We briefly summarize recent efforts in Garching for modeling stellar core collapse and post-bounce evolution in one and two dimensions. The transport of neutrinos of all flavors is treated by iteratively solving the coupled system of frequency-dependent moment equations together with a model Boltzmann equation which provides the closure. A variety of progenitor stars, different nuclear equations of state, stellar rotation, and global asymmetries due to large-mode hydrodynamic instabilities have been investigated to ascertain the road to finally successful, convectively supported neutrino-driven explosions.

  2. Effective Field Theory and the No-Core Shell Model

    Directory of Open Access Journals (Sweden)

    Stetcu, I.

    2010-04-01

    Full Text Available In a finite model space suitable for many-body calculations via the no-core shell model (NCSM), I illustrate the direct application of effective field theory (EFT) principles to solving the many-body Schrödinger equation. Two different avenues for fixing the low-energy constants that naturally arise in an EFT approach are discussed. I review results for both nuclear and trapped atomic systems, using effective theories that are formally similar, albeit describing different underlying physics.

  3. Cycle length maximization in PWRs using empirical core models

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem.
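The reduction described here can be illustrated with a toy version: given a linear correlation between zone enrichments and cycle length, valid over a bounded range, maximizing cycle length under linear constraints is a linear program. The coefficients below are invented for illustration, and a coarse grid search stands in for an LP solver:

```python
# Toy sketch (coefficients invented): with a linear correlation
# cycle_length = c0 + sum(a_i * e_i) valid over a bounded enrichment range,
# maximizing cycle length subject to linear constraints is a linear program.
from itertools import product

a = [40.0, 55.0, 30.0]          # days of cycle length per w/o enrichment, per zone
c0 = 100.0                      # base cycle length, days
grid = [2.0, 2.5, 3.0, 3.5]     # candidate zone enrichments, w/o U-235

def cycle_length(e):
    return c0 + sum(ai * ei for ai, ei in zip(a, e))

best = max(
    (e for e in product(grid, repeat=3)
     if sum(e) / 3 <= 3.0),                  # linear constraint: average enrichment cap
    key=cycle_length,
)
print(best, cycle_length(best))  # (3.5, 3.5, 2.0) 492.5
```

The optimum concentrates enrichment in the zones with the largest correlation coefficients until the constraint binds, which is exactly the behavior an LP solver would exploit at the feasible region's vertices.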

  4. Dynamical Models to Infer the Core Mass Fraction of Venus

    Science.gov (United States)

    Quintana, Elisa V.; Barclay, Thomas

    2016-10-01

    The uncompressed density of Venus is just a few percent lower than Earth's; however, the nature of the interior core structure of Venus remains unclear. Employing state-of-the-art dynamical formation models that allow both accretion and collisional fragmentation, we perform hundreds of simulations of terrestrial planet growth around the Sun in the presence of the giant planets. For both Earth and Venus analogs, we quantify the iron-silicate ratios, water/volatile abundances and specific impact energies of all collisions that lead to their formation. Preliminary results suggest that the distributions of core mass fraction and water content are comparable among the Earth and Venus analogs, suggesting that Earth and Venus may indeed have formed with similar structures and compositions.

  5. Testing a new Free Core Nutation empirical model

    Science.gov (United States)

    Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald

    2016-03-01

    The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
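The sliding-window estimation idea can be sketched in a few lines: within each window, assume a fixed oscillation period and fit the time-variable amplitude and phase by linear least squares. The data below are synthetic and the window numbers arbitrary, chosen only to illustrate the mechanics rather than reproduce the paper's 400-day window or day-by-day shift:

```python
# Minimal sketch of multiple sliding-window amplitude/phase estimation
# for a fixed-period oscillation, via linear least squares.
import math

def fit_window(t, x, period):
    """Least-squares fit of x ~ A*cos(w t) + B*sin(w t); return (amplitude, phase)."""
    w = 2 * math.pi / period
    c = [math.cos(w * ti) for ti in t]
    s = [math.sin(w * ti) for ti in t]
    # Normal equations of the 2-parameter linear model.
    scc = sum(ci * ci for ci in c); sss = sum(si * si for si in s)
    scs = sum(ci * si for ci, si in zip(c, s))
    scx = sum(ci * xi for ci, xi in zip(c, x)); ssx = sum(si * xi for si, xi in zip(s, x))
    det = scc * sss - scs * scs
    A = (scx * sss - ssx * scs) / det
    B = (ssx * scc - scx * scs) / det
    return math.hypot(A, B), math.atan2(-B, A)   # x ~ R*cos(w t + phase)

def sliding_fits(t, x, period, width, step):
    return [fit_window(t[i:i + width], x[i:i + width], period)
            for i in range(0, len(t) - width + 1, step)]

# Synthetic check: recover a known amplitude and phase from noiseless data.
period = 430.0                                   # fixed assumed period, days
t = list(range(0, 2000, 5))
x = [0.2 * math.cos(2 * math.pi * ti / period + 0.4) for ti in t]
amp, phase = sliding_fits(t, x, period, width=80, step=40)[0]
print(round(amp, 3), round(phase, 3))  # 0.2 0.4
```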

  6. Two-fluid models of superfluid neutron star cores

    CERN Document Server

    Chamel, N

    2008-01-01

    Both relativistic and non-relativistic two-fluid models of neutron star cores are constructed, using the constrained variational formalism developed by Brandon Carter and co-workers. We consider a mixture of superfluid neutrons and superconducting protons at zero temperature, taking into account mutual entrainment effects. Leptons, which affect the interior composition of the neutron star and contribute to the pressure, are also included. We provide the analytic expression of the Lagrangian density of the system, the so-called master function, from which the dynamical equations can be obtained. All the microscopic parameters of the models are calculated consistently using the non-relativistic nuclear energy density functional theory. For comparison, we have also considered relativistic mean field models. The correspondence between relativistic and non-relativistic hydrodynamical models is discussed in the framework of the recently developed 4D covariant formalism of Newtonian multi-fluid hydrodynamics. We hav...

  7. A Global Model for Circumgalactic and Cluster-core Precipitation

    Science.gov (United States)

    Voit, G. Mark; Meece, Greg; Li, Yuan; O'Shea, Brian W.; Bryan, Greg L.; Donahue, Megan

    2017-08-01

    We provide an analytic framework for interpreting observations of multiphase circumgalactic gas that is heavily informed by recent numerical simulations of thermal instability and precipitation in cool-core galaxy clusters. We start by considering the local conditions required for the formation of multiphase gas via two different modes: (1) uplift of ambient gas by galactic outflows, and (2) condensation in a stratified stationary medium in which thermal balance is explicitly maintained. Analytic exploration of these two modes provides insights into the relationships between the local ratio of the cooling and freefall timescales (i.e., t_cool/t_ff), the large-scale gradient of specific entropy, and the development of precipitation and multiphase media in circumgalactic gas. We then use these analytic findings to interpret recent simulations of circumgalactic gas in which global thermal balance is maintained. We show that long-lasting configurations of gas with 5 ≲ min(t_cool/t_ff) ≲ 20 and radial entropy profiles similar to observations of cool cores in galaxy clusters are a natural outcome of precipitation-regulated feedback. We conclude with some observational predictions that follow from these models. This work focuses primarily on precipitation and AGN feedback in galaxy-cluster cores, because that is where the observations of multiphase gas around galaxies are most complete. However, many of the physical principles that govern condensation in those environments apply to circumgalactic gas around galaxies of all masses.

  8. Feasibility analysis of real-time physical modeling using WaveCore processor technology on FPGA

    NARCIS (Netherlands)

    Verstraelen, Math; Pfeifle, Florian; Bader, Rolf

    2015-01-01

    WaveCore is a scalable many-core processor technology. This technology is specifically developed and optimized for real-time acoustical modeling applications. The programmable WaveCore soft-core processor is silicon-technology independent and hence can be targeted to ASIC or FPGA technologies. The W

  9. Lifting a Butterfly – A Component-Based FFT

    Directory of Open Access Journals (Sweden)

    Sibylle Schupp

    2003-01-01

    Full Text Available While modern software engineering, with good reason, tries to establish the idea of reusability and the principles of parameterization and loosely coupled components even for the design of performance-critical software, Fast Fourier Transforms (FFTs) tend to be monolithic and of a very low degree of parameterization. The data structures that hold the input and output data, the element type of these data, the algorithm for computing the so-called twiddle factors, the storage model for a given set of twiddle factors: all are unchangeably defined in the so-called butterfly, restricting its reuse almost entirely. This paper shows a way to a component-based FFT by designing a parameterized butterfly. Based on the technique of lifting, this parameterization includes algorithmic and implementation issues without violating the complexity guarantees of an FFT. The paper demonstrates the lifting process for the Gentleman-Sande butterfly, i.e., the butterfly that underlies the large class of decimation-in-frequency (DIF) FFTs, shows the resulting components and summarizes the implementation of a component-based, generic DIF library in C++.
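A minimal sketch of the idea, with the twiddle-factor computation lifted out of the butterfly as a swappable parameter; this is an illustrative reconstruction in Python, not the paper's C++ components:

```python
import cmath

def default_twiddles(n):
    """Twiddle-factor component: w^k = exp(-2*pi*i*k/n), k = 0..n/2-1."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]

def fft_dif(a, twiddles=default_twiddles):
    """Recursive decimation-in-frequency FFT built around the
    Gentleman-Sande butterfly; the twiddle computation is a swappable
    parameter (e.g. replace it with a cached, table-based component)."""
    n = len(a)
    if n == 1:
        return list(a)
    half = n // 2
    w = twiddles(n)
    # Gentleman-Sande butterfly: sums feed the even-index outputs,
    # twiddled differences feed the odd-index outputs.
    sums  = [a[k] + a[k + half] for k in range(half)]
    diffs = [(a[k] - a[k + half]) * w[k] for k in range(half)]
    out = [0] * n
    out[0::2] = fft_dif(sums, twiddles)
    out[1::2] = fft_dif(diffs, twiddles)
    return out

# Spot check against the DFT of a small input.
X = fft_dif([1, 2, 3, 4])
print([round(abs(v), 6) for v in X])  # magnitudes of the 4-point DFT
```

Because the twiddle provider is a parameter, storage and computation strategies can vary independently of the butterfly, which is the separation the lifting technique aims at.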

  10. Core competency model for the family planning public health nurse.

    Science.gov (United States)

    Hewitt, Caroline M; Roye, Carol; Gebbie, Kristine M

    2014-01-01

    A core competency model for family planning public health nurses has been developed using a three-stage Delphi method with an expert panel of 40 family planning senior administrators, community/public health nursing faculty, and seasoned family planning public health nurses. The initial survey was developed from the 2011 Title X Family Planning program priorities. The 32-item survey was distributed electronically via SurveyMonkey®. Panelist attrition was low and participation robust, resulting in the final 28-item model and suggesting that the Delphi method was a successful technique through which to achieve consensus. Competencies with at least 75% consensus were included in the model; those competencies were primarily related to education/counseling and administration of medications and contraceptives. The competencies identified have implications for education/training, certification and workplace performance. © 2014 Wiley Periodicals, Inc.
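The 75% inclusion rule described above amounts to a simple consensus filter; the items and vote counts below are invented for illustration:

```python
# Toy sketch of the inclusion rule: keep survey items reaching at least
# 75% panel consensus. Items and votes are invented, not the study's data.
def consensus_items(ratings, threshold=0.75):
    """ratings: {item: [True/False votes for 'essential competency']}."""
    return [item for item, votes in ratings.items()
            if sum(votes) / len(votes) >= threshold]

ratings = {
    "contraceptive administration": [True] * 36 + [False] * 4,   # 90% consensus
    "billing procedures":           [True] * 20 + [False] * 20,  # 50% consensus
}
print(consensus_items(ratings))  # ['contraceptive administration']
```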

  11. Computational modeling for hexcan failure under core disruptive accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sawada, T.; Ninokata, H.; Shimizu, A. [Tokyo Institute of Technology (Japan)

    1995-09-01

    This paper describes the development of computational modeling for hexcan wall failures under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH has been analyzed by using the SIMMER-II code. The SIMBATH experiments were performed at KfK in Germany. The experiments used a thermite mixture to simulate fuel. The test geometry of SIMBATH ranged from single pin to 37-pin bundles. In this study, phenomena of hexcan wall failure found in a SIMBATH test were analyzed by SIMMER-II. Although the original model of SIMMER-II did not calculate any hexcan failure, several simple modifications made it possible to reproduce the hexcan wall melt-through observed in the experiment. In this paper the modifications and their significance are discussed for further modeling improvements.

  12. Verifying Embedded Systems using Component-based Runtime Observers

    DEFF Research Database (Denmark)

    Guan, Wei; Marian, Nicolae; Angelov, Christo K.

    Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive yet feasible approach to monitoring system behavior...... against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components---Predicate Evaluator (PE) and Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter...... is a reconfigurable component processing a data structure, representing the state transition diagram of a non-deterministic state machine, i.e. a Buchi automaton derived from a system property specified in Linear Temporal Logic (LTL). Observer components have been implemented using design models and design patterns...
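The PE/TE division of labor can be sketched as follows. The transition encoding below is an assumption for illustration, not COMDES's actual data structure, and the monitored property is a simple invented safety property:

```python
# Hedged sketch: a Predicate Evaluator (PE) maps raw samples to atomic
# propositions, and a Temporal Evaluator (TE) steps a non-deterministic
# Buchi automaton over them, tracking the set of possible states.
def predicate_evaluator(sample):
    """PE: compute atomic propositions from a raw system sample."""
    return {"overpressure"} if sample > 100 else set()

# TE transition table for the safety property G !overpressure
# ("overpressure never holds"):
# state -> [((must_hold, must_not_hold), next_state), ...]
transitions = {
    "ok": [((frozenset(), frozenset({"overpressure"})), "ok")],
}

def temporal_evaluator(states, props):
    """TE: one observer step; an empty state set signals a violation."""
    nxt = set()
    for s in states:
        for (pos, neg), t in transitions.get(s, []):
            if pos <= props and not (neg & props):
                nxt.add(t)
    return nxt

states = {"ok"}
for sample in (40, 80, 120):        # PE feeds the TE one sample at a time
    states = temporal_evaluator(states, predicate_evaluator(sample))
print("property holds so far:", bool(states))  # False: trace violated the property
```

Tracking the state *set* handles the non-determinism of a Büchi automaton derived from an LTL property; a real observer would also track acceptance conditions for liveness properties.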

  13. Recent Developments in No-Core Shell-Model Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R

    2009-03-20

    We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary. If that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we, in particular, highlight results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or use of effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.

  14. Component-Based Reduced Basis for Eigenproblems

    Science.gov (United States)

    2013-07-17

    Timoshenko model [18], global FEM and SCRBE with and without port reduction (in which the beam is constructed as the concatenation of eight beam...not taken into account in the Euler-Bernoulli and Timoshenko models, which consider only bending displacement. Note that for a beam with a square section...and/or slender beams; Timoshenko is better for shorter wavelengths and/or shorter beams. Not surprisingly, the FE (and SCRBE) eigenvalues are closer to

  15. A numerical strategy for modelling rotating stall in core compressors

    Science.gov (United States)

    Vahdati, M.

    2007-03-01

    The paper will focus on one specific core-compressor instability, rotating stall, because of the pressing industrial need to improve current design methods. The determination of the blade response during rotating stall is a difficult problem for which there is no reliable procedure. During rotating stall, the blades encounter the stall cells and the excitation depends on the number, size, exact shape and rotational speed of these cells. The long-term aim is to minimize the forced response due to rotating stall excitation by avoiding potential matches between the vibration modes and the rotating stall pattern characteristics. Accurate numerical simulations of core-compressor rotating stall phenomena require the modelling of a large number of bladerows using grids containing several tens of millions of points. The time-accurate unsteady-flow computations may need to be run for several engine revolutions for rotating stall to get initiated and many more before it is fully developed. The difficulty in rotating stall initiation arises from a lack of representation of the triggering disturbances which are inherently present in aeroengines. Since the numerical model represents a symmetric assembly, the only random mechanism for rotating stall initiation is provided by numerical round-off errors. In this work, rotating stall is initiated by introducing a small amount of geometric mistuning to the rotor blades. Another major obstacle in modelling flows near stall is the specification of appropriate upstream and downstream boundary conditions. Obtaining reliable boundary conditions for such flows can be very difficult. In the present study, the low-pressure compression (LPC) domain is placed upstream of the core compressor. With such an approach, only far-field atmospheric boundary conditions are specified, which are obtained from aircraft speed and altitude. A choked variable-area nozzle, placed after the last compressor bladerow in the model, is used to impose boundary

  16. Core surface flow modelling from high-resolution secular variation

    DEFF Research Database (Denmark)

    Holme, R.; Olsen, Nils

    2006-01-01

    -flux hypothesis, but the spectrum of the SV implies that a conclusive test of frozen-flux is not possible. We parametrize the effects of diffusion as an expected misfit in the flow prediction due to departure from the frozen-flux hypothesis; at low spherical harmonic degrees, this contribution dominates...... the expected departure of the SV predictions from flow to the observed SV, while at high degrees the SV model uncertainty is dominant. We construct fine-scale core surface flows to model the SV. Flow non-uniqueness is a serious problem because the flows are sufficiently small scale to allow flow around non......-series of magnetic data and better parametrization of the external magnetic field....

  17. Baryon-Baryon Interactions ---Nijmegen Extended-Soft-Core Models---

    Science.gov (United States)

    Rijken, T. A.; Nagels, M. M.; Yamamoto, Y.

    We review the Nijmegen extended-soft-core (ESC) models for the baryon-baryon (BB) interactions of the SU(3) flavor-octet of baryons (N, Lambda, Sigma, and Xi). The interactions are basically studied from the meson-exchange point of view, in the spirit of the Yukawa-approach to the nuclear force problem [H. Yukawa, ``On the interaction of Elementary Particles I'', Proceedings of the Physico-Mathematical Society of Japan 17 (1935), 48], using generalized soft-core Yukawa-functions. These interactions are supplemented with (i) multiple-gluon-exchange, and (ii) structural effects due to the quark-core of the baryons. We present in some detail the most recent extended-soft-core model, henceforth referred to as ESC08, which is the most complete, sophisticated, and successful interaction-model. Furthermore, we discuss briefly its predecessor, the ESC04-model [Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044007; Th. A. Rijken and Y. Yamamoto, Phys. Rev. C 73 (2006), 044008; Th. A. Rijken and Y. Yamamoto, nucl-th/0608074]. For the soft-core one-boson-exchange (OBE) models we refer to the literature [Th. A. Rijken, in Proceedings of the International Conference on Few-Body Problems in Nuclear and Particle Physics, Quebec, 1974, ed. R. J. Slobodrian, B. Cuec and R. Ramavataram (Presses Université Laval, Quebec, 1975), p. 136; Th. A. Rijken, Ph. D. thesis, University of Nijmegen, 1975; M. M. Nagels, Th. A. Rijken and J. J. de Swart, Phys. Rev. D 17 (1978), 768; P. M. M. Maessen, Th. A. Rijken and J. J. de Swart, Phys. Rev. C 40 (1989), 2226; Th. A. Rijken, V. G. J. Stoks and Y. Yamamoto, Phys. Rev. C 59 (1999), 21; V. G. J. Stoks and Th. A. Rijken, Phys. Rev. C 59 (1999), 3009]. All ingredients of these latter models are also part of ESC08, and so a description of ESC08 comprises all models so far in principle.
The extended-soft-core (ESC) interactions consist of local- and non-local-potentials due to (i) one-boson-exchanges (OBE), which are the members of nonets of

  18. A refinement driven component-based design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter;

    2007-01-01

    to integrate sophisticated checkers, generators and transformations. A feasible approach to ensuring high quality of such add-ins is to base them on sound formal foundations. This paper summarizes our research on the Refinement of Component and Object Systems (rCOS) and illustrates it with experiences from...... the work on the Common Component Modelling Example (CoCoME). This gives evidence that the formal techniques developed in rCOS can be integrated into a model-driven development process and shows where it may be integrated in computer-aided software engineering (CASE) tools for adding formally supported...

  19. Benchmarking spin-state chemistry in starless core models

    CERN Document Server

    Sipilä, O; Harju, J

    2015-01-01

    Aims. We aim to present simulated chemical abundance profiles for a variety of important species, with special attention given to spin-state chemistry, in order to provide reference results against which present and future models can be compared. Methods. We employ gas-phase and gas-grain models to investigate chemical abundances in physical conditions corresponding to starless cores. To this end, we have developed new chemical reaction sets for both gas-phase and grain-surface chemistry, including the deuterated forms of species with up to six atoms and the spin-state chemistry of light ions and of the species involved in the ammonia and water formation networks. The physical model is kept simple in order to facilitate straightforward benchmarking of other models against the results of this paper. Results. We find that the ortho/para ratios of ammonia and water are similar in both gas-phase and gas-grain models, at late times in particular, implying that the ratios are determined by gas-phase processes. We d...

  20. Development of an automated core model for nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Mosteller, R.D.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop an automated package of computer codes that can model the steady-state behavior of nuclear-reactor cores of various designs. As an added benefit, data produced for steady-state analysis also can be used as input to the TRAC transient-analysis code for subsequent safety analysis of the reactor at any point in its operating lifetime. The basic capability to perform steady-state reactor-core analysis already existed in the combination of the HELIOS lattice-physics code and the NESTLE advanced nodal code. In this project, the automated package was completed by (1) obtaining cross-section libraries for HELIOS, (2) validating HELIOS by comparing its predictions to results from critical experiments and from the MCNP Monte Carlo code, (3) validating NESTLE by comparing its predictions to results from numerical benchmarks and to measured data from operating reactors, and (4) developing a linkage code to transform HELIOS output into NESTLE input.

  1. Model uniform core criteria for mass casualty triage.

    Science.gov (United States)

    2011-06-01

    There is a need for model uniform core criteria for mass casualty triage because disasters frequently cross jurisdictional lines and involve responders from multiple agencies who may be using different triage tools. These criteria (Tables 1-4) reflect the available science, but it is acknowledged that there are significant research gaps. When no science was available, decisions were formed by expert consensus derived from the available triage systems. The intent is to ensure that providers at a mass-casualty incident use triage methodologies that incorporate these core principles in an effort to promote interoperability and standardization. At a minimum, each triage system must incorporate the criteria that are listed below. Mass casualty triage systems in use can be modified using these criteria to ensure interoperability. The criteria include general considerations, global sorting, lifesaving interventions, and assignment of triage categories. The criteria apply only to providers who are organizing multiple victims in a discrete geographic location or locations, regardless of the size of the incident. They are classified by whether they were derived through available direct scientific evidence, indirect scientific evidence, expert consensus, and/or are used in multiple existing triage systems. These criteria address only primary triage and do not consider secondary triage. For the purposes of this document the term triage refers to mass-casualty triage and provider refers to any person who assigns primary triage categories to victims of a mass-casualty incident.

  2. The Geological information and modelling Thematic Core Service of EPOS

    Science.gov (United States)

    Robida, François; Wächter, Joachim; Tulstrup, Jørgen; Lorenz, Henning; Carter, Mary; Cipolloni, Carlo; Morel, Olivier

    2016-04-01

Geological data and models are important assets for the EPOS community. The Geological information and modelling Thematic Core Service of EPOS is being designed and will be implemented as an efficient and sustainable access system for multi-scale geological data assets for EPOS, through the integration of distributed infrastructure components (nodes) of geological surveys, research institutes and the international drilling community (ICDP/IODP). The TCS will develop and benefit from the synergy between the existing data infrastructures of the Geological Surveys of Europe (EuroGeoSurveys / OneGeology-Europe / EGDI) and the large amount of information produced by research organisations. These nodes will offer a broad range of resources including: geological maps, borehole data, geophysical data (seismic data, borehole log data), archived information on physical material (samples, cores), geochemical and other analyses of rocks, soils and minerals, and geological models (3D, 4D). The services will be implemented on international standards (such as INSPIRE, IUGS/CGI, OGC, W3C, ISO) in order to guarantee their interoperability with other EPOS TCSs as well as their compliance with the INSPIRE European Directive and international initiatives (such as OneGeology). This will provide future virtual research environments with means to facilitate the use of existing information for future applications. In addition, workflows will be established that allow the integration of other existing and new data and applications. Processing and the use of simulation and visualization tools will subsequently support the integrated analysis and characterization of complex subsurface structures and their inherent dynamic processes. This will in turn aid the overall understanding of complex multi-scale geo-scientific questions. This TCS will work alongside other EPOS TCSs to create an efficient and comprehensive multidisciplinary research platform for the Earth Sciences in Europe.

  3. Experimental determination of LMFBR seismic equivalent core model

    Energy Technology Data Exchange (ETDEWEB)

    Fontaine, B.; Buland, P.; Fegeant, O.; Gantenbein, F. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France)

    1995-12-31

The main phenomena which influence an LMFBR core seismic response are the fluid-structure interaction and the impacts between subassemblies. To study the core behaviour, seismic tests and calculations have been performed on the core mock-up RAPSODIE in air or in water and for different excitation levels. (author). 2 refs., 6 figs.

  4. Sulfur chemistry: 1D modeling in massive dense cores

    CERN Document Server

    Wakelam, V; Herpin, F

    2011-01-01

The main sulfur-bearing molecules OCS, H2S, SO, SO2, and CS have been observed in four high-mass dense cores (W43-MM1, IRAS 18264, IRAS 05358, and IRAS 18162). Our goal is to put some constraints on the relative evolutionary stage of these sources by comparing these observations with time-dependent chemical modeling. We used the chemical model Nahoon, which computes the gas-phase chemistry and gas-grain interactions of depletion and evaporation. Mixing of the shells of different chemical composition in a 1D structure through the protostellar envelope has been included, since observed lines suggest nonthermal supersonic broadening. Observed radial profiles of the temperature and density are used to compute the chemistry as a function of time. With our model, we underproduce CS by several orders of magnitude compared to the other S-bearing molecules, which seems to contradict observations, although some uncertainties in the CS abundance observed at high temperature remain. The OCS/SO2, SO/SO2, and H2S/SO2 abundance ra...

  5. Modeling of molecular clouds with formation of prestellar cores

    CERN Document Server

    Donkov, Sava; Veltchev, Todor V

    2012-01-01

    We develop a statistical approach for description of dense structures (cores) in molecular clouds that might be progenitors of stars. Our basic assumptions are a core mass-density relationship and a power-law density distribution of these objects as testified by numerical simulations and observations. The core mass function (CMF) was derived and its slope in the high-mass regime was obtained analytically. Comparisons with observational CMFs in several Galactic clouds are briefly presented.

  6. Mechanical behavior of a sandwich with corrugated GRP core: numerical modeling and experimental validation

    OpenAIRE

    Tumino, D; T. Ingrassia; V. Nigrelli; G. Pitarresi; V. Urso Miano

    2014-01-01

    In this work the mechanical behaviour of a core reinforced composite sandwich structure is studied. The sandwich employs a Glass Reinforced Polymer (GRP) orthotropic material for both the two external skins and the inner core web. In particular, the core is designed in order to cooperate with the GRP skins in membrane and flexural properties by means of the addition of a corrugated laminate into the foam core. An analytical model has been developed to replace a unit cell of this s...

  7. On-line core monitoring system based on buckling corrected modified one group model

    Energy Technology Data Exchange (ETDEWEB)

    Freire, Fernando S., E-mail: freire@eletronuclear.gov.br [ELETROBRAS Eletronuclear Gerencia de Combustivel Nuclear, Rio de Janeiro, RJ (Brazil)

    2011-07-01

Nuclear power reactors require core monitoring during plant operation. To provide safe, clean and reliable power, core conditions must be evaluated continuously. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors and fixed or moveable in-core detectors, can readily predict and monitor a variety of plant conditions. Typically, standard nodal methods can be found at the heart of such nuclear monitoring code systems. However, standard nodal methods require large computer running times when compared with standard coarse-mesh finite-difference schemes. Unfortunately, classic finite-difference models require a fine-mesh reactor core representation. To overcome this undesirable characteristic, the classic modified one-group model can be used to account for the main core neutronic behavior. In this model a coarse-mesh core representation can be easily evaluated with a crude treatment of thermal neutron leakage. In this work, an improvement to the classic modified one-group model based on a thermal buckling correction was used to obtain a fast, accurate and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)
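
For reference, the modified one-group model mentioned above is commonly written in the standard textbook form below (the paper's specific buckling correction is not reproduced here; this is only the generic relation):

```latex
k_{\mathrm{eff}} \;=\; \frac{k_\infty}{1 + M^2 B^2}, \qquad M^2 = L^2 + \tau,
```

where \(k_\infty\) is the infinite-medium multiplication factor, \(B^2\) the geometric buckling, \(L^2\) the thermal diffusion area and \(\tau\) the Fermi age; the buckling-based thermal correction described in the abstract presumably refines the thermal-leakage treatment entering \(M^2\).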

  8. Implementing Quality Assurance Features in Component-based Software System

    National Research Council Canada - National Science Library

    Navdeep Batolar; Parminder Kaur

    2016-01-01

    The increasing demand of component-based development approach (CBDA) gives opportunity to the software developers to increase the speed of the software development process and lower its production cost...

  9. Component-based Control Software Design for Flexible Manufacturing System

    Institute of Scientific and Technical Information of China (English)

    周炳海; 奚立峰; 曹永上

    2003-01-01

A new method that designs and implements component-based distributed and hierarchical flexible manufacturing control software using the component concept is described in this paper. The proposed method aims at improving the flexibility and reliability of the control system. On the basis of describing the concepts of component-based software and distributed object technology, the architecture of the component-based control system software is suggested with the Common Object Request Broker Architecture (CORBA). We then propose a design method for a component-based distributed and hierarchical flexible manufacturing control system. Finally, to verify the software design method, a prototype flexible manufacturing control system software has been implemented in Orbix 2.3c and VC++ 6.0 and tested in connection with the physical flexible manufacturing shop at the WuXi Professional Institute.

  10. Core cooling by subsolidus mantle convection. [thermal evolution model of earth

    Science.gov (United States)

    Schubert, G.; Cassen, P.; Young, R. E.

    1979-01-01

    Although vigorous mantle convection early in the thermal history of the earth is shown to be capable of removing several times the latent heat content of the core, a thermal evolution model of the earth in which the core does not solidify can be constructed. The large amount of energy removed from the model earth's core by mantle convection is supplied by the internal energy of the core which is assumed to cool from an initial high temperature given by the silicate melting temperature at the core-mantle boundary. For the smaller terrestrial planets, the iron and silicate melting temperatures at the core-mantle boundaries are more comparable than for the earth; the models incorporate temperature-dependent mantle viscosity and radiogenic heat sources in the mantle. The earth models are constrained by the present surface heat flux and mantle viscosity and internal heat sources produce only about 55% of the earth model's present surface heat flow.

  11. Exact solutions of the high dimensional hard-core Fermi-Hubbard model

    Institute of Scientific and Technical Information of China (English)

    潘峰; 戴连荣

    2001-01-01

A simple algebraic approach to exact solutions of the hard-core Fermi-Hubbard model is proposed. Excitation energies and the corresponding wavefunctions of the hard-core Fermi-Hubbard model with nearest-neighbour hopping in high dimensions are obtained using this method, which shows that the model is exactly solvable in any dimension.

  12. Inertial waves in a laboratory model of the Earth's core

    Science.gov (United States)

    Triana, Santiago Andres

    2011-12-01

    A water-filled three-meter diameter spherical shell built as a model of the Earth's core shows evidence of precessionally forced flows and, when spinning the inner sphere differentially, inertial modes are excited. We identified the precessionally forced flow to be primarily the spin-over inertial mode, i.e., a uniform vorticity flow whose rotation axis is not aligned with the container's rotation axis. A systematic study of the spin-over mode is carried out, showing that the amplitude dependence on the Poincare number is in qualitative agreement with Busse's laminar theory while its phase differs significantly, likely due to topographic effects. At high rotation rates free shear layers concentrating most of the kinetic energy of the spin-over mode have been observed. When spinning the inner sphere differentially, a total of 12 inertial modes have been identified, reproducing and extending previous experimental results. The inertial modes excited appear ordered according to their azimuthal drift speed as the Rossby number is varied.

  13. On-Line Core Thermal-Hydraulic Model Improvement

    Energy Technology Data Exchange (ETDEWEB)

    In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok; Shin, Chang Hwan; Hwang, Dae Hyun; Seo, Kyung Won

    2007-02-15

The objective of this project is to implement a fast-running 4-channel based code, CETOP-D, in an advanced reactor core protection calculator system (RCOPS). The parts required for the on-line calculation of DNBR were extracted from the source of the CETOP-D code based on an analysis of the code. The CETOP-D code was revised to maintain input and output variables that are the same as in the CPC DNBR module. Since the DNBR module performs a complex calculation, it is divided into sub-modules per major calculation step. The functional design requirements for the DNBR module are documented and the values of the database (DB) constants were decided. This project also developed a Fortran module (BEST) of the RCOPS Fortran Simulator and a computer code, RCOPS-SDNBR, to independently calculate DNBR. A test was also conducted to verify the functional design and DB of the thermal-hydraulic model, which is necessary to calculate the DNBR on-line in RCOPS. The DNBR margin is expected to increase by 2%-3% once the CETOP-D code is used to calculate the RCOPS DNBR. It should be noted that the final DNBR margin improvement can only be determined in the future based on an overall uncertainty analysis of the RCOPS.

  14. Component-Based Development of Runtime Observers in the COMDES Framework

    DEFF Research Database (Denmark)

    Guan, Wei; Li, Gang; Angelov, Christo K.;

    2013-01-01

Formal verification methods, such as exhaustive model checking, are often infeasible because of high computational complexity. Runtime observers (monitors) provide an alternative, light-weight verification method, which offers a non-exhaustive but still feasible approach to monitor system behavior...... against formally specified properties. This paper presents a component-based design method for runtime observers in the context of the COMDES framework, a component-based framework for distributed embedded systems and its supporting tools. Therefore, runtime verification is facilitated by model......

  15. Synchronous Control of Reconfiguration in Fractal Component-based Systems -- a Case Study

    CERN Document Server

    Bouhadiba, Tayeb; Delaval, Gwenaël; Rutten, Éric

    2011-01-01

In the context of component-based embedded systems, the management of dynamic reconfiguration in adaptive systems is an increasingly important feature. The Fractal component-based framework, and its industrial instantiation MIND, provide support for control operations in the lifecycle of components. Nevertheless, the use of complex and integrated architectures makes these reconfiguration operations difficult for programmers to handle. To address this issue, we propose to use synchronous languages, which are a complete approach to the design of reactive systems based on behavior models in the form of transition systems. Furthermore, the design of closed-loop reactive managers of reconfigurations can benefit from formal tools like Discrete Controller Synthesis. In this paper we describe an approach to concretely integrate synchronous reconfiguration managers in Fractal component-based systems. We describe how to model the state space of the control problem, and how to specify the control obj...

  16. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool for addressing the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy
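
The simulated-annealing reconstruction step described above can be sketched in miniature. The following is a hedged illustration, not the dissertation's algorithm: a 1D binary pore/solid medium is annealed so that its two-point (pore-pore) correlation matches a target curve, with porosity held fixed by swap moves. All function names and parameter values are hypothetical.

```python
import math
import random

def two_point(img, maxlag):
    """Two-point probability S2(r): chance that two pixels a distance r
    apart (periodic boundary) are both pore (value 1)."""
    n = len(img)
    return [sum(img[i] * img[(i + r) % n] for i in range(n)) / n
            for r in range(1, maxlag + 1)]

def reconstruct(target_s2, n=100, porosity=0.3, steps=2000, t0=0.02, seed=1):
    """Anneal a random binary medium toward a target S2 curve.
    Swap moves preserve porosity exactly; worse moves are accepted with
    the Metropolis probability exp(-dE/T) under a linear cooling schedule."""
    rng = random.Random(seed)
    npore = int(n * porosity)
    img = [1] * npore + [0] * (n - npore)
    rng.shuffle(img)
    maxlag = len(target_s2)

    def energy(im):
        return sum((a - b) ** 2 for a, b in zip(two_point(im, maxlag), target_s2))

    e = energy(img)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-12   # linear cooling, kept > 0
        i, j = rng.randrange(n), rng.randrange(n)
        if img[i] == img[j]:
            continue                          # swap would change nothing
        img[i], img[j] = img[j], img[i]
        e_new = energy(img)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new                         # accept the move
        else:
            img[i], img[j] = img[j], img[i]   # revert the swap
    return img, e
```

A real reconstruction would be 3D, constrain several correlation functions simultaneously (as the dissertation does), and update S2 incrementally after each swap rather than recomputing it from scratch.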

  17. A Model of Service-oriented Software Component Based on Extended MVVM Pattern

    Institute of Scientific and Technical Information of China (English)

    李猛坤; 陈明

    2011-01-01

When processing and analyzing massive data on a SaaS system for the petroleum industry, the data analysis method must be chosen dynamically according to the geographic characteristics of different regions. A service-oriented software component model based on an extended MVVM pattern is designed. The model adds to the Web front end a software component platform that hosts SOC (Service-Oriented Computing) business-logic processing modules, and it allows users to dynamically add personalized business-logic modules so as to support the various data-processing requirements of the petroleum industry. With this model, users consuming cloud services through the Web can dynamically add personalized processing modules that conform to predefined interface standards to the SaaS system. Experiments show that the model effectively improves the extensibility of the SaaS system's Web front end and improves the quality of data analysis and processing.

  18. Development of 3D ferromagnetic model of tokamak core with strong toroidal asymmetry

    DEFF Research Database (Denmark)

    Markovič, Tomáš; Gryaznevich, Mikhail; Ďuran, Ivan;

    2015-01-01

    Fully 3D model of strongly asymmetric tokamak core, based on boundary integral method approach (i.e. characterization of ferromagnet by its surface) is presented. The model is benchmarked on measurements on tokamak GOLEM, as well as compared to 2D axisymmetric core equivalent for this tokamak...

  19. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

Full Text Available Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian Elimination methods (precisely the Original method and the new Meet in the Middle –MiM– algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented followed by evaluations of these models with modern multi-core systems’ operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for low numbers of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian Elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
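
As a toy illustration of the kind of analytical model the paper evaluates (not the authors' MiM model; all latency constants here are hypothetical), one can combine the classic ~2n³/3 flop count of Gaussian elimination with a per-step synchronization cost:

```python
import math

def ge_flops(n):
    # classic asymptotic operation count for Gaussian elimination
    return 2 * n ** 3 / 3

def parallel_time(n, p, t_flop=1e-9, t_sync=1e-6):
    """Toy model: flops divided evenly over p cores, plus one barrier per
    elimination step (n - 1 steps) whose cost grows with log2(p).
    t_flop and t_sync are hypothetical latencies, not measured values."""
    return ge_flops(n) * t_flop / p + (n - 1) * t_sync * math.log2(max(p, 2))

def speedup(n, p, **kw):
    return parallel_time(n, 1, **kw) / parallel_time(n, p, **kw)
```

Even this toy model reproduces the paper's qualitative finding: speedup grows with core count while efficiency collapses once synchronization latency dominates the ever-smaller per-core compute share, which is why the authors stress interconnect and memory latencies at high core counts.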

  20. An analytical model for the evolution of starless cores - I. The constant-mass case

    Science.gov (United States)

    Pattle, K.

    2016-07-01

    We propose an analytical model for the quasi-static evolution of starless cores confined by a constant external pressure, assuming that cores are isothermal and obey a spherically symmetric density distribution. We model core evolution for Plummer-like and Gaussian density distributions in the adiabatic and isothermal limits, assuming Larson-like dissipation of turbulence. We model the variation in the terms in the virial equation as a function of core characteristic radius, and determine whether cores are evolving towards virial equilibrium or gravitational collapse. We ignore accretion on to cores in the current study. We discuss the different behaviours predicted by the isothermal and adiabatic cases, and by our choice of index for the size-linewidth relation, and suggest a means of parametrizing the magnetic energy term in the virial equation. We model the evolution of the set of cores observed by Pattle et al. in the L1688 region of Ophiuchus in the `virial plane'. We find that not all virially bound and pressure-confined cores will evolve to become gravitationally bound, with many instead contracting to virial equilibrium with their surroundings, and find an absence of gravitationally dominated and virially unbound cores. We hypothesize a `starless core desert' in this quadrant of the virial plane, which may result from cores initially forming as pressure-confined objects. We conclude that a virially bound and pressure-confined core will not necessarily evolve to become gravitationally bound, and thus cannot be considered pre-stellar. A core can only be definitively considered pre-stellar (collapsing to form an individual stellar system) if it is gravitationally unstable.

  1. Little Earth Experiment: an instrument to model planetary cores

    CERN Document Server

    Aujogue, Kelig; Bates, Ian; Debray, François; Sreenivasan, Binod

    2016-01-01

    In this paper, we present a new experimental facility, Little Earth Experiment, designed to study the hydrodynamics of liquid planetary cores. The main novelty of this apparatus is that a transparent electrically conducting electrolyte is subject to extremely high magnetic fields (up to 10T) to produce electromagnetic effects comparable to those produced by moderate magnetic fields in planetary cores. This technique makes it possible to visualise for the first time the coupling between the principal forces in a convection-driven dynamo by means of Particle Image Velocimetry (PIV) in a geometry relevant to planets. We first present the technology that enables us to generate these forces and implement PIV in a high magnetic field environment. We then show that the magnetic field drastically changes the structure of convective plumes in a configuration relevant to the tangent cylinder region of the Earth's core.

  2. Little Earth Experiment: An instrument to model planetary cores

    Science.gov (United States)

    Aujogue, Kélig; Pothérat, Alban; Bates, Ian; Debray, François; Sreenivasan, Binod

    2016-08-01

    In this paper, we present a new experimental facility, Little Earth Experiment, designed to study the hydrodynamics of liquid planetary cores. The main novelty of this apparatus is that a transparent electrically conducting electrolyte is subject to extremely high magnetic fields (up to 10 T) to produce electromagnetic effects comparable to those produced by moderate magnetic fields in planetary cores. This technique makes it possible to visualise for the first time the coupling between the principal forces in a convection-driven dynamo by means of Particle Image Velocimetry (PIV) in a geometry relevant to planets. We first present the technology that enables us to generate these forces and implement PIV in a high magnetic field environment. We then show that the magnetic field drastically changes the structure of convective plumes in a configuration relevant to the tangent cylinder region of the Earth's core.

  3. Reliability Modeling of Components Based on Life Measured by the Times of Load Application

    Institute of Scientific and Technical Information of China (English)

    王正; 康锐; 谢里阳

    2009-01-01

A reliability modeling method for components is developed for the case where life is measured by the number of load applications. Models for the reliability and failure rate of components under random repeated loads are derived both without and with strength degeneration, and the relationships of reliability and failure rate to the number of load applications are discussed for the different cases. The results show that even when strength does not degenerate, both the reliability and the failure rate of components decrease as the number of load applications increases, and the failure rate curve shows part of the characteristics of the bathtub curve, namely the early failure period and the random failure period. When strength degenerates with the number of load applications, the reliability decreases more markedly as the number of load applications increases, and the failure rate first decreases and then increases, exhibiting the full shape of the bathtub curve.
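
The abstract's central quantity, the reliability after n applications of a random load, can be sketched numerically. This is a generic load-strength interference model with illustrative normal distributions and a linear degradation rule; it is not the paper's exact formulation, and all parameter values are hypothetical:

```python
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def reliability(n, mu_s=600.0, sd_s=40.0, mu_l=400.0, sd_l=50.0, k=0.0):
    """R(n): probability that a component survives n load applications.
    Initial strength S ~ N(mu_s, sd_s); loads are i.i.d. N(mu_l, sd_l).
    Strength before the j-th load is s - k*j (k = 0: no degradation).
    Midpoint-rule integration over the strength distribution."""
    lo, hi, m = mu_s - 6 * sd_s, mu_s + 6 * sd_s, 1200
    ds = (hi - lo) / m
    total = 0.0
    for i in range(m):
        s = lo + (i + 0.5) * ds
        surv = 1.0
        for j in range(1, n + 1):
            surv *= norm_cdf(s - k * j, mu_l, sd_l)   # j-th load stays below strength
        total += norm_pdf(s, mu_s, sd_s) * surv * ds
    return total

def failure_rate(n, **kw):
    """Discrete failure rate at the n-th load: conditional failure probability."""
    r_prev, r = reliability(n - 1, **kw), reliability(n, **kw)
    return (r_prev - r) / r_prev
```

With k = 0 the reliability still decreases with n, matching the abstract's first observation: each extra load is an extra chance for a random load to exceed a fixed but random strength. With k > 0 the decrease is steeper.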

  4. Eddy viscosity of core flow inferred from comparison between time evolutions of the length-of-day and a core surface flow model

    Science.gov (United States)

    Matsushima, M.

    2016-12-01

Diffusive processes at large scales in the Earth's core are dominated not by molecular diffusion but by eddy diffusion. To carry out numerical simulations of realistic geodynamo models, it is important to adopt appropriate parameters. However, the eddy viscous diffusion, or eddy viscosity, is a property not of the core fluid but of the core flow. Hence it is worthwhile to estimate the eddy viscosity from core flow models. In fact, fluid motion near the Earth's core surface provides useful information on core dynamics, features of the core-mantle boundary (CMB), and core-mantle coupling, for example. Such core fluid motion can be estimated from spatial and temporal distributions of the geomagnetic field. Most core surface flow models rely on the frozen-flux approximation (Roberts and Scott, 1965), in which magnetic diffusion is neglected. It should be noted, however, that there exists a viscous boundary layer at the CMB, where magnetic diffusion may play an important role in secular variations of the geomagnetic field. Therefore, a new approach to the estimation of core surface flow has been devised by Matsushima (2015). That is, magnetic diffusion is explicitly incorporated within the viscous boundary layer, while it is neglected below the boundary layer at the CMB, which is assumed to be a spherical surface. A core surface flow model between 1840 and 2015 has been derived from a geomagnetic field model, COV-OBS.x1 (Gillet et al., 2015). Temporal variations of core flows contain information on phenomena related to core-mantle coupling, such as the LOD (length-of-day) and spin-up/spin-down of core flows. In particular, core surface flows inside the viscous boundary layer at the CMB may reveal an interesting feature in relation to Earth's rotation. We have examined time series of the LOD and vorticity derived from the core surface flow model. We have found a possible correlation between the LOD and the axial component of global vorticity.

  5. A model for large-amplitude internal solitary waves with trapped cores

    Directory of Open Access Journals (Sweden)

    K. R. Helfrich

    2010-07-01

Full Text Available Large-amplitude internal solitary waves in continuously stratified systems can be found by solution of the Dubreil-Jacotin-Long (DJL) equation. For finite ambient density gradients at the surface (bottom) for waves of depression (elevation), these solutions may develop recirculating cores for wave speeds above a critical value. As typically modeled, these recirculating cores contain densities outside the ambient range, may be statically unstable, and thus are physically questionable. To address these issues the problem for trapped-core solitary waves is reformulated. A finite core of homogeneous density and velocity, but unknown shape, is assumed. The core density is arbitrary, but generally set equal to the ambient density on the streamline bounding the core. The flow outside the core satisfies the DJL equation. The flow in the core is given by a vorticity-streamfunction relation that may be arbitrarily specified. For simplicity, the simplest choice of a stagnant, zero vorticity core in the frame of the wave is assumed. A pressure matching condition is imposed along the core boundary. Simultaneous numerical solution of the DJL equation and the core condition gives the exterior flow and the core shape. Numerical solutions of time-dependent non-hydrostatic equations initiated with the new stagnant-core DJL solutions show that for the ambient stratification considered, the waves are stable up to a critical amplitude above which shear instability destroys the initial wave. Steadily propagating trapped-core waves formed by lock-release initial conditions also agree well with the theoretical wave properties despite the presence of a "leaky" core region that contains vorticity of opposite sign from the ambient flow.
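
For reference, the DJL equation mentioned above is commonly written for the isopycnal vertical displacement \(\eta(x,z)\) in a frame moving with the wave speed \(c\) (this is the standard form; the core-matching conditions are as described in the abstract):

```latex
\nabla^2 \eta + \frac{N^2\!\left(z - \eta\right)}{c^2}\,\eta = 0,
\qquad \eta = 0 \ \text{at the upper and lower boundaries},
```

where \(N(z)\) is the ambient buoyancy frequency evaluated at the upstream height \(z - \eta\) of the displaced streamline.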

  6. Evolution dynamics modeling and simulation of logistics enterprise's core competence based on service innovation

    Science.gov (United States)

    Yang, Bo; Tong, Yuting

    2017-04-01

With the rapid development of the economy, logistics enterprises in China face a huge challenge: they generally lack core competitiveness, and their awareness of service innovation is not strong. Scholars studying the core competitiveness of logistics enterprises have mainly taken a static perspective rather than exploring it from the perspective of dynamic evolution. The authors therefore analyze the influencing factors and the evolution process of the core competence of logistics enterprises, use system dynamics to study the causal structure of this evolution, and construct a system dynamics model of the evolution of logistics enterprises' core competence, which can be simulated in Vensim PLE. Analysis of the effectiveness and sensitivity of the simulation model indicates that it can be used to fit the evolution process of the core competence of logistics enterprises, reveal the process and mechanism of that evolution, and provide management strategies for improving core competence. The construction and operation of the computer simulation model offers an effective method for studying the evolution of logistics enterprises' core competence.
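The system-dynamics approach described in this record can be illustrated with a minimal stock-and-flow sketch. Everything below is hypothetical — the stock name, the inflow/outflow forms, and all coefficients are illustrative stand-ins, not the paper's Vensim model: a competence stock is raised by a service-innovation inflow with diminishing returns and eroded by a proportional decay outflow, integrated with Euler steps as Vensim does.

```python
def simulate_core_competence(steps=120, dt=0.25,
                             innovation=0.6, decay=0.08):
    """Euler integration of a toy stock-and-flow model.

    The stock 'competence' grows through a service-innovation inflow
    (with diminishing returns) and shrinks through proportional decay.
    All names and coefficients are illustrative assumptions, not taken
    from the paper's model.
    """
    competence = 1.0
    history = [competence]
    for _ in range(steps):
        # inflow saturates as competence grows (diminishing returns)
        inflow = innovation * competence / (1.0 + 0.1 * competence)
        # outflow: proportional erosion of the accumulated stock
        outflow = decay * competence
        competence += dt * (inflow - outflow)
        history.append(competence)
    return history
```

With these (assumed) parameters the stock rises monotonically toward the equilibrium where inflow balances outflow, which is the kind of S-shaped evolution trajectory a system-dynamics simulation of competence growth typically produces.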

  7. Failure Rate Model of Mechanical Components Based on Four Elements%基于四要素的机械零部件失效率计算模型

    Institute of Scientific and Technical Information of China (English)

    王正

    2011-01-01

The factors affecting the typical failure rate curve were analyzed, and four elements for calculating the failure rate of components were proposed: load, strength, strength degradation, and a life index (namely, the number of load applications or time). Taking the number of load applications and time, respectively, as the life index, failure rate models of components consisting of the four elements were developed, which embody the parameters of load, strength, strength degradation and life. The behaviors of the reliability and failure rate of components as functions of the life index were then studied. The results show that reliability decreases gradually as the life index increases, and that the failure rate curves of components exhibit some or all of the characteristics of the bathtub curve. For different combinations of strength distribution, load distribution and strength degradation rule, components have different reliabilities and the failure rate curves take different shapes. The models derived herein can calculate the failure rate of components given only the parameters of load, strength, strength degradation and the life index, without relying on product failure data, and can guide the reliability-based design of components more scientifically.
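The four-element idea can be sketched numerically as a load-strength interference calculation. The distributions and the linear degradation rule below are illustrative assumptions, not the paper's model: reliability after n load applications is the expectation, over the random initial strength, of the probability that every applied load stays below the (degrading) strength.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reliability(n, mu_s=500.0, sd_s=40.0, mu_l=300.0, sd_l=30.0,
                delta=0.05):
    """Reliability after n load applications for a load-strength
    interference model with deterministic linear strength degradation.

        R(n) = E_{s0}[ prod_{k=1..n} P(load_k < s0 - delta*k) ]

    Normal load/strength and the degradation rate delta are assumed
    values for illustration; evaluated by quadrature over the
    initial-strength distribution.
    """
    total, weight = 0.0, 0.0
    m = 81
    for i in range(m):                     # grid over +/- 4 sigma of s0
        z = -4.0 + 8.0 * i / (m - 1)
        s0 = mu_s + sd_s * z
        w = math.exp(-0.5 * z * z)         # unnormalized normal weight
        surv = 1.0
        for k in range(1, n + 1):
            surv *= phi((s0 - delta * k - mu_l) / sd_l)
        total += w * surv
        weight += w
    return total / weight

def failure_rate(n):
    """Discrete failure rate h(n) = [R(n-1) - R(n)] / R(n-1)."""
    r0, r1 = reliability(n - 1), reliability(n)
    return (r0 - r1) / r0
```

Because weak specimens are weeded out early while degradation dominates late, sweeping n through this model traces out the partial bathtub-curve behavior the abstract describes.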

  8. Issues in Component-Based Development: Towards Specification with ADLs

    Directory of Open Access Journals (Sweden)

    Rafael González

    2006-10-01

Full Text Available Software development has historically been beset by time and cost problems. This has motivated the search for flexible, trustworthy, time- and cost-efficient development. To achieve this, software reuse appears fundamental, and component-based development is the way towards reuse. This paper discusses the present state of component-based development and some of the issues critical to its success, such as the existence of adequate repositories, component integration within a software architecture, and adequate specification. The paper suggests ADLs (Architecture Description Languages) as a possible means for this specification.

  9. Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2013-07-01

    This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.

  10. Computer simulation of hard-core models for liquid crystals

    NARCIS (Netherlands)

    Frenkel, D.

    1987-01-01

    A review is presented of computer simulations of liquid crystal systems. It will be shown that the shape of hard-core particles is of crucial importance for the stability of the phases. Both static and dynamic properties of the systems are obtained by means of computer simulation.

  11. Multiscale model of global inner-core anisotropy induced by hcp-alloy plasticity

    Science.gov (United States)

    Cardin, P.; Deguen, R.; Lincot, A.; Merkel, S.

    2016-12-01

The Earth's solid inner core exhibits a global seismic anisotropy of several percent. It results from a coherent alignment of anisotropic Fe alloy crystals through the inner-core history that can be sampled by present-day seismic observations. By combining self-consistent polycrystal plasticity, inner-core formation models, Monte-Carlo search for elastic moduli, and simulations of seismic measurements, we introduce a multiscale model that can reproduce a global seismic anisotropy of several percent aligned with the Earth's rotation axis. Conditions for a successful model are a hexagonal close-packed structure for the inner-core Fe alloy, plastic deformation by pyramidal slip, and large-scale flow induced by a low-degree inner-core formation model. For global anisotropies ranging between 1 and 3%, the elastic anisotropy of the single crystal ranges from 5 to 20%, with larger velocities along the c axis.

  12. Study on the judgment model of dyeing and weaving corporation's core competence

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

On the basis of the characteristics of dyeing and weaving corporations, this paper puts forward the dimensions and index system for analyzing core competence. The paper divides core competence into three layers and presents a dimensional-hierarchical structure of core competence by combining the analysis dimensions with the competence layers. The model is used to evaluate, analyze and judge a dyeing and weaving corporation's competence.

  13. Organizational Models for Non-Core Processes Management: A Classification Framework

    Directory of Open Access Journals (Sweden)

    Alberto F. De Toni

    2012-12-01

The framework enables the identification and explanation of the main advantages and disadvantages of each strategy, and highlights how a company should coherently choose an organizational model on the basis of: (a) the specialization/complexity of the non-core processes, (b) the focus on core processes, (c) its inclination towards know-how outsourcing, and (d) the desired level of autonomy in the management of non-core processes.

  14. Component-based assistants for MEMS design tools

    Science.gov (United States)

    Hahn, Kai; Brueck, Rainer; Schneider, Christian; Schumer, Christian; Popp, Jens

    2001-04-01

With this paper a new approach for MEMS design tools is introduced. An analysis of the design tool market shows that most designers work with large and inflexible frameworks. Purchasing and maintaining these frameworks is expensive, and they give no optimal support for the MEMS design process. The concept of design assistants, realized with interacting software components, denotes a new generation of flexible, small, semi-autonomous software systems that are used to solve specific MEMS design tasks in close interaction with the designer. The degree of interaction depends on the complexity of the design task to be performed and on the possibility of formalizing the respective knowledge. In this context the Internet, as one of today's most important communication media, provides support for new tool concepts on the basis of the Java programming language. These technologies can be used to set up distributed and platform-independent applications, hence the idea of implementing design assistants in Java. According to the MEMS design model, new process sequences have to be defined anew for every specific design object. As a consequence, assistants have to be built dynamically depending on the requirements of the design process, which can be achieved with component-based software development. Componentware makes it possible to realize design assistants, in areas such as design rule checks, process consistency checks, technology definitions, and graphical editors, that may reside distributed over the Internet, communicating via Internet protocols. At the University of Siegen a directory for reusable MEMS components has been created, containing a process specification assistant and a layout verification assistant for lithography-based MEMS technologies.

  15. Nominal and Structural Subtyping in Component-Based Programming

    DEFF Research Database (Denmark)

    Ostermann, Klaus

    2007-01-01

    type. We analyze structural and different flavors of nominal subtyping from the perspective of component-based programming, where issues such as blame assignment and modular extensibility are important. Our analysis puts various existing subtyping mechanisms into a common frame of reference...

  16. Management of Globally Distributed Component-Based Software Development Projects

    NARCIS (Netherlands)

    J. Kotlarsky (Julia)

    2005-01-01

    textabstractGlobally Distributed Component-Based Development (GD CBD) is expected to become a promising area, as increasing numbers of companies are setting up software development in a globally distributed environment and at the same time are adopting CBD methodologies. Being an emerging area, the

  17. Component-Based Approach in Learning Management System Development

    Science.gov (United States)

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes component-based approach (CBA) for learning management system development. Learning object as components of e-learning courses and their metadata is considered. The architecture of learning management system based on CBA being developed in Riga Technical University, namely its architecture, elements and possibilities are…

  18. Research on the equivalence between digital core and rock physics models

    Science.gov (United States)

    Yin, Xingyao; Zheng, Ying; Zong, Zhaoyun

    2017-06-01

In this paper, we calculate the elastic moduli of 3D digital cores using the finite element method, systematically study the equivalence between the digital core model and various rock physics models, and carefully analyze the conditions of the equivalence relationships. The influences of the pore aspect ratio and consolidation coefficient on the equivalence relationships are also further refined. Theoretical analysis indicates that the finite element simulation based on the digital core is equivalent to the boundary theory and the Gassmann model. For pure sandstones, effective medium theory models (SCA and DEM) and the digital core models are equivalent when the pore aspect ratio is within a certain range, and dry frame models (the Nur and Pride models) and the digital core model are equivalent when the consolidation coefficient takes a specific value. According to the equivalence relationships, comparing the elastic modulus results of effective medium theory and digital rock physics is an effective approach for predicting the pore aspect ratio. Furthermore, the traditional digital core models with two components (pores and matrix) are extended to multiple minerals to more precisely characterize the features and mineral compositions of rocks in underground reservoirs. This paper studies the effects of shale content on the elastic moduli of shaly sandstones. When structural shale is present in the sandstone, the elastic moduli of the digital cores are in reasonable agreement with the DEM model. However, when dispersed shale is present in the sandstone, the Hill model cannot precisely describe the changes in the stiffness of the pore space. Digital rock physics describes rock features such as pore aspect ratio, consolidation coefficient and rock stiffness. Therefore, digital core technology can, to some extent, replace theoretical rock physics models because its results are more accurate than those of the theoretical models.
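The Gassmann model mentioned in this record has a simple closed form for the low-frequency saturated bulk modulus (the shear modulus is unchanged by the pore fluid). A minimal sketch, with assumed quartz-matrix/water-fluid moduli purely for illustration:

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from Gassmann's relation (low-frequency
    limit).  k_dry, k_min, k_fl are the dry-rock, mineral and fluid
    bulk moduli (GPa); phi is porosity (fraction)."""
    b = 1.0 - k_dry / k_min                    # Biot coefficient
    denom = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + b * b / denom

# Illustrative (assumed) values: quartz matrix, water-filled pores
k_sat = gassmann_ksat(k_dry=12.0, k_min=37.0, k_fl=2.25, phi=0.20)
```

Substituting a much more compressible fluid (e.g. gas) lowers `k_sat` back toward `k_dry`, which is the fluid-substitution behavior that a digital-core finite element simulation should reproduce in the equivalence test described above.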

  19. Towards Core Modelling Practices in Integrated Water Resource Management: An Interdisciplinary View of the Modelling Process

    Science.gov (United States)

    Jakeman, A. J.; Elsawah, S.; Pierce, S. A.; Ames, D. P.

    2016-12-01

    The National Socio-Environmental Synthesis Center (SESYNC) Core Modelling Practices Pursuit is developing resources to describe core practices for developing and using models to support integrated water resource management. These practices implement specific steps in the modelling process with an interdisciplinary perspective; however, the particular practice that is most appropriate depends on contextual aspects specific to the project. The first task of the pursuit is to identify the various steps for which implementation practices are to be described. This paper reports on those results. The paper draws on knowledge from the modelling process literature for environmental modelling (Jakeman et al., 2006), engaging stakeholders (Voinov and Bousquet, 2010) and general modelling (Banks, 1999), as well as the experience of the consortium members. We organise the steps around the four modelling phases. The planning phase identifies what is to be achieved, how and with what resources. The model is built and tested during the construction phase, and then used in the application phase. Finally, models that become part of the ongoing policy process require a maintenance phase. For each step, the paper focusses on what is to be considered or achieved, rather than how it is performed. This reflects the separation of the steps from the practices that implement them in different contexts. We support description of steps with a wide range of examples. Examples are designed to be generic and do not reflect any one project or context, but instead are drawn from common situations or from extremely different ones so as to highlight some of the issues that may arise at each step. References Banks, J. (1999). Introduction to simulation. In Proceedings of the 1999 Winter Simulation Conference. Jakeman, A. J., R. A. Letcher, and J. P. Norton (2006). Ten iterative steps in development and evaluation of environmental models. Environmental Modelling and Software 21, 602-614. Voinov, A

  20. Modeling QoS Parameters in Component-Based Systems

    Science.gov (United States)

    2004-08-01

deployed components, begins with the system developer, willing to build a system, by presenting a query to the system generator. The query describes... is built using the system generator. If some of the components are not found, then the system integrator can modify the system query by adding more

  1. Modeling Component-based Bragg gratings Application: tunable lasers

    Directory of Open Access Journals (Sweden)

    Hedara Rachida

    2011-09-01

Full Text Available The principal function of a Bragg grating is filtering, which can be used in optical-fiber-based components and in active or passive semiconductor-based components, as well as in telecommunication systems. Their ideal use is with fiber lasers, fiber amplifiers or laser diodes. In this work, we show the principal results obtained during the analysis of various types of Bragg gratings by the coupled-mode method. We then present the operation of tunable DBRs. The use of Bragg gratings in a laser provides single-mode, wavelength-agile sources. The use of a sampled grating increases the tuning range.

  2. Precessional states in a laboratory model of the Earth's core

    Science.gov (United States)

    Triana, S. A.; Zimmerman, D. S.; Lathrop, D. P.

    2012-04-01

    A water-filled three-meter diameter spherical shell, geometrically similar to the Earth's core, shows precessionally forced flows. The precessional torque is supplied by the daily rotation of the laboratory by the Earth. We identify the precessionally forced flow to be primarily the spin-over inertial mode, i.e., a uniform vorticity flow whose rotation axis is not aligned with the sphere's rotation axis. A systematic study of the spin-over mode is carried out, showing that the amplitude depends on the ratio of precession to rotation rates (the Poincaré number), in marginal qualitative agreement with Busse's (1968) laminar theory. We find its phase differs significantly though, likely due to topographic effects. At high rotation rates, free shear layers are observed. Comparison with previous computational studies and implications for the Earth's core are discussed.

  3. Stability of core-shell nanowires in selected model solutions

    Science.gov (United States)

    Kalska-Szostko, B.; Wykowska, U.; Basa, A.; Zambrzycka, E.

    2015-03-01

This paper presents studies of the stability of magnetic core-shell nanowires prepared by electrochemical deposition from an acidic solution, with iron in the core and a modified surface layer. The obtained nanowires were tested for durability in distilled water, 0.01 M citric acid, 0.9% NaCl, and commercial white wine (12% alcohol). The solutions were chosen to mimic food-related environments, in view of a possible application of nanowires as additives to, for example, packaging. After 1, 2 and 3 weeks of soaking in the solutions, the nanowires were tested by Infrared Spectroscopy, Atomic Absorption Spectroscopy, Transmission Electron Microscopy and X-ray diffraction methods.

  4. A model for the internal structure of molecular cloud cores

    CERN Document Server

    McLaughlin, D E; McLaughlin, Dean E; Pudritz, Ralph E

    1996-01-01

    We generalize the classic Bonnor-Ebert stability analysis of pressure-truncated, self-gravitating gas spheres, to include clouds with arbitrary equations of state. A virial-theorem analysis is also used to incorporate mean magnetic fields into such structures. The results are applied to giant molecular clouds (GMCs), and to individual dense cores, with an eye to accounting for recent observations of the internal velocity-dispersion profiles of the cores in particular. We argue that GMCs and massive cores are at or near their critical mass, and that in such a case the size-linewidth and mass-radius relations between them are only weakly dependent on their internal structures; any gas equation of state leads to essentially the same relations. We briefly consider the possibility that molecular clouds can be described by polytropic pressure-density relations (of either positive or negative index), but show that these are inconsistent with the apparent gravitational virial equilibrium, 2U + W = 0 of GMCs and of ma...
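The classic isothermal case that this analysis generalizes can be reproduced numerically. The sketch below (pure Python, RK4; step size and starting series are implementation choices) integrates the isothermal Lane-Emden equation and recovers the standard Bonnor-Ebert result that the critical centre-to-edge density contrast is about 14 at dimensionless radius ξ ≈ 6.45:

```python
import math

def isothermal_lane_emden(xi_max=10.0, h=1e-3):
    """Integrate the isothermal Lane-Emden equation
        (1/xi^2) d/dxi (xi^2 dpsi/dxi) = exp(-psi),
    psi(0) = psi'(0) = 0, with classic RK4.  Returns lists of xi, psi."""
    def deriv(xi, y):
        psi, dpsi = y
        return (dpsi, math.exp(-psi) - 2.0 * dpsi / xi)

    # series start psi ~ xi^2/6 avoids the coordinate singularity at 0
    xi = 1e-6
    y = (xi * xi / 6.0, xi / 3.0)
    xs, ps = [xi], [y[0]]
    while xi < xi_max:
        k1 = deriv(xi, y)
        k2 = deriv(xi + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = deriv(xi + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = deriv(xi + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        xi += h
        xs.append(xi)
        ps.append(y[0])
    return xs, ps

def density_contrast(xi_edge):
    """Centre-to-edge density contrast rho_c / rho(xi_edge) = exp(psi)."""
    xs, ps = isothermal_lane_emden(xi_max=xi_edge)
    return math.exp(ps[-1])
```

A pressure-truncated sphere whose edge lies beyond ξ ≈ 6.45 (contrast above ~14) is unstable to collapse; the record above replaces the isothermal equation of state in this construction with arbitrary ones.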

  5. Discussion about modeling the effects of neutron flux exposure for nuclear reactor core analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.

    1986-04-01

    Methods used to calculate the effects of exposure to a neutron flux are described. The modeling of the nuclear-reactor core history presents an analysis challenge. The nuclide chain equations must be solved, and some of the methods in use for this are described. Techniques for treating reactor-core histories are discussed and evaluated.

  7. Flow Dynamic Analysis of Core Shooting Process through Experiment and Multiphase Modeling

    Directory of Open Access Journals (Sweden)

    Changjiang Ni

    2016-01-01

Full Text Available The core shooting process is the most widely used technique to make sand cores, and it plays an important role in the quality of sand cores as well as in the manufacture of complicated castings in the metal casting industry. In this paper, the flow behavior of sand particles in the core box was investigated synchronously with a transparent core box, a high-speed camera, and a pressure measuring system. The flow pattern of sand particles in the shooting head of the core shooting machine was reproduced with variously colored core sand layers. Taking both kinetic and frictional stress into account, a kinetic-frictional constitutive correlation was established to describe the internal momentum transfer in the solid phase. Two-fluid model (TFM) simulations with a turbulence model were then performed, and good agreement was achieved between the experimental and simulation results on the flow behavior of sand particles in both the shooting head and the core box. Based on the experimental and simulation results, the flow behavior of sand particles in the core box, the formation of the "dead zone" in the shooting head, and the effect of drag force were analyzed in terms of sand volume fraction (αs), sand velocity (Vs), and pressure variation (P).

  8. Rapid core field variations during the satellite era: Investigations using stochastic process based field models

    DEFF Research Database (Denmark)

    Finlay, Chris; Olsen, Nils; Gillet, Nicolas

We present a new ensemble of time-dependent magnetic field models constructed from satellite and observatory data spanning 1997-2013 that are compatible with prior information concerning the temporal spectrum of core field variations. These models allow sharper field changes compared to traditional… We report spherical harmonic spectra, comparisons to observatory monthly means, and maps of the radial field at the core-mantle boundary, from the resulting ensemble of core field models. We find that inter-annual fluctuations in the external field (for example related to high solar-driven activity

  9. An experimental investigation into the trapping of model core pillars with reinforced fly ash composites

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, M.K. [National Inst. of Technology, Rourkela (India); Karanam, U.M. [Indian Inst. of Technology, Kharagpur (India)

    2008-06-15

    This paper presented details of a study which examined the use of fly ash composite materials for backfilling mine voids in room-and-pillar mining techniques. The study examined the load deformation characteristics of model core pillars confined by wire mesh reinforced fly ash composite materials. Anhydrous chemical-grade lime and gypsum were added in various quantities to class F fly ash samples. The model core pillars were 57 mm in diameter and 200 mm in length. The engineering properties of the model core pillars were then determined using unconfined compressive strength and Brazilian indirect tensile strength tests. The experimental investigations showed that the percentage increases in the strength of the trapped model core pillars varied with the different types of composite materials, and was also influenced by the length of the curing period and the ratio of the annular thickness of the fill area to the model core pillar radius. Results demonstrated that the addition of excess lime to fly ash composites was not beneficial. Maximum strength gains of 14 per cent were achieved with model cores of a cement-sand ratio of 1:2.5 for fly ash composites containing 15 per cent lime and 5 per cent gypsum. It was concluded that suitable fly ash composites reinforced with wire ropes can enhance the strength of the load bearing element and alter the post-peak characteristics of trapped cores.

  10. Creating Innovators through setting up organizational Vision, Mission, and Core Values : a Strategic Model in Higher Education

    OpenAIRE

    Aithal, Sreeramana

    2016-01-01

Vision, mission, objectives and core values play a major role in setting up sustainable organizations. Vision and mission statements describe the organization's goals, while core values and core principles represent the organization's culture. In this paper, we discuss a model of how a higher education institution can prosper and reach its goal of 'creating innovators' through its vision, mission, objectives and core values. A model for the core values required for a prospective ...

  11. [Construction of the addiction prevention core competency model for preventing addictive behavior in adolescents].

    Science.gov (United States)

    Park, Hyun Sook; Jung, Sun Young

    2013-12-01

This study was done to provide fundamental data for the development of competency reinforcement programs to prevent addictive behavior in adolescents, through the construction and examination of an addiction prevention core competency model. In this study, core competencies for preventing addictive behavior in adolescents were identified through competency modeling, the addiction prevention core competency model was developed, and it was validated methodologically. The competencies for preventing addictive behavior in adolescents, as defined by the model, are: positive self-worth, self-control skill, time management skill, reality perception skill, risk coping skill, and positive communication with parents and with peer or social groups. After construction, concurrent cross-validation showed that the model was appropriate. The results indicate that the addiction prevention core competency model can be used as a foundation for an integral, school-centered, cost-efficient approach that not only reduces addictive behavior in adolescents but also enhances their competencies and improves the quality of their resources.

  12. Core-shell particles as model compound for studying fouling

    DEFF Research Database (Denmark)

    Christensen, Morten Lykkegaard; Nielsen, Troels Bach; Andersen, Morten Boel Overgaard;

    2008-01-01

Synthetic colloidal particles with hard cores and soft, water-swollen shells were used to study cake formation during ultrafiltration. The total cake resistance was lowest for particles with thick shells, which indicates that interparticular forces between particles (steric hindrance and electrostatic repulsion) influenced cake formation. At low pressure the specific cake resistance could be predicted from the Kozeny-Carman equation. At higher pressures, the resistance increased due to cake compression. Both cake formation and compression were reversible. For particles with thick shells the permeate flux could be enhanced by lowering the pressure. Hence, the amount of water-swollen material influences both cake thickness and resistance.

  13. Improvement of Axial Reflector Cross Section Generation Model for PWR Core Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Cheon Bo; Lee, Kyung Hoon; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

This paper covers a study on the improvement of the axial reflector XS generation model. In the next section, the improved 1D core model is presented in detail. Reflector XS generated by the improved model is compared to that of the conventional model in the third section; nuclear design parameters generated by the two XS sets are also covered in that section. The significance of this study is discussed in the last section. The two-step procedure has been regarded as the most practical approach for reactor core design because it offers core design parameters quite rapidly within an acceptable range. Thus this approach is adopted for the SMART (System-integrated Modular Advanced ReacTor) core design in KAERI with the DeCART2D1.1/MASTER4.0 (hereafter noted as DeCART2D/MASTER) code system. Within the framework of the two-step-procedure-based SMART core design, various studies have been performed to improve core design reliability and efficiency. One of them is the improvement of reflector cross section (XS) generation models. While the conventional FA/reflector two-node model used in most core designs to generate reflector XS cannot consider the actual configuration of fuel rods that intersect at right angles with the axial reflectors, the revised model reflects the axial fuel configuration by introducing a radially simplified core model. The significance of the model revision is evaluated by observing the HGC generated by DeCART2D, the reflector XS, and the core design parameters generated by adopting the two models. It is verified that about 30 ppm of CBC error can be removed and the maximum Fq error decreases from about 6% to 2.5% by applying the revised model. Errors in the AO and axial power shapes are also reduced significantly. Therefore it can be concluded that the simplified 1D core model improves the accuracy of the axial reflector XS and enhances the reliability of the two-step procedure.
Since it is hard for core designs to be free from the two-step approach, it is necessary to find

  14. The use of CORE model by metacognitive skill approach in developing characters junior high school students

    Science.gov (United States)

    Fisher, Dahlia; Yaniawati, Poppy; Kusumah, Yaya Sukjaya

    2017-08-01

This study aims to analyze the character of students taught with the CORE learning model using a metacognitive approach. The method combines qualitative and quantitative research designs (Mixed Method Design) with a concurrent embedded strategy. The research was conducted on two groups: an experimental group consisting of students taught with the CORE model using a metacognitive approach, and a control group consisting of students taught by conventional learning. The subjects of this research were seventh-grade students in one of the public junior high schools in Bandung. Based on this research, the characters of the students in CORE model learning through the metacognitive approach are: honest, hard-working, curious, conscientious, creative and communicative. Overall it can be concluded that CORE model learning is good for developing the characters of junior high school students.

  15. Ab Initio Study of 40Ca with an Importance Truncated No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Roth, R; Navratil, P

    2007-05-22

We propose an importance truncation scheme for the no-core shell model, which enables converged calculations for nuclei well beyond the p-shell. It is based on an a priori measure for the importance of individual basis states constructed by means of many-body perturbation theory. Only the physically relevant states of the no-core model space are considered, which leads to a dramatic reduction of the basis dimension. We analyze the validity and efficiency of this truncation scheme using different realistic nucleon-nucleon interactions and compare to conventional no-core shell model calculations for ⁴He and ¹⁶O. Then, we present the first converged calculations for the ground state of ⁴⁰Ca within no-core model spaces including up to 16ℏΩ excitations using realistic low-momentum interactions. The scheme is universal and can be easily applied to other quantum many-body problems.

  16. Mathematical Model for Growth of Inclusion in Deoxidization on the Basis of Unreacted Core Model

    Institute of Scientific and Technical Information of China (English)

    WU Su-zhou; ZHANG Jiong-ming

    2008-01-01

    Controlling inclusion composition, from the point of view of thermodynamics, only explains the probability and limit of reaction. Kinetics, however, clarifies the nucleation and growth velocity of inclusions, and these kinetic factors are very important to the quality of slab. The basic kinetic theory of the unreacted core model was used to build the mathematical model for the growth of inclusions, and the associated software was developed in Visual Basic 6.0. The time for inclusions of different radii to reach saturation was calculated to determine the controlling step of the reaction between steel and inclusions. The growth time obtained from the model was in good agreement with the data measured by Okuyama G. of Japan, which indicates that the model is reasonable.

  17. An analytical model for the evolution of starless cores I: The constant-mass case

    CERN Document Server

    Pattle, Kate

    2016-01-01

    We propose an analytical model for the quasistatic evolution of starless cores confined by a constant external pressure, assuming that cores are isothermal and obey a spherically-symmetric density distribution. We model core evolution for Plummer-like and Gaussian density distributions in the adiabatic and isothermal limits, assuming Larson-like dissipation of turbulence. We model the variation in the terms in the virial equation as a function of core characteristic radius, and determine whether cores are evolving toward virial equilibrium or gravitational collapse. We ignore accretion onto cores in the current study. We discuss the different behaviours predicted by the isothermal and adiabatic cases, and by our choice of index for the size-linewidth relation, and suggest a means of parameterising the magnetic energy term in the virial equation. We model the evolution of the set of cores observed by Pattle et al. (2015) in the L1688 region of Ophiuchus in the 'virial plane'. We find that not all virially-boun...

  18. Model to Study Resin Impregnation Process of Premix Made of Friction Spun Core Yarn

    Institute of Scientific and Technical Information of China (English)

    丁辛; 吴学东

    2001-01-01

    A model was developed, based on Darcy's law, to investigate the impregnation behavior of thermoplastic resin into a filament bundle. Consolidation of unidirectional laminates was performed to evaluate the validity of the model. Friction spun core yarns with a polypropylene fiber sheath and a glass filament core were used in the experiments. The processing conditions, such as temperature and pressure, and the filament parameters were taken into consideration. Good agreement was found between theoretical predictions and experimental data.
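    For the simplest case of the Darcy-law picture above, one-dimensional flow into a porous bundle under constant applied pressure, the resin front depth has the closed form z(t) = sqrt(2 K ΔP t / (μ φ)). The permeability, pressure, viscosity and porosity values below are illustrative placeholders, not the paper's measured parameters.

```python
import math

def impregnation_depth(K, dP, mu, phi, t):
    """Resin front position z(t) for 1-D Darcy flow under constant
    pressure: phi * dz/dt = (K/mu) * dP / z  =>  z = sqrt(2*K*dP*t/(mu*phi))."""
    return math.sqrt(2.0 * K * dP * t / (mu * phi))

def fill_time(K, dP, mu, phi, z):
    """Time for the front to reach depth z (inverse of the above)."""
    return mu * phi * z ** 2 / (2.0 * K * dP)

# Illustrative (hypothetical) values: permeability 1e-13 m^2, 1 MPa
# applied pressure, a 100 Pa.s polypropylene melt, porosity 0.5,
# half-millimetre bundle depth.
t_fill = fill_time(K=1e-13, dP=1e6, mu=100.0, phi=0.5, z=0.5e-3)
```

The square-root-in-time front growth is the characteristic signature of pressure-driven Darcy impregnation; temperature enters through the melt viscosity μ.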

  19. A Global Model For Circumgalactic and Cluster-Core Precipitation

    CERN Document Server

    Voit, G M; Li, Y; O'Shea, B W; Bryan, G L; Donahue, M

    2016-01-01

    We provide an analytic framework for interpreting observations of multiphase circumgalactic gas that is heavily informed by recent numerical simulations of thermal instability and precipitation in cool-core galaxy clusters. We start by considering the local conditions required for the formation of multiphase gas via two different modes: (1) uplift of ambient gas by galactic outflows, and (2) condensation in a stratified stationary medium in which thermal balance is explicitly maintained. Analytic exploration of these two modes provides insights into the relationships between the local ratio of the cooling and freefall time scales (i.e., t_cool / t_ff), the large-scale gradient of specific entropy, and development of precipitation and multiphase media in circumgalactic gas. We then use these analytic findings to interpret recent simulations of circumgalactic gas in which global thermal balance is maintained. We show that long-lasting configurations of gas with 5 < t_cool / t_ff < 20 and radial entropy pr...

  20. Quasi-exactly solvable relativistic soft-core Coulomb models

    CERN Document Server

    Agboola, Davids

    2013-01-01

    By considering a unified treatment, we present quasi-exact polynomial solutions to both the Klein-Gordon and Dirac equations with the family of soft-core Coulomb potentials $V_q(r)=-Z/\\left(r^q+\\beta^q\\right)^{1/q}$, $Z>0$, $\\beta>0$, $q\\geq 1$. We consider the cases $q=1$ and $q=2$ and show that both are reducible to the same basic ordinary differential equation. A systematic and closed-form solution to the basic equation is obtained using the Bethe ansatz method. For each case, the expressions for the energies and the allowed parameters are obtained analytically, and the wavefunctions are derived in terms of the roots of a set of Bethe ansatz equations.
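    As a quick numerical illustration of the potential family (separate from the paper's Bethe-ansatz machinery), the sketch below evaluates V_q(r) and its limiting behaviour: the softening parameter β removes the Coulomb singularity, giving a finite value −Z/β at the origin while recovering the Coulombic −Z/r tail at large r.

```python
def soft_core_coulomb(r, Z=1.0, beta=1.0, q=1):
    """Soft-core Coulomb family V_q(r) = -Z / (r**q + beta**q)**(1/q),
    with Z > 0, beta > 0, q >= 1. At r = 0 the potential is the finite
    value -Z/beta; as r -> infinity it approaches the pure Coulomb -Z/r."""
    return -Z / (r ** q + beta ** q) ** (1.0 / q)

# The two cases treated in the abstract:
V1 = soft_core_coulomb(1.0, Z=1.0, beta=1.0, q=1)  # shifted Coulomb, q = 1
V2 = soft_core_coulomb(1.0, Z=1.0, beta=1.0, q=2)  # soft-core form, q = 2
```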

  1. Mathematical Model for Thermal Processes of Single-Core Power Cable

    Directory of Open Access Journals (Sweden)

    D. Zalizny

    2012-01-01

    Full Text Available The paper proposes a mathematical model for thermal processes that permits calculating the non-stationary thermal behaviour of the core insulation and surface of a single-core power cable in real-time mode. The model represents the cable as four thermally homogeneous bodies: core, basic insulation, protective sheath and internal environment. Thermal processes between the homogeneous bodies are described by a system of four differential equations. The paper proposes to solve this system of equations with the help of a thermal equivalent circuit and the Laplace transform. All design ratios for the thermal parameters, together with an algorithm for calculating the temperature of the core insulation and of the power cable surface, are given; these algorithms can be added to the software of microprocessor devices. The paper presents results of experimental investigations and reveals that the absolute error of the mathematical model does not exceed 3 °C.
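    The four-body chain described above can be sketched as a lumped RC thermal network integrated explicitly in time (the paper itself solves the coupled equations via a thermal equivalent circuit and the Laplace transform). All capacitances, resistances and the core loss below are made-up illustrative values, not the paper's identified parameters.

```python
# Minimal sketch of the four-body lumped thermal chain: core ->
# insulation -> sheath -> internal environment -> ambient. C are heat
# capacities [J/K], R thermal resistances [K/W], P the Joule loss [W].
def simulate(P, C, R, T_amb=20.0, dt=0.1, t_end=3600.0):
    n = len(C)
    T = [T_amb] * n                      # start in thermal equilibrium
    t = 0.0
    while t < t_end:
        # heat flow out of node i toward node i+1 (last node -> ambient)
        q = [(T[i] - (T[i + 1] if i + 1 < n else T_amb)) / R[i]
             for i in range(n)]
        T[0] += dt * (P - q[0]) / C[0]   # core: Joule losses enter here
        for i in range(1, n):
            T[i] += dt * (q[i - 1] - q[i]) / C[i]
        t += dt
    return T

# Hypothetical parameters; steady-state core temperature would be
# T_amb + P * sum(R) = 20 + 30 * 0.43 = 32.9 degC for these numbers.
temps = simulate(P=30.0, C=[800.0, 1500.0, 1200.0, 2500.0],
                 R=[0.05, 0.10, 0.08, 0.20])
```

With heat injected at the core and flowing outward, the transient temperatures stay ordered core > insulation > sheath > internal environment > ambient, which is the qualitative behaviour the real-time model tracks.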

  2. Multiscale model of global inner-core anisotropy induced by hcp-alloy plasticity

    CERN Document Server

    Lincot, A; Deguen, R; Merkel, Sébastien

    2016-01-01

    • Multiscale model of inner-core anisotropy produced by hcp alloy deformation • 5 to 20% single-crystal elastic anisotropy and plastic deformation by pyramidal slip • Low-degree inner-core formation model with faster crystallization at the equator. The Earth's solid inner-core exhibits a global seismic anisotropy of several percents. It results from a coherent alignment of anisotropic Fe-alloy crystals through the inner-core history that can be sampled by present-day seismic observations. By combining self-consistent polycrystal plasticity, inner-core formation models, Monte-Carlo search for elastic moduli, and simulations of seismic measurements, we introduce a multiscale model that can reproduce a global seismic anisotropy of several percents aligned with the Earth's rotation axis. Conditions for a successful model are an hexagonal-close-packed structure for the inner-core Fe-alloy, plastic deformation by pyramidal <c+a> slip, and large-scale flow induced by a low...

  3. SAPHIR: a physiome core model of body fluid homeostasis and blood pressure regulation.

    Science.gov (United States)

    Thomas, S Randall; Baconnier, Pierre; Fontecave, Julie; Françoise, Jean-Pierre; Guillaud, François; Hannaert, Patrick; Hernández, Alfredo; Le Rolle, Virginie; Mazière, Pierre; Tahi, Fariza; White, Ronald J

    2008-09-13

    We present the current state of the development of the SAPHIR project (a Systems Approach for PHysiological Integration of Renal, cardiac and respiratory function). The aim is to provide an open-source multi-resolution modelling environment that will permit, at a practical level, a plug-and-play construction of integrated systems models using lumped-parameter components at the organ/tissue level while also allowing focus on cellular- or molecular-level detailed sub-models embedded in the larger core model. Thus, an in silico exploration of gene-to-organ-to-organism scenarios will be possible, while keeping computation time manageable. As a first prototype implementation in this environment, we describe a core model of human physiology targeting the short- and long-term regulation of blood pressure, body fluids and homeostasis of the major solutes. In tandem with the development of the core models, the project involves database implementation and ontology development.

  4. IAEA CRP on HTGR Uncertainties in Modeling: Assessment of Phase I Lattice to Core Model Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Rouxelin, Pascal Nicolas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I 2c and the use of the cross section data in Exercise II 1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I 2a (fresh single-fuel block), Exercise I 2b (depleted single-fuel block), and Exercise I 2c (super cell), in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO VI. The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise

  5. Noise generated by model step lap core configurations of grain oriented electrical steel

    Energy Technology Data Exchange (ETDEWEB)

    Snell, David [Cogent Power Ltd., Development and Market Research, Orb Electrical Steels, Corporation Road, Newport, South Wales NP19 OXT (United Kingdom)], E-mail: Dave.snell@cogent-power.com

    2008-10-15

    Although it is important to reduce the power loss associated with transformer cores by use of electrical steel of the optimum grade, it is equally important to minimise the noise generated by the core. This paper discusses the effect of variations in the number of steps (3, 5, and 7) and the step overlap (2, 4, and 6 mm) on noise associated with model step lap cores of conventional, high permeability and ball unit domain refined high permeability grain oriented electrical steel. A-weighted sound pressure level noise measurements (LAeq) were made at various locations of the core over the frequency range 25-16,000 Hz. For all step lap cores investigated, the noise generated was dependent on the induction level, and on the number of steps and step overlap employed. The use of 3 step lap cores and step overlaps of 2 mm should be avoided, if low noise is to be achieved. There was very little difference between the noise emitted by the 5 and 7 step lap cores. Similar noise levels were noted for 27M0H material in the non-domain refined (NDR) and ball unit domain refined condition for a 5 step lap core with 6 mm step overlap.

  6. Experimental Study and Mathematical Modeling of Asphaltene Deposition Mechanism in Core Samples

    Directory of Open Access Journals (Sweden)

    Jafari Behbahani T.

    2015-11-01

    increased. The experimental results show that the amount of remaining asphaltene in carbonate core samples is higher than in sandstone core samples. Also, SEM (Scanning Electron Microscopy) micrographs of carbonate core samples showed the formation of large clusters of asphaltene, in comparison with sandstone core samples, during natural depletion. The modeling results show that the proposed model, based on the multilayer adsorption equilibrium mechanism and four material balance equations, is more accurate than the model based on the monolayer adsorption equilibrium mechanism and two material balance equations, and is in agreement with the experimental data of natural depletion reported in this work and in the literature.

  7. An investigation of ab initio shell-model interactions derived by no-core shell model

    Science.gov (United States)

    Wang, XiaoBao; Dong, GuoXiang; Li, QingFeng; Shen, CaiWan; Yu, ShaoYing

    2016-09-01

    The microscopic shell-model effective interactions are mainly based on many-body perturbation theory (MBPT), the first work on which can be traced to Brown and Kuo's attempt in 1966, derived from the Hamada-Johnston nucleon-nucleon potential. However, the convergence of the MBPT is still unclear. On the other hand, ab initio theories, such as Green's function Monte Carlo (GFMC), the no-core shell model (NCSM), and coupled-cluster theory with single and double excitations (CCSD), have made much progress in recent years. However, due to their increasing demands on computing resources, these ab initio applications are usually limited to nuclei with mass up to A = 16. Recently, the ab initio construction of valence-space effective interactions has been realized, obtained through a second renormalization or, more exactly, by projecting the full many-body Hamiltonian into core, one-body, and two-body cluster parts. In this paper, we present an investigation of such ab initio shell-model interactions, using recently derived sd-shell effective interactions based on the J-matrix Inverse Scattering Potential (JISP) and chiral effective-field theory (EFT) through the NCSM. In this work, we have seen the similarity between the ab initio shell-model interactions and the interactions obtained by MBPT or by empirical fitting. Without the inclusion of a three-body (3-bd) force, the ab initio shell-model interactions still share similar defects with the microscopic interactions from MBPT, i.e., the T = 1 channel is more attractive while the T = 0 channel is more repulsive than in empirical interactions. Progress in including more many-body correlations and the 3-bd force is still badly needed, to see whether such ab initio shell-model interactions can reach a precision similar to interactions fitted to experimental data.

  8. Spatial Resolution of Core Surface Flow Models Derived From Satellite Data

    Science.gov (United States)

    Eymin, C.; Hulot, G.

    Core surface flows are usually computed from observations of the internal magnetic field and its secular variation. With observatory-based secular variation models, the spatial resolution of core surface flows was mainly limited by the resolution of the secular variation model itself. This resolution dramatically improved with magnetic satellite data, and for the first time the main limitation on core surface flow computations comes from the hiding of the smallest length scales of the internal magnetic field by the crust. Indeed, the invisible small-scale magnetic field may interact with core flows to produce large-scale secular variation. This interaction cannot be taken into account during the flow computation process and may alter the computed flow models, even at large length scales. We investigate here the effects of the truncation of the internal magnetic field with known flow models, using two different and independent core surface flow computation methods. In particular, we try to estimate the amplitude of the error introduced by this truncation and the spatial resolution that can be obtained with the new satellite data for core surface flows.

  9. An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Grigore, Albeanu; Popenţiuvlǎdicescu, Florin

    2012-01-01

    Component-based software development is the current methodology facilitating agility in project management, software reuse in design and implementation, promoting quality and productivity, and increasing the reliability and performability. This paper illustrates the usage of intuitionistic fuzzy...... degree approach in modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed to be used for complex software systems reliability optimization under various constraints....

  10. Compact core model for Symmetric Double-Gate Junctionless Transistors

    Science.gov (United States)

    Cerdeira, A.; Ávila, F.; Íñiguez, B.; de Souza, M.; Pavanello, M. A.; Estrada, M.

    2014-04-01

    A new charge-based compact analytical model for Symmetric Double-Gate Junctionless Transistors is presented. The model is physically based and considers both the depletion and accumulation operating conditions, including series resistance effects. Most model parameters are related to physical magnitudes, and the extraction procedure for each of them is well established. The model provides an accurate, continuous description of the transistor behavior in all operating conditions. Important advantages with respect to previous models are the inclusion of the series resistance effect and the model's symmetry with respect to zero drain voltage. It is validated with simulations for doping concentrations of 5 × 1018 and 1 × 1019 cm-3, as well as for layer thicknesses of 10 and 15 nm, allowing normally-off operation.

  11. Growth of the inner core in the mean-field dynamo model

    CERN Document Server

    Reshetnyak, M Yu

    2016-01-01

    Application of Parker's dynamo model to the geodynamo with a growing inner core is considered. It is shown that a decrease of the inner core size, where intensive magnetic field generation takes place, leads to a multipolar magnetic field in the past. This effect reflects the shrinking of the region of effective magnetic field generation. The process is accompanied by an increase in the number of reversals and a decrease in the intensity of the geomagnetic field. Constraints on the mechanisms of convection in the liquid core are discussed.

  12. The General Equilibrium Model with Joint Ownership of the Corporation (Voting Stock and the Core),

    Science.gov (United States)

    general equilibrium system. The point specifically is that the Arrow-Debreu treatment of the joint ownership of industry, by introducing shares which can be traded, requires further specification. The need for further specification can be seen immediately when this model is examined not for the competitive equilibrium but for the core. It is well known that the competitive equilibrium is contained within the core. However, it will be shown that unless extra conditions are imposed on the control of stock, the resultant game may have no core whatsoever and hence no competitive

  13. State space modeling of reactor core in a pressurized water reactor

    Science.gov (United States)

    Ashaari, A.; Ahmad, T.; Shamsuddin, Mustaffa; M, Wan Munirah W.; Abdullah, M. Adib

    2014-07-01

    The power control system of a nuclear reactor is the key system that ensures safe operation of a nuclear power plant. However, the mathematical model of a nuclear power plant is a nonlinear, time-dependent process that is very hard to describe. One of the important components of a Pressurized Water Reactor is the reactor core. The aim of this study is to analyze the performance of the power produced by a reactor core using the temperature of the moderator as an input. A mathematical representation of the state space model of the reactor core control system is presented and analyzed in this paper. The data and parameters are taken from a real VVER-type Pressurized Water Reactor and are verified using Matlab and Simulink. Based on the simulation conducted, the results show that the temperature of the moderator plays an important role in determining the power of the reactor core.
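    A state-space simulation of the kind described, with the moderator temperature deviation as input and the core power deviation as output, can be sketched with a toy stable two-state system x' = Ax + Bu, y = Cx. The matrices here are illustrative round numbers, not the VVER parameters used in the paper.

```python
# Toy linear state-space surrogate: x' = A x + B u, y = C x, with the
# moderator temperature deviation u as input and the core power
# deviation y as output. A is stable (both eigenvalues negative).
A = [[-0.5, 0.2],
     [0.1, -0.3]]
B = [0.4, 0.0]
C = [1.0, 0.5]

def step_response(t_end=50.0, dt=0.01, u=1.0):
    """Forward-Euler integration of the unit-step response y(t_end)."""
    x = [0.0, 0.0]
    for _ in range(int(t_end / dt)):
        dx0 = A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u
        dx1 = A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u
        x = [x[0] + dt * dx0, x[1] + dt * dx1]
    return C[0] * x[0] + C[1] * x[1]

# For a stable A the step response settles to the DC gain -C A^{-1} B,
# which for these matrices is 0.14/0.13 (about 1.077).
y_final = step_response()
```

In Matlab/Simulink the same check is a `step` on the corresponding `ss(A, B, C, 0)` system; the point of the sketch is only the structure input (moderator temperature) → states → output (power).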

  14. State space modeling of reactor core in a pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ashaari, A.; Ahmad, T.; M, Wan Munirah W. [Department of Mathematical Science, Faculty of Science, Universiti Teknologi Malaysia, 81310 Skudai, Johor (Malaysia); Shamsuddin, Mustaffa [Institute of Ibnu Sina, Universiti Teknologi Malaysia, 81310 Skudai, Johor (Malaysia); Abdullah, M. Adib [Swinburne University of Technology, Faculty of Engineering, Computing and Science, Jalan Simpang Tiga, 93350 Kuching, Sarawak (Malaysia)

    2014-07-10

    The power control system of a nuclear reactor is the key system that ensures safe operation of a nuclear power plant. However, the mathematical model of a nuclear power plant is a nonlinear, time-dependent process that is very hard to describe. One of the important components of a Pressurized Water Reactor is the reactor core. The aim of this study is to analyze the performance of the power produced by a reactor core using the temperature of the moderator as an input. A mathematical representation of the state space model of the reactor core control system is presented and analyzed in this paper. The data and parameters are taken from a real VVER-type Pressurized Water Reactor and are verified using Matlab and Simulink. Based on the simulation conducted, the results show that the temperature of the moderator plays an important role in determining the power of the reactor core.

  15. Component-based software for high-performance scientific computing

    Science.gov (United States)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  16. Space cryogenics components based on the thermomechanical (TM) effect

    Science.gov (United States)

    Yuan, S. W. K.; Frederking, T. H. K.

    1988-01-01

    He II vapor-liquid phase separation (VLPS) is discussed, with emphasis on fluid-related transport phenomena. The VLPS system has been studied for both linear and nonlinear regimes, demonstrating that well-defined convection patterns exist in porous plug phase separators. In the second part, other components based on the thermomechanical effect are discussed in the limit of ideal conditions. Examples considered include the heat pipe transfer of zero net mass flow, liquid transfer pumps based on the fountain effect, mechanocaloric devices for cooling purposes, and He II vortex refrigerators.

  17. A simple reactivity feedback model accounting for radial core expansion effects in the liquid metal fast reactor

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Lee, Yong Bum; Chang, Won Pyo; Haha, Do Hee [KAERI, Taejon (Korea, Republic of)

    2002-10-01

    The radial core expansion due to the structure temperature rise is one of the major negative reactivity insertion mechanisms in a metallic fueled reactor. Thermal expansion is a result of both the laws of nature and the particular core design, and it causes negative reactivity feedback through the combination of increased captures in the core volume and increased leakage at the core surface. The simple radial core expansion reactivity feedback model developed for the SSC-K code was evaluated by code-to-code comparison analysis. From the comparison results, it can be stated that the radial core expansion reactivity feedback model implemented in the SSC-K code may be reasonably accurate for UTOP analysis.
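    The linear feedback idea can be written down in a few lines: the structure temperature rise expands the core radius, and reactivity responds negatively to the relative radius change. Both coefficients below are hypothetical round numbers for illustration, not SSC-K inputs.

```python
# Sketch of a linear radial-expansion reactivity feedback term. The
# expansion coefficient alpha_th [1/K] and the strain worth [pcm per
# unit radial strain] are illustrative placeholders.
def radial_expansion_reactivity(dT, alpha_th=1.8e-5, strain_worth=-1.0e5):
    """Reactivity in pcm for a structure temperature rise dT [K].

    dR/R = alpha_th * dT          (linear thermal expansion)
    rho  = strain_worth * (dR/R)  (negative worth of radial growth)
    """
    return strain_worth * alpha_th * dT

# A 100 K structure temperature rise gives dR/R = 1.8e-3 and, with the
# assumed worth, about -180 pcm of feedback reactivity.
rho = radial_expansion_reactivity(100.0)
```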

  18. Matérn's hard core models of types I and II with arbitrary compact grains

    DEFF Research Database (Denmark)

    Kiderlen, Markus; Hörig, Mario

    Matérn's classical hard core models can be interpreted as models obtained from a stationary marked Poisson process by dependent thinning. The marks are balls of fixed radius, and a point is retained when its associated ball does not hit any other balls (type I) or when its random birth time is st...... of this model with the process of intact grains of the dead leaves model and the Stienen model leads to analogous results for the latter....

  19. Towards an extensible core model for Digital Rights Management in VDM

    DEFF Research Database (Denmark)

    Lauritsen, Rasmus Winther; Lorenzen, Lasse

    2012-01-01

    In this article two views on DRM are presented and modelled in VDM. The contribution from this modelling process is two-fold. A set of properties that are of interest while designing DRM systems are presented. Then, the two models are compared to elaborate our understanding of DRM elements and th...... and their interplay. This work is an exploration towards realizing a core model and terminology upon which extended models can be built....

  20. Model driven product line engineering : core asset and process implications

    OpenAIRE

    Azanza Sesé, Maider

    2011-01-01

    Reuse is at the heart of major improvements in productivity and quality in Software Engineering. Both Model Driven Engineering (MDE) and Software Product Line Engineering (SPLE) are software development paradigms that promote reuse. Specifically, they promote systematic reuse and a departure from craftsmanship towards an industrialization of the software development process. MDE and SPLE have established their benefits separately. Their combination, here called Model Driven Product Line Engin...

  1. Thermal Analysis of the Driving Component Based on the Thermal Network Method in a Lunar Drilling System and Experimental Verification

    Directory of Open Access Journals (Sweden)

    Dewei Tang

    2017-03-01

    Full Text Available The main task of the third Chinese lunar exploration project is to obtain soil samples that are greater than two meters in length and to acquire bedding information from the surface of the moon. The driving component is the power output unit of the drilling system in the lander; it provides drilling power for core drilling tools. High temperatures can cause the sensors, permanent magnet, gears, and bearings to suffer irreversible damage. In this paper, a thermal analysis model for this driving component, based on the thermal network method (TNM), was established, and the model was solved using the quasi-Newton method. A vacuum test platform was built and an experimental verification method (EVM) was applied to measure the surface temperature of the driving component. Then, the TNM was optimized, based on the principle of heat distribution. Through comparative analyses, the reasonableness of the TNM is validated. Finally, the static temperature field of the driving component was predicted and the "safe working times" of every mode are given.

  2. Developing a theory of the strategic core of teams: a role composition model of team performance.

    Science.gov (United States)

    Humphrey, Stephen E; Morgeson, Frederick P; Mannor, Michael J

    2009-01-01

    Although numerous models of team performance have been articulated over the past 20 years, these models have primarily focused on the individual attribute approach to team composition. The authors utilized a role composition approach, which investigates how the characteristics of a set of role holders impact team effectiveness, to develop a theory of the strategic core of teams. Their theory suggests that certain team roles are most important for team performance and that the characteristics of the role holders in the "core" of the team are more important for overall team performance. This theory was tested in 778 teams drawn from 29 years of major league baseball (1974-2002). Results demonstrate that although high levels of experience and job-related skill are important predictors of team performance, the relationships between these constructs and team performance are significantly stronger when the characteristics are possessed by core role holders (as opposed to non-core role holders). Further, teams that invest more of their financial resources in these core roles are able to leverage such investments into significantly improved performance. These results have implications for team composition models, as they suggest a new method for considering individual contributions to a team's success that shifts the focus onto core roles. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  3. Variability modes in core flows inverted from geomagnetic field models

    CERN Document Server

    Pais, Maria A; Schaeffer, Nathanaël

    2014-01-01

    We use flows that we invert from two geomagnetic field models spanning centennial time periods (gufm1 and COV-OBS), and apply Principal Component Analysis and Singular Value Decomposition of coupled fields to extract the main modes characterizing their spatial and temporal variations. The quasi-geostrophic flows inverted from both geomagnetic field models show similar features. However, COV-OBS has a less energetic mean flow and larger time variability. The statistical significance of flow components is tested from analyses performed on subareas of the whole domain. Bootstrapping methods are also used to extract robust flow features required by both gufm1 and COV-OBS. Three main empirical circulation modes emerge, simultaneously constrained by both geomagnetic field models and expected to be robust against the particular a priori used to build them. Mode 1 exhibits three large robust vortices at medium/high latitudes, with opposite circulation under the Atlantic and the Pacific hemispheres. Mode 2 interesting...

  4. Shrinking core models applied to the sodium silicate production process

    Directory of Open Access Journals (Sweden)

    Stanković Mirjana S.

    2007-01-01

    Full Text Available The sodium silicate production process, with the molar ratio SiO2/Na2O = 2, for detergent zeolite 4A production, is based on dissolving quartz sand in NaOH aqueous solution of a specific molality. It is a complex process performed at high temperature and pressure. It is of vital importance to develop adequate mathematical models that are able to predict the dynamical response of the process parameters. A few kinetic models were developed within this study, adjusted and then compared to experimental results. It was assumed that the SiO2 particles are smooth spheres with uniform diameter, which decreases during dissolution. The influence of particle diameter, working temperature and hydroxide ion molality on the dissolution kinetics was investigated. It was concluded that the developed models are sufficiently correct, in the engineering sense, and can be used for the dynamical prediction of process parameters.
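    For the surface-reaction-controlled limit of a shrinking core model with a smooth sphere shrinking linearly in radius, the conversion-time relation has the classical closed form t/τ = 1 − (1 − X)^(1/3). The sketch below folds the rate constant and hydroxide-ion molality into a single complete-dissolution time τ; the numbers are illustrative, not fitted to the paper's data.

```python
# Shrinking-core kinetics in the surface-reaction-controlled regime:
# the particle radius shrinks linearly, r(t) = r0 * (1 - t/tau), and
# the dissolved fraction is X = 1 - (r/r0)**3.
def conversion_time(X, tau):
    """Time to reach dissolved fraction X (0 <= X <= 1):
    t = tau * (1 - (1 - X)**(1/3))."""
    return tau * (1.0 - (1.0 - X) ** (1.0 / 3.0))

def radius_at(t, r0, tau):
    """Particle radius at time t for linear shrinkage (clamped at 0)."""
    return r0 * max(0.0, 1.0 - t / tau)

# With a hypothetical complete-dissolution time tau = 100 min, half the
# radius is gone at t = 50 min, corresponding to X = 1 - 0.5**3 = 0.875.
t_half_radius = conversion_time(0.875, 100.0)
```

Temperature and hydroxide molality effects would enter through τ (smaller τ for hotter, more concentrated solutions), which is how the study's kinetic parameters could be mapped onto this form.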

  5. The No-Core Gamow Shell Model: Including the continuum in the NCSM

    CERN Document Server

    Barrett, B R; Michel, N; Płoszajczak, M

    2015-01-01

    We are witnessing an era of intense experimental efforts that will provide information about the properties of nuclei far from the line of stability, regarding resonant and scattering states as well as (weakly) bound states. This talk describes our formalism for including these necessary ingredients into the No-Core Shell Model by using the Gamow Shell Model approach. Applications of this new approach, known as the No-Core Gamow Shell Model, both to benchmark cases as well as to unstable nuclei will be given.

  6. Symplectic Symmetry and the Ab Initio No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Draayer, Jerry P.; Dytrych, Tomas; Sviratcheva, Kristina D.; Bahri, Chairul; /Louisiana State U.; Vary, James P.; /Iowa State U. /LLNL, Livermore /SLAC

    2007-03-14

    The symplectic symmetry of eigenstates for the 0{sub gs}{sup +} in {sup 16}O and the 0{sub gs}{sup +} and lowest 2{sup +} and 4{sup +} configurations of {sup 12}C that are well-converged within the framework of the no-core shell model with the JISP16 realistic interaction is examined. These states are found to project at the 85-90% level onto very few symplectic representations including the most deformed configuration, which confirms the importance of a symplectic no-core shell model and reaffirms the relevance of the Elliott SU(3) model upon which the symplectic scheme is built.

  7. Support for Programming Models in Network-on-Chip-based Many-core Systems

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth

This thesis addresses aspects of support for programming models in Network-on-Chip-based many-core architectures. The main focus is architectural support for a plethora of programming models in a single system. The thesis has three main parts. The first part considers parallelization and scalability in an image processing application, with the aim of providing insight into parallel programming issues. The second part proposes and presents the tile-based Clupea many-core architecture, which has the objective of providing configurable support for programming models to allow different programming...

  8. Mechanical behavior of a sandwich with corrugated GRP core: numerical modeling and experimental validation

    Directory of Open Access Journals (Sweden)

    D. Tumino

    2014-10-01

In this work the mechanical behaviour of a core-reinforced composite sandwich structure is studied. The sandwich employs a Glass Reinforced Polymer (GRP) orthotropic material for both the two external skins and the inner core web. In particular, the core is designed to cooperate with the GRP skins in membrane and flexural properties by adding a corrugated laminate into the foam core. An analytical model has been developed to replace a unit cell of this structure with an orthotropic equivalent thick plate that reproduces the in-plane and out-of-plane behaviour of the original geometry. Different validation procedures have been implemented to verify the quality of the proposed method. First, a comparison was performed between the analytical model and the original unit cell modelled with a finite element mesh: elementary loading conditions were reproduced and the results compared. Once the reliability of the analytical model was assessed, the homogenised model was implemented within the formulation of a shell finite element. The goal of this step is to simplify the FE analysis of complex structures made of corrugated-core sandwiches; in fact, by using the homogenised element, the global response of a real structure can be investigated with a discretization of its mid-surface only. The advantages are mainly time-to-solution savings and CAD modelling simplification. The last step is the comparison between this FE model and experiments made on sandwich beams and panels whose skins and corrugated cores are made of orthotropic cross-ply GRP laminates. Good agreement between experimental and numerical results confirms the validity of the proposed model.

  9. Physics input for modelling superfluid neutron stars with hyperon cores

    CERN Document Server

    Gusakov, M E; Kantor, E M

    2014-01-01

    Observations of massive ($M \\approx 2.0~M_\\odot$) neutron stars (NSs), PSRs J1614-2230 and J0348+0432, rule out most of the models of nucleon-hyperon matter employed in NS simulations. Here we construct three possible models of nucleon-hyperon matter consistent with the existence of $2~M_\\odot$ pulsars as well as with semi-empirical nuclear matter parameters at saturation, and semi-empirical hypernuclear data. Our aim is to calculate for these models all the parameters necessary for modelling dynamics of hyperon stars (such as equation of state, adiabatic indices, thermodynamic derivatives, relativistic entrainment matrix, etc.), making them available for a potential user. To this aim a general non-linear hadronic Lagrangian involving $\\sigma\\omega\\rho\\phi\\sigma^\\ast$ meson fields, as well as quartic terms in vector-meson fields, is considered. A universal scheme for calculation of the $\\ell=0,1$ Landau Fermi-liquid parameters and relativistic entrainment matrix is formulated in the mean-field approximation. ...

  10. Benchmark calculation of no-core Monte Carlo shell model in light nuclei

    CERN Document Server

    Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; 10.1063/1.3584062

    2011-01-01

The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

  11. Muscle spindles exhibit core lesions and extensive degeneration of intrafusal fibers in the Ryr1{sup I4895T/wt} mouse model of core myopathy

    Energy Technology Data Exchange (ETDEWEB)

    Zvaritch, Elena; MacLennan, David H., E-mail: david.maclennan@utoronto.ca

    2015-04-24

    Muscle spindles from the hind limb muscles of adult Ryr1{sup I4895T/wt} (IT/+) mice exhibit severe structural abnormalities. Up to 85% of the spindles are separated from skeletal muscle fascicles by a thick layer of connective tissue. Many intrafusal fibers exhibit degeneration, with Z-line streaming, compaction and collapse of myofibrillar bundles, mitochondrial clumping, nuclear shrinkage and pyknosis. The lesions resemble cores observed in the extrafusal myofibers of this animal model and of core myopathy patients. Spindle abnormalities precede those in extrafusal fibers, indicating that they are a primary pathological feature in this murine Ryr1-related core myopathy. Muscle spindle involvement, if confirmed for human core myopathy patients, would provide an explanation for an array of devastating clinical features characteristic of these diseases and provide novel insights into the pathology of RYR1-related myopathies. - Highlights: • Muscle spindles exhibit structural abnormalities in a mouse model of core myopathy. • Myofibrillar collapse and mitochondrial clumping is observed in intrafusal fibers. • Myofibrillar degeneration follows a pattern similar to core formation in extrafusal myofibers. • Muscle spindle abnormalities are a part of the pathological phenotype in the mouse model of core myopathy. • Direct involvement of muscle spindles in the pathology of human RYR1-related myopathies is proposed.

  12. Transient LOFA computations for a VHTR using one-twelfth core flow models

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Yu-Hsin, E-mail: touushin@gmail.com [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu, Taiwan (China); Ferng, Yuh-Ming, E-mail: ymferng@ess.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu, Taiwan (China); Johnson, Richard W., E-mail: rwjohnson@cableone.net [Idaho National Laboratory, Idaho Falls, ID (United States); Chieng, Ching-Chang, E-mail: ccchieng@cityu.edu.hk [Dept of Mechanical and Biomedical Engineering, City University of Hong Kong, Kowloon (Hong Kong)

    2016-05-15

Highlights: • Investigation of flow and heat transfer for a 1/12 VHTR core model using CFD. • High-performance computing using a sufficiently refined mesh of ∼531 M cells. • LOFA transient calculations employ both laminar and turbulence models to characterize natural convection. • Comparisons with smaller models suggest the need for a large flow model. - Abstract: A prismatic gas-cooled very high temperature reactor (VHTR) is being developed under the next generation nuclear program. One of the concerns for the reactor design is the effects of a loss of flow accident (LOFA), in which the coolant circulators are lost for some reason, causing a loss of forced coolant flow through the core. In previous studies, the natural circulation in the whole reactor vessel (RV) was obtained by segmentation strategies when the computational fluid dynamics (CFD) analysis was conducted with a sufficiently refined mesh, due to the limits of computer capability. The computational domains in those studies were segmented sections with small flow regions, such as 1/12 sectors, or a combination of a few 1/12 sectors (ranging from 2 to 15) using geometric symmetry, for a full dome region. The present paper investigates the flow and heat transfer for a much larger flow region, a 1/12 core model, using high-performance computing. The computational meshes for the 1/12 sector and the 1/12 reactor core comprise 7.8 M and ∼531 M cells, respectively. Over 85,000 and 35,000 iterations were required to achieve convergence for the steady and transient (100 s) calculations, respectively. Each iteration required ∼0.1 min of CPU time using 192 computer cores for the 1/12 sector model and ∼1.3 min using 768 cores in parallel for the 1/12 core model, on ALPS, Advanced Large-scale Parallel Superclusters. For the LOFA transient condition, this study employs both laminar flow and different turbulence models to characterize the phenomenon of natural convection. The

  13. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  14. Probability modeling of the number of positive cores in a prostate cancer biopsy session, with applications.

    Science.gov (United States)

    Serfling, Robert; Ogola, Gerald

    2016-02-10

    Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
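As a loose illustration of the kind of counting model this abstract describes (not the authors' actual formulation, which conditions on nodule volumes and, importantly, prostate volume), one can treat each core as an independent draw that hits tumor with some probability, giving a binomial count of positive cores; all numbers below are hypothetical.

```python
import math

# Loose illustrative sketch (NOT the paper's model): treat each of n biopsy cores
# as an independent draw that hits tumor with probability p, so the number of
# positive cores is Binomial(n, p).

def p_positive_cores(n, k, p):
    """P(exactly k of n cores are positive) under the naive binomial sketch."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def expected_positive(n, tumor_vol, prostate_vol):
    """Expected number of positive cores, crudely taking the hit probability to be
    the tumor-to-prostate volume fraction (a hypothetical simplification; the
    actual model accounts for core geometry as well)."""
    return n * (tumor_vol / prostate_vol)

# Hypothetical example: a 12-core session, 5 cc total tumor in a 50 cc prostate
print(expected_positive(12, 5.0, 50.0))
```

Even this crude sketch shows why prostate volume matters: the same tumor volume in a larger gland yields fewer expected positive cores, which is the interpretive trap the paper's model addresses rigorously.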

  15. Comparison of dynamical cores for NWP models: comparison of COSMO and Dune

    Science.gov (United States)

    Brdar, Slavko; Baldauf, Michael; Dedner, Andreas; Klöfkorn, Robert

    2013-06-01

We present a range of numerical tests comparing the dynamical cores of the operationally used numerical weather prediction (NWP) model COSMO and the university code Dune, focusing on their efficiency and accuracy in solving benchmark test cases for NWP. The dynamical core of COSMO is based on a finite difference method, whereas the Dune core is based on a Discontinuous Galerkin method. Both dynamical cores are briefly introduced, stating possible advantages and pitfalls of the different approaches. Their efficiency and effectiveness are investigated on the basis of three numerical test cases, which require solving the compressible viscous and non-viscous Euler equations. The test cases include the density current (Straka et al. in Int J Numer Methods Fluids 17:1-22, 1993), the inertia-gravity waves (Skamarock and Klemp in Mon Weather Rev 122:2623-2630, 1994), and the linear hydrostatic mountain waves (Bonaventura in J Comput Phys 158:186-213, 2000).

  16. Coherent Network Analysis of Gravitational Waves from Three-Dimensional Core-Collapse Supernova Models

    CERN Document Server

    Hayama, Kazuhiro; Kotake, Kei; Takiwaki, Tomoya

    2015-01-01

Using predictions from three-dimensional (3D) hydrodynamics simulations of core-collapse supernovae (CCSNe), we present a coherent network analysis for the detection, reconstruction, and source localization of gravitational-wave (GW) signals. By combining this with GW spectrogram analysis, we show that several important hydrodynamics features imprinted in the original waveforms persist in the waveforms of the reconstructed signals. The characteristic excess in the GW spectrograms originates not only from rotating core collapse and bounce and the subsequent ring-down of the proto-neutron star (PNS), as previously identified, but also from the formation of magnetohydrodynamics jets and non-axisymmetric instabilities in the vicinity of the PNS. Regarding the GW signals emitted near the rotating core bounce, the horizon distance, which we define by an SNR exceeding 8, extends up to $\sim$ 18 kpc for the most rapidly rotating 3D model among the employed waveform libraries. Following the rotating core bounce, the domi...

  17. Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Robb, Kevin R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Farmer, Mitchell [Argonne National Lab. (ANL), Argonne, IL (United States); Francis, Matthew W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-03-01

System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted that the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating its system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code and the MELTSPREAD and CORQUENCH 3.03 codes yield differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.

  18. Comparison between triangular and hexagonal modeling of a hexagonal-structured reactor core using box method

    Energy Technology Data Exchange (ETDEWEB)

    Malmir, Hessam, E-mail: malmir@energy.sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Street, Tehran (Iran, Islamic Republic of); Moghaddam, Nader Maleki [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology (Tehran Polytechnique), Hafez Street, Tehran (Iran, Islamic Republic of); Zahedinejad, Ehsan [Department of Energy Engineering, Sharif University of Technology, Azadi Street, Tehran (Iran, Islamic Republic of)

    2011-02-15

A hexagonal-structured reactor core (e.g. VVER-type) is mostly modeled with structured triangular and hexagonal mesh zones. Although both the triangular and hexagonal models give good approximations in the neutronic calculation of the core, there are some differences between them that need to be clarified. For this purpose, the neutronic calculations of a hexagonal-structured reactor core are performed using structured triangular and hexagonal meshes based on the box method of discretisation, and the results of the two models are then benchmarked in different cases. In this paper, the box method of discretisation is derived for triangular and hexagonal meshes. Then, two 2-D 2-group static simulators for triangular and hexagonal geometries (called TRIDIF-2 and HEXDIF-2, respectively) are developed using the box method. The results are benchmarked against the well-known CITATION computer code for a VVER-1000 reactor core. Furthermore, the relative powers calculated by TRIDIF-2 and HEXDIF-2, along with those obtained by the CITATION code, are compared with the verified results presented in the Final Safety Analysis Report (FSAR) of the aforementioned reactor. Different benchmark cases revealed the reliability of the box method in comparison with the CITATION code. Furthermore, it is shown that the triangular modeling of the core is more acceptable than the hexagonal one.

  19. Atomically informed nonlocal semi-discrete variational Peierls-Nabarro model for planar core dislocations.

    Science.gov (United States)

    Liu, Guisen; Cheng, Xi; Wang, Jian; Chen, Kaiguo; Shen, Yao

    2017-03-02

    Prediction of Peierls stress associated with dislocation glide is of fundamental concern in understanding and designing the plasticity and mechanical properties of crystalline materials. Here, we develop a nonlocal semi-discrete variational Peierls-Nabarro (SVPN) model by incorporating the nonlocal atomic interactions into the semi-discrete variational Peierls framework. The nonlocal kernel is simplified by limiting the nonlocal atomic interaction in the nearest neighbor region, and the nonlocal coefficient is directly computed from the dislocation core structure. Our model is capable of accurately predicting the displacement profile, and the Peierls stress, of planar-extended core dislocations in face-centered cubic structures. Our model could be extended to study more complicated planar-extended core dislocations, such as {111} dislocations in Al-based and Ti-based intermetallic compounds.

  20. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...

  1. Combining models of behaviour with operational data to provide enhanced condition monitoring of AGR cores

    Energy Technology Data Exchange (ETDEWEB)

    West, Graeme M., E-mail: graeme.west@strath.ac.uk; Wallace, Christopher J.; McArthur, Stephen D.J.

    2014-06-01

Highlights: • Combining laboratory model outputs with operational data. • Isolation of a single component from noisy data. • Better understanding of the health of graphite cores. • Extended plant operation through leveraging existing data sources. - Abstract: Installation of new monitoring equipment in Nuclear Power Plants (NPPs) is often difficult and expensive, and therefore maximizing the information that can be extracted from existing monitoring equipment is highly desirable. This paper describes the process of combining models derived from laboratory experimentation with current operational plant data to infer an underlying measure of health. A demonstration of this process is provided in which the fuel channel bore profile, a measure of core health, is inferred from data gathered during the refuelling process of an Advanced Gas-cooled Reactor (AGR) nuclear power plant core. Laboratory simulation was used to generate a model of an interaction between the fuel assembly and the core. This model is used to isolate a single frictional component from a noisy input signal and to use this friction component as a measure of health to assess the current condition of the graphite bricks that comprise the core. In addition, the model is used to generate an expected refuelling response (the noisy input signal) for a given set of channel bore diameter measurements, for either insertion of new fuel or removal of spent fuel, providing validation of the model. The benefit of this work is that it provides a greater understanding of the health of the graphite core, which is important for continued and extended operation of the AGR plants in the UK.

  2. Non-Equilibrium Chemistry of Dynamically Evolving Prestellar Cores: I. Basic Magnetic and Non-Magnetic Models and Parameter Studies

    CERN Document Server

    Tassis, Konstantinos; Yorke, Harold W; Turner, Neal

    2011-01-01

    We combine dynamical and non-equilibrium chemical modeling of evolving prestellar molecular cloud cores, and explore the evolution of molecular abundances in the contracting core. We model both magnetic cores, with varying degrees of initial magnetic support, and non-magnetic cores, with varying collapse delay times. We explore, through a parameter study, the competing effects of various model parameters in the evolving molecular abundances, including the elemental C/O ratio, the temperature, and the cosmic-ray ionization rate. We find that different models show their largest quantitative differences at the center of the core, whereas the outer layers, which evolve slower, have abundances which are severely degenerate among different dynamical models. There is a large range of possible abundance values for different models at a fixed evolutionary stage (central density), which demonstrates the large potential of chemical differentiation in prestellar cores. However, degeneracies among different models, compou...

  3. Core-scale solute transport model selection using Monte Carlo analysis

    Science.gov (United States)

    Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.

    2013-06-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (3H) and sodium-22 (22Na ), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.
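The residual-bias diagnostic used here for model selection can be sketched in a few lines. This is an illustration of the general idea only, not the paper's null-space Monte Carlo procedure, and the breakthrough values are invented:

```python
# Illustrative sketch (NOT the authors' workflow): residual bias as a
# model-selection diagnostic. A structurally adequate model scatters around the
# data (near-zero mean residual), while a deficient one shows systematic,
# growing bias, as reported for the single- and double-porosity fits.

def residual_bias(observed, predicted):
    """Mean residual (observed - predicted); near zero for an unbiased model."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    return sum(residuals) / len(residuals)

observed  = [1.00, 0.50, 0.25, 0.12, 0.06]   # hypothetical late-time breakthrough
unbiased  = [1.01, 0.49, 0.26, 0.11, 0.06]   # scatters around the data
deficient = [1.00, 0.40, 0.15, 0.05, 0.01]   # systematically under-predicts tails
print(residual_bias(observed, unbiased))
print(residual_bias(observed, deficient))
```

In the paper's analysis the same comparison is made across many calibrated parameter sets, so the bias measures structural error rather than a single unlucky fit.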

  4. Context sensitivity and ambiguity in component-based systems design

    Energy Technology Data Exchange (ETDEWEB)

    Bespalko, S.J.; Sindt, A.

    1997-10-01

Designers of component-based, real-time systems need to guarantee the correctness of software and its output. The complexity of a system, and thus its propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where processing is highly dependent on context; in these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states, which allow the factoring of complex components into a control module and a semantic processing module. Further, the need for methods that allow the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems which are highly context-sensitive or inherently ambiguous.

  5. VERA-CS Modeling and Simulation of PWR Main Steam Line Break Core Response to DNB

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K [ORNL; Sung, Yixing [Westinghouse Electric Company, Cranberry Township; Kucukboyaci, Vefa [Westinghouse Electric Company, Cranberry Township; Xu, Yiban [Westinghouse Electric Company, Cranberry Township; Cao, Liping [Westinghouse Electric Company, Cranberry Township

    2016-01-01

The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  6. A Semi-Analytic dynamical friction model that reproduces core stalling

    CERN Document Server

    Petts, James A; Read, Justin I

    2015-01-01

    We present a new semi-analytic model for dynamical friction based on Chandrasekhar's formalism. The key novelty is the introduction of physically motivated, radially varying, maximum and minimum impact parameters. With these, our model gives an excellent match to full N-body simulations for isotropic background density distributions, both cuspy and shallow, without any fine-tuning of the model parameters. In particular, we are able to reproduce the dramatic core-stalling effect that occurs in shallow/constant density cores, for the first time. This gives us new physical insight into the core-stalling phenomenon. We show that core stalling occurs in the limit in which the product of the Coulomb logarithm and the local fraction of stars with velocity lower than the infalling body tends to zero. For cuspy backgrounds, this occurs when the infalling mass approaches the enclosed background mass. For cored backgrounds, it occurs at larger distances from the centre, due to a combination of a rapidly increasing minim...
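The stalling criterion quoted in the abstract, friction vanishing as the product of the Coulomb logarithm and the local fraction of slow stars tends to zero, can be seen directly in the standard Chandrasekhar formula. The sketch below uses the textbook Maxwellian-background form with G = 1 units, not the authors' modified kernel with radially varying impact parameters:

```python
import math

def chandrasekhar_drag(M, rho, v, sigma, ln_lambda):
    """Magnitude of the Chandrasekhar dynamical-friction deceleration for a body
    of mass M moving at speed v through a Maxwellian background with density rho
    and velocity dispersion sigma (units with G = 1; standard textbook form)."""
    G = 1.0
    X = v / (math.sqrt(2.0) * sigma)
    # fraction of background stars moving slower than the body (Maxwellian)
    f_slow = math.erf(X) - 2.0 * X * math.exp(-X * X) / math.sqrt(math.pi)
    # drag is proportional to ln_lambda * f_slow: core stalling corresponds to
    # this product tending to zero
    return 4.0 * math.pi * G**2 * M * rho * ln_lambda * f_slow / v**2

# A vanishing Coulomb logarithm kills the drag entirely, mimicking stalling
print(chandrasekhar_drag(1.0, 1.0, 1.0, 1.0, 0.0))
```

In the paper's model the radially varying impact parameters make ln Λ itself shrink as the infalling body approaches the core, which is what produces stalling without any ad hoc switch-off.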

  7. Endophenotype Network Models: Common Core of Complex Diseases

    Science.gov (United States)

    Ghiassian, Susan Dina; Menche, Jörg; Chasman, Daniel I.; Giulianini, Franco; Wang, Ruisheng; Ricchiuto, Piero; Aikawa, Masanori; Iwata, Hiroshi; Müller, Christian; Zeller, Tania; Sharma, Amitabh; Wild, Philipp; Lackner, Karl; Singh, Sasha; Ridker, Paul M.; Blankenberg, Stefan; Barabási, Albert-László; Loscalzo, Joseph

    2016-06-01

    Historically, human diseases have been differentiated and categorized based on the organ system in which they primarily manifest. Recently, an alternative view is emerging that emphasizes that different diseases often have common underlying mechanisms and shared intermediate pathophenotypes, or endo(pheno)types. Within this framework, a specific disease’s expression is a consequence of the interplay between the relevant endophenotypes and their local, organ-based environment. Important examples of such endophenotypes are inflammation, fibrosis, and thrombosis and their essential roles in many developing diseases. In this study, we construct endophenotype network models and explore their relation to different diseases in general and to cardiovascular diseases in particular. We identify the local neighborhoods (module) within the interconnected map of molecular components, i.e., the subnetworks of the human interactome that represent the inflammasome, thrombosome, and fibrosome. We find that these neighborhoods are highly overlapping and significantly enriched with disease-associated genes. In particular they are also enriched with differentially expressed genes linked to cardiovascular disease (risk). Finally, using proteomic data, we explore how macrophage activation contributes to our understanding of inflammatory processes and responses. The results of our analysis show that inflammatory responses initiate from within the cross-talk of the three identified endophenotypic modules.

  8. CFD analysis of PWR core top and reactor vessel upper plenum internal subdomain models

    Energy Technology Data Exchange (ETDEWEB)

    Kao, Min-Tsung; Wu, Chung-Yun [National Tsing Hua University, Hsinchu 30043, Taiwan (China); Chieng, Ching-Chang, E-mail: cchieng@ess.nthu.edu.tw [National Tsing Hua University, Hsinchu 30043, Taiwan (China); Xu Yiban; Yuan Kun; Dzodzo, Milorad; Conner, Michael; Beltz, Steven; Ray, Sumit; Bissett, Teresa [Westinghouse Electric Company, Cranberry Township, PA 16066 (United States)

    2011-10-15

    Highlights: > The paper develops a CFD flow model for the upper portion of the AP1000 and determines how lateral flow in the top core and upper plenum compares to current reactors. > Mesh sensitivities and geometrical modification strategies give guidelines for reducing the size of the overall computational mesh. > Pressure drop measurement data act as a guideline for the mesh selection. > Lateral flows exit mainly through the upper and lower windows of the guide tubes (~81%), with 18% flowing through small side gaps. > The interactions between guide tubes and neighboring support columns, as well as the flow characteristics, are revealed. - Abstract: One aspect of the Westinghouse AP1000™ reactor design is the reduction in the number of major components and simplification in manufacturing. One design change relative to current Westinghouse reactors of similar size is that the AP1000 reactor vessel has two nozzles/hot legs instead of three. With regard to fuel performance, this design difference creates a different flow field in the reactor vessel upper plenum. The flow exiting from the core and entering the upper plenum must turn toward one of the two outlet nozzles and flow laterally around numerous control rod guide tubes and support columns. Also, below the upper plenum are the upper core plate and the top core region of the 157 fuel assemblies and 69 guide-tube assemblies. To determine how the lateral flow in the top of the core and upper plenum compares to that of current reactors, a CFD model of the flow in the upper portion of the AP1000 reactor vessel was created. Before detailed CFD simulations of the flow in the entire upper plenum and top core regions were performed, local simulations for smaller sections of the domain provided crucial and detailed physical aspects of the flow. These sub-domain models were used to perform mesh sensitivities and to assess which geometrical details could be eliminated from the larger model in order to reduce mesh size and computational requirements. In this paper

  9. Analysis of heterogeneous boron dilution transients during outages with APROS 3D nodal core model

    Energy Technology Data Exchange (ETDEWEB)

    Kuopanportti, Jaakko [Fortum Power and Heat Ltd, Nuclear Production, Fortum (Finland)

    2015-09-15

    A diluted water plug can form inside the primary coolant circuit if the coolant flow has stopped at least temporarily. The source of the clean water can be external, or the fresh water can build up internally during the boiling/condensing heat transfer mode, which can occur if the primary coolant inventory has decreased enough during an accident. If the flow restarts in the stagnant primary loop, the diluted water plug can enter the reactor core. During outages, after the fresh fuel has been loaded and the temperature of the coolant is low, the dilution potential is the highest because the critical boron concentration is at its maximum. This paper examines the behaviour of the core as clean or diluted water plugs of different sizes enter the core during outages. The analyses were performed with the APROS 3D nodal core model of Loviisa VVER-440, which contains its own flow channel and 10 axial nodes for each fuel assembly. The wide-range cross-section data were calculated with CASMO-4E. According to the results, the core can withstand even large pure-water plugs without fuel failures on natural circulation. The analyses emphasize the importance of simulating the backflows inside the core when the reactor is on natural circulation.

  10. Contribution to modeling of the reflooding of a severely damaged reactor core using PRELUDE experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Bachrata, A.; Fichot, F.; Repetto, G. [Institut de Radioprotection et de Surete Nucleaire IRSN, Cadarache (France); Quintard, M. [Universite de Toulouse, INPT, UPS, IMFT Institut de Mecanique des Fluides de Toulouse, Allee Camille Soula, F-31400 Toulouse (France); CNRS, IMFT, F-31400 Toulouse (France); Fleurot, J. [Institut de Radioprotection et de Surete Nucleaire IRSN, Cadarache (France)

    2012-07-01

    In case of an accident at a nuclear power plant, water sources may not be available for a long period of time and the core heats up due to the residual power. Reflooding (injection of water into the core) may be applied if the availability of safety injection is recovered during the accident. If the injection becomes available only in the late phase of the accident, water will enter a core configuration that differs significantly from the original rod-bundle geometry. Any attempt to inject water after significant core degradation can lead to further fragmentation of core material. The fragmentation of fuel rods may result in the formation of a 'debris bed'. The typical particle size in a debris bed might reach a few millimeters (characteristic length-scale: 1 to 5 mm), i.e., a high-permeability porous medium. The French 'Institut de Radioprotection et de Surete Nucleaire' is developing experimental programs (PEARL and PRELUDE) and simulation tools (ICARE-CATHARE and ASTEC) to study and optimize the severe accident management strategy and to assess the probability of stopping the progress of in-vessel core degradation. It is shown that the quench front exhibits either a 1D behaviour or a 2D one, depending on the injection rate and bed characteristics. The PRELUDE experiment covers a rather large range of variation of parameters, for which the developed model appears to be quite predictive. (authors)

  11. Assessment of CANDU reactor physics effects using a simplified whole-core MCNP model

    Energy Technology Data Exchange (ETDEWEB)

    Kozier, K.S

    2002-07-01

    A whole-core Monte Carlo n-particle (MCNP) model of a simplified CANDU reactor was developed and used to study core configurations and reactor physics phenomena of interest in CANDU safety analysis. The resulting reactivity data were compared with values derived from corresponding WIMS-AECL/RFSP, two-neutron-energy-group diffusion theory core simulations, thereby extending the range of CANDU-related code-to-code benchmark comparisons to include whole-core representations. These comparisons show a systematic discrepancy of about 6 mk between the respective absolute k{sub eff} values, but very good agreement, to within about -0.15 ± 0.06 mk, for the reactivity perturbation induced by G-core checkerboard coolant voiding. These findings are generally consistent with the results of much simpler uniform-lattice comparisons involving only WIMS-AECL and MCNP. In addition, MCNP fission-energy tallies were used to evaluate other core-wide properties, such as fuel bundle and total-channel power distributions, as well as intra-bundle details, such as outer-fuel-ring relative power densities and outer-ring fuel element azimuthal power variations, which cannot be determined directly from WIMS-AECL/RFSP core calculations. The average MCNP values for the ratio of outer fuel element to average fuel element power density agreed well with corresponding values derived from WIMS-AECL lattice-cell cases, showing a small systematic discrepancy of about 0.5%, independent of fuel burn-up. For fuel bundles containing the highest-power fuel elements, the maximum peak-to-average outer-element azimuthal power variation was about 2.5% for cases where a statistically significant trend was observed, while much larger peak-to-average outer-element azimuthal power variations of up to around 42% were observed in low-power fuel bundles at the core/radial-neutron-reflector interface. (author)

  12. Porphyrin-Cored Polymer Nanoparticles: Macromolecular Models for Heme Iron Coordination.

    Science.gov (United States)

    Rodriguez, Kyle J; Hanlon, Ashley M; Lyon, Christopher K; Cole, Justin P; Tuten, Bryan T; Tooley, Christian A; Berda, Erik B; Pazicni, Samuel

    2016-10-03

    Porphyrin-cored polymer nanoparticles (PCPNs) were synthesized and characterized to investigate their utility as heme protein models. Created using collapsible heme-centered star polymers containing photodimerizable anthracene units, these systems afford model heme cofactors buried within hydrophobic, macromolecular environments. Spectroscopic interrogations demonstrate that PCPNs display redox and ligand-binding reactivity similar to that of native systems and thus are potential candidates for modeling biological heme iron coordination.

  13. Solid-liquid phase equilibria of the Gaussian core model fluid.

    Science.gov (United States)

    Mausbach, Peter; Ahmed, Alauddin; Sadus, Richard J

    2009-11-14

    The solid-liquid phase equilibria of the Gaussian core model are determined using the GWTS [J. Ge, G.-W. Wu, B. D. Todd, and R. J. Sadus, J. Chem. Phys. 119, 11017 (2003)] algorithm, which combines equilibrium and nonequilibrium molecular dynamics simulations. This is the first reported use of the GWTS algorithm for a fluid system displaying a reentrant melting scenario. Using the GWTS algorithm, the phase envelope of the Gaussian core model can be calculated more precisely than previously possible. The results for the low-density and the high-density (reentrant melting) sides of the solid state are in good agreement with those obtained by Monte Carlo simulations in conjunction with calculations of the solid free energies. The common point on the Gaussian core envelope, where equal-density solid and liquid phases are in coexistence, could be determined with high precision.
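For reference, the Gaussian core model is defined by the bounded pair potential (standard definition; ε and σ set the energy and length scales):

```latex
% Gaussian core model pair potential:
\phi(r) = \varepsilon \,\exp\!\left[-\left(r/\sigma\right)^{2}\right]
```

Because φ(r) remains finite at r = 0, particles can overlap at high density, which is the physical origin of the reentrant melting behaviour discussed in the abstract.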

  14. Introducing FACETS, the Framework Application for Core-Edge Transport Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Cary, John R. [Tech-X Corporation; Candy, Jeff [General Atomics; Cohen, Ronald H. [Lawrence Livermore National Laboratory (LLNL); Krasheninnikov, Sergei I [ORNL; McCune, Douglas C [ORNL; Estep, Donald J [Colorado State University, Fort Collins; Larson, Jay W [ORNL; Malony, Allen [University of Oregon; Worley, Patrick H [ORNL; Carlsson, Johann Anders [ORNL; Hakim, A H [Tech-X Corporation; Hamill, P [Tech-X Corporation; Kruger, Scott E [ORNL; Muzsala, S [Tech-X Corporation; Pletzer, Alexander [ORNL; Shasharina, Svetlana [Tech-X Corporation; Wade-Stein, D [Tech-X Corporation; Wang, N [Tech-X Corporation; McInnes, Lois C [ORNL; Wildey, T [Tech-X Corporation; Casper, T. A. [Lawrence Livermore National Laboratory (LLNL); Diachin, Lori A [ORNL; Epperly, Thomas [Lawrence Livermore National Laboratory (LLNL); Rognlien, T. D. [Lawrence Livermore National Laboratory (LLNL); Fahey, Mark R [ORNL; Kuehn, Jeffery A [ORNL; Morris, A [University of Oregon; Shende, Sameer [University of Oregon; Feibush, E [Tech-X Corporation; Hammett, Gregory W [ORNL; Indireshkumar, K [Tech-X Corporation; Ludescher, C [Tech-X Corporation; Randerson, L [Tech-X Corporation; Stotler, D. [Princeton Plasma Physics Laboratory (PPPL); Pigarov, A [University of California, San Diego; Bonoli, P. [Massachusetts Institute of Technology (MIT); Chang, C S [New York University; D' Ippolito, D. A. [Lodestar Research Corporation; Colella, Philip [Lawrence Berkeley National Laboratory (LBNL); Keyes, David E [Columbia University; Bramley, R [Indiana University; Myra, J. R. [Lodestar Research Corporation

    2007-06-01

    The FACETS (Framework Application for Core-Edge Transport Simulations) project began in January 2007 with the goal of providing core to wall transport modeling of a tokamak fusion reactor. This involves coupling previously separate computations for the core, edge, and wall regions. Such a coupling is primarily through connection regions of lower dimensionality. The project has started developing a component-based coupling framework to bring together models for each of these regions. In the first year, the core model will be a 1½-dimensional model (1D transport across flux surfaces coupled to a 2D equilibrium) with fixed equilibrium. The initial edge model will be the fluid model, UEDGE, but inclusion of kinetic models is planned for the out years. The project also has an embedded Scientific Application Partnership that is examining embedding a full-scale turbulence model for obtaining the cross-surface fluxes into a core transport code.

  15. Introducing FACETS, the Framework Application for Core-Edge Transport Simulations

    Science.gov (United States)

    Cary, J. R.; Candy, J.; Cohen, R. H.; Krasheninnikov, S.; McCune, D. C.; Estep, D. J.; Larson, J.; Malony, A. D.; Worley, P. H.; Carlsson, J. A.; Hakim, A. H.; Hamill, P.; Kruger, S.; Muzsala, S.; Pletzer, A.; Shasharina, S.; Wade-Stein, D.; Wang, N.; McInnes, L.; Wildey, T.; Casper, T.; Diachin, L.; Epperly, T.; Rognlien, T. D.; Fahey, M. R.; Kuehn, J. A.; Morris, A.; Shende, S.; Feibush, E.; Hammett, G. W.; Indireshkumar, K.; Ludescher, C.; Randerson, L.; Stotler, D.; Pigarov, A. Yu; Bonoli, P.; Chang, C. S.; D'Ippolito, D. A.; Colella, P.; Keyes, D. E.; Bramley, R.; Myra, J. R.

    2007-07-01

    The FACETS (Framework Application for Core-Edge Transport Simulations) project began in January 2007 with the goal of providing core to wall transport modeling of a tokamak fusion reactor. This involves coupling previously separate computations for the core, edge, and wall regions. Such a coupling is primarily through connection regions of lower dimensionality. The project has started developing a component-based coupling framework to bring together models for each of these regions. In the first year, the core model will be a 1½-dimensional model (1D transport across flux surfaces coupled to a 2D equilibrium) with fixed equilibrium. The initial edge model will be the fluid model, UEDGE, but inclusion of kinetic models is planned for the out years. The project also has an embedded Scientific Application Partnership that is examining embedding a full-scale turbulence model for obtaining the cross-surface fluxes into a core transport code.

  16. A review of MAAP4 code structure and core T/H model

    Energy Technology Data Exchange (ETDEWEB)

    Song, Yong Mann; Park, Soo Yong

    1998-03-01

    The modular accident analysis program (MAAP) version 4 is a computer code that can simulate the response of LWR plants during severe accident sequences, and it includes models for all of the important phenomena which might occur during such sequences. In this report, the MAAP4 code structure and the core thermal-hydraulic (T/H) model, which models the T/H behavior of the reactor core and the response of core components during all accident phases involving degraded cores, are reviewed and then reorganized. This reorganization is performed by gathering the related models together under each topic, whose contents and order are the same as in two companion reports on MELCOR and SCDAP/RELAP5 to be published simultaneously. The major purpose of the report is to provide information about the characteristics of the MAAP4 core T/H models for an integrated severe accident computer code development being performed under one of the on-going mid/long-term nuclear development projects. The basic characteristics of the new integrated severe accident code include: 1) flexible simulation capability of the primary side, secondary side, and the containment under severe accident conditions, 2) detailed plant simulation, 3) convenient user interfaces, 4) high modularization for easy maintenance/improvement, and 5) state-of-the-art model selection. In conclusion, the MAAP4 code appears to be superior on items 3) and 4) but somewhat inferior on items 1) and 2). For item 5), more effort should be made in the future to compare the separate models in detail not only with other codes but also with recent worldwide work. (author). 17 refs., 1 tab., 12 figs.

  17. Testing of a measurement model for baccalaureate nursing students' self-evaluation of core competencies.

    Science.gov (United States)

    Hsu, Li-Ling; Hsieh, Suh-Ing

    2009-11-01

    Testing of a measurement model for baccalaureate nursing students' self-evaluation of core competencies. This paper is a report of a study to test the psychometric properties of the Self-Evaluated Core Competencies Scale for baccalaureate nursing students. Baccalaureate nursing students receive basic nursing education and continue to build competency in practice settings after graduation. Nursing students today face great challenges. Society demands analytic, critical, reflective and transformative attitudes from graduates. It also demands that institutions of higher education take the responsibility to encourage students, through academic work, to acquire knowledge and skills that meet the needs of the modern workplace, which favours highly skilled and qualified workers. A survey of 802 senior nursing students in their last semester at college or university was conducted in Taiwan in 2007 using the Self-Evaluated Core Competencies Scale. Half of the participants were randomly assigned either to principal components analysis with varimax rotation or confirmatory factor analysis. Principal components analysis revealed two components of core competencies that were named as humanity/responsibility and cognitive/performance. The initial model of confirmatory factor analysis was then converged to an acceptable solution but did not show a good fit; however, the final model of confirmatory factor analysis was converged to an acceptable solution with acceptable fit. The final model has two components, namely humanity/responsibility and cognitive/performance. Both components have four indicators. In addition, six indicators have their correlated measurement errors. Self-Evaluated Core Competencies Scale could be used to assess the core competencies of undergraduate nursing students. In addition, it should be used as a teaching guide to increase students' competencies to ensure quality patient care in hospitals.

  18. Noise variation by compressive stress on the model core of power transformers

    Energy Technology Data Exchange (ETDEWEB)

    Mizokami, Masato, E-mail: mizokami.g76.masato@jp.nssmc.com; Kurosaki, Yousuke

    2015-05-01

    The reduction of audible noise generated by power transformer cores has been required due to environmental concerns. It is known that compressive stress in the rolling direction of electrical steel affects magnetostriction, which can result in an increase in noise level. In this research, the effect of compressive stress on noise was investigated on a 3-phase 3-limb model core. Compressive stress was applied in the rolling direction of the limbs from the outside of the core. It increased the sound pressure levels, and the slope of the rise was about 2 dBA/MPa. Magnetostriction of single-sheet samples was also measured under compressive stress, and the harmonic components of the magnetostriction were compared with those of the noise. The comparison revealed that the variation in magnetostriction with compressive stress did not entirely correspond to that in noise. In one of the experiments, localized bending occurred on one limb while the core was being compressed. Although this deformation of the core was not intended, the noise was measured. The deformation increased the noise by more than 10 dBA, and the increase occurred in most of the harmonic components. - Highlights: • Audible noise was measured on a model core to which compressive stress was applied. • Stress in the rolling direction of the steel causes a rise in noise level. • The slope of the rise in sound pressure level up to 2.5 MPa is about 2 dBA/MPa. • The variation in magnetostriction with stress does not entirely agree with that in noise. • A bend arising in the core causes an extreme increase in noise.

  19. Does the Core Contain Potassium?: An Assessment of the Uncertainties in Thermal and Dynamo Evolution Models

    Science.gov (United States)

    Nimmo, F.

    2006-12-01

    The long-term thermal evolution of the core, and the history of the geodynamo, are determined by the rate at which heat is extracted from the core and the presence of any heat sources within the core [1,2]. Radioactive potassium may provide one such heat source: mineral physics results [3,4] are permissive but not definitive; cosmochemical constraints are weak [5]; and geoneutrino detection [6] does not yet have the required resolution. Theoretical models [1-2,7-9] can help to address whether or not potassium is present in the core. Since the evolution of the CMB heat flux is hard to calculate, a better approach is to assume that the entropy available to power the geodynamo has remained constant over time, and to infer the resulting heat flux [2]. Unfortunately, several important parameters, notably core thermal conductivity and the entropy production rate required to sustain the geodynamo, are uncertain. I have carried out a suite of models using a wide range of parameter values based on published results. In the absence of potassium, an ancient inner core [10] and a continuously active geodynamo are only possible if 1) the dissipation generated by the dynamo is small … References: Gessmann and Wood, EPSL 200, 63-78, 2002. [5] Lassiter, G3, Q11012, 2004. [6] Araki et al., Nature 436, 499-503, 2005. [7] Lister, PEPI 140, 145-158, 2003. [8] Roberts et al., in Earth's Core and Lower Mantle, ed. Jones et al. [9] Nimmo et al., GJI 156, 363-376, 2004. [10] Brandon et al., EPSL 206, 411-426, 2003. [11] Hernlund et al., Nature 434, 882-886, 2005. [12] Zhong, JGR 111, B04409, 2006.

  20. Effect of superconducting solenoid model cores on spanwise iron magnet roll control

    Science.gov (United States)

    Britcher, C. P.

    1985-01-01

    Compared with conventional ferromagnetic fuselage cores, superconducting solenoid cores appear to offer significant reductions in the projected cost of a large wind tunnel magnetic suspension and balance system. The provision of sufficient magnetic roll torque capability has been a long-standing problem with all magnetic suspension and balance systems; and the spanwise iron magnet scheme appears to be the most powerful system available. This scheme utilizes iron cores which are installed in the wings of the model. It was anticipated that the magnetization of these cores, and hence the roll torque generated, would be affected by the powerful external magnetic field of the superconducting solenoid. A preliminary study has been made of the effect of the superconducting solenoid fuselage model core concept on the spanwise iron magnet roll torque generation schemes. Computed data for one representative configuration indicate that reductions in available roll torque occur over a range of applied magnetic field levels. These results indicate that a 30-percent increase in roll electromagnet capacity over that previously determined will be required for a representative 8-foot wind tunnel magnetic suspension and balance system design.

  1. Cyanopolyynes and sulphur bearing species in hot cores: Chemical and line excitation models

    CERN Document Server

    Chapman, J F; Millar, T J; Burton, M G; Walsh, A J

    2008-01-01

    We present results from a time dependent gas phase chemical model of a hot core based on the physical conditions of G305.2+0.2. While the cyanopolyyne HC_3N has been observed in hot cores, the longer-chained species HC_5N, HC_7N, and HC_9N have not been considered typical hot core species. We present results which show that these species can be formed under hot core conditions. We discuss the important chemical reactions in this process and, in particular, show that their abundances are linked to the parent species acetylene, which is evaporated from icy grain mantles. The cyanopolyynes show promise as `chemical clocks' which may aid future observations in determining the age of hot core sources. The abundances of the larger cyanopolyynes increase and decrease over relatively short time scales, ~10^2.5 years. We also discuss several sulphur bearing species. We present results from a non-LTE statistical equilibrium excitation model as a series of density, temperature and column density dependent contour plots w...

  2. Model-based temperature noise monitoring methods for LMFBR core anomaly detection

    Energy Technology Data Exchange (ETDEWEB)

    Tamaoki, Tetsuo; Sonoda, Yukio; Sato, Masuo (Toshiba Corp., Kawasaki, Kanagawa (Japan)); Takahashi, Ryoichi

    1994-03-01

    Temperature noise, measured by thermocouples mounted at each core fuel subassembly, is considered to be the most useful signal for detecting and locating local cooling anomalies in an LMFBR core. However, the core outlet temperature noise contains background noise due to fluctuations in the operating parameters including reactor power. It is therefore necessary to reduce this background noise for highly sensitive anomaly detection by subtracting predictable components from the measured signal. In the present study, both a physical model and an autoregressive model were applied to noise data measured in the experimental fast reactor JOYO. The results indicate that the autoregressive model has a higher precision than the physical model in background noise prediction. Based on these results, an 'autoregressive model modification method' is proposed, in which a temporary autoregressive model is generated by interpolation or extrapolation of reference models identified under a small number of different operating conditions. The generated autoregressive model has shown sufficient precision over a wide range of reactor power in applications to artificial noise data produced by an LMFBR noise simulator even when the coolant flow rate was changed to keep a constant power-to-flow ratio. (author).
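The background-subtraction idea described here can be sketched with a least-squares AR fit: predict each sample from its recent past and treat the one-step prediction error as the anomaly signal. This is a generic illustration, not the JOYO implementation; the AR order, coefficients, and synthetic data are all assumptions.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model x[t] = sum_k a[k]*x[t-1-k] + e[t]."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def background_residual(x, a):
    """One-step-ahead prediction error: the signal with the predictable
    background component removed."""
    p = len(a)
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    return x[p:] - X @ a

rng = np.random.default_rng(0)
# Synthetic 'outlet temperature noise': an AR(2) background process.
n = 5000
bg = np.zeros(n)
for t in range(2, n):
    bg[t] = 1.5 * bg[t - 1] - 0.7 * bg[t - 2] + rng.normal()

a = fit_ar(bg, 2)                 # identify the reference model
resid = background_residual(bg, a)
# The residual variance is close to the innovation variance (~1.0),
# far below the raw signal variance, so a local anomaly superimposed
# on the background would stand out much more clearly in `resid`.
print(np.var(bg), np.var(resid))
```

In the scheme the abstract proposes, reference models identified at a few operating points would be interpolated or extrapolated to produce `a` for the current reactor power instead of refitting.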

  3. Self-consistent core-pedestal transport simulations with neural network accelerated models

    Science.gov (United States)

    Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.

    2017-08-01

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction, with self-consistent core-pedestal coupling, of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  4. Application of the Optimized Baxter Model to the hard-core attractive Yukawa system

    NARCIS (Netherlands)

    Prinsen, P.; Pamies, J.C.; Odijk, Th.; Frenkel, D.

    2006-01-01

    We perform Monte Carlo simulations on the hard-core attractive Yukawa system to test the Optimized Baxter Model that was introduced in [P.Prinsen and T. Odijk, J. Chem. Phys. 121, p.6525 (2004)] to study a fluid phase of spherical particles interacting through a short-range pair potential. We compar

  5. Application of the optimized Baxter model to the hard-core attractive Yukawa system

    NARCIS (Netherlands)

    Prinsen, P.; Pàmies, J.C.; Odijk, T.; Frenkel, D.

    2006-01-01

    We perform Monte Carlo simulations on the hard-core attractive Yukawa system to test the optimized Baxter model that was introduced by Prinsen and Odijk [J. Chem. Phys. 121, 6525 (2004) ] to study a fluid phase of spherical particles interacting through a short-range pair potential. We compare the c
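The hard-core attractive Yukawa potential studied in these records has the standard form (notation assumed: ε sets the contact attraction strength, κ the inverse screening length, σ the hard-core diameter):

```latex
u(r) =
\begin{cases}
\infty, & r < \sigma,\\[4pt]
-\,\varepsilon\,\dfrac{\sigma}{r}\,e^{-\kappa (r-\sigma)}, & r \ge \sigma .
\end{cases}
```

In the short-range limit (large κσ) this potential maps onto Baxter's adhesive hard-sphere model, which is what makes the Optimized Baxter Model a natural approximation to test against simulation.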

  6. Improved bounds on the phase transition for the hard-core model in 2 dimensions

    NARCIS (Netherlands)

    Vera, Juan C.; Vigoda, E.; Yang, L.

    2015-01-01

    For the hard-core lattice gas model defined on independent sets weighted by an activity $\lambda$, we study the critical activity $\lambda_c(\mathbb{Z}^2)$ for the uniqueness/nonuniqueness threshold on the 2-dimensional integer lattice $\mathbb{Z}^2$. The conjectured value of the critical activity i
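The model under study can be illustrated with a standard heat-bath (Glauber) sampler for the hard-core gas on a small torus. The lattice size, activity, and sweep count below are illustrative choices, with the activity well below the conjectured critical value so the dynamics mix rapidly.

```python
import numpy as np

def hardcore_glauber(L, lam, sweeps, rng):
    """Heat-bath dynamics for the hard-core gas on an L x L torus:
    pick a site; if any neighbour is occupied it must be empty,
    otherwise occupy it with probability lam / (1 + lam)."""
    s = np.zeros((L, L), dtype=np.int8)
    p_occ = lam / (1.0 + lam)
    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        if nb == 0 and rng.random() < p_occ:
            s[i, j] = 1
        else:
            s[i, j] = 0
    return s

rng = np.random.default_rng(2)
s = hardcore_glauber(16, 1.0, 200, rng)  # activity lambda = 1 (subcritical)
density = s.mean()                        # occupied fraction of sites
print(density)
```

The update never occupies a site with an occupied neighbour, so every configuration visited is a valid independent set; the uniqueness question in the abstract is about whether such local dynamics forget boundary conditions as λ grows.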

  7. Linear and nonlinear modeling of light propagation in hollow-core photonic crystal fiber

    DEFF Research Database (Denmark)

    Roberts, John; Lægsgaard, Jesper

    2009-01-01

    Hollow core photonic crystal fibers (HC-PCFs) find applications which include quantum and non-linear optics, gas detection and short high-intensity laser pulse delivery. Central to most applications is an understanding of the linear and nonlinear optical properties. These require careful modeling...

  8. Real-time Model Development of Core Protection and Monitoring System for SMART Simulator Application

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Bonseung; Hwang, Daehyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    Important features of the software models are described for application to the SMART simulator. The real-time performance of the models was examined for various simulation scenarios. A real-time model development of core protection and monitoring algorithms for the SMART simulator is being studied. Software algorithms as well as design bases and requirements for core protection and monitoring were developed, and various performance tests were done. From the test results, it is judged that the SCOPS{sub S}SIM and SCOMS{sub S}SIM algorithms and calculational capabilities are appropriate for the core protection and monitoring programs in the SMART simulator. A multi-purpose best-estimate simulator for the SMART is being established, which is intended to be used as a tool to evaluate the impacts of design changes on safety performance and to improve and/or optimize the operating procedures of the SMART. In keeping with these purposes, a real-time model of the digital core protection and monitoring systems was developed on the basis of the SCOPS and SCOMS algorithms of SMART.

  9. Core-SOL modelling of neon seeded JET discharges with the ITER-like wall

    Energy Technology Data Exchange (ETDEWEB)

    Telesca, G. [Department of Applied Physics, Ghent University (Belgium); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Ivanova-Stanik, I.; Zagoerski, R.; Czarnecka, A. [Institute of Plasma Physics and Laser Microfusion, Warsaw (Poland); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Brezinsek, S.; Huber, A.; Wiesen, S. [Forschungszentrum Juelich GmbH, Institut fuer Klima- und Energieforschung-Plasmaphysik, Juelich (Germany); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Drewelow, P. [Max-Planck-Institut fuer Plasmaphysik, Greifswald (Germany); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Giroud, C. [CCFE Culham, Abingdon (United Kingdom); EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Collaboration: JET EFDA contributors

    2016-08-15

    Five ELMy H-mode Ne-seeded JET pulses have been simulated with the self-consistent core-SOL model COREDIV. In this five-pulse series only the Ne seeding rate was changed shot by shot, allowing a thorough study of the effect of Ne seeding on the total radiated power and on its distribution between core and SOL to be made. Increasing the Ne seeding rate in the simulations above the level achieved in the experiments shows saturation of the total radiated power at a relatively low ratio of radiated to heating power (f{sub rad} = 0.60) and a further increase of the ratio of SOL to core radiation, in agreement with the reduction of W release at high Ne seeding levels. In spite of the uncertainties caused by the simplified SOL model of COREDIV (neutral model, absence of ELMs, and slab model for the SOL), the increase of the perpendicular transport in the SOL with increasing Ne seeding rate, which makes it possible to reproduce numerically the experimental core-SOL distribution of the radiated power, appears to be of general applicability.

  10. Reactivity of fly ash: extension and application of a shrinking core model

    NARCIS (Netherlands)

    Brouwers, Jos; van Eijk, R.J.

    2002-01-01

    In the present paper a theoretical study is presented on the dissolution (reaction) of pulverised powder coal fly ash. A shrinking core model is derived for hollow spheres that contain two regions (outer hull and inner region). The resulting analytical equations are applied to the dissolution
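
    The record above extends the shrinking core model to hollow spheres with two regions; the textbook dense-sphere limiting cases it builds on can be sketched as follows. This is a minimal illustration of the standard conversion-time relations (surface-reaction control and ash-layer diffusion control), not the paper's hollow-sphere extension:

```python
def time_fraction_reaction_control(X):
    """Dimensionless time t/tau needed to reach conversion X when the
    surface reaction at the shrinking core is rate-limiting."""
    return 1.0 - (1.0 - X) ** (1.0 / 3.0)

def time_fraction_diffusion_control(X):
    """t/tau when diffusion through the product (ash) layer is
    rate-limiting."""
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)
```

    For example, at 90% conversion the reaction-controlled case has consumed about 54% of the total dissolution time tau, while complete conversion (X = 1) gives t/tau = 1 in both limits.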

  11. An Examination of Family Communication within the Core and Balance Model of Family Leisure Functioning

    Science.gov (United States)

    Smith, Kevin M.; Freeman, Patti A.; Zabriskie, Ramon B.

    2009-01-01

    The purpose of this study was to examine family communication within the core and balance model of family leisure functioning. The study was conducted from a youth perspective of family leisure and family functioning. The sample consisted of youth (N= 95) aged 11 - 17 from 25 different states in the United States. Path analyses indicated that…

  12. The HTA core model: a novel method for producing and reporting health technology assessments

    DEFF Research Database (Denmark)

    Lampe, Kristian; Mäkelä, Marjukka; Garrido, Marcial Velasco

    2009-01-01

    OBJECTIVES: The aim of this study was to develop and test a generic framework to enable international collaboration for producing and sharing results of health technology assessments (HTAs). METHODS: Ten international teams constructed the HTA Core Model, dividing information contained in a compr...

  13. Muscle spindles exhibit core lesions and extensive degeneration of intrafusal fibers in the Ryr1(I4895T/wt) mouse model of core myopathy.

    Science.gov (United States)

    Zvaritch, Elena; MacLennan, David H

    2015-04-24

    Muscle spindles from the hind limb muscles of adult Ryr1(I4895T/wt) (IT/+) mice exhibit severe structural abnormalities. Up to 85% of the spindles are separated from skeletal muscle fascicles by a thick layer of connective tissue. Many intrafusal fibers exhibit degeneration, with Z-line streaming, compaction and collapse of myofibrillar bundles, mitochondrial clumping, nuclear shrinkage and pyknosis. The lesions resemble cores observed in the extrafusal myofibers of this animal model and of core myopathy patients. Spindle abnormalities precede those in extrafusal fibers, indicating that they are a primary pathological feature in this murine Ryr1-related core myopathy. Muscle spindle involvement, if confirmed for human core myopathy patients, would provide an explanation for an array of devastating clinical features characteristic of these diseases and provide novel insights into the pathology of RYR1-related myopathies.

  14. The FACETS project: integrated core-edge-wall modeling with concurrent execution

    Science.gov (United States)

    Cary, J. R.; Balay, S.; Candy, J.; Carlsson, J. A.; Cohen, R. H.; Epperly, T.; Estep, D. J.; Fahey, M. R.; Groebner, R. J.; Hakim, A. H.; Hammett, G. W.; Indireshkumar, K.; Kruger, S. E.; Maloney, A. D.; McCune, D. C.; McInnes, L.; Morris, A.; Pankin, A.; Pletzer, A.; Pigarov, A.; Rognlien, T. D.; Shasharina, S.; Shende, S.; Vadlamani, S.; Zhang, H.

    2009-11-01

    The multi-institutional FACETS project has the physics goal of using computation to understand how a consistent, coupled core-edge-wall plasma evolves, including energy flow, particle recycling, and the variation of power density on divertor plates under different plasma conditions. FACETS is being developed to take advantage of Leadership Class Facilities (LCFs), while still being able to run on laptops with reduced-fidelity models. This presentation will provide a high-level overview of the project, discussing the issues of componentization, solvers, performance monitoring, testing, visualization and first physics results for core-edge coupling.

  15. Spherical relativistic vacuum core models in a Λ-dominated era

    Science.gov (United States)

    Yousaf, Z.

    2017-02-01

    This paper is devoted to analyzing the effects of the cosmological constant on the evolution of exact analytical collapsing vacuum core celestial models. For this purpose, relativistic spherical geometry coupled with null-expansion, locally anisotropic matter distributions is considered. We have first developed a relation between tidal forces and structural variables. We then explored some viable spherical cosmological models by taking the expansion-free condition. Our first class of spherical models is obtained by constraining the system's matter content, while the second class is obtained by considering a barotropic equation of state. We propose that our calculated solutions could be regarded as relativistic toy models for those astronomical compact populations where a vacuum core is expected to appear, such as cosmological voids.

  16. Evidence for Symplectic Symmetry in Ab Initio No-Core Shell Model Results for Light Nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Dytrych, Tomas; Sviratcheva, Kristina D.; Bahri, Chairul; Draayer, Jerry P.; /Louisiana State U.; Vary, James P.; /Iowa State U. /LLNL, Livermore /SLAC

    2007-04-24

    Clear evidence for symplectic symmetry in low-lying states of {sup 12}C and {sup 16}O is reported. Eigenstates of {sup 12}C and {sup 16}O, determined within the framework of the no-core shell model using the JISP16 NN realistic interaction, typically project at the 85-90% level onto a few of the most deformed symplectic basis states that span only a small fraction of the full model space. The results are nearly independent of whether the bare or renormalized effective interactions are used in the analysis. The outcome confirms Elliott's SU(3) model which underpins the symplectic scheme, and above all, points to the relevance of a symplectic no-core shell model that can reproduce experimental B(E2) values without effective charges as well as deformed spatial modes associated with clustering phenomena in nuclei.

  17. Component-based integration of chemistry and optimization software.

    Science.gov (United States)

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  18. Component Based Effort Estimation During Software Development: Problematic View

    Directory of Open Access Journals (Sweden)

    VINIT KUMAR

    2011-10-01

    Component-based software development (CBD) is an emerging discipline that promises to take software engineering into a new era. Building on the achievements of object-oriented software construction, CBD aims to deliver software engineering from a cottage industry into an industrial age for Information Technology, wherein software can be assembled from components, in the manner that hardware systems are currently constructed from kits of parts. Component-based development (CBD) is a branch of software engineering that emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. This practice aims to bring about an equally wide-ranging degree of benefits in both the short term and the long term for the software itself and for organizations that sponsor such software. Software engineers regard components as part of the starting platform for service-orientation. Components play this role, for example, in Web services, and more recently, in service-oriented architectures (SOA), whereby a component is converted by the Web service into a service and subsequently inherits further characteristics beyond that of an ordinary component. Components can produce or consume events and can be used for event-driven architectures (EDA).

  19. Exploratory models of the earth's thermal regime during segregation of the core

    Science.gov (United States)

    Davies, G. F.

    1980-01-01

    Some simple exploratory theoretical models of the thermal effects of core segregation have been investigated, assuming an initially homogeneous earth and including convective heat transport through a 'parameterized convection' approximation. The results indicate that either (1) mantle temperatures 30% or more above present values may have resulted from the gravitational energy released during core segregation, (2) the earth retained very little of its accretional energy, (3) core segregation lasted for one billion years or more, or (4) the earth accreted heterogeneously. Option 3 seems to be precluded by terrestrial lead isotope data, and the alternatives each raise substantial questions concerning the mechanics, chemistry, and petrology of the earth's early history. There is no recognized evidence for the early hot phase of option 1, and option 4 implies, among other things, an analogous early hot phase. Although it has not been favored, option 2 may be viable.
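
    The "parameterized convection" approximation mentioned above is commonly implemented as a global energy balance in which the convective heat loss scales as a power of the Rayleigh number (Nu ~ Ra^beta). A minimal sketch follows; all numerical values and the crude Ra-proportional-to-T linearization are illustrative placeholders, not the parameters of the cited models:

```python
import math

# Illustrative placeholder values, not the paper's parameters.
C = 7.0e27     # J/K   effective heat capacity of the mantle
H0 = 3.0e13    # W     initial radiogenic heat production
LAM = 3.0e-17  # 1/s   effective decay constant of the heat sources
Q0 = 3.0e13    # W     heat loss at the reference temperature
T0 = 2000.0    # K     reference mantle temperature
BETA = 0.3     # Nu ~ Ra^beta exponent of parameterized convection

def heat_loss(T):
    """Convective heat loss; Ra is crudely taken proportional to T,
    so Q scales as T**(1 + BETA)."""
    return Q0 * (T / T0) ** (1.0 + BETA)

def evolve(T, t_end, dt):
    """Euler-integrate the global energy balance C dT/dt = H(t) - Q(T)."""
    t = 0.0
    while t < t_end:
        H = H0 * math.exp(-LAM * t)
        T += dt * (H - heat_loss(T)) / C
        t += dt
    return T
```

    With these placeholders, a mantle started hotter than its equilibrium temperature cools monotonically as the radiogenic sources decay.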

  20. Prestellar core modeling in the presence of a filament. The dense heart of L1689B

    Science.gov (United States)

    Steinacker, J.; Bacmann, A.; Henning, Th.; Heigl, S.

    2016-08-01

    Context. Lacking a paradigm for the onset of star formation, it is important to derive basic physical properties of prestellar cores and filaments like density and temperature structures. Aims: We aim to disentangle the spatial variation in density and temperature across the prestellar core L1689B, which is embedded in a filament. We want to determine the range of possible central densities and temperatures that are consistent with the continuum radiation data. Methods: We apply a new synergetic radiative transfer method: the derived 1D density profiles are both consistent with a cut through the Herschel PACS/SPIRE and JCMT SCUBA-2 continuum maps of L1689B and with a derived local interstellar radiation field. Choosing an appropriate cut along the filament major axis, we minimize the impact of the filament emission on the modeling. Results: For the bulk of the core (5000-20 000 au) an isothermal sphere model with a temperature of around 10 K provides the best fits. We show that the power law index of the density profile, as well as the constant temperature can be derived directly from the radial surface brightness profiles. For the inner region (transfer methods also avoids the loss of information owing to smearing of all maps to the coarsest spatial resolution. We find the central core region to be colder and denser than estimated in recent inverse radiative transfer modeling, possibly indicating the start of star formation in L1689B.

  1. A lumped element transformer model including core losses and winding impedances

    OpenAIRE

    Ribbenfjärd, David

    2007-01-01

    In order to design a power transformer it is important to understand its internal electromagnetic behaviour. That understanding can be obtained by measurements on physical transformers, analytical expressions and computer simulations. One benefit of simulations is that the transformer can be studied before it is built physically and that the consequences of changing dimensions and parameters can easily be tested. In this thesis a time-domain transformer model is presented. The model includes core losses ...

  2. Studies of mixed HEU-LEU-MTR cores using 3D models

    Energy Technology Data Exchange (ETDEWEB)

    Haenggi, P.; Lehmann, E.; Hammer, J.; Christen, R. [Paul Scherrer Institute, Villigen (Switzerland)

    1997-08-01

    Several different core loadings were assembled at the SAPHIR research reactor in Switzerland, combining the available types of MTR-type fuel elements, consisting mainly of both HEU and LEU fuel. Bearing in mind the well-known problems which can occur in such configurations (especially power peaking), investigations have been carried out for each new loading with a 2D neutron transport code (BOXER). The axial effects were approximated by a global buckling value, and therefore the radial effects could be studied in considerable detail. Some of the results were reported at earlier RERTR meetings and were compared to those obtained by other methods and with experimental values. For the explicit study of the third dimension of the core, another code (SILWER), which has been developed at PSI for LWR power plant cores, was selected. With the help of an adapted model for the MTR core of SAPHIR, several important questions have been addressed. Among other aspects, the estimation of the axial contribution to the hot channel factors and the influence of the control rod position and of the Xe poisoning on the power distribution were studied. Special attention was given to a core position where a new element was assumed to be placed near an empty, water-filled position. The comparison of elements of low and high enrichments at this position was made in terms of the induced power peaks, with explicit consideration of axial effects. The program SILWER has proven to be applicable to MTR cores for the investigation of axial effects. For routine use, as well as for the support of reactor operation, this 3D code is a good supplement to the standard 2D model.

  3. The treatment of mixing in core helium burning models - II. Constraints from cluster star counts

    Science.gov (United States)

    Constantino, Thomas; Campbell, Simon W.; Lattanzio, John C.; van Duijneveldt, Adam

    2016-03-01

    The treatment of convective boundaries during core helium burning is a fundamental problem in stellar evolution calculations. In the first paper of this series, we showed that new asteroseismic observations of these stars imply they have either very large convective cores or semiconvection/partially mixed zones that trap g modes. We probe this mixing by inferring the relative lifetimes of the asymptotic giant branch (AGB) and horizontal branch (HB) phases from R2, the observed ratio of these stars in recent HST photometry of 48 Galactic globular clusters. Our new determinations of R2 are more self-consistent than those of previous studies, and our overall value of R2 = 0.117 ± 0.005 is the most statistically robust now available. We also establish that the luminosity difference between the HB and the AGB clump is Δlog L_HB^AGB = 0.455 ± 0.012. Our results accord with earlier findings that standard models predict a lower R2 than is observed. We demonstrate that the dominant sources of uncertainty in models are the prescription for mixing and the stochastic effects that can result from its numerical treatment. The luminosity probability density functions that we derive from observations feature a sharp peak near the AGB clump. This constitutes a strong new argument against core breathing pulses, which broaden the predicted width of the peak. We conclude that the two mixing schemes that can match the asteroseismology are also capable of matching globular cluster observations, but only if (i) core breathing pulses are avoided in models with a semiconvection/partially mixed zone, or (ii) models with large convective cores have a particular depth of mixing beneath the Schwarzschild boundary during subsequent early-AGB `gravonuclear' convection.

  4. Highlights from the 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016)

    Science.gov (United States)

    Jablonowski, Christiane; Ullrich, Paul A.; Reed, Kevin A.; Zarzycki, Colin M.; Kent, James; Lauritzen, Peter H.; Nair, Ramachandran D.

    2017-04-01

    The 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016) shed light on the newest modeling techniques for global weather and climate models, with particular focus on the newest non-hydrostatic atmospheric dynamical cores, their physics-dynamics coupling, and variable-resolution aspects. As part of a two-week summer school held in June 2016 at the National Center for Atmospheric Research (NCAR), a main objective of DCMIP-2016 was to establish an open-access database via the Earth System Grid Federation (ESGF) that hosts DCMIP-2016 simulations for community use from over 12 international modeling groups. In addition, DCMIP-2016 established new atmospheric model test cases of intermediate complexity that incorporate simplified physical parameterizations. The paper presents the results of the three DCMIP-2016 test cases, which assess the evolution of an idealized moist baroclinic wave, a tropical cyclone and a supercell. All flow scenarios start from analytically prescribed moist reference states in gradient-wind and hydrostatic balance which are overlaid by localized perturbations. The simple moisture feedbacks are represented by a warm-rain Kessler-type parameterization without any cloud stage. The tropical cyclone test case also utilizes surface fluxes and turbulent mixing in the boundary layer. The paper highlights the characteristics of the DCMIP-2016 dynamical cores and reveals the impact of the moisture processes on the flow fields over 5-15-day forecast periods. In addition, the coupling between the dynamics, physics and the tracer advection schemes is assessed via a "Terminator" tracer test. The work demonstrates how idealized test cases are part of a model hierarchy that helps distinguish between causes and effects in atmospheric models and their physics-dynamics interplay. This characterizes and informs the design of atmospheric dynamical cores.

  5. Modeling Polarized Emission from Black Hole Jets: Application to M87 Core Jet

    Directory of Open Access Journals (Sweden)

    Monika Mościbrodzka

    2017-09-01

    We combine three-dimensional general-relativistic numerical models of hot, magnetized Advection Dominated Accretion Flows around a supermassive black hole, and the corresponding outflows from them, with a general relativistic polarized radiative transfer model to produce synthetic radio images and spectra of jet outflows. We apply the model to the underluminous core of the M87 galaxy. The assumptions and results of the calculations are discussed in the context of millimeter observations of the M87 jet-launching zone. Our ab initio polarized emission and rotation measure models allow us to address the constraints on the mass accretion rate onto the M87 supermassive black hole.

  6. Core - Corona Model describes the Centrality Dependence of v_2/epsilon

    CERN Document Server

    Aichelin, J

    2010-01-01

    Event-by-event EPOS calculations in which the expansion of the system is described by {\it ideal} hydrodynamics reproduce well the measured centrality dependence of $v_2/\epsilon_{part}$, although it has been claimed that only viscous hydrodynamics can reproduce these data. This is due to the core-corona effect, which manifests itself in the initial condition of the hydrodynamical expansion. The centrality dependence of $v_2/\epsilon_{part}$ can be understood in the recently advanced core-corona model, a simple parameter-free EPOS-inspired model describing the centrality dependence of different observables from SPS to RHIC energies. This model has already been successfully applied to understand the centrality dependence of multiplicities and of the average transverse momentum of identified particles.

  7. Validity of Viscous Core Correction Models for Self-Induced Velocity Calculations

    CERN Document Server

    Van Hoydonck, Wim

    2012-01-01

    Viscous core correction models are used in free wake simulations to remove the infinite velocities at the vortex centreline. It will be shown that the assumption that these corrections converge to the Biot-Savart law in the far field is not correct for points near the tangent line of a vortex segment. Furthermore, the self-induced velocity of a vortex ring with a viscous core is shown to converge to the wrong value. The source of these errors in the model is identified and an improved model is presented that rectifies the errors. It results in correct values for the self-induced velocity of a viscous vortex ring and induced velocities that converge to the values predicted by the Biot-Savart law for all points in the far field.
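
    The improved model of the abstract is not reproduced here, but the baseline it critiques, the Biot-Savart law for a straight vortex segment with a viscous core correction, can be sketched as follows. The Scully-type correction factor used below is one common choice and is this sketch's own assumption, not the paper's formulation:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def segment_velocity(p, a, b, gamma, rc):
    """Velocity induced at point p by a straight vortex segment a->b
    carrying circulation gamma, with a Scully-type viscous core of
    radius rc damping the singularity near the segment axis."""
    r1, r2 = sub(p, a), sub(p, b)
    r0 = sub(b, a)
    cr = cross(r1, r2)
    cr2 = dot(cr, cr)
    if cr2 == 0.0:                 # p lies on the segment axis
        return (0.0, 0.0, 0.0)
    h2 = cr2 / dot(r0, r0)         # squared perpendicular distance to axis
    n1, n2 = math.sqrt(dot(r1, r1)), math.sqrt(dot(r2, r2))
    proj = dot(r0, (r1[0]/n1 - r2[0]/n2,
                    r1[1]/n1 - r2[1]/n2,
                    r1[2]/n1 - r2[2]/n2))
    k = gamma / (4.0 * math.pi) * (h2 / (rc*rc + h2)) / cr2 * proj
    return (k*cr[0], k*cr[1], k*cr[2])
```

    For a point at unit perpendicular distance from the midpoint of a unit segment, the result approaches the uncorrected Biot-Savart value Γ/(4πh)(cosθ₁ − cosθ₂) as rc → 0, which is exactly the far-field convergence claim that the abstract shows breaks down near the tangent line.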

  8. A Core Model for Parts Suppliers Selecting Method in Manufacturing Supply Chain

    Directory of Open Access Journals (Sweden)

    Guorong Chen

    2015-01-01

    Service-oriented manufacturing is the new development of manufacturing systems, and manufacturing supply chain service is an important part of service-oriented manufacturing systems; hence, the optimal selection of parts suppliers becomes one of the key problems in the supply chain system. Complex network theories have made rapid progress in recent years, but classical models such as the BA model and the WS model cannot capture widespread features of manufacturing supply chains, such as repeated attachment of edges, a fixed number of vertices with edges added by preferential connectivity, and flexible edge probabilities. A core model is proposed in the paper to resolve these problems: it maps the parts supply relationship as a repeatable core; a vertex probability distribution function integrating the edge rate and vertex degree is put forward; simulations, such as the growth of the core, the degree distribution characteristics, and the impact of parameters, are carried out in our experiments, and a case study is also presented. The paper proposes a novel model to analyze the manufacturing supply chain system from the insights of complex networks.

  9. The multi-state hard core model on a regular tree

    CERN Document Server

    Galvin, David; Ramanan, Kavita; Tetali, Prasad

    2010-01-01

    The classical hard core model from statistical physics, with activity $\\lambda > 0$ and capacity $C=1$, on a graph $G$, concerns a probability measure on the set ${\\mathcal I}(G)$ of independent sets of $G$, with the measure of each independent set $I \\in {\\mathcal I}(G)$ being proportional to $\\lambda^{|I|}$. Ramanan et al. proposed a generalization of the hard core model as an idealized model of multicasting in communication networks. In this generalization, the {\\em multi-state} hard core model, the capacity $C$ is allowed to be a positive integer, and a configuration in the model is an assignment of states from $\\{0,\\ldots,C\\}$ to $V(G)$ (the set of nodes of $G$) subject to the constraint that the states of adjacent nodes may not sum to more than $C$. The activity associated to state $i$ is $\\lambda^{i}$, so that the probability of a configuration $\\sigma:V(G)\\rightarrow \\{0,\\ldots, C\\}$ is proportional to $\\lambda^{\\sum_{v \\in V(G)} \\sigma(v)}$. In this work, we consider this generalization when $G$ is a...
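
    The configuration weight described in the abstract can be sketched directly: a configuration assigns a state in {0, ..., C} to each node, is feasible when adjacent states never sum to more than C, and has unnormalized weight λ raised to the sum of the states. A minimal illustration (the data layout is this sketch's own):

```python
def is_feasible(states, edges, C):
    """A configuration is feasible if the states of adjacent nodes
    never sum to more than the capacity C."""
    return all(states[u] + states[v] <= C for u, v in edges)

def weight(states, edges, C, lam):
    """Unnormalized probability weight lambda**(sum of states);
    zero for infeasible configurations."""
    if not is_feasible(states, edges, C):
        return 0.0
    return lam ** sum(states.values())
```

    On a three-node path with C = 2, the configuration (1, 1, 2) is infeasible (1 + 2 > 2) and gets weight 0, while (2, 0, 2) is feasible with weight λ⁴; setting C = 1 recovers the classical hard core model on independent sets.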

  10. Modeling Shallow Core-Level Transitions in the Reflectance Spectra of Gallium-Containing Semiconductors

    Science.gov (United States)

    Stoute, Nicholas; Aspnes, David

    2012-02-01

    The electronic structure of covalent materials is typically approached by band theory. However, shallow core-level transitions may be better modeled by an atomic-scale approach. We investigate shallow d-core-level reflectance spectra in terms of a local atomic-multiplet theory, a novel application of a theory typically used for higher-energy transitions in more ionic material systems. We examine specifically structure in the reflectance spectra of GaP, GaAs, GaSb, GaSe, and GaAs1-xPx due to transitions that originate from Ga3d core levels and occur in the 20 to 25 eV range. We model these spectra as a Ga^+3 closed-shell ion whose transitions are influenced by perturbations on 3d hole-4p electron final states. These are specifically spin-orbit effects on the hole and electron, and a crystal-field effect on the hole, attributed to surrounding bond charges and positive ligand anions. Empirical radial-strength parameters were obtained by least-squares fitting. General trends with respect to anion electronegativity are consistent with expectations. In addition to the spin-orbit interaction, crystal-field effects play a significant role in breaking the degeneracy of the d levels, and consequently are necessary to understand shallow 3d core-level spectra.

  11. A decision tree algorithm for investigation of model biases related to dynamical cores and physical parameterizations.

    Science.gov (United States)

    Soner Yorgun, M; Rood, Richard B

    2016-12-01

    An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores at varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography of the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena by the spectral transform method of the CAM Eulerian spectral dynamical core are prominent, and are an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, in both horizontal and vertical dimensions, have a significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study of the biases produced by GCMs should involve analysis of daily (or even hourly) output (rather than monthly means) over local scales.
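
    The object-based step underlying such an evaluation, isolating connected precipitation features above a threshold and computing their maximum, mean, and variance, can be sketched as follows. This is a generic flood-fill illustration of the feature-statistics step only, not the paper's classification tree algorithm:

```python
def find_features(field, thresh):
    """Label 4-connected regions of a 2-D field at or above thresh
    and return per-feature statistics (max, mean, variance)."""
    ny, nx = len(field), len(field[0])
    seen = [[False] * nx for _ in range(ny)]
    feats = []
    for i in range(ny):
        for j in range(nx):
            if field[i][j] >= thresh and not seen[i][j]:
                stack, vals = [(i, j)], []
                seen[i][j] = True
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    vals.append(field[y][x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        y2, x2 = y + dy, x + dx
                        if 0 <= y2 < ny and 0 <= x2 < nx and \
                           not seen[y2][x2] and field[y2][x2] >= thresh:
                            seen[y2][x2] = True
                            stack.append((y2, x2))
                m = sum(vals) / len(vals)
                feats.append({"max": max(vals), "mean": m,
                              "var": sum((v - m) ** 2 for v in vals) / len(vals)})
    return feats
```

    Feature statistics computed this way per object, rather than grid-point-by-grid-point differences, are what make the comparison between dynamical cores robust to small spatial displacements of the simulated precipitation.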

  12. Uniqueness of three-mode factor models with sparse cores : The 3x3x3 case

    NARCIS (Netherlands)

    Kiers, H.A.L.; ten Berge, J.M.F.; Rocci, R

    1997-01-01

    Three-Mode Factor Analysis (3MFA) and PARAFAC are methods to describe three-way data. Both methods employ models with components for the three modes of a three-way array; the 3MFA model also uses a three-way core array for linking all components to each other. The use of the core array makes the 3MF

  13. Crack growth rate in core shroud horizontal welds using two models for a BWR

    Energy Technology Data Exchange (ETDEWEB)

    Arganis Juárez, C.R., E-mail: carlos.arganis@inin.gob.mx; Hernández Callejas, R.; Medina Almazán, A.L.

    2015-05-15

    Highlights: • Two models were used to predict the SCC growth rate in the core shroud of a BWR. • A weld residual stress distribution with 30% stress relaxation by neutron irradiation was used. • Agreement is shown between the measurements of SCC growth rate and the predictions. • The slip–oxidation model is better at low fluences and the empirical model at high fluences. - Abstract: An empirical crack growth rate correlation model and a predictive model based on the slip–oxidation mechanism for Stress Corrosion Cracking (SCC) were used to calculate the crack growth rate in a BWR core shroud. In this study, the crack growth rate was calculated by accounting for the environmental factors related to the aqueous environment, neutron irradiation to high fluence, and the complex residual stress conditions resulting from welding. In estimating the SCC behavior, crack growth measurement data from a Boiling Water Reactor (BWR) plant are used as reference, and the stress intensity factor versus crack depth through the thickness is calculated using a generic weld residual stress distribution for a core shroud, with a 30% stress relaxation induced by neutron irradiation. Quantitative agreement is shown between the measurements of SCC growth rate and the predictions of the slip–oxidation mechanism model for relatively low fluences (5 × 10{sup 24} n/m{sup 2}), while the empirical model predicted the SCC growth rate better than the slip–oxidation model for high fluences (>1 × 10{sup 25} n/m{sup 2}). The relevance of the models' predictions for SCC growth rate behavior depends on knowing the model parameters.

  14. Preliminary design report for SCDAP/RELAP5 lower core plate model

    Energy Technology Data Exchange (ETDEWEB)

    Coryell, E.W. [Lockheed Martin Idaho Technologies Co., Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab.; Griffin, F.P. [Oak Ridge National Lab., TN (United States)

    1998-07-01

    The SCDAP/RELAP5 computer code is a best-estimate analysis tool for performing nuclear reactor severe accident simulations. Under primary sponsorship of the US Nuclear Regulatory Commission (NRC), Idaho National Engineering and Environmental Laboratory (INEEL) is responsible for overall maintenance of this code and for improvements for pressurized water reactor (PWR) applications. Since 1991, Oak Ridge National Laboratory (ORNL) has been improving SCDAP/RELAP5 for boiling water reactor (BWR) applications. The RELAP5 portion of the code performs the thermal-hydraulic calculations for both normal and severe accident conditions. The structures within the reactor vessel and coolant system can be represented with either RELAP5 heat structures or SCDAP/RELAP5 severe accident structures. The RELAP5 heat structures are limited to normal operating conditions (i.e., no structural oxidation, melting, or relocation), while the SCDAP portion of the code is capable of representing structural degradation and core damage progression that can occur under severe accident conditions. SCDAP/RELAP5 currently assumes that molten material which leaves the core region falls into the lower vessel head without interaction with structural materials. The objective of this design report is to describe the modifications required for SCDAP/RELAP5 to treat the thermal response of the structures in the core plate region as molten material relocates downward from the core, through the core plate region, and into the lower plenum. This has been a joint task between INEEL and ORNL, with INEEL focusing on the PWR-specific design and ORNL focusing on the BWR-specific aspects. Chapter 2 describes the structures in the core plate region that must be represented by the proposed model. Chapter 3 presents the available information about the damage progression that is anticipated to occur in the core plate region during a severe accident, including typical SCDAP/RELAP5 simulation results.
Chapter 4 provides a

  15. Simulating High Flux Isotope Reactor Core Thermal-Hydraulics via Interdimensional Model Coupling

    Energy Technology Data Exchange (ETDEWEB)

    Travis, Adam R [ORNL]

    2014-05-01

    A coupled interdimensional model is presented for the simulation of the thermal-hydraulic characteristics of the High Flux Isotope Reactor core at Oak Ridge National Laboratory. The model consists of two domains: a solid involute fuel plate and the surrounding liquid coolant channel. The fuel plate is modeled explicitly in three dimensions. The coolant channel is approximated as a two-dimensional slice oriented perpendicular to the fuel plate's surface. The two dimensionally inconsistent domains are linked to one another via interdimensional model coupling mechanisms. The coupled model is presented as a simplified alternative to a fully explicit, fully three-dimensional model. Involute geometries were constructed in SolidWorks. Derivations of the involute construction equations are presented. Geometries were then imported into COMSOL Multiphysics for simulation and modeling. Both models are described in detail so as to highlight their respective attributes: in the 3D model, the pursuit of an accurate, reliable, and complete solution; in the coupled model, the intent to simplify the modeling domain as much as possible without significantly altering the solution. The coupled model was created with the goal of permitting larger portions of the reactor core to be modeled at once without a significant sacrifice of solution integrity. As such, particular care is given to validating the incorporated model simplifications. To the greatest extent possible, the decrease in solution time as well as computational cost is quantified against the effects such gains have on the solution quality. A variant of the coupled model which sufficiently balances these three solution characteristics is presented alongside the more comprehensive 3D model for comparison and validation.
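    The report's involute construction equations are not reproduced in this abstract; as a rough standalone illustration, the standard textbook parametric involute of a circle (with an arbitrary base radius, not HFIR's actual plate geometry) can be generated as follows:

```python
import numpy as np

def involute(r_base, t):
    """Standard parametric involute of a circle of base radius r_base:
    x(t) = r_base*(cos t + t*sin t), y(t) = r_base*(sin t - t*cos t)."""
    x = r_base * (np.cos(t) + t * np.sin(t))
    y = r_base * (np.sin(t) - t * np.cos(t))
    return x, y

t = np.linspace(0.0, np.pi, 200)   # unwrap half a turn of the generating string
x, y = involute(1.0, t)

# The curve starts on the base circle and spirals monotonically outward:
# its distance from the center equals r_base*sqrt(1 + t^2).
radii = np.hypot(x, y)
```

    Sampling such a curve and offsetting it by the plate thickness is the usual way to build an involute channel profile in a CAD tool before meshing.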

  16. Hadron Resonance Gas Model for An Arbitrarily Large Number of Different Hard-Core Radii

    CERN Document Server

    Oliinychenko, D R; Sagun, V V; Ivanytskyi, A I; Yakimenko, I P; Nikonov, E G; Taranenko, A V; Zinovjev, G M

    2016-01-01

    We develop a novel formulation of the hadron-resonance gas model which, besides a hard-core repulsion, explicitly accounts for the surface tension induced by the interaction between the particles. Such an equation of state allows us to go beyond the Van der Waals approximation for any number of different hard-core radii. A comparison with the Carnahan-Starling equation of state shows that the new model is valid for packing fractions up to 0.2-0.22, while the usual Van der Waals model is inapplicable at packing fractions above 0.11-0.12. Moreover, it is shown that the equation of state with induced surface tension is softer than that of hard spheres and remains causal at higher particle densities. The great advantage of our model is that there are only two equations to be solved, and their number does not depend on the values of the hard-core radii used for the different hadronic resonances. Using this novel equation of state we obtain a high-quality fit of the ALICE hadron multiplicities measured at center-of-mass ener...
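    The quoted validity ranges can be illustrated with the hard-sphere compressibility factor. A minimal sketch comparing the Carnahan-Starling result with the Van der Waals excluded-volume approximation (standard textbook forms, not the paper's induced-surface-tension equation of state):

```python
def Z_carnahan_starling(eta):
    # Hard-sphere compressibility factor Z = p/(n*k*T), Carnahan-Starling form.
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

def Z_vdw(eta):
    # Van der Waals excluded-volume approximation, b = 4x the particle volume.
    return 1.0 / (1.0 - 4.0 * eta)

# Near eta ~ 0.11 the Van der Waals form already deviates strongly from the
# hard-sphere reference, while Carnahan-Starling stays accurate much longer.
for eta in (0.05, 0.11, 0.20):
    print(f"eta={eta:.2f}  Z_CS={Z_carnahan_starling(eta):.3f}  Z_vdW={Z_vdw(eta):.3f}")
```

    The Van der Waals factor even diverges at eta = 0.25, far below random close packing, which is why its validity window is so narrow.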

  17. Cenozoic crustal extension in southeastern Arizona and implications for models of core-complex development

    Science.gov (United States)

    Arca, M. Serkan; Kapp, Paul; Johnson, Roy A.

    2010-06-01

    In conventional models of Cordilleran-style metamorphic core-complex development, initial extension occurs along a breakaway fault, which subsequently is deformed into a synform and abandoned in response to isostatic rebound and new faults breaking forward in the dominant transport direction. The Catalina core complex and associated geology in southeastern Arizona have been pointed to as a type example of this model. From southwest to northeast, the region is characterized by the NW-SE trending Tucson basin, the Catalina core complex, the San Pedro trough and the Galiuro Mountains. The Catalina core complex is bounded by the top-to-the-southwest Catalina detachment fault along its southwestern flank and the low-angle, northeast-dipping San Pedro fault along its northeastern flank. The Galiuro Mountains expose non-mylonitic rocks and are separated from the San Pedro trough to the southwest by a system of low- to moderate-angle southwest-dipping normal faults. This Galiuro fault system is widely interpreted to be the breakaway zone for the Catalina core complex. It is inferred to be folded into a synform beneath the San Pedro trough, to resurface to the southwest as the San Pedro fault, and to have been abandoned during slip along the younger Catalina detachment. This study aimed to test this model through analysis of field relations and geochronological age constraints, and reprocessing and interpretation of 2-D seismic reflection data from the Catalina core complex and San Pedro trough. In contrast to predictions of the conventional breakaway zone model, we raise the possibility of a moderate-angle, southwest-dipping detachment fault beneath the San Pedro trough that could extend to mid-crustal depths beneath the eastern flank of the Catalina Mountains. We present an alternative kinematic model in which extension was accommodated by a pair of top-to-the-southwest normal-fault systems (the Catalina and Galiuro detachment faults), with the only major difference

  18. Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived

    Science.gov (United States)

    Cottrell, E.; Walter, M. J.

    2009-12-01

    The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early Earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single-stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data, (2) the certainty with which regression coefficients are known, and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and the range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affects model outcomes. We consider the most
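    The sampling strategy described (MATLAB in the original) can be sketched in Python. Every regression form, coefficient, uncertainty, and acceptance range below is a placeholder invented for illustration, not a value from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: log D = a + b/T + c*P/T + d*log(fO2). The coefficients and
# their standard errors are hypothetical, standing in for fitted values.
a, b, c, d = 1.5, -4000.0, 80.0, -0.5
sig = {"a": 0.2, "b": 300.0, "c": 10.0, "d": 0.05}
D_target = (10.0, 40.0)   # assumed acceptable core/mantle ratio range

def log_D(P, T, log_fO2, coeffs):
    aa, bb, cc, dd = coeffs
    return aa + bb / T + cc * P / T + dd * log_fO2

n_hits, n_trials = 0, 20000
for _ in range(n_trials):
    # Propagate regression uncertainty by redrawing coefficients each trial,
    # then sample the P-fO2 parameter space along a toy liquidus T(P).
    coeffs = (rng.normal(a, sig["a"]), rng.normal(b, sig["b"]),
              rng.normal(c, sig["c"]), rng.normal(d, sig["d"]))
    P = rng.uniform(20.0, 60.0)          # GPa
    log_fO2 = rng.uniform(-3.0, -1.0)    # log units relative to a buffer
    T = 2500.0 + 25.0 * P                # hypothetical liquidus
    D = 10.0 ** log_D(P, T, log_fO2, coeffs)
    if D_target[0] <= D <= D_target[1]:
        n_hits += 1

print(f"{n_hits} of {n_trials} trials yield a permissible single-stage solution")
```

    Counting hits rather than fitting a single best solution is what turns regression uncertainty into an explicit part of the model outcome.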

  19. Unified Solutions of the Hard-Core Fermi-and Bose-Hubbard Models

    Institute of Scientific and Technical Information of China (English)

    PAN Feng; DAI Lian-Rong

    2003-01-01

    A unified algebraic approach to both the hard-core Fermi- and Bose-Hubbard models is extended to both the finite-site and infinite-site (with periodic condition) cases. Excitation energies and the corresponding wavefunctions of both models with nearest-neighbor hopping are exactly derived by using a new and simple algebraic method. It is found that the spectra of both models are determined simply by the eigenvalue problem of an N × N hopping matrix, where N is the number of sites for a finite system or the period of sites for an infinite system.
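    The reduction to an N × N hopping-matrix eigenvalue problem can be checked numerically for a small periodic ring. This is a generic tight-binding sketch, not the paper's algebraic derivation:

```python
import numpy as np

N, t_hop = 6, 1.0   # six sites, nearest-neighbor hopping amplitude

# N x N hopping matrix with periodic boundary conditions.
H = np.zeros((N, N))
for i in range(N):
    H[i, (i + 1) % N] = -t_hop
    H[(i + 1) % N, i] = -t_hop

# Spectrum from the eigenvalue problem of the hopping matrix ...
evals = np.sort(np.linalg.eigvalsh(H))
# ... which for a periodic ring is -2*t*cos(2*pi*k/N), k = 0..N-1.
analytic = np.sort(-2 * t_hop * np.cos(2 * np.pi * np.arange(N) / N))
```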

  20. A Reacting-Shrinking Core Model for Pyrolysis and Combustion of a Single Biomass Particle

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, Alfredo L.; Avila, Claudio R.; Garcia, Ximena A. (Dept. of Chemical Engineering, Univ. of Concepcion (Chile)). e-mail: algordon@udec.cl

    2008-10-15

    Combustion of a cylindrical biomass particle is modeled by representing the consecutive heating and pyrolysis of the solid, driven by the incident heat flow, with a shrinking reacting-core approximation, and the subsequent combustion with a shrinking unreacted-core description. A two-dimensional approach (radial and axial) describes the mass and heat balances, and first-order kinetics characterizes the reaction rates. The model was solved numerically with a finite-differences algorithm. Results showed good agreement with available experimental data. The parameter analysis for the pyrolysis step shows high sensitivity to the kinetic constants, the incident heat flow, the initial moisture and the particle size. Parameters with a significant effect on combustion are the concentration and effective diffusivity of the oxidizing agent in the atmosphere around the reaction surface.
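    As a toy illustration of the shrinking unreacted-core idea, a surface-reaction-controlled core radius can be marched in time with an explicit finite-difference step. All parameter values are invented, and the full model above is 2-D with coupled heat and mass balances, not this 1-D caricature:

```python
# Surface-reaction-controlled shrinking unreacted core, radial direction only.
k_s = 2.0e-5       # m/s, first-order surface rate constant (hypothetical)
C_ox = 8.0         # mol/m^3, oxidizer concentration at the core surface
rho_m = 500.0      # mol/m^3, molar density of the unreacted solid
r_c = R0 = 1.0e-3  # m, initial core radius

dt, t = 0.1, 0.0   # s, explicit finite-difference time step
while r_c > 0.0:
    r_c -= dt * k_s * C_ox / rho_m   # dr_c/dt = -k_s*C_ox/rho_m
    t += dt

# For this rate law the analytic burnout time is R0*rho_m/(k_s*C_ox) = 3125 s.
print(f"core consumed after ~{t:.0f} s")
```

    Diffusion control through the ash layer would add a resistance that grows as the core shrinks, which is where the sensitivity to effective diffusivity noted above enters.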

  1. A component-based FPGA design framework for neuronal ion channel dynamics simulations.

    Science.gov (United States)

    Mak, Terrence S T; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2006-12-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field-programmable gate array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) and N-methyl-D-aspartate (NMDA) synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution, as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired.
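    The "factoring" of the exponential is in the family of range-reduction tricks used in hardware arithmetic. A software sketch of one common decomposition (our assumption of the general idea, not necessarily the authors' exact circuit) splits exp(x) into a power-of-two factor, which maps to a cheap shift, and a short polynomial on a small residual:

```python
import math

def exp_factored(x, terms=6):
    # Factor exp(x) = 2**k * exp(r) with |r| <= ln(2)/2. The 2**k part is a
    # binary shift in fixed-point hardware; exp(r) needs only a short,
    # well-conditioned polynomial because r is confined to a small interval.
    k = round(x / math.log(2))
    r = x - k * math.log(2)
    poly = sum(r**n / math.factorial(n) for n in range(terms))
    return 2.0**k * poly

# With 6 Taylor terms on the reduced argument, relative error stays tiny.
for x in (-3.0, 0.5, 2.7):
    rel_err = abs(exp_factored(x) - math.exp(x)) / math.exp(x)
    assert rel_err < 1e-5
```

    Division in gating-variable rate equations can be handled with analogous reciprocal approximations, trading lookup memory against logic, which is the trade-off the paper quantifies.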

  2. Supersolid Phase in One-Dimensional Hard-Core Boson Hubbard Model with a Superlattice Potential

    Institute of Scientific and Technical Information of China (English)

    GUO Huai-Ming; LIANG Ying

    2008-01-01

    The ground state of the one-dimensional hard-core boson Hubbard model with a superlattice potential is studied by quantum Monte Carlo methods. We demonstrate that besides the CDW phase and the Mott insulator phase, the supersolid phase emerges due to the presence of the superlattice potential, which reflects the competition with the hopping term. We also study the densities of the sublattices and obtain a clear picture of the distribution of the bosons on the lattice.

  3. Nuclear structure calculations in $^{20}$Ne with No-Core Configuration-Interaction model

    CERN Document Server

    Konieczka, Maciej

    2016-01-01

    Negative-parity states in $^{20}$Ne and the Gamow-Teller strength distribution for the ground-state beta decay of $^{20}$Na are calculated for the very first time using the recently developed No-Core Configuration-Interaction model. The approach is based on multi-reference density functional theory involving isospin and angular-momentum projections. Advantages and shortcomings of the method are briefly discussed.

  4. Beyond the pseudo-time-dependent approach: chemical models of dense core precursors

    CERN Document Server

    Hassel, G E; Bergin, E A

    2010-01-01

    Context: Chemical models of dense cloud cores often utilize the so-called pseudo-time-dependent approximation, in which the physical conditions are held fixed and uniform as the chemistry occurs. In this approximation, the initial abundances chosen, which are totally atomic in nature except for molecular hydrogen, are artificial. A more detailed approach to the chemistry of dense cold cores should include the physical evolution during their early stages of formation. Aims: Our major goal is to investigate the initial synthesis of molecular ices and gas-phase molecules as cold molecular gas begins to form behind a shock in the diffuse interstellar medium. The abundances calculated as the conditions evolve can then be utilized as reasonable initial conditions for a theory of the chemistry of dense cores. Methods: Hydrodynamic shock-wave simulations of the early stages of cold core formation are used to determine the time-dependent physical conditions for a gas-grain chemical network. We follow the cold post-sho...

  5. Finite Element Modeling and Component-based Approach of Steel and Composite Steel Joints at Elevated Temperatures%高温下钢及组合钢节点有限元建模及基于构件的分析方法

    Institute of Scientific and Technical Information of China (English)

    陈江海

    2012-01-01

    ,semi-continuous and continuous joint models [2]. In terms of stiffness, the joint models can be nominally pinned, semi-rigid or rigid, and in terms of strength, they can be classified as nominally pinned, partial-strength or full-strength. While the majority of engineering problems only require linear elastic analysis, there are some special structures that may require advanced analysis to reduce construction cost, for example, in the design of low-rise unbraced steel framed structures. In this context, the incorporation of semi-continuous joints (with semi-rigid and partial-strength characteristics) into frame analysis can significantly enhance structural stiffness and strength against sway arising from notional horizontal loads, wind loads, global imperfections or seismic action, so that the computed lateral drift under governing horizontal loads may be acceptable within the EC 3 stipulations. In the context of a performance-based approach for some fire scenarios, structural fire engineers may want to utilize the inherent stiffness and strength of steel joints, particularly for steel structures with end-plate joints, the most common form of steel construction. The end plates may range from partial-depth and flush to extended end plates, covering nominally pinned, semi-rigid and fully rigid joint models. This paper presents a series of numerical and analytical investigations of the behavior of end-plate joints at elevated temperatures. Applying a "component-based" methodology, the mechanical response of these joints at elevated temperatures has been formulated, incorporating the beam-web shear component and the tension and compression zones of the connection. The component-based approach can consider the effect of thermal restraint on steel joints. Finite element simulations of the steel end-plate joint tests were also performed, and both the component-based and numerical finite element predictions provide acceptable correlations with the test behavior, including the effect of thermal
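    The component-based methodology assembles joint behaviour from idealized springs. A minimal sketch of that assembly logic follows; the stiffness values and lever arm are invented for illustration, not taken from the paper:

```python
# Component-method sketch: joint rotational stiffness assembled from springs.
def series_stiffness(ks):
    # Components on the same load path combine in series (flexibilities add).
    return 1.0 / sum(1.0 / k for k in ks)

# Tension zone: bolt row, end plate in bending, column flange (kN/mm, assumed).
k_tension = series_stiffness([180.0, 90.0, 250.0])
# Add the compression zone and beam-web shear components (kN/mm, assumed).
k_eq = series_stiffness([k_tension, 400.0, 600.0])

z = 0.35                  # m, assumed lever arm between tension/compression
S_j = k_eq * 1e3 * z**2   # rotational stiffness, kN*m/rad
print(f"initial rotational stiffness ~ {S_j:.0f} kN*m/rad")
```

    Elevated temperature enters such a model by degrading each component stiffness and strength with temperature-dependent reduction factors, which is what allows the method to track fire response.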

  6. COMDES-II: A Component-Based Framework for Generative Development of Distributed Real-Time Control Systems

    DEFF Research Database (Denmark)

    Ke, Xu; Sierszecki, Krzysztof; Angelov, Christo K.

    2007-01-01

    The paper presents a generative development methodology and component models of COMDES-II, a component-based software framework for distributed embedded control systems with real-time constraints. The adopted methodology allows for rapid modeling and validation of control software at a higher lev...... methodology for COMDES-II from a general perspective, describes the component models in detail and demonstrates their application through a DC-Motor control system case study....

  7. Prestellar core modeling in the presence of a filament - The dense heart of L1689B

    CERN Document Server

    Steinacker, Juergen; Henning, Thomas; Heigl, Stefan

    2016-01-01

    Short version: We apply a new synergetic radiative transfer method: the derived 1D density profiles are both consistent with a cut through the Herschel PACS/SPIRE and JCMT SCUBA-2 continuum maps of L1689B and with a derived local interstellar radiation field. Choosing an appropriate cut along the filament major axis, we minimize the impact of the filament emission on the modeling. For the bulk of the core (5000-20000 au) an isothermal sphere model with a temperature of around 10 K provides the best fits. We show that the power law index of the density profile, as well as the constant temperature can be derived directly from the radial surface brightness profiles. For the inner region (< 5000 au), we find a range of densities and temperatures that are consistent with the surface brightness profiles and the local interstellar radiation field. Based on our core models, we find that pixel-by-pixel single temperature spectral energy distribution fits are incapable of determining dense core properties. We conclu...

  8. Implementation of EUnetHTA Core Model® in Lombardia: the VTS framework.

    Science.gov (United States)

    Radaelli, Giovanni; Lettieri, Emanuele; Masella, Cristina; Merlino, Luca; Strada, Alberto; Tringali, Michele

    2014-01-01

    This study describes the health technology assessment (HTA) framework introduced by Regione Lombardia to regulate the introduction of new technologies. The study outlines the process and dimensions adopted to prioritize, assess and appraise the requests of new technologies. The HTA framework incorporates and adapts elements from the EUnetHTA Core Model and the EVIDEM framework. It includes dimensions, topics, and issues provided by EUnetHTA Core Model to collect data and process the assessment. Decision making is instead supported by the criteria and Multi-Criteria Decision Analysis technique from the EVIDEM consortium. The HTA framework moves along three process stages: (i) prioritization of requests, (ii) assessment of prioritized technology, (iii) appraisal of technology in support of decision making. Requests received by Regione Lombardia are first prioritized according to their relevance along eight dimensions (e.g., costs, efficiency and efficacy, organizational impact, safety). Evidence about the impacts of the prioritized technologies is then collected following the issues and topics provided by EUnetHTA Core Model. Finally, the Multi-Criteria Decision Analysis technique is used to appraise the novel technology and support Regione Lombardia decision making. The VTS (Valutazione delle Tecnologie Sanitarie) framework has been successfully implemented at the end of 2011. From its inception, twenty-six technologies have been processed.
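    The Multi-Criteria Decision Analysis step can be pictured as a weighted-sum appraisal. The dimensions, weights, and scores below are invented for illustration and are not the EVIDEM criteria set:

```python
# Toy weighted-sum MCDA appraisal (all names and numbers are hypothetical).
weights = {"efficacy": 0.30, "safety": 0.25, "costs": 0.25, "org_impact": 0.20}
scores = {"efficacy": 4, "safety": 3, "costs": 2, "org_impact": 3}  # 0-5 scale

# Weights must form a complete partition of the decision.
assert abs(sum(weights.values()) - 1.0) < 1e-9

value = sum(weights[d] * scores[d] for d in weights)
print(f"appraisal score: {value:.2f} / 5")
```

    In practice the weights are elicited from the appraisal committee, so the same evidence table can yield different rankings under different stakeholder weightings.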

  9. Nuclear inputs of key iron isotopes for core-collapse modeling and simulation

    CERN Document Server

    Nabi, Jameel-Un

    2014-01-01

    From the modeling and simulation results of presupernova evolution of massive stars, it was found that isotopes of iron, $^{54,55,56}$Fe, play a significant role inside the stellar cores, primarily decreasing the electron-to-baryon ratio ($Y_{e}$) mainly via electron capture processes thereby reducing the pressure support. The neutrinos produced, as a result of these capture processes, are transparent to the stellar matter and assist in cooling the core thereby reducing the entropy. The structure of the presupernova star is altered both by the changes in $Y_{e}$ and the entropy of the core material. Here we present the microscopic calculation of Gamow-Teller strength distributions for isotopes of iron. The calculation is also compared with other theoretical models and experimental data. Presented also are stellar electron capture rates and associated neutrino cooling rates, due to isotopes of iron, in a form suitable for simulation and modeling codes. It is hoped that the nuclear inputs presented here should ...

  10. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Energy Technology Data Exchange (ETDEWEB)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.

  11. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    Science.gov (United States)

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be
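    The pipeline organization described, named steps with tracked interdependencies executed in order, can be sketched with the standard library's topological sorter. The step names and toy logic below are invented; CoreFlow itself wraps R/Python/Perl scripts around a MySQL store:

```python
from graphlib import TopologicalSorter

# Hypothetical mini-pipeline: each step declares its dependencies and a
# transformation over a shared data dictionary.
steps = {
    "load_raw": ([], lambda d: d | {"raw": [1.0, 2.0, 4.0]}),
    "correct":  (["load_raw"], lambda d: d | {"corrected": [x * 1.1 for x in d["raw"]]}),
    "model":    (["correct"], lambda d: d | {"mean": sum(d["corrected"]) / len(d["corrected"])}),
    "report":   (["model"], lambda d: d | {"report": f"mean={d['mean']:.2f}"}),
}

# Resolve interdependencies into a valid execution order, then run.
order = list(TopologicalSorter({k: v[0] for k, v in steps.items()}).static_order())
data = {}
for name in order:
    data = steps[name][1](data)
print(data["report"])
```

    Exposing each step's inputs and outputs in one place is what makes the assumptions and manipulations inspectable, which is the design point the abstract emphasizes.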

  12. Severe accident modeling of a PWR core with different cladding materials

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, S. C. [Westinghouse Electric Company LLC, 5801 Bluff Road, Columbia, SC 29209 (United States); Henry, R. E.; Paik, C. Y. [Fauske and Associates, Inc., 16W070 83rd Street, Burr Ridge, IL 60527 (United States)

    2012-07-01

    The MAAP v.4 software has been used to model two severe accident scenarios in nuclear power reactors with three different materials as fuel cladding. The TMI-2 severe accident was modeled with Zircaloy-2 and SiC as clad material, and an SBO accident in a Zion-like, 4-loop, Westinghouse PWR was modeled with Zircaloy-2, SiC, and 304 stainless steel as clad material. TMI-2 modeling results indicate that lower peak core temperatures, less H2(g) produced, and a smaller mass of molten material would result if SiC was substituted for Zircaloy-2 as cladding. SBO modeling results indicate that the calculated time to RCS rupture would increase by approximately 20 minutes if SiC was substituted for Zircaloy-2. Additionally, when an extended SBO accident (RCS creep rupture failure disabled) was modeled, significantly lower peak core temperatures, less H2(g) produced, and a smaller mass of molten material would be generated by substituting SiC for Zircaloy-2 or stainless steel cladding. Because the rate of the SiC oxidation reaction with elevated-temperature H2O(g) was set to 0 for this work, these results should be considered preliminary. However, the benefits of SiC as a more accident-tolerant clad material have been shown, and additional investigations of SiC as an LWR core material are warranted, specifically investigations of the oxidation kinetics of SiC in H2O(g) over the range of temperatures and pressures relevant to severe accidents in LWRs. (authors)

  13. Preparedness for clinical: evaluation of the core elements of the Clinical Immersion curriculum model.

    Science.gov (United States)

    Diefenbeck, Cynthia; Herrman, Judith; Wade, Gail; Hayes, Evelyn; Voelmeck, Wayne; Cowperthwait, Amy; Norris, Susan

    2015-01-01

    The Clinical Immersion Model is an innovative baccalaureate nursing curriculum that has demonstrated successful outcomes over the past 10 years. For those intending to adopt the model, individual components in isolation may prove ineffective. This article describes three core components of the curriculum that form the foundation of preparation for the senior-year clinical immersion. Detailed student-centered outcomes evaluation of these critical components is shared. Results of a mixed-methods evaluation, including surveys and focus groups, are presented. Implications of this curricular evaluation and future directions are explored.

  14. IP cores design from specifications to production modeling, verification, optimization, and protection

    CERN Document Server

    Mohamed, Khaled Salah

    2016-01-01

    This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. A SoC case study is presented to compare traditional verification with the new verification methodologies. The book discusses the entire life cycle process of IP cores, from specification to production; provides an introduction to Verilog from both the implementation and verification points of view; demonstrates how to use IP in applications such as memory controllers and SoC buses; and describes a new ver...

  15. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2010

    Energy Technology Data Exchange (ETDEWEB)

    Rahmat Aryaeinejad; Douglas S. Crawford; Mark D. DeHart; George W. Griffith; D. Scott Lucas; Joseph W. Nielsen; David W. Nigg; James R. Parry; Jorge Navarro

    2010-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or “Core Modeling Update”) Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).

  16. Preliminary Thermal Hydraulic Analyses of the Conceptual Core Models with Tubular Type Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Hee Taek; Park, Jong Hark; Park, Cheol

    2006-11-15

    A new research reactor (AHR, Advanced HANARO Reactor) based on the HANARO has been under conceptual development for the future needs of research reactors. A tubular type fuel was considered as one of the fuel options for the AHR. A tubular type fuel assembly has several curved fuel plates arranged with a constant small gap to build up cooling channels, which is very similar to an annular pipe with many layers. This report presents the preliminary analysis of the thermal-hydraulic characteristics and safety margins for three conceptual core models using tubular fuel assemblies. Four design criteria, namely the fuel temperature, the ONB (Onset of Nucleate Boiling) margin, the minimum DNBR (Departure from Nucleate Boiling Ratio) and the OFIR (Onset of Flow Instability Ratio), were investigated along with various core flow velocities under normal operating conditions. The primary coolant flow rate based on a conceptual core model was also suggested as design information for the process design of the primary cooling system. A computational fluid dynamics analysis was also carried out to evaluate the coolant velocity distributions between tubular channels and the pressure drop characteristics of the tubular fuel assembly.

  17. The treatment of mixing in core helium burning models: I. Implications for asteroseismology

    CERN Document Server

    Constantino, Thomas; Christensen-Dalsgaard, Joergen; Lattanzio, John C; Stello, Dennis

    2015-01-01

    The detection of mixed oscillation modes offers a unique insight into the internal structure of core helium burning (CHeB) stars. The stellar structure during CHeB is very uncertain because the growth of the convective core, and/or the development of a semiconvection zone, is critically dependent on the treatment of convective boundaries. In this study we calculate a suite of stellar structure models and their non-radial pulsations to investigate why the predicted asymptotic g-mode $\ell = 1$ period spacing $\Delta\Pi_1$ is systematically lower than is inferred from Kepler field stars. We find that only models with large convective cores, such as those calculated with our newly proposed "maximal-overshoot" scheme, can match the average $\Delta\Pi_1$ reported. However, we also find another possible solution that is related to the method used to determine $\Delta\Pi_1$: mode trapping can raise the observationally inferred $\Delta\Pi_1$ well above its true value. Even after accounting for these two proposed reso...
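    For context, the asymptotic spacing being compared follows from the standard relation $\Delta\Pi_\ell = 2\pi^2 / (\sqrt{\ell(\ell+1)} \int N \, \mathrm{d}r/r)$ over the g-mode cavity. A toy numerical evaluation with an invented constant buoyancy frequency and an assumed cavity (values chosen only to land near typical CHeB magnitudes, not any model from the paper):

```python
import numpy as np

l = 1
N_BV = 0.02                    # rad/s, constant Brunt-Vaisala frequency (assumed)
R_sun = 6.96e8                 # m
r = np.linspace(0.05, 0.5, 1000) * R_sun   # assumed g-mode cavity bounds

# Trapezoidal evaluation of the buoyancy integral; for constant N this is
# analytically N*ln(r_outer/r_inner).
f = N_BV / r
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

delta_Pi = 2 * np.pi**2 / (np.sqrt(l * (l + 1)) * integral)
print(f"Delta Pi_1 ~ {delta_Pi:.0f} s")   # a few hundred seconds
```

    A larger convective core pushes the inner cavity boundary outward, shrinking the integral and raising $\Delta\Pi_1$, which is the lever behind the "maximal-overshoot" result.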

  18. Modeling Overlapping Laminations in Magnetic Core Materials Using 2-D Finite-Element Analysis

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Guest, Emerson David; Mecrow, Barrie C.

    2015-01-01

    and a composite material is created, which has the same magnetization characteristic. The benefit of this technique is that it allows a designer to perform design and optimization of magnetic cores with overlapped laminations using a 2-D FE model rather than a 3-D FE model, which saves modeling and simulation...... time. The modeling technique is verified experimentally by creating a composite material of a lap joint with a 3-mm overlapping region and using it in a 2-D FE model of a ring sample made up of a stack of 20 laminations. The B-H curve of the simulated ring sample is compared with the B-H curve obtained...

  19. Analytical Model of Thermo-electrical Behaviour in Superconducting Resistive Core Cables

    CERN Document Server

    Calvi, M; Breschi, M; Coccoli, M; Granieri, P; Iriart, G; Lecci, F; Siemko, A

    2006-01-01

    High-field superconducting Nb3Sn accelerator magnets above 14 T, for future High Energy Physics applications, call for improvements in the design of the protection system against resistive transitions. The longitudinal quench propagation velocity (vq) is one of the parameters defining the requirements of the protection. Up to now, vq has always been considered a physical parameter defined by the operating conditions (the bath temperature, cooling conditions, the magnetic field and the overall current density) and the type of superconductor and stabilizer used. It is possible to enhance the quench propagation velocity by segregating a fraction of the stabilizer into the core, while keeping the total amount constant, and tuning the contact resistance between the superconducting strands and the core. An analytical model and computer simulations are presented to explain the phenomenon. The consequences with respect to minimum quench energy are evidenced and the strategy to optimize the cable design is discuss...

  20. Accounting for crustal magnetization in models of the core magnetic field

    Science.gov (United States)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process, using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in horizontal components.

  1. CoreFlow: A computational platform for integration, analysis and modeling of complex biological data

    DEFF Research Database (Denmark)

    Pasculescu, Adrian; Schoof, Erwin; Creixell, Pau

    2014-01-01

    between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion...... provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts...... into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time...
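The interdependency tracking described in this record amounts to scheduling scripts in dependency order. A minimal sketch with Python's standard-library topological sorter; the task names are invented for illustration and are not CoreFlow's actual API:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the tasks it depends on,
# mimicking tracked interdependencies between related analysis steps.
pipeline = {
    "load_raw": [],
    "correct_silac": ["load_raw"],     # e.g. isotopic-labeling correction
    "model_fit": ["correct_silac"],
    "report": ["model_fit"],
}

# A valid execution order runs every task after all of its dependencies.
order = list(TopologicalSorter(pipeline).static_order())
```

Re-running only the tasks downstream of a changed script follows directly from the same graph.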

  2. Efficient n-gram, Skipgram and Flexgram Modelling with Colibri Core

    Directory of Open Access Journals (Sweden)

    Maarten van Gompel

    2016-08-01

    Full Text Available Counting n-grams lies at the core of any frequentist corpus analysis and is often considered a trivial matter. Going beyond consecutive n-grams to patterns such as skipgrams and flexgrams increases the demand for efficient solutions. The need to operate on big corpus data does so even more. Lossless compression and non-trivial algorithms are needed to lower the memory demands, yet retain good speed. Colibri Core is software for the efficient computation and querying of n-grams, skipgrams and flexgrams from corpus data. The resulting pattern models can be analysed and compared in various ways. The software offers a programming library for C++ and Python, as well as command-line tools.
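As a rough illustration of the pattern types involved (this is not Colibri Core's actual API, and real implementations use compressed representations), consecutive n-grams and gapped skipgrams over a token sequence can be enumerated as follows:

```python
from collections import Counter
from itertools import combinations

def ngrams(tokens, n):
    """All consecutive n-grams in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def skipgrams(tokens, n):
    """Length-n patterns in which interior tokens may be replaced by a
    gap marker '{*}' (a simplified notion of a skipgram)."""
    out = []
    for gram in ngrams(tokens, n):
        for k in range(1, n - 1):
            for gaps in combinations(range(1, n - 1), k):
                out.append(tuple("{*}" if i in gaps else tok
                                 for i, tok in enumerate(gram)))
    return out

tokens = "to be or not to be".split()
bigram_model = Counter(ngrams(tokens, 2))   # frequency model of bigrams
```

This naive enumeration is exponential in the number of interior positions, which is exactly why non-trivial algorithms and compression are needed at corpus scale.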

  3. The astrometric core solution for the Gaia mission. Overview of models, algorithms and software implementation

    CERN Document Server

    Lindegren, Lennart; Hobbs, David; O'Mullane, William; Bastian, Ulrich; Hernández, José

    2011-01-01

    The Gaia satellite will observe about one billion stars and other point-like sources. The astrometric core solution will determine the astrometric parameters (position, parallax, and proper motion) for a subset of these sources, using a global solution approach which must also include a large number of parameters for the satellite attitude and optical instrument. The accurate and efficient implementation of this solution is an extremely demanding task, but crucial for the outcome of the mission. We provide a comprehensive overview of the mathematical and physical models applicable to this solution, as well as its numerical and algorithmic framework. The astrometric core solution is a simultaneous least-squares estimation of about half a billion parameters, including the astrometric parameters for some 100 million well-behaved so-called primary sources. The global nature of the solution requires an iterative approach, which can be broken down into a small number of distinct processing blocks (source, attitude,...
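The iterative block scheme described above can be sketched on a toy linear model with two parameter blocks solved alternately; the sizes and block names below are made up and stand in for the (vastly larger) source and attitude blocks:

```python
import numpy as np

# Toy linear model y = A1 @ x1 + A2 @ x2 with two parameter blocks.
rng = np.random.default_rng(0)
A1 = rng.normal(size=(200, 5))          # "source" design block (illustrative)
A2 = rng.normal(size=(200, 3))          # "attitude" design block (illustrative)
x1_true = rng.normal(size=5)
x2_true = rng.normal(size=3)
y = A1 @ x1_true + A2 @ x2_true         # noiseless observations

# Block iteration: solve each block's least-squares problem with the
# other block held fixed, repeating until the joint solution converges.
x1 = np.zeros(5)
x2 = np.zeros(3)
for _ in range(100):
    x1, *_ = np.linalg.lstsq(A1, y - A2 @ x2, rcond=None)
    x2, *_ = np.linalg.lstsq(A2, y - A1 @ x1, rcond=None)
```

Solving per block avoids ever forming the full normal equations, which is the point of the iterative approach when the joint problem has half a billion unknowns.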

  4. An improved energy-collapsing method for core-reflector modelization in SFR core calculations using the PARIS platform

    Energy Technology Data Exchange (ETDEWEB)

    Vidal, J. F.; Archier, P.; Calloo, A.; Jacquet, P.; Tommasi, J. [CEA, DEN, DER, Cadarache, F-13108 Saint-Paul-lez-Durance (France); Le Tellier, R. [CEA, DEN, DTN, Cadarache, F-13108 Saint-Paul-lez-Durance (France)

    2012-07-01

    In the framework of the ASTRID project, sodium-cooled fast reactor studies are conducted at CEA in compliance with GEN IV reactor criteria, particularly for safety requirements. Improved safety requires better calculation tools to obtain accurate reactivity effects (especially the sodium void effect) and power map distributions. The current calculation route relies on the JEFF3.1.1 library and the classical two-step approach performed with the ECCO module of the ERANOS code system at the assembly level and the Sn SNATCH solver - implemented within the PARIS platform - at the core level. The 33-group cross sections used by SNATCH are collapsed from 1968-group self-shielded cross sections with a specific flux-current weighting. Recent studies have shown that this collapsing is non-conservative when dealing with the core-reflector interface and can lead to reactivity discrepancies larger than 500 pcm in the case of a steel reflector. Such a discrepancy is due to the flux anisotropy at the interface, which is not taken into account when cross sections are obtained from separate fuel and reflector assembly calculations. A new approach is proposed in this paper. It consists of separating the self-shielding and the flux calculations. The first is still performed with ECCO on separate patterns. The second is done with SNATCH on a 1D traverse, representative of the core-reflector interface. An improved collapsing method using angular flux moments is then carried out to collapse the cross sections onto the 33-group structure. In the case of a simplified ZONA2B 2D homogeneous benchmark, results in terms of k{sub eff} and power map are strongly improved for a small increase in computing time. (authors)
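For orientation, a plain scalar-flux-weighted group collapse (the baseline that the paper improves upon by weighting with angular flux moments) can be sketched with made-up group data:

```python
import numpy as np

def collapse(sigma, phi, groups):
    """Scalar-flux-weighted collapse of fine-group cross sections:
    sigma_G = sum_g sigma_g * phi_g / sum_g phi_g, for fine groups g in
    coarse group G. Values below are illustrative, not reactor data."""
    return np.array([(sigma[g] * phi[g]).sum() / phi[g].sum()
                     for g in groups])

sigma = np.array([2.0, 4.0, 6.0, 8.0])   # fine-group cross sections
phi = np.array([1.0, 1.0, 3.0, 1.0])     # fine-group scalar fluxes
coarse = collapse(sigma, phi, [[0, 1], [2, 3]])
```

The paper's point is that this scalar weighting loses the angular information that matters at a strongly anisotropic core-reflector interface.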

  5. Initial Comparison of Direct and Legacy Modeling Approaches for Radial Core Expansion Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-10-10

    Radial core expansion in sodium-cooled fast reactors provides an important reactivity feedback effect. As the reactor power increases due to normal start-up conditions or accident scenarios, the core and surrounding materials heat up, causing both grid plate expansion and bowing of the assembly ducts. When the core restraint system is designed correctly, the resulting structural deformations introduce negative reactivity which decreases the reactor power. Historically, an indirect procedure has been used to estimate the reactivity feedback due to structural deformation, which relies upon perturbation theory and coupling legacy physics codes with limited geometry capabilities. With advancements in modeling and simulation, radial core expansion phenomena can now be modeled directly, providing an assessment of the accuracy of the reactivity feedback coefficients generated by indirect legacy methods. Recently, a new capability was added to the PROTEUS-SN unstructured-geometry neutron transport solver to analyze deformed meshes quickly and directly. By supplying the deformed mesh in addition to the base configuration input files, PROTEUS-SN automatically processes material adjustments, including calculation of region densities to conserve mass, calculation of isotopic densities according to material models (for example, sodium density as a function of temperature), and subsequent re-homogenization of materials. To verify the new capability of directly simulating deformed meshes, PROTEUS-SN was used to compute reactivity feedback for a series of contrived yet representative deformed configurations for the Advanced Burner Test Reactor design. The indirect legacy procedure was also performed to generate reactivity feedback coefficients for the same deformed configurations. Interestingly, the legacy procedure consistently overestimated reactivity feedbacks by 35% compared to direct simulations by PROTEUS-SN. 
This overestimation indicates that the legacy procedures are in fact
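The mass-conserving density adjustment mentioned above reduces, per region, to scaling the density by the volume ratio of the base and deformed cells; a minimal sketch with made-up numbers:

```python
def density_after_deformation(rho_old, vol_old, vol_new):
    """Scale a region's density so mass is conserved on the deformed
    mesh: rho_new * V_new = rho_old * V_old."""
    return rho_old * vol_old / vol_new

# A region expanding from 2.0 to 2.5 volume units keeps its mass constant.
rho_new = density_after_deformation(rho_old=10.0, vol_old=2.0, vol_new=2.5)
```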

  6. EXPERIMENTAL EVALUATION OF NUMERICAL MODELS TO REPRESENT THE STIFFNESS OF LAMINATED ROTOR CORES IN ELECTRICAL MACHINES

    Directory of Open Access Journals (Sweden)

    HIDERALDO L. V. SANTOS

    2013-08-01

    Full Text Available Usually, electrical machines have a metallic cylinder made up of a compacted stack of thin metal plates (referred to as the laminated core) assembled with an interference fit on the shaft. The laminated structure is required to improve the electrical performance of the machine and, besides adding inertia, also enhances the stiffness of the system. Inadequate characterization of this element may lead to errors when assessing the dynamic behavior of the rotor. The aim of this work was therefore to evaluate three beam models used to represent the laminated core of rotating electrical machines. The following finite element beam models are analyzed: (i) an “equivalent diameter model”, (ii) an “unbranched model” and (iii) a “branched model”. To validate the numerical models, experiments are performed with nine different electrical rotors so that the first non-rotating natural frequencies and corresponding vibration modes in a free-free support condition are obtained experimentally. The models are evaluated by comparing the natural frequencies and corresponding vibration mode shapes obtained experimentally with those obtained numerically. Finally, a critical discussion of the behavior of the beam models studied is presented. The results show that for the majority of the rotors tested, the “branched model” is the most suitable.

  7. Drug release profile in core-shell nanofibrous structures: a study on Peppas equation and artificial neural network modeling.

    Science.gov (United States)

    Maleki, Mahboubeh; Amani-Tehran, Mohammad; Latifi, Masoud; Mathur, Sanjay

    2014-01-01

    Release profiles of drug constituents encapsulated in electrospun core-shell nanofibrous mats were modeled using the Peppas equation and an artificial neural network. Core-shell fibers were fabricated by a co-axial electrospinning process using tetracycline hydrochloride (TCH) as the core and poly(l-lactide-co-glycolide) (PLGA) or polycaprolactone (PCL) as the shell materials. The density and hydrophilicity of the shell polymers, the feed rates and concentrations of the core and shell phases, the contribution of TCH in the core material and the electrical field were the parameters fed to the perceptron network to predict the Peppas constants and so derive the release pattern. This study demonstrated the viability of the prediction tool in determining the drug release profile of electrospun core-shell nanofibrous scaffolds.
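The Peppas (power-law) release model, Mt/Minf = k * t**n, can be fitted by linear regression in log-log space; the constants below are synthetic and illustrative, not values from the study:

```python
import numpy as np

def fit_peppas(t, release_frac):
    """Fit Mt/Minf = k * t**n by linear regression in log-log space
    (conventionally applied to the early portion of the release curve)."""
    slope, intercept = np.polyfit(np.log(t), np.log(release_frac), 1)
    return np.exp(intercept), slope   # k, n

# Synthetic release curve with known, made-up constants.
t = np.linspace(1.0, 10.0, 20)
k_true, n_true = 0.05, 0.45
frac = k_true * t ** n_true
k, n = fit_peppas(t, frac)
```

The exponent n is the diagnostic quantity: it distinguishes Fickian diffusion from anomalous transport regimes.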

  8. Real-time Performance Verification of Core Protection and Monitoring System with Integrated Model for SMART Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Bon-Seung; Kim, Sung-Jin; Hwang, Dae-Hyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    In keeping with these purposes, a real-time model of the digital core protection and monitoring systems for simulator implementation was developed on the basis of the SCOPS and SCOMS algorithms. In addition, important features of the software models were explained for application to the SMART simulator, and the real-time performance of the models linked with a DLL was examined for various simulation scenarios. In this paper, a real-time performance verification of the core protection and monitoring software for the SMART simulator is performed with the integrated simulator model. Various DLL connection tests were done for software algorithm changes. In addition, typical accident scenarios of SMART were simulated with 3KEYMASTER and the simulated results were compared with those of the DLL-linked core protection and monitoring software. Each calculational result showed good agreement.

  9. DYNAMICO, an atmospheric dynamical core for high-performance climate modeling

    Science.gov (United States)

    Dubos, Thomas; Meurdesoif, Yann; Spiga, Aymeric; Millour, Ehouarn; Fita, Lluis; Hourdin, Frédéric; Kageyama, Masa; Traore, Abdoul-Khadre; Guerlet, Sandrine; Polcher, Jan

    2017-04-01

    Institut Pierre Simon Laplace has developed a very scalable atmospheric dynamical core, DYNAMICO, based on energy-conserving finite-difference/finite-volume numerics on a quasi-uniform icosahedral-hexagonal mesh. Scalability is achieved by combining hybrid MPI/OpenMP parallelism with asynchronous I/O. This dynamical core has been coupled to radiative transfer physics tailored to the atmosphere of Saturn, allowing unprecedented simulations of the climate of this giant planet. For terrestrial climate studies DYNAMICO is being integrated into the IPSL Earth System Model IPSL-CM. Preliminary aquaplanet and AMIP-style simulations yield reasonable results when compared to outputs from IPSL-CM5. The observed performance suggests that an order of magnitude may be gained with respect to IPSL-CM CMIP5 simulations, either in the duration of simulations or in their resolution. Longer simulations would be of interest for the study of paleoclimate, while higher resolution could improve certain aspects of the modeled climate such as extreme events, as will be explored in the HighResMIP project. Following IPSL's strategic vision of building a unified global-regional modelling system, a fully compressible, non-hydrostatic prototype of DYNAMICO has been developed, enabling future convection-resolving simulations. Work supported by ANR project "HEAT", grant number CE23_2014_HEAT. Dubos, T., Dubey, S., Tort, M., Mittal, R., Meurdesoif, Y., and Hourdin, F.: DYNAMICO-1.0, an icosahedral hydrostatic dynamical core designed for consistency and versatility, Geosci. Model Dev., 8, 3131-3150, doi:10.5194/gmd-8-3131-2015, 2015.

  10. Constraints on geomagnetic secular variation modeling from electromagnetism and fluid dynamics of the Earth's core

    Science.gov (United States)

    Benton, E. R.

    1986-01-01

    A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of vertical magnetic field. These fifteen nonlinear constraints are included like data in an iterative least squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC (12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that for the unconstrained model.

  11. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  12. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Devin A. Steuhm

    2011-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or 'Core Modeling Update') Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the anticipated ATR Core Internals Changeout (CIC) in the 2014 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its first full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (SCALE, KENO-6, HELIOS, NEWT, and ATTILA) have been installed at the INL under various permanent sitewide license agreements and corresponding baseline models of the ATR and ATRC are now operational, demonstrating the basic feasibility of these code packages for their intended purpose. Furthermore

  13. Radiation Transfer of Models of Massive Star Formation. I. Dependence on Basic Core Properties

    CERN Document Server

    Zhang, Yichen

    2011-01-01

    Radiative transfer calculations of massive star formation are presented. These are based on the Turbulent Core Model of McKee & Tan and self-consistently include a hydrostatic core, an inside-out expansion wave, a zone of free-falling rotating collapse, wide-angle dust-free outflow cavities, an active accretion disk, and a massive protostar. For the first time for such models, an optically thick inner gas disk extends inside the dust destruction front. This is important for conserving the accretion energy naturally and for its shielding effect on the outer region of the disk and envelope. The simulation of radiation transfer is performed with the Monte Carlo code of Whitney, yielding spectral energy distributions (SEDs) for the model series, from the simplest spherical model to the fiducial one, with the above components added step-by-step. Images are also presented in different wavebands of various telescope cameras, including Spitzer IRAC and MIPS, SOFIA FORCAST and Herschel PACS and SPIRE. The exist...

  14. Two-Dimensional Core-Collapse Supernova Models with Multi-Dimensional Transport

    CERN Document Server

    Dolence, Joshua C; Zhang, Weiqun

    2014-01-01

    We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant $\mathcal{O}(v/c)$ terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate $\mathcal{O}(v/c)$ terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 milliseconds after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the ...

  15. An Evaluation of Component-Based Software Design Approaches

    OpenAIRE

    Puppin, Diego; Silvestri, Fabrizio; Laforenza, Domenico

    2004-01-01

    There is growing attention for a component-oriented software design of Grid applications. Within this framework, applications are built by assembling together independently developed software components. A component is a software unit with a clearly defined interface and explicit dependencies. It is designed to be integrated with other components, but independently from them. Unix filters and the pipe composition model, the first successful component-oriented model, allowed more complex appli...

  16. Identification of DNA motifs implicated in maintenance of bacterial core genomes by predictive modeling.

    Science.gov (United States)

    Halpern, David; Chiapello, Hélène; Schbath, Sophie; Robin, Stéphane; Hennequet-Antier, Christelle; Gruss, Alexandra; El Karoui, Meriem

    2007-09-01

    Bacterial biodiversity at the species level, in terms of gene acquisition or loss, is so immense that it raises the question of how essential chromosomal regions are spared from uncontrolled rearrangements. Protection of the genome likely depends on specific DNA motifs that impose limits on the regions that undergo recombination. Although most such motifs remain unidentified, they are theoretically predictable based on their genomic distribution properties. We examined the distribution of the "crossover hotspot instigator," or Chi, in Escherichia coli, and found that its exceptional distribution is restricted to the core genome common to three strains. We then formulated a set of criteria that were incorporated in a statistical model to search core genomes for motifs potentially involved in genome stability in other species. Our strategy led us to identify and biologically validate two distinct heptamers that possess Chi properties, one in Staphylococcus aureus, and the other in several streptococci. This strategy paves the way for wide-scale discovery of other important functional noncoding motifs that distinguish core genomes from the strain-variable regions.
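A crude version of the over-representation idea behind such predictions, comparing observed motif counts against an i.i.d. base-composition null, can be sketched as follows (the statistical model used in the paper is more sophisticated, and the toy sequence is invented):

```python
from math import prod

def observed_count(seq, motif):
    """Count (possibly overlapping) occurrences of motif in seq."""
    m = len(motif)
    return sum(seq[i:i + m] == motif for i in range(len(seq) - m + 1))

def expected_count(seq, motif):
    """Expected occurrences under an i.i.d. base-composition null model."""
    n = len(seq)
    freq = {b: seq.count(b) / n for b in "ACGT"}
    return (n - len(motif) + 1) * prod(freq[b] for b in motif)

chi = "GCTGGTGG"                    # the E. coli Chi octamer
seq = chi * 5 + "ACGTACGTACGT"      # toy sequence artificially enriched in Chi
obs = observed_count(seq, chi)
```

A large observed/expected ratio, restricted to the core genome, is the kind of distributional signature the study exploits.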

  17. RF-TSV DESIGN, MODELING AND APPLICATION FOR 3D MULTI-CORE COMPUTER SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Yu Le; Yang Haigang; Xie Yuanlu

    2012-01-01

    The state-of-the-art multi-core computer systems are based on Very Large Scale three Dimensional (3D) Integrated circuits (VLSI).In order to provide high-speed vertical data transmission in such 3D systems,efficient Through-Silicon Via (TSV) technology is critically important.In this paper,various Radio Frequency (RF) TSV designs and models are proposed.Specifically,the Cu-plug TSV with surrounding ground TSVs is used as the baseline structure.For further improvement,the dielectric coaxial and novel air-gap coaxial TSVs are introduced.Using the empirical parameters of these coaxial TSVs,the simulation results are obtained demonstrating that these coaxial RF-TSVs can provide two-order higher of cut-off frequencies than the Cu-plug TSVs.Based on these new RF-TSV technologies,we propose a novel 3D multi-core computer system as well as new architectures for manipulating the interfaces between RF and baseband circuit.Taking into consideration the scaling down of IC manufacture technologies,predictions for the performance of future generations of circuits are made.With simulation results indicating energy per bit and area per bit being reduced by 7% and 11% respectively,we can conclude that the proposed method is a worthwhile guideline for the design of future multi-core computer ICs.

  18. Identification of DNA motifs implicated in maintenance of bacterial core genomes by predictive modeling.

    Directory of Open Access Journals (Sweden)

    David Halpern

    2007-09-01

    Full Text Available Bacterial biodiversity at the species level, in terms of gene acquisition or loss, is so immense that it raises the question of how essential chromosomal regions are spared from uncontrolled rearrangements. Protection of the genome likely depends on specific DNA motifs that impose limits on the regions that undergo recombination. Although most such motifs remain unidentified, they are theoretically predictable based on their genomic distribution properties. We examined the distribution of the "crossover hotspot instigator," or Chi, in Escherichia coli, and found that its exceptional distribution is restricted to the core genome common to three strains. We then formulated a set of criteria that were incorporated in a statistical model to search core genomes for motifs potentially involved in genome stability in other species. Our strategy led us to identify and biologically validate two distinct heptamers that possess Chi properties, one in Staphylococcus aureus, and the other in several streptococci. This strategy paves the way for wide-scale discovery of other important functional noncoding motifs that distinguish core genomes from the strain-variable regions.

  19. Interpreting H2O isotope variations in high-altitude ice cores using a cyclone model

    Science.gov (United States)

    Holdsworth, Gerald

    2008-04-01

    Vertical profiles of isotope (δ18O or δD) values versus altitude (z) from sea level to high altitude provide a link to cyclones, which impact most ice core sites. Cyclonic structure variations cause anomalous variations in ice core δ time series which may obscure the basic temperature signal. Only one site (Mount Logan, Yukon) provides a complete δ versus z profile generated solely from data. At other sites, such a profile has to be constructed by supplementing field data. This requires using the so-called isotopic or δ thermometer, which relates δ to a reference temperature (T). The construction of gapped sections of δ versus z curves requires assuming a typical atmospheric lapse rate (dT/dz), where T is air temperature, and using the slope (dδ/dT) of a site-derived δ thermometer to calculate dδ/dz. Using a three-layer model of a cyclone, examples are given to show geometrically how changes in the thickness of the middle, mixed layer lead to the appearance of anomalous δ values in time series (producing decalibration of the δ thermometer there). The results indicate that restrictions apply to the use of the δ thermometer in ice core paleothermometry, according to site altitude, regional meteorology, and climate state.
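The construction step described above is a chain-rule product, dδ/dz = (dδ/dT)(dT/dz). With illustrative numbers (a plausible thermometer slope and lapse rate, not the paper's values):

```python
# Illustrative numbers only, not site-derived values from the paper:
d_delta_dT = 0.6    # permil per deg C: slope of a delta thermometer
dT_dz = -6.5        # deg C per km: assumed atmospheric lapse rate

# Chain rule gives the isotope-altitude gradient used to fill gapped sections:
d_delta_dz = d_delta_dT * dT_dz     # permil per km
```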

  20. GALFIT-CORSAIR: implementing the core-Sersic model into GALFIT

    CERN Document Server

    Bonfini, P

    2014-01-01

    We introduce GALFIT-CORSAIR: a publicly available, fully retro-compatible modification of the 2D fitting software GALFIT (v.3) which adds an implementation of the core-Sersic model. We demonstrate the software by fitting the images of NGC 5557 and NGC 5813, which have been previously identified as core-Sersic galaxies by their 1D radial light profiles. These two examples are representative of different dust obscuration conditions, and of bulge/disk decomposition. To perform the analysis, we obtained deep Hubble Legacy Archive (HLA) mosaics in the F555W filter (~V-band). We successfully reproduce the results of the previous 1D analysis, modulo the intrinsic differences between the 1D and the 2D fitting procedures. The code and the analysis procedure described here have been developed for the first coherent 2D analysis of a sample of core-Sersic galaxies, which will be presented in a forthcoming paper. As the 2D analysis provides better constraints on multi-component fitting, and is fully seeing-corrected, it...

  1. An approach to model reactor core nodalization for deterministic safety analysis

    Science.gov (United States)

    Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd

    2016-01-01

    Adopting a good nodalization strategy is essential to produce an accurate and high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies, from small research reactors (e.g., 250 kW) up to bigger research reactors (e.g., 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6, stainless steel clad, and graphite reflector), have been collected, analyzed and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.

  2. Core competency requirements among extension workers in peninsular Malaysia: Use of Borich's needs assessment model.

    Science.gov (United States)

    Umar, Sulaiman; Man, Norsida; Nawi, Nolila Mohd; Latif, Ismail Abd; Samah, Bahaman Abu

    2017-06-01

    The study described the perceived importance of, and proficiency in, core agricultural extension competencies among extension workers in Peninsular Malaysia, and evaluated the resultant competency deficits. Borich's Needs Assessment Model was used to achieve the objectives of the study. A sample of 298 respondents was randomly selected and interviewed using a pre-tested structured questionnaire. Thirty-three core competency items were assessed. Instrument validity and reliability were ensured. The cross-sectional data obtained were analysed using SPSS for descriptive statistics, including the mean weighted discrepancy score (MWDS). Results of the study showed that on a scale of 5, the most important core extension competency items according to respondents' perception were: "Making good use of information and communication technologies/access and use of web-based resources" (M=4.86, SD=0.23); "Conducting needs assessments" (M=4.84, SD=0.16); "Organizing extension campaigns" (M=4.82, SD=0.47) and "Managing groups and teamwork" (M=4.81, SD=0.76). In terms of proficiency, the highest competency identified by the respondents was "Conducting farm and home visits" (M=3.62, SD=0.82), followed by "Conducting meetings effectively" (M=3.19, SD=0.72); "Conducting focus group discussions" (M=3.16, SD=0.32) and "Conducting community forums" (M=3.13, SD=0.64). The discrepancies implying competency deficits were widest in "Acquiring and allocating resources" (MWDS=12.67); use of information and communication technologies (ICTs) and web-based resources in agricultural extension (MWDS=12.59); and report writing and sharing the results and impacts (MWDS=11.92). It is recommended that any intervention aimed at developing the capacity of extension workers in Peninsular Malaysia should prioritize these core competency items in accordance with the deficits established in this study. Copyright © 2017 Elsevier Ltd. All rights reserved.
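    A minimal sketch of how a Borich mean weighted discrepancy score of the kind reported above is commonly computed: each respondent's discrepancy (importance minus proficiency) is weighted by the item's mean importance, then averaged. The function name and data layout are illustrative, not taken from the paper.

```python
def mwds(importance, proficiency):
    """Borich mean weighted discrepancy score for one competency item.

    importance, proficiency: per-respondent ratings (same length).
    """
    n = len(importance)
    mean_importance = sum(importance) / n
    # per-respondent discrepancy, weighted by the item's mean importance
    weighted = [(imp - prof) * mean_importance
                for imp, prof in zip(importance, proficiency)]
    return sum(weighted) / n
```

Items are then ranked by MWDS to prioritize training needs, as done in the study.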

  3. An approach to model reactor core nodalization for deterministic safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my; Samsudin, Mohd Rafie, E-mail: rafies@tnb.com.my [Nuclear Energy Department, Regulatory Economics & Planning Division, Tenaga Nasional Berhad (Malaysia); Mamat Ibrahim, Mohd Rizal, E-mail: m-rizal@nuclearmalaysia.gov.my [Prototypes & Plant Development Center, Malaysian Nuclear Agency (Malaysia); Roslan, Ridha, E-mail: ridha@aelb.gov.my; Sadri, Abd Aziz [Nuclear Installation Divisions, Atomic Energy Licensing Board (Malaysia); Farid, Mohd Fairus Abd [Reactor Technology Center, Malaysian Nuclear Agency (Malaysia)

    2016-01-22

    Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for research reactors ranging from small (e.g., 250 kW) up to larger ones (e.g., 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH{sub 1.6} fuel, stainless steel cladding, and graphite reflector), have been collected, analyzed and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D{sup ®} computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.
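    One of the channel parameters listed above, the hydraulic diameter, follows from a standard definition; a minimal sketch (the function name is illustrative, and the numbers in the usage are not RTP data):

```python
def hydraulic_diameter(flow_area, wetted_perimeter):
    # standard definition used when nodalizing coolant channels: D_h = 4A / P
    # flow_area in m^2, wetted_perimeter in m, result in m
    return 4.0 * flow_area / wetted_perimeter
```

For example, a channel with 0.01 m² flow area and 0.4 m wetted perimeter has a hydraulic diameter of 0.1 m.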

  4. A moist aquaplanet variant of the Held-Suarez test for atmospheric model dynamical cores

    Science.gov (United States)

    Thatcher, Diana R.; Jablonowski, Christiane

    2016-04-01

    A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held-Suarez (HS) test that was developed for dry simulations on "a flat Earth" and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics-physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics-dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. The new moist variant of the HS test can be considered a test case of intermediate complexity.
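    The Newtonian temperature relaxation at the heart of the Held-Suarez forcing can be sketched as a single explicit time step of dT/dt = -(T - T_eq)/tau; the function below is a hedged illustration of that relaxation term only, not the MITC package itself.

```python
def relax_temperature(t, t_eq, tau, dt):
    # one explicit Euler step of the Newtonian relaxation dT/dt = -(T - T_eq)/tau
    # t, t_eq in K; tau (relaxation timescale) and dt in seconds
    return t + dt * (-(t - t_eq) / tau)
```

With a typical Held-Suarez timescale of tens of days, each model time step nudges the temperature only slightly toward the prescribed equilibrium profile.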

  5. Modelling of Equilibrium Between Mantle and Core: Refractory, Volatile, and Highly Siderophile Elements

    Science.gov (United States)

    Righter, K.; Danielson, L.; Pando, K.; Shofner, G.; Lee, C. -T.

    2013-01-01

    Siderophile elements have been used to constrain the conditions of core formation and differentiation for the Earth, Mars and other differentiated bodies [1]. Recent models for the Earth have concluded that the mantle and core did not fully equilibrate, and that the siderophile element contents of the mantle can only be explained under conditions where the oxygen fugacity changes from low to high during accretion [2,3]. However, these conclusions run against several physical and chemical constraints. First, calculations suggest that even with the composition of accreting material changing from reduced to oxidized over time, the fO2 defined by metal-silicate equilibrium does not change substantially, only by approximately 1 log fO2 unit [4], whereas an increase of more than 2 log fO2 units in mantle oxidation is required by the models of [2,3]. Secondly, calculations also show that metallic impacting material will become deformed and sheared during accretion to a large body, such that it becomes emulsified to a fine scale that allows equilibrium at nearly all conditions, except possibly at the length scale of giant impacts [5] (contrary to the conclusions of [6]). Using new data for D(Mo) metal/silicate at high pressures, together with updated partitioning expressions for many other elements, we will show that metal-silicate equilibrium across a long span of Earth's accretion history may explain the concentrations of many siderophile elements in Earth's mantle. The modeling includes the refractory elements Ni, Co, Mo, and W, as well as the highly siderophile elements Au, Pd and Pt, and the volatile elements Cd, In, Bi, Sb, Ge and As.

  6. Giant planet formation in the framework of the core instability model

    CERN Document Server

    Fortier, Andrea

    2010-01-01

    In this Thesis I studied the formation of the four giant planets of the Solar System in the framework of the nucleated instability hypothesis. The model considers that solid and gas accretion are coupled in an interactive fashion, taking into account detailed constitutive physics for the envelope. The accretion rate of the core corresponds to the oligarchic growth regime. I also considered that accreted planetesimals follow a size distribution. One of the main results of this Thesis is that I was able to compute the formation of Jupiter, Saturn, Uranus and Neptune in less than 10 million years, which is considered to be the mean lifetime of protoplanetary disks.

  7. Fractional charge separation in the hard-core Bose Hubbard Model on the Kagome Lattice

    Science.gov (United States)

    Zhang, Xue Feng; Eggert, Sebastian

    2013-03-01

    We consider the hard-core Bose-Hubbard model on a Kagome lattice with fixed (open) boundary conditions on two edges. We find that the fixed boundary conditions lift the degeneracy and freeze the system at 1/3 and 2/3 filling at small hopping. At larger hopping strengths, fractional charges spontaneously separate and are free to move to the edges of the system, which leads to a novel compressible phase with solid order. The compressibility is due to excitations on the edge which display a chiral symmetry breaking that is reminiscent of the quantum Hall effect. Large-scale Monte Carlo simulations confirm the analytical calculations.

  8. No-Core Shell Model Calculations in Light Nuclei with Three-Nucleon Forces

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, B R; Vary, J P; Nogga, A; Navratil, P; Ormand, W E

    2004-01-08

    The ab initio No-Core Shell Model (NCSM) has recently been expanded to include nucleon-nucleon (NN) and three-nucleon (3N) interactions at the three-body cluster level. Here it is used to predict binding energies and spectra of p-shell nuclei based on realistic NN and 3N interactions. It is shown that 3N force (3NF) properties can be studied in these nuclear systems. First results show that interactions based on chiral perturbation theory lead to a realistic description of {sup 6}Li.

  9. Energy spectrum and phase diagrams of two-sublattice hard-core boson model

    Directory of Open Access Journals (Sweden)

    I.V. Stasyuk

    2013-06-01

    The energy spectrum, spectral density and phase diagrams have been obtained for the two-sublattice hard-core boson model within the random phase approximation approach. The reconstruction of the boson spectrum under changes of temperature, chemical potential and the energy difference between local positions in the sublattices is studied. Phase diagrams are built illustrating the regions of existence of a normal phase, which can be close to a Mott-insulator (MI) or charge-density-wave (CDW) phase, as well as of a phase with a Bose-Einstein condensate (SF phase).

  10. COOPERATIVE MODEL FOR OPTIMIZATION OF EXECUTION OF THREADS ON MULTI-CORE SYSTEM

    Directory of Open Access Journals (Sweden)

    A. A. Prihozhy

    2014-01-01

    The problem of increasing the efficiency of multi-thread applications on multi-core systems is investigated. A cooperative optimization model of thread execution is proposed. It optimizes the execution order of computational operations and data-exchange operations, decreases the overall execution time of the multi-thread application by reducing the critical path in the concurrent algorithm graph, increases the application throughput as the number of threads grows, and excludes the competition among threads that is specific to preemptive multitasking.
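    The critical path mentioned above is the longest chain of dependent operations in the task graph; a minimal sketch of computing its length for a task DAG (the function name and the dictionary-based graph encoding are illustrative, not from the paper):

```python
def critical_path_length(durations, deps):
    """Length of the critical (longest) path through a task DAG.

    durations: task -> execution time; deps: task -> list of prerequisite tasks.
    """
    memo = {}

    def finish(task):
        # earliest finish time of a task = its duration plus the latest
        # finish time among its prerequisites (memoized recursion)
        if task not in memo:
            memo[task] = durations[task] + max(
                (finish(d) for d in deps.get(task, [])), default=0)
        return memo[task]

    return max(finish(t) for t in durations)
```

No scheduling of a multi-thread application can finish faster than this bound, which is why reducing the critical path shortens overall execution time.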

  11. Modeling and simulation in the systems engineering life cycle core concepts and accompanying lectures

    CERN Document Server

    Loper, Margaret L

    2015-01-01

    This easy to read text/reference provides a broad introduction to the fundamental concepts of modeling and simulation (M&S) and systems engineering, highlighting how M&S is used across the entire systems engineering lifecycle. Each chapter corresponds to a short lecture covering a core topic in M&S or systems engineering.  Topics and features: reviews the full breadth of technologies, methodologies and uses of M&S, rather than just focusing on a specific aspect of the field; presents contributions from renowned specialists in each topic covered; introduces the foundational elements and proce

  12. Core/shell CdS/ZnS nanoparticles: Molecular modelling and characterization by photocatalytic decomposition of Methylene Blue

    Science.gov (United States)

    Praus, Petr; Svoboda, Ladislav; Tokarský, Jonáš; Hospodková, Alice; Klemm, Volker

    2014-02-01

    Core/shell CdS/ZnS nanoparticles were modelled in the Materials Studio environment and synthesized by a one-pot procedure. The CdS core radius and the thickness of the ZnS shell, composed of 1-3 ZnS monolayers, were predicted from the molecular models. From UV-vis absorption spectra of the CdS/ZnS colloid dispersions, transition energies of the CdS and ZnS nanostructures were calculated. They indicated penetration of electrons and holes from the CdS core into the ZnS shell and relaxation strain in the ZnS shell structure. The transition energies were used for calculation of the CdS core radius by the Schrödinger equation. Both the relaxation strain in the ZnS shells and the size of the CdS core radius were predicted by the molecular modelling. The ZnS shell thickness and the degree of CdS core coverage were characterized by the photocatalytic decomposition of Methylene Blue (MB) using CdS/ZnS nanoparticles as photocatalysts. The observed kinetic constants of the MB photodecomposition (kobs) were evaluated and a relationship between kobs and the ZnS shell thickness was derived. Regression results revealed that 86% of the CdS core surface was covered with ZnS and that the average thickness of the ZnS shell was about 12% higher than that predicted by the molecular modelling.
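    An observed kinetic constant kobs of the kind used above is typically extracted by assuming pseudo-first-order decay, ln(C0/C) = kobs·t, and fitting the slope; the sketch below shows that fit under that common assumption (the paper's exact regression procedure is not specified here).

```python
import math

def fit_kobs(times, conc):
    # least-squares slope of ln(C0/C) versus t, forced through the origin,
    # assuming pseudo-first-order photodecomposition kinetics
    c0 = conc[0]
    y = [math.log(c0 / c) for c in conc]
    num = sum(t * yi for t, yi in zip(times, y))
    den = sum(t * t for t in times)
    return num / den
```

On synthetic data generated with a known rate constant, the fit recovers that constant, which is a useful sanity check before applying it to measured absorbance-derived concentrations.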

  13. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  14. No-Core Shell Model for A = 47 and A = 49

    Energy Technology Data Exchange (ETDEWEB)

    Vary, J P; Negoita, A G; Stoica, S

    2006-11-13

    We apply the no-core shell model to the nuclear structure of odd-mass nuclei straddling {sup 48}Ca. Starting with the NN interaction, that fits two-body scattering and bound state data, we evaluate the nuclear properties of A = 47 and A = 49 nuclei while preserving all the underlying symmetries. Due to model space limitations and the absence of three-body interactions, we incorporate phenomenological interaction terms determined by fits to A = 48 nuclei in a previous effort. Our modified Hamiltonian produces reasonable spectra for these odd-mass nuclei. In addition to the differences in single-particle basis states, the absence of a single-particle Hamiltonian in our no-core approach complicates comparisons with valence effective NN interactions. We focus on purely off-diagonal two-body matrix elements since they are not affected by ambiguities in the different roles for one-body potentials and we compare selected sets of fp-shell matrix elements of our initial and modified Hamiltonians in the harmonic oscillator basis with those of a recent model fp-shell interaction, the GXPF1 interaction of Honma et al. While some significant differences emerge from these comparisons, there is an overall reasonably good correlation between our off-diagonal matrix elements and those of GXPF1.

  15. No-Core Shell Model for 48-Ca, 48-Sc and 48-Ti

    Energy Technology Data Exchange (ETDEWEB)

    Popescu, S; Stoica, S; Vary, J P; Navratil, P

    2004-10-26

    The authors report the first no-core shell model results for {sup 48}Ca, {sup 48}Sc and {sup 48}Ti with derived and modified two-body Hamiltonians. They use an oscillator basis with a limited {bar h}{Omega} range around 40/A{sup 1/3} = 11 MeV and a limited model space up to 1 {bar h}{Omega}. No single-particle energies are used. They find that the charge dependence of the bulk binding energy of eight A = 48 nuclei is reasonably described with an effective Hamiltonian derived from the CD-Bonn interaction, while there is an overall underbinding by about 0.4 MeV/nucleon. However, the resulting spectra exhibit deficiencies that are anticipated due to: (1) basis space limitations and/or the absence of effective many-body interactions; and (2) the absence of genuine three-nucleon interactions. They introduce phenomenological modifications to obtain fits to total binding and low-lying spectra. The resulting no-core shell model opens a path for applications to experiments such as the double-beta ({beta}{beta}) decay process.

  16. Experimental earth tidal models in considering nearly diurnal free wobble of the Earth's liquid core

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Based on 28 series of high-precision tidal gravity observations with minute sampling at 20 stations in the Global Geodynamics Project (GGP) network, the resonant parameters of the Earth's nearly diurnal free wobble (including the eigenperiods, resonant strengths and quality factors) are precisely determined. The discrepancy of the eigenperiod between observed and theoretical values is studied, and the important conclusion that the real dynamic ellipticity of the liquid core is about 5% larger than the value under the static equilibrium assumption is confirmed using our gravity technique. Experimental Earth tidal gravity models accounting for the nearly diurnal free wobble of the Earth's liquid core are constructed in this study. The numerical results show that the difference among the three experimental models is less than 0.1%, and the largest discrepancy compared to the widely used models of Dehant (1999) and Mathews (2001) is only about 0.4%. This provides the most recent experimental tidal gravity models for the global study of Earth tides, geodesy, space techniques and so on.

  17. Multi-core CPU or GPU-accelerated Multiscale Modeling for Biomolecular Complexes.

    Science.gov (United States)

    Liao, Tao; Zhang, Yongjie; Kekenes-Huskey, Peter M; Cheng, Yuhui; Michailova, Anushka; McCulloch, Andrew D; Holst, Michael; McCammon, J Andrew

    2013-07-01

    Multi-scale modeling plays an important role in understanding the structure and biological functionalities of large biomolecular complexes. In this paper, we present an efficient computational framework to construct multi-scale models from atomic resolution data in the Protein Data Bank (PDB), which is accelerated by multi-core CPUs and programmable Graphics Processing Units (GPUs). A multi-level summation of Gaussian kernel functions is employed to generate implicit models for biomolecules. The coefficients in the summation are designed as functions of the structure indices, which specify the structures at a certain level and enable local resolution control on the biomolecular surface. A method called neighboring search is adopted to locate the grid points close to the expected biomolecular surface and to reduce the number of grid points to be analyzed. For a specific grid point, a KD-tree or bounding volume hierarchy is applied to search for the atoms contributing to its density computation, and faraway atoms are ignored due to the decay of the Gaussian kernel functions. In addition to density map construction, three modes are also employed and compared during mesh generation and quality improvement to generate high-quality tetrahedral meshes: CPU sequential, multi-core CPU parallel and GPU parallel. We have applied our algorithm to several large proteins and obtained good results.
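    The core density evaluation described above, a sum of Gaussian kernels where faraway atoms are skipped, can be sketched as follows; the brute-force distance test here stands in for the paper's KD-tree / bounding-volume search, and the function name and parameters are illustrative.

```python
import math

def density_at(point, atoms, sigma, cutoff):
    """Gaussian-kernel density at one grid point.

    atoms: iterable of (x, y, z) centers; atoms farther than `cutoff`
    are ignored, exploiting the rapid decay of the Gaussian kernel.
    """
    total = 0.0
    for ax, ay, az in atoms:
        d2 = (point[0] - ax) ** 2 + (point[1] - ay) ** 2 + (point[2] - az) ** 2
        if d2 > cutoff * cutoff:
            continue  # faraway atom: negligible contribution, skip it
        total += math.exp(-d2 / (2.0 * sigma * sigma))
    return total
```

In the real framework this inner loop runs per grid point in parallel on multi-core CPUs or GPUs, with a spatial index replacing the linear scan over atoms.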

  18. No-core configuration-interaction model for the isospin- and angular-momentum-projected states

    CERN Document Server

    Satula, W; Dobaczewski, J; Konieczka, M

    2016-01-01

    [Background] Single-reference density functional theory is very successful in reproducing bulk nuclear properties like binding energies, radii, or quadrupole moments throughout the entire periodic table. Its extension to the multi-reference level allows for restoring symmetries and, in turn, for calculating transition rates. [Purpose] We propose a new no-core-configuration-interaction (NCCI) model treating properly isospin and rotational symmetries. The model is applicable to any nucleus irrespective of its mass and neutron- and proton-number parity. It properly includes polarization effects caused by an interplay between the long- and short-range forces acting in the atomic nucleus. [Methods] The method is based on solving the Hill-Wheeler-Griffin equation within a model space built of linearly-dependent states having good angular momentum and properly treated isobaric spin. The states are generated by means of the isospin and angular-momentum projection applied to a set of low-lying (multi)particle-(multi)h...

  19. A plastic indentation model for sandwich beams with metallic foam cores

    Institute of Scientific and Technical Information of China (English)

    Zhong-You Xie; Ji-Lin Yu; Zhi-Jun Zheng

    2011-01-01

    Lightweight, high-performance sandwich composite structures have been used extensively in various load-bearing applications. Experiments have shown that indentation significantly reduces the load-bearing capacity of sandwich beams. In this paper, the indentation behavior of foam-core sandwich beams, without considering the global axial and flexural deformation, was analyzed using the principle of virtual velocities. A concise theoretical solution for the loading capacity and denting profile is presented. The denting load was found to be proportional to the square root of the denting depth. A finite element model was established to verify the prediction of the model. The load-indentation curves and the profiles of the dented zone predicted by the theoretical model and the numerical simulation are in good agreement.
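    The square-root scaling stated above implies, for instance, that doubling the dent depth raises the denting load by a factor of √2; a one-line sketch (the proportionality constant c bundles the core and face-sheet properties and is purely illustrative here):

```python
import math

def denting_load(depth, c):
    # scaling result from the paper: denting load proportional to sqrt(depth);
    # c is a hypothetical constant absorbing material and geometry parameters
    return c * math.sqrt(depth)
```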

  20. Design of a new dynamical core for global atmospheric models based on some efficient numerical methods

    Institute of Scientific and Technical Information of China (English)

    WANG Bin; WAN Hui; JI Zhongzhen; ZHANG Xin; YU Rucong; YU Yongqiang; LIU Hongtao

    2004-01-01

    A careful study on the integral properties of the primitive hydrostatic balance equations for baroclinic atmosphere is carried out, and a new scheme todesign the global adiabatic model of atmospheric dynamics ispresented. This scheme includes a method of weighted equal-areamesh and a fully discrete finite difference method with quadraticand linear conservations for solving the primitive equationsystem. Using this scheme, we established a new dynamical corewith adjustable high resolution acceptable to the availablecomputer capability, which can be very stable without anyfiltering and smoothing. Especially, some important integralproperties are kept unchanged, such as the anti-symmetries of thehorizontal advection operators and the vertical convectionoperator, the mass conservation, the effective energy conservationunder the standard stratification approximation, and so on. Somenumerical tests on the new dynamical core, respectively regardingits global conservations and its integrated performances inclimatic modeling, incorporated with the physical packagesfrom the Community Atmospheric Model Version 2 (CAM2) of NationalCenter for Atmospheric Research (NCAR), are included.

  1. The high density phase of the k-NN hard core lattice gas model

    Science.gov (United States)

    Nath, Trisha; Rajesh, R.

    2016-07-01

    The k-NN hard core lattice gas model on a square lattice, in which the first k next-nearest-neighbor sites of a particle are excluded from being occupied by another particle, is the lattice version of the hard disc model in the two-dimensional continuum. It has been conjectured that the lattice model, like its continuum counterpart, will show multiple entropy-driven transitions with increasing density if the high density phase has columnar or striped order. Here, we determine the nature of the phase at full packing for k up to 820302. We show that there are only eighteen values of k, all less than k = 4134, that show columnar order, while the others show solid-like sublattice order.
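    The exclusion rule of the k-NN model, that a particle blocks the first k "shells" of lattice sites ranked by distance, can be made concrete with a small enumeration; this is an illustrative helper, not code from the paper, and it assumes the search box is large enough to contain the first k shells.

```python
def excluded_sites(k, radius):
    """Sites excluded by a particle at the origin under k-NN exclusion.

    Enumerates lattice vectors in a box of the given radius, groups them
    into shells by squared distance, and returns the sites in the first
    k shells (k=1: the 4 nearest neighbors; k=2: adds the 4 diagonals).
    """
    pts = [(x, y) for x in range(-radius, radius + 1)
                  for y in range(-radius, radius + 1) if (x, y) != (0, 0)]
    shells = sorted({x * x + y * y for x, y in pts})
    allowed = set(shells[:k])  # squared distances of the first k shells
    return {p for p in pts if p[0] ** 2 + p[1] ** 2 in allowed}
```

For k=1 this reproduces nearest-neighbor exclusion (the usual hard-square constraint), and larger k approaches the hard-disc continuum limit.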

  2. Systematic Features of Axisymmetric Neutrino-Driven Core-Collapse Supernova Models in Multiple Progenitors

    CERN Document Server

    Nakamura, Ko; Kuroda, Takami; Kotake, Kei

    2014-01-01

    We present an overview of axisymmetric core-collapse supernova simulations employing a neutrino transport scheme based on the isotropic diffusion source approximation. Studying 101 solar-metallicity progenitors covering zero-age main sequence masses from 10.8 to 75.0 solar masses, we systematically investigate how the differences in the structures of these multiple progenitors impact the hydrodynamic evolution. By following a long-term evolution over 1.0 s after bounce, most of the computed models exhibit neutrino-driven revival of the stalled bounce shock at about 200 - 800 ms postbounce, leading to the possibility of explosion. Pushing the boundaries of expectations in previous one-dimensional studies, our results show that the time of shock revival, the evolution of the shock radii, and the diagnostic explosion energies are tightly correlated with the compactness parameter xi, which characterizes the structure of the progenitors. Compared to models with low xi, models with high xi undergo high ram pressure from the accreting ma...
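    The compactness parameter xi referred to above is, in the commonly used O'Connor-Ott definition (an assumption here, since the abstract does not spell it out), the ratio of an enclosed mass in solar masses to the radius enclosing it in units of 1000 km:

```python
def compactness(enclosed_mass_msun, radius_km):
    # xi_M = (M / Msun) / (R(M) / 1000 km); definition assumed from the
    # standard O'Connor-Ott compactness parameter, evaluated at a chosen M
    return enclosed_mass_msun / (radius_km / 1000.0)
```

A progenitor packing 2.5 solar masses inside 2500 km thus has xi = 1.0; higher xi means denser infalling layers and higher ram pressure on the stalled shock.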

  3. Computational Model for the Neutronic Simulation of Pebble Bed Reactor’s Core Using MCNPX

    Directory of Open Access Journals (Sweden)

    J. Rosales

    2014-01-01

    Very high temperature reactor (VHTR) designs offer promising performance characteristics; they can provide sustainable energy, improved proliferation resistance, inherent safety, and high temperature heat supply. These designs also promise operation to high burnup and large margins to fuel failure with excellent fission product retention via the TRISO fuel design. The pebble bed reactor (PBR) is a design of gas cooled high temperature reactor, a candidate for Generation IV of Nuclear Energy Systems. This paper describes the features of a detailed geometric computational model for PBR whole core analysis using the MCNPX code. The validation of the model was carried out using the HTR-10 benchmark. Results were compared with experimental data and calculations of other authors. In addition, a sensitivity analysis of several parameters that could have influenced the results and the accuracy of the model was made.

  4. Effective Interactions and Operators in Nuclei within the No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, B; Navratil, P; Stetcu, I; Vary, J

    2005-09-14

    We review the application of effective operator formalism to the ab initio no core shell model (NCSM). For short-range operators, such as the nucleon-nucleon potential, the unitary-transformation method works extremely well at the two-body cluster approximation and good results are obtained for the binding energies and excitation spectra of light nuclei (A {<=} 16). However, for long-range operators, such as the radius or the quadrupole moment, performing this unitary transformation at the two-body cluster level, does not include the higher-order correlations needed to renormalize these long-range operators adequately. Usually, such correlations can be obtained either by increasing the order of the cluster approximation, or by increasing the model space. We will discuss the difficulties of these approaches as well as alternate possible solutions for including higher-order correlations in small model spaces.

  5. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager

    2012-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core

  6. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies, called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the purpose of the study. Reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies, we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.

  7. Emergence of cluster structures and collectivity within a no-core shell-model framework

    Science.gov (United States)

    Launey, K. D.; Dreyfuss, A. C.; Draayer, J. P.; Dytrych, T.; Baker, R.

    2014-12-01

    An innovative symmetry-guided concept, which capitalizes on partial as well as exact symmetries that underpin the structure of nuclei, is discussed. Within this framework, ab initio applications of the theory to light nuclei reveal the origin of collective modes and the emergence of a simple orderly pattern from first principles. This provides a strategy for determining the nature of bound states of nuclei in terms of a relatively small fraction of the complete shell-model space, which, in turn, can be used to explore ultra-large model spaces for a description of alpha-cluster and highly deformed structures together with the associated rotations. We find that by using only a fraction of the model space, extended far beyond current no-core shell-model limits, and a long-range interaction that respects the symmetries in play, the outcome reproduces characteristic features of the low-lying 0+ states in 12C (including the elusive Hoyle state and its 2+ excitation) and agrees with ab initio results in smaller spaces. This is achieved by selecting those particle configurations and components of the interaction found to be foremost responsible for the primary physics governing clustering phenomena and large spatial deformation in the ground-state and Hoyle-state rotational bands of 12C. For these states, we offer a novel perspective emerging out of no-core shell-model considerations, including a discussion of associated nuclear deformation, matter radii, and density distribution. The framework is also extensible to negative-parity states (e.g., the first 3- state in 12C) and beyond, namely, to the low-lying 0+ states of 8Be as well as the ground-state rotational bands of Ne, Mg, and Si isotopes. The findings inform key features of the nuclear interaction and point to new insight into the formation of highly organized simple patterns in nuclear dynamics.

  8. No-core shell model for A = 47 and A = 49

    CERN Document Server

    Negoita, A G; Stoica, S

    2010-01-01

    We apply the no-core shell model to the nuclear structure of odd-mass nuclei straddling $^{48}$Ca. Starting with an NN interaction that fits two-body scattering and bound-state data, we evaluate the nuclear properties of $A = 47$ and $A = 49$ nuclei while preserving all the underlying symmetries. Due to model-space limitations and the absence of 3-body interactions, we incorporate phenomenological interaction terms determined by fits to $A = 48$ nuclei in a previous effort. Our modified Hamiltonian produces reasonable spectra for these odd-mass nuclei. In addition to the differences in single-particle basis states, the absence of a single-particle Hamiltonian in our no-core approach complicates comparisons with valence effective NN interactions. We focus on purely off-diagonal two-body matrix elements, since they are not affected by ambiguities in the different roles for one-body potentials, and we compare selected sets of $fp$-shell matrix elements of our initial and modified Hamiltonians in the harmonic osci...

  9. Steady flows at the top of earth's core derived from geomagnetic field models

    Science.gov (United States)

    Voorhies, Coerte V.

    1986-01-01

    Select models of the main geomagnetic field and its secular variation are extrapolated to the base of an insulating mantle and used to estimate the adjacent fluid motion of a perfectly conducting outer core. The assumption of steady motion provides formally unique solutions and is tested along with that of no upwelling. The hypothesis of no upwelling is found to be substantially worse than that of steady motion. Although the actual motion is not thought to be steady, the large-scale secular variation at the top of the core can be adequately described by a large-scale, combined toroidal-poloidal circulation which is steady for intervals of at least a decade or two. The derived flows include a bulk westward drift but are complicated by superimposed jets, gyres, and surface divergence indicative of vigorous vertical motion at depth. The circulation pattern and key global properties including rms speed, upwelling, and westward drift are found to be fairly insensitive to variations in modeling parameters.

  10. 3-D core modelling of RIA transient: the TMI-1 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ferraresi, P. [CEA Cadarache, Institut de Protection et de Surete Nucleaire, Dept. de Recherches en Securite, 13 - Saint Paul Lez Durance (France); Studer, E. [CEA Saclay, Dept. Modelisation de Systemes et Structures, 91 - Gif sur Yvette (France); Avvakumov, A.; Malofeev, V. [Nuclear Safety Institute of Russian Research Center, Kurchatov Institute, Moscow (Russian Federation); Diamond, D.; Bromley, B. [Nuclear Energy and Infrastructure Systems Div., Brookhaven National Lab., BNL, Upton, NY (United States)

    2001-07-01

    The increase of fuel burnup in core management currently poses the problem of evaluating the energy deposited during Reactivity Insertion Accidents (RIA). In order to evaluate this energy precisely, 3-D approaches are used more and more frequently in core calculations. This 'best-estimate' approach requires the evaluation of code uncertainties. To contribute to this evaluation, a code benchmark has been launched. A 3-D modelling of the TMI-1 central Ejected Rod Accident with zero and intermediate initial powers was carried out with three different methods of calculation, for an inserted reactivity fixed at 1.2 $ and 1.26 $ respectively. The studies implemented with the neutronics codes PARCS (BNL) and CRONOS (IPSN/CEA) describe a homogeneous assembly, whereas the BARS (KI) code allows a pin-by-pin representation (CRONOS has both possibilities). All the calculations are consistent, the variation in figures resulting mainly from the method used to build cross sections and reflector constants. The maximum rise in enthalpy for the intermediate initial power (33% P{sub N}) calculation is, for this academic calculation, about 30 cal/g. This work will be completed in a subsequent step by an evaluation of the uncertainty induced by the uncertainty on model parameters, and by a sensitivity study of the key parameters for a peripheral Rod Ejection Accident. (authors)

  11. Comparison of CORA and MELCOR core degradation simulation and the MELCOR oxidation model

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jun [College of Engineering, The University of Wisconsin-Madison, Madison, WI 53706 (United States); State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Corradini, Michael L., E-mail: corradini@engr.wisc.edu [College of Engineering, The University of Wisconsin-Madison, Madison, WI 53706 (United States); Fu, Wen [College of Engineering, The University of Wisconsin-Madison, Madison, WI 53706 (United States); Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084 (China); Haskin, Troy [College of Engineering, The University of Wisconsin-Madison, Madison, WI 53706 (United States); Tian, Wenxi; Zhang, Yapei; Su, Guanghui; Qiu, Suizheng [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, Xi’an 710049 (China)

    2014-09-15

    Highlights: • The MELCOR oxidation model is analyzed and suggestions for improvement are provided. • MELCOR core degradation results are compared with a CORA experiment. • Flow rates of argon and steam and the hydrogen generation rate are calculated and compared. • Spatial temperature variation and temperature histories are calculated and presented. - Abstract: MELCOR is widely used and sufficiently trusted for severe accident analysis. However, the Fukushima accident has increased the scrutiny of severe accident codes and their use. A MELCOR core degradation calculation was conducted at the University of Wisconsin–Madison with the help of Sandia. The calculation results were checked by comparison with a past CORA experiment. The MELCOR results included the flow rates of argon and steam and the hydrogen generation rate. Through this work, the performance of the MELCOR COR package was reviewed in detail. This paper compares the hydrogen generation rates predicted by MELCOR to the CORA test data. While agreement is reasonable, it could be improved. Additionally, the MELCOR zirconium oxidation model was analyzed.

  12. Using archaeomagnetic field models to constrain the physics of the core: robustness and preferred locations of reversed flux patches

    Science.gov (United States)

    Terra-Nova, Filipe; Amit, Hagay; Hartmann, Gelvam A.; Trindade, Ricardo I. F.

    2016-09-01

    Archaeomagnetic field models cover longer timescales than historical models and may therefore resolve the motion of geomagnetic features on the core-mantle boundary (CMB) in a more meaningful statistical sense. Here we perform a detailed appraisal of archaeomagnetic field models to infer some aspects of the physics of the outer core. We characterize and compare the identification and tracking of reversed flux patches (RFPs) in order to assess the robustness of the RFPs. We find similar behaviour within a family of models but differences among different families, suggesting that modelling strategy is more influential than data set. Similarities involve recurrent positions of RFPs, but no preferred direction of motion is found. The tracking of normal flux patches shows similar qualitative behaviour, confirming that RFP identification and tracking are not strongly biased by the relative weakness of the patches. We also compare the tracking of RFPs with that of the historical field model gufm1 and with seismic anomalies of the lowermost mantle to explore the possibility that RFPs have preferred locations prescribed by lower-mantle lateral heterogeneity. The archaeomagnetic field model that most resembles the historical field is interpreted in terms of core dynamics and core-mantle thermal interactions. This model exhibits correlation between RFPs and low seismic shear velocity in co-latitude, with a shift in longitude. These results shed light on core processes; in particular, we infer toroidal field lines with azimuthal orientation below the CMB and large fluid upwelling structures with widths of about 80° (Africa) and 110° (Pacific) at the top of the core. Finally, similar preferred locations of RFPs in the past 9 and 3 kyr of the same archaeomagnetic field model suggest that a 3 kyr period is sufficiently long to reliably detect mantle control on core dynamics. This allows estimating an upper bound of 220-310 km for the magnetic boundary layer thickness below the CMB.

  13. A Component Based Approach to Scientific Workflow Management

    CERN Document Server

    Le Goff, Jean-Marie; Baker, Nigel; Brooks, Peter; McClatchey, Richard

    2001-01-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse.

  14. Testing the infinitely many genes model for the evolution of the bacterial core genome and pangenome.

    Science.gov (United States)

    Collins, R Eric; Higgs, Paul G

    2012-11-01

    When groups of related bacterial genomes are compared, the number of core genes found in all genomes is usually much less than the mean genome size, whereas the size of the pangenome (the set of genes found on at least one of the genomes) is much larger than the mean size of one genome. We analyze 172 complete genomes of Bacilli and compare the properties of the pangenomes and core genomes of monophyletic subsets taken from this group. We then assess the capabilities of several evolutionary models to predict these properties. The infinitely many genes (IMG) model is based on the assumption that each new gene can arise only once. The predictions of the model depend on the shape of the evolutionary tree that underlies the divergence of the genomes. We calculate results for coalescent trees, star trees, and arbitrary phylogenetic trees of predefined fixed branch length. On a star tree, the pangenome size increases linearly with the number of genomes, as has been suggested in some previous studies, whereas on a coalescent tree, it increases logarithmically. The coalescent tree gives a better fit to the data, for all the examples we consider. In some cases, a fixed phylogenetic tree proved better than the coalescent tree at reproducing structure in the gene frequency spectrum, but little improvement was gained in predictions of the core and pangenome sizes. Most of the data are well explained by a model with three classes of gene: an essential class that is found in all genomes, a slow class whose rate of origination and deletion is slow compared with the time of divergence of the genomes, and a fast class showing rapid origination and deletion. Although the majority of genes originating in a genome are in the fast class, these genes are not retained for long periods, and the majority of genes present in a genome are in the slow or essential classes. In general, we show that the IMG model is useful for comparison with experimental genome data both for species level and
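The contrast between linear pangenome growth on a star tree and logarithmic growth on a coalescent tree described above can be sketched numerically. This is an illustrative toy, not the authors' fitted IMG model; the `core` and `theta` parameters below are hypothetical placeholders.

```python
import math

def pangenome_star(n, core=2500, theta=300):
    # Star tree: each genome diverged independently from a common ancestor,
    # so each contributes an independent batch of new genes and the
    # pangenome grows linearly in the number of genomes n.
    return core + theta * n

def pangenome_coalescent(n, core=2500, theta=300):
    # Coalescent tree: later genomes share more ancestry with genomes
    # already sampled, so new-gene discovery follows a harmonic series
    # and the pangenome grows roughly like log(n).
    return core + theta * sum(1.0 / k for k in range(1, n + 1))

# The gap between the two predictions widens quickly with sample size.
for n in (1, 10, 100):
    print(n, pangenome_star(n), round(pangenome_coalescent(n), 1))
```

For a single genome the two predictions coincide; by 100 genomes the star-tree prediction is several times larger, which is why fitting both to observed pangenome curves discriminates between the tree shapes.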

  15. A core stochastic population projection model for Florida manatees (Trichechus manatus latirostris)

    Science.gov (United States)

    Runge, Michael C.; Sanders-Reed, Carol A.; Fonnesbeck, Christopher J.

    2007-01-01

    , temporal variance of adult survival, and long-term warm-water capacity. This core biological model is expected to evolve over time, as better information becomes available about manatees and their habitat, and as new assessment needs arise. We anticipate that this core model will be customized for other state and federal assessments in the near future.

  16. Magnetic model for a horse-spleen ferritin with a three-phase core structure

    Energy Technology Data Exchange (ETDEWEB)

    Jung, J.H.; Eom, T.W. [Quantum Photonic Science Research Center, Department of Physics and Research Institute for Natural Sciences, Hanyang University, Seoul 133-791 (Korea, Republic of); Lee, Y.P., E-mail: yplee@hanyang.ac.kr [Quantum Photonic Science Research Center, Department of Physics and Research Institute for Natural Sciences, Hanyang University, Seoul 133-791 (Korea, Republic of); Rhee, J.Y. [Department of Physics, Sungkyunkwan University, Suwon (Korea, Republic of); Choi, E.H. [Kwangwoon University, Seoul (Korea, Republic of)

    2011-12-15

    The increasing interest in magnetic nanoparticles has prompted research on ferritin, a naturally occurring, well-defined iron-storage protein in most living organisms. However, the exact magnetic behavior of ferritin is not well understood, because the crystal structures of ferritin and of ferrihydrite, its major component, are not fully understood. Briefly, we discuss the previous magnetization models of ferritin and ferrihydrite, and we present a new model (Σ3L) of the initial magnetization of ferritin that accounts for its different phases. The new model includes three Langevin-function terms, which represent three different magnetic moments provided by the likely hydroxide and oxide mineral phases in ferritin. Compared to previous models, our simple model fits the experimental data 12 times better in terms of the sum of least squares. The magnetic independence of each component supports the multi-phase compositional model of the mineral core of horse-spleen ferritin. The Σ3L model quantifies the amounts of the different phases within horse-spleen ferritins, in agreement with other published experimental data: 60-80% ferrihydrite, 15-25% maghemite/magnetite, and 1-10% hematite. - Highlights: • We present a new model (Σ3L) of the initial magnetization of ferritin, considering its different phases. • The new model includes three Langevin-function terms, which represent three different magnetic moments provided by the ferritin phases. • Compared to previous models, our simple model fits the experimental data 12 times better in terms of the sum of least squares. • The magnetic independence of each component supports the view that the mineral core of ferritin is composed of different phases.
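A sum of three Langevin terms of the kind the Σ3L model describes can be sketched as follows. The saturation magnetizations and particle moments below are hypothetical illustrations, not the fitted horse-spleen ferritin parameters from the paper.

```python
import math

def langevin(x):
    # Classical Langevin function L(x) = coth(x) - 1/x.
    # Near zero, use the series L(x) ~ x/3 to avoid catastrophic cancellation.
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def sigma3l_magnetization(H, T, phases):
    """Three-term Langevin magnetization, one term per mineral phase.

    H      -- applied field (A/m)
    T      -- temperature (K)
    phases -- list of (M_sat, mu) pairs: saturation contribution of the
              phase and magnetic moment per particle (J/T); values are
              illustrative, not fitted.
    """
    kB = 1.380649e-23  # Boltzmann constant, J/K
    mu0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
    return sum(Ms * langevin(mu * mu0 * H / (kB * T)) for Ms, mu in phases)

# Hypothetical three-phase core: ferrihydrite, maghemite/magnetite, hematite
phases = [(0.6, 5e-22), (0.25, 2e-21), (0.05, 1e-22)]
print(sigma3l_magnetization(1e6, 300.0, phases))
```

Each term saturates independently at a different field scale set by its moment, which is what lets a fit separate the phase fractions.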

  17. Analysis and Evaluating Security of Component-Based Software Development: A Security Metrics Framework

    Directory of Open Access Journals (Sweden)

    Irshad Ahmad Mir

    2012-10-01

    Full Text Available Evaluating the security of software systems is a complex problem for the research community due to the multifaceted and complex operational environments of the systems involved. Many efforts towards secure system development methodologies, like Microsoft's secSDLC, have been made, but measurement scales on which security can be quantified have had little success. With the shift in the nature of software development from standalone applications to distributed environments where a number of potential adversaries and threats are present, security has to be outlined and incorporated at the architectural level of the system, and so there is a need to evaluate and measure the level of security achieved. In this paper we present a framework for security evaluation at the design and architectural phase of system development. We have outlined the security objectives based on the security requirements of the system and analyzed the behavior of various software architecture styles. Component-based development (CBD) is an important and widely used model for developing new large-scale software, owing to benefits such as increased reuse and reduced time to market and cost. Our emphasis is therefore on CBD: we propose a framework for the security evaluation of component-based software design and derive security metrics for the three main pillars of security (confidentiality, integrity and availability) based on component composition, dependency and inter-component data/information flow. The proposed framework and derived metrics are flexible enough that the system developer can modify the metrics according to the situation, and they are applicable both during the development phases and after development.

  18. Mean-field dynamic criticality and geometric transition in the Gaussian core model

    Science.gov (United States)

    Coslovich, Daniele; Ikeda, Atsushi; Miyazaki, Kunimasa

    2016-04-01

    We use molecular dynamics simulations to investigate dynamic heterogeneities and the potential energy landscape of the Gaussian core model (GCM). Despite the nearly Gaussian statistics of particles' displacements, the GCM exhibits giant dynamic heterogeneities close to the dynamic transition temperature. The divergence of the four-point susceptibility is quantitatively well described by the inhomogeneous version of the mode-coupling theory. Furthermore, the potential energy landscape of the GCM is characterized by large energy barriers, as expected from the lack of activated, hopping dynamics, and displays features compatible with a geometric transition. These observations demonstrate that all major features of mean-field dynamic criticality can be observed in a physically sound, three-dimensional model.
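The Gaussian core model named in the abstract is defined by the standard bounded pair potential v(r) = ε exp(−(r/σ)²), which stays finite even at full overlap. A minimal sketch (the parameter values are just the conventional reduced units, not simulation inputs from the paper):

```python
import math

def gcm_potential(r, eps=1.0, sigma=1.0):
    # Gaussian core model pair potential: purely repulsive but bounded,
    # v(r) = eps * exp(-(r/sigma)**2), so particles may fully overlap
    # at a finite energy cost of eps.
    return eps * math.exp(-(r / sigma) ** 2)

# The potential is finite at the origin and decays rapidly past ~2 sigma.
for r in (0.0, 1.0, 2.0, 3.0):
    print(r, gcm_potential(r))
```

The boundedness at r = 0 is what distinguishes the GCM from hard-core liquids and underlies its unusual, mean-field-like behavior at high density.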

  19. Advancing Nucleosynthesis in Self-consistent, Multidimensional Models of Core-Collapse Supernovae

    CERN Document Server

    Harris, J Austin; Chertkow, Merek A; Bruenn, Stephen W; Lentz, Eric J; Messer, O E Bronson; Mezzacappa, Anthony; Blondin, John M; Marronetti, Pedro; Yakunin, Konstantin N

    2014-01-01

    We investigate core-collapse supernova (CCSN) nucleosynthesis in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species $\\alpha$-network. Such a simplified network limits the ability to accurately evolve detailed composition, neutronization and the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks in post-processing nucleosynthesis calculations. Limitations such as poor spatial resolution of the tracer particles, estimation of the expansion timescales, and determination of the "mass-cut" at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of these uncertainties on post-processing nucleosynthesis calculations and implications for future models.

  20. Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model

    Science.gov (United States)

    Kazeminezhad, F.; Anghaie, S.

    2008-01-01

    Analysis is provided in this report of using two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations. A subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of and simulations for the proposed gas core reactor conceptual design from the onset of shock generations to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collisions.

  1. Effective Operators Within the Ab Initio No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Stetcu, I; Barrett, B R; Navratil, P; Vary, J P

    2004-11-30

    We implement an effective operator formalism for general one- and two-body operators, obtaining results consistent with the no-core shell model (NCSM) wave functions. The Argonne V8' nucleon-nucleon potential was used in order to obtain realistic wave functions for {sup 4}He, {sup 6}Li and {sup 12}C. In the NCSM formalism, we compute electromagnetic properties using the two-body cluster approximation for the effective operators and obtain results which are sensitive to the range of the bare operator. To illuminate the dependence on the range, we employ a Gaussian two-body operator of variable range, finding weak renormalization of long range operators (e.g., quadrupole) in a fixed model space. This is understood in terms of the two-body cluster approximation which accounts mainly for short-range correlations. Consequently, short range operators, such as the relative kinetic energy, will be well renormalized in the two-body cluster approximation.

  2. Thermodynamic properties of Fe-S alloys from molecular dynamics modeling: Implications for the lunar fluid core

    Science.gov (United States)

    Kuskov, Oleg L.; Belashchenko, David K.

    2016-09-01

    Density and sound velocity of Fe-S liquids at the P-T parameters of the lunar core have not been well constrained. From the analysis of seismic wave travel times, Weber et al. (2011) proposed that the lunar core is composed of iron alloyed with ⩽6 wt% of light elements, such as S. A controversial issue in models of planetary core composition concerns whether Fe-S liquids under high pressure-temperature conditions provide sound velocity and density data that match the seismic model. Here we report the results of molecular dynamics (MD) simulations of iron-sulfur alloys based on the Embedded Atom Model (EAM). The results of the calculations include caloric, thermal and elastic properties of Fe-S alloys at sulfur concentrations of 0-18 at.%, temperatures up to 2500 K and pressures up to 14 GPa. The effect of sulfur on the elastic properties of Fe-rich melts is most evident in the notably decreased density with added S content. In the MD simulation, the density and bulk modulus KT of liquid Fe-S decrease with increasing sulfur content, while the bulk modulus KS decreases as a whole but shows some fluctuations with increasing sulfur content. The sound velocity increases with increasing pressure, but depends weakly on temperature and on the concentration of sulfur. For a fluid Fe-S core of the Moon (∼5 GPa/2000 K) with 6-16 at.% S (3.5-10 wt%), the sound velocity and density may be estimated at the level of 4000 m s-1 and 6.25-7.0 g cm-3. Comparison of the thermodynamic calculations with the results of the interpretation of seismic observations shows good agreement for the P-wave velocities in the liquid outer core, while the core density does not match the seismic models. At such concentrations of sulfur, and with a density 20-35% higher than the model seismic density, the radius of the fluid outer core should be less than the roughly 330 km found by Weber et al., because at the specified mass and moment-of-inertia values of the Moon an increase of the core density leads to a decrease of the core radius.
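The quoted sound velocity and density can be cross-checked through the standard relation for a liquid, v_P = sqrt(K_S/ρ). The sketch below simply applies and inverts that relation with the abstract's round numbers; it is a consistency check, not the EAM calculation itself.

```python
def bulk_sound_speed(K_S, rho):
    # P-wave (bulk sound) speed of a liquid: v_P = sqrt(K_S / rho),
    # with K_S the adiabatic bulk modulus in Pa and rho in kg/m^3.
    return (K_S / rho) ** 0.5

def implied_adiabatic_modulus(v_p, rho):
    # Inverse relation: K_S = rho * v_P**2.
    return rho * v_p ** 2

# Abstract values: v_P ~ 4000 m/s, rho ~ 6250-7000 kg/m^3 implies
# K_S of roughly 100-112 GPa for the fluid Fe-S outer core.
print(implied_adiabatic_modulus(4000.0, 6250.0))  # ~1.0e11 Pa
print(implied_adiabatic_modulus(4000.0, 7000.0))  # ~1.12e11 Pa
```

This order of magnitude (~100 GPa) is the quantity the MD simulations constrain directly, which is why K_S fluctuations with sulfur content matter for the seismic comparison.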

  3. Industrial Component-based Sample Mobile Robot System

    Directory of Open Access Journals (Sweden)

    Péter Kucsera

    2007-12-01

    Full Text Available Mobile robot development can be done in two different ways. The first is to build up an embedded system; the second is to use 'ready to use' industrial components. With the spread of industrial mobile robots there are more and more components on the market which can be used to build up the whole control and sensor system of a mobile robot platform. Using these components, electrical hardware development is not needed, which speeds up the development time and decreases the cost. Using a PLC on board, 'only' constructing the program is needed, and the developer can concentrate on the algorithms, not on developing hardware. My idea is to solve the problem of mobile robot localization and obstacle avoidance using industrial components, and to concentrate this topic on mobile robot docking. In factories, mobile robots can be used to deliver parts from one place to another, but there are always two critical points. The robot has to be able to operate in a human environment, and it also has to reach the target and get to a predefined position where another system can load it or collect the delivered product. I would like to construct a mechanically simple robot model which can calculate its position from the rotation of its wheels, and, when it reaches a predefined location with the aid of an image processing system, can dock to an electrical connector. If the robot succeeds, it can charge its batteries through this connector as well.
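Calculating position from wheel rotations, as described, is classic differential-drive dead reckoning. A minimal sketch of the pose update (the wheel-base value and function names are hypothetical, not from the paper's robot):

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot.

    d_left, d_right -- distances travelled by each wheel since the last
                       update (wheel rotation angle * wheel radius), in m
    wheel_base      -- distance between the two drive wheels, in m
    Returns the new (x, y, theta) pose.
    """
    d = (d_left + d_right) / 2.0             # distance of the midpoint
    d_theta = (d_right - d_left) / wheel_base  # heading change
    # Integrate along the average heading over the step (midpoint rule).
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Equal wheel travel: straight motion. Opposite travel: turn in place.
print(update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.5))
print(update_pose(0.0, 0.0, 0.0, -0.1, 0.1, 0.5))
```

Because encoder errors accumulate without bound, this odometry is typically corrected by an absolute reference near the target, which is exactly the role the image-processing system plays in the docking phase described above.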

  4. STEADY STATE MODELING OF THE MINIMUM CRITICAL CORE OF THE TRANSIENT REACTOR TEST FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    Anthony L. Alberti; Todd S. Palmer; Javier Ortensi; Mark D. DeHart

    2016-05-01

    With the advent of next-generation reactor systems and new fuel designs, the U.S. Department of Energy (DOE) has identified the need for the resumption of transient testing of nuclear fuels. The DOE has decided that the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory (INL) is best suited for future testing. TREAT is a thermal-neutron-spectrum, air-cooled nuclear test facility designed to test nuclear fuels in transient scenarios. These scenarios range from simple temperature transients to full fuel-melt accidents. DOE has expressed a desire to develop a simulation capability that will accurately model the experiments before they are irradiated at the facility, with an emphasis on effective and safe operation while minimizing experimental time and cost. The multiphysics platform MOOSE has been selected as the framework for this project. The goals of this work are to identify the fundamental neutronics properties of TREAT and to develop an accurate steady-state model for future multiphysics transient simulations. In order to minimize computational cost, the effects of spatial homogenization and angular discretization are investigated. It was found that significant anisotropy is present in TREAT assemblies, and to capture this effect, explicit modeling of cooling channels and inter-element gaps is necessary. For this modeling scheme, single-element calculations at 293 K gave power distributions with a root-mean-square difference of 0.076% from those of reference SERPENT calculations. The minimum critical core configuration with identical gap and channel treatment at 293 K resulted in a total-core radial power distribution differing by 2.423% (root mean square) from the reference SERPENT solutions.

  5. Verification of GRAPES unified global and regional numerical weather prediction model dynamic core

    Institute of Scientific and Technical Information of China (English)

    YANG XueSheng; HU JiangLin; CHEN DeHui; ZHANG HongLiang; SHEN XueShun; CHEN JiaBin; JI LiRen

    2008-01-01

    During the past few years, most newly developed numerical weather prediction models have adopted a multi-scale strategy. Therefore, the China Meteorological Administration has been devoted to developing a new generation of global and regional multi-scale models since 2003. In order to validate the performance of the GRAPES (Global and Regional Assimilation and PrEdiction System) model, both in its scientific design and in its program coding, a suite of idealized tests has been proposed and conducted, which includes a density-flow test, a three-dimensional mountain-wave test and a cross-polar flow test. The density-flow experiment indicates that the dynamic core has the ability to simulate fine-scale nonlinear flow structures and their transient features, while the three-dimensional mountain-wave test shows that the model can reproduce the horizontal and vertical propagation of internal gravity waves quite well. The cross-polar flow test demonstrates the soundness of both the semi-Lagrangian departure-point calculation and the discretization of the model near the poles. Real-case forecasts reveal that the model has the ability to predict large-scale summer weather regimes, such as the subtropical high, and to capture the major synoptic patterns in the mid and high latitudes.

  6. Hard-core thinnings of germ-grain models with power-law grain sizes

    CERN Document Server

    Kuronen, Mikko

    2012-01-01

    Random sets with long-range dependence can be generated using a Boolean model with power-law grain sizes. We study thinnings of such Boolean models which have the hard-core property that no grains overlap in the resulting germ-grain model. A fundamental question is whether long-range dependence is preserved under such thinnings. To answer this question we study four natural thinnings of a Poisson germ-grain model where the grains are spheres with a regularly varying size distribution. We show that a thinning which favors large grains preserves the slow correlation decay of the original model, whereas a thinning which favors small grains does not. Our most interesting finding concerns the case where only disjoint grains are retained, which corresponds to the well-known Matérn type I thinning. In the resulting germ-grain model, typical grains have exponentially small sizes, but rather surprisingly, the long-range dependence property is still present. As a byproduct, we obtain new mechanisms for generating hom...
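
As a quick illustration of the hard-core construction discussed above, the following sketch generates a Boolean model with power-law (Pareto) grain radii in the unit square and applies a Matérn type I thinning, retaining only grains that overlap no other grain. The intensity, Pareto exponent and radius scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson germ process on [0, 1]^2 (edge effects ignored for simplicity)
# with Pareto-distributed, i.e. regularly varying, grain radii.
n = rng.poisson(200)
germs = rng.random((n, 2))
radii = 0.01 * (rng.pareto(2.5, n) + 1.0)  # power-law sizes (illustrative)

# Matern type I thinning: keep only grains that overlap no other grain,
# i.e. whose distance to every other germ exceeds the sum of the radii.
diff = germs[:, None, :] - germs[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
overlap = dist < (radii[:, None] + radii[None, :])
np.fill_diagonal(overlap, False)
keep = ~overlap.any(axis=1)

print(f"{keep.sum()} of {n} grains survive the hard-core thinning")
```

The surviving germs and radii define a germ-grain model with the hard-core property by construction.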

  7. Geoacoustic model at the DH-1 long-core site in the Korean continental margin of the East Sea

    Science.gov (United States)

    Ryang, Woo Hun; Kim, Seong Pil

    2014-05-01

    A long core of 23.6 m was acquired at the DH-1 site (37°36.651'N and 129°19.709'E) in the Korean continental margin of the western East Sea. The core site is located near Donghae City, at a water depth of 357.8 m. The long-core sediment was recovered using the Portable Remotely Operated Drill (PROD), a fully contained drilling system operated remotely at the seafloor. The recovered core sediments were analyzed for physical, sedimentological, and geoacoustic properties, mostly at 10-30 cm intervals. Based on the long-core data, together with subbottom and air-gun profiles at the DH-1 core site, a geoacoustic model, including the water mass, was reconstructed for the first time. The geoacoustic model comprises 7 geoacoustic units of the core sediments, based on measurements of 125 P-wave velocities and 121 attenuations. The P-wave speed was compensated to in situ depth below the sea floor using the Hamilton method. The geoacoustic model DH-1 contributes to the reconstruction of geoacoustic models reflecting the vertical and lateral variability of acoustic properties in the Korean continental margin of the western East Sea. Keywords: long core, geoacoustic model, East Sea, continental margin, P-wave speed Acknowledgements: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0025733) and by the Ministry of Knowledge Economy through the grant of the Marine Geology and Geophysical Mapping Project (GP2010-013).

  8. Acceleration of a Semi Implicit Non-hydrostatic Atmospheric Model on Many Core Architecture

    Science.gov (United States)

    Abdi, D. S.; Giraldo, F.; Wilcox, L.; Warburton, T.

    2016-12-01

    The Non-hydrostatic Unified Model of the Atmosphere (NUMA) uses high-order continuous and discontinuous Galerkin spatial discretizations that result in high arithmetic intensity per degree of freedom. These methods are therefore well suited to future many-core and multi-core supercomputers with heterogeneous computing units. In our previous work, we presented the scalability of NUMA using explicit time integration, which performed well on a GPU cluster. We achieved a two-orders-of-magnitude speedup over the CPU version of NUMA, while maintaining 90% weak-scaling efficiency using up to 16384 GPUs of the Titan supercomputer. However, in operational numerical weather prediction, it is often necessary to use semi-implicit time integrators that allow for a much larger time step than explicit time integrators. A lot of machinery is required to enable semi-implicit time integration on the GPU and other accelerators. In this work, we present our work on porting the 1D and 3D IMEX semi-implicit time integrators, which require solving a system of linear equations at each time step. The scalability of the semi-implicit NUMA will be tested on supercomputers consisting of different accelerators such as CPUs, GPUs, and KNL. The porting of NUMA to many-core architecture is done using a unified language called OCCA, which takes a `write once, run everywhere' approach for different accelerators. To scale well on future exascale supercomputers with heterogeneous accelerators, NUMA should be portable in terms of both code and performance across different devices. The OCCA language is designed such that a node-per-thread approach is used on the GPU while an element-per-thread approach is used on the CPU. This makes OCCA highly performance-portable, achieving results equal to those obtained with a native language (e.g., OpenMP or CUDA).
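
The cost structure the abstract describes, an explicit update plus a linear solve at every step, can be illustrated with a minimal 1-D IMEX scheme: explicit upwind advection combined with implicit diffusion. The grid, coefficients and first-order splitting below are illustrative only and are unrelated to NUMA's actual discretization.

```python
import numpy as np

# Minimal 1-D IMEX step for u_t = -a u_x + nu u_xx on a periodic grid:
# advection is advanced explicitly, diffusion implicitly, so each step
# requires a linear solve -- the per-step cost semi-implicit methods pay
# in exchange for a larger stable time step.
N, L = 64, 1.0
dx = L / N
x = np.arange(N) * dx
a, nu, dt = 1.0, 0.01, 0.5 * dx

u = np.exp(-100 * (x - 0.5) ** 2)          # initial Gaussian pulse

# Implicit operator (I - dt*nu*D2/dx^2) with periodic second difference.
D2 = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1))
D2[0, -1] = D2[-1, 0] = 1
A = np.eye(N) - dt * nu * D2 / dx**2

for _ in range(50):
    flux = -a * (u - np.roll(u, 1)) / dx   # explicit upwind advection
    u = np.linalg.solve(A, u + dt * flux)  # implicit diffusion solve

print(f"mass after 50 steps: {u.sum() * dx:.4f}")
```

Both the explicit flux and the periodic implicit operator conserve total mass exactly, which makes the scheme easy to sanity-check.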

  9. Setting up The Geological information and modelling Thematic Core Service for EPOS

    Science.gov (United States)

    Grellet, Sylvain; Häner, Rainer; Pedersen, Mikael; Lorenz, Henning; Carter, Mary; Cipolloni, Carlo; Robida, François

    2017-04-01

    Geological data and models are key assets for the EPOS community. The Geological Information and Modelling Thematic Core Service (TCS) of EPOS is being designed as an efficient and sustainable access system for multi-scale geological data assets, through the integration of distributed infrastructure components (nodes) of geological surveys, research institutes and the international drilling community (ICDP/IODP). The TCS will develop, and benefit from, the synergy between the existing data infrastructures of the Geological Surveys of Europe (EuroGeoSurveys / OneGeology-Europe / EGDI) and the large amount of information produced by research organisations. These nodes will offer a broad range of resources including: geological maps, borehole data, borehole-associated observations (borehole log data, groundwater level, groundwater quality…), archived information on physical material (samples, cores), geological models (3D, 4D), geohazards, and geophysical data such as active seismic data and other analyses of rocks, soils and minerals. The services will be implemented based on international standards (such as INSPIRE, IUGS/CGI, OGC, W3C, ISO) in order to guarantee their interoperability with other EPOS TCSs as well as their compliance with the INSPIRE European Directive and international initiatives (such as OneGeology). We present the implementation of the Thematic Core Service for geology and modelling, including the scheduling of the development of its components. The activity with the OGC groups started in 2016 through an ad-hoc meeting on boreholes and 3D/4D, and the way the two will be interlinked will also be introduced. This will provide future virtual research environments with means to facilitate the use of existing information for future applications. In addition, workflows will be established that allow the integration of other existing and new data and applications. Processing and the use of simulation and visualization tools will

  10. An Iron-Rain Model for Core Formation on Asteroid 4 Vesta

    Science.gov (United States)

    Kiefer, Walter S.; Mittlefehldt, David W.

    2016-01-01

    Asteroid 4 Vesta is differentiated into a crust, mantle, and core, as demonstrated by studies of the eucrite and diogenite meteorites and by data from NASA's Dawn spacecraft. Most models for the differentiation and thermal evolution of Vesta assume that the metal phase completely melts within 20 degrees of the eutectic temperature, well before the onset of silicate melting. In such a model, core formation initially happens by Darcy flow, but this is an inefficient process for liquid metal and solid silicate. However, the likely chemical composition of Vesta, similar to H chondrites with perhaps some CM or CV chondrite, has 13-16 weight percent S. For such compositions, metal-sulfide melting will not be complete until a temperature of at least 1350 degrees Centigrade. The silicate solidus for Vesta's composition is between 1100 and 1150 degrees Centigrade, and thus metal and silicate melting must have substantially overlapped in time on Vesta. In this chemically and physically more likely view of Vesta's evolution, metal sulfide drops will sink by Stokes flow through the partially molten silicate magma ocean in a process that can be envisioned as "iron rain". Measurements of eucrites show that moderately siderophile elements such as Ni, Mo, and W reached chemical equilibrium between the metal and silicate phases, which is an important test for any Vesta differentiation model. The equilibration time is a function of the initial metal grain size, which we take to be 25-45 microns based on recent measurements of H6 chondrites. For these sizes and reasonable silicate magma viscosities, equilibration occurs after a fall distance of just a few meters through the magma ocean. 
Although metal drops may grow in size by merger with other drops, which increases their settling velocities and decreases the total core formation time, the short equilibration distance ensures that the moderately siderophile elements will reach chemical equilibrium between metal and silicate before
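
The "iron rain" settling described above is governed by the Stokes terminal velocity, v = 2 Δρ g r² / (9 μ). The sketch below evaluates it for a single droplet; the density, viscosity and gravity values are rough order-of-magnitude assumptions for illustration and are not taken from the abstract (which specifies only the 25-45 micron grain size).

```python
# Stokes settling velocity of a metal-sulfide droplet in silicate magma:
#   v = 2 * (rho_drop - rho_melt) * g * r**2 / (9 * mu)
# All parameter values below are illustrative assumptions.

g_vesta = 0.25      # m/s^2, approximate surface gravity of Vesta
rho_drop = 7000.0   # kg/m^3, Fe-FeS liquid (assumed)
rho_melt = 3000.0   # kg/m^3, silicate magma (assumed)
mu = 1.0            # Pa*s, magma viscosity (assumed)
r = 20e-6           # m, droplet radius, consistent with 25-45 micron grains

v = 2.0 * (rho_drop - rho_melt) * g_vesta * r**2 / (9.0 * mu)
print(f"settling velocity ~ {v:.2e} m/s")
```

Because v scales with r², droplets that merge and grow settle much faster, which is why drop growth shortens the total core-formation time as the abstract notes.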

  11. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models at a large scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs or supercomputers, potentially provides significant computing power to fulfil this demand. However, exploiting it requires detailed knowledge of the underlying hardware, parallel algorithm design, and implementation in an efficient systems programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on the exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model-building activity from software engineering by offering domain specialists a model-building framework with pre-programmed building blocks that they combine to construct a model. The model-building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime-efficient poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for well-balanced execution without explicit knowledge of the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. 
We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) the parallelisation of about 50 of these building blocks using
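
In the spirit of the pre-programmed building blocks described above, a domain scientist might compose named raster operations without writing any parallel code. `window_average` below is a hypothetical building block written in plain NumPy for illustration; it is not the actual PCRaster API.

```python
import numpy as np

def window_average(raster, size=3):
    """Mean over a size x size moving window (edges handled by padding).

    A toy 'model building block': the framework, not the modeller, would
    be free to parallelise this however the hardware allows.
    """
    pad = size // 2
    padded = np.pad(raster, pad, mode="edge")
    out = np.zeros(raster.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + raster.shape[0], dx:dx + raster.shape[1]]
    return out / size**2

# A modeller composes blocks declaratively, without parallel code:
elevation = np.random.default_rng(1).random((100, 100))
smoothed = window_average(elevation)
print(smoothed.shape)
```

The point of the framework design is that the same model script runs unchanged whether the block is executed serially, on multiple cores, or on an accelerator.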

  12. The Widom-Rowlinson model, the hard-core model and the extremality of the complete graph

    OpenAIRE

    Cohen, Emma; Csikvári, Péter; Perkins, Will; Tetali, Prasad

    2016-01-01

    Let $H_{\mathrm{WR}}$ be the path on $3$ vertices with a loop at each vertex. D. Galvin conjectured, and E. Cohen, W. Perkins and P. Tetali proved that for any $d$-regular simple graph $G$ on $n$ vertices we have $$\hom(G,H_{\mathrm{WR}})\leq \hom(K_{d+1},H_{\mathrm{WR}})^{n/(d+1)}.$$ In this paper we give a short proof of this theorem together with the proof of a conjecture of Cohen, Perkins and Tetali. Our main tool is a simple bijection between the Widom-Rowlinson model and the hard-core m...
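
The quantity hom(G, H_WR) in the inequality above can be checked by brute force for small graphs. The sketch below counts homomorphisms into H_WR (the 3-vertex path with a loop at each vertex); for the complete graph K_n the count works out to 2^(n+1) - 1, since every vertex must map into {0, 1} or every vertex into {1, 2}.

```python
from itertools import product

# H_WR: the path 0-1-2 with a loop at every vertex, as ordered pairs.
H_WR = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (1, 2), (2, 1)}

def hom(n_vertices, edges, H=H_WR, k=3):
    """Brute-force count of graph homomorphisms G -> H."""
    count = 0
    for phi in product(range(k), repeat=n_vertices):
        if all((phi[u], phi[v]) in H for u, v in edges):
            count += 1
    return count

# K_4 (d = 3): every pair of the 4 vertices is adjacent.
V = range(4)
E = [(u, v) for u in V for v in V if u < v]
print(hom(4, E))  # 2^(4+1) - 1 = 31
```

Plugging these counts into the displayed inequality for small regular graphs is a quick way to test one's understanding of the theorem.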

  13. Gas and grain chemical composition in cold cores as predicted by the Nautilus 3-phase model

    CERN Document Server

    Ruaud, Maxime; Hersant, Franck

    2016-01-01

    We present an extended version of the 2-phase gas-grain code NAUTILUS for 3-phase modelling of the gas and grain chemistry of cold cores. In this model, both the mantle and the surface are considered chemically active. We also take into account the competition among reaction, diffusion and evaporation. The model predictions are compared with ice observations in the envelopes of low-mass and massive young stellar objects as well as toward background stars. Modelled gas-phase abundances are compared to species observed toward the TMC-1 (CP) and L134N dark clouds. We find that our model successfully reproduces the observed ice species. It is found that the reaction-diffusion competition strongly enhances reactions with barriers, and more specifically reactions with H2, which is abundant on grains. This finding highlights the importance of a good approach for determining the abundance of H2 on grains. Consequently, it is found that the major N-bearing species on grains go from NH3 to N2 and HCN when the reaction-di...

  14. A southern Africa harmonic spline core field model derived from CHAMP satellite data

    Science.gov (United States)

    Nahayo, E.; Kotzé, P. B.; McCreadie, H.

    2015-02-01

    The monitoring of the Earth's magnetic field time variation requires continuous recording of geomagnetic data with good spatial coverage over the area of study. In southern Africa, ground recording stations are limited, and satellite data are needed for studies requiring high spatial resolution. We show the fast time variation of the geomagnetic field in the southern Africa region by deriving a harmonic spline model from CHAMP satellite measurements recorded between 2001 and 2010. The derived core field model, the Southern Africa Regional Model (SARM), is compared with the global model GRIMM-2 and with ground-based data recorded at the Hermanus magnetic observatory (HER) in South Africa and the Tsumeb magnetic observatory (TSU) in Namibia, with a focus mainly on the long-term variation of the geomagnetic field. The results of this study suggest that a regional model derived from satellite data alone can be used to study the small-scale features of the time variation of the geomagnetic field where ground data are not available. In addition, these results support the earlier findings of the occurrence of a 2007 magnetic jerk and of the rapid secular variation fluctuations of 2003 and 2004 in the region.

  15. Gamma-ray Emission from the Vela Pulsar Modeled with the Annular Gap and Core Gap

    CERN Document Server

    Du, Y J; Qiao, G J; Chou, C K

    2011-01-01

    The Vela pulsar represents a distinct group of γ-ray pulsars. Fermi γ-ray observations reveal that it has two sharp peaks (P1 and P2) in the light curve with a phase separation of 0.42, and a third peak (P3) in the bridge. The location and intensity of P3 are energy-dependent. We use the 3D magnetospheric model for the annular gap and core gap to simulate the γ-ray light curves and the phase-averaged and phase-resolved spectra. We found that the accelerating electric field along a field line in the annular gap region decreases with height. The emission in the high-energy GeV band originates from the curvature radiation of accelerated primary particles, while the synchrotron radiation from secondary particles contributes to the low-energy γ-ray band (0.1 - 0.3 GeV). The γ-ray light curve peaks P1 and P2 are generated in the annular gap region near the altitude of the null charge surface, whereas P3 and the bridge emission are generated in the core gap region. The intensity and loc...

  16. Structural characteristics of the core layer and biomimetic model of the ladybug forewing.

    Science.gov (United States)

    Chen, Jinxiang; Xu, Mengye; Okabe, Yoji; Guo, Zhensheng; Yu, Xindi

    2017-07-19

    To explore the characteristics of the core structure of ladybug (Harmonia axyridis) forewings, their microstructure was studied using microscopes. The results show, for the first time, that trabeculae exist in the frame of the beetle (ladybug) forewing; this study also represents the first determination of the parameters N, the total number of trabeculae in each forewing, and λt, the ratio of the cross-sectional area of the trabeculae to the effective area of trabecular distribution. The cross-sectional area of a single trabecula in the ladybug forewing is smaller than those in two other kinds of beetles, Allomyrina dichotoma and Prosopocoilus inclinatus. However, the average trabecular density of the ladybug forewing is 84 per square millimeter, the highest among these three kinds of beetles. The λt values are 1.0%, 1.5% and 10.5% for H. axyridis, A. dichotoma and P. inclinatus, respectively, and the corresponding N values are approximately 1.4, 1.7 and 3.7 thousand, respectively. Based on these findings, a biomimetic model of the ladybug forewing is proposed, characterized by a core structure with a high-density distribution of thin trabeculae surrounded by a foam-like material. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Characterization and identification of microRNA core promoters in four model species.

    Directory of Open Access Journals (Sweden)

    Xuefeng Zhou

    2007-03-01

    Full Text Available MicroRNAs are short, noncoding RNAs that play important roles in post-transcriptional gene regulation. Although many functions of microRNAs in plants and animals have been revealed in recent years, the transcriptional mechanism of microRNA genes is not well-understood. To elucidate the transcriptional regulation of microRNA genes, we study and characterize, in a genome scale, the promoters of intergenic microRNA genes in Caenorhabditis elegans, Homo sapiens, Arabidopsis thaliana, and Oryza sativa. We show that most known microRNA genes in these four species have the same type of promoters as protein-coding genes have. To further characterize the promoters of microRNA genes, we developed a novel promoter prediction method, called common query voting (CoVote, which is more effective than available promoter prediction methods. Using this new method, we identify putative core promoters of most known microRNA genes in the four model species. Moreover, we characterize the promoters of microRNA genes in these four species. We discover many significant, characteristic sequence motifs in these core promoters, several of which match or resemble the known cis-acting elements for transcription initiation. Among these motifs, some are conserved across different species while some are specific to microRNA genes of individual species.

  18. Modeling AGN Feedback in Cool-Core Clusters: The Balance between Heating and Cooling

    CERN Document Server

    Li, Yuan

    2014-01-01

    We study the long-term evolution of an idealized cool-core galaxy cluster under the influence of momentum-driven AGN feedback using three-dimensional high-resolution (60 pc) adaptive mesh refinement (AMR) simulations. The momentum-driven AGN feedback is modeled with a pair of (small-angle) precessing jets, and the jet power is calculated based on the accretion rate of cold gas in the vicinity of the supermassive black hole (SMBH). The ICM first cools into clumps along the propagation direction of the AGN jets. As the jet power increases, gas condensation occurs isotropically, forming spatially extended (up to a few tens of kpc) structures that resemble the observed $\rm H\alpha$ filaments in Perseus and many other cool-core clusters. Jet heating elevates the gas entropy and cooling time, halting clump formation. The cold gas that is not accreted onto the SMBH settles into a rotating disk of $\sim 10^{11}$ M$_{\odot}$. The hot gas cools directly onto the cold disk while the SMBH accretes from the innermost reg...

  19. Numerical Toy-Model Calculation of the Nucleon Spin Autocorrelation Function in a Supernova Core

    CERN Document Server

    Raffelt, G G; Raffelt, Georg; Sigl, Guenter

    1999-01-01

    We develop a simple model for the evolution of a nucleon spin in a hot and dense nuclear medium. A given nucleon is limited to one-dimensional motion in a distribution of external, spin-dependent scattering potentials. We calculate the nucleon spin autocorrelation function numerically for a variety of potential densities and distributions which are meant to bracket realistic conditions in a supernova core. For all plausible configurations the width of the spin-density structure function is found to be less than the temperature. This is in contrast with a naive perturbative calculation based on the one-pion exchange potential which overestimates the width and thus suggests a large suppression of the neutrino opacities by nucleon spin fluctuations. Our results suggest that it may be justified to neglect the collisional broadening of the spin-density structure function for the purpose of estimating the neutrino opacities in the deep inner core of a supernova. On the other hand, we find no indication that process...

  20. A transmission line model for propagation in elliptical core optical fibers

    Energy Technology Data Exchange (ETDEWEB)

    Georgantzos, E.; Boucouvalas, A. C. [Department of Telecommunications and Informatics, University of Peloponnese, Karaiskaki 70, 221 00, Tripoli Greece (Greece); Papageorgiou, C. [Department of Electrical Engineering, National technical University of Athens, Iroon Politechniou 9, Kaisariani, 16121, Athens (Greece)

    2015-12-31

    The calculation of the mode propagation constants of elliptical-core fibers has been the subject of extensive research, leading to many notable methods, with the classic step-index solution based on Mathieu functions. This paper derives a new method for the determination of mode propagation constants in single-mode fibers with an elliptic core by modeling the elliptical fiber as a series of connected, coupled transmission-line elements. We develop a matrix formulation of the transmission line, and the resonance of the circuits is used to calculate the mode propagation constants. The technique, used with success in the case of cylindrical fibers, is now extended to fibers with an elliptical cross-section. The advantage of this approach is that it is well suited to calculating the mode dispersion of elliptical waveguides with arbitrary refractive index profiles. The analysis begins with the deployment of Maxwell's equations in elliptical coordinates. Further algebraic analysis leads to a set of equations in which harmonics appear. Taking into consideration a predefined, fixed number of harmonics simplifies the problem and enables the use of the resonant-circuit approach. For each case, programs have been created in Matlab, providing a series of results (mode propagation constants) that are compared with corresponding results from the well-known Mathieu-function method.

  1. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2013

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg

    2013-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for effective application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).

  2. The treatment of mixing in core helium burning models -- II. Constraints from cluster star counts

    CERN Document Server

    Constantino, Thomas; Lattanzio, John C; van Duijneveldt, Adam

    2015-01-01

    The treatment of convective boundaries during core helium burning is a fundamental problem in stellar evolution calculations. In Paper I we showed that new asteroseismic observations of these stars imply they have either very large convective cores or semiconvection/partially mixed zones that trap g-modes. We probe this mixing by inferring the relative lifetimes of asymptotic giant branch (AGB) and horizontal branch (HB) stars from $R_2$, the observed ratio of these stars in recent HST photometry of 48 Galactic globular clusters. Our new determinations of $R_2$ are more self-consistent than those of previous studies, and our overall value of $R_2 = 0.117 \pm 0.005$ is the most statistically robust now available. We also establish that the luminosity difference between the HB and the AGB clump is $\Delta \log{L}_\text{HB}^\text{AGB} = 0.455 \pm 0.012$. Our results accord with earlier findings that standard models predict a lower $R_2$ than is observed. We demonstrate that the dominant sources of uncertainty in ...

  3. A core observational data model for enhancing the interoperability of ontologically annotated environmental data

    Science.gov (United States)

    Schildhauer, M.; Bermudez, L. E.; Bowers, S.; Dibner, P. C.; Gries, C.; Jones, M. B.; McGuinness, D. L.; Cao, H.; Cox, S. J.; Kelling, S.; Lagoze, C.; Lapp, H.; Madin, J.

    2010-12-01

    Research in the environmental sciences often requires accessing diverse data, collected by numerous data providers over varying spatiotemporal scales, incorporating specialized measurements from a range of instruments. These measurements are typically documented using idiosyncratic, disciplinary specific terms, and stored in management systems ranging from desktop spreadsheets to the Cloud, where the information is often further decomposed or stylized in unpredictable ways. This situation creates major informatics challenges for broadly discovering, interpreting, and merging the data necessary for integrative earth science research. A number of scientific disciplines have recognized these issues, and been developing semantically enhanced data storage frameworks, typically based on ontologies, to enable communities to better circumscribe and clarify the content of data objects within their domain of practice. There is concern, however, that cross-domain compatibility of these semantic solutions could become problematic. We describe here our efforts to address this issue by developing a core, unified Observational Data Model, that should greatly facilitate interoperability among the semantic solutions growing organically within diverse scientific domains. Observational Data Models have emerged independently from several distinct scientific communities, including the biodiversity sciences, ecology, evolution, geospatial sciences, and hydrology, to name a few. Informatics projects striving for data integration within each of these domains had converged on identifying "observations" and "measurements" as fundamental abstractions that provide useful "templates" through which scientific data can be linked— at the structural, composited, or even cell value levels— to domain terms stored in ontologies or other forms of controlled vocabularies. The Scientific Observations Network, SONet (http://sonet.ecoinformatics.org) brings together a number of these observational

  4. Modeling Multiple-Core Updraft Plume Rise for an Aerial Ignition Prescribed Burn by Coupling Daysmoke with a Cellular Automata Fire Model

    Directory of Open Access Journals (Sweden)

    Yongqiang Liu

    2012-07-01

    Full Text Available Smoke plume rise is critically dependent on plume updraft structure. Smoke plumes from landscape burns (forest and agricultural burns) are typically structured into "sub-plumes" or multiple-core updrafts, with the number of updraft cores depending on characteristics of the landscape, fire, fuels, and weather. The number of updraft cores determines the efficiency of vertical transport of heat and particulate matter and therefore plume rise. Daysmoke, an empirical-stochastic plume rise model designed for simulating wildland fire plumes, requires the updraft core number as an input. In this study, the updraft core number was obtained via a cellular automata fire model applied to an aerial ignition prescribed burn conducted at Eglin AFB on 6 February 2011. Typically four updraft cores were simulated, in agreement with a photo-image of the plume showing three or four distinct sub-plumes. Other Daysmoke input variables were calculated, including maximum initial updraft core diameter, updraft core vertical velocity, and relative emissions production. Daysmoke simulated a vertical tower that mushroomed 1,000 m above the mixing height. Plume rise was validated by ceilometer. Simulations with two temperature profiles found that 89-93 percent of the PM2.5 released during the flaming phase was transported into the free atmosphere above the mixing layer. The minimal ground-level smoke concentrations were verified by a small network of particulate samplers. Implications of these results for the inclusion of wildland fire smoke in air quality models are discussed.

  5. A Kinetic Vlasov Model for Plasma Simulation Using Discontinuous Galerkin Method on Many-Core Architectures

    Science.gov (United States)

    Reddell, Noah

    Advances are reported in the three pillars of computational science, achieving a new capability for understanding dynamic plasma phenomena outside of local thermodynamic equilibrium. A continuum kinetic model for plasma based on the Vlasov-Maxwell system for multiple particle species is developed. Consideration is added for boundary conditions in a truncated velocity domain and supporting wall interactions. A scheme to scale the velocity domain for multiple particle species with different temperatures and particle masses while sharing one computational mesh is described. A method for assessing the degree to which the kinetic solution differs from a Maxwell-Boltzmann distribution is introduced and tested on a thoroughly studied test case. The discontinuous Galerkin numerical method is extended for efficient solution of hyperbolic conservation laws in five or more particle phase-space dimensions using tensor-product hypercube elements with arbitrary polynomial order. A scheme for velocity-moment integration is incorporated, as required for coupling between the plasma species and electromagnetic waves. A new high-performance simulation code, WARPM, is developed to efficiently implement the model and numerical method on emerging many-core supercomputing architectures. WARPM uses the OpenCL programming model for computational kernels and task parallelism to overlap computation with communication. WARPM's single-node performance and parallel scaling efficiency are analyzed, with bottlenecks identified that guide future directions for the implementation. The plasma modeling capability is validated against physical problems with analytic solutions and well-established benchmark problems.

  6. A new model for the computation of the formation factor of core rocks

    Science.gov (United States)

    Beltrán, A.; Chávez, O.; Zaldivar, J.; Godínez, F. A.; García, A.; Zenit, R.

    2017-04-01

    Among all the rock parameters measured by modern well-logging tools, the formation factor is essential because it can be used to calculate the volume of oil and/or gas at the wellsite. A new mathematical model to calculate the formation factor is analytically derived from first principles. Given the electrical properties (resistivities) of both rock and brine and the tortuosity (a key parameter of the model), it is possible to calculate the dependence of the formation factor on porosity with good accuracy. When the cementation exponent ceases to remain constant with porosity, the new model captures both the non-linear behavior (at small porosity values) and the typical linear behavior of formation factor vs. porosity in log-log plots. Comparisons with experimental data from four different conventional core-rock lithologies (sands, sandstone, limestone, and volcanic rock) are shown, and good agreement is observed for all of them. This new model is robust, simple, and easy to implement for practical applications. In some cases it could replace Archie's law, superseding its empirical nature.
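    The abstract positions the new model against Archie's empirical law, which relates formation factor F to porosity φ as F = a·φ⁻ᵐ with cementation exponent m. As background, here is a minimal sketch (not the paper's model) of evaluating Archie's law and fitting its parameters from core measurements, exploiting the fact that the law is linear in log-log space:

```python
import numpy as np

def archie_formation_factor(phi, a=1.0, m=2.0):
    """Archie's empirical law: F = a * phi**(-m)."""
    return a * phi ** (-m)

def fit_archie(phi, F):
    """Fit (a, m) by least squares in log-log space, where
    log F = log a - m * log phi is a straight line."""
    slope, intercept = np.polyfit(np.log(phi), np.log(F), 1)
    return np.exp(intercept), -slope  # (a, m)

# Illustrative synthetic data that follows Archie's law with a=1, m=2
phi = np.array([0.05, 0.10, 0.20, 0.30])
F = archie_formation_factor(phi, a=1.0, m=2.0)
a, m = fit_archie(phi, F)
```

When the cementation exponent is not constant, points in the log-log plot bend away from this straight line at low porosity, which is the regime the abstract says the new tortuosity-based model captures.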

  7. Core-Level Modeling and Frequency Prediction for DSP Applications on FPGAs

    Directory of Open Access Journals (Sweden)

    Gongyu Wang

    2015-01-01

    Full Text Available Field-programmable gate arrays (FPGAs provide a promising technology that can improve performance of many high-performance computing and embedded applications. However, unlike software design tools, the relatively immature state of FPGA tools significantly limits productivity and consequently prevents widespread adoption of the technology. For example, the lengthy design-translate-execute (DTE process often must be iterated to meet the application requirements. Previous works have enabled model-based, design-space exploration to reduce DTE iterations but are limited by a lack of accurate model-based prediction of key design parameters, the most important of which is clock frequency. In this paper, we present a core-level modeling and design (CMD methodology that enables modeling of FPGA applications at an abstract level and yet produces accurate predictions of parameters such as clock frequency, resource utilization (i.e., area, and latency. We evaluate CMD’s prediction methods using several high-performance DSP applications on various families of FPGAs and show an average clock-frequency prediction error of 3.6%, with a worst-case error of 20.4%, compared to the best of existing high-level prediction methods, 13.9% average error with 48.2% worst-case error. We also demonstrate how such prediction enables accurate design-space exploration without coding in a hardware-description language (HDL, significantly reducing the total design time.
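    The reported accuracy figures (3.6% average, 20.4% worst-case) are relative clock-frequency prediction errors aggregated over a set of designs. A trivial sketch of how such summary statistics are computed from predicted versus measured frequencies (the MHz values below are hypothetical, not from the paper):

```python
def prediction_errors(predicted, actual):
    """Relative prediction error (%) per design, plus summary statistics."""
    errs = [abs(p - a) / a * 100.0 for p, a in zip(predicted, actual)]
    return sum(errs) / len(errs), max(errs)  # (average %, worst-case %)

# Hypothetical predicted vs. measured clock frequencies in MHz
avg, worst = prediction_errors([95.0, 210.0, 148.0], [100.0, 200.0, 160.0])
```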

  8. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    Science.gov (United States)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model's dynamical core. Its main features are a vorticity-divergence formulation on the unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using a reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25 % reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of a solution in the region of interest.
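    A reduced latitude-longitude grid keeps the latitude rows of a regular grid but thins the longitude points toward the poles, where meridians converge. A rough sketch of the idea (the cosine rule and the minimum row length below are illustrative assumptions, not SL-AV20's actual reduction algorithm):

```python
import math

def reduced_grid(nlat, nlon_equator, nlon_min=16):
    """Longitude point count per latitude row of a reduced lat-lon grid.

    Rows are spaced uniformly in latitude; the longitude count shrinks
    toward the poles roughly as cos(latitude), a common reduction rule.
    """
    rows = []
    for j in range(nlat):
        lat = -90.0 + (j + 0.5) * 180.0 / nlat  # row-center latitude
        n = max(nlon_min, int(nlon_equator * math.cos(math.radians(lat))))
        rows.append(n)
    return rows

rows = reduced_grid(nlat=90, nlon_equator=180)
# Fraction of grid points saved relative to the regular 90 x 180 grid
reduction = 1.0 - sum(rows) / (90 * 180)
```

With this naive cosine rule the point count drops by roughly a third, which is in the same ballpark as the 25 % reduction the abstract reports as accuracy-neutral.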

  9. The molecular architecture of the yeast spindle pole body core determined by Bayesian integrative modeling.

    Science.gov (United States)

    Viswanath, Shruthi; Bonomi, Massimiliano; Kim, Seung Joong; Klenchin, Vadim A; Taylor, Keenan C; Yabut, King C; Umbreit, Neil T; Van Epps, Heather A; Meehl, Janet; Jones, Michele H; Russel, Daniel; Velazquez-Muriel, Javier A; Winey, Mark; Rayment, Ivan; Davis, Trisha N; Sali, Andrej; Muller, Eric G

    2017-08-16

    Microtubule organizing centers (MTOCs) form, anchor and stabilize the polarized network of microtubules in a cell. The central MTOC is the centrosome that duplicates during the cell cycle and during mitosis assembles a bipolar spindle to capture and segregate sister chromatids. Yet, despite their importance in cell biology, the physical structure of MTOCs is poorly understood. Here we determine the molecular architecture of the core of the yeast spindle pole body (SPB) by Bayesian integrative structure modeling based on in vivo FRET, small-angle X-ray scattering (SAXS), X-ray crystallography, electron microscopy and two-hybrid analysis. The model is validated by several methods that include a genetic analysis of the conserved PACT domain that recruits Spc110, a protein related to pericentrin, to the SPB. The model suggests that calmodulin can act as a protein cross-linker and Spc29 is an extended, flexible protein. The model led to the identification of a single, essential heptad in the coiled coil of Spc110 and a minimal PACT domain. It also led to a proposed pathway for the integration of Spc110 into the SPB. © 2017 by The American Society for Cell Biology.

  10. The Status of Multi-Dimensional Core-Collapse Supernova Models

    CERN Document Server

    Müller, B

    2016-01-01

    Models of core-collapse supernova explosions powered by the neutrino-driven mechanism have matured considerably in recent years. Explosions at the low-mass end of the progenitor spectrum can routinely be simulated in 1D, 2D, and 3D and allow us to study supernova nucleosynthesis based on first-principles models. Results of nucleosynthesis calculations indicate that supernovae of the lowest masses could be important contributors of some lighter neutron-rich elements beyond iron. The explosion mechanism of more massive stars is still under investigation, although first 3D models of neutrino-driven explosions employing multi-group neutrino transport have recently become available. Together with earlier 2D models and more simplified 3D simulations, these have elucidated the interplay between neutrino heating and hydrodynamic instabilities in the post-shock region that is essential for shock revival. However, some physical ingredients may still need to be added or improved before simulations can robustly explain supernova explosions over a wide range of progenitors.

  11. Ab Initio No Core Shell Model - Recent Results and Further Prospects

    CERN Document Server

    Vary, James P; Potter, Hugh; Caprio, Mark A; Smith, Robin; Binder, Sven; Calci, Angelo; Fischer, Sebastian; Langhammer, Joachim; Roth, Robert; Aktulga, Hasan Metin; Ng, Esmond; Yang, Chao; Oryspayev, Dossay; Sosonkina, Masha; Saule, Erik; Çatalyürek, Ümit

    2015-01-01

    There has been significant recent progress in solving the long-standing problems of how nuclear shell structure and collective motion emerge from underlying microscopic inter-nucleon interactions. We review a selection of recent significant results within the ab initio No Core Shell Model (NCSM) closely tied to three major factors enabling this progress: (1) improved nuclear interactions that accurately describe the experimental two-nucleon and three-nucleon interaction data; (2) advances in algorithms to simulate the quantum many-body problem with strong interactions; and (3) continued rapid development of high-performance computers now capable of performing $20 \times 10^{15}$ floating point operations per second. We also comment on prospects for further developments.

  12. Ab Initio No-Core Shell Model Calculations Using Realistic Two- and Three-Body Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Navratil, P; Ormand, W E; Forssen, C; Caurier, E

    2004-11-30

    There has been significant progress in the ab initio approaches to the structure of light nuclei. One such method is the ab initio no-core shell model (NCSM). Starting from realistic two- and three-nucleon interactions, this method can predict low-lying levels in p-shell nuclei. In this contribution, we present a brief overview of the NCSM with examples of recent applications. We highlight our study of the parity inversion in ¹¹Be, for which calculations were performed in basis spaces up to 9ℏΩ (dimensions reaching 7 × 10⁸). We also present our latest results for the p-shell nuclei using the Tucson-Melbourne TM three-nucleon interaction with several proposed parameter sets.

  13. Harmonic Domain Modelling of Transformer Core Nonlinearities Using the DIgSILENT PowerFactory Software

    DEFF Research Database (Denmark)

    Bak, Claus Leth; Bak-Jensen, Birgitte; Wiechowski, Wojciech

    2008-01-01

    This paper demonstrates the results of implementation and verification of an already existing algorithm that allows for calculating saturation characteristics of single-phase power transformers. The algorithm was described for the first time in 1993. It has now been implemented using the DIgSILENT Programming Language (DPL) as an external script in the harmonic domain calculations of the power system analysis tool PowerFactory [10]. The algorithm is verified by harmonic measurements on a single-phase power transformer. A theoretical analysis of the core nonlinearity phenomena in single- and three-phase transformers is also presented. This analysis leads to the conclusion that the method can be applied for modelling nonlinearities of three-phase autotransformers.
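    Harmonic-domain treatment of core saturation boils down to expressing the distorted magnetizing current as a sum of odd harmonics of the driving flux. A toy illustration of that decomposition (the polynomial saturation curve below is an illustrative stand-in, not the paper's measured characteristic):

```python
import numpy as np

def magnetizing_current_harmonics(n_harm=5, k_lin=1.0, k_sat=0.2, exponent=9):
    """Odd-harmonic magnitudes of the magnetizing current for sinusoidal flux.

    The saturation curve i(lambda) = k_lin*lambda + k_sat*lambda**exponent is
    an odd polynomial stand-in for a measured magnetizing characteristic.
    """
    t = np.linspace(0.0, 1.0, 1024, endpoint=False)  # one fundamental period
    flux = np.sin(2 * np.pi * t)                     # sinusoidal core flux linkage
    current = k_lin * flux + k_sat * flux ** exponent
    spectrum = np.fft.rfft(current) / (len(t) / 2)   # normalize so sin -> 1.0
    return {h: abs(spectrum[h]) for h in range(1, 2 * n_harm, 2)}

harm = magnetizing_current_harmonics()
```

Because the saturation curve is an odd function, only odd harmonics appear, and their magnitudes fall off with harmonic order, which is the structure a harmonic-domain solver exploits.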

  14. A coarse-grained model based on core-softened potentials for anomalous polymers

    Indian Academy of Sciences (India)

    RONALDO J C BATISTA; EVY A SALCEDO TORRES; ALAN BARROS DE OLIVEIRA; MARCIA C B BARBOSA

    2017-07-01

    Starting from an anomalous monomeric system, in which particles interact via a two-scale core-softened potential, we investigate how the system properties evolve as particles are joined together to form polymers whose chain size varies from 4 up to 32 monomers. We observed that the density- and diffusion-anomaly regions in the pressure versus temperature phase diagram are smaller for the monomeric system than for the polymers. We also found that the polymers do not fold into themselves to form solid spheres; instead, they tend to maximize the chain-fluid contact. The Rouse and reptation models can be employed to describe the polymers' diffusive behaviour, but, in contrast to results of simulations where monomers interact via Lennard-Jones potentials, our results show a much shorter entanglement length of at most 8 monomers.

  15. Recent advances in the theoretical modeling of pulsating low-mass He-core white dwarfs

    CERN Document Server

    Córsico, A H; Calcaferro, L M; Serenelli, A M; Kepler, S O; Jeffery, C S

    2016-01-01

    Many extremely low-mass (ELM) white-dwarf (WD) stars are currently being found in the field of the Milky Way. Some of these stars exhibit long-period nonradial $g$-mode pulsations, and constitute the class of ELMV pulsating WDs. In addition, several low-mass pre-WDs, which could be precursors of ELM WDs, have been observed to show short-period photometric variations likely due to nonradial $p$ modes and radial modes. They could constitute a new class of pulsating low-mass pre-WD stars, the pre-ELMV stars. Here, we present the recent results of a thorough theoretical study of the nonadiabatic pulsation properties of low-mass He-core WDs and pre-WDs on the basis of fully evolutionary models representative of these stars.

  16. Virtual Machine Support for Many-Core Architectures: Decoupling Abstract from Concrete Concurrency Models

    Directory of Open Access Journals (Sweden)

    Stefan Marr

    2010-02-01

    Full Text Available The upcoming many-core architectures require software developers to exploit concurrency to utilize available computational power. Today's high-level language virtual machines (VMs, which are a cornerstone of software development, do not provide sufficient abstraction for concurrency concepts. We analyze concrete and abstract concurrency models and identify the challenges they impose for VMs. To provide sufficient concurrency support in VMs, we propose to integrate concurrency operations into VM instruction sets. Since there will always be VMs optimized for special purposes, our goal is to develop a methodology to design instruction sets with concurrency support. Therefore, we also propose a list of trade-offs that have to be investigated to advise the design of such instruction sets. As a first experiment, we implemented one instruction set extension for shared memory and one for non-shared memory concurrency. From our experimental results, we derived a list of requirements for a full-grown experimental environment for further research.

  17. Cost-based optimization of a nuclear reactor core design: a preliminary model

    Energy Technology Data Exchange (ETDEWEB)

    Sacco, Wagner F.; Alves Filho, Hermes [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico. Dept. de Modelagem Computacional]. E-mails: wfsacco@iprj.uerj.br; halves@iprj.uerj.br; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Reatores]. E-mail: cmnap@ien.gov.br

    2007-07-01

    A new formulation of a nuclear core design optimization problem is introduced in this article. Originally, the optimization problem consisted in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the radial power peaking factor in a three-enrichment zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. Here, we address the same problem using the minimization of the fuel and cladding materials costs as the objective function, and the radial power peaking factor as an operational constraint. This cost-based optimization problem is attacked by two metaheuristics, the standard genetic algorithm (SGA), and a recently introduced Metropolis algorithm called the Particle Collision Algorithm (PCA). The two algorithms are submitted to the same computational effort and their results are compared. As the formulation presented is preliminary, more elaborate models are also discussed (author)

  18. Infrared length scale and extrapolations for the no-core shell model

    CERN Document Server

    Wendt, K A; Papenbrock, T; Sääf, D

    2015-01-01

    We precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the $A$-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of $A$ nucleons in the NCSM space to that of $A$ nucleons in a $3(A-1)$-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for $^{6}$Li. We apply our result and perform accurate IR extrapolations for bound states of $^{4}$He, $^{6}$He, $^{6}$Li, $^{7}$Li. We also attempt to extrapolate NCSM results for $^{10}$B and $^{16}$O with bare interactions from chiral effective field theory over tens of MeV.
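    IR extrapolations of this kind conventionally use the ansatz E(L) = E∞ + a·exp(−2k∞L) for the bound-state energy as a function of the IR length L. With three energies at equally spaced L values, the three parameters can be recovered in closed form; a sketch with synthetic values (the numbers below are illustrative, not the paper's data):

```python
import math

def ir_extrapolate(L, E):
    """Solve E(L) = E_inf + a*exp(-2*k*L), a standard IR-extrapolation
    ansatz, from three energies at equally spaced IR lengths L."""
    d = L[1] - L[0]
    assert abs((L[2] - L[1]) - d) < 1e-12, "L must be equally spaced"
    r = (E[1] - E[2]) / (E[0] - E[1])          # = exp(-2*k*d)
    k = -math.log(r) / (2 * d)
    a = (E[0] - E[1]) / (math.exp(-2 * k * L[0]) * (1 - r))
    E_inf = E[0] - a * math.exp(-2 * k * L[0])
    return E_inf, a, k

# Synthetic check: energies generated from E_inf=-28.3 MeV, a=40, k=0.5 fm^-1
L = [8.0, 10.0, 12.0]
E = [-28.3 + 40.0 * math.exp(-2 * 0.5 * x) for x in L]
E_inf, a, k = ir_extrapolate(L, E)
```

In practice one fits many (L, E) points by nonlinear least squares rather than solving exactly, but the three-point inversion shows why determining L precisely matters: errors in L propagate directly into E∞.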

  19. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system; these control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. Almost all existing station blackout (SBO) accident analyses assume that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand extended SBOs for BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  20. Percentage of Positive Biopsy Cores: A Better Risk Stratification Model for Prostate Cancer?

    Energy Technology Data Exchange (ETDEWEB)

    Huang Jiayi; Vicini, Frank A. [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI (United States); Williams, Scott G. [Peter Maccallum Cancer Centre and University of Melbourne, Melbourne, Victoria (Australia); Ye Hong; McGrath, Samuel; Ghilezan, Mihai; Krauss, Daniel; Martinez, Alvaro A. [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI (United States); Kestin, Larry L., E-mail: lkestin@comcast.net [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI (United States)

    2012-07-15

    Purpose: To assess the prognostic value of the percentage of positive biopsy cores (PPC) and perineural invasion (PNI) in predicting the clinical outcomes after radiotherapy (RT) for prostate cancer and to explore the possibilities to improve on existing risk-stratification models. Methods and Materials: Between 1993 and 2004, 1,056 patients with clinical Stage T1c-T3N0M0 prostate cancer, who had four or more biopsy cores sampled and complete biopsy core data available, were treated with external beam RT, with or without a high-dose-rate brachytherapy boost, at William Beaumont Hospital. The median follow-up was 7.6 years. Multivariate Cox regression analysis was performed with PPC, Gleason score, pretreatment prostate-specific antigen, T stage, PNI, radiation dose, androgen deprivation, age, prostate-specific antigen frequency, and follow-up duration. A new risk stratification (PPC classification) was empirically devised to incorporate PPC and replace the T stage. Results: On multivariate Cox regression analysis, the PPC was an independent predictor of distant metastasis, cause-specific survival, and overall survival (all p < .05). A PPC >50% was associated with significantly greater distant metastasis (hazard ratio, 4.01; 95% confidence interval, 1.86-8.61), and its independent predictive value remained significant with or without androgen deprivation therapy (all p < .05). In contrast, PNI and T stage were only predictive for locoregional recurrence. Combining the PPC (≤50% vs. >50%) with National Comprehensive Cancer Network risk stratification demonstrated added prognostic value of distant metastasis for the intermediate-risk (hazard ratio, 5.44; 95% confidence interval, 1.78-16.6) and high-risk (hazard ratio, 4.39; 95% confidence interval, 1.70-11.3) groups, regardless of the use of androgen deprivation and high-dose RT (all p < .05). The proposed PPC classification appears to provide improved stratification of the clinical outcomes relative to the National
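    The proposed classification splits each risk group at the 50% positive-core cutoff. A minimal sketch of that bookkeeping (the group names and label format are illustrative; the paper's full stratification scheme is richer than this binary augmentation):

```python
def ppc_risk_group(ppc_percent, nccn_group):
    """Augment an NCCN-style risk group with the percentage of positive
    biopsy cores (PPC), using the abstract's >50% cutoff. Group names and
    the combination rule are illustrative, not the paper's exact scheme."""
    if nccn_group not in {"low", "intermediate", "high"}:
        raise ValueError("unknown risk group")
    suffix = "PPC>50%" if ppc_percent > 50.0 else "PPC<=50%"
    return f"{nccn_group}, {suffix}"

# Example: 7 of 12 sampled cores positive -> PPC about 58%
label = ppc_risk_group(ppc_percent=7 / 12 * 100, nccn_group="intermediate")
```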

  1. No-core configuration-interaction model for the isospin- and angular-momentum-projected states

    Science.gov (United States)

    Satuła, W.; Bączyk, P.; Dobaczewski, J.; Konieczka, M.

    2016-08-01

    Background: Single-reference density functional theory is very successful in reproducing bulk nuclear properties like binding energies, radii, or quadrupole moments throughout the entire periodic table. Its extension to the multireference level allows for restoring symmetries and, in turn, for calculating transition rates. Purpose: We propose a new variant of the no-core-configuration-interaction (NCCI) model treating properly isospin and rotational symmetries. The model is applicable to any nucleus irrespective of its mass and neutron- and proton-number parity. It properly includes polarization effects caused by an interplay between the long- and short-range forces acting in the atomic nucleus. Methods: The method is based on solving the Hill-Wheeler-Griffin equation within a model space built of linearly dependent states having good angular momentum and properly treated isobaric spin. The states are generated by means of the isospin and angular-momentum projection applied to a set of low-lying (multi)particle-(multi)hole deformed Slater determinants calculated using the self-consistent Skyrme-Hartree-Fock approach. Results: The theory is applied to calculate energy spectra in N ≈Z nuclei that are relevant from the point of view of a study of superallowed Fermi β decays. In particular, a new set of the isospin-symmetry-breaking corrections to these decays is given. Conclusions: It is demonstrated that the NCCI model is capable of capturing main features of low-lying energy spectra in light and medium-mass nuclei using relatively small model space and without any local readjustment of its low-energy coupling constants. Its flexibility and a range of applicability makes it an interesting alternative to the conventional nuclear shell model.

  2. Core/shell CdS/ZnS nanoparticles: Molecular modelling and characterization by photocatalytic decomposition of Methylene Blue

    Energy Technology Data Exchange (ETDEWEB)

    Praus, Petr, E-mail: petr.praus@vsb.cz [Department of Analytical Chemistry and Material Testing, VŠB-Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava-Poruba (Czech Republic); Regional Materials Science and Technology Centre, VŠB-Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava (Czech Republic); Svoboda, Ladislav [Department of Analytical Chemistry and Material Testing, VŠB-Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava-Poruba (Czech Republic); Tokarský, Jonáš [Nanotechnology Centre, VŠB-Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava-Poruba (Czech Republic); IT4Innovations Centre of Excellence, VŠB-Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava-Poruba (Czech Republic); Hospodková, Alice [Department of Semiconductors, Institute of Physics ASCR, v. v. i., The Academy of Science of the Czech Republic, Na Slovance 1999/2, 182 21 Prague 8 (Czech Republic); Klemm, Volker [Institute of Materials Science, TU Bergakademie Freiberg, Gustav-Zeuner-Street 5, D-09599 Freiberg (Germany)

    2014-02-15

    Core/shell CdS/ZnS nanoparticles were modelled in the Materials Studio environment and synthesized by a one-pot procedure. The CdS core radius and the thickness of the ZnS shell, composed of 1–3 ZnS monolayers, were predicted from the molecular models. From UV-vis absorption spectra of the CdS/ZnS colloid dispersions, transition energies of CdS and ZnS nanostructures were calculated. They indicated penetration of electrons and holes from the CdS core into the ZnS shell and relaxation strain in the ZnS shell structure. The transition energies were used for calculation of the CdS core radius by the Schrödinger equation. Both the relaxation strain in ZnS shells and the size of the CdS core radius were predicted by the molecular modelling. The ZnS shell thickness and the degree of CdS core coverage were characterized by the photocatalytic decomposition of Methylene Blue (MB) using CdS/ZnS nanoparticles as photocatalysts. The observed kinetic constants of the MB photodecomposition (k_obs) were evaluated and a relationship between k_obs and the ZnS shell thickness was derived. Regression results revealed that 86% of the CdS core surface was covered with ZnS and that the average thickness of the ZnS shell was about 12% higher than that predicted by molecular modelling.
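    Relating a measured transition energy to a CdS core radius "by the Schrödinger equation" is commonly done with the Brus effective-mass approximation, E(R) = Eg + ħ²π²/(2R²)·(1/mₑ* + 1/mₕ*) − 1.8e²/(4πε₀εᵣR). A sketch using typical literature parameters for CdS (the exact model and parameter values used in the paper may differ):

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # free-electron mass, kg
E_CH = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def brus_transition_energy(radius_nm, e_gap_ev=2.42, m_e=0.19, m_h=0.8, eps_r=8.9):
    """Brus effective-mass estimate (eV) of the lowest transition energy of a
    spherical CdS nanocrystal with the given core radius. The CdS bulk gap,
    effective masses, and dielectric constant are typical literature values,
    used here for illustration only."""
    R = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2) / (2 * R**2) \
        * (1 / (m_e * M_E) + 1 / (m_h * M_E))        # particle-in-a-sphere term, J
    coulomb = 1.8 * E_CH**2 / (4 * math.pi * EPS0 * eps_r * R)  # e-h attraction, J
    return e_gap_ev + (confinement - coulomb) / E_CH

E = brus_transition_energy(radius_nm=2.0)
```

Inverting this relation numerically (solving E(R) = E_measured for R) is the step that turns an absorption-edge energy into a core-radius estimate.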

  3. On the reflection of Alfvén waves and its implication for Earth's core modeling

    CERN Document Server

    Schaeffer, Nathanaël; Cardin, Philippe; Drouard, Marie

    2011-01-01

    Alfvén waves propagate in electrically conducting fluids in the presence of a magnetic field. Their reflection properties depend on the ratio between the kinematic viscosity and the magnetic diffusivity of the fluid, also known as the magnetic Prandtl number Pm. In the special case Pm = 1, there is no reflection on an insulating, no-slip boundary, and the wave energy is entirely dissipated in the boundary layer. We investigate the consequences of this remarkable behaviour for the numerical modeling of torsional Alfvén waves (also known as torsional oscillations), which represent a special class of Alfvén waves, in rapidly rotating spherical shells. They consist of geostrophic motions and are thought to exist in the fluid cores of planets with an internal magnetic field. In the geophysical limit Pm << 1 these waves are efficiently reflected, but not near Pm = 0.3, which is the range of values for which geodynamo numerical models operate. As a result, geodynamo models with no-slip boundary conditions cannot exhibit torsional oscillation normal modes.

  4. Light Nuclei in the Framework of the Symplectic No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Draayer, Jerry P.; Dytrych, Tomas; Sviratcheva, Kristina D.; Bahri, Chairul; /Louisiana State U.; Vary, James P.; /Iowa State U. /LLNL, Livermore /SLAC

    2007-04-02

    A symplectic no-core shell model (Sp-NCSM) is constructed with the goal of extending the ab-initio NCSM to include strongly deformed higher-oscillator-shell configurations and to reach heavier nuclei that cannot be studied currently because the spaces encountered are too large to handle, even with the best of modern-day computers. This goal is achieved by integrating two powerful concepts: the ab-initio NCSM and the Sp(3,R) ⊃ SU(3) group-theoretical approach. The NCSM uses modern realistic nuclear interactions in model spaces that consist of many-body configurations up to a given number of ℏΩ excitations, together with modern high-performance parallel computing techniques. The symplectic theory extends this picture by recognizing that when deformed configurations dominate, which they often do, the model space can be better selected so that less relevant low-lying ℏΩ configurations yield to more relevant high-lying ℏΩ configurations, ones that respect a near symplectic symmetry found in the Hamiltonian. Results from an application of the Sp-NCSM to light nuclei are compared with those for the NCSM and with experiment.

  5. Inference of ICF implosion core mix using experimental data and theoretical mix modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sherrill, Leslie Welser [Los Alamos National Laboratory; Haynes, Donald A [Los Alamos National Laboratory; Cooley, James H [Los Alamos National Laboratory; Sherrill, Manolo E [Los Alamos National Laboratory; Mancini, Roberto C [UNR; Tommasini, Riccardo [LLNL; Golovkin, Igor E [PRISM COMP. SCIENCES; Haan, Steven W [LLNL

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments that have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information extracted from the experimental data, and it was found that Haan's mix model predicted trends in the width of the mix layer as a function of initial shell thickness. These results contribute to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increasing confidence in the methods used to extract mixing information from experimental data.

  6. Kinetic Parameters Estimation of MgO-C Refractory by Shrinking Core Model

    Institute of Scientific and Technical Information of China (English)

    B.Hashemi; Z.A.Nemati; S.K. Sadrnezhaad; Z.A.Moghimi

    2006-01-01

    The kinetics of oxidation of MgO-C refractories was investigated by shrinking-core modeling of the gas-solid reactions taking place during heating of the porous materials to high temperatures. Samples containing 4.5–17 wt pct graphite were isothermally oxidized at 1000–1350 °C. Weight-loss data were compared with predictions of the model. A mixed two-stage mechanism comprising pore diffusion plus boundary-layer gas transfer was shown to generally control the oxidation rate. Pore diffusion was, however, more effective, especially at graphite contents lower than 10 wt pct under forced-convection blowing of the air. Model calculations showed that effective gas diffusion coefficients were in the range of 0.08 to 0.55 cm²/s. These values can be used to determine the corresponding tortuosity factors of 6.85 to 2.22. Activation energies related to the pore-diffusion mechanism appeared to be around (46.4 ± 2) kJ/mol. The estimated intermolecular diffusion coefficients were shown to be independent of the graphite content when the percentage of graphite exceeded a marginal value of 10 wt pct.
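    The mixed two-stage mechanism (boundary-layer gas transfer plus pore diffusion) maps onto the textbook shrinking-core expressions, in which the characteristic times of the series resistances simply add. A sketch of the Levenspiel-style conversion functions (the τ values below are arbitrary illustrations, not fitted to the paper's data):

```python
def time_for_conversion(X, tau_film, tau_ash):
    """Shrinking-core model: time to reach conversion X of a spherical
    particle when gas-film transfer and ash-layer (pore) diffusion both
    resist. Resistances in series: t(X) = tau_film*g(X) + tau_ash*p(X)."""
    g = X                                           # gas-film control term
    p = 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)    # ash-diffusion term
    return tau_film * g + tau_ash * p

def conversion_at_time(t, tau_film, tau_ash, tol=1e-10):
    """Invert t(X) for X in [0, 1] by bisection; t(X) is monotone in X."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if time_for_conversion(mid, tau_film, tau_ash) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative case: pore diffusion dominates (tau_ash >> tau_film),
# matching the abstract's finding at lower graphite contents.
X = conversion_at_time(t=50.0, tau_film=20.0, tau_ash=100.0)
```

Fitting tau_ash against measured weight-loss curves is what yields the effective gas diffusion coefficients (and hence tortuosity factors) the abstract reports.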

  7. The Status of Multi-Dimensional Core-Collapse Supernova Models

    Science.gov (United States)

    Müller, B.

    2016-09-01

    Models of neutrino-driven core-collapse supernova explosions have matured considerably in recent years. Explosions of low-mass progenitors can routinely be simulated in 1D, 2D, and 3D. Nucleosynthesis calculations indicate that these supernovae could be contributors of some lighter neutron-rich elements beyond iron. The explosion mechanism of more massive stars remains under investigation, although first 3D models of neutrino-driven explosions employing multi-group neutrino transport have become available. Together with earlier 2D models and more simplified 3D simulations, these have elucidated the interplay between neutrino heating and hydrodynamic instabilities in the post-shock region that is essential for shock revival. However, some physical ingredients may still need to be added/improved before simulations can robustly explain supernova explosions over a wide range of progenitors. Solutions recently suggested in the literature include uncertainties in the neutrino rates, rotation, and seed perturbations from convective shell burning. We review the implications of 3D simulations of shell burning in supernova progenitors for the `perturbations-aided neutrino-driven mechanism,' whose efficacy is illustrated by the first successful multi-group neutrino hydrodynamics simulation of an 18 solar mass progenitor with 3D initial conditions. We conclude with speculations about the impact of 3D effects on the structure of massive stars through convective boundary mixing.

  8. Grid-Enabling SPMD Applications through Hierarchical Partitioning and a Component-Based Runtime

    Science.gov (United States)

    Mathias, Elton; Cavé, Vincent; Lanteri, Stéphane; Baude, Françoise

    Developing highly communicating scientific applications capable of efficiently using computational grids is not a trivial task. Ideally, these applications should take grid topology into account 1) during mesh partitioning, to balance the workload among heterogeneous resources and exploit physical neighborhood, and 2) in communications, to lower the impact of latency and reduced bandwidth. Besides, this should not be a complex matter in end-user applications. These are the central concerns of the DiscoGrid project, which promotes the concept of a hierarchical SPMD programming model, along with grid-aware multi-level mesh partitioning, to enable the treatment of grid issues by the underlying runtime in a way that is seamless for programmers. In this paper, we present the DiscoGrid project and the work around the GCM/ProActive-based implementation of the DiscoGrid Runtime. Experiments with a non-trivial computational electromagnetics application show that the component-based approach offers flexible and efficient support and that the proposed programming model can ease the development of such applications.

  9. Core-crust transition properties of neutron stars within systematically varied extended relativistic mean-field model

    CERN Document Server

    Sulaksono, A; Agrawal, B K

    2014-01-01

    The model dependence and the symmetry-energy dependence of the core-crust transition properties of neutron stars are studied using three different families of systematically varied extended relativistic mean-field models. Several forces within each family are chosen such that they yield wide variations in the values of the nuclear symmetry energy $a_{\rm sym}$ and its slope parameter $L$ at the saturation density. The core-crust transition density is calculated using a method based on the random-phase approximation. The core-crust transition density is strongly correlated, in a model-independent manner, with the symmetry-energy slope parameter evaluated at the saturation density. The pressure at the transition point does not show any meaningful correlation with the symmetry-energy parameters at the saturation density. At best, the pressure at the transition point is correlated with the symmetry-energy parameters and their linear combination evaluated at some sub-saturation density. Yet, such corre...

  10. A New Multi-Dimensional General Relativistic Neutrino Hydrodynamics Code for Core-Collapse Supernovae II. Relativistic Explosion Models of Core-Collapse Supernovae

    CERN Document Server

    Mueller, B; Marek, A

    2012-01-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the CoCoNuT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the spacetime metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 solar mass progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared to Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated ele...

  11. CURRENT USAGE OF COMPONENT BASED PRINCIPLES FOR DEVELOPING WEB APPLICATIONS WITH FRAMEWORKS: A LITERATURE REVIEW

    OpenAIRE

    Matija Novak; Ivan Švogor

    2016-01-01

    Component-based software development has become a very popular paradigm in many software engineering branches. In the early phase of Web 2.0's appearance, it was also popular for web application development. Judging from the analyzed papers, between that period and today the use of component-based techniques for web application development slowed down somewhat; however, recent developments indicate a comeback, most of all apparent in W3C's component web working group. In this article we wa...

  12. Self consistent model of core formation and the effective metal-silicate partitioning

    Science.gov (United States)

    Ichikawa, H.; Labrosse, S.; Kameyama, M.

    2010-12-01

    It has long been known that the formation of the core transforms gravitational energy into heat and is able to heat up the whole Earth by about 2000 K. However, the distribution of this energy within the Earth is still debated and depends on the core-formation process considered. Iron rain in the surface magma ocean is supposed to be the first mechanism of separation for large planets; iron then coalesces to form a pond at the base of the magma ocean [Stevenson 1990]. The time scale of the separation can be estimated from the falling velocity of the iron phase, which is estimated by numerical simulation [Ichikawa et al., 2010] as ~10 cm/s for iron droplets of centimeter scale. A simple estimate of the metal-silicate partitioning from the P-T condition at the base of the magma ocean, which in a single-stage model must lie between the peridotite liquidus and solidus, is inconsistent with the Earth's core-mantle partitioning: P-T conditions where silicate equilibrated with metal are beyond the liquidus or solidus temperature by about ~700 K. For example, estimated P-T conditions are 40 GPa at 3750 K (Wade and Wood, 2005), T ≥ 3600 K (Chabot and Agee, 2003), and 35 GPa at T ≥ 3300 K (Gessmann and Rubie, 2000). Meanwhile, Rubie et al. (2003) showed that metal could not equilibrate with silicate at the base of the magma ocean before crystallization of the silicate. On the other hand, metal-silicate equilibration is achieved in only ~5 s in the iron-rain state. Therefore metal and silicate simultaneously separate and equilibrate with each other at the P-T conditions encountered on the way to the iron pond. Taking into account the release of gravitational energy, the temperature in the middle of the magma ocean would be higher than the liquidus. Estimation of the thermal structure during the iron-silicate separation requires the development of a planetary-sized calculation model. However, because of the huge disparity of scales between the cm-sized drops and the magma ocean, a direct

  13. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)

    1993-10-01

    The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

  14. Numerical modeling of coupled nitrification-denitrification in sediment perfusion cores from the hyporheic zone of the Shingobee River, MN

    Science.gov (United States)

    Sheibley, R.W.; Jackman, A.P.; Duff, J.H.; Triska, F.J.

    2003-01-01

    Nitrification and denitrification kinetics in sediment perfusion cores were numerically modeled and compared to experiments on cores from the Shingobee River, MN, USA. The experimental design incorporated mixing groundwater discharge with stream water penetration into the cores, which provided a well-defined, one-dimensional simulation of in situ hydrologic conditions. Ammonium (NH₄⁺) and nitrate (NO₃⁻) concentration gradients suggested the upper region of the cores supported coupled nitrification-denitrification, where groundwater-derived NH₄⁺ was first oxidized to NO₃⁻ then subsequently reduced via denitrification to N₂. Nitrification and denitrification were modeled using a Crank-Nicolson finite difference approximation to a one-dimensional advection-dispersion equation. Both processes were modeled using first-order reaction kinetics because substrate concentrations (NH₄⁺ and NO₃⁻) were much smaller than published Michaelis constants. Rate coefficients for nitrification and denitrification ranged from 0.2 to 15.8 h⁻¹ and 0.02 to 8.0 h⁻¹, respectively. The rate constants followed an Arrhenius relationship between 7.5 and 22 °C. Activation energies for nitrification and denitrification were 162 and 97.3 kJ/mol, respectively. Seasonal NH₄⁺ concentration patterns in the Shingobee River were accurately simulated from the relationship between perfusion core temperature and NH₄⁺ flux to the overlying water. The simulations suggest that NH₄⁺ in groundwater discharge is controlled by sediment nitrification that, consistent with its activation energy, is strongly temperature dependent. © 2003 Elsevier Ltd. All rights reserved.
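
The numerical scheme described above, first-order reaction kinetics inside a Crank-Nicolson advection-dispersion solver, can be sketched as follows. The Dirichlet boundary treatment and all parameter values here are illustrative assumptions rather than the authors' exact setup:

```python
import numpy as np

def crank_nicolson_step(c, dz, dt, D, v, k):
    """One Crank-Nicolson step for dC/dt = D*C_zz - v*C_z - k*C on a
    1-D grid, with fixed concentrations held at both ends (a stand-in
    for the groundwater-inflow / stream-water boundary conditions)."""
    n = len(c)
    r = D * dt / dz**2           # dispersion number
    a = v * dt / (2 * dz)        # advection number (central difference)
    A = np.zeros((n, n))         # implicit side: (I - L*dt/2)
    B = np.zeros((n, n))         # explicit side: (I + L*dt/2)
    for i in range(1, n - 1):
        lo = r / 2 + a / 2       # coefficient of c[i-1] in L*dt/2
        di = -r - k * dt / 2     # coefficient of c[i]
        hi = r / 2 - a / 2       # coefficient of c[i+1]
        A[i, i - 1], A[i, i], A[i, i + 1] = -lo, 1 - di, -hi
        B[i, i - 1], B[i, i], B[i, i + 1] = lo, 1 + di, hi
    A[0, 0] = A[-1, -1] = 1.0    # Dirichlet boundaries: hold end values
    B[0, 0] = B[-1, -1] = 1.0
    return np.linalg.solve(A, B @ c)
```

With v = 0 and a positive decay rate k, an interior concentration held between unit boundaries decays below 1 in a single step, while the boundary values stay pinned.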

  15. Analytic model for the complex effective index dispersion of metamaterial-cladding large-area hollow core fibers.

    Science.gov (United States)

    Zeisberger, Matthias; Tuniz, Alessandro; Schmidt, Markus A

    2016-09-05

    We present a mathematical model that allows interpreting the dispersion and attenuation of modes in hollow-core fibers (HCFs) on the basis of single-interface reflection, giving rise to analytic and semi-analytic expressions for the complex effective indices in the case where the core diameter is large and the guiding is based on reflection by a thin layer. Our model includes two core-size-independent reflection parameters and shows the universal inverse-cubed core-diameter dependence of the modal attenuation of HCFs. It substantially reduces simulation complexity and enables large-scale parameter sweeps, which we demonstrate on the example of a HCF with a highly anisotropic metallic nanowire cladding, resembling an indefinite metamaterial at high metal filling fractions. We reveal design rules that allow engineering modal discrimination and show that metamaterial HCFs can in principle have low losses at mid-IR wavelengths (...). The model can be applied to a great variety of HCFs with large core diameters and can be used for advanced HCF design and performance optimization, in particular with regard to dispersion engineering and modal discrimination.
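
The inverse-cubed core-diameter scaling stated above can be captured in a one-line helper (illustrative only; the lumped constant A stands in for the paper's two reflection parameters and is an assumed value, not one derived from the model):

```python
def modal_attenuation(core_diameter, A):
    """Universal large-core scaling reported in the abstract: modal
    attenuation falls as the inverse cube of the core diameter.
    A lumps the core-size-independent reflection parameters."""
    return A / core_diameter**3
```

One consequence worth noting: doubling the core diameter reduces the modal attenuation by a factor of eight, whatever the reflection parameters are.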

  16. Modeling X-ray Loops and EUV "Moss" in an Active Region Core

    CERN Document Server

    Winebarger, A R; Falconer, D A

    2007-01-01

    The soft X-ray intensity of loops in active region cores and the corresponding footpoint, or moss, intensity observed in the EUV remain steady over several hours of observation. The steadiness of the emission has prompted many to suggest that the heating in these loops must also be steady, though no direct comparison between the observed X-ray and EUV intensities and the steady heating solutions of the hydrodynamic equations has yet been made. In this paper, we perform these simulations and simultaneously model the X-ray and EUV moss intensities in one active region core with steady uniform heating. To perform this task, we introduce a new technique to constrain the model parameters using the measured EUV footpoint intensity to infer a heating rate. We find that a filling factor of 8% and loops that expand with height provide the best agreement with the intensity in two X-ray filters, though the simulated SXT Al12 intensity is 147% of the observed intensity and the SXT AlMg intensity is 80% of the observed intensity. Fr...

  17. Atlantic-Arctic exchange in a series of ocean model simulations (CORE-II)

    Science.gov (United States)

    Roth, Christina; Behrens, Erik; Biastoch, Arne

    2014-05-01

    In this study we aim to improve the understanding of exchange processes between the North Atlantic and the Arctic Ocean. The Nordic Seas form an important connector between these regions, receiving and modifying warm and saline Atlantic waters and providing the dense overflow that forms a backbone of the Atlantic Meridional Overturning Circulation (AMOC). Using a hierarchy of global ocean/sea-ice models, the specific role of the Nordic Seas, both in providing a feedback on the AMOC and as a modulator of the Atlantic water flowing into the Arctic Ocean, is examined. The model runs have been performed under the CORE-II protocol, in which the atmospheric forcing of the past 60 years is applied in a series of 5 consecutive iterations. During the course of this 300-year-long integration, the AMOC shows substantial changes, which are correlated with the water mass characteristics of the Denmark Strait overflow. Quantitative analyses using Lagrangian trajectories explore the impact of these trends on the Arctic Ocean through the Barents Sea and the Fram Strait.

  18. Implications for Post-processing Nucleosynthesis of Core-collapse Supernova Models with Lagrangian Particles

    Science.gov (United States)

    Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.; Lee, C. T.; Lentz, Eric J.; Messer, O. E. Bronson

    2017-07-01

    We investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only (α,γ) reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or to calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as the poor spatial resolution of the tracer particles, inconsistent thermodynamic evolution (including misestimation of expansion timescales), and the uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 M⊙ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.

  19. The Cusp/Core problem: supernovae feedback versus the baryonic clumps and dynamical friction model

    CERN Document Server

    Del Popolo, A

    2015-01-01

    In the present paper, we compare the predictions of two well-known mechanisms considered able to solve the cusp/core problem (a. supernova feedback; b. baryonic clumps-DM interaction) by comparing their theoretical predictions to recent observations of the inner slopes of galaxies with masses ranging from dSphs to normal spirals. We compare the α-Vrot and the α-M∗ relationships predicted by the two models with high-resolution data coming from Adams et al. (2014), Simon et al. (2005), LITTLE THINGS (Oh et al. 2014), THINGS dwarves (Oh et al. 2011a,b), THINGS spirals (Oh et al. 2014), Sculptor, Fornax and the Milky Way. The comparison of the theoretical predictions with the complete set of data shows that the two models perform similarly, while when we restrict the analysis to a smaller subsample of higher quality, we show that the method presented in this paper (baryonic clumps-DM interaction) performs better than the one based on supernova feedback. We also show that, contrarily to t...

  20. Intercomparison of the Charnock and CORE bulk wind stress formulations for coastal ocean modelling

    Directory of Open Access Journals (Sweden)

    J. M. Brown

    2013-03-01

    The accurate parameterisation of momentum and heat transfer across the air-sea interface is vital for realistic simulation of the atmosphere-ocean system. In many modelling applications accurate representation of the wind stress is required to numerically reproduce surge, coastal ocean circulation, surface waves, turbulence and mixing. Different formulations can be implemented and impact the accuracy of: the instantaneous and long-term residual circulation; the surface mixed layer; and the generation of wave-surge conditions. This, in turn, affects predictions of storm impact, sediment pathways, and coastal resilience to climate change. The specific numerical formulation needs careful selection to ensure the accuracy of the simulation. Two wind stress formulae, widely used in the ocean circulation and the storm surge communities respectively, are studied with focus on an application to the NW region of the UK. Model-observation validation is performed at two nearshore and one estuarine ADCP stations in Liverpool Bay, a hypertidal region of freshwater influence with vast intertidal areas. The period of study covers both calm and extreme conditions to fully test the robustness of the 10 m wind stress component of the Common Ocean Reference Experiment (CORE) bulk formulae and the Charnock relation. In this coastal application a realistic barotropic-baroclinic simulation of the circulation and surge elevation is set up, demonstrating that greater accuracy occurs when using the Charnock relation for surface wind stress.
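
As a sketch of how such surface-stress formulations work, the Charnock closure can be written as a short fixed-point iteration for the neutral drag coefficient (the constants α = 0.0185, κ = 0.4 and the air density used here are illustrative assumptions; real model implementations add stability and temperature corrections):

```python
import math

RHO_AIR = 1.225   # air density, kg/m^3 (assumed)
KAPPA = 0.4       # von Karman constant
G = 9.81          # gravitational acceleration, m/s^2

def charnock_drag(u10, alpha=0.0185, n_iter=30):
    """Neutral 10 m drag coefficient from the Charnock relation
    z0 = alpha * u_*^2 / g, combined with the neutral log law
    Cd = (kappa / ln(10/z0))^2 and solved by fixed-point iteration."""
    cd = 1.2e-3                              # first guess
    for _ in range(n_iter):
        ustar = math.sqrt(cd) * u10          # friction velocity
        z0 = alpha * ustar**2 / G            # Charnock roughness length
        cd = (KAPPA / math.log(10.0 / z0))**2
    return cd

def wind_stress(u10, cd):
    """Surface wind stress magnitude tau = rho_air * Cd * U10^2 (N/m^2)."""
    return RHO_AIR * cd * u10**2
```

A feature that distinguishes Charnock-type closures from a constant drag coefficient is visible directly: the drag coefficient itself grows with wind speed, because stronger winds roughen the sea surface.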

  1. Complementing mutations in core binding factor leukemias: from mouse models to clinical applications.

    Science.gov (United States)

    Müller, A M S; Duque, J; Shizuru, J A; Lübbert, M

    2008-10-02

    A great proportion of acute myeloid leukemias (AMLs) display cytogenetic abnormalities, including chromosomal aberrations and/or submicroscopic mutations. These abnormalities significantly influence the prognosis of the disease. Hence, a thorough genetic work-up is an essential constituent of standard diagnostic procedures. Core binding factor (CBF) leukemias denote AMLs with chromosomal aberrations disrupting one of the CBF transcription factor genes; the most common examples are the translocation t(8;21) and the inversion inv(16), which result in the generation of the AML1-ETO and CBFbeta-MYH11 fusion proteins, respectively. However, in murine models, these alterations alone do not suffice to generate full-blown leukemia; rather, complementary events are required. In fact, a substantial proportion of primary CBF leukemias display additional activating mutations, mostly of the receptor tyrosine kinase (RTK) c-KIT. Awareness of the impact and prognostic relevance of these 'second hits' is increasing as a wider range of mutations is tested in clinical trials. Furthermore, novel agents targeting RTKs are emerging rapidly and entering therapeutic regimens. Here, we present a concise review of complementing mutations in CBF leukemias, including pathophysiology, mouse models, and clinical implications.

  2. Time-invariant component-based normalization for a simultaneous PET-MR scanner.

    Science.gov (United States)

    Belzunce, M A; Reader, A J

    2016-05-07

    Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which need to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out-of-date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations during a period of 230 d and we found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data.
Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this
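
The product form of the component-based model described above can be sketched as follows (a minimal illustration; the factor names and their arrangement are assumptions and do not reproduce the Biograph mMR's exact component set):

```python
import numpy as np

def lor_normalization(geom, crystal_eff, axial, deadtime=1.0):
    """Component-based model: the normalization factor of the line of
    response joining crystals i and j is a product of components, e.g.
    a geometric factor, the two intrinsic crystal efficiencies (the
    time-variant part), an axial factor and a dead-time factor.
    This factorization is illustrative, not a specific scanner's."""
    # Outer product pairs every crystal efficiency with every other.
    return geom * np.outer(crystal_eff, crystal_eff) * axial * deadtime
```

The split matters operationally: geom and axial can be treated as time-invariant and measured once, while crystal_eff is the piece a regular normalization scan refreshes; the abstract's finding is that in practice its fluctuations are small.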

  4. Modeling of the Reactor Core Isolation Cooling Response to Beyond Design Basis Operations - Interim Report

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cardoni, Jeffrey N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilson, Chisom Shawn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Morrow, Charles [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Osborn, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gauntt, Randall O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    Efforts are being pursued to develop and qualify a system-level model of a reactor core isolation cooling (RCIC) steam-turbine-driven pump. The model is being developed with the intent of employing it to inform the design of experimental configurations for full-scale RCIC testing. The model is expected to be especially valuable in sizing equipment needed in the testing. An additional intent is to use the model to understand more fully how RCIC apparently managed to operate far removed from its design envelope in the Fukushima Daiichi Unit 2 accident. RCIC modeling is proceeding along two avenues that are expected to complement each other well. The first avenue is the continued development of the system-level RCIC model that will serve in simulating a full reactor system or full experimental configuration of which a RCIC system is part. The model reasonably represents a RCIC system today, especially given design operating conditions, but lacks specifics that are likely important in representing the off-design conditions a RCIC system might experience in an emergency situation such as a loss of all electrical power. A known gap in the system model, for example, is the efficiency with which a flashing slug of water (as opposed to a concentrated jet of steam) could propel the rotating drive wheel of a RCIC turbine. To address this, the second avenue is being pursued, wherein computational fluid dynamics (CFD) analyses of such a jet are being carried out. The results of the CFD analyses will thus complement and inform the system modeling. The system modeling will, in turn, complement the CFD analysis by providing the system information needed to impose appropriate boundary conditions on the CFD simulations. The system model will be used to inform the selection of configurations and equipment best suited to supporting planned RCIC experimental testing. Preliminary investigations with the RCIC model indicate that liquid water ingestion by the turbine

  5. Clinical data integration model. Core interoperability ontology for research using primary care data.

    Science.gov (United States)

    Ethier, J-F; Curcin, V; Barton, A; McGilchrist, M M; Bastiaens, H; Andreasson, A; Rossiter, J; Zhao, L; Arvanitis, T N; Taweel, A; Delaney, B C; Burgun, A

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Primary care data is the single richest source of routine health care data. However, its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. TRANSFoRm's general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European primary care. TRANSFoRm utilizes a unified structural/terminological interoperability framework based on the local-as-view mediation paradigm. Such an approach mandates that the global information model describe the domain of interest independently of the data sources to be explored. Following a requirement analysis process, no ontology focusing on primary care research was identified, and thus we designed a realist ontology based on Basic Formal Ontology to support our framework in collaboration with various terminologies used in primary care. The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm's use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation, which is based on CDIM. A unified mediation approach to semantic interoperability provides a

  6. Experiences modeling ocean circulation problems on a 30 node commodity cluster with 3840 GPU processor cores.

    Science.gov (United States)

    Hill, C.

    2008-12-01

    Low-cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating-point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more 32-bit floating-point, concurrently executing cores, (ii) can work with graphics memory that resides on the graphics-card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water-equations simulation targeting a cluster of 30 computers, each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel, and that the simulation's working set of data can fit into the graphics-card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes
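
The kernel/driver split described above can be mimicked in a short NumPy sketch (illustrative only: a periodic 1-D Lax-Friedrichs update stands in for the talk's actual CUDA kernels, and the function names are invented):

```python
import numpy as np

def swe_step_kernel(h, hu, dx, dt, g=9.81):
    """One 1-D shallow-water update (Lax-Friedrichs, periodic), standing
    in for a GPU kernel: it only touches the arrays it is handed, so in
    a CUDA port those arrays can stay resident in graphics memory
    between calls."""
    f_h = hu                               # mass flux
    f_hu = hu**2 / h + 0.5 * g * h**2      # momentum flux
    def lf(q, f):
        # q_new[i] = (q[i-1]+q[i+1])/2 - dt/(2 dx) * (f[i+1]-f[i-1])
        return 0.5 * (np.roll(q, 1) + np.roll(q, -1)) \
             - 0.5 * dt / dx * (np.roll(f, -1) - np.roll(f, 1))
    return lf(h, f_h), lf(hu, f_hu)

def driver(h, hu, dx, dt, nsteps):
    """Supervisory loop: advances the state repeatedly without copying
    it back until the end, mirroring the talk's strategy of minimizing
    host/device traffic."""
    for _ in range(nsteps):
        h, hu = swe_step_kernel(h, hu, dx, dt)
    return h, hu
```

Because the periodic flux differences telescope, the scheme conserves total mass exactly (up to floating-point rounding), which makes a convenient sanity check for any port of the kernel.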

  7. Pore - to - Core Modeling of Soil Organic Matter Decomposition in 3D Soil Structures

    Science.gov (United States)

    Falconer, R. E.; Battaia, G.; Baveye, P.; Otten, W.

    2013-12-01

    There is a growing body of literature supporting the need for microbial contributions to be considered explicitly in carbon-climate models. There is also overwhelming evidence that physical protection within aggregates can play a significant role in organic matter dynamics. Yet current models of soil organic matter dynamics divide soil organic matter into conceptual pools with distinct turnover times, assuming that a combination of biochemical and physical properties controls decay, without explicit description. Albeit robust in their applications, such models cannot account for changes in soil structure or microbial populations, or accurately predict the effects of wetness or priming. A spatially explicit model is presented that accounts for microbial dynamics and physical processes, permitting consideration of the heterogeneity of the physical and chemical microenvironments at scales relevant for microbes. Exemplified for fungi, we investigate how micro-scale processes manifest at the core scale, with particular emphasis on the evolution of CO2 and the biomass distribution. The microbial model is based upon previous work (Falconer et al., 2012) and includes the following processes: uptake, translocation, recycling, enzyme production, growth, spread and respiration. The model is parameterised through a combination of literature data and parameter estimation (Cazelles et al., 2012). The carbon model comprises two pools: particulate organic matter, which through enzymatic activity is converted into dissolved organic matter. The microbial and carbon dynamics occur within a 3D soil structure obtained by X-ray CT. We show that CO2 is affected not only by the amount of carbon in the soil but also by microbial dynamics, soil structure and the spatial distribution of OM. The same amount of OM can result in substantially different respiration rates, with surprisingly more CO2 with increased clustering of OM. We can explain this from the colony dynamics, production of enzymes and
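
The two-pool carbon scheme described above can be sketched as a minimal explicit (well-mixed, non-spatial) update; the rate constants, the carbon-use-efficiency split and the function name are assumptions, not the paper's parameterisation:

```python
def carbon_pools_step(pom, dom, biomass, dt, k_enz=0.1, k_up=0.2, cue=0.5):
    """Minimal two-pool sketch: enzymatic activity converts particulate
    organic matter (POM) to dissolved organic matter (DOM); microbial
    uptake of DOM is split by a carbon-use efficiency `cue` into new
    biomass and respired CO2. Returns updated pools and the CO2 flux."""
    depoly = k_enz * biomass * pom * dt   # enzymatic depolymerisation
    uptake = k_up * biomass * dom * dt    # microbial uptake of DOM
    co2 = (1.0 - cue) * uptake            # respired fraction
    return (pom - depoly,                 # POM shrinks
            dom + depoly - uptake,        # DOM gains from POM, loses to uptake
            biomass + cue * uptake,       # growth
            co2)                          # respiration this step
```

By construction the scheme is carbon-conserving: what leaves the three pools in a step is exactly the respired CO2, which is the balance the spatially explicit model must also satisfy at the core scale.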

  8. Collecting signatures to model latency tolerance in high-level simulations of microthreaded cores

    NARCIS (Netherlands)

    Irfan Uddin, M.; Jesshope, C.R.; van Tol, M.W.; Poss, R.

    2012-01-01

    The current many-core architectures are generally evaluated by detailed emulation with a cycle-accurate simulation of the execution time. However, this detailed simulation of the architecture makes the evaluation of large programs very slow. Since the focus in many-core architecture is shifting…

  9. Combustion and Energy Transfer Experiments: A Laboratory Model for Linking Core Concepts across the Science Curriculum

    Science.gov (United States)

    Barreto, Jose C.; Dubetz, Terry A.; Schmidt, Diane L.; Isern, Sharon; Beatty, Thomas; Brown, David W.; Gillman, Edward; Alberte, Randall S.; Egiebor, Nosa O.

    2007-01-01

    Core concepts can be integrated throughout lower-division science and engineering courses by using a series of related, cross-referenced laboratory experiments. Starting with butane combustion in chemistry, the authors expanded the underlying core concepts of energy transfer into laboratories designed for biology, physics, and engineering. This…

  10. Hoyle state and rotational features in Carbon-12 within a no-core shell-model framework

    Energy Technology Data Exchange (ETDEWEB)

    Dreyfuss, Alison C., E-mail: adreyf1@lsu.edu [Keene State College, Keene, NH 03435 (United States); Launey, Kristina D.; Dytrych, Tomáš; Draayer, Jerry P. [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803 (United States); Bahri, Chairul [Department of Physics, University of Notre Dame, Notre Dame, IN 46556-5670 (United States)

    2013-12-18

    By using only a fraction of the model space extended beyond current no-core shell-model limits and a many-nucleon interaction with a single parameter, we gain additional insight within a symmetry-guided shell-model framework, into the many-body dynamics that gives rise to the ground state rotational band together with phenomena tied to alpha-clustering substructures in the low-lying states in {sup 12}C, and in particular, the challenging Hoyle state and its first 2{sup +} and 4{sup +} excitations. For these states, we offer a novel perspective emerging out of no-core shell-model considerations, including a discussion of associated nuclear deformation and matter radii. This, in turn, provides guidance for ab initio shell models by informing key features of nuclear structure and the interaction.

  11. Effects of different per translational kinetics on the dynamics of a core circadian clock model.

    Directory of Open Access Journals (Sweden)

    Paula S Nieto

    Full Text Available Living beings display self-sustained daily rhythms in multiple biological processes, which persist in the absence of external cues since they are generated by endogenous circadian clocks. The period (per) gene is a central player within the core molecular mechanism for keeping circadian time in most animals. Recently, the modulation of PER translation has been reported, both in mammals and flies, suggesting that translational regulation of clock components is important for proper clock gene expression and molecular clock performance. Because translational regulation ultimately implies changes in the kinetics of translation and, therefore, in the circadian clock dynamics, we sought to study how and to what extent the molecular clock dynamics is affected by the kinetics of PER translation. With this objective, we used a minimal mathematical model of the molecular circadian clock to qualitatively characterize the dynamical changes derived from kinetically different PER translational mechanisms. We found that the emergence of self-sustained oscillations with characteristic period, amplitude, and phase lag (time delay between per mRNA and protein expression) depends on the kinetic parameters related to PER translation. Interestingly, under certain conditions, a PER translation mechanism with saturable kinetics introduces longer time delays than a mechanism ruled by first-order kinetics. In addition, the kinetic law of PER translation significantly changed the sensitivity of our model to parameters related to the synthesis and degradation of per mRNA and to PER degradation. Lastly, we found a set of parameters, with realistic values, for which our model reproduces some experimental results recently reported for Drosophila melanogaster, and we present some predictions derived from our analysis.
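
The kinetic distinction at the heart of this abstract can be sketched in a few lines. The rate laws below are generic first-order and Michaelis-Menten-like forms; the parameter values are illustrative assumptions, not the paper's fitted values.

```python
# First-order vs. saturable (Michaelis-Menten-like) PER translation laws.
# k, vmax and K are illustrative values, not taken from the paper's model.

def first_order(m, k=1.0):
    """Translation flux proportional to the per-mRNA level m."""
    return k * m

def saturable(m, vmax=1.0, K=1.0):
    """Saturable flux: ~ (vmax/K)*m for m << K, plateaus at vmax for m >> K."""
    return vmax * m / (K + m)

# With vmax/K chosen equal to k, the two laws agree at low mRNA levels
# but diverge strongly once the translation machinery saturates:
low = (first_order(0.01), saturable(0.01))     # nearly equal
high = (first_order(100.0), saturable(100.0))  # linear growth vs. plateau
```

In a full clock model these fluxes would feed the dPER/dt equation; as the abstract notes, which law is used changes the phase lag between per mRNA and PER protein and the model's parameter sensitivities.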

  12. Developing Fully Coupled Dynamical Reactor Core Isolation System Models in RELAP-7 for Extended Station Black-Out Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Ling Zou; Hongbin Zhang; David Andrs; Richard Martineau

    2014-04-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup water to the reactor vessel for core cooling when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. It was one of the very few safety systems still available during the Fukushima Daiichi accidents after the tsunamis hit the plants, and it successfully delayed core meltdown for a few days in Units 2 and 3. Therefore, detailed models of RCIC system components are indispensable for understanding extended station black-out (SBO) accidents in BWRs. As part of the effort to develop the new-generation reactor system safety analysis code RELAP-7, the major components needed to simulate the RCIC system have been developed. This paper describes the models for those components, such as the turbine, pump, and wet well. Selected individual component test simulations and a simplified SBO simulation up to, but not including, core damage are presented. The successful implementation of the simplified RCIC and wet well models paves the way to further improving the models for safety analysis by including more detailed physical processes in the near future.

  13. Shear-lag model of diffusion-induced buckling of core-shell nanowires

    Science.gov (United States)

    Li, Yong; Zhang, Kai; Zheng, Bailin; Yang, Fuqian

    2016-07-01

    The lithiation and de-lithiation during the electrochemical cycling of lithium-ion batteries (LIBs) can introduce local deformation in the active materials of electrodes, resulting in the evolution of local stress and strain in the active materials. Understanding the structural degradation associated with lithiation-induced deformation in the active materials is one of the important steps towards structural optimization of the active materials used in LIBs. There are various degradation modes, including swelling, cracking, and buckling, especially for the nanowires and nanorods used in LIBs. In this work, a shear-lag model and the theory of diffusion-induced stress are used to investigate diffusion-induced buckling of core-shell nanowires during lithiation. The critical load for the onset of buckling of a nanowire decreases as the nanowire length increases. The larger the surface current density, the shorter the time to reach the critical load for the onset of buckling of the nanowire.
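
The length dependence of the critical load quoted above follows the classical Euler column result. The sketch below treats the core-shell wire as a composite elastic column; the pinned-pinned boundary condition and the material values in the usage lines are assumptions for illustration, not the paper's shear-lag model.

```python
import math

def flexural_rigidity(E_core, E_shell, r_core, r_outer):
    """Composite bending stiffness EI of a core-shell circular cross-section."""
    I_core = math.pi * r_core**4 / 4.0
    I_shell = math.pi * (r_outer**4 - r_core**4) / 4.0
    return E_core * I_core + E_shell * I_shell

def euler_critical_load(EI, length):
    """Euler buckling load of a pinned-pinned column: P_cr = pi^2 * EI / L^2."""
    return math.pi**2 * EI / length**2

# Illustrative numbers: a 40 nm silicon core (E ~ 160 GPa) with a stiffer
# 10 nm shell (E ~ 300 GPa):
EI = flexural_rigidity(160e9, 300e9, 40e-9, 50e-9)
p1 = euler_critical_load(EI, 1e-6)  # 1 um wire
p2 = euler_critical_load(EI, 2e-6)  # doubling the length quarters P_cr
```

The 1/L² scaling is why longer nanowires reach their critical load sooner during lithiation.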

  14. Can Core Flows inferred from Geomagnetic Field Models explain the Earth's Dynamo?

    CERN Document Server

    Schaeffer, Nathanaël; Pais, Maria Alexandra

    2015-01-01

    We test the ability of velocity fields inferred from geomagnetic secular variation data to produce the global magnetic field of the Earth. Our kinematic dynamo calculations use quasi-geostrophic (QG) flows inverted from geomagnetic field models which, as such, incorporate flow structures that are Earth-like and may be important for the geodynamo. Furthermore, the QG hypothesis allows straightforward prolongation of the flow from the core surface to the bulk. As expected from previous studies, we check that a simple quasi-geostrophic flow is not able to sustain the magnetic field against ohmic decay. Additional complexity is then introduced in the flow, inspired by the action of the Lorentz force. Indeed, on centennial timescales, the Lorentz force can balance the Coriolis force and strict quasi-geostrophy may not be the best ansatz. When the columnar flow is modified to account for the action of the Lorentz force, magnetic field is generated for Elsasser numbers larger than 0.25 and magnetic Reynolds numbers l...

  15. Analysis of the Effect of Successful Core Banking System (CBS) Implementation Based on the DeLone and McLean Model

    Directory of Open Access Journals (Sweden)

    Mardiana Andarwati

    2016-10-01

    Full Text Available Abstract: A Core Banking System (CBS) is a banking application system implemented to improve customer service, but a bank may not know whether its CBS can be considered successfully implemented. The purpose of this research is to determine the successful implementation of CBS using the DeLone and McLean IS Success model, which comprises six variables: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. Hypotheses were tested using Partial Least Squares (PLS). The results of this study are that the effect of system quality on intensity of use and on user satisfaction is positive and significant; the effect of information quality on intensity of use is positive and significant, while its effect on user satisfaction is negative and not significant; the effect of intensity of use on individual impact is positive and significant; the effect of user satisfaction on individual impact is positive and significant; the effect of intensity of use on user satisfaction is positive and significant; and the effect of individual impact on organizational impact is positive and significant. Thus, individual impact shows the strongest relationship with organizational impact.

  16. Structure of the Particle-Hole Amplitudes in No-core Shell Model Wave Functions

    CERN Document Server

    Hayes, A C

    2009-01-01

    We study the structure of the no-core shell model wave functions for $^6$Li and $^{12}$C by investigating the ground state and first excited state electron scattering charge form factors. In both nuclei, large particle-hole ($ph$) amplitudes in the wave functions appear with the opposite sign to that needed to reproduce the shape of the $(e,e')$ form factors, the charge radii, and the B(E2) values for the lowest two states. The difference in sign appears to arise mainly from the monopole $\\Delta\\hbar\\omega=2$ matrix elements of the kinetic and potential energy (T+V) that transform under the harmonic oscillator SU(3) symmetries as $(\\lambda,\\mu)=(2,0)$. These are difficult to determine self-consistently, but they have a strong effect on the structure of the low-lying states and on the giant monopole and quadrupole resonances. The Lee-Suzuki transformation, used to account for the restricted nature of the space in terms of an effective interaction, introduces large higher-order $\\Delta\\hbar\\omega=n, n>$2, $ph$ ...

  17. Lipid-Core Nanocapsules Improved Antiedematogenic Activity of Tacrolimus in Adjuvant-Induced Arthritis Model.

    Science.gov (United States)

    Friedrich, Rossana B; Coradini, Karine; Fonseca, Francisco N; Guterres, Silvia S; Beck, Ruy C R; Pohlmann, Adriana R

    2016-02-01

    Despite significant technological advances, rheumatoid arthritis remains an incurable disease with great impact on patients' quality of life. We studied the encapsulation of tacrolimus in lipid-core nanocapsules (TAC-LNC) as a strategy to enhance its systemic anti-arthritic properties. TAC-LNC presented a unimodal particle size distribution with a z-average diameter of 212 ± 11, a drug content close to the theoretical value (0.80 mg mL(-1)), and an encapsulation efficiency of 99.43%. An in vitro sustained release was determined for TAC-LNC, with an anomalous transport mechanism (n = 0.61). In vivo studies using an arthritis model induced by Complete Freund's Adjuvant demonstrated that the animals treated with TAC-LNC presented a significantly greater inhibition of paw oedema after intraperitoneal administration. Furthermore, the encapsulation of TAC in lipid-core nanocapsules was potentially able to prevent hyperglycemia in the animals. In conclusion, TAC-LNC was prepared with a 100% yield of nanoscopic particles having satisfactory characteristics for systemic use. This formulation represents a promising strategy for the treatment of rheumatoid arthritis in the near future.

  18. Two-state Bose-Hubbard model in the hard-core boson limit

    Directory of Open Access Journals (Sweden)

    O.V. Velychk

    2011-03-01

    Full Text Available Phase transition into the phase with a Bose-Einstein (BE) condensate in the two-band Bose-Hubbard model with particle hopping in the excited band only is investigated. Instability connected with such a transition (which appears at excitation energies δ0|, where |t'0| is the particle hopping parameter) is considered. The re-entrant behaviour of spinodals is revealed in the hard-core boson limit in the region of positive values of the chemical potential. It is found that the order of the phase transition undergoes a change in this case and becomes first order; the re-entrant transition into the normal phase does not take place in reality. First-order phase transitions also exist at negative values of δ (under the condition δ>δcrit≈ −0.12|t'0|). At μ0|, μ phase diagrams are built and the locations of tricritical points are established. The conditions are found at which the separation into the normal phase and the phase with the BE condensate takes place.

  19. Comparison of microencapsulation properties of spruce galactoglucomannans and arabic gum using a model hydrophobic core compound.

    Science.gov (United States)

    Laine, Pia; Lampi, Anna-Maija; Peura, Marko; Kansikas, Jarno; Mikkonen, Kirsi; Willför, Stefan; Tenkanen, Maija; Jouppila, Kirsi

    2010-01-27

    In the present study, microencapsulation and the physical properties of spruce (Picea abies) O-acetyl-galactoglucomannans (GGM) were investigated and compared to those of arabic gum (AG). Microcapsules were obtained by freeze-drying oil-in-water emulsions containing 10 wt % capsule materials (AG, GGM, or a 1:1 mixture of GGM-AG) and 2 wt % alpha-tocopherol (a model hydrophobic core compound that oxidizes easily). Microcapsules were stored at relative humidity (RH) of 0, 33, and 66% at 25 degrees C for different time periods, and their alpha-tocopherol content was determined by HPLC. X-ray microtomography analyses showed that the freeze-dried emulsions of GGM had the highest and those of AG the lowest degree of porosity. According to X-ray diffraction patterns, both freeze-dried AG and GGM were amorphous in nature. The storage test showed that anhydrous AG microcapsules had higher alpha-tocopherol content than GGM-containing capsules, whereas under 33 and 66% RH conditions GGM was superior with respect to the retention of alpha-tocopherol. The good protective ability of GGM was related to its ability to form thicker microcapsule walls and its better physical stability compared to AG. The glass transition temperature of AG was close to the storage temperature (25 degrees C) at an RH of 66%, which explains the remarkable losses of alpha-tocopherol in the microcapsules under those conditions.

  20. Comparison of Measured and Modelled Hydraulic Conductivities of Fractured Sandstone Cores

    Science.gov (United States)

    Baraka-Lokmane, S.; Liedl, R.; Teutsch, G.

    - A new method for characterising the detailed fracture geometry in sandstone cores is presented. This method is based on the impregnation of samples with coloured resin, without significant disturbance of the fractures. The fractures are made clearly visible by the resin, thus allowing the fracture geometry to be examined digitally. In order to model the bulk hydraulic conductivity, the samples are sectioned serially perpendicular to the flow direction. The hydraulic conductivity of individual sections is estimated by summing the contribution of the matrix and each fracture from the digital data. Finally, the hydraulic conductivity of the bulk sample is estimated by a harmonic average in series along the flow path. Results of this geometrical method are compared with actual physical conductivity values measured from fluid experiments carried out prior to sectioning. The predicted conductivity from the fracture geometry parameters (e.g., fracture aperture, fracture width, fracture length and fracture relative roughness all measured using an optical method) is in good agreement with the independent physical measurements, thereby validating the approach.
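
The two averaging steps described above, parallel (area-weighted) within each section and harmonic in series along the flow path, can be sketched as follows; the function names and sample values are illustrative, not taken from the paper.

```python
def section_conductivity(k_matrix, area_matrix, fractures):
    """Parallel (area-weighted) conductivity of one cross-section.

    `fractures` is a list of (conductivity, area) pairs; matrix and
    fractures conduct side by side, so their fluxes add.
    """
    total_area = area_matrix + sum(a for _, a in fractures)
    weighted = k_matrix * area_matrix + sum(k * a for k, a in fractures)
    return weighted / total_area

def bulk_conductivity(section_ks, section_lengths):
    """Harmonic (series) average of section conductivities along the flow path."""
    total_length = sum(section_lengths)
    return total_length / sum(l / k for k, l in zip(section_ks, section_lengths))

# A high-conductivity fracture dominates its own section...
k1 = section_conductivity(1.0, 9.0, [(100.0, 1.0)])
# ...but a tight section downstream throttles the series average:
k_bulk = bulk_conductivity([k1, 1.0], [1.0, 1.0])
```

The harmonic average is what makes the bulk value sensitive to the least conductive section, consistent with flow perpendicular to the serial cuts.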

  1. IceChrono v1: a probabilistic model to compute a common and optimal chronology for several ice cores

    Directory of Open Access Journals (Sweden)

    F. Parrenin

    2014-10-01

    Full Text Available Polar ice cores provide exceptional archives of past environmental conditions. Dating the ice and the air bubbles/hydrates in ice cores is complicated, since it involves different dating methods: modelling of the sedimentation process (accumulation of snow at the surface, densification of snow into ice with air trapping, and ice flow), use of dated horizons by comparison with other well-dated targets (other dated paleo-archives or calculated variations of Earth's orbital parameters), use of dated depth intervals, use of Δdepth information (the depth shift between synchronous events in the ice matrix and its air/hydrate content), and use of stratigraphic links between ice cores (ice-ice, air-air or mixed ice-air links). Here I propose IceChrono v1, a new probabilistic model to combine these different kinds of chronological information to obtain a common and optimized chronology for several ice cores, as well as its confidence interval. It is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles, and the vertical thinning function. IceChrono is similar in scope to the Datice model, but differs in its mathematical, numerical and programming aspects. I apply IceChrono to two dating experiments. The first is similar to the AICC2012 experiment, and I find results similar to Datice within a few centuries, which is a confirmation of both the IceChrono and Datice codes. The second experiment involves only the Berkner ice core in Antarctica, and I produce the first dating of this ice core. IceChrono v1 is freely available under the GPL v3 open source license.
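
Two of the three inverted quantities, accumulation rate and thinning function, combine into an ice age scale through a simple forward calculation. The sketch below shows only that deterministic forward step, not IceChrono's probabilistic inversion, and the layer values are invented for illustration.

```python
def ice_age_scale(layer_thickness, accumulation, thinning):
    """Cumulative age (years) at the bottom of each layer.

    A layer of present-day thickness dz (m ice equivalent) was deposited
    at rate a (m ice/yr) and has since been thinned by a factor tau, so
    it represents dz / (a * tau) years of deposition.
    """
    ages, age = [], 0.0
    for dz, a, tau in zip(layer_thickness, accumulation, thinning):
        age += dz / (a * tau)
        ages.append(age)
    return ages

# Ten 1 m layers, constant accumulation of 0.05 m ice/yr, and thinning
# decreasing with depth from 1.0 to 0.55:
taus = [1.0 - 0.05 * i for i in range(10)]
ages = ice_age_scale([1.0] * 10, [0.05] * 10, taus)
```

Because the thinning factor shrinks with depth, equal depth increments span progressively more years, which is why deep chronologies are so sensitive to the inverted thinning function.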

  2. Using a Differential Emission Measure and Density Measurements in an Active Region Core to Test a Steady Heating Model

    Science.gov (United States)

    Winebarger, Amy R.; Schmelz, Joan T.; Warren, Harry P.; Saar, Steve H.; Kashyap, Vinay L.

    2011-10-01

    The frequency of heating events in the corona is an important constraint on the coronal heating mechanisms. Observations indicate that the intensities and velocities measured in active region cores are effectively steady, suggesting that heating events occur rapidly enough to keep high-temperature active region loops close to equilibrium. In this paper, we couple observations of active region (AR) 10955 made with the X-Ray Telescope and the EUV Imaging Spectrometer on board Hinode to test a simple steady heating model. First we calculate the differential emission measure (DEM) of the apex region of the loops in the active region core. We find the DEM to be broad and peaked around 3 MK. We then determine the densities in the corresponding footpoint regions. Using potential field extrapolations to approximate the loop lengths and the density-sensitive line ratios to infer the magnitude of the heating, we build a steady heating model for the active region core and find that we can match the general properties of the observed DEM for the temperature range 6.3 < log T < 6.7. The model accounts for the base pressure, loop length, and distribution of apex temperatures of the core loops. We find that the density-sensitive spectral line intensities and the bulk of the hot emission in the active region core are consistent with steady heating. We also find, however, that the steady heating model cannot address the emission observed at lower temperatures. This emission may be due to foreground or background structures, or may indicate that the heating in the core is more complicated. Different heating scenarios must be tested to determine if they have the same level of agreement.
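
For a statically heated loop, the apex temperature, base pressure and loop length are linked by the standard RTV scaling law. The sketch below uses that textbook relation (in cgs units) to show the kind of calculation involved; it is not the authors' own hydrodynamic model, and the input values are illustrative.

```python
def rtv_apex_temperature(pressure, half_length):
    """RTV scaling law for a static coronal loop:
    T_max ~ 1.4e3 * (p * L)**(1/3),
    with p in dyn cm^-2, loop half-length L in cm, and T_max in K."""
    return 1.4e3 * (pressure * half_length) ** (1.0 / 3.0)

# An illustrative p = 1 dyn cm^-2 and L = 1e9 cm give T_max ~ 1.4 MK,
# i.e. log T ~ 6.15, comparable to active-region-core temperatures:
t_apex = rtv_apex_temperature(1.0, 1e9)
```

Inverting such a scaling is how measured footpoint densities (hence pressures) and extrapolated loop lengths constrain the apex temperature distribution of a steady heating model.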

  3. A new multi-tracer transport scheme for the dynamical core of NCAR's Community Atmosphere Model

    Science.gov (United States)

    Erath, C.

    2012-04-01

    The integration of a conservative semi-Lagrangian multi-tracer transport scheme (CSLAM) into NCAR's High-Order Method Modeling Environment (HOMME) is considered here. HOMME is a highly scalable atmospheric modeling framework, and its current horizontal discretization relies on spectral element (SE) and/or discontinuous Galerkin (DG) methods on the cubed-sphere. It is one dynamical core of NCAR's Community Atmosphere Model (CAM). The main advantage of CSLAM is that the upstream cell (trajectory) information and the computation of integral weights can be reused for each additional tracer. This makes CSLAM particularly interesting for global atmospheric modeling with a growing number of tracers, e.g., more than 100 tracers for the chemistry version of CAM. Designing an algorithm for CSLAM in HOMME that targets multiple processors on the cubed-sphere grid is a challenging task. HOMME runs on an element ansatz on the six cube faces. Inside these elements we create an Eulerian finite volume grid of equiangular gnomonic type, which represents the arrival grid in the scheme. But CSLAM relies on backward trajectories, which entail a departure grid. The departure and arrival grids do not necessarily lie on the same element, and certainly not on the same cube face. The reconstruction for higher-order modeling also needs a patch of tracer values that extends beyond the element. Here we consider a third-order reconstruction method. We therefore introduce a halo for the tracer values in the cell centers of a cube element. The size of this halo depends on the Courant number (CFL condition) and the reconstruction type. Note that for a third-order scheme and a suitable CFL number, communication can be limited to one exchange per time step. This data structure allows us to treat an element with its halo as one task, where we have to be extra careful with elements that share a cube edge, for projection and orientation reasons.
    We stress that the reconstruction coefficients for elements…
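
A sketch of how such a halo could be sized. The specific formula here (ceiling of the maximum Courant number plus the reconstruction stencil half-width) is an assumption for illustration, not HOMME's actual implementation.

```python
import math

def halo_width(max_courant, reconstruction_order):
    """Cells of halo needed around an element: enough to reach the most
    distant departure point (ceiling of the max Courant number, measured
    in cells) plus the half-width of the reconstruction stencil (a
    third-order reconstruction needs one extra cell on each side).
    This sizing rule is a plausible sketch, not HOMME's exact formula."""
    stencil_half = (reconstruction_order - 1) // 2
    return math.ceil(max_courant) + stencil_half

# With CFL <= 1 and third-order reconstruction, a 2-cell halo suffices,
# so a single exchange per time step can fill it:
width = halo_width(0.9, 3)
```

This is consistent with the abstract's remark that, for a third-order scheme under the CFL condition, communication can be limited to one exchange per time step.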

  4. Characterization and temporal development of cores in a mouse model of malignant hyperthermia.

    Science.gov (United States)

    Boncompagni, Simona; Rossi, Ann E; Micaroni, Massimo; Hamilton, Susan L; Dirksen, Robert T; Franzini-Armstrong, Clara; Protasi, Feliciano

    2009-12-22

    Malignant hyperthermia (MH) and central core disease are related skeletal muscle diseases often linked to mutations in the type 1 ryanodine receptor (RYR1) gene, encoding for the Ca(2+) release channel of the sarcoplasmic reticulum (SR). In humans, the Y522S RYR1 mutation is associated with malignant hyperthermia susceptibility (MHS) and the presence in skeletal muscle fibers of core regions that lack mitochondria. In heterozygous Y522S knock-in mice (RYR1(Y522S/WT)), the mutation causes SR Ca(2+) leak and MHS. Here, we identified mitochondrial-deficient core regions in skeletal muscle fibers from RYR1(Y522S/WT) knock-in mice and characterized the structural and temporal aspects involved in their formation. Mitochondrial swelling/disruption, the initial detectable structural change observed in young-adult RYR1(Y522S/WT) mice (2 months), does not occur randomly but rather is confined to discrete areas termed presumptive cores. This localized mitochondrial damage is followed by local disruption/loss of nearby SR and transverse tubules, resulting in early cores (2-4 months) and small contracture cores characterized by extreme sarcomere shortening and lack of mitochondria. At later stages (1 year), contracture cores are extended, frequent, and accompanied by areas in which contractile elements are also severely compromised (unstructured cores). Based on these observations, we propose a possible series of events leading to core formation in skeletal muscle fibers of RYR1(Y522S/WT) mice: Initial mitochondrial/SR disruption in confined areas causes significant loss of local Ca(2+) sequestration that eventually results in the formation of contractures and progressive degradation of the contractile elements.

  5. The 57Fe hyperfine interactions in human liver ferritin and its iron-polymaltose analogues: the heterogeneous iron core model

    Science.gov (United States)

    Oshtrakh, M. I.; Alenkina, I. V.; Semionkin, V. A.

    2016-12-01

    Human liver ferritin and its iron-polymaltose pharmaceutical analogues Ferrum Lek, Maltofer® and Ferrifol® were studied using Mössbauer spectroscopy at 295 and 90 K. The Mössbauer spectra were fitted on the basis of a new model of heterogeneous iron core structure using five quadrupole doublets. These components were related to the corresponding more or less close-packed iron core layers/regions demonstrating some variations in the 57Fe hyperfine parameters for the studied samples.

  6. A Component-Based Study of the Effect of Diameter on Bond and Anchorage Characteristics of Blind-Bolted Connections.

    Directory of Open Access Journals (Sweden)

    Muhammad Nasir Amin

    Full Text Available Structural hollow sections are gaining worldwide importance due to their structural and architectural advantages over open steel sections. The main obstacle to their use is their connection with other structural members, since a bolt cannot be tightened from one side only; overcoming this difficulty gave birth to the concept of blind bolts. Blind bolts, being the practical solution to this connection hindrance, play a vital role in the use of hollow and concrete-filled hollow sections. Flowdrill, the Huck High-Strength Blind Bolt and the Lindapter Hollobolt are well-known commercially available blind bolts. Although the development of blind bolts has largely resolved this issue, the use of structural hollow sections remains limited to shear resistance. Therefore, a new modified version of the blind bolt, known as the "Extended Hollo-Bolt" (EHB), can, due to its enhanced capacity for bonding with concrete, overcome the low moment-resistance capacity associated with blind-bolted connections. The load transfer mechanism of this recently developed blind bolt remains unclear, however. This study uses a parametric approach to characterising the EHB, using diameter as the variable parameter. Stiffness and load-carrying capacity were evaluated at two different bolt sizes. To investigate the load transfer mechanism, a component-based study of the bond and anchorage characteristics was performed by breaking the EHB down into its components. The results of the study provide insight into the load transfer mechanism of the blind bolt in question. The proposed component-based model was validated by a spring model, through which the stiffness of the EHB was compared to that of its components combined. The combined stiffness of the components was found to be roughly equivalent to that of the EHB as a whole, validating the use of this component-based approach.
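
The spring-model validation amounts to combining component stiffnesses. The sketch below shows the generic series/parallel rules; the abstract does not specify the actual arrangement of the EHB components, so the usage example and its values are hypothetical.

```python
def series_stiffness(*ks):
    """Springs in series carry the same load: 1/k_total = sum(1/k_i)."""
    return 1.0 / sum(1.0 / k for k in ks)

def parallel_stiffness(*ks):
    """Springs in parallel share the same displacement: k_total = sum(k_i)."""
    return float(sum(ks))

# Hypothetical arrangement: bolt shank in series with the combined
# (parallel) stiffness of concrete bond and mechanical anchorage.
# The values (kN/mm) are invented for illustration.
k_ehb = series_stiffness(50.0, parallel_stiffness(20.0, 10.0))
```

The validation claim in the abstract corresponds to checking that a combination like this reproduces the stiffness measured for the assembled bolt.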

  7. Effect of direction-dependent diffusion coefficients on the accuracy of the diffusion model for LWR cores

    Energy Technology Data Exchange (ETDEWEB)

    Zerr, R. Joseph; Azmy, Yousry [The Pennsylvania State University, University Park, PA (United States); Ouisloumen, Mohamed [Westinghouse Electric Company, LLC, Monroeville, PA (United States)

    2008-07-01

    Studies have been performed to test for significant gains in core design computational accuracy with the added implementation of direction-dependent diffusion coefficients. The DRAGON code was employed to produce two-group homogeneous B{sub 1} diffusion coefficients and direction-dependent diffusion coefficients with the TIBERE module. A three-dimensional diffusion model of a mini-core was analyzed with the resulting cross section data sets to determine if the multiplication factor or node power was noticeably altered with the more accurate representation of neutronic behaviour in a high-void configuration. Results indicate that using direction-dependent diffusion coefficients homogenized over an entire assembly do not produce significant differences in the results compared to the B{sub 1} counterparts and are much more computationally expensive. Direction-dependent diffusion coefficients that are specific to smaller micro-regions may provide more noteworthy gains in the accuracy of core design computations. (authors)

  8. Structural glitches near the cores of red giants revealed by oscillations in g-mode period spacings from stellar models

    CERN Document Server

    Cunha, M S; Avelino, P P; Christensen-Dalsgaard, J; Townsend, R H D

    2015-01-01

    With recent advances in asteroseismology it is now possible to peer into the cores of red giants, potentially providing a way to study processes such as nuclear burning and mixing through their imprint as sharp structural variations -- glitches -- in the stellar cores. Here we show how such core glitches can affect the oscillations we observe in red giants. We derive an analytical expression describing the expected frequency pattern in the presence of a glitch. This formulation also accounts for the coupling between acoustic and gravity waves. From an extensive set of canonical stellar models we find glitch-induced variation in the period spacing and inertia of non-radial modes during several phases of red-giant evolution. Significant changes are seen in the appearance of mode amplitude and frequency patterns in asteroseismic diagrams such as the power spectrum and the échelle diagram. Interestingly, along the red-giant branch glitch-induced variation occurs only at the luminosity bump, potentially providin...

  9. Coupled core-SOL modelling of W contamination in H-mode JET plasmas with ITER-like wall

    Energy Technology Data Exchange (ETDEWEB)

    Parail, V., E-mail: Vassili.parail@ccfe.ac.uk [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Corrigan, G.; Da Silva Aresta Belo, P. [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); De La Luna, E. [Laboratorio Nacional de Fusion, Madrid (Spain); Harting, D. [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Koechl, F. [Atominstitut, TU Wien, 1020 Vienna (Austria); Koskela, T. [Aalto University, Department of Applied Physics, P.O. Box 14100, FIN-00076 Aalto (Finland); Meigs, A.; Militello-Asp, E.; Romanelli, M. [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Tsalas, M. [FOM Institute DIFFER, P.O. Box 1207, NL-3430 BE Nieuwegein (Netherlands)

    2015-08-15

    The influence of the ITER-like Wall (ILW), with its divertor target plates made of tungsten (W), on plasma performance in JET H-mode has been investigated since 2011 (see F. Romanelli and references therein). One of the key issues in discharges with a low level of D fuelling is the observed accumulation of W in the plasma core, which leads to a reduction in plasma performance. To study the interplay between W sputtering on the target plate, penetration of W through the SOL and edge transport barrier (ETB), and its further accumulation in the plasma core, predictive modelling was launched using the coupled 1.5D core and 2D SOL code JINTRAC (Romanelli, 2014; Cenacchi and Taroni, 1988; Taroni et al., 1992; Wiesen et al., 2006). Simulations reveal the important role of ELMs in W sputtering and plasma density control. Analyses also confirm the pivotal role played by the neo-classical pinch of heavy impurities within the ETB.

  10. A conceptual model for kimberlite emplacement by solitary interfacial mega-waves on the core mantle boundary

    Science.gov (United States)

    Sim, B. L.; Agterberg, F. P.

    2006-07-01

    If convection in the Earth's liquid outer core is disrupted, degrades to turbulence and begins to behave in a chaotic manner, it will destabilize the Earth's magnetic field and provide the seeds for kimberlite melts via turbulent jets of silicate-rich core material which invade the lower mantle. These (proto-) melts may then be captured by extreme-amplitude solitary nonlinear waves generated through interaction of the outer core surface with the base of the mantle. A pressure differential behind the wave front then provides a mechanism for the captured melt to ascend to the upper mantle and crust so quickly that emplacement may indirectly promote a type of impact fracture cone within the relatively brittle crust. These waves are very rare but of finite probability. The assumption of turbulence transmission between layers is justified using a simple three-layer liquid model. The core-derived melts eventually become frozen in place as localised topographic highs in the Mohorovicic discontinuity (Moho), or as deep-rooted intrusive events. The intrusion's final composition is a function of melt contamination by two separate sources: the core-contaminated mantle base and subducted Archean crust. The mega-wave hypothesis offers a plausible vehicle for early-stage emplacement of kimberlite pipes and explains the age association of diamondiferous kimberlites with magnetic reversals and tectonic plate rearrangements.

  11. Modeled and Measured Dynamics of a Composite Beam with Periodically Varying Foam Core

    Science.gov (United States)

    Cabell, Randolph H.; Cano, Roberto J.; Schiller, Noah H.; Roberts, Gary D.

    2012-01-01

    The dynamics of a sandwich beam with carbon fiber composite facesheets and foam core with periodic variations in material properties are studied. The purpose of the study is to compare finite element predictions with experimental measurements on fabricated beam specimens. For the study, three beams were fabricated: one with a compliant foam core, a second with a stiffer core, and a third with the two cores alternating down the length of the beam to create a periodic variation in properties. This periodic variation produces a bandgap in the frequency domain where vibrational energy does not readily propagate down the length of the beam. Mode shapes and natural frequencies are compared, as well as frequency responses from point force input to velocity response at the opposite end of the beam.

  12. Importance-truncated no-core shell model for fermionic many-body systems

    Energy Technology Data Exchange (ETDEWEB)

    Spies, Helena

    2017-03-15

    The exact solution of quantum mechanical many-body problems is only possible for a few particles. Therefore, numerical methods have been developed in the fields of quantum physics and quantum chemistry for larger particle numbers. Configuration Interaction (CI) methods or the No-Core Shell Model (NCSM) allow ab initio calculations for light and intermediate-mass nuclei, without resorting to phenomenology. An extension of the NCSM is the Importance-Truncated No-Core Shell Model (IT-NCSM), which uses an a priori selection of the most important basis states. The importance truncation was first developed and applied in quantum chemistry in the 1970s and later successfully applied to models of light and intermediate-mass nuclei. Other numerical methods for calculations of ultra-cold fermionic many-body systems are the Fixed-Node Diffusion Monte Carlo method (FN-DMC) and the stochastic variational approach with Correlated Gaussian basis functions (CG). Further methods used for many-body calculations include the Coupled-Cluster method and the Green's Function Monte Carlo (GFMC) method. In this thesis, we adopt the IT-NCSM for the calculation of ultra-cold Fermi gases at unitarity. Ultra-cold gases are dilute, strongly correlated systems, in which the average interparticle distance is much larger than the range of the interaction. Therefore, the detailed radial dependence of the potential is not resolved, and the potential can be replaced by an effective contact interaction. At low energy, s-wave scattering dominates and the interaction can be described by the s-wave scattering length. If the scattering length is small and negative, Cooper pairs are formed in the Bardeen-Cooper-Schrieffer (BCS) regime. If the scattering length is small and positive, these Cooper pairs become strongly bound molecules in a Bose-Einstein condensate (BEC). In between, for large scattering lengths, lies the unitary limit with universal properties. Calculations of the energy spectra

  13. Novel magnetic core materials impact modelling and analysis for minimization of RF heating loss

    Science.gov (United States)

    Ghosh, Bablu Kumar; Mohamad, Khairul Anuar; Saad, Ismail

    2016-02-01

    The eddy currents that exist in an RF transformer/inductor lead to the generation of noise/heat in the circuit and ultimately reduce the efficiency of the RF system. The eddy currents generated in the magnetic core of the inductor/transformer largely determine the power loss of the power-transfer process. The losses of high-frequency magnetic components are complicated, owing both to the variation of the eddy currents in the magnetic core and to the variation of the copper-winding reactance with frequency. The permeability and permittivity of the core material are also related to these losses, which are linked to the operating frequency. This paper mainly discusses the selection of novel magnetic core materials for the minimization of eddy power loss, using an empirical-equation approach and the impedance-plane simulation software TEDDY V1.2. The eddy power loss is evaluated while varying the operating frequency from 100 kHz to 1 GHz and the magnetic flux density from 0 to 2 Tesla. The nanocrystalline core material is found to be the best core material owing to its low eddy power loss at low conductivity over the optimum band of frequencies.
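The conductivity dependence highlighted above can be illustrated with the classical thin-lamination eddy-loss formula, P = π²B²f²d²/(6ρ). This is a textbook approximation, not the paper's empirical equation or its TEDDY V1.2 simulation, and the material values below are assumed round numbers:

```python
import math

def eddy_loss_density(b_peak, freq_hz, thickness_m, resistivity_ohm_m):
    """Classical eddy-current loss per unit volume (W/m^3) of a thin
    lamination under sinusoidal flux, ignoring skin effect:
    P = pi^2 * B^2 * f^2 * d^2 / (6 * rho)."""
    return (math.pi ** 2 * b_peak ** 2 * freq_hz ** 2 * thickness_m ** 2) \
        / (6.0 * resistivity_ohm_m)

# Low-conductivity (high-resistivity) core vs. a metallic one at 1 MHz.
p_low_sigma = eddy_loss_density(0.2, 1e6, 1e-3, 1.0)    # ~1 ohm*m, ferrite-like
p_high_sigma = eddy_loss_density(0.2, 1e6, 1e-3, 5e-7)  # metallic alloy
```

Loss scales inversely with resistivity, which is why a low-conductivity core material keeps eddy loss small at high frequency.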

  14. A concept of a component based system to determine pot-plant shelf-life

    DEFF Research Database (Denmark)

    Körner, Oliver; Skou, Anne-Marie Thonning; Aaslyng, Jesper Peter Mazanti;

    2006-01-01

    Plant keeping quality during shelf life is, next to genetic attributes, also determined by plant treatment, which is reflected in inner plant quality parameters. We expect that a model including information gathered during crop cultivation could be used to predict the inner crop quality, and from that, the keeping quality of a plant after removal from the greenhouse could be estimated. A concept for a model-based knowledge system aiming at determining the last selling date for pot plants is presented. The core of the conceptual system is a tool that can either be used to calculate the expected keeping quality or be applied as decision support during plant cultivation. In the latter case, the model-based system can be implemented in a greenhouse climate computer. The concept contains information on climate control strategies, controlled stress...

  15. Intercomparison of radiocarbon bomb pulse and 210Pb age models. A study in a peat bog core from North Poland

    Science.gov (United States)

    Piotrowska, Natalia; De Vleeschouwer, François; Sikorski, Jarosław; Pawlyta, Jacek; Fagel, Nathalie; Le Roux, Gaël; Pazdur, Anna

    2010-04-01

    Radiocarbon and 210Pb were measured on the uppermost 40 cm of a Wardenaar peat core retrieved from a Baltic raised bog at Słowińskie Błota (Pomerania, North Poland). This site is the subject of ongoing multiproxy studies covering the last 1300 years. The radiocarbon age model was constructed on the basis of 14 AMS dates obtained on selected Sphagnum spp. fragments, using the P_Sequence tool. We present here a comparison of this model with the age model obtained by classically applying the CRS model to the 210Pb measurements.
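The CRS (Constant Rate of Supply) 210Pb age model has a closed form: the age at depth x is t(x) = (1/λ) ln(A/A(x)), where A is the total unsupported 210Pb inventory and A(x) the inventory remaining below depth x. A minimal sketch with purely illustrative inventories:

```python
import math

PB210_HALF_LIFE_YR = 22.3
DECAY = math.log(2) / PB210_HALF_LIFE_YR  # 210Pb decay constant (1/yr)

def crs_ages(layer_inventories):
    """Ages (years before coring) at the bottom of each layer under the
    Constant Rate of Supply (CRS) model.  Input: unsupported 210Pb
    inventory of each layer (top layer first), any consistent units."""
    total = sum(layer_inventories)
    ages, above = [], 0.0
    for inv in layer_inventories:
        above += inv
        below = total - above  # inventory remaining below this depth
        if below <= 1e-12 * total:
            break  # beyond the practical dating limit of 210Pb
        ages.append(math.log(total / below) / DECAY)
    return ages

# Synthetic inventories decreasing with depth (illustrative numbers only).
ages = crs_ages([50.0, 30.0, 15.0, 4.0, 0.9])
```

Ages increase monotonically with depth, and the first age is close to one half-life because roughly half the total inventory lies above it.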

  16. Downscale cascades in tracer transport test cases: an intercomparison of the dynamical cores in the Community Atmosphere Model CAM5

    Directory of Open Access Journals (Sweden)

    J. Kent

    2012-07-01

    Full Text Available The accurate modelling of cascades to unresolved scales is an important part of the tracer transport component of dynamical cores of weather and climate models. This paper aims to investigate the ability of the advection schemes in the National Center for Atmospheric Research's Community Atmosphere Model version 5 (CAM5) to model this cascade. In order to quantify the effects of the different advection schemes in CAM5, four two-dimensional tracer transport test cases are presented. Three of the tests stretch the tracer below the scale of coarse resolution grids to ensure the downscale cascade of tracer variance. These results are compared with a high resolution reference solution, which is simulated on a resolution fine enough to resolve the tracer during the test. The fourth test has two separate flow cells, and is designed so that any tracer in the Western Hemisphere should not pass into the Eastern Hemisphere. This is to test whether the diffusion in transport schemes, often in the form of explicit hyper-diffusion terms or implicit through monotonic limiters, contains unphysical mixing.

    An intercomparison of three of the dynamical cores of the National Center for Atmospheric Research's Community Atmosphere Model version 5 is performed. The results show that the finite-volume (CAM-FV) and spectral element (CAM-SE) dynamical cores model the downscale cascade of tracer variance better than the semi-Lagrangian transport scheme of the Eulerian spectral transform core (CAM-EUL). Each scheme tested produces unphysical mass in the Eastern Hemisphere of the separate cells test.
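The implicit diffusion that transport schemes introduce, which damps the downscale cascade of tracer variance, can be seen in miniature with a first-order upwind scheme on a periodic 1-D grid. This is a deliberately diffusive toy, not one of the CAM5 schemes: mass is conserved exactly while tracer variance decays.

```python
def upwind_advect(q, courant, steps):
    """First-order upwind advection on a periodic 1-D grid.  The scheme
    conserves tracer mass exactly but is diffusive, so small-scale
    tracer variance is damped rather than cascading further down."""
    n = len(q)
    for _ in range(steps):
        q = [q[i] - courant * (q[i] - q[i - 1]) for i in range(n)]  # q[-1] wraps
    return q

def mass(q):
    return sum(q)

def variance(q):
    mean = sum(q) / len(q)
    return sum((x - mean) ** 2 for x in q) / len(q)

# A step-function tracer: its sharp edges carry small-scale variance.
q0 = [1.0] * 16 + [0.0] * 48
q1 = upwind_advect(q0, courant=0.5, steps=200)
```

After 200 steps, mass(q1) equals mass(q0) to rounding error, while variance(q1) has dropped well below variance(q0) as the step edges smear out.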

  17. Downscale cascades in tracer transport test cases: an intercomparison of the dynamical cores in the Community Atmosphere Model CAM5

    Directory of Open Access Journals (Sweden)

    J. Kent

    2012-12-01

    Full Text Available The accurate modeling of cascades to unresolved scales is an important part of the tracer transport component of dynamical cores of weather and climate models. This paper aims to investigate the ability of the advection schemes in the National Center for Atmospheric Research's Community Atmosphere Model version 5 (CAM5) to model this cascade. In order to quantify the effects of the different advection schemes in CAM5, four two-dimensional tracer transport test cases are presented. Three of the tests stretch the tracer below the scale of coarse resolution grids to ensure the downscale cascade of tracer variance. These results are compared with a high resolution reference solution, which is simulated on a resolution fine enough to resolve the tracer during the test. The fourth test has two separate flow cells, and is designed so that any tracer in the western hemisphere should not pass into the eastern hemisphere. This is to test whether the diffusion in transport schemes, often in the form of explicit hyper-diffusion terms or implicit through monotonic limiters, contains unphysical mixing.

    An intercomparison of three of the dynamical cores of the National Center for Atmospheric Research's Community Atmosphere Model version 5 is performed. The results show that the finite-volume (CAM-FV) and spectral element (CAM-SE) dynamical cores model the downscale cascade of tracer variance better than the semi-Lagrangian transport scheme of the Eulerian spectral transform core (CAM-EUL). Each scheme tested produces unphysical mass in the eastern hemisphere of the separate cells test.

  18. A predictive model of shell morphology in CdSe/CdS core/shell quantum dots

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Ke; Kelley, David F., E-mail: dfkelley@ucmerced.edu [Chemistry and Chemical Biology, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States)

    2014-11-21

    Lattice mismatch in core/shell nanoparticles occurs when the core and shell materials have different lattice parameters. When there is a significant lattice mismatch, a coherent core-shell interface results in substantial lattice strain energy, which can affect the shell morphology. The shell can be of uniform thickness or can be rough, having thin and thick regions. A smooth shell minimizes the surface energy at the expense of increased lattice strain energy and a rough shell does the opposite. A quantitative treatment of the lattice strain energy in determining the shell morphology of CdSe/CdS core/shell nanoparticles is presented here. We use the inhomogeneity in hole tunneling rates through the shell to adsorbed hole acceptors to quantify the extent of shell thickness inhomogeneity. The results can be understood in terms of a model based on elastic continuum calculations, which indicate that the lattice strain energy depends on both core size and shell thickness. The model assumes thermodynamic equilibrium, i.e., that the shell morphology corresponds to a minimum total (lattice strain plus surface) energy. Comparison with the experimental results indicates that CdSe/CdS nanoparticles undergo an abrupt transition from smooth to rough shells when the total lattice strain energy exceeds about 27 eV or the strain energy density exceeds 0.59 eV/nm². We also find that the predictions of this model are not followed for CdSe/CdS nanoparticles when the shell is deposited at very low temperature and therefore equilibrium is not established.
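The two thresholds quoted in the abstract (a total strain energy of about 27 eV, or a strain-energy density of 0.59 eV/nm²) suggest a simple classifier. The sketch below is a hypothetical reading of that criterion, not the paper's elastic-continuum calculation: the strain-energy density is taken as a given input, and the interface area is approximated by the core sphere (an assumption).

```python
import math

ROUGH_TOTAL_EV = 27.0        # total strain-energy threshold quoted in the abstract
ROUGH_DENSITY_EV_NM2 = 0.59  # strain-energy-density threshold quoted in the abstract

def predicted_morphology(strain_density_ev_nm2, core_radius_nm):
    """Classify the shell as 'smooth' or 'rough' from the two quoted
    thresholds.  The strain-energy density would come from an
    elastic-continuum calculation; here it is simply an input, and the
    interface area is approximated by the core sphere (an assumption)."""
    interface_area = 4.0 * math.pi * core_radius_nm ** 2  # nm^2
    total_energy = strain_density_ev_nm2 * interface_area  # eV
    if total_energy > ROUGH_TOTAL_EV or strain_density_ev_nm2 > ROUGH_DENSITY_EV_NM2:
        return "rough"
    return "smooth"

# Illustrative values only, not measurements from the paper.
small_particle = predicted_morphology(0.30, core_radius_nm=1.5)
large_particle = predicted_morphology(0.65, core_radius_nm=3.0)
```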

  19. Excitation of travelling torsional normal modes in an Earth's core model

    Science.gov (United States)

    Gillet, N.; Jault, D.; Canet, E.

    2017-09-01

    The proximity between the 6 yr recurrence time of the torsional Alfvén waves that have been inferred in the Earth's outer core over 1940-2010 and their 4 yr traveltime across the fluid core is nicely explained if these travelling waves are to be considered as normal modes. We discuss to what extent the emergence of free torsional modes from a stochastic forcing in the fluid core is compatible with some dissipation, specifically with an electromagnetic torque strong enough to account for the observed length of day variations of 6 yr period. In a spherical cavity enclosed by an insulating mantle, torsional normal modes consist of standing waves. In the presence of a conducting mantle, they transform into outward travelling waves very similar to the torsional waves that have been detected in the Earth's outer core. With such a resonant response a periodic forcing is not required to explain the regular recurrence of torsional waves; neither is the search for a source of motions in the vicinity of the cylindrical surface tangent to the inner core, where travelling waves seem to emerge. We discuss these results in the light of the reflection properties of torsional waves at the equator. We are able to reproduce the properties found for geophysical time-series of geostrophic flows (detection of a normal mode, almost total absorption at the equator) if the conductance of the lowermost mantle is 3 × 10⁷ to 10⁸ S.
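The few-year traveltime quoted above is consistent with an order-of-magnitude Alfvén-speed estimate. All numbers below are assumed round values (a cylindrical-radial field of a few mT, standard outer-core density and radius), not figures taken from the paper:

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability (H/m)
RHO = 1.0e4              # outer-core density (kg/m^3), rounded
B_S = 2.0e-3             # cylindrical-radial field strength (T), assumed
R_CORE = 3.48e6          # outer-core radius (m)
SECONDS_PER_YEAR = 3.156e7

# Torsional waves travel at the Alfven speed set by the s-component of
# the magnetic field averaged over geostrophic cylinders.
v_alfven = B_S / math.sqrt(MU0 * RHO)                # ~0.02 m/s
traveltime_yr = R_CORE / v_alfven / SECONDS_PER_YEAR  # a few years
```

With these round values the crossing time comes out at several years, the same order as the 4 yr traveltime inferred from the geostrophic-flow time-series.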

  20. Mobile Computing Clouds Interactive Model and Algorithm Based On Multi-core Grids

    Directory of Open Access Journals (Sweden)

    Liu Lizhao

    2013-09-01

    Full Text Available Multi-core technology is a key technology of mobile cloud computing. With the booming development of cloud technology, the authors focus on the problem of how to let target code compiled by a mobile cloud terminal multi-core compiler use the cloud multi-core system construction while ensuring the synchronization of cross-validated compilation data, and propose the concepts of indirect and direct synchronization of end mobile cloud entities. Using wave-formation energy conversion, a method is given for calculating the indirect and direct synchronization values according to the cross experience and cross time of compilation entities; a function relative-level algorithm is constructed with the Hellinger distance, and an algorithmic method for the comprehensive synchronization value is given. Based on experimental statistics and analysis, taking the threshold limit value as the average and the self-synchronization value as the deviation, the update function of the indirect synchronization value is constructed and an inter-domain multi-core synchronization flow chart is given; an inter-domain compilation data synchronization update experiment is then carried out in more than 3000 end mobile cloud multi-core compilation environments. Through analysis of the compilation process and results, the synchronization algorithm is shown to be reasonable and effective.

  1. IMPROVEMENT OF ACTIVITY AND LEARNING OUTCOMES ON MARKET PRICE FORMATION THROUGH APPLICATION OF THE CORE MODEL TO GRADE VIII STUDENTS OF SMP N 2 UNGARAN

    Directory of Open Access Journals (Sweden)

    Lala Sakuntala

    2013-11-01

    Full Text Available The problems that arose in learning at SMP N 2 Ungaran originated from the teaching methods and learning models used by the teacher. In lessons, teachers still tended to rely on lecturing, which led to a lack of student interaction and motivation while learning took place. Based on initial observations at SMP N 2 Ungaran, 53.12% of the students had not achieved mastery of the equilibrium-price material. The aim of this study was to determine whether applying the CORE learning model improves the activity and learning outcomes of grade VIII students at SMP Negeri 2 Ungaran on the topic of market price formation. The subjects were the students of class VIII D of SMP N 2 Ungaran, taught using the CORE learning model. The study was designed as classroom action research consisting of two cycles, each comprising planning, implementation, observation, and reflection. In cycle I the average learning outcome was 74.2 with 72% classical mastery; in cycle II it rose to 78.9 with 88% classical mastery. Student and teacher activity increased with the application of the CORE learning model: student activity rose from 70% in cycle I to 87.5% in cycle II, while teacher activity rose from 75% in cycle I to 95% in cycle II. It can be concluded that student activity and learning outcomes on the equilibrium-price material improved under the CORE learning model. Based on these results, it is recommended that teachers prepare before starting lessons, manage the class well, and choose methods appropriate to the teaching and learning process.

  2. A Service Component-based Accounting and Charging Architecture to Support Interim Mechanisms across Multiple Domains

    NARCIS (Netherlands)

    Le, van M.; Beijnum, van B.J.F.; Huitema, G.B.

    2004-01-01

    Today, telematics services are often compositions of different chargeable service components offered by different service providers. To enhance component-based accounting and charging, the service composition information is used to match with the corresponding charging structure of a service session

  3. Reducing the Runtime Acceptance Costs of Large-Scale Distributed Component-Based Systems

    NARCIS (Netherlands)

    Gonzalez, A.; Piel, E.; Gross, H.G.

    2008-01-01

    Software Systems of Systems (SoS) are large-scale distributed component-based systems in which the individual components are elaborate and complex systems in their own right. Distinguishing characteristics are their short expected integration and deployment time, and the need to modify their archite

  4. A service component-based accounting and charging architecture to support interim mechanisms across multiple domains

    NARCIS (Netherlands)

    Le, M. van; Beijnum, B.J.F. van; Huitema, G.B.

    2004-01-01

    Today, telematics services are often compositions of different chargeable service components offered by different service providers. To enhance component-based accounting and charging, the service composition information is used to match with the corresponding charging structure of a service session

  5. The performance of a component-based allergen microarray for the diagnosis of kiwifruit allergy

    NARCIS (Netherlands)

    Bublin, M.; Dennstedt, S.; Buchegger, M.; Ciardiello, M. Antonietta; Bernardi, M. L.; Tuppo, L.; Harwanegg, C.; Hafner, C.; Ebner, C.; Ballmer-Weber, B. K.; Knulst, A.; Hoffmann-Sommergruber, K.; Radauer, C.; Mari, A.; Breiteneder, H.

    Background: Allergy to kiwifruit is increasingly reported across Europe. Currently, the reliability of its diagnosis by the measurement of allergen-specific IgE with extracts or by skin testing with fresh fruits is unsatisfying. Objective: To evaluate the usefulness of a component-based allergen

  6. Reducing the Runtime Acceptance Costs of Large-Scale Distributed Component-Based Systems

    NARCIS (Netherlands)

    Gonzalez, A.; Piel, E.; Gross, H.G.

    2008-01-01

    Software Systems of Systems (SoS) are large-scale distributed component-based systems in which the individual components are elaborate and complex systems in their own right. Distinguishing characteristics are their short expected integration and deployment time, and the need to modify their

  7. Prototypic implementations of the building block for component based open Hypermedia systems (BB/CB-OHSs)

    DEFF Research Database (Denmark)

    Mohamed, Omer I. Eldai

    2005-01-01

    In this paper we describe the prototypic implementations of the BuildingBlock (BB/CB-OHSs) that was proposed to address some of the issues of Component-based Open Hypermedia Systems (CB-OHSs), including distribution and interoperability [4, 11, 12]. Four service implementations are described below...

  8. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE. II. RELATIVISTIC EXPLOSION MODELS OF CORE-COLLAPSE SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas, E-mail: bjmuellr@mpa-garching.mpg.de, E-mail: thj@mpa-garching.mpg.de [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany)

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M☉ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on a level no less important than hydrodynamical differences between different dimensions.

  9. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    Science.gov (United States)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin and Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked with experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with the predictions of standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in gas core reactors is examined for a variety of power densities: 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures corresponding to these heat generation rates are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high

  10. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    Science.gov (United States)

    Parrenin, Frédéric; Bazin, Lucie; Capron, Emilie; Landais, Amaëlle; Lemieux-Dudon, Bénédicte; Masson-Delmotte, Valérie

    2016-04-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age-scale uncertainty are essential to interpret the climate and environmental records that they contain. It is, however, a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model comprises: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice- and air-dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shifts between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links between ice cores. The optimization is formulated as a least squares problem, implying that all probability densities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono1 follows an approach similar to that of the Datice model, which was recently used to produce the AICC2012 chronology for 4 Antarctic ice cores and 1 Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both the IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals
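The optimisation strategy described above (a least-squares problem solved with the Levenberg-Marquardt algorithm and a numerical Jacobian) can be sketched at toy scale. The problem below, recovering the two parameters of an exponential loosely analogous to a thinning profile, is purely illustrative; a fixed damping term stands in for the full adaptive LM update:

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def levenberg_marquardt(residual, p0, n_iter=100, damping=1e-3, eps=1e-7):
    """Fixed-damping Levenberg-Marquardt with a finite-difference Jacobian:
    repeatedly solve (J^T J + damping*I) dp = -J^T r and update p."""
    p = list(p0)
    for _ in range(n_iter):
        r = residual(p)
        m, n = len(r), len(p)
        J = [[0.0] * n for _ in range(m)]
        for j in range(n):  # numerical Jacobian, one column per parameter
            q = list(p)
            q[j] += eps
            rq = residual(q)
            for i in range(m):
                J[i][j] = (rq[i] - r[i]) / eps
        JtJ = [[sum(J[k][i] * J[k][j] for k in range(m)) + (damping if i == j else 0.0)
                for j in range(n)] for i in range(n)]
        Jtr = [sum(J[k][i] * r[k] for k in range(m)) for i in range(n)]
        dp = solve_linear(JtJ, [-g for g in Jtr])
        p = [p[i] + dp[i] for i in range(n)]
    return p

# Toy inversion: recover (a, b) of y = a*exp(-b*x) from synthetic data.
xs = [0.5 * i for i in range(10)]
ys = [2.0 * math.exp(-0.7 * x) for x in xs]
fit = levenberg_marquardt(
    lambda p: [p[0] * math.exp(-p[1] * x) - y for x, y in zip(xs, ys)],
    [1.0, 0.2])
```

The real model inverts thousands of unknowns with sparse structure; the same normal-equation update is then solved with dedicated linear algebra rather than dense elimination.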

  11. Characterization and temporal development of cores in a mouse model of malignant hyperthermia

    OpenAIRE

    2009-01-01

    Malignant hyperthermia (MH) and central core disease are related skeletal muscle diseases often linked to mutations in the type 1 ryanodine receptor (RYR1) gene, encoding for the Ca2+ release channel of the sarcoplasmic reticulum (SR). In humans, the Y522S RYR1 mutation is associated with malignant hyperthermia susceptibility (MHS) and the presence in skeletal muscle fibers of core regions that lack mitochondria. In heterozygous Y522S knock-in mice (RYR1Y522S/WT), the mutation causes SR Ca2+ ...

  12. New radiofrequency device to reduce bleeding after core needle biopsy: Experimental study in a porcine liver model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Sang Hyeok; Rhim, Hyun Chul; Lee, Min Woo; Song, Kyoung Doo; Kang, Tae Wook; Kim, Young Sun; Lim, Hyo Keun [Dept. of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2017-01-15

    To evaluate the in vivo efficiency of biopsy tract radiofrequency ablation for hemostasis after core biopsy of the liver in a porcine liver model, including situations with a bleeding tendency and a larger (16-gauge) core needle. A preliminary study was performed using one pig to determine the optimal ablation parameters. For the main experiment, four pigs were assigned to different groups according to heparinization use and biopsy needle caliber. In each pig, 14 control (without tract ablation) and 14 experimental (tract ablation) ultrasound-guided core biopsies were performed using either an 18- or 16-gauge needle. Post-biopsy bleeding amounts were measured by soaking up the blood for five minutes. The results were compared using the Mann-Whitney U test. The optimal parameters for biopsy tract ablation were determined as a 2-cm active tip electrode set at 40 watts with a tip temperature of 70–80℃. The bleeding amounts in all experimental groups were smaller than those in the controls; however, the differences were statistically significant in the non-heparinized pig biopsied with an 18-gauge needle and in the two heparinized pigs (p < 0.001). In the heparinized pigs, the mean blood loss in the experimental group was 3.5% and 13.5% of the controls biopsied with an 18- and 16-gauge needle, respectively. Radiofrequency ablation of the hepatic core biopsy tract may reduce post-biopsy bleeding, even with a bleeding tendency and a larger core needle, according to the results of these in vivo porcine model experiments.

  13. New Radiofrequency Device to Reduce Bleeding after Core Needle Biopsy: Experimental Study in a Porcine Liver Model

    Science.gov (United States)

    Lim, Sanghyeok; Lee, Min Woo; Song, Kyoung Doo; Kang, Tae Wook; Kim, Young-sun; Lim, Hyo Keun

    2017-01-01

    Objective: To evaluate the in vivo efficiency of biopsy tract radiofrequency ablation for hemostasis after core biopsy of the liver in a porcine liver model, including situations with a bleeding tendency and a larger (16-gauge) core needle. Materials and Methods: A preliminary study was performed using one pig to determine the optimal ablation parameters. For the main experiment, four pigs were assigned to different groups according to heparinization use and biopsy needle caliber. In each pig, 14 control (without tract ablation) and 14 experimental (tract ablation) ultrasound-guided core biopsies were performed using either an 18- or 16-gauge needle. Post-biopsy bleeding amounts were measured by soaking up the blood for five minutes. The results were compared using the Mann-Whitney U test. Results: The optimal parameters for biopsy tract ablation were determined as a 2-cm active tip electrode set at 40 watts with a tip temperature of 70–80℃. The bleeding amounts in all experimental groups were smaller than those in the controls; however, the differences were statistically significant in the non-heparinized pig biopsied with an 18-gauge needle and in the two heparinized pigs (p < 0.001). In the heparinized pigs, the mean blood loss in the experimental group was 3.5% and 13.5% of the controls biopsied with an 18- and 16-gauge needle, respectively. Conclusion: Radiofrequency ablation of the hepatic core biopsy tract may reduce post-biopsy bleeding, even with a bleeding tendency and a larger core needle, according to the results of these in vivo porcine model experiments. PMID:28096727
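The Mann-Whitney U comparison used in these two records is straightforward to reproduce. The sketch below implements the statistic with tie-averaged ranks and a normal-approximation two-sided p-value (adequate for groups of about 14 biopsies); the bleeding amounts are hypothetical, not the study's data:

```python
import math

def average_ranks(values):
    """1-based ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """U statistic and two-sided p-value via the normal approximation
    (no tie correction)."""
    n1, n2 = len(a), len(b)
    ranks = average_ranks(list(a) + list(b))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma  # u <= mu, so z <= 0
    p = 1.0 + math.erf(z / math.sqrt(2))  # 2 * Phi(z) for z <= 0
    return u, p

control = [3.2, 2.8, 3.5, 4.1, 2.9, 3.7, 3.3]  # hypothetical bleeding amounts (g)
ablated = [0.3, 0.5, 0.2, 0.6, 0.4, 0.3, 0.5]
u, p = mann_whitney_u(ablated, control)
```

With no overlap between the two samples, U is 0 and the p-value falls well below 0.01. For small samples without ties, an exact test is preferable to the normal approximation.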

  14. redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models.

    Science.gov (United States)

    Ataman, Meric; Hernandez Gardiol, Daniel F; Fengos, Georgios; Hatzimanikatis, Vassily

    2017-07-01

    Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of an organism, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across studies and laboratories because they are built with differing criteria and levels of detail, which can compromise the transferability of findings and the integration of experimental data from different groups. In this study, we developed a systematic, semi-automatic approach that reduces genome-scale models into core models in a consistent and logical manner, focusing on central metabolism or subsystems of interest. The method minimizes the loss of information by combining graph-based search and optimization methods. The resulting core models are shown to capture key properties of the genome-scale models and to preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently reduced" models will help to clarify and facilitate the integration of experimental data from different sources, yielding new understanding that is directly extendable to genome-scale models.
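redGEM itself combines graph-based search with optimization over the full stoichiometric model; the following is only a minimal, hypothetical sketch of the graph-search ingredient. Starting from a chosen core subsystem, it keeps every metabolite that lies on a connecting path of bounded length between two core metabolites, discarding the rest of the network:

```python
from collections import deque

def reduce_network(adj, core, max_depth=2):
    """Keep core nodes plus any node on a connecting path of length
    <= max_depth between two core nodes (BFS from each core node).
    adj: dict mapping node -> iterable of neighbor nodes."""
    keep = set(core)
    for src in core:
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:                      # breadth-first search, depth-limited
            n = q.popleft()
            if dist[n] == max_depth:
                continue
            for m in adj.get(n, ()):
                if m not in dist:
                    dist[m] = dist[n] + 1
                    parent[m] = n
                    q.append(m)
        for tgt in core:              # retain intermediates on core-core paths
            if tgt != src and tgt in dist:
                n = tgt
                while n is not None:
                    keep.add(n)
                    n = parent[n]
    return keep

# Toy metabolite graph: chain A-B-C-D with dead-end X off A.
adj = {'A': ['B', 'X'], 'B': ['A', 'C'], 'C': ['B', 'D'],
       'D': ['C'], 'X': ['A']}
reduced = reduce_network(adj, ['A', 'D'], max_depth=3)
```

With `max_depth=3` the chain A-B-C-D survives while the dead-end X is pruned; with `max_depth=2` no connecting path exists and only the core nodes remain. The real method additionally checks consistency of yields and essentiality via optimization, which this sketch omits.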

  15. redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models.

    Directory of Open Access Journals (Sweden)

    Meric Ataman

    2017-07-01

    Full Text Available Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of an organism, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across studies and laboratories because they are built with differing criteria and levels of detail, which can compromise the transferability of findings and the integration of experimental data from different groups. In this study, we developed a systematic, semi-automatic approach that reduces genome-scale models into core models in a consistent and logical manner, focusing on central metabolism or subsystems of interest. The method minimizes the loss of information by combining graph-based search and optimization methods. The resulting core models are shown to capture key properties of the genome-scale models and to preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently reduced" models will help to clarify and facilitate the integration of experimental data from different sources, yielding new understanding that is directly extendable to genome-scale models.

  16. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleoelectrica Argentina Sociedad Anonima NA-SA, Buenos Aires (Argentina)

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes to perform realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work, a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron-kinetics coupled thermal-hydraulics model applied by GRNSPG/UNIPI for analyzing selected Chapter 15 FSAR transients of Atucha-2. (authors)
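Collecting flux on a hexagonal mesh amounts to binning point-wise scores by hexagonal cell. The sketch below is generic and hypothetical (MCNP5 has its own mesh-tally machinery; none of its API is used here): it maps a planar point to axial hex indices via cube-coordinate rounding for a pointy-top lattice with center-to-center distance `pitch`, then accumulates scores per cell:

```python
import math

def point_to_hex(x, y, pitch):
    """Map a planar point to axial (q, r) indices of a pointy-top
    hexagonal lattice with center-to-center spacing `pitch`."""
    size = pitch / math.sqrt(3)            # circumradius of one hex cell
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 * y / 3) / size
    # Cube-coordinate rounding: round each axis, then repair the axis
    # with the largest rounding error so x + y + z == 0 still holds.
    cx, cz = q, r
    cy = -cx - cz
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def tally(points, scores, pitch):
    """Accumulate per-hex-cell scores, keyed by axial (q, r)."""
    flux = {}
    for (x, y), s in zip(points, scores):
        key = point_to_hex(x, y, pitch)
        flux[key] = flux.get(key, 0.0) + s
    return flux

flux = tally([(0.0, 0.0), (0.9, 0.0), (0.05, 0.0)],
             [1.0, 2.0, 3.0], pitch=1.0)
```

A real flux tally would also normalize by cell volume and particle count; the binning step is the part specific to the hexagonal core representation.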

  17. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in a...

  18. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in...

  19. Development of a cross-section methodology and a real-time core model for VVER-1000 simulator application

    Energy Technology Data Exchange (ETDEWEB)

    Georgieva, Emiliya Lyudmilova

    2016-06-06

    The novel academic contributions are summarized as follows. A) A cross-section modelling methodology and a cycle-specific cross-section update procedure are developed to meet the fidelity requirements applicable to cycle-specific reactor core simulation, as well as particular customer needs and practices supporting VVER-1000 operation and safety. B) A real-time version of the Nodal Expansion Method code is developed and implemented in the Kozloduy Unit 6 full-scope replica control-room simulator.

  20. Development Of Advanced Sandwich Core Topologies Using Fused Deposition Modeling And Electroforming Processes

    Science.gov (United States)

    Storck, Steven M.

    New weight-efficient materials are needed to enhance the performance of vehicle systems, allowing increased speed, maneuverability, and fuel economy. This work leveraged a multi-length-scale composite approach combined with a hybrid-material methodology to create a new state-of-the-art additively manufactured sandwich core material. The goal of the research was to generate a new material that expands the material space for strength versus density. Fused Deposition Modeling (FDM) was used to remove geometric manufacturing constraints, and electrodeposition was used to generate a high-specific-strength, bio-inspired hybrid material. Microtension samples (3 mm × 1 mm with a 250 μm × 250 μm gage section) were used to investigate the electrodeposited coatings in the transverse (TD) and growth (GD) directions. Three bath chemistries were tested: copper, traditional nickel sulfamate (TNS) nickel, and nickel deposited with a platinum anode (NDPA). NDPA shows tensile strength exceeding 1600 MPa, significantly beyond the literature-reported value of 60 MPa. This strengthening was linked to grain-size refinement into the sub-30 nm range, in addition to grain-texture refinement that left only 17% of the slip systems for nickel active. Anisotropy was observed in the nickel deposits, which was linked to texture evolution inside the coating. Microsample testing guided the selection of a 15 μm layer of copper deposition followed by a 250 μm NDPA layer. Classical formulas for structural collapse were used to guide an experimental parametric study to establish a weight- and volume-efficient strut topology. Length, diameter, and thickness were all investigated to determine the optimal column topology. The optimum occurs when Euler buckling, shell micro-buckling, and yielding failure modes all coexist in a single geometry. Three macro-scale sandwich topologies (pyramidal, tetrahedral, and strut-reinforced tetrahedral (SRT)) were investigated with respect to strength per unit weight.
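The mode-coincidence argument can be illustrated numerically. As a hedged sketch (assuming thin-walled tubular struts with pinned ends and the classical unstiffened-cylinder axial buckling stress, with no empirical knockdown factors), the governing failure mode is simply whichever of the Euler, local-shell, and yield stresses is smallest:

```python
import math

def strut_failure(E, sigma_y, nu, L, r, t, K=1.0):
    """Governing failure mode and load for a thin-walled tubular strut.
    Thin-wall approximations: A ~ 2*pi*r*t, I ~ pi*r^3*t.
    Returns (mode_name, failure_load_N)."""
    A = 2 * math.pi * r * t
    I = math.pi * r**3 * t
    s_euler = math.pi**2 * E * I / (A * (K * L)**2)   # column buckling
    s_shell = E * t / (r * math.sqrt(3 * (1 - nu**2)))  # local shell buckling
    s_yield = sigma_y                                  # material yielding
    mode, s = min(("Euler", s_euler), ("shell", s_shell),
                  ("yield", s_yield), key=lambda m: m[1])
    return mode, s * A

# Hypothetical nickel-like properties (E, sigma_y in Pa; dimensions in m).
slender = strut_failure(200e9, 1.6e9, 0.3, L=0.05, r=0.001, t=0.0002)
stubby = strut_failure(200e9, 1.6e9, 0.3, L=0.005, r=0.002, t=0.0005)
```

A long slender strut fails by Euler buckling, a short thick one by yielding; sweeping (L, r, t) until all three stresses coincide reproduces the parametric-study logic described above, where no single mode wastes material margin.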
The