WorldWideScience

Sample records for modeling tool consists

  1. Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.

    2003-01-01

    In this paper we analyze requirements for a tool that supports the integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...

  2. Chip Multithreaded Consistency Model

    Institute of Scientific and Technical Information of China (English)

    Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang

    2008-01-01

    Multithreading is the developing trend of high-performance processors, and the memory consistency model is essential to the correctness, performance and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restrictions imposed on memory event ordering by chip multithreaded consistency are presented and formalized. Using the idea of the critical cycle introduced by Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the criterion of correct execution of the sequential consistency model. The chip multithreaded consistency model provides a way of achieving higher performance than the sequential consistency model while ensuring software compatibility: the execution result on a multithreaded processor is the same as the execution result on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed. The Godson-2 SMT processor supports the chip multithreaded consistency model correctly through an exception scheme based on the sequential memory access queue of each thread.

  3. Consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to a semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to generate automatically complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.

  4. A Spectral Unmixing Model for the Integration of Multi-Sensor Imagery: A Tool to Generate Consistent Time Series Data

    Directory of Open Access Journals (Sweden)

    Georgia Doxani

    2015-10-01

    The Sentinel missions have been designed to support the operational services of the Copernicus program, ensuring long-term availability of data for a wide range of spectral, spatial and temporal resolutions. In particular, Sentinel-2 (S-2) data, with improved high spatial resolution and higher revisit frequency (five days with the pair of satellites in operation), will play a fundamental role in recording land cover types and monitoring land cover changes at regular intervals. Nevertheless, cloud coverage usually hinders time series availability and consequently continuous land surface monitoring. In an attempt to alleviate this limitation, the synergistic use of instruments with different features is investigated, aiming at the future synergy of the S-2 MultiSpectral Instrument (MSI) and the Sentinel-3 (S-3) Ocean and Land Colour Instrument (OLCI). To that end, an unmixing model is proposed with the intention of integrating the benefits of the two Sentinel missions, when both are in orbit, in one composite image. The main goal is to fill the data gaps in the S-2 record, based on the more frequent information of the S-3 time series. The proposed fusion model has been applied on MODIS (MOD09GA L2G) and SPOT4 (Take 5) data, and the experimental results have demonstrated that the approach has high potential. However, the different acquisition characteristics of the sensors, i.e. illumination and viewing geometry, should be taken into consideration, and bidirectional effects correction has to be performed in order to reduce noise in the reflectance time series.
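
    A minimal sketch of the core idea behind such spatial unmixing (our own illustration, not the authors' implementation): each coarse (S-3/OLCI-like) pixel is modeled as the area-weighted mixture of the class signals resolved by the fine (S-2/MSI-like) map, and the per-class signals recovered by least squares can then fill cloud gaps in the fine record. All names and the synthetic data below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 coarse pixels, 4 land-cover classes.
n_coarse, n_classes = 100, 4
# Class fractions inside every coarse pixel, taken from the fine-scale map.
fractions = rng.dirichlet(np.ones(n_classes), size=n_coarse)   # (100, 4)
true_signal = np.array([0.05, 0.12, 0.30, 0.45])               # per-class reflectance
coarse_obs = fractions @ true_signal + rng.normal(0, 0.005, n_coarse)

# Unmix: solve fractions @ s ~= coarse_obs for the per-class signals s.
s_hat, *_ = np.linalg.lstsq(fractions, coarse_obs, rcond=None)

# A cloud-covered fine pixel of class k can then be synthesized as s_hat[k].
print(np.round(s_hat, 3))  # close to true_signal
```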

  5. Self-consistent triaxial models

    CERN Document Server

    Sanders, Jason L

    2015-01-01

    We present self-consistent triaxial stellar systems that have analytic distribution functions (DFs) expressed in terms of the actions. These provide triaxial density profiles with cores or cusps at the centre. They are the first self-consistent triaxial models with analytic DFs suitable for modelling giant ellipticals and dark haloes. Specifically, we study triaxial models that reproduce the Hernquist profile from Williams & Evans (2015), as well as flattened isochrones of the form proposed by Binney (2014). We explore the kinematics and orbital structure of these models in some detail. The models typically become more radially anisotropic on moving outwards, and have velocity ellipsoids aligned in Cartesian coordinates in the centre and aligned in spherical polar coordinates in the outer parts. In projection, the ellipticity of the isophotes and the position angle of the major axis of our models generally change with radius. So, a natural application is to elliptical galaxies that exhibit isophote twisting...

  6. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  7. Consistent ranking of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2006-01-01

    We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can...

  8. Consistent ranking of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2006-01-01

    We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can result in an inferior model being chosen as "best" with a probability that converges to one as the sample size increases. We document the practical relevance of this problem in an empirical application and by simulation experiments. Our results provide an additional argument for using the realized variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable.
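
    A toy illustration of the proxy problem described above, under assumptions of our own: with an absolute-error loss (which, unlike MSE, is not robust to proxy noise), evaluating forecasts against the squared-return proxy systematically favors a downward-biased forecast over the true conditional variance.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100_000

# Latent conditional variance path and squared returns r_t^2 = h_t * z_t^2.
h = 0.5 + 0.5 * rng.uniform(size=T)      # true (unobservable) volatility
r2 = h * rng.standard_normal(T) ** 2     # squared-return proxy for h

forecast_a = h           # the correct model
forecast_b = 0.6 * h     # a downward-biased, inferior model

def mae(target, forecast):
    return np.mean(np.abs(target - forecast))

# Against the true variance, model A wins (zero loss vs positive loss).
print("vs true h:", mae(h, forecast_a), mae(h, forecast_b))
# Against the noisy proxy, the biased model B looks better, because the
# conditional median of r_t^2 is only about 0.455 * h_t.
print("vs proxy :", mae(r2, forecast_a), mae(r2, forecast_b))
```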

  9. A parameter optimization tool for evaluating the physical consistency of the plot-scale water budget of the integrated eco-hydrological model GEOtop in complex terrain

    Science.gov (United States)

    Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg

    2017-04-01

    In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically-based eco-hydrological models are used in mountain areas, there are a large number of parameters, topographic and boundary conditions that need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving calibration of plot-scale soil moisture and evapotranspiration (ET); (II) to identify the most sensitive parameters and relevant factors controlling temporal and spatial differences among sites; (III) to identify possible model structural deficiencies or uncertainties in boundary conditions. Simulations have been performed with the GEOtop 2.0 model, which is a physically-based, fully distributed integrated eco-hydrological model that has been specifically designed for mountain...
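
    As a sketch of the kind of parameter screening described here (assuming the SALib package and a stand-in surrogate function in place of an actual GEOtop run, both our own choices), Sobol indices rank the parameters that most influence a simulated output such as evapotranspiration:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical parameter space; a real study would use GEOtop soil/vegetation parameters.
problem = {
    "num_vars": 3,
    "names": ["soil_conductivity", "root_depth", "leaf_area_index"],
    "bounds": [[1e-6, 1e-4], [0.1, 1.5], [0.5, 6.0]],
}

def surrogate_et(x):
    """Stand-in for a GEOtop simulation returning mean daily ET [mm]."""
    k, rd, lai = x
    return 2.0 * np.tanh(5e4 * k) + 0.8 * rd + 0.3 * np.log(lai)

X = saltelli.sample(problem, 1024)            # N*(2D+2) parameter sets
Y = np.array([surrogate_et(x) for x in X])
Si = sobol.analyze(problem, Y)                # Sobol sensitivity indices
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))  # first-order indices
```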

  10. A Framework of Memory Consistency Models

    Institute of Scientific and Technical Information of China (English)

    Wei-Wu Hu; Wei-Song Shi; et al.

    1998-01-01

    Previous descriptions of memory consistency models in shared-memory multiprocessor systems are mainly expressed as constraints on the memory access event ordering and hence are hardware-centric. This paper presents a framework of memory consistency models which describes the memory consistency model on the behavior level. Based on the understanding that the behavior of an execution is determined by the execution order of conflicting accesses, a memory consistency model is defined as an interprocessor synchronization mechanism which orders the execution of operations from different processors. The synchronization order of an execution under a certain consistency model is also defined. The synchronization order, together with the program order, determines the behavior of an execution. This paper also presents criteria for correct programs and correct implementations of consistency models. Regarding an implementation of a consistency model as certain memory event ordering constraints, this paper provides a method to prove the correctness of consistency model implementations, and the correctness of the lock-based cache coherence protocol is proved with this method.

  11. A self-consistent Maltsev pulse model

    Science.gov (United States)

    Buneman, O.

    1985-04-01

    A self-consistent model for an electron pulse propagating through a plasma is presented. In this model, the charge imbalance between plasma ions, plasma electrons and pulse electrons creates the travelling potential well in which the pulse electrons are trapped.

  12. Self-Consistent Asset Pricing Models

    CERN Document Server

    Malevergne, Y

    2006-01-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value $\alpha_i$ at the origin between an asset $i$'s return and the proxy's return. Self-consistency also introduces "orthogonality" and "normality" conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy por...
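
    A small simulation in the spirit of the numerical experiments mentioned above (entirely our own construction, not the authors' code): asset returns are generated so that the market portfolio is exactly the weighted sum of the assets, and the OLS intercept of an asset on a portfolio shifts once an arbitrary proxy replaces the true market.

```python
import numpy as np

rng = np.random.default_rng(2)
n_assets, T = 10, 50_000

beta = rng.uniform(0.5, 1.5, n_assets)
factor = rng.normal(0.0005, 0.01, T)
eps = rng.normal(0, 0.02, (T, n_assets))

# Asset returns; the self-consistent market portfolio is their weighted sum.
r = factor[:, None] * beta + eps
market = r @ np.full(n_assets, 1.0 / n_assets)

# A proxy portfolio with arbitrary (wrong) weights.
proxy = r @ rng.dirichlet(np.ones(n_assets))

for name, portfolio in [("market", market), ("proxy", proxy)]:
    slope, intercept = np.polyfit(portfolio, r[:, 0], 1)  # OLS of asset 0
    print(f"{name:6s} alpha={intercept:+.2e} beta={slope:.3f}")
```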

  13. Entropy-based consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław Jerzy

    2016-09-01

    A description of software architecture is a plan of the IT system construction; therefore any architecture gaps affect the overall success of an entire project. Most definitions describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent. Software architecture completeness is also often described in an ambiguous way. As a result, most methods of building IT systems comprise many gaps and ambiguities, thus presenting obstacles to software building automation. In this article the consistency and completeness of software architecture are mathematically defined based on calculation of the entropy of the architecture description. Following this approach, in this paper we also propose our method of automatic verification of consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of the ongoing modelling work for the start of IT system building. It even allows them to assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of such an approach is that it facilitates the preparation of complete and consistent software architecture more effectively and enables assessing and monitoring of the ongoing modelling development status. We demonstrate this with a few industry examples of IT system designs.
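
    To make the general idea of an entropy-based completeness signal concrete (a crude sketch of our own, not the paper's exact FBS metric), one can measure how evenly a model's elements cover the Functionality, Behaviour and Structure views; low entropy flags a lopsided, likely incomplete description.

```python
from math import log2

def view_entropy(counts):
    """Shannon entropy of the element distribution across FBS views,
    normalized to [0, 1]; 1.0 means perfectly even coverage."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs) / log2(len(counts))

# Hypothetical element counts in (Functionality, Behaviour, Structure) views.
print(view_entropy([12, 11, 13]))  # ~1.0 -> balanced architecture description
print(view_entropy([25, 2, 1]))    # low  -> behaviour/structure underspecified
```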

  14. Self-consistent model of fermions

    CERN Document Server

    Yershov, V N

    2002-01-01

    We discuss a composite model of fermions based on three-flavoured preons. We show that the opposite character of the Coulomb and strong interactions between these preons leads to the formation of complex structures reproducing three generations of quarks and leptons with all their quantum numbers and masses. The model is self-consistent (it uses no input parameters). Nevertheless, the masses of the generated structures match the experimental values.

  15. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...
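
    As a sketch of the calibration step mentioned above (our own minimal example with synthetic data rather than the North Sea measurements, assuming SciPy), annual maximum significant wave heights are commonly fitted with an extreme-value distribution by maximum likelihood:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic 40-year sample of annual maximum significant wave height [m].
hs_annual_max = stats.gumbel_r.rvs(loc=6.0, scale=1.2, size=40, random_state=rng)

# Maximum Likelihood fit of a Gumbel (type I extreme value) model.
loc, scale = stats.gumbel_r.fit(hs_annual_max)

# 100-year return value: the quantile with annual exceedance probability 1/100.
hs_100 = stats.gumbel_r.ppf(1 - 1 / 100, loc=loc, scale=scale)
print(f"loc={loc:.2f} m, scale={scale:.2f} m, Hs(100 yr)={hs_100:.2f} m")
```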

  16. Developing consistent pronunciation models for phonemic variants

    CSIR Research Space (South Africa)

    Davel, M

    2006-09-01

    ...from a lexicon containing variants. In this paper we address both these issues by creating ‘pseudo-phonemes’ associated with sets of ‘generation restriction rules’ to model those pronunciations that are consistently realised as two or more...

  17. Are there consistent models giving observable NSI ?

    CERN Document Server

    Martinez, Enrique Fernandez

    2013-01-01

    While the existing direct bounds on neutrino NSI are rather weak, order 10^(-1) for propagation and 10^(-2) for production and detection, the close connection between these interactions and new NSI affecting the better-constrained charged lepton sector through gauge invariance makes these bounds hard to saturate in realistic models. Indeed, Standard Model extensions leading to neutrino NSI typically imply constraints at the 10^(-3) level. The question of whether consistent models can lead to observable neutrino NSI therefore naturally arises, and was discussed in a dedicated session at NUFACT 11. Here we summarize that discussion.

  18. Population Density Modeling Tool

    Science.gov (United States)

    2014-02-05

    Population Density Modeling Tool, by Davy Andrew Michael Knott and David Burke. Report NAWCADPAX/TR-2012/194, Maryland, 26 June 2012. Distribution...

  19. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new...
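
    A minimal sketch of calibration as constrained optimization in the spirit of TCMC (a toy of our own, not the authors' MATLAB code): for a closed reaction cycle, thermodynamic consistency requires the Wegscheider condition that the product of forward rate constants equals the product of reverse ones around the loop, imposed here as an equality constraint while staying close to the raw estimates.

```python
import numpy as np
from scipy.optimize import minimize

# Toy cycle A<->B<->C<->A with rate constants k = (kf1, kr1, kf2, kr2, kf3, kr3).
k_true = np.array([2.0, 1.0, 1.5, 3.0, 1.0, 1.0])     # satisfies the constraint
rng = np.random.default_rng(4)
observed = np.log(k_true) + rng.normal(0, 0.2, 6)      # noisy, infeasible estimates

def loss(logk):
    # Least-squares distance to the raw (thermodynamically infeasible) estimates.
    return np.sum((logk - observed) ** 2)

def wegscheider(logk):
    # ln(kf1*kf2*kf3) - ln(kr1*kr2*kr3) must vanish around a closed cycle.
    return (logk[0] + logk[2] + logk[4]) - (logk[1] + logk[3] + logk[5])

res = minimize(loss, x0=observed, constraints=[{"type": "eq", "fun": wegscheider}])
k_fit = np.exp(res.x)
ratio = k_fit[0] * k_fit[2] * k_fit[4] / (k_fit[1] * k_fit[3] * k_fit[5])
print("cycle ratio:", ratio)  # ~1.0, i.e. thermodynamically feasible
```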

  20. Consistent quadrupole-octupole collective model

    Science.gov (United States)

    Dobrowolski, A.; Mazurek, K.; Góźdź, A.

    2016-11-01

    Within this work we present a consistent approach to quadrupole-octupole collective vibrations coupled with the rotational motion. A realistic collective Hamiltonian with variable mass-parameter tensor and potential, obtained through the macroscopic-microscopic Strutinsky-like method with a particle-number-projected BCS (Bardeen-Cooper-Schrieffer) approach in the full vibrational and rotational, nine-dimensional collective space, is diagonalized in the basis of projected harmonic oscillator eigensolutions. This orthogonal basis of zero-, one-, two-, and three-phonon oscillator-like functions in the vibrational part, coupled with the corresponding Wigner function, is, in addition, symmetrized with respect to the so-called symmetrization group, appropriate to the collective space of the model. In the present model it is the D4 group acting in the body-fixed frame. This symmetrization procedure is applied in order to ensure the uniqueness of the Hamiltonian eigensolutions with respect to the laboratory coordinate system. The symmetrization is obtained using the projection onto the irreducible representation technique. The model generates the quadrupole ground-state spectrum as well as the lowest negative-parity spectrum in the 156Gd nucleus. The interband and intraband B(E1) and B(E2) reduced transition probabilities are also calculated within those bands and compared with recent experimental results for this nucleus. Such a collective approach is helpful in searching for the fingerprints of possible high-rank symmetries (e.g., octahedral and tetrahedral) in nuclear collective bands.

  1. Simplified Models for Dark Matter Face their Consistent Completions

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, Dorival [Pittsburgh U.]; Machado, Pedro N. [Madrid, IFT]; No, Jose Miguel [Sussex U.]

    2016-11-14

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent $SU(2)_{\mathrm{L}} \times U(1)_{\mathrm{Y}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at the 13 TeV LHC.

  2. Simplified Models for Dark Matter Face their Consistent Completions

    CERN Document Server

    Goncalves, Dorival; No, Jose Miguel

    2016-01-01

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent $SU(2)_{\mathrm{L}} \times U(1)_{\mathrm{Y}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at the 13 TeV LHC.

  3. Consistent estimators in random censorship semiparametric models

    Institute of Scientific and Technical Information of China (English)

    Qi-Hua Wang

    1996-01-01

    For the fixed design regression model, when the Y_i are randomly censored on the right, estimators of the unknown parameter and the regression function g from censored observations are defined in two cases, where the censoring distribution is known and unknown, respectively. Moreover, sufficient conditions are established under which these estimators are strongly consistent and pth (p>2) mean consistent.

  4. Pressure-Balance Consistency in Magnetospheric Modelling

    Institute of Scientific and Technical Information of China (English)

    Yong-Deng Xiao; Chu-Xin Chen

    2003-01-01

    There have been many magnetic field models for geophysical and astrophysical bodies. These theoretical or empirical models represent reality very well in some cases, but in other cases they may be far from reality. We argue that these models will become more reasonable if they are modified by some coordinate transformations. In order to demonstrate the transformation, we use this method to resolve the "pressure-balance inconsistency" problem that occurs when plasma is transported from the outer plasma sheet of the Earth into the inner plasma sheet.

  5. Consistent Partial Least Squares Path Modeling

    NARCIS (Netherlands)

    Dijkstra, Theo K.; Henseler, Jörg

    2015-01-01

    This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the

  6. Short Polymer Modeling using Self-Consistent Integral Equation Method

    Science.gov (United States)

    Kim, Yeongyoon; Park, So Jung; Kim, Jaeup

    2014-03-01

    Self-consistent field theory (SCFT) is an excellent mean-field theoretical tool for predicting the morphologies of polymer-based materials. In the standard SCFT, the polymer is modeled as a Gaussian chain, which is suitable for a polymer of high molecular weight, but not necessarily for a polymer of low molecular weight. In order to overcome this limitation, Matsen and coworkers have recently developed SCFT of discrete polymer chains in which one polymer is modeled as a finite number of beads joined by freely jointed bonds of fixed length. In their model, the diffusion equation of the canonical SCFT is replaced by an iterative integral equation, and the full spectral method is used for the production of the phase diagram of short block copolymers. In this study, for the finite-length-chain problem, we apply the pseudospectral method, which is the most efficient numerical scheme for solving the iterative integral equation. We use this new numerical method to investigate two different types of polymer bonds: the spring-beads model and the freely-jointed chain model. By comparing these results with those of the Gaussian chain model, the influences on the morphologies of diblock copolymer melts due to the chain length and the type of bonds are examined. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (no. 2012R1A1A2043633).
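
    A minimal 1D sketch of the pseudospectral idea for discrete chains (our own illustration, not the authors' code): each bead-to-bead propagator step is a convolution with a bond transition function, evaluated with FFTs, followed by multiplication with the local Boltzmann weight. A Gaussian bond (spring-beads model) is assumed.

```python
import numpy as np

# 1D periodic grid and an arbitrary external field w(x).
L, M, n_beads, b = 10.0, 256, 20, 0.5
dx = L / M
x = np.arange(M) * dx
w = 0.5 * np.cos(2 * np.pi * x / L)

# Gaussian bond transition function, periodic and normalized on the grid.
g = np.exp(-np.minimum(x, L - x) ** 2 / (2 * b**2))
g /= g.sum() * dx
g_hat = np.fft.fft(g) * dx            # transform once, reuse every step

# Iterative integral equation: q_{j+1}(x) = exp(-w(x)) * (g * q_j)(x).
q = np.exp(-w)                        # propagator for the first bead
for _ in range(n_beads - 1):
    q = np.exp(-w) * np.fft.ifft(g_hat * np.fft.fft(q)).real

Q = q.sum() * dx / L                  # single-chain partition function per volume
print(f"Q = {Q:.6f}")
```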

  7. An Extended Model Driven Framework for End-to-End Consistent Model Transformation

    Directory of Open Access Journals (Sweden)

    Mr. G. Ramesh

    2016-08-01

    Model Driven Development (MDD) results in quick transformation from models to corresponding systems. Forward engineering features of modelling tools can help in generating source code from models. To build a robust system it is important to have consistency checking in the design models, and the same between the design model and the transformed implementation. Our framework, named Extensible Real Time Software Design Inconsistency Checker (XRTSDIC) and proposed in our previous papers, supports consistency checking in design models. This paper focuses on automatic model transformation. An algorithm and transformation rules for model transformation from UML class diagrams to ERD and SQL are proposed. Model transformation bestows many advantages such as reducing the cost of development, improving quality, enhancing productivity and leveraging customer satisfaction. The proposed framework has been enhanced to ensure that transformed implementations conform to their model counterparts, besides checking end-to-end consistency.

  8. CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS

    Institute of Scientific and Technical Information of China (English)

    Liu Jixue; Chen Xiru

    2005-01-01

    Consistency of the LS estimate of the simple linear EV model is studied. It is shown that under some common assumptions of the model, weak and strong consistency of the estimate are equivalent, but this is not so for quadratic-mean consistency.

  9. Logical consistency and sum-constrained linear models

    NARCIS (Netherlands)

    van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.

    2006-01-01

    A topic that has received quite some attention in the seventies and eighties is logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a

  10. MetaBar - a tool for consistent contextual data acquisition and standards compliant submission

    Directory of Open Access Journals (Sweden)

    Kottmann Renzo

    2010-06-01

    Background: Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude, generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. Results: MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft® Excel® spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). Conclusion: The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as...

  11. Comparison of two different modelling tools

    DEFF Research Database (Denmark)

    Brix, Wiebke; Elmegaard, Brian

    2009-01-01

    In this paper a test case is solved using two different modelling tools, Engineering Equation Solver (EES) and WinDali, in order to compare the tools. The system of equations solved is a static model of an evaporator used for refrigeration. The evaporator consists of two parallel channels, and it is investigated how a non-uniform airflow influences the refrigerant mass flow rate distribution and the total cooling capacity of the heat exchanger. It is shown that the cooling capacity decreases significantly with increasing maldistribution of the airflow. Comparing the two simulation tools it is found...

  12. Graphical Modeling Language Tool

    NARCIS (Netherlands)

    Rumnit, M.

    2003-01-01

    The group of the faculty EE-Math-CS of the University of Twente is developing a graphical modeling language for specifying concurrency in software design. This graphical modeling language has a mathematical background based on the theory of CSP. This language contains the power to create trustworth...

  13. Tools for Model Evaluation

    DEFF Research Database (Denmark)

    Olesen, H. R.

    1998-01-01

    Proceedings of the Twenty-Second NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held June 6-10, 1997, in Clermont-Ferrand, France.

  14. The Self-Consistency Model of Subjective Confidence

    Science.gov (United States)

    Koriat, Asher

    2012-01-01

    How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen…

  15. Model Checking Data Consistency for Cache Coherence Protocols

    Institute of Scientific and Technical Information of China (English)

    Hong Pan; Hui-Min Lin; Yi Lv

    2006-01-01

    A method for automatic verification of cache coherence protocols is presented, in which cache coherence protocols are modeled as concurrent value-passing processes, and control and data consistency requirements are described as formulas in first-order μ-calculus. A model checker is employed to check if the protocol under investigation satisfies the required properties. Using this method a data consistency error has been revealed in a well-known cache coherence protocol. The error has been corrected, and the revised protocol has been shown free from data consistency errors for any data domain size, by appealing to the data independence technique.
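
    To give the flavor of such state-space exploration (a toy sketch of our own, far simpler than value-passing processes and first-order μ-calculus), a breadth-first search over a two-cache MSI-like protocol can check the single-writer data-consistency invariant in a few lines:

```python
from collections import deque

# Toy two-cache protocol: per-cache states I(nvalid), S(hared), M(odified).
def successors(state):
    for i in (0, 1):
        other = 1 - i
        # Read: gain S; a modified copy elsewhere is downgraded to S.
        t = list(state); t[i] = "S"
        if t[other] == "M": t[other] = "S"
        yield tuple(t)
        # Write: gain M; the other copy is invalidated.
        t = list(state); t[i] = "M"; t[other] = "I"
        yield tuple(t)
        # Eviction: drop back to I.
        t = list(state); t[i] = "I"
        yield tuple(t)

def coherent(state):
    # Single-writer invariant: an M copy may not coexist with another valid copy.
    return not ("M" in state and sum(c != "I" for c in state) > 1)

frontier, seen = deque([("I", "I")]), {("I", "I")}
while frontier:                        # breadth-first reachability analysis
    state = frontier.popleft()
    assert coherent(state), f"coherence violated in {state}"
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
print(f"verified {len(seen)} reachable states")
```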

  16. Standard Model Vacuum Stability and Weyl Consistency Conditions

    DEFF Research Database (Denmark)

    Antipin, Oleg; Gillioz, Marc; Krog, Jens;

    2013-01-01

    At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions.

  17. Quantum monadology: a consistent world model for consciousness and physics.

    Science.gov (United States)

    Nakagomi, Teruaki

    2003-04-01

    The NL world model presented in the previous paper is embodied by use of relativistic quantum mechanics, which reveals the significance of the reduction of quantum states and the relativity principle, and locates consciousness and the concept of flowing time consistently in physics. This model provides a consistent framework to resolve apparent incompatibilities between consciousness (as our interior experience) and matter (as described by quantum mechanics and relativity theory). Does matter have an inside? What is the flowing time now? Does physics allow indeterminism by volition? The problem of quantum measurement is also resolved in this model.

  18. Consistency and Reconciliation Model In Regional Development Planning

    Directory of Open Access Journals (Sweden)

    Dina Suryawati

    2016-10-01

    The aim of this study was to identify the problems in and determine a conceptual model of regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models that were successfully constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that is well-integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in the region involves both a technocratic (top-down) system and a participatory (bottom-up) system. The two must be balanced and must not overlap or dominate each other. Keywords: regional, development, planning, consistency, reconciliation.

  19. Model-Consistent Sparse Estimation through the Bootstrap

    CERN Document Server

    Bach, Francis

    2009-01-01

    We consider the least-square linear regression problem with regularization by the $\ell^1$-norm, a problem usually referred to as the Lasso. In this paper, we first present a detailed asymptotic analysis of model consistency of the Lasso in low-dimensional settings. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection. For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection procedure, referred to as the Bolasso, is extended to high-dimensional settings by a provably consistent two-step procedure.

  20. Consistency analysis of a nonbirefringent Lorentz-violating planar model

    CERN Document Server

    Casana, Rodolfo; Moreira, Roemir P M

    2011-01-01

    In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the analysis of the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor $\kappa_{\mu\nu}$...

  1. Multiscale Parameter Regionalization for consistent global water resources modelling

    Science.gov (United States)

    Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.

    2017-04-01

    Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale-independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale-independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation was carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer-function parameters across scales, and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that is consistent across scales, and also allows validation of discharge for smaller catchments, even with calibration at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other...
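
    An illustration of the MPR principle under simplifying assumptions of our own (a made-up transfer function and arithmetic-mean upscaling; not the PCR-GLOBWB implementation): one global transfer function maps fine-scale predictors to parameters, and only its coefficients are calibrated, so the same coefficients produce consistent parameter fields at any resolution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1 km field of a predictor (e.g., sand fraction) on a 50x50 grid.
sand = rng.uniform(0.1, 0.9, (50, 50))

# Transfer function evaluated at the finest scale; (a, b) are the only
# calibrated quantities and remain valid at every resolution.
a, b = 0.02, 0.15
k_fine = a + b * sand**2              # e.g., a conductivity-like parameter

def upscale(field, factor):
    """Block-average a fine parameter field to a coarser grid (arithmetic
    mean here; MPR chooses the upscaling operator per parameter)."""
    n = field.shape[0] // factor
    return field.reshape(n, factor, n, factor).mean(axis=(1, 3))

k_50km = upscale(k_fine, 50)          # one 50 km cell
k_10km = upscale(k_fine, 10)          # 5x5 cells of 10 km
print(k_50km.ravel()[0], k_10km.mean())   # identical by construction
```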

  2. Marky: a tool supporting annotation consistency in multi-user and iterative document annotation projects.

    Science.gov (United States)

    Pérez-Pérez, Martín; Glez-Peña, Daniel; Fdez-Riverola, Florentino; Lourenço, Anália

    2015-02-01

    Document annotation is a key task in the development of Text Mining methods and applications. High quality annotated corpora are invaluable, but their preparation requires a considerable amount of resources and time. Although the existing annotation tools offer good user interaction interfaces to domain experts, project management and quality control abilities are still limited. Therefore, the current work introduces Marky, a new Web-based document annotation tool equipped to manage multi-user and iterative projects, and to evaluate annotation quality throughout the project life cycle. At the core, Marky is a Web application based on the open source CakePHP framework. User interface relies on HTML5 and CSS3 technologies. Rangy library assists in browser-independent implementation of common DOM range and selection tasks, and Ajax and JQuery technologies are used to enhance user-system interaction. Marky grants solid management of inter- and intra-annotator work. Most notably, its annotation tracking system supports systematic and on-demand agreement analysis and annotation amendment. Each annotator may work over documents as usual, but all the annotations made are saved by the tracking system and may be further compared. So, the project administrator is able to evaluate annotation consistency among annotators and across rounds of annotation, while annotators are able to reject or amend subsets of annotations made in previous rounds. As a side effect, the tracking system minimises resource and time consumption. Marky is a novel environment for managing multi-user and iterative document annotation projects. Compared to other tools, Marky offers a similar visually intuitive annotation experience while providing unique means to minimise annotation effort and enforce annotation quality, and therefore corpus consistency. Marky is freely available for non-commercial use at http://sing.ei.uvigo.es/marky. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
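
    One concrete way to quantify the agreement analysis such a tracking system supports (a minimal sketch of our own; Marky's internal implementation is not shown here) is Cohen's kappa over two annotators' token labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[l] * counts_b[l] for l in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Token-level entity labels from two annotation rounds (illustrative data).
round1 = ["O", "GENE", "GENE", "O", "DRUG", "O", "O", "GENE"]
round2 = ["O", "GENE", "O",    "O", "DRUG", "O", "O", "GENE"]
print(f"kappa = {cohens_kappa(round1, round2):.2f}")
```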

  3. Emergent Dynamics of a Thermodynamically Consistent Particle Model

    Science.gov (United States)

    Ha, Seung-Yeal; Ruggeri, Tommaso

    2017-03-01

    We present a thermodynamically consistent particle (TCP) model motivated by the theory of multi-temperature mixture of fluids in the case of spatially homogeneous processes. The proposed model incorporates the Cucker-Smale (C-S) type flocking model as its isothermal approximation. However, it is more complex than the C-S model, because the mutual interactions are not only "mechanical" but are also affected by a "temperature effect", as individual particles may exhibit distinct internal energies. We develop a framework for asymptotic weak and strong flocking in the context of the proposed model.
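
    For reference, the isothermal limit named above, the Cucker-Smale model, steers each particle's velocity toward its neighbors' with a distance-decaying weight; a forward-Euler sketch (parameter choices are our own) shows the velocities aligning:

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, steps = 30, 0.05, 400

x = rng.uniform(-1, 1, (N, 2))    # positions
v = rng.normal(0, 1, (N, 2))      # velocities

def psi(r, beta=0.3):
    """Cucker-Smale communication weight, decaying with distance."""
    return 1.0 / (1.0 + r**2) ** beta

for _ in range(steps):
    dist = np.linalg.norm(x[:, None] - x[None, :], axis=-1)  # pairwise distances
    w = psi(dist)
    # dv_i/dt = (1/N) * sum_j psi(|x_j - x_i|) * (v_j - v_i)
    dv = (w[:, :, None] * (v[None, :, :] - v[:, None, :])).mean(axis=1)
    x, v = x + dt * v, v + dt * dv

print("velocity spread:", np.linalg.norm(v - v.mean(axis=0)))  # near 0: flocking
```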

  4. Viscoelastic models with consistent hypoelasticity for fluids undergoing finite deformations

    Science.gov (United States)

    Altmeyer, Guillaume; Rouhaud, Emmanuelle; Panicaud, Benoit; Roos, Arjen; Kerner, Richard; Wang, Mingchuan

    2015-08-01

    Constitutive models of viscoelastic fluids are written with rate-form equations when considering finite deformations. Trying to extend the approach used to model these effects from an infinitesimal deformation to a finite transformation framework, one has to ensure that the tensors and their rates are indifferent with respect to the change of observer and to the superposition with rigid body motions. Frame-indifference problems can be solved with the use of an objective stress transport, but the choice of such an operator is not obvious, and the use of certain transports usually leads to a physically inconsistent formulation of hypoelasticity. The aim of this paper is to present a consistent formulation of hypoelasticity and to combine it with a viscosity model to construct a consistent viscoelastic model. In particular, the hypoelastic model is reversible.

  5. MultiSIMNRA: A computational tool for self-consistent ion beam analysis using SIMNRA

    Energy Technology Data Exchange (ETDEWEB)

    Silva, T.F., E-mail: tfsilva@if.usp.br [Instituto de Física da Universidade de São Paulo, Rua do Matão, trav. R 187, 05508-090 São Paulo (Brazil); Rodrigues, C.L. [Instituto de Física da Universidade de São Paulo, Rua do Matão, trav. R 187, 05508-090 São Paulo (Brazil); Mayer, M. [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, D-85748 Garching (Germany); Moro, M.V.; Trindade, G.F.; Aguirre, F.R.; Added, N.; Rizzutto, M.A.; Tabacniks, M.H. [Instituto de Física da Universidade de São Paulo, Rua do Matão, trav. R 187, 05508-090 São Paulo (Brazil)

    2016-03-15

    Highlights: • MultiSIMNRA enables the self-consistent analysis of multiple ion beam techniques. • Self-consistent analysis enables unequivocal and reliable modeling of the sample. • Four different computational algorithms are available for model optimization. • The definition of constraints makes it possible to include prior knowledge in the analysis. Abstract: SIMNRA is widely adopted by the scientific community of ion beam analysis for the simulation and interpretation of nuclear scattering techniques for material characterization. Taking advantage of its recognized reliability and the quality of its simulations, we developed a computer program that uses multiple parallel sessions of SIMNRA to perform self-consistent analysis of data obtained by different ion beam techniques or in different experimental conditions of a given sample. In this paper, we present a result using MultiSIMNRA for a self-consistent multi-elemental analysis of a thin film produced by magnetron sputtering. The results demonstrate the potentialities of the self-consistent analysis and its feasibility using MultiSIMNRA.

  6. Bolasso: model consistent Lasso estimation through the bootstrap

    CERN Document Server

    Bach, Francis

    2008-01-01

    We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning rep...
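
    A compact rendering of the Bolasso procedure described above (our own scikit-learn sketch with synthetic data; the paper's experiments are not reproduced here): run the Lasso on bootstrap resamples and keep only the variables selected every single time.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, p = 200, 10
X = rng.normal(size=(n, p))
w_true = np.array([1.5, -2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ w_true + rng.normal(0, 0.5, n)

def bolasso_support(X, y, n_boot=128, alpha=0.05):
    support = None
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))              # bootstrap resample
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        selected = set(np.flatnonzero(np.abs(coef) > 1e-10))
        support = selected if support is None else support & selected
    return sorted(support)

print(bolasso_support(X, y))   # -> [0, 1, 4], the truly active variables
```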

  7. Detection and quantification of flow consistency in business process models

    DEFF Research Database (Denmark)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel

    2017-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics...
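
    One plausible way to turn flow-direction consistency into a number (a sketch of our own, not necessarily one of the paper's three metrics) is the share of edges whose layout points in the dominant left-to-right direction:

```python
def flow_consistency(positions, edges):
    """Fraction of edges drawn left-to-right, given node center coordinates."""
    forward = sum(positions[src][0] < positions[dst][0] for src, dst in edges)
    return forward / len(edges)

# Hypothetical BPMN-like layout: node -> (x, y) canvas coordinates.
positions = {"start": (0, 0), "check": (100, 0), "approve": (200, -40),
             "reject": (200, 40), "end": (300, 0)}
edges = [("start", "check"), ("check", "approve"), ("check", "reject"),
         ("approve", "end"), ("reject", "end"), ("end", "check")]  # one back-edge

print(f"{flow_consistency(positions, edges):.2f}")  # 5/6 ~ 0.83
```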

  8. A consistent transported PDF model for treating differential molecular diffusion

    Science.gov (United States)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  9. Towards consistent nuclear models and comprehensive nuclear data evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Bouland, O [Los Alamos National Laboratory]; Hale, G M [Los Alamos National Laboratory]; Lynn, J E [Los Alamos National Laboratory]; Talou, P [Los Alamos National Laboratory]; Bernard, D [France]; Litaize, O [France]; Noguere, G [France]; De Saint Jean, C [France]; Serot, O [France]

    2010-01-01

    The essence of this paper is to highlight the consistency achieved nowadays in nuclear data and uncertainty assessments in terms of compound nucleus reaction theory, from the neutron separation energy to the continuum. By making the theories used in the resolved resonance (R-matrix theory), unresolved resonance (average R-matrix theory) and continuum (optical model) ranges continuous through a generalization of the so-called SPRT method, consistent average parameters are extracted from observed measurements, and the associated covariances are calculated over the whole energy range. This paper recalls, in particular, recent advances in fission cross section calculations and suggests some hints for future developments.

  10. Towards a self-consistent dynamical nuclear model

    Science.gov (United States)

    Roca-Maza, X.; Niu, Y. F.; Colò, G.; Bortignon, P. F.

    2017-04-01

    Density functional theory (DFT) is a powerful and accurate tool, exploited in nuclear physics to investigate the ground-state and some of the collective properties of nuclei along the whole nuclear chart. Models based on DFT are not, however, suitable for the description of single-particle dynamics in nuclei. Following the field theoretical approach by A Bohr and B R Mottelson to describe nuclear interactions between single-particle and vibrational degrees of freedom, we have taken important steps towards the building of a microscopic dynamic nuclear model. In connection with this, one important issue that needs to be better understood is the renormalization of the effective interaction in the particle-vibration approach. One possible way to renormalize the interaction is by the so-called subtraction method. In this contribution, we will implement the subtraction method in our model for the first time and study its consequences.

  11. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically... options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying.

  12. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically... options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying.

  13. Consistency Across Standards or Standards in a New Business Model

    Science.gov (United States)

    Russo, Dane M.

    2010-01-01

    Presentation topics include: standards in a changing business model, how the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, the need for a balance between prescriptive and general standards, enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides include: NASA Procedural Requirements 8705.2B, which identifies human rating standards and requirements; draft health and medical standards for human rating; what's been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of governmental and non-governmental human factors standards.

  14. A detailed self-consistent vertical Milky Way disc model

    Directory of Open Access Journals (Sweden)

    Gao S.

    2012-02-01

    We present a self-consistent vertical disc model of the thin and thick disc in the solar vicinity. The model is optimized to fit the local kinematics of main sequence stars by varying the star formation history and the dynamical heating function. The star formation history and the dynamical heating function are not uniquely determined by the local kinematics alone. For four different pairs of input functions we calculate star count predictions at high galactic latitude as a function of colour. The comparison with North Galactic Pole data of SDSS/SEGUE leads to significant constraints on the local star formation history.

  15. Radio data and synchrotron emission in consistent cosmic ray models

    CERN Document Server

    Bringmann, Torsten; Lineros, Roberto A

    2011-01-01

    We consider the propagation of electrons in phenomenological two-zone diffusion models compatible with cosmic-ray nuclear data and compute the diffuse synchrotron emission resulting from their interaction with galactic magnetic fields. We find models in agreement not only with cosmic ray data but also with radio surveys at essentially all frequencies. Requiring such a globally consistent description strongly disfavors both a very large (L>15 kpc) and small (L<1 kpc) effective size of the diffusive halo. This has profound implications for, e.g., indirect dark matter searches.

  16. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset.

  17. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    2013-01-01

    We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset.

  18. Self consistent modeling of accretion columns in accretion powered pulsars

    Science.gov (United States)

    Falkner, Sebastian; Schwarm, Fritz-Walter; Wolff, Michael Thomas; Becker, Peter A.; Wilms, Joern

    2016-04-01

    We combine three physical models to self-consistently derive the observed flux and pulse profiles of neutron stars' accretion columns. From the thermal and bulk Comptonization model of Becker & Wolff (2006) we obtain seed photon continua produced in the dense inner regions of the accretion column. In a thin outer layer these seed continua are imprinted with cyclotron resonant scattering features calculated using Monte Carlo simulations. The observed phase- and energy-dependent flux corresponding to these emission profiles is then calculated, taking relativistic light bending into account. We present simulated pulse profiles and the predicted dependence of the observable X-ray spectrum on pulse phase.

  19. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset.

  20. A consistent collinear triad approximation for operational wave models

    Science.gov (United States)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.
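
    For reference, the triad interactions referred to above exchange energy between wave components whose frequencies and wavenumber vectors satisfy (near-)resonance conditions of the form

    $$\omega_1 + \omega_2 = \omega_3, \qquad \mathbf{k}_1 + \mathbf{k}_2 = \mathbf{k}_3 .$$

    A collinear approximation restricts all $\mathbf{k}_i$ to a common direction, which eliminates the directional convolution; the modification proposed in the abstract concerns doing this in a consistent, bounded way.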

  1. Warped 5D Standard Model Consistent with EWPT

    CERN Document Server

    Cabrer, Joan A; Quiros, Mariano

    2011-01-01

    For a 5D Standard Model propagating in an AdS background with an IR localized Higgs, compatibility of bulk KK gauge modes with EWPT yields a phenomenologically unappealing KK spectrum (m > 12.5 TeV) and leads to a "little hierarchy problem". For a bulk Higgs the solution to the hierarchy problem reduces the previous bound only by sqrt(3). As a way out, models with an enhanced bulk gauge symmetry SU(2)_R x U(1)_(B-L) were proposed. In this note we describe a much simpler (5D Standard) Model, where introduction of an enlarged gauge symmetry is no longer required. It is based on a warped gravitational background which departs from AdS at the IR brane and a bulk propagating Higgs. The model is consistent with EWPT for a range of KK masses within the LHC reach.

  2. Consistent regularization and renormalization in models with inhomogeneous phases

    CERN Document Server

    Adhikari, Prabal

    2016-01-01

    In many models in condensed matter physics and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different consistent ways of regularizing and renormalizing quantum fluctuations, focusing on a symmetric energy cutoff scheme and dimensional regularization. We apply these techniques calculating the vacuum energy in the NJL model in 1+1 dimensions in the large-$N_c$ limit and the 3+1 dimensional quark-meson model in the mean-field approximation both for a one-dimensional chiral-density wave.

  3. Consistent regularization and renormalization in models with inhomogeneous phases

    Science.gov (United States)

    Adhikari, Prabal; Andersen, Jens O.

    2017-02-01

    In many models in condensed matter and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different ways of consistently regularizing and renormalizing quantum fluctuations, focusing on momentum cutoff, symmetric energy cutoff, and dimensional regularization. We apply these techniques calculating the vacuum energy in the Nambu-Jona-Lasinio model in 1+1 dimensions in the large-$N_c$ limit and in the 3+1 dimensional quark-meson model in the mean-field approximation, both for a one-dimensional chiral-density wave.

  4. Self-consistent triaxial de Zeeuw-Carollo Models

    CERN Document Server

    Thakur, Parijat; Das, Mousumi; Chakraborty, D K; Ann, H B

    2007-01-01

    We use the usual method of Schwarzschild to construct self-consistent solutions for the triaxial de Zeeuw & Carollo (1996) models with central density cusps. ZC96 models are triaxial generalisations of the spherical $\gamma$-models of Dehnen, whose densities vary as $r^{-\gamma}$ near the center and $r^{-4}$ at large radii and hence possess a central density core for $\gamma=0$ and cusps for $\gamma > 0$. We consider four triaxial models from ZC96, two prolate triaxials: $(p, q) = (0.65, 0.60)$ with $\gamma = 1.0$ and 1.5, and two oblate triaxials: $(p, q) = (0.95, 0.60)$ with $\gamma = 1.0$ and 1.5. We compute 4500 orbits in each model for time periods of $10^{5} T_{D}$. We find that a large fraction of the orbits in each model are stochastic, by means of their nonzero Liapunov exponents. The stochastic orbits in each model can sustain regular shapes for $\sim 10^{3} T_{D}$ or longer, which suggests that they diffuse slowly through their allowed phase-space. Except for the oblate triaxial models with $\gamma ...
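
    The core of Schwarzschild's method referenced above is a constrained linear inversion: find non-negative orbit weights whose summed density contributions reproduce the model's mass in every spatial cell. A minimal sketch with a toy orbit library (all matrix entries are random placeholders, not the ZC96 setup):

```python
# A sketch of the orbit-superposition step, assuming a toy orbit library:
# find non-negative weights w so that B @ w reproduces the target cell
# masses (B's entries would come from time-averaged orbit integrations).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_cells, n_orbits = 50, 200
B = rng.random((n_cells, n_orbits))           # mass orbit j deposits in cell i (toy)
target = B @ rng.exponential(1.0, n_orbits)   # cell masses of the target model

w, residual = nnls(B, target)                 # non-negativity enforced by nnls
print(residual)                               # ~0 => a self-consistent model exists
```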

  5. Are paleoclimate model ensembles consistent with the MARGO data synthesis?

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2011-03-01

    We investigate the consistency of various ensembles of model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day; however, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.

  6. Are paleoclimate model ensembles consistent with the MARGO data synthesis?

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2011-08-01

    We investigate the consistency of various ensembles of climate model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day. However, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
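
    The "simple tests of reliability" mentioned in these two records are typically rank-histogram tests. A minimal sketch, with synthetic data, of the key point that observational uncertainty must be folded in before ranking:

```python
# A sketch of a rank-histogram reliability check, assuming synthetic data;
# the MARGO analysis is more involved but hinges on the same point.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(1)
n_sites, n_members = 300, 9
truth = rng.normal(0.0, 1.0, n_sites)
ensemble = truth[:, None] + rng.normal(0.0, 1.0, (n_sites, n_members))
obs_error = 0.5
obs = truth + rng.normal(0.0, obs_error, n_sites)

# Key point from the abstract: add the observational uncertainty to the
# ensemble members before ranking, or a reliable ensemble looks too narrow.
perturbed = ensemble + rng.normal(0.0, obs_error, ensemble.shape)
ranks = (perturbed < obs[:, None]).sum(axis=1)        # rank of obs: 0..n_members
counts = np.bincount(ranks, minlength=n_members + 1)
stat, p_value = chisquare(counts)                     # flat histogram => reliable
print(counts, round(p_value, 3))
```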

  7. Consistency analysis of a nonbirefringent Lorentz-violating planar model

    Energy Technology Data Exchange (ETDEWEB)

    Casana, Rodolfo; Ferreira, Manoel M.; Moreira, Roemir P.M. [Universidade Federal do Maranhao (UFMA), Departamento de Fisica, Sao Luis, MA (Brazil)

    2012-07-15

    In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the analysis of the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, both affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor κ_μν. The propagator of the gauge field is explicitly evaluated and expressed in terms of linearly independent symmetric tensors, presenting only one physical mode. The same holds for the scalar propagator. A consistency analysis is performed based on the poles of the propagators. The isotropic parity-even sector is stable, causal and unitary for 0 ≤ κ_00 < 1. On the other hand, the anisotropic sector is stable and unitary but in general noncausal. Finally, it is shown that this planar model interacting with a λφ⁴-Higgs field supports compact-like vortex configurations. (orig.)

  8. Consistency analysis of a nonbirefringent Lorentz-violating planar model

    Science.gov (United States)

    Casana, Rodolfo; Ferreira, Manoel M.; Moreira, Roemir P. M.

    2012-07-01

    In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the analysis of the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, both affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor κ_μν. The propagator of the gauge field is explicitly evaluated and expressed in terms of linearly independent symmetric tensors, presenting only one physical mode. The same holds for the scalar propagator. A consistency analysis is performed based on the poles of the propagators. The isotropic parity-even sector is stable, causal and unitary for 0 ≤ κ_00 < 1. On the other hand, the anisotropic sector is stable and unitary but in general noncausal. Finally, it is shown that this planar model interacting with a λ|φ|⁴-Higgs field supports compact-like vortex configurations.

  9. Integrating a Decision Management Tool with UML Modeling Tools

    DEFF Research Database (Denmark)

    Könemann, Patrick

    Numerous design decisions are made while developing software systems, which influence the architecture of these systems as well as subsequent decisions. A number of decision management tools already exist for capturing, documenting, and maintaining design decisions, but also for guiding developers through the development process. In this report, we propose an integration of a decision management and a UML-based modeling tool, based on use cases we distill from a case study: the modeling tool shall show all decisions related to a model and allow its users to extend or update them; the decision management tool shall trigger the modeling tool to realize design decisions in the models. We define tool-independent concepts and architecture building blocks supporting these use cases and present how they can be implemented in the IBM Rational Software Modeler and Architectural Decision Knowledge Wiki. This seamless...

  10. Self-Consistent Modeling of Reionization in Cosmological Hydrodynamical Simulations

    CERN Document Server

    Oñorbe, Jose; Lukić, Zarija

    2016-01-01

    The ultraviolet background (UVB) emitted by quasars and galaxies governs the ionization and thermal state of the intergalactic medium (IGM), regulates the formation of high-redshift galaxies, and is thus a key quantity for modeling cosmic reionization. The vast majority of cosmological hydrodynamical simulations implement the UVB via a set of spatially uniform photoionization and photoheating rates derived from UVB synthesis models. We show that simulations using canonical UVB rates reionize, and perhaps more importantly, spuriously heat the IGM much earlier (z ~ 15) than they should. This problem arises because at z > 6, where observational constraints are non-existent, the UVB amplitude is far too high. We introduce a new methodology to remedy this issue and generate self-consistent photoionization and photoheating rates to model any chosen reionization history. Following this approach, we run a suite of hydrodynamical simulations of different reionization scenarios and explore the impact of the timing of ...

  11. Consistent Static Models of Local Thermospheric Composition Profiles

    CERN Document Server

    Picone, J M; Drob, D P

    2016-01-01

    The authors investigate the ideal, nondriven multifluid equations of motion to identify consistent (i.e., truly stationary), mechanically static models for composition profiles within the thermosphere. These physically faithful functions are necessary to define the parametric core of future empirical atmospheric models and climatologies. Based on the strength of interspecies coupling, the thermosphere has three altitude regions: (1) the lower thermosphere (herein z below ~200 km), in which the species flows are strongly coupled; (2) the upper thermosphere (z above ~200 km), in which the species flows are approximately uncoupled; and (3) a transition region in between, where the effective species particle mass and the effective species vertical flow interpolate between the solutions for the upper and lower thermosphere. We place this view in the context of current terminology within the community, i.e., a fully mixed (lower) region and an upper region in diffusive equilibrium (DE). The latter condition, DE, currently used in empirical composition models, does not represent a truly static composition profile ...
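
    For orientation, the diffusive-equilibrium (DE) profile discussed above has the standard form (neglecting thermal diffusion)

    $$n_i(z) = n_i(z_0)\,\frac{T(z_0)}{T(z)}\,\exp\!\left(-\int_{z_0}^{z}\frac{\mathrm{d}z'}{H_i(z')}\right), \qquad H_i = \frac{k_B T}{m_i g},$$

    where $H_i$ is the scale height of species $i$; the record's argument is that this familiar form is not a truly static solution of the nondriven multifluid equations.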

  12. Thermodynamically consistent model of brittle oil shales under overpressure

    Science.gov (United States)

    Izvekov, Oleg

    2016-04-01

    The concept of dual porosity is a common way to simulate oil shale production. In the frame of this concept, the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during pressure reduction in production. In this work a new thermodynamically consistent model which generalizes the dual porosity model is proposed. Its particularities are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to the simulation of secondary fracture development in the low-permeability matrix. Slip along natural fractures is simulated in the frame of plasticity theory with the Drucker-Prager criterion.

  13. A minimal model of self-consistent partial synchrony

    Science.gov (United States)

    Clusella, Pau; Politi, Antonio; Rosenblum, Michael

    2016-09-01

    We show that self-consistent partial synchrony in globally coupled oscillatory ensembles is a general phenomenon. We analyze in detail the appearance and stability properties of this state in possibly the simplest setup, a biharmonic Kuramoto-Daido phase model, and demonstrate the effect in limit-cycle relaxational Rayleigh oscillators. Such a regime extends the notion of a splay state from a uniform distribution of phases to an oscillating one. Suitable collective observables, such as the Kuramoto order parameter, allow detecting the presence of an inhomogeneous distribution. The characteristic and most peculiar property of self-consistent partial synchrony is the difference between the frequency of single units and that of the macroscopic field.
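
    A minimal numerical sketch of a biharmonic phase model of the kind mentioned above; the coupling values are illustrative assumptions, not taken from the paper.

```python
# A sketch of a biharmonic Kuramoto-Daido-type ensemble using mean-field
# order parameters; parameters are illustrative, not the paper's.
import numpy as np

N, dt, steps = 2000, 0.01, 20000
a, b = 1.0, -0.6                          # first/second-harmonic couplings
rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2.0 * np.pi, N)

for _ in range(steps):
    Z1 = np.exp(1j * theta).mean()        # Kuramoto order parameter
    Z2 = np.exp(2j * theta).mean()        # second-harmonic order parameter
    drift = (a * np.imag(Z1 * np.exp(-1j * theta))
             + b * np.imag(Z2 * np.exp(-2j * theta)))
    theta = (theta + dt * drift) % (2.0 * np.pi)

# |Z1| strictly between 0 and 1, with single units rotating at a frequency
# different from the mean field, is the signature of partial synchrony.
print(abs(Z1), abs(Z2))
```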

  14. Self consistent tight binding model for dissociable water

    Science.gov (United States)

    Lin, You; Wynveen, Aaron; Halley, J. W.; Curtiss, L. A.; Redfern, P. C.

    2012-05-01

    We report results of the development of a self-consistent tight binding model for water. The model explicitly describes the electrons of the liquid self-consistently, allows dissociation of the water, and permits fast direct-dynamics molecular dynamics calculations of the fluid properties. It is parameterized by fitting to first principles calculations on water monomers, dimers, and trimers. We report calculated radial distribution functions of the bulk liquid, a phase diagram, and the structure of solvated protons within the model, as well as the AC conductivity of a system of 96 water molecules of which one is dissociated. Structural properties and the phase diagram are in good agreement with experiment and first principles calculations. The estimated DC conductivity of a computational sample containing a dissociated water molecule was an order of magnitude larger than that reported from experiment, though the calculated ratio of proton to hydroxyl contributions to the conductivity is very close to the experimental value. The conductivity results suggest a Grotthuss-like mechanism for the proton component of the conductivity.

  15. Mean-field theory and self-consistent dynamo modeling

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizawa, Akira; Yokoi, Nobumitsu [Tokyo Univ. (Japan). Inst. of Industrial Science; Itoh, Sanae-I [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2001-12-01

    Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)
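
    Schematically, the dynamo mechanism described above enters through the turbulent electromotive force; in cross-helicity dynamo models of the Yoshizawa type it takes a form like

    $$\mathcal{E} \equiv \langle \mathbf{u}'\times\mathbf{b}'\rangle = \alpha\,\mathbf{B} - \beta\,\nabla\times\mathbf{B} + \gamma\,\nabla\times\mathbf{U},$$

    where $\alpha$ is tied to the turbulent residual helicity, $\beta$ is a turbulent diffusivity, and $\gamma$ is tied to the turbulent cross-helicity $\langle \mathbf{u}'\cdot\mathbf{b}'\rangle$.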

  16. Consistency of the tachyon warm inflationary universe models

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiao-Min; Zhu, Jian-Yang, E-mail: zhangxm@mail.bnu.edu.cn, E-mail: zhujy@bnu.edu.cn [Department of Physics, Beijing Normal University, Beijing 100875 (China)

    2014-02-01

    This study concerns the consistency of the tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied, when the dissipative coefficient Γ = Γ₀ and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of the tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ₀) is usually not a suitable assumption for a warm inflationary model.

  17. A self-consistent spin-diffusion model for micromagnetics

    KAUST Repository

    Abert, Claas

    2016-12-17

    We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current-driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure is investigated in dependence of the tilting angle of the magnetization in the different layers. Both examples show good agreement with reference simulations and experiments, respectively.
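
    Schematically, the coupled system solved above is the Landau-Lifshitz-Gilbert equation with an additional torque from the spin accumulation $\mathbf{s}$,

    $$\frac{\partial \mathbf{m}}{\partial t} = -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}} + \alpha\,\mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t} + \mathbf{T}(\mathbf{s}), \qquad \mathbf{T}(\mathbf{s}) \propto \mathbf{m}\times\mathbf{s},$$

    where $\mathbf{s}$ obeys a diffusion equation sourced by the electric current and the magnetization texture; solving both equations together is what makes the treatment self-consistent.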

  18. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V

    2014-09-27

    Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  19. Modeling lung motion using consistent image registration in four-dimensional computed tomography for radiation therapy

    Science.gov (United States)

    Lu, Wei; Song, Joo Hyun; Christensen, Gary E.; Parikh, Parag J.; Bradley, Jeffrey D.; Low, Daniel A.

    2006-03-01

    Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations, providing more accurate correspondence between two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivery.
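
    Inverse consistency is commonly enforced by penalizing the deviation of the composed forward and reverse maps from the identity, e.g.

    $$E_{\mathrm{inv}} = \int \bigl\lVert \mathbf{h}_{12}(\mathbf{h}_{21}(\mathbf{x})) - \mathbf{x} \bigr\rVert^2\,\mathrm{d}\mathbf{x} + \int \bigl\lVert \mathbf{h}_{21}(\mathbf{h}_{12}(\mathbf{x})) - \mathbf{x} \bigr\rVert^2\,\mathrm{d}\mathbf{x},$$

    where $\mathbf{h}_{12}$ and $\mathbf{h}_{21}$ are the jointly estimated forward and reverse transformations; the exact penalty used in the paper may differ in detail.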

  20. Classical and Quantum Consistency of the DGP Model

    CERN Document Server

    Nicolis, A; Nicolis, Alberto; Rattazzi, Riccardo

    2004-01-01

    We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode π consistently describes physics in a wide range of regimes both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance the spherical source in asymptotically flat space. However we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space π becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed the tower of higher dimensional operators which is expected by a generic UV completion of the model limits predictivity at even larger length scales. We outline ...

  1. Consistent constraints on the Standard Model Effective Field Theory

    CERN Document Server

    Berthier, Laure

    2015-01-01

    We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low energy precision data. We fit one hundred observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cut-off scale is assumed to be large, $\Lambda \gtrsim 3\,\mathrm{TeV}$. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an $S,T$ analysis is modified by the theory errors we include as an illustrative example.
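
    The mechanics of "theory errors relax bounds" can be sketched with a linearized Fisher-matrix toy fit in which the theory uncertainty is added to the covariance in quadrature; all observables and sensitivities below are synthetic placeholders, not the paper's dataset.

```python
# A Fisher-matrix toy illustrating the theory-error metric's effect:
# adding theory errors in quadrature to the covariance relaxes bounds.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par = 100, 5
H = rng.normal(size=(n_obs, n_par))    # linearized dependence on parameters
exp_err = 0.02 * np.ones(n_obs)        # experimental uncertainties

def bounds(theory_err):
    cov = np.diag(exp_err**2 + theory_err**2)
    F = H.T @ np.linalg.inv(cov) @ H           # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))  # 1-sigma parameter bounds

print(bounds(0.0))    # no theory error: tighter constraints
print(bounds(0.02))   # with theory error: constraints relax
```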

  2. Creation of Consistent Burn Wounds: A Rat Model

    Directory of Open Access Journals (Sweden)

    Elijah Zhengyang Cai

    2014-07-01

    Background: Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods: Ten male Sprague-Dawley rats were anesthetized and the dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested under its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results: Average wound size was 0.9957 cm² (standard deviation [SD] 0.1845; n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined, uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for durations of 5, 10, and 20 seconds respectively. Burn duration of 5 seconds resulted in partial-thickness damage. Burn durations of 10 and 20 seconds resulted in full-thickness damage, involving subjacent skeletal muscle. Conclusions: This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model.

  3. A self-consistent dynamo model for fully convective stars

    Science.gov (United States)

    Yadav, Rakesh Kumar; Christensen, Ulrich; Morin, Julien; Gastine, Thomas; Reiners, Ansgar; Poppenhaeger, Katja; Wolk, Scott J.

    2016-01-01

    The tachocline region inside the Sun, where the rigidly rotating radiative core meets the differentially rotating convection zone, is thought to be crucial for generating the Sun's magnetic field. Low-mass fully convective stars do not possess a tachocline and were originally expected to generate only weak small-scale magnetic fields. Observations, however, have painted a different picture of magnetism in rapidly-rotating fully convective stars: (1) Zeeman broadening measurements revealed average surface field of several kiloGauss (kG), which is similar to the typical field strength found in sunspots. (2) Zeeman-Doppler-Imaging (ZDI) technique discovered large-scale magnetic fields with a morphology often similar to the Earth's dipole-dominated field. (3) Comparison of Zeeman broadening and ZDI results showed that more than 80% of the magnetic flux resides at small scales. So far, theoretical and computer simulation efforts have not been able to reproduce these features simultaneously. Here we present a self-consistent global model of magnetic field generation in low-mass fully convective stars. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The ZDI technique applied to synthetic spectropolarimetric data based on our model recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass fully convective stars.

  4. Modeling Tool Advances Rotorcraft Design

    Science.gov (United States)

    2007-01-01

    Continuum Dynamics Inc. (CDI), founded in 1979, specializes in advanced engineering services, including fluid dynamic modeling and analysis for aeronautics research. The company has completed a number of SBIR research projects with NASA, including early rotorcraft work done through Langley Research Center, but more recently, out of Ames Research Center. NASA Small Business Innovation Research (SBIR) grants on helicopter wake modeling resulted in the Comprehensive Hierarchical Aeromechanics Rotorcraft Model (CHARM), a tool for studying helicopter and tiltrotor unsteady free wake modeling, including distributed and integrated loads, and performance prediction. Application of the software code in a blade redesign program for Carson Helicopters, of Perkasie, Pennsylvania, increased the payload and cruise speeds of its S-61 helicopter. Follow-on development resulted in a $24 million revenue increase for Sikorsky Aircraft Corporation, of Stratford, Connecticut, as part of the company's rotor design efforts. Now under continuous development for more than 25 years, CHARM models the complete aerodynamics and dynamics of rotorcraft in general flight conditions. CHARM has been used to model a broad spectrum of rotorcraft attributes, including performance, blade loading, blade-vortex interaction noise, air flow fields, and hub loads. The highly accurate software is currently in use by all major rotorcraft manufacturers, NASA, the U.S. Army, and the U.S. Navy.

  5. Pluralistic and stochastic gene regulation: examples, models and consistent theory.

    Science.gov (United States)

    Salas, Elisa N; Shu, Jiang; Cserhati, Matyas F; Weeks, Donald P; Ladunga, Istvan

    2016-06-01

    We present a theory of pluralistic and stochastic gene regulation. To bridge the gap between empirical studies and mathematical models, we integrate pre-existing observations with our meta-analyses of the ENCODE ChIP-Seq experiments. Earlier evidence includes fluctuations in levels, location, activity, and binding of transcription factors, variable DNA motifs, and bursts in gene expression. Stochastic regulation is also indicated by the frequently subdued effects of knockout mutants of regulators, their evolutionary losses/gains, and massive rewiring of regulatory sites. We report widespread pluralistic regulation in ≈800 000 tightly co-expressed pairs of diverse human genes. Typically, half of ≈50 observed regulators bind to both genes reproducibly, twice more than in independently expressed gene pairs. We also examine the largest set of co-expressed genes, which code for cytoplasmic ribosomal proteins. Numerous regulatory complexes are highly significantly enriched in ribosomal genes compared to highly expressed non-ribosomal genes. We could not find any DNA-associated, strict-sense master regulator. Despite major fluctuations in transcription factor binding, our machine learning model accurately predicted transcript levels using binding sites of 20+ regulators. Our pluralistic and stochastic theory is consistent with partially random binding patterns, redundancy, stochastic regulator binding, burst-like expression, degeneracy of binding motifs and massive regulatory rewiring during evolution.
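
    A sketch of the kind of predictive model referred to in the second-to-last sentence: a generic regressor mapping binding-site signals of ~20 regulators to transcript levels. The data here are synthetic and the random-forest learner is an assumption; the record does not specify this exact setup.

```python
# A sketch, on synthetic data, of predicting transcript levels from the
# binding signals of ~20 regulators; a generic random forest stands in
# for the paper's (unspecified) machine learning model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_genes, n_regulators = 1000, 20
X = rng.random((n_genes, n_regulators))      # binding-site signal per regulator
w = rng.normal(size=n_regulators)            # hidden linear influences
y = X @ w + rng.normal(0.0, 0.5, n_genes)    # noisy ("stochastic") expression

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```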

  6. Consistency of modified MLE in EV model with replicated observations

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Sanguo

    2001-01-01


  7. Dynamic Consistency between Value and Coordination Models - Research Issues.

    NARCIS (Netherlands)

    Bodenstaff, L.; Wombacher, Andreas; Reichert, M.U.; Meersman, R.; Tari, Z.; Herrero, P.

    Inter-organizational business cooperations can be described from different viewpoints, each fulfilling a specific purpose. Since all viewpoints describe the same system, they must not contradict each other; that is, they must be consistent. Consistency can be checked based on common semantic concepts of the

  8. A proposal for a consistent parametrization of earth models

    Science.gov (United States)

    Forbriger, Thomas; Friederich, Wolfgang

    2005-08-01

    The current way to parametrize earth models in terms of real-valued seismic velocities and quality factors is incomplete as it does not specify how complex-valued viscoelastic moduli or complex velocities should be computed from them. Various ways to do this can be found in the literature. Depending on the context they may specify (1) the real part of the viscoelastic modulus, (2) the absolute value of the viscoelastic modulus, (3) the real part of complex velocity or (4) the phase velocity of a propagating plane wave. We propose here to exclusively use the first alternative because it is the only one which allows both a flexible choice of elastic parameters and a mathematically rigorous evaluation of the complex-valued viscoelastic moduli. The other definitions only permit an evaluation of viscoelastic moduli if the tabulated quality factors are directly associated with the listed velocities. Ignoring the subtle differences between the three definitions leads to variations in viscoelastic moduli which are second order in 1/Q where Q is a quality factor. This may be the reason why the topic has never been discussed in the literature. In case of shallow seismic media, however, where quality factors may assume values of less than 10, the subtle differences become noticeable in synthetic seismograms. It is then essential to use the same definition in all algorithms to make results comparable. Matters become worse for anisotropic media, which are commonly specified in terms of real elastic moduli and quality factors for effective isotropic moduli. In that case, the complex-valued viscoelastic moduli cannot be determined uniquely. However, interpreting the tabulated constants as the real parts of the complex-valued viscoelastic moduli at least allows a consistent definition, which respects the relative magnitude of the anelastic and anisotropic parts compared to the elastic parts. It should be noted that all these considerations apply to complex-valued viscoelastic
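
    Under the first definition advocated above, the tabulated velocity $v$ and quality factor $Q$ determine the complex viscoelastic modulus uniquely via

    $$\operatorname{Re} M = \rho v^2, \qquad M = \rho v^2\left(1 + \frac{i}{Q}\right), \qquad Q^{-1} = \frac{\operatorname{Im} M}{\operatorname{Re} M},$$

    with the sign of the imaginary part fixed by the chosen Fourier-transform convention.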

  9. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    Science.gov (United States)

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment.
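
    A minimal sketch of consistency measures of the kind combined in the final model: the global dispersion of an operating speed profile plus local deceleration and posted-speed-exceedance indexes. The profile and the index definitions below are illustrative placeholders, not the paper's calibrated model.

```python
# Illustrative consistency indexes from an operating speed profile; the
# profile and index definitions are placeholders, not the calibrated model.
import numpy as np

station = np.arange(0.0, 2000.0, 50.0)            # chainage along segment [m]
v85 = 90.0 + 10.0 * np.sin(station / 300.0)       # operating speed profile [km/h]
posted = 80.0                                     # posted speed [km/h]

global_dispersion = v85.std()                     # global speed dispersion
drops = np.minimum(np.diff(v85), 0.0)             # local decelerations [km/h]
local_decel_index = -drops.sum() / station[-1]    # speed loss per metre
over_posted = np.clip(v85 - posted, 0.0, None).mean()  # mean exceedance

print(global_dispersion, local_decel_index, over_posted)
```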

  10. Alien wavelength modeling tool and field trial

    DEFF Research Database (Denmark)

    Sambo, N.; Sgambelluri, A.; Secondini, M.

    2015-01-01

    A modeling tool is presented for pre-FEC BER estimation of PM-QPSK alien wavelength signals. A field trial is demonstrated and used as validation of the tool's correctness. A very close correspondence between the performance of the field trial and the one predicted by the modeling tool has been...
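
    For context, the textbook mapping from signal-to-noise ratio to pre-FEC BER for ideal (PM-)QPSK under additive Gaussian noise, the kind of relation such an estimator builds on; the cited tool additionally models transmission impairments.

```python
# Textbook SNR-to-BER mapping for ideal Gray-coded QPSK with Gaussian
# noise; a sketch, not the cited tool's full estimation model.
import numpy as np
from scipy.special import erfc

def qpsk_pre_fec_ber(snr_db):
    snr = 10.0 ** (snr_db / 10.0)          # SNR per symbol, linear scale
    return 0.5 * erfc(np.sqrt(snr / 2.0))  # bit error rate

for snr_db in (8.0, 10.0, 12.0):
    print(snr_db, qpsk_pre_fec_ber(snr_db))
```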

  11. Self-consistent modelling of resonant tunnelling structures

    DEFF Research Database (Denmark)

    Fiig, T.; Jauho, A.P.

    1992-01-01

    We report a comprehensive study of the effects of self-consistency on the I-V characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated for given applied voltages and carrier densities at the emitter-barrier interface. We include the two-dimensional accumulation layer charge and the quantum well charge in our self-consistent scheme. We discuss the evaluation of the current contribution originating from the two-dimensional accumulation layer charges...
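
    The self-consistent loop described above alternates between the two equations until the potential stops changing. A heavily simplified 1D sketch (hard-wall box, Boltzmann occupation, illustrative parameters; a real resonant-tunnelling model adds barriers, contacts and a current integral):

```python
# Hedged 1D Schrodinger-Poisson sketch; not the paper's device model.
import numpy as np

n, L = 200, 40e-9                          # grid points, box length [m]
dx = L / (n + 1)
hbar, me, q = 1.054e-34, 9.109e-31, 1.602e-19
m, eps = 0.067 * me, 13.1 * 8.854e-12      # GaAs-like mass and permittivity
kT = 0.0259 * q                            # room temperature [J]
n2d = 1e15                                 # total sheet density [m^-2]

lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
phi = np.zeros(n)                          # electrostatic potential [V]

for it in range(200):
    # Schrodinger step: eigenstates in the potential energy -q*phi
    H = -(hbar**2 / (2 * m)) * lap + np.diag(-q * phi)
    E, psi = np.linalg.eigh(H)
    psi /= np.sqrt(dx)                     # normalize: sum |psi|^2 dx = 1
    # Occupy the lowest subbands with Boltzmann weights summing to n2d
    occ = np.exp(-(E[:5] - E[0]) / kT)
    occ *= n2d / occ.sum()
    nelec = psi[:, :5] ** 2 @ occ          # electron number density [m^-3]
    # Poisson step (charge -q*nelec): phi'' = q*nelec/eps, phi = 0 at walls
    phi_new = np.linalg.solve(lap, q * nelec / eps)
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        break
    phi = 0.9 * phi + 0.1 * phi_new        # linear mixing for stability

print(it, E[:3] / q)                       # lowest subband energies [eV]
```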

  12. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This particularly becomes of interest as the state-of-the-art practice is moving towards plant-wide modelling. Then all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. Time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models.
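
    For concreteness, one typical one-dimensional process model for the secondary settling tank is a nonlinear conservation PDE for the sludge concentration $\phi(z,t)$,

    $$\frac{\partial \phi}{\partial t} + \frac{\partial}{\partial z}\Bigl(q(z,t)\,\phi + f_{\mathrm{bk}}(\phi)\Bigr) = 0, \qquad f_{\mathrm{bk}}(\phi) = v_0\,\phi\,e^{-r\phi},$$

    where $q$ is the bulk flow and $f_{\mathrm{bk}}$ a Vesilind-type batch settling flux; extended versions add a degenerate diffusion term for compression. Because the flux is nonlinear and, in clarifier models, changes discontinuously at the feed, underflow and effluent levels, naive discretizations can fail, which is exactly why the CMM insists on a provably consistent simulation model.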

  13. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index.

  14. Is the island universe model consistent with observations?

    OpenAIRE

    Piao, Yun-Song

    2005-01-01

    We study the island universe model, in which initially the universe is in a cosmological constant sea, then local quantum fluctuations violating the null energy condition create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.

  15. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  16. Vertically integrated simulation tools for self-consistent tracking and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Forest, E.; Nishimura, H.

    1989-03-01

    A modeling, simulation and analysis code complex, the Gemini Package, was developed for the study of single-particle dynamics in the Advanced Light Source (ALS), a 1--2 GeV synchrotron radiation source now being built at Lawrence Berkeley Laboratory. The purpose of this paper is to describe the philosophy behind the package, with special emphasis on our vertical approach. 8 refs., 2 figs.

  17. An MCMC Circumstellar Disks Modeling Tool

    Science.gov (United States)

    Wolff, Schuyler; Perrin, Marshall D.; Mazoyer, Johan; Choquet, Elodie; Soummer, Remi; Ren, Bin; Pueyo, Laurent; Debes, John H.; Duchene, Gaspard; Pinte, Christophe; Menard, Francois

    2016-01-01

    We present an enhanced software framework for the Markov chain Monte Carlo modeling of circumstellar disk observations, including spectral energy distributions and multi-wavelength images from a variety of instruments (e.g., GPI, NICI, HST, WFIRST). The goal is to self-consistently and simultaneously fit a wide variety of observables in order to place constraints on the physical properties of a given disk, while also rigorously assessing the uncertainties in the derived properties. This modular code is designed to work with a collection of existing modeling tools, ranging from simple scripts to define the geometry for optically thin debris disks, to full radiative transfer modeling of complex grain structures in protoplanetary disks (using the MCFOST radiative transfer modeling code). The MCMC chain relies on direct chi-squared comparison of model images/spectra to observations. We include a discussion of how best to weight different observations in the modeling of a single disk and how to incorporate forward modeling from PCA PSF subtraction techniques. The code is open source, written in Python, and available on GitHub. Results for several disks at various evolutionary stages are discussed.
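
    A minimal sketch of the MCMC layer: an emcee ensemble sampler driving a chi-squared comparison of model images to an observation. The Gaussian-ring "disk model" below is a placeholder for a real radiative-transfer call (e.g., MCFOST), and all data are synthetic.

```python
# A sketch of the MCMC layer, assuming synthetic data; the ring model
# stands in for a radiative-transfer code.
import numpy as np
import emcee

rng = np.random.default_rng(5)
x = np.linspace(-2.0, 2.0, 64)
xx, yy = np.meshgrid(x, x)

def disk_model(radius, width):
    r = np.hypot(xx, yy)                  # optically thin Gaussian ring
    return np.exp(-0.5 * ((r - radius) / width) ** 2)

sigma = 0.05
obs = disk_model(1.0, 0.2) + rng.normal(0.0, sigma, xx.shape)

def log_prob(theta):
    radius, width = theta
    if not (0.1 < radius < 2.0 and 0.05 < width < 1.0):  # flat priors
        return -np.inf
    chi2 = np.sum((obs - disk_model(radius, width)) ** 2) / sigma**2
    return -0.5 * chi2                    # direct chi-squared likelihood

nwalkers, ndim = 16, 2
p0 = np.array([1.0, 0.2]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
print(sampler.get_chain(discard=500, flat=True).mean(axis=0))
```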

  18. Consistent Evolution of Software Artifacts and Non-Functional Models

    Science.gov (United States)

    2014-11-14

    Di Ruscio, D.; Pierantonio, A.; Arcelli, D.; Eramo, R.; Trubiani, C.; Tucci, M. (Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica). The approach models antipattern problems as Source Role Models (SRMs) and antipattern solutions as Target Role Models (TRMs); SRM-TRM pairs represent new instruments in the hands of developers. One analysis helps to identify the antipatterns that most heavily contribute to the violation of performance requirements [10], and another one is aimed at...

  19. Fire behavior modeling-a decision tool

    Science.gov (United States)

    Jack Cohen; Bill Bradshaw

    1986-01-01

    The usefulness of an analytical model as a fire management decision tool is determined by the correspondence of its descriptive capability to the specific decision context. Fire managers must determine the usefulness of fire models as a decision tool when applied to varied situations. Because the wildland fire phenomenon is complex, analytical fire spread models will...

  20. General model for boring tool optimization

    Science.gov (United States)

    Moraru, G. M.; rbes, M. V. Ze; Popescu, L. G.

    2016-08-01

    Optimizing a tool (and therefore a boring tool) consists in improving its performance by maximizing the objective functions chosen by the designer and/or the user. Numerous features and performance requirements demanded by tool users contribute to defining and implementing the proposed objective functions. Incorporating new features makes the cutting tool competitive in the market and able to meet user requirements.

  1. Spatial Modeling Tools for Cell Biology

    Science.gov (United States)

    2006-10-01

    [Figure-list residue; recoverable items: Figure 5.1, "Computational results for a diffusion problem on planar square thin film"; Figure 4.1, "Cell biology" tool chain linking the Univ. Wisc. Open Microscopy Environment, a pre-CoBi model library, the CFDRC CoBi tools, a simulation environment, and the JigCell tools.]

  2. Gas Clumping in Self-Consistent Reionisation Models

    CERN Document Server

    Finlator, K; Özel, F; Davé, R

    2012-01-01

    We use a suite of cosmological hydrodynamic simulations including a self-consistent treatment for inhomogeneous reionisation to study the impact of galactic outflows and photoionisation heating on the volume-averaged recombination rate of the intergalactic medium (IGM). By incorporating an evolving ionising escape fraction and a treatment for self-shielding within Lyman limit systems, we have run the first simulations of "photon-starved" reionisation scenarios that simultaneously reproduce observations of the abundance of galaxies, the optical depth to electron scattering of cosmic microwave background photons τ, and the effective optical depth to Lyman-α absorption at z=5. We confirm that an ionising background reduces the clumping factor C by more than 50% by smoothing moderately overdense (Δ = 1-100) regions. Meanwhile, outflows increase clumping only modestly. The clumping factor of ionised gas is much lower than the overall baryonic clumping factor because the most overdense gas is self-shield...

  3. Modelling plasticity of unsaturated soils in a thermodynamically consistent framework

    CERN Document Server

    Coussy, O

    2010-01-01

    Constitutive equations of unsaturated soils are often derived in a thermodynamically consistent framework through the use of a unique 'effective' interstitial pressure. The latter is naturally chosen as the space-averaged interstitial pressure. However, experimental observations have revealed that two stress state variables are needed to describe the stress-strain-strength behaviour of unsaturated soils. The thermodynamic analysis presented here shows that the most general approach to the behaviour of unsaturated soils actually requires three stress state variables: the suction, which is required to describe the retention properties of the soil, and two effective stresses, which are required to describe the soil deformation at constant water saturation. Actually, it is shown that a simple assumption related to internal deformation leads to the need for a unique effective stress to formulate the stress-strain constitutive equation describing the soil deformation. An elastoplastic framework is then presented ...
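
    For reference, the classical single-effective-stress description that the analysis above generalizes is Bishop's expression,

    $$\sigma' = (\sigma - u_a) + \chi(s)\,(u_a - u_w), \qquad s = u_a - u_w,$$

    with total stress $\sigma$, pore-air and pore-water pressures $u_a, u_w$, suction $s$, and weighting function $\chi$; the record's point is that, in general, suction and two effective stresses are required.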

  4. Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre M.; Parks, Michael L.

    2017-04-01

    We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two and three dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
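
    Of the three equations named above, the Poisson-Boltzmann equation for the electric potential $\psi$ in a symmetric $z{:}z$ electrolyte reads, in its standard form,

    $$\varepsilon\,\nabla^2 \psi = 2\,z\,e\,n_0 \sinh\!\left(\frac{z e \psi}{k_B T}\right),$$

    with bulk ion density $n_0$ and permittivity $\varepsilon$; the I2SPH scheme discretizes this together with the Navier-Stokes and advection-diffusion equations on the same particle set.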

  5. Consistency Problem with Tracer Advection in the Atmospheric Model GAMIL

    Institute of Scientific and Technical Information of China (English)

    ZHANG Kai; WAN Hui; WANG Bin; ZHANG Meigen

    2008-01-01

    The radon transport test, which is a widely used test case for atmospheric transport models, is carried out to evaluate the tracer advection schemes in the Grid-Point Atmospheric Model of IAP-LASG (GAMIL). Two of the three available schemes in the model are found to be associated with significant biases in the polar regions and in the upper part of the atmosphere, which implies potentially large errors in the simulation of ozone-like tracers. Theoretical analyses show that inconsistency exists between the advection schemes and the discrete continuity equation in the dynamical core of GAMIL and consequently leads to spurious sources and sinks in the tracer transport equation. The impact of this type of inconsistency is demonstrated by idealized tests and identified as the cause of the aforementioned biases. Other potential effects of this inconsistency are also discussed. Results of this study provide some hints for choosing suitable advection schemes in the GAMIL model. At least for the polar-region-concentrated atmospheric components and the closely correlated chemical species, the flux-form semi-Lagrangian advection scheme produces more reasonable simulations of the large-scale transport processes without significantly increasing the computational expense.
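
    The consistency requirement at issue can be stated compactly: the discrete tracer equation must reduce to the discrete continuity equation for a spatially uniform mixing ratio,

    $$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0, \qquad \frac{\partial (\rho q)}{\partial t} + \nabla\cdot(\rho\, q\,\mathbf{v}) = 0,$$

    so that $q \equiv \mathrm{const}$ is exactly preserved. If the advection scheme and the dynamical core discretize these two equations differently, the mismatch acts as a spurious source or sink of tracer, which is the inconsistency diagnosed above.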

  6. Self-consistent Models of Strong Interaction with Chiral Symmetry

    Science.gov (United States)

    Nambu, Y.; Pascual, P.

    1963-04-01

    Some simple models of (renormalizable) meson-nucleon interaction are examined in which the nucleon mass is entirely due to interaction and the chiral (γ₅) symmetry is "broken" to become a hidden symmetry. It is found that such a scheme is possible provided that a vector meson is introduced as an elementary field. (auth)

  7. A seismologically consistent compositional model of Earth's core.

    Science.gov (United States)

    Badro, James; Côté, Alexander S; Brodholt, John P

    2014-05-27

    Earth's core is less dense than iron, and therefore it must contain "light elements," such as S, Si, O, or C. We use ab initio molecular dynamics to calculate the density and bulk sound velocity in liquid metal alloys at the pressure and temperature conditions of Earth's outer core. We compare the velocity and density for any composition in the (Fe-Ni, C, O, Si, S) system to radial seismological models and find a range of compositional models that fit the seismological data. We find no oxygen-free composition that fits the seismological data, and therefore our results indicate that oxygen is always required in the outer core. An oxygen-rich core is a strong indication of high-pressure and high-temperature conditions of core differentiation in a deep magma ocean with an FeO concentration (oxygen fugacity) higher than that of the present-day mantle.

  8. A more consistent intraluminal rhesus monkey model of ischemic stroke

    Institute of Scientific and Technical Information of China (English)

    Bo Zhao; Fauzia Akbary; Shengli Li; Jing Lu; Feng Ling; Xunming Ji; Guowei Shang; Jian Chen; Xiaokun Geng; Xin Ye; Guoxun Xu; Ju Wang; Jiasheng Zheng; Hongjun Li

    2014-01-01

    Endovascular surgery is advantageous in experimentally induced ischemic stroke because it causes fewer cranial traumatic lesions than invasive surgery and can closely mimic the pathophysiology in stroke patients. However, the outcomes are highly variable, which limits the accuracy of evaluations of ischemic stroke studies. In this study, eight healthy adult rhesus monkeys were randomized into two groups with four monkeys in each group: middle cerebral artery occlusion at origin segment (M1) and middle cerebral artery occlusion at M2 segment. The blood flow in the middle cerebral artery was blocked completely for 2 hours using the endovascular microcoil placement technique (1 mm × 10 cm) (undetachable), to establish a model of cerebral ischemia. The microcoil was withdrawn and the middle cerebral artery blood flow was restored. A reversible middle cerebral artery occlusion model was identified by hematoxylin-eosin staining, digital subtraction angiography, magnetic resonance angiography, magnetic resonance imaging, and neurological evaluation. The results showed that the middle cerebral artery occlusion model was successfully established in eight adult healthy rhesus monkeys, and ischemic lesions were apparent in the brain tissue of rhesus monkeys at 24 hours after occlusion. The rhesus monkeys had symptoms of neurological deficits. Compared with the M1 occlusion group, the M2 occlusion group had lower infarction volume and higher neurological scores. These experimental findings indicate that reversible middle cerebral artery occlusion can be produced with the endovascular microcoil technique in rhesus monkeys. The M2 occluded model had less infarction and less neurological impairment, which offers the potential for application in the field of brain injury research.

  9. A more consistent intraluminal rhesus monkey model of ischemic stroke.

    Science.gov (United States)

    Zhao, Bo; Shang, Guowei; Chen, Jian; Geng, Xiaokun; Ye, Xin; Xu, Guoxun; Wang, Ju; Zheng, Jiasheng; Li, Hongjun; Akbary, Fauzia; Li, Shengli; Lu, Jing; Ling, Feng; Ji, Xunming

    2014-12-01

    Endovascular surgery is advantageous in experimentally induced ischemic stroke because it causes fewer cranial traumatic lesions than invasive surgery and can closely mimic the pathophysiology in stroke patients. However, the outcomes are highly variable, which limits the accuracy of evaluations of ischemic stroke studies. In this study, eight healthy adult rhesus monkeys were randomized into two groups with four monkeys in each group: middle cerebral artery occlusion at origin segment (M1) and middle cerebral artery occlusion at M2 segment. The blood flow in the middle cerebral artery was blocked completely for 2 hours using the endovascular microcoil placement technique (1 mm × 10 cm) (undetachable), to establish a model of cerebral ischemia. The microcoil was withdrawn and the middle cerebral artery blood flow was restored. A reversible middle cerebral artery occlusion model was identified by hematoxylin-eosin staining, digital subtraction angiography, magnetic resonance angiography, magnetic resonance imaging, and neurological evaluation. The results showed that the middle cerebral artery occlusion model was successfully established in eight adult healthy rhesus monkeys, and ischemic lesions were apparent in the brain tissue of rhesus monkeys at 24 hours after occlusion. The rhesus monkeys had symptoms of neurological deficits. Compared with the M1 occlusion group, the M2 occlusion group had lower infarction volume and higher neurological scores. These experimental findings indicate that reversible middle cerebral artery occlusion can be produced with the endovascular microcoil technique in rhesus monkeys. The M2 occluded model had less infarction and less neurological impairment, which offers the potential for application in the field of brain injury research.

  10. Flood damage: a model for consistent, complete and multipurpose scenarios

    Science.gov (United States)

    Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia

    2016-12-01

    Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the base to address the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. This way post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.

  11. Deterministic Consistency: A Programming Model for Shared Memory Parallelism

    OpenAIRE

    Aviram, Amittai; Ford, Bryan

    2009-01-01

    The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...

  12. Models and Modelling Tools for Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    2016-01-01

    The design, development and reliability of a chemical product and the process to manufacture it need to be consistent with the end-use characteristics of the desired product. One of the common ways to match the desired product-process characteristics is through trial-and-error-based experiments......-based framework is that in the design, development and/or manufacturing of a chemical product-process, the knowledge of the applied phenomena together with the product-process design details can be provided with diverse degrees of abstraction and detail. This would allow the experimental resources......, are the needed models for such a framework available? Or, are modelling tools that can help to develop the needed models available? Can such a model-based framework provide the needed model-based work-flows matching the requirements of the specific chemical product-process design problems? What types of models...

  13. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two-round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling; if a certified list of tools is one day established, any tool that does not meet the essential criteria would be excluded. Recommended requirements are more advanced requirements that may be met by tools offering a superior product or that are only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, there was a sufficiently high level of agreement for the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support from end users. The list could also guide regulators in identifying requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    user-friendly system, which will make the model development process easier and faster and provide the way for unified and consistent model documentation. The modeller can use the template for their specific problem or to extend and/or adopt a model. This is based on the idea of model reuse, which emphasizes the use...... and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer...... aided methods and tools that include procedures to perform model translation, model analysis, model verification/validation, model solution and model documentation; 4) model transfer – export/import to/from other applications for further extension and application – several types of formats, such as XML...

  15. Consistency problems for Heath-Jarrow-Morton interest rate models

    CERN Document Server

    Filipović, Damir

    2001-01-01

    The book is written for a reader with knowledge in mathematical finance (in particular interest rate theory) and elementary stochastic analysis, such as provided by Revuz and Yor (Continuous Martingales and Brownian Motion, Springer 1991). It gives a short introduction both to interest rate theory and to stochastic equations in infinite dimension. The main topic is the Heath-Jarrow-Morton (HJM) methodology for the modelling of interest rates. Experts in SDE in infinite dimension with interest in applications will find here the rigorous derivation of the popular "Musiela equation" (referred to in the book as HJMM equation). The convenient interpretation of the classical HJM set-up (with all the no-arbitrage considerations) within the semigroup framework of Da Prato and Zabczyk (Stochastic Equations in Infinite Dimensions) is provided. One of the principal objectives of the author is the characterization of finite-dimensional invariant manifolds, an issue that turns out to be vital for applications. Finally, ge...

  16. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define...... a taxonomy of aspects around conservation, constraints and constitutive relations. Aspects of the ICAS-MoT toolbox are given to illustrate the functionality of a computer aided modelling tool, which incorporates an interface to MS Excel....

  17. Model Analysis ToolKit

    Energy Technology Data Exchange (ETDEWEB)

    2015-05-15

    MATK provides basic functionality to facilitate model analysis within the Python computational environment. Model analysis setup within MATK includes:
    - define parameters
    - define observations
    - define model (python function)
    - define samplesets (sets of parameter combinations)
    Currently supported functionality includes:
    - forward model runs
    - Latin-Hypercube sampling of parameters
    - multi-dimensional parameter studies
    - parallel execution of parameter samples
    - model calibration using internal Levenberg-Marquardt algorithm
    - model calibration using lmfit package
    - model calibration using levmar package
    - Markov Chain Monte Carlo using pymc package
    MATK facilitates model analysis using:
    - scipy - calibration (scipy.optimize)
    - rpy2 - Python interface to R
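
    The workflow listed above (sample parameter combinations, run the forward model, then calibrate) can be illustrated with plain scipy. The sketch below is not MATK's actual API; the toy model and all numbers are invented for the demonstration.

```python
# Generic sketch of a Latin-Hypercube sampleset followed by
# Levenberg-Marquardt calibration, using scipy directly (not MATK's API).
import numpy as np
from scipy.stats import qmc
from scipy.optimize import least_squares

def model(params, t):
    """Toy forward model: exponential decay with two parameters."""
    amplitude, rate = params
    return amplitude * np.exp(-rate * t)

t_obs = np.linspace(0.0, 5.0, 20)
y_obs = model([2.0, 0.7], t_obs)  # synthetic 'observations' for the demo

# Latin-Hypercube sampleset over the two parameters
sampler = qmc.LatinHypercube(d=2, seed=0)
samples = qmc.scale(sampler.random(n=50), l_bounds=[0.1, 0.1], u_bounds=[5.0, 2.0])

# Forward runs for every parameter sample (trivially parallelizable)
responses = np.array([model(p, t_obs) for p in samples])

# Calibrate from the best LHS sample using Levenberg-Marquardt
best = samples[np.argmin(((responses - y_obs) ** 2).sum(axis=1))]
fit = least_squares(lambda p: model(p, t_obs) - y_obs, x0=best, method="lm")
print("calibrated parameters:", fit.x)
```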

  18. Aggregated wind power plant models consisting of IEC wind turbine models

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela

    2015-01-01

    turbines, parameters and models to represent each individual wind turbine in detail makes it necessary to develop aggregated wind power plant models considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable...

  19. Software Engineering Tools for Scientific Models

    Science.gov (United States)

    Abrams, Marc; Saboo, Pallabi; Sonsini, Mike

    2013-01-01

    Software tools were constructed to address issues the NASA Fortran development community faces, and they were tested on real models currently in use at NASA. These proof-of-concept tools address the High-End Computing Program and the Modeling, Analysis, and Prediction Program. Two examples are the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) atmospheric model in Cell Fortran on the Cell Broadband Engine, and the Goddard Institute for Space Studies (GISS) coupled atmosphere-ocean model called ModelE, written in fixed-format Fortran.

  20. Simulation Tool for Inventory Models: SIMIN

    OpenAIRE

    Pratiksha Saxen; Tulsi Kushwaha

    2014-01-01

    In this paper, an integrated simulation optimization model for the inventory system is developed, together with an effective algorithm to evaluate and analyze the stored back-end simulation results. The paper proposes the simulation tool SIMIN (Inventory Simulation), which simulates and compares the results of different inventory models. To overcome various restrictive practical assumptions, SIMIN provides values for a number of performance measurement...

  1. A consistent picture of the hydroclimatic response to global warming from multiple indices: Models and observations

    Science.gov (United States)

    Giorgi, F.; Coppola, E.; Raffaele, F.

    2014-10-01

    We analyze trends of six daily precipitation-based and physically interconnected hydroclimatic indices in an ensemble of historical and 21st century climate projections under forcing from increasing greenhouse gas (GHG) concentrations (Representative Concentration Pathway (RCP) 8.5), along with gridded (land only) observations for the late decades of the twentieth century. The indices include metrics of intensity (SDII) and extremes (R95) of precipitation, dry spell length (DSL) and wet spell length, the hydroclimatic intensity index (HY-INT), and a newly introduced index of precipitation area (PA). All the indices in both the 21st century and historical simulations provide a consistent picture of a predominant shift toward a hydroclimatic regime of more intense, shorter, less frequent, and less widespread precipitation events in response to GHG-induced global warming. The trends are larger and more spatially consistent over tropical than extratropical regions, pointing to the importance of tropical convection in regulating this response, and show substantial regional spatial variability. Observed trends in the indices analyzed are qualitatively and consistently in line with the simulated ones, at least at the global and full tropical scale, further supporting the robustness of the identified prevailing hydroclimatic responses. The HY-INT, PA, and R95 indices show the most consistent response to global warming, and thus offer the most promising tools for formal hydroclimatic model validation and detection/attribution studies. The physical mechanism underlying this response and some of the applications of our results are also discussed.
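
    To make the indices concrete, the sketch below computes three of them from a daily precipitation series under common definitions (wet day ≥ 1 mm). The paper's exact definitions may differ, and the input series here is synthetic.

```python
# Hedged sketch: common definitions of SDII, R95 and mean dry spell length,
# computed from a daily precipitation series (mm/day).
import numpy as np

def hydroclimatic_indices(precip_mm, wet_threshold=1.0):
    precip_mm = np.asarray(precip_mm, dtype=float)
    wet = precip_mm >= wet_threshold

    # SDII: mean precipitation intensity on wet days
    sdii = precip_mm[wet].mean() if wet.any() else 0.0

    # R95: precipitation total above the 95th percentile of wet-day amounts
    r95_threshold = np.percentile(precip_mm[wet], 95) if wet.any() else 0.0
    r95 = precip_mm[precip_mm > r95_threshold].sum()

    # Mean dry spell length: average run length of consecutive dry days
    runs, run = [], 0
    for is_wet in wet:
        if not is_wet:
            run += 1
        elif run > 0:
            runs.append(run)
            run = 0
    if run > 0:
        runs.append(run)
    dsl = float(np.mean(runs)) if runs else 0.0

    return {"SDII": sdii, "R95": r95, "DSL": dsl}

rng = np.random.default_rng(0)
series = rng.gamma(shape=0.4, scale=8.0, size=365)  # synthetic daily data
print(hydroclimatic_indices(series))
```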

  2. ANSYS tools in modeling tires

    Science.gov (United States)

    Ali, Ashraf; Lovell, Michael

    1995-08-01

    This presentation summarizes the capabilities in the ANSYS program that relate to the computational modeling of tires. The power and the difficulties associated with modeling nearly incompressible rubber-like materials using hyperelastic constitutive relationships are highlighted from a developer's point of view. The topics covered include a hyperelastic material constitutive model for rubber-like materials, a general overview of contact-friction capabilities, and the acoustic fluid-structure interaction problem for noise prediction. Brief theoretical development and example problems are presented for each topic.
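
    As an example of the hyperelastic constitutive relationships mentioned, a two-parameter Mooney-Rivlin strain-energy function is one common choice for nearly incompressible rubber (the presentation below is the generic textbook form, which may differ in detail from the program's implementation):

    \[
    W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{d}\,(J - 1)^2,
    \]

    where \(\bar{I}_1\) and \(\bar{I}_2\) are the first and second deviatoric strain invariants, \(J\) is the volume ratio, \(C_{10}\) and \(C_{01}\) are material constants, and \(d\) controls compressibility; driving \(d\) toward zero enforces incompressibility, which is the numerically delicate regime highlighted above.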

  3. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
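
    The Bayesian Model Averaging point estimator recommended above combines model-specific estimates weighted by posterior model probabilities; in generic notation (not the paper's exact formulas):

    \[
    \hat{\theta}_{\mathrm{BMA}} \;=\; \sum_{m=1}^{M} p(\mathcal{M}_m \mid y)\,\hat{\theta}_m,
    \qquad
    p(\mathcal{M}_m \mid y) \;=\; \frac{p(y \mid \mathcal{M}_m)\,p(\mathcal{M}_m)}{\sum_{k=1}^{M} p(y \mid \mathcal{M}_k)\,p(\mathcal{M}_k)},
    \]

    where \(p(y \mid \mathcal{M}_m)\) is the marginal likelihood entering the Bayes factors discussed in the abstract.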

  4. Cockpit System Situational Awareness Modeling Tool

    Science.gov (United States)

    Keller, John; Lebiere, Christian; Shay, Rick; Latorella, Kara

    2004-01-01

    This project explored the possibility of predicting pilot situational awareness (SA) using human performance modeling techniques for the purpose of evaluating developing cockpit systems. The Improved Performance Research Integration Tool (IMPRINT) was combined with the Adaptive Control of Thought-Rational (ACT-R) cognitive modeling architecture to produce a tool that can model both the discrete tasks of pilots and the cognitive processes associated with SA. The techniques for using this tool to predict SA were demonstrated using the newly developed Aviation Weather Information (AWIN) system. By providing an SA prediction tool to cockpit system designers, cockpit concepts can be assessed early in the design process while providing a cost-effective complement to the traditional pilot-in-the-loop experiments and data collection techniques.

  5. The european Trans-Tools transport model

    NARCIS (Netherlands)

    Rooijen, T. van; Burgess, A.

    2008-01-01

    The paper presents the use of ArcGIS in the Transtools Transport Model, TRANS-TOOLS, created by an international consortium for the European Commission. The model describes passenger as well as freight transport in Europe with all medium- and long-distance modes (cars, vans, trucks, train, inland waterways).

  6. The european Trans-Tools transport model

    NARCIS (Netherlands)

    Rooijen, T. van; Burgess, A.

    2008-01-01

    The paper presents the use of ArcGIS in the Transtools Transport Model, TRANS-TOOLS, created by an international consortium for the European Commission. The model describes passenger as well as freight transport in Europe with all medium- and long-distance modes (cars, vans, trucks, train, inland waterways).

  7. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh;

    , called ForSyDe. ForSyDe is available under the open-source approach, which allows small and medium enterprises (SME) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  8. Consistent model reduction of polymer chains in solution in dissipative particle dynamics: Model description

    KAUST Repository

    Moreno Chaparro, Nicolas

    2015-06-30

    We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria such as the characteristic size of the chain, compressibility, density, and temperature are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥ 200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥ 20 times fewer than the original system). We show that our methodology has potential applications modeling systems of high molecular weight molecules at large scales, such as diblock copolymer and DNA.

  9. TOOL FORCE MODEL FOR DIAMOND TURNING

    Institute of Scientific and Technical Information of China (English)

    Wang Hongxiang; Sun Tao; Li Dan; Dong Shen

    2004-01-01

    A new tool force model is presented based upon process geometry and the characteristics of the force system, in which the forces acting on the tool rake face, the cutting edge rounding and the clearance face have been considered, and the size effect is accounted for in the new model. It is expected that the model is well applicable to conventional diamond turning and that it may be employed as a tool in the design of diamond tools. This approach is quite different from traditional investigations primarily based on empirical studies. As the depth of cut becomes of the same order as the rounded cutting edge radius, sliding along the clearance face due to elastic recovery of the workpiece material and plowing due to the rounded cutting edge may become important in micro-machining, so the forces acting on the cutting edge rounding and the clearance face cannot be neglected. For this reason, it is very important to understand the influence of these parameters on tool forces and to develop a model of the relationship between them.

  10. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2016-01-01

    A new k-ε model is introduced that is consistent with Monin-Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...

  11. A simplified stock-flow consistent post-Keynesian growth model

    OpenAIRE

    dos Santos, Claudio H.; Zezza, Gennaro

    2005-01-01

    Despite being arguably the most rigorous form of structuralist/post-Keynesian macroeconomics, stock-flow consistent models are quite often complex and difficult to deal with. This paper presents a model that, despite retaining the methodological advantages of the stock-flow consistent method, is intuitive enough to be taught at an undergraduate level. Moreover, the model can eas...
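
    For readers unfamiliar with the approach, the flavour of a minimal stock-flow consistent model can be conveyed by the textbook "SIM" model of Godley and Lavoie, shown here for orientation (it is not the specification of the paper above):

    \[
    Y = C + G, \qquad T = \theta Y, \qquad YD = Y - T,
    \]
    \[
    C = \alpha_1\, YD + \alpha_2\, H_{-1}, \qquad \Delta H = YD - C,
    \]

    where \(Y\) is income, \(G\) government spending, \(T\) taxes, \(YD\) disposable income, \(C\) consumption, and \(H\) the stock of money; the accounting forces the government deficit \(G - T\) to equal the change in household money holdings \(\Delta H\) each period, which is the "stock-flow consistency" the title refers to.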

  12. Stochastic modelling of spatially and temporally consistent daily precipitation time-series over complex topography

    Science.gov (United States)

    Keller, D. E.; Fischer, A. M.; Frei, C.; Liniger, M. A.; Appenzeller, C.; Knutti, R.

    2014-07-01

    Many climate impact assessments over topographically complex terrain require high-resolution precipitation time-series that have a spatio-temporal correlation structure consistent with observations. This consistency is essential for spatially distributed modelling of processes with non-linear responses to precipitation input (e.g. soil water and river runoff modelling). In this regard, weather generators (WGs) designed and calibrated for multiple sites are an appealing technique to stochastically simulate time-series that approximate the observed temporal and spatial dependencies. In this study, we present a stochastic multi-site precipitation generator and validate it over the hydrological catchment Thur in the Swiss Alps. The model consists of several Richardson-type WGs that are run with correlated random number streams reflecting the observed correlation structure among all possible station pairs. A first-order two-state Markov process simulates the intermittence of daily precipitation, while precipitation amounts are simulated from a mixture model of two exponential distributions. The model is calibrated separately for each month over the time-period 1961-2011. The WG is skilful at individual sites in representing the annual cycle of the precipitation statistics, such as mean wet-day frequency and intensity as well as monthly precipitation sums. It realistically reproduces multi-day statistics such as the frequencies of dry and wet spell lengths and precipitation sums over consecutive wet days. Substantial added value is demonstrated in simulating daily areal precipitation sums in comparison to multiple WGs that lack the spatial dependency in the stochastic process: the multi-site WG is capable of capturing about 95% of the observed variability in daily area sums, while the summed time-series from multiple single-site WGs only explain about 13%. Limitations of the WG have been detected in reproducing observed variability from year to year, a component that has
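
    A single-site sketch of the Richardson-type core described above is given below: a first-order two-state Markov chain for wet/dry occurrence and a two-component exponential mixture for wet-day amounts. The multi-site extension drives many such generators with correlated random number streams, omitted here for brevity; all parameter values are illustrative, not calibrated to the Thur catchment.

```python
# Minimal single-site Richardson-type precipitation generator.
import numpy as np

rng = np.random.default_rng(42)

p_wet_after_dry = 0.25   # P(wet today | dry yesterday)
p_wet_after_wet = 0.60   # P(wet today | wet yesterday)
mix_weight = 0.7         # weight of the low-intensity exponential component
mean_low, mean_high = 3.0, 15.0  # mm, means of the two exponential components

def simulate(n_days):
    precip = np.zeros(n_days)
    wet = False
    for day in range(n_days):
        p_wet = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p_wet          # Markov occurrence process
        if wet:
            mean = mean_low if rng.random() < mix_weight else mean_high
            precip[day] = rng.exponential(mean)  # mixed-exponential amount
    return precip

series = simulate(365)
print(f"wet-day frequency: {np.mean(series > 0):.2f}, "
      f"mean intensity: {series[series > 0].mean():.1f} mm")
```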

  13. Consistency and bicharacteristic analysis of integral porosity shallow water models. Explaining model oversensitivity to mesh design

    Science.gov (United States)

    Guinot, Vincent

    2017-09-01

    The Integral Porosity and Dual Integral Porosity two-dimensional shallow water models have been proposed recently as efficient upscaled models for urban floods. Very little is known so far about their consistency and wave propagation properties. Simple numerical experiments show that both models are unusually sensitive to the computational grid. In the present paper, a two-dimensional consistency and characteristic analysis is carried out for these two models. The following results are obtained: (i) the models are almost insensitive to grid design when the porosity is isotropic, (ii) anisotropic porosity fields induce an artificial polarization of the mass/momentum fluxes along preferential directions when triangular meshes are used and (iii) extra first-order derivatives appear in the governing equations when regular, quadrangular cells are used. The hyperbolic system is thus mesh-dependent, and with it the wave propagation properties of the model solutions. Criteria are derived to make the solution less mesh-dependent, but it is not certain that these criteria can be satisfied at all computational points when real-world situations are dealt with.
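
    For context, the mass balance in this family of models distinguishes a storage porosity from a conveyance porosity; schematically (following the usual presentation of such models, not reproduced from the paper):

    \[
    \frac{\partial\,(\phi_\Omega\, h)}{\partial t} + \nabla\cdot\left(\phi_\Gamma\, h\,\mathbf{u}\right) = 0,
    \]

    where \(h\) is the water depth, \(\mathbf{u}\) the velocity, \(\phi_\Omega\) the plan-view (storage) porosity and \(\phi_\Gamma\) the boundary (conveyance) porosity; the anisotropy of \(\phi_\Gamma\) is what makes the flux terms, and hence the discrete model, sensitive to mesh orientation.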

  14. A review of electricity market modelling tools

    Directory of Open Access Journals (Sweden)

    Sandra Milena Londoño Hernández

    2010-05-01

    Full Text Available Deregulating electricity markets around the world in the search for efficiency has introduced competition into the electricity marketing and generation business. Studying interactions amongst the participants has thus acquired great importance for regulators and market participants for analysing market evolution and suitably defining their bidding strategies. Different tools have therefore been used for modelling competitive electricity markets during the last few years. This paper presents an analytical review of the bibliography found regarding this subject; it also presents the most used tools along with their advantages and disadvantages. Such analysis was done by comparing the models used, identifying the main market characteristics such as market structure, bid structure and kind of bidding. This analysis concluded that the kind of tool to be used mainly depends on a particular study's goal and scope.

  15. HYDROLOGICAL PROCESSES MODELLING USING ADVANCED HYDROINFORMATIC TOOLS

    Directory of Open Access Journals (Sweden)

    BEILICCI ERIKA

    2014-03-01

    Full Text Available Water has an essential role in the functioning of ecosystems by integrating the complex physical, chemical, and biological processes that sustain life. Water is a key factor in determining the productivity of ecosystems, biodiversity and species composition. Water is also essential for humanity: water supply systems for the population, agriculture, fisheries, industries, and hydroelectric power depend on water supplies. The modelling of hydrological processes is an important activity for water resources management, especially now, when climate change is one of the major challenges of our century, with a strong influence on the dynamics of hydrological processes. Climate change and the need for more knowledge of water resources require the use of advanced hydroinformatic tools in hydrological process modelling. The rationale and purpose of advanced hydroinformatic tools is to develop a new relationship between the stakeholders and the users and suppliers of the systems: to offer systems which supply usable results whose validity cannot reasonably be doubted by any of the stakeholders involved. Successful modelling of hydrological processes also needs well-trained specialists able to use advanced hydroinformatic tools. Results of modelling can be a useful tool for decision makers in taking efficient measures in the social, economic and ecological domains regarding water resources, for integrated water resources management.

  16. The Twente lower extremity model : consistent dynamic simulation of the human locomotor apparatus

    OpenAIRE

    Klein Horsman, Martijn Dirk

    2007-01-01

    Orthopedic interventions such as tendon transfers have been shown to be successful in the treatment of gait disorders. Still, in many cases dysfunctions remained or worsened. To assist clinicians, an interactive tool that allows evaluation of if-then scenarios with respect to treatment methods would be useful. Comprehensive musculoskeletal models have shown high potential to serve as such a tool. By varying anatomical model parameters, alterations in anatomy due to surgery can be implemented. Inv...

  17. Consistent adjacency-spectral partitioning for the stochastic block model when the model parameters are unknown

    CERN Document Server

    Fishkind, Donniell E; Tang, Minh; Vogelstein, Joshua T; Priebe, Carey E

    2012-01-01

    A stochastic block model consists of a random partition of n vertices into blocks 1,2,...,K for which, conditioned on the partition, every pair of vertices has probability of adjacency entirely determined by the block membership of the two vertices. (The model parameters are K, the distribution of the random partition, and a communication probability matrix M in [0,1]^(K x K) listing the adjacency probabilities associated with all pairs of blocks.) Suppose a realization of the n x n vertex adjacency matrix is observed, but the underlying partition of the vertices into blocks is not observed; the main inferential task is to correctly partition the vertices into the blocks with only a negligible number of vertices misassigned. For this inferential task, Rohe et al. (2011) prove the consistency of spectral partitioning applied to the normalized Laplacian, and Sussman et al. (2011) extend this to prove consistency of spectral partitioning directly on the adjacency matrix; both procedures assume that K and rank M a...
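
    An illustrative sketch of the inferential task described above: sample an adjacency matrix from a two-block stochastic block model, then recover the blocks by spectral partitioning of the adjacency matrix. This is a generic demonstration of the setting, not the cited authors' exact procedure; all parameter values are invented for the demo.

```python
import numpy as np
from numpy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n, K = 200, 2
blocks = rng.integers(0, K, size=n)        # latent block memberships
M = np.array([[0.30, 0.05],                # communication probability matrix
              [0.05, 0.25]])

# Sample a symmetric, hollow (no self-loops) adjacency matrix
P = M[np.ix_(blocks, blocks)]
upper = np.triu(rng.random((n, n)) < P, k=1)
A = (upper | upper.T).astype(float)

# Adjacency spectral embedding: K eigenvectors of largest |eigenvalue|
vals, vecs = eigh(A)
top = np.argsort(np.abs(vals))[::-1][:K]
embedding = vecs[:, top] * np.sqrt(np.abs(vals[top]))

# Partition the embedded vertices with k-means
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(embedding)
agreement = max(np.mean(labels == blocks), np.mean(labels != blocks))
print(f"fraction of vertices assigned to the correct block: {agreement:.2f}")
```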

  18. An analytical model for resistivity tools

    Energy Technology Data Exchange (ETDEWEB)

    Hovgaard, J.

    1991-04-01

    An analytical model for resistivity tools is developed. It takes into account the effect of the borehole and the actual shape of the electrodes. The model is two-dimensional, i.e. the model does not deal with eccentricity. The electrical potential around a current source satisfies Poisson's equation. The method used here to solve Poisson's equation is the expansion of the potential function in terms of a complete set of functions involving one of the coordinates, with coefficients which are undetermined functions of the other coordinate. Numerical examples of the use of the model are presented. The results are compared with results given in the literature. (au).

  19. QUALITY SERVICES EVALUATION MODEL BASED ON DEDICATED SOFTWARE TOOL

    Directory of Open Access Journals (Sweden)

    ANDREEA CRISTINA IONICĂ

    2012-10-01

    Full Text Available In this paper we introduce a new model, called Service Quality (SQ), which combines the QFD and SERVQUAL methods. This model takes from the SERVQUAL method the five dimensions of requirements and three of characteristics, and from the QFD method the application methodology. The originality of the SQ model consists in computing a global index that reflects how well the quality characteristics fulfil the customers' requirements. In order to prove the viability of the SQ model, a software tool was developed and applied to the evaluation of a health care services provider.

  20. Induction generator models in dynamic simulation tools

    DEFF Research Database (Denmark)

    Knudsen, Hans; Akhmatov, Vladislav

    1999-01-01

    For AC networks with a large amount of induction generators (windmills) the paper demonstrates a significant discrepancy in the simulated voltage recovery after faults in weak networks when comparing dynamic and transient stability descriptions, and the reasons for the discrepancies are explained. It is found to be possible to include a transient model in dynamic stability tools and, then, obtain correct results also in dynamic tools. The representation of the rotating system influences the voltage recovery shape, which is an important observation in the case of windmills, where a heavy mill is connected...

  1. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2015-01-01

    Choosing the tuning parameter by the Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form c/T if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned...

  2. Towards a consistent model of the Galaxy; 2, Derivation of the model

    CERN Document Server

    Méra, D; Schäffer, R

    1998-01-01

    We use the calculations derived in a previous paper (Méra, Chabrier and Schaeffer, 1997), based on observational constraints arising from star counts, microlensing experiments and kinematic properties, to determine the amount of dark matter under the form of stellar and sub-stellar objects in the different parts of the Galaxy. This yields the derivation of different mass-models for the Galaxy. In the light of all the afore-mentioned constraints, we discuss two models that correspond to different conclusions about the nature and the location of the Galactic dark matter. In the first model there is a small amount of dark matter in the disk, and a large fraction of the dark matter in the halo is still undetected and likely to be non-baryonic. The second, less conventional model is consistent with entirely, or at least predominantly baryonic dark matter, under the form of brown dwarfs in the disk and white dwarfs in the dark halo. We derive observational predictions for these two models which should be verifiabl...

  3. Animal models: an important tool in mycology.

    Science.gov (United States)

    Capilla, Javier; Clemons, Karl V; Stevens, David A

    2007-12-01

    Animal models of fungal infections are, and will remain, a key tool in the advancement of medical mycology. Many different types of animal models of fungal infection have been developed, with murine models the most frequently used, for studies of pathogenesis, virulence, immunology, diagnosis, and therapy. The ability to control numerous variables within a model allows us to mimic human disease states and quantitatively monitor the course of the disease. However, no single model can answer all questions, and different animal species or different routes of infection can show somewhat different results. Thus, the choice of which animal model to use must be made carefully, addressing the type of human disease to mimic, the parameters to follow, and the collection of appropriate data to answer the questions being asked. This review addresses a variety of uses for animal models in medical mycology. It focuses on the most clinically important diseases affecting humans and cites examples of the different types of studies that have been performed. Overall, animal models of fungal infection will continue to be valuable tools in addressing questions concerning fungal infections and will contribute to our deeper understanding of how these infections occur, progress, and can be controlled and eliminated.

  4. Homology modeling: an important tool for the drug discovery.

    Science.gov (United States)

    França, Tanos Celmar Costa

    2015-01-01

    In the last decades, homology modeling has become a popular tool to access theoretical three-dimensional (3D) structures of molecular targets. So far several 3D models of proteins have been built by this technique and used in a great diversity of structural biology studies. But are those models consistent enough with experimental structures to make this technique an effective and reliable tool for drug discovery? Here we present, briefly, the fundamentals and current state of the art of the homology modeling techniques used to build 3D structures of molecular targets whose experimental structures are not available in databases, and we list some of the more important works using this technique available in the literature today. In many cases those studies have afforded successful models for the drug design of more selective agonists/antagonists to the molecular targets in focus and have guided promising experimental works, proving that, when appropriate templates are available, useful models can be built using any of the several software packages available today for this purpose. Limitations of the experimental techniques used to solve 3D structures, allied to constant improvements in homology modeling software, will maintain the need for theoretical models, establishing homology modeling as a fundamental tool for drug discovery.

  5. Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code

    Science.gov (United States)

    Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.

    2017-02-01

    Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport and multimode (MMM95) anomalous transport models is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, together with a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than in the original designs, reaching the following fractions of the design values: 0.49 (China), 0.66 (Europe), and 0.58 (India). Furthermore, in relation to sensitivity, it is found that increasing the values of the auxiliary heating power and the electron line average density from their design values yields an enhancement of fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.
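
    For reference, the two performance figures quoted above are defined in the standard way (these definitions are general, not specific to BALDUR):

    \[
    Q \;=\; \frac{P_{\mathrm{fusion}}}{P_{\mathrm{aux}}},
    \qquad
    f_{\mathrm{BS}} \;=\; \frac{I_{\mathrm{bootstrap}}}{I_{\mathrm{plasma}}},
    \]

    so, for example, the 52% figure for the Chinese design means the simulated fusion gain reaches about half of the design-target value.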

  6. A tool box for implementing supersymmetric models

    Science.gov (United States)

    Staub, Florian; Ohl, Thorsten; Porod, Werner; Speckner, Christian

    2012-10-01

    We present a framework for performing a comprehensive analysis of a large class of supersymmetric models, including spectrum calculation, dark matter studies and collider phenomenology. To this end, the respective model is defined in an easy and straightforward way using the Mathematica package SARAH. SARAH then generates model files for CalcHep which can be used with micrOMEGAs as well as model files for WHIZARD and O'Mega. In addition, Fortran source code for SPheno is created which facilitates the determination of the particle spectrum using two-loop renormalization group equations and one-loop corrections to the masses. As an additional feature, the generated SPheno code can write out input files suitable for use with HiggsBounds to apply bounds coming from the Higgs searches to the model. Combining all programs provides a closed chain from model building to phenomenology. Program summary Program title: SUSY Phenomenology toolbox. Catalog identifier: AEMN_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMN_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 140206. No. of bytes in distributed program, including test data, etc.: 1319681. Distribution format: tar.gz. Programming language: Autoconf, Mathematica. Computer: PC running Linux, Mac. Operating system: Linux, Mac OS. Classification: 11.6. Nature of problem: Comprehensive studies of supersymmetric models beyond the MSSM are considerably complicated by the number of different tasks that have to be accomplished, including the calculation of the mass spectrum and the implementation of the model into tools for performing collider studies, calculating the dark matter density and checking the compatibility with existing collider bounds (in particular, from the Higgs searches). Solution method: The

  7. Tool for physics beyond the standard model

    Science.gov (United States)

    Newby, Christopher A.

    The standard model (SM) of particle physics is a well studied theory, but there are hints that the SM is not the final story. What the full picture is, no one knows, but this thesis looks into three methods useful for exploring a few of the possibilities. To begin I present a paper by Spencer Chang, Nirmal Raj, Chaowaroj Wanotayaroj, and me, that studies the Higgs boson. The scalar particle first seen in 2012 may be the vanilla SM version, but there is some evidence that its couplings are different than predicted. By means of increasing the Higgs' coupling to vector bosons and fermions, we can be more consistent with the data. Next, in a paper by Spencer Chang, Gabriel Barello, and me, we elaborate on a tool created to study dark matter (DM) direct detection. The original work by Anand et al. focused on elastic dark matter, whereas we extended this work to include the inelastic case, where different DM mass states enter and leave the collision. We also examine several direct detection experiments with our new framework to see if DAMA's modulation can be explained while avoiding the strong constraints imposed by the other experiments. We find that there are several operators that can do this. Finally, in a paper by Spencer Chang, Gabriel Barello, and me, we study an interesting phenomenon known as kinetic mixing, where two gauge bosons can share interactions with particles even though these particles aren't charged under both gauge groups. This, in and of itself, is not new, but we discuss a different method of obtaining this mixing where instead of mixing between two Abelian groups one of the groups is Nonabelian. Using this we then see that there is an inherent mass scale in the mixing strength, something that is absent in the Abelian-Abelian case. Furthermore, if the Nonabelian symmetry is the SU(2)L of the SM then the mass scale of the physics responsible for the mixing is about 1 TeV, right around the sweet spot for detection at the LHC. This dissertation

  8. GridTool: A surface modeling and grid generation tool

    Science.gov (United States)

    Samareh-Abolhassani, Jamshid

    1995-01-01

    GridTool is designed around the concept that the surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and associated surface parameterizations. However, the resulting surface grids are close to but not on the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI 'C', the interface is based on the FORMS library, and the graphics are based on the GL library. The code has been tested successfully on IRIS workstations running IRIX 4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked list, which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects such as points, curves, patches, sources and surfaces. At any given time, there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file, which will be discussed later.

  9. A Symplectic Multi-Particle Tracking Model for Self-Consistent Space-Charge Simulation

    CERN Document Server

    Qiang, Ji

    2016-01-01

    Symplectic tracking is important in accelerator beam dynamics simulation. So far, to the best of our knowledge, there is no self-consistent symplectic space-charge tracking model available in the accelerator community. In this paper, we present a two-dimensional and a three-dimensional symplectic multi-particle spectral model for space-charge tracking simulation. This model includes both the effect from external fields and the effect of self-consistent space-charge fields using a split-operator method. Such a model preserves the phase space structure and shows much less numerical emittance growth than the particle-in-cell model in the illustrative examples.
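
    The split-operator construction mentioned in the abstract composes the separately symplectic external-field and space-charge maps; in the usual second-order (Strang) form (a generic statement of the method, not the paper's exact notation):

    \[
    \mathcal{M}(\tau) \;=\; \mathcal{M}_{\mathrm{ext}}\!\left(\tfrac{\tau}{2}\right)\,
    \mathcal{M}_{\mathrm{sc}}(\tau)\,
    \mathcal{M}_{\mathrm{ext}}\!\left(\tfrac{\tau}{2}\right) + \mathcal{O}(\tau^{3}),
    \]

    where \(\mathcal{M}_{\mathrm{ext}}\) advances particles through the external fields and \(\mathcal{M}_{\mathrm{sc}}\) applies the self-consistent space-charge kick; since a composition of symplectic maps is symplectic, the phase-space structure is preserved, which is why the method shows less numerical emittance growth.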

  10. On the (in)consistency of a multi-model ensemble of the past 30 years land surface state.

    Science.gov (United States)

    Dutra, Emanuel; Schellekens, Jaap; Beck, Hylke; Balsamo, Gianpaolo

    2016-04-01

    Global land-surface and hydrological models are a fundamental tool in understanding the land-surface state and evolution, either coupled to atmospheric models for climate and weather predictions or in stand-alone mode. In this study we take a recently developed dataset consisting of stand-alone simulations by 10 global hydrological and land surface models sharing the same atmospheric forcing for the period 1979-2012 (the eartH2Observe dataset). This multi-model ensemble provides the first freely available dataset at such a spatial/temporal scale that allows for a characterization of the multi-model characteristics, such as inter-model consistency and the error-spread relationship. We present a metric for ensemble consistency using the concept of potential predictability, which can be interpreted as a proxy for the multi-model agreement. Initial results point to regions of low inter-model agreement in the polar and tropical regions, the latter also present when comparing globally available precipitation datasets. In addition to this, the discharge ensemble spread around the ensemble mean was compared to the error of the ensemble mean for several large-scale and small-scale basins. This showed a general under-estimation of the ensemble spread, particularly in tropical basins, suggesting that the current dataset lacks a representation of the precipitation uncertainty in the input meteorological data.
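
    As a rough illustration of the consistency metric mentioned above, potential predictability is commonly defined as the fraction of the total variance carried by the ensemble-mean signal (a generic definition; the exact formulation used in the study may differ):

    \[
    PP \;=\; \frac{\sigma^2_{\bar{x}}}{\sigma^2_{\mathrm{total}}},
    \qquad
    \bar{x}(t) = \frac{1}{M}\sum_{m=1}^{M} x_m(t),
    \]

    where \(\sigma^2_{\bar{x}}\) is the temporal variance of the ensemble mean and \(\sigma^2_{\mathrm{total}}\) the variance pooled over all members; values near 1 indicate that the models evolve coherently, while values near 0 indicate that inter-model spread dominates.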

  11. A CVAR scenario for a standard monetary model using theory-consistent expectations

    DEFF Research Database (Denmark)

    Juselius, Katarina

    2017-01-01

    A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...

  12. Induction generator models in dynamic simulation tools

    DEFF Research Database (Denmark)

    Knudsen, Hans; Akhmatov, Vladislav

    1999-01-01

    For AC networks with a large amount of induction generators (windmills) the paper demonstrates a significant discrepancy in the simulated voltage recovery after faults in weak networks when comparing dynamic and transient stability descriptions, and the reasons for the discrepancies are explained. It is found to be possible to include a transient model in dynamic stability tools and, then, obtain correct results also in dynamic tools. The representation of the rotating system influences the voltage recovery shape, which is an important observation in the case of windmills, where a heavy mill is connected...

  13. Development of a Kohn-Sham like potential in the Self-Consistent Atomic Deformation Model

    CERN Document Server

    Mehl, M J; Stokes, H T

    1996-01-01

    This is a brief description of how to derive the local "atomic" potentials from the Self-Consistent Atomic Deformation (SCAD) model density function. Particular attention is paid to the spherically averaged case.

  14. Development of a Kohn-Sham like potential in the Self-Consistent Atomic Deformation Model

    OpenAIRE

    Mehl, M. J.; Boyer, L. L.; Stokes, H. T.

    1996-01-01

    This is a brief description of how to derive the local "atomic" potentials from the Self-Consistent Atomic Deformation (SCAD) model density function. Particular attention is paid to the spherically averaged case.

  15. Bayesian nonparametric estimation and consistency of mixed multinomial logit choice models

    CERN Document Server

    De Blasi, Pierpaolo; Lau, John W; 10.3150/09-BEJ233

    2011-01-01

    This paper develops nonparametric estimation for discrete choice models based on the mixed multinomial logit (MMNL) model. It has been shown that MMNL models encompass all discrete choice models derived under the assumption of random utility maximization, subject to the identification of an unknown distribution $G$. Noting the mixture model description of the MMNL, we employ a Bayesian nonparametric approach, using nonparametric priors on the unknown mixing distribution $G$, to estimate choice probabilities. We provide an important theoretical support for the use of the proposed methodology by investigating consistency of the posterior distribution for a general nonparametric prior on the mixing distribution. Consistency is defined according to an $L_1$-type distance on the space of choice probabilities and is achieved by extending to a regression model framework a recent approach to strong consistency based on the summability of square roots of prior probabilities. Moving to estimation, slightly different te...
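
    In standard notation, the MMNL choice probabilities being estimated are mixtures of multinomial logit kernels over the unknown distribution \(G\) (a textbook statement of the model, included for orientation):

    \[
    P(y = j \mid x) \;=\; \int \frac{\exp\!\left(x_j^{\top}\beta\right)}{\sum_{k=1}^{J}\exp\!\left(x_k^{\top}\beta\right)}\; \mathrm{d}G(\beta),
    \]

    and the Bayesian nonparametric approach places a prior directly on \(G\) rather than restricting it to a parametric family.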

  16. Thermodynamically consistent mesoscopic fluid particle models for a van der Waals fluid

    OpenAIRE

    Serrano, Mar; Español, Pep

    2000-01-01

    The GENERIC structure allows for a unified treatment of different discrete models of hydrodynamics. We first propose a finite volume Lagrangian discretization of the continuum equations of hydrodynamics through the Voronoi tessellation. We then show that a slight modification of these discrete equations has the GENERIC structure. The GENERIC structure ensures thermodynamic consistency and allows for the introduction of correct thermal noise. In this way, we obtain a consistent discrete model ...

  17. WMT: The CSDMS Web Modeling Tool

    Science.gov (United States)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged
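
    As a concrete illustration of what driving a RESTful modeling service like WMT looks like from a script, the following Python sketch issues hypothetical requests; the server URL, endpoint paths, and JSON fields are illustrative assumptions, not the documented wmt-api interface.

        import requests

        BASE = "https://example-csdms-server/wmt-api"  # hypothetical server URL

        def list_components():
            """Fetch JSON-encoded metadata for available model components."""
            resp = requests.get(f"{BASE}/components")
            resp.raise_for_status()
            return resp.json()

        def submit_run(model_id, host):
            """Submit a saved model for execution on a remote resource."""
            payload = {"model": model_id, "host": host}  # assumed fields
            resp = requests.post(f"{BASE}/runs", json=payload)
            resp.raise_for_status()
            return resp.json()["run_id"]  # assumed response field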

  18. Comparison of BrainTool to other UML modeling and model transformation tools

    Science.gov (United States)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

    In the last 30 years, numerous model-generated software systems have been offered to address problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar claims regarding Unified Modeling Language (UML) models at different levels of abstraction, asserting that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features, starting with a model editor and a model repository for the traditional ones, and ending with a code generator (possibly using a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor to define new transformations for the most advanced ones. The present paper contains the results of a CASE tool (mainly UML editor) comparison against the level of automation they offer.

  19. Assessing the consistency between short-term global temperature trends in observations and climate model projections

    CERN Document Server

    Michaels, Patrick J; Christy, John R; Herman, Chad S; Liljegren, Lucia M; Annan, James D

    2013-01-01

    Assessing the consistency between short-term global temperature trends in observations and climate model projections is a challenging problem. While climate models capture many processes governing short-term climate fluctuations, they are not expected to simulate the specific timing of these somewhat random phenomena - the occurrence of which may impact the realized trend. Therefore, to assess model performance, we develop distributions of projected temperature trends from a collection of climate models running the IPCC A1B emissions scenario. We evaluate where observed trends of length 5 to 15 years fall within the distribution of model trends of the same length. We find that current trends lie near the lower limits of the model distributions, with cumulative probability-of-occurrence values typically between 5 percent and 20 percent, and probabilities below 5 percent not uncommon. Our results indicate cause for concern regarding the consistency between climate model projections and observed climate behavior...

  20. STRONGLY CONSISTENT ESTIMATION FOR A MULTIVARIATE LINEAR RELATIONSHIP MODEL WITH ESTIMATED COVARIANCE MATRIX

    Institute of Scientific and Technical Information of China (English)

    Yee LEUNG; WU Kefa; DONG Tianxin

    2001-01-01

    In this paper, a multivariate linear functional relationship model, where the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed. The estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.

  1. Physically-consistent subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    CERN Document Server

    Silvis, Maurits H

    2015-01-01

    Assuming a general constitutive relation for the turbulent stresses in terms of the local large-scale velocity gradient, we constructed a class of subgrid-scale models for large-eddy simulation that are consistent with important physical and mathematical properties. In particular, they preserve symmetries of the Navier-Stokes equations and exhibit the proper near-wall scaling. They furthermore show desirable dissipation behavior and are capable of describing nondissipative effects. We provided examples of such physically-consistent models and showed that existing subgrid-scale models do not all satisfy the desired properties.

  2. Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

    Science.gov (United States)

    Cartar, William; Mørk, Jesper; Hughes, Stephen

    2017-08-01

    We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) is contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various lengths N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N = 9, which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also
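
    Schematically, for a single two-level emitter with transition frequency $\omega_0$ and dipole moment $\mathbf{d}$, the optical Bloch equations solved alongside Maxwell's equations take the form (our notation, with pure dephasing $\gamma_d$, radiative decay $\Gamma$, and incoherent pump $P$ added phenomenologically as in the abstract):

        $$\dot{\rho}_{eg} = -(i\omega_0 + \gamma_d)\,\rho_{eg} + \frac{i\,\mathbf{d}\cdot\mathbf{E}}{\hbar}\,(\rho_{gg} - \rho_{ee}),$$
        $$\dot{\rho}_{ee} = -\Gamma\,\rho_{ee} + P\,\rho_{gg} + \frac{i\,\mathbf{d}\cdot\mathbf{E}}{\hbar}\,(\rho_{ge} - \rho_{eg}),$$

    with the resulting macroscopic polarization fed back into the FDTD update of $\mathbf{E}$.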

  3. The fundamental solution for a consistent complex model of the shallow shell equations

    OpenAIRE

    Matthew P. Coleman

    1999-01-01

    The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for th...

  4. Consistent Fundamental Matrix Estimation in a Quadratic Measurement Error Model Arising in Motion Analysis

    OpenAIRE

    Kukush, A.; Markovsky, I.; Van Huffel, S.

    2002-01-01

    Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.

  5. Neural Networks for Hydrological Modeling Tool for Operational Purposes

    Science.gov (United States)

    Bhatt, Divya; Jain, Ashu

    2010-05-01

    Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydro power generation, water supply, erosion and sediment control, etc. Estimates of runoff are needed in many water resources planning, design, development, operation and maintenance activities. Runoff is generally computed using rainfall-runoff models. Computer-based hydrologic models have become popular for obtaining hydrological forecasts and for managing water systems. The Rainfall-Runoff Library (RRL) is computer software developed by the Cooperative Research Centre for Catchment Hydrology (CRCCH), Australia, consisting of five different conceptual rainfall-runoff models, and has been in operation in many water resources applications in Australia. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been adopted in operational hydrological forecasts. There is a strong need to develop ANN models based on real catchment data and compare them with the conceptual models actually in use in real catchments. In this paper, the results from an investigation on the use of RRL and ANNs are presented. Out of the five conceptual models in the RRL toolkit, the SimHyd model has been used. A genetic algorithm (GA) has been used as the optimizer in the RRL to calibrate the SimHyd model. Trial-and-error procedures were employed to arrive at the best values of the various parameters involved in the GA optimizer. The results obtained from the best configuration of the SimHyd model are presented here. A feed-forward neural network structure trained by the back-propagation algorithm has been adopted here to develop the ANN models. The daily rainfall and runoff data derived from Bird Creek Basin, Oklahoma, USA have been employed to develop all the models included here. A wide range of error statistics have been used to evaluate the performance of all the models
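
    The kind of feed-forward, back-propagation-trained network described above can be sketched in a few lines; the following Python fragment uses scikit-learn's MLPRegressor as a stand-in, and the file name, lag structure, and calibration/validation split are illustrative assumptions, not the paper's setup.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical two-column file: daily rainfall and runoff
        data = np.loadtxt("bird_creek_daily.csv", delimiter=",")
        rain, flow = data[:, 0], data[:, 1]

        # Inputs: recent rainfall and runoff; target: next day's runoff
        X = np.column_stack([rain[1:-1], rain[:-2], flow[1:-1]])
        y = flow[2:]

        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                           random_state=0)
        ann.fit(X[:3000], y[:3000])             # calibration period
        print(ann.score(X[3000:], y[3000:]))    # R^2 over validation period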

  6. The Science Consistency Review: A Tool To Evaluate the Use of Scientific Information in Land Management Decisionmaking

    Science.gov (United States)

    James M. Guldin; David Cawrse; Russell Graham; Miles Hemstrom; Linda Joyce; Steve Kessler; Ranotta McNair; George Peterson; Charles G. Shaw; Peter Stine; Mark Twery; Jeffrey Walter

    2003-01-01

    The paper outlines a process called the science consistency review, which can be used to evaluate the use of scientific information in land management decisions. Developed with specific reference to land management decisions in the U.S. Department of Agriculture Forest Service, the process involves assembling a team of reviewers under a review administrator to...

  7. Towards Automatic Validation and Healing of CityGML Models for Geometric and Semantic Consistency

    Science.gov (United States)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. Like in many other domains, data quality is recognized as a key factor for successful business. Quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data but still common standards defining correct geometric modeling are not precise enough to define a sound base for data validation of 3D city models. Although the workflow for 3D city models is well-established from data acquisition to processing, analysis and visualization, quality management is not yet a standard during this workflow. Processing data sets with unclear specification leads to erroneous results and application defects. We show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.
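
    One of the simplest geometric consistency checks that a CityGML validator performs is that every linear ring bounding a polygon must be closed and contain enough distinct vertices. The Python fragment below is an illustrative toy check, not the tool described in the paper.

        def ring_is_valid(ring, tol=1e-9):
            """ring: list of (x, y, z) vertex tuples; last vertex repeats the first."""
            if len(ring) < 4:  # a triangle plus the repeated closing vertex
                return False
            closed = all(abs(a - b) < tol for a, b in zip(ring[0], ring[-1]))
            distinct = len({tuple(v) for v in ring[:-1]}) == len(ring) - 1
            return closed and distinct

        # A closed triangular ring passes the check
        print(ring_is_valid([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 0, 0)]))  # True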

  8. Collaboro: a collaborative (meta) modeling tool

    Directory of Open Access Journals (Sweden)

    Javier Luis Cánovas Izquierdo

    2016-10-01

    Software development is becoming more and more collaborative, emphasizing the role of end-users in the development process to make sure the final product will satisfy customer needs. This is especially relevant when developing Domain-Specific Modeling Languages (DSMLs), which are modeling languages specifically designed to carry out the tasks of a particular domain. While end-users are actually the experts of the domain for which a DSML is developed, their participation in the DSML specification process is still rather limited nowadays. In this paper, we propose a more community-aware language development process by enabling the active participation of all community members (both developers and end-users) from the very beginning. Our proposal, called Collaboro, is based on a DSML itself, enabling the representation of change proposals during the language design and the discussion (and trace-back) of possible solutions, comments and decisions arising during the collaboration. Collaboro also incorporates a metric-based recommender system to help community members to define high-quality notations for the DSMLs. We also show how Collaboro can be used at the model level to facilitate the collaborative specification of software models. Tool support is available both as an Eclipse plug-in and a web-based solution.

  9. Self-consistent models of quasi-relaxed rotating stellar systems

    CERN Document Server

    Varri, A L

    2012-01-01

    Two new families of self-consistent axisymmetric truncated equilibrium models for the description of quasi-relaxed rotating stellar systems are presented. The first extends the spherical King models to the case of solid-body rotation. The second is characterized by differential rotation, designed to be rigid in the central regions and to vanish in the outer parts, where the energy truncation becomes effective. The models are constructed by solving the nonlinear Poisson equation for the self-consistent mean-field potential. For rigidly rotating configurations, the solutions are obtained by an asymptotic expansion on the rotation strength parameter. The differentially rotating models are constructed by means of an iterative approach based on a Legendre series expansion of the density and the potential. The two classes of models exhibit complementary properties. The rigidly rotating configurations are flattened toward the equatorial plane, with deviations from spherical symmetry that increase with the distance f...
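
    The spherical King family that the rigidly rotating models extend is defined by the truncated isothermal distribution function (quoted here in a standard form for orientation; the rotating generalizations in the paper modify this starting point):

        $$f(\epsilon) = \begin{cases} A\left(e^{\epsilon/\sigma^2} - 1\right), & \epsilon > 0, \\ 0, & \epsilon \le 0, \end{cases}$$

    where $\epsilon = \Psi - v^2/2$ is the relative energy, $\sigma$ sets the velocity scale, and the truncation at $\epsilon = 0$ produces a finite system.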

  10. Self-consistent core-pedestal transport simulations with neural network accelerated models

    Science.gov (United States)

    Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.

    2017-08-01

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
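
    The surrogate idea itself is simple to demonstrate: fit a fast regressor to input/output pairs sampled from an expensive physics model, then call the regressor inside the self-consistent loop. The sketch below is a toy version with synthetic stand-in physics, not the TGLF/EPED training pipeline.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def expensive_flux_model(params):
            """Stand-in for a theory-based transport model such as TGLF."""
            return np.sin(params).sum(axis=1)  # toy physics only

        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, size=(5000, 8))  # toy plasma parameters
        Y = expensive_flux_model(X)

        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        surrogate.fit(X, Y)
        # The trained surrogate now evaluates in microseconds and can be
        # embedded in a self-consistent core-pedestal iteration.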

  11. A pandemic influenza modeling and visualization tool

    Energy Technology Data Exchange (ETDEWEB)

    Maciejewski, Ross; Livengood, Philip; Rudolph, Stephen; Collins, Timothy F.; Ebert, David S.; Brigantic, Robert T.; Corley, Courtney D.; Muller, George A.; Sanders, Stephen W.

    2011-08-01

    The National Strategy for Pandemic Influenza outlines a plan for community response to a potential pandemic. In this outline, state and local communities are charged with enhancing their preparedness. In order to help public health officials better understand these charges, we have developed a modeling and visualization toolkit (PanViz) for analyzing the effect of decision measures implemented during a simulated pandemic influenza scenario. Spread vectors based on the point of origin and distance traveled over time are calculated, and the factors of age distribution and population density are taken into account. Healthcare officials are able to explore the effects of the pandemic on the population through a spatiotemporal view, moving forward and backward through time and inserting decision points at various days to determine the impact. Linked statistical displays are also shown, providing county-level summaries of data in terms of the number of sick, hospitalized and dead as a result of the outbreak. Currently, this tool has been deployed in Indiana State Department of Health planning and preparedness exercises, and as an educational tool for demonstrating the impact of social distancing strategies during the recent H1N1 (swine flu) outbreak.
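
    PanViz itself is a purpose-built toolkit, but the underlying pattern of stepping an outbreak model through time and inserting decision points can be illustrated with a minimal SIR model; all parameters and the intervention mechanism below are invented for illustration.

        def simulate(days, beta, gamma, s, i, r, interventions=None):
            """Step a discrete-time SIR model; interventions: {day: new_beta}."""
            n = s + i + r
            history = []
            for day in range(days):
                if interventions and day in interventions:
                    beta = interventions[day]  # e.g., a social distancing decision
                new_inf = beta * s * i / n
                new_rec = gamma * i
                s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
                history.append((day, s, i, r))
            return history

        # Compare an uncontrolled outbreak with distancing imposed on day 20
        base = simulate(120, 0.3, 0.1, 99990.0, 10.0, 0.0)
        ctrl = simulate(120, 0.3, 0.1, 99990.0, 10.0, 0.0, interventions={20: 0.15})
        print(max(h[2] for h in base), max(h[2] for h in ctrl))  # peak infected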

  12. Collaborative Inquiry Learning: Models, tools, and challenges

    Science.gov (United States)

    Bell, Thorsten; Urhahne, Detlef; Schanze, Sascha; Ploetzner, Rolf

    2010-02-01

    Collaborative inquiry learning is one of the most challenging and exciting ventures for today's schools. It aims at bringing a new and promising culture of teaching and learning into the classroom where students in groups engage in self-regulated learning activities supported by the teacher. It is expected that this way of learning fosters students' motivation and interest in science, that they learn to perform steps of inquiry similar to scientists and that they gain knowledge on scientific processes. Starting from general pedagogical reflections and science standards, the article reviews some prominent models of inquiry learning. This comparison results in a set of inquiry processes being the basis for cooperation in the scientific network NetCoIL. Inquiry learning is conceived in several ways with emphasis on different processes. For an illustration of the spectrum, some main conceptions of inquiry and their focuses are described. In the next step, the article describes exemplary computer tools and environments from within and outside the NetCoIL network that were designed to support processes of collaborative inquiry learning. These tools are analysed by describing their functionalities as well as effects on student learning known from the literature. The article closes with challenges for further developments elaborated by the NetCoIL network.

  13. The Spectrum of the Baryon Masses in a Self-consistent SU(3) Quantum Skyrme Model

    CERN Document Server

    Jurciukonis, Darius; Regelskis, Vidas

    2012-01-01

    The semiclassical SU(3) Skyrme model is traditionally considered as describing a rigid quantum rotator with the profile function being fixed by the classical solution of the corresponding SU(2) Skyrme model. In contrast, we go beyond the classical profile function by quantizing the SU(3) Skyrme model canonically. The quantization of the model is performed in terms of the collective coordinate formalism and leads to the establishment of purely quantum corrections of the model. These new corrections are of fundamental importance. They are crucial in obtaining stable quantum solitons of the quantum SU(3) Skyrme model, thus making the model self-consistent and not dependent on the classical solution of the SU(2) case. We show that such a treatment of the model leads to a family of stable quantum solitons that describe the baryon octet and decuplet and reproduce the experimental values of their masses.

  14. A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems

    Directory of Open Access Journals (Sweden)

    R. Dimitri

    2014-07-01

    Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated, and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted here, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.
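
    For orientation, the pure mode-I normal traction of the exponential Xu-Needleman family discussed above can be written as (a standard form; the coupled mixed-mode expressions analyzed in the paper are more involved):

        $$T_n(\Delta_n) = \frac{\phi_n}{\delta_n}\,\frac{\Delta_n}{\delta_n}\,\exp\!\left(-\frac{\Delta_n}{\delta_n}\right),$$

    where $\phi_n$ is the mode-I work of separation and $\delta_n$ the characteristic opening length; integrating $T_n$ over $\Delta_n \in [0, \infty)$ recovers $\phi_n$.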

  15. Metrics and tools for consistent cohort discovery and financial analyses post-transition to ICD-10-CM.

    Science.gov (United States)

    Boyd, Andrew D; Li, Jianrong John; Kenost, Colleen; Joese, Binoy; Yang, Young Min; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A

    2015-05-01

    In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex as 78% of their diagnosis codes are labeled as "convoluted" by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: "identity" (reciprocal), "class-to-subclass," "subclass-to-class," "convoluted," or "no mapping." These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/ Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9.
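
    The translation-complexity categories above can be computed mechanically from a code-mapping table. The fragment below is a simplified sketch of that classification logic; the input format and the exact decision rules are illustrative assumptions, not the authors' published algorithm.

        from collections import defaultdict

        def classify(pairs):
            """pairs: iterable of (icd9, icd10) mapping tuples."""
            fwd, back = defaultdict(set), defaultdict(set)
            for i9, i10 in pairs:
                fwd[i9].add(i10)
                back[i10].add(i9)
            labels = {}
            for i9, targets in fwd.items():
                if len(targets) == 1:
                    t = next(iter(targets))
                    labels[i9] = "identity" if back[t] == {i9} else "subclass-to-class"
                else:
                    # one-to-many: call it convoluted if any target also maps
                    # back to several ICD-9 codes, else class-to-subclass
                    many_back = any(len(back[t]) > 1 for t in targets)
                    labels[i9] = "convoluted" if many_back else "class-to-subclass"
            return labels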

  16. Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2008-01-01

    A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs) using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.

  17. Analytical model for effect of temperature variation on PSF consistency in wavefront coding infrared imaging system

    Science.gov (United States)

    Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong

    2016-05-01

    The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on the PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual and room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a calculating method for PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than the PSF consistency by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much slighter than that of the auxiliary lens group. A large form parameter of the CPM will introduce large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.

  18. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

    We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest in a risk-free asset and a risky asset. Short-selling and borrowing money are allowed. Due to the lack of the iterated-expectation property, the Bellman Optimization Principle does not hold. Thus we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that these different strategies have different advantages: the former maximizes the value function at the initial time t=0, while the latter is time-consistent over the whole time horizon. Finally, numerical analysis is presented for our results.

  19. A thermodynamically consistent phase-field model for two-phase flows with thermocapillary effects

    CERN Document Server

    Guo, Zhenlin

    2014-01-01

    In this paper, we develop a phase-field model for a binary incompressible fluid with thermocapillary effects, which allows for different properties (densities, viscosities and heat conductivities) for each component while maintaining thermodynamic consistency. The governing equations of the model, including the Navier-Stokes equations, Cahn-Hilliard equations and energy balance equation, are derived together within a thermodynamic framework based on entropy generation, which guarantees the thermodynamic consistency. The sharp-interface limit analysis is carried out to show that the interfacial conditions of the classical sharp-interface models can be recovered from our phase-field model. Moreover, some numerical examples, including thermocapillary migration of a bubble and thermocapillary convection in a two-layer fluid system, are computed by using a continuous finite element method. The results are compared to the existing analytical solutions and theoretical predictions as validations for our mod...

  20. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    CERN Document Server

    Aghamousa, Amir

    2015-01-01

    We present a novel approach to test the consistency of the cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology, considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare between the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit $\Lambda$CDM model at $95\% (\sim 2\sigma)$ confidence distance from the center of the nonparametri...

  1. A simplified "benchmark" Stock-Flow Consistent (SFC) post-Keynesian growth model

    OpenAIRE

    Cláudio H. dos Santos; Zezza, Gennaro

    2007-01-01

    Despite being arguably one of the most active areas of research in heterodox macroeconomics, the study of the dynamic properties of stock-flow consistent (SFC) growth models of financially sophisticated economies is still in its early stages. This paper attempts to offer a contribution to this line of research by presenting a simplified Post-Keynesian SFC growth model with well-defined dynamic properties, and using it to shed light on the merits and limitations of the current heterodox SFC li...

  2. A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models

    OpenAIRE

    Holte, Sarah E.

    2016-01-01

    Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation for parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method since it is a modification of least squares methods known to be biased. Co...
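
    The direct (non-iterative) idea underlying such methods can be seen in one dimension: approximate the derivative by finite differences and regress it on the model's right-hand side, which is linear in the unknown parameter. The sketch below uses synthetic noise-free data; with noisy data this naive estimator is biased, which is what a bias correction of this kind addresses.

        import numpy as np

        t = np.linspace(0.0, 10.0, 101)
        x = 3.0 * np.exp(-0.7 * t)   # synthetic data from dx/dt = -k*x with k = 0.7

        dxdt = np.gradient(x, t)     # finite-difference derivative estimate
        # Least-squares fit of dx/dt = -k*x, i.e. regress dxdt on x
        k_hat = -np.linalg.lstsq(x[:, None], dxdt, rcond=None)[0][0]
        print(k_hat)                 # close to 0.7 for noise-free data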

  3. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

    Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives developing tools for clinical information modeling identified, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.

  4. Comment on Self-Consistent Model of Black Hole Formation and Evaporation

    CERN Document Server

    Ho, Pei-Ming

    2015-01-01

    In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.

  5. Consistent phase-change modeling for CO2-based heat mining operation

    DEFF Research Database (Denmark)

    Singh, Ashok Kumar; Veje, Christian

    2017-01-01

    ... the liquid–gas phase transition with more accuracy and consistency. The calculation of fluid properties and the saturation state was based on the volume-translated Peng–Robinson equation of state, and the results were verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper...

  6. Comment on self-consistent model of black hole formation and evaporation

    Energy Technology Data Exchange (ETDEWEB)

    Ho, Pei-Ming [Department of Physics and Center for Theoretical Sciences, Center for Advanced Study in Theoretical Sciences,National Taiwan University, Taipei 106, Taiwan, R.O.C. (China)

    2015-08-18

    In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.

  7. Spatial coincidence modelling, automated database updating and data consistency in vector GIS.

    NARCIS (Netherlands)

    Kufoniyi, O.

    1995-01-01

    This thesis presents formal approaches for automated database updating and consistency control in vector- structured spatial databases. To serve as a framework, a conceptual data model is formalized for the representation of geo-data from multiple map layers in which a map layer denotes a set of ter

  8. A General Pressure Gradient Formulation for Ocean Models - Part II: Energy, Momentum, and Bottom Torque Consistency

    Science.gov (United States)

    Song, Y.; Wright, D.

    1998-01-01

    A formulation of the pressure gradient force for use in models with topography-following coordinates is proposed and diagnostically analyzed by Song. We investigate numerical consistency with respect to global energy conservation, depth-integrated momentum changes, and the representation of the bottom pressure torque.

  9. Subjective Confidence in Perceptual Judgments: A Test of the Self-Consistency Model

    Science.gov (United States)

    Koriat, Asher

    2011-01-01

    Two questions about subjective confidence in perceptual judgments are examined: the bases for these judgments and the reasons for their accuracy. Confidence in perceptual judgments has been claimed to rest on qualitatively different processes than confidence in memory tasks. However, predictions from a self-consistency model (SCM), which had been…

  10. Subjective Confidence in Perceptual Judgments: A Test of the Self-Consistency Model

    Science.gov (United States)

    Koriat, Asher

    2011-01-01

    Two questions about subjective confidence in perceptual judgments are examined: the bases for these judgments and the reasons for their accuracy. Confidence in perceptual judgments has been claimed to rest on qualitatively different processes than confidence in memory tasks. However, predictions from a self-consistency model (SCM), which had been…

  11. STRONG CONSISTENCY OF M ESTIMATOR IN LINEAR MODEL FOR NEGATIVELY ASSOCIATED SAMPLES

    Institute of Scientific and Technical Information of China (English)

    Qunying WU

    2006-01-01

    This paper discusses the strong consistency of the M estimator of the regression parameter in a linear model for negatively associated (NA) samples. As a result, the author extends Theorem 1 and Theorem 2 of Shanchao YANG (2002) to NA errors without imposing any extra condition.

  12. DATA QUALITY TOOLS FOR DATAWAREHOUSE MODELS

    Directory of Open Access Journals (Sweden)

    JASPREETI SINGH

    2015-05-01

    Data quality tools aim at detecting and correcting data problems that influence the accuracy and efficiency of data analysis applications. Data warehousing activities require data quality tools to ready the data and ensure that clean data populates the warehouse, thus raising its usability. This research targets the problems in the data that are addressed by data quality tools. We classify data quality tools based on the data warehouse stages and the features of each tool that address data quality problems, and we describe their functionalities.

  13. Building self-consistent, short-term earthquake probability (STEP) models: improved strategies and calibration procedures

    Directory of Open Access Journals (Sweden)

    Damiano Monelli

    2010-11-01

    We present here two self-consistent implementations of a short-term earthquake probability (STEP) model that produces daily seismicity forecasts for the area of the Italian national seismic network. Both implementations combine a time-varying and a time-invariant contribution, for which we assume that the instrumental Italian earthquake catalog provides the best information. For the time-invariant contribution, the catalog is declustered using the clustering technique of the STEP model; the smoothed seismicity model is generated from the declustered catalog. The time-varying contribution is what distinguishes the two implementations: (1) for one implementation (STEP-LG), the original model parameterization and estimation is used; (2) for the other (STEP-NG), the mean abundance method is used to estimate aftershock productivity. In the STEP-NG implementation, earthquakes with magnitude up to ML = 6.2 are expected to be less productive compared to the STEP-LG implementation, whereas larger earthquakes are expected to be more productive. We have retrospectively tested the performance of these two implementations and applied likelihood tests to evaluate their consistency with observed earthquakes. Both implementations were consistent with the observed earthquake data in space; STEP-NG performed better than STEP-LG in terms of forecast rates. More generally, we found that testing earthquake forecasts issued at regular intervals does not test the full power of clustering models, and future experiments should allow for more frequent forecasts starting at the times of triggering events.
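
    STEP-type forecasts are commonly built on an aftershock rate of the Reasenberg-Jones form (shown here for orientation; the two implementations above differ in how the productivity term is estimated):

        $$\lambda(t, M) = 10^{\,a + b\,(M_m - M)}\,(t + c)^{-p},$$

    where $M_m$ is the mainshock magnitude, $t$ the time since the mainshock, and $a$, $b$, $c$, $p$ are empirically fitted constants.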

  14. Viscoelasticity behavior for finite deformations, using a consistent hypoelastic model based on Rivlin materials

    Science.gov (United States)

    Altmeyer, Guillaume; Panicaud, Benoit; Rouhaud, Emmanuelle; Wang, Mingchuan; Roos, Arjen; Kerner, Richard

    2016-11-01

    When constructing viscoelastic models, rate-form relations appear naturally to relate strain and stress tensors. One has to ensure that these tensors and their rates are indifferent with respect to the change of observers and to the superposition with rigid body motions. Objective transports are commonly accepted to ensure this invariance. However, the large number of transport operators developed makes the choice often difficult for the user and may lead to physically inconsistent formulation of hypoelasticity. In this paper, a methodology based on the use of the Lie derivative is proposed to model consistent hypoelasticity as an equivalent incremental formulation of hyperelasticity. Both models are shown to be reversible and completely equivalent. Extension to viscoelasticity is then proposed from this consistent model by associating consistent hypoelastic models with viscous behavior. As an illustration, Mooney-Rivlin nonlinear elasticity is coupled with Newton viscosity and a Maxwell-like material is investigated. Numerical solutions are then presented to illustrate a viscoelastic material subjected to finite deformations for a large range of strain rates.
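
    A common realization of the Lie-derivative construction mentioned above is the upper-convected (Oldroyd) rate of the Kirchhoff stress (a standard definition, quoted for orientation; the paper's precise transport operator may differ):

        $$\mathcal{L}_v \boldsymbol{\tau} = \dot{\boldsymbol{\tau}} - \boldsymbol{l}\,\boldsymbol{\tau} - \boldsymbol{\tau}\,\boldsymbol{l}^{\mathsf{T}}, \qquad \boldsymbol{l} = \dot{\boldsymbol{F}}\,\boldsymbol{F}^{-1},$$

    and a hypoelastic law then takes the incremental form $\mathcal{L}_v \boldsymbol{\tau} = \mathbb{C} : \boldsymbol{d}$, with $\boldsymbol{d}$ the rate-of-deformation tensor.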

  15. Consistent interpretation of molecular simulation kinetics using Markov state models biased with external information

    CERN Document Server

    Rudzinski, Joseph F; Bereau, Tristan

    2016-01-01

    Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically ...

  16. Consistency and consensus models for group decision-making with uncertain 2-tuple linguistic preference relations

    Science.gov (United States)

    Zhang, Zhen; Guo, Chonghui

    2016-08-01

    Due to the uncertainty of the decision environment and the lack of knowledge, decision-makers may use uncertain linguistic preference relations to express their preferences over alternatives and criteria. For group decision-making problems with preference relations, it is important to consider the individual consistency and the group consensus before aggregating the preference information. In this paper, consistency and consensus models for group decision-making with uncertain 2-tuple linguistic preference relations (U2TLPRs) are investigated. First of all, a formula which can construct a consistent U2TLPR from the original preference relation is presented. Based on the consistent preference relation, the individual consistency index for a U2TLPR is defined. An iterative algorithm is then developed to improve the individual consistency of a U2TLPR. To help decision-makers reach consensus in group decision-making under uncertain linguistic environment, the individual consensus and group consensus indices for group decision-making with U2TLPRs are defined. Based on the two indices, an algorithm for consensus reaching in group decision-making with U2TLPRs is also developed. Finally, two examples are provided to illustrate the effectiveness of the proposed algorithms.

  17. The fundamental solution for a consistent complex model of the shallow shell equations

    Directory of Open Access Journals (Sweden)

    Matthew P. Coleman

    1999-09-01

    The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for the particular cases of the shallow spherical and circular cylindrical shells, and the results of the latter are seen to be in agreement with results appearing elsewhere in the literature.

  18. Tests and applications of self-consistent cranking in the interacting boson model

    CERN Document Server

    Kuyucak, S; Kuyucak, Serdar; Sugita, Michiaki

    1999-01-01

    The self-consistent cranking method is tested by comparing the cranking calculations in the interacting boson model with the exact results obtained from the SU(3) and O(6) dynamical symmetries and from numerical diagonalization. The method is used to study the spin dependence of shape variables in the $sd$ and $sdg$ boson models. When realistic sets of parameters are used, both models lead to similar results: axial shape is retained with increasing cranking frequency while fluctuations in the shape variable $\\gamma$ are slightly reduced.

  19. Consistency maintenance for constraint in role-based access control model

    Institute of Scientific and Technical Information of China (English)

    韩伟力; 陈刚; 尹建伟; 董金祥

    2002-01-01

    Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in RBAC model. Based on researches of constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper introduces briefly the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.

  20. Consistency maintenance for constraint in role-based access control model

    Institute of Scientific and Technical Information of China (English)

    韩伟力; 陈刚; 尹建伟; 董金祥

    2002-01-01

    Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in RBAC model. Based on researches of constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper introduces briefly the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.

  1. Ecotoxicological mechanisms and models in an impact analysis tool for oil spills

    NARCIS (Netherlands)

    Laender, de F.; Olsen, G.H.; Frost, T.; Grosvik, B.E.; Klok, T.C.

    2011-01-01

    In an international collaborative effort, an impact analysis tool is being developed to predict the effect of accidental oil spills on recruitment and production of Atlantic cod (Gadus morhua) in the Barents Sea. The tool consisted of three coupled ecological models that describe (1) plankton biomas

  2. Ecotoxicological mechanisms and models in an impact analysis tool for oil spills

    NARCIS (Netherlands)

    Laender, de F.; Olsen, G.H.; Frost, T.; Grosvik, B.E.; Klok, T.C.

    2011-01-01

    In an international collaborative effort, an impact analysis tool is being developed to predict the effect of accidental oil spills on recruitment and production of Atlantic cod (Gadus morhua) in the Barents Sea. The tool consisted of three coupled ecological models that describe (1) plankton

  3. A New Hierarchy of Phylogenetic Models Consistent with Heterogeneous Substitution Rates.

    Science.gov (United States)

    Woodhams, Michael D; Fernández-Sánchez, Jesús; Sumner, Jeremy G

    2015-07-01

    When the process underlying DNA substitutions varies across evolutionary history, some standard Markov models underlying phylogenetic methods are mathematically inconsistent. The most prominent example is the general time-reversible model (GTR) together with some, but not all, of its submodels. To rectify this deficiency, nonhomogeneous Lie Markov models have been identified as the class of models that are consistent in the face of a changing process of DNA substitutions regardless of taxon sampling. Some well-known models in popular use are within this class, but are either overly simplistic (e.g., the Kimura two-parameter model) or overly complex (the general Markov model). On a diverse set of biological data sets, we test a hierarchy of Lie Markov models spanning the full range of parameter richness. Compared against the benchmark of the ever-popular GTR model, we find that as a whole the Lie Markov models perform well, with the best performing models having 8-10 parameters and the ability to recognize the distinction between purines and pyrimidines. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
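
    At the simple end of the hierarchy tested above sits the Kimura two-parameter model, whose rate matrix in the nucleotide order (A, G, C, T) is (a standard presentation, included for orientation):

        $$Q = \begin{pmatrix} \cdot & \kappa & 1 & 1 \\ \kappa & \cdot & 1 & 1 \\ 1 & 1 & \cdot & \kappa \\ 1 & 1 & \kappa & \cdot \end{pmatrix},$$

    with transitions (A-G, C-T) occurring at relative rate $\kappa$, transversions at rate 1, and diagonal entries chosen so that each row sums to zero; the richer Lie Markov models interpolate between this and the general Markov model.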

  4. Self-consistent chaotic transport in a high-dimensional mean-field Hamiltonian map model

    CERN Document Server

    Martínez-del-Río, D; Olvera, A; Calleja, R

    2016-01-01

    Self-consistent chaotic transport is studied in a Hamiltonian mean-field model. The model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of $N$ coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of th...
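
    A toy version of the self-consistent coupling described above replaces the constant kick of the standard map with an amplitude and phase computed from the instantaneous mean field of all particles; the parameter values below are illustrative only.

        import numpy as np

        rng = np.random.default_rng(1)
        N, steps, coupling = 10000, 200, 1.2
        theta = rng.uniform(0.0, 2 * np.pi, N)
        p = rng.uniform(-0.5, 0.5, N)

        for _ in range(steps):
            mean_field = np.exp(1j * theta).mean()   # dynamical amplitude/phase
            amp, phase = np.abs(mean_field), np.angle(mean_field)
            p = p + coupling * amp * np.sin(theta - phase)  # self-consistent kick
            theta = (theta + p) % (2 * np.pi)

        print("final mean-field amplitude:", np.abs(np.exp(1j * theta).mean()))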

  5. Consistency and asymptotic normality of profile-kernel and backfitting estimators in semiparametric reproductive dispersion nonlinear models

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Semiparametric reproductive dispersion nonlinear model (SRDNM) is an extension of nonlinear reproductive dispersion models and semiparametric nonlinear regression models, and includes semiparametric nonlinear model and semiparametric generalized linear model as its special cases. Based on the local kernel estimate of nonparametric component, profile-kernel and backfitting estimators of parameters of interest are proposed in SRDNM, and theoretical comparison of both estimators is also investigated in this paper. Under some regularity conditions, strong consistency and asymptotic normality of two estimators are proved. It is shown that the backfitting method produces a larger asymptotic variance than that for the profile-kernel method. A simulation study and a real example are used to illustrate the proposed methodologies.

  6. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  7. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure described by a pressure diffusion equation and the hypoplastic material behavior obeying a transport equation are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.

  8. A control-oriented self-consistent model of an inductively-coupled plasma

    Science.gov (United States)

    Keville, Bernard; Turner, Miles

    2009-10-01

    An essential first step in the design of real-time control algorithms for plasma processes is to determine dynamical relationships between actuator quantities, such as gas flow rate set points, and plasma states, such as electron density. An ideal first-principles-based, control-oriented model should exhibit the simplicity and computational requirements of an empirical model and, in addition, despite sacrificing first-principles detail, capture enough of the essential physics and chemistry of the process to provide reasonably accurate qualitative predictions. This presentation describes a control-oriented model of a cylindrical low-pressure planar inductive discharge with a stove-top antenna. The model consists of an equivalent circuit coupled to a global model of the plasma chemistry, yielding a self-consistent zero-dimensional model of the discharge. The non-local plasma conductivity and the fields in the plasma are determined from the wave equation and the two-term solution of the Boltzmann equation. Expressions for the antenna impedance and the parameters of the transformer equivalent circuit in terms of the isotropic electron distribution and the geometry of the chamber are presented.
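
    A hedged sketch of the global-model ingredient described above: a volume-averaged particle balance fixes the electron temperature and a power balance then fixes the electron density. The geometry, gas (argon), and rate-coefficient fit below are illustrative textbook-style assumptions, not values from this presentation.

        import numpy as np
        from scipy.optimize import brentq

        e, M = 1.602e-19, 6.6e-26     # C; kg (argon ion mass, assumed)
        V, A_eff = 1e-3, 0.03         # m^3 plasma volume; m^2 effective loss area (assumed)
        n_g = 1e20                    # m^-3 neutral gas density (assumed)
        P_abs = 500.0                 # W absorbed power (assumed)
        E_T = 50.0                    # eV lost per electron-ion pair created (assumed)

        K_iz = lambda Te: 2.34e-14 * Te**0.59 * np.exp(-17.44 / Te)  # m^3/s, textbook Ar fit
        u_B = lambda Te: np.sqrt(e * Te / M)                          # Bohm speed

        # Particle balance: volume ionization = Bohm loss at the surfaces -> T_e
        Te = brentq(lambda T: K_iz(T) * n_g * V - u_B(T) * A_eff, 0.5, 10.0)
        # Power balance -> electron density
        n_e = P_abs / (e * u_B(Te) * A_eff * E_T)
        print(f"T_e = {Te:.2f} eV, n_e = {n_e:.2e} m^-3")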

  9. Consistent increase in Indian monsoon rainfall and its variability across CMIP-5 models

    Directory of Open Access Journals (Sweden)

    A. Menon

    2013-01-01

    The possibility of an impact of global warming on the Indian monsoon is of critical importance for the large population of this region. Future projections within the Coupled Model Intercomparison Project Phase 3 (CMIP-3) showed a wide range of trends with varying magnitude and sign across models. Here the Indian summer monsoon rainfall is evaluated in 20 CMIP-5 models for the period 1850 to 2100. The new generation of climate models shows a consistent increase in seasonal mean rainfall during the summer monsoon period. All models simulate stronger seasonal mean rainfall in the future compared to the historic period under the strongest warming scenario, RCP-8.5. The increase in seasonal mean rainfall is largest for the RCP-8.5 scenario compared to the other RCPs. The interannual variability of the Indian monsoon rainfall also shows a consistent positive trend under unabated global warming. Since both the long-term increase in monsoon rainfall and the increase in its interannual variability are robust across a wide range of models, some confidence can be attributed to these projected trends.
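
    A hedged sketch of the kind of diagnostic described above: the trend in seasonal-mean rainfall and in its interannual variability computed from a single model time series. The synthetic series stands in for CMIP-5 output; all numbers and names are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1850, 2101)
        # Synthetic seasonal-mean monsoon rainfall (mm/day): trend plus growing variability
        rain = (7.0 + 0.004 * (years - 1850)
                + (0.5 + 0.002 * (years - 1850)) * rng.normal(size=years.size))

        # Long-term trend of the seasonal mean (mm/day per century)
        trend = np.polyfit(years, rain, 1)[0] * 100

        # Trend in interannual variability: 31-yr sliding standard deviation
        win = 31
        sd = np.array([rain[i:i + win].std() for i in range(years.size - win + 1)])
        sd_years = years[win // 2 : win // 2 + sd.size]
        sd_trend = np.polyfit(sd_years, sd, 1)[0] * 100

        print(f"mean-rainfall trend: {trend:.2f} mm/day per century")
        print(f"variability trend:   {sd_trend:.2f} mm/day per century")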

  10. Modeling, methodologies and tools for molecular and nano-scale communications

    CERN Document Server

    Nakano, Tadashi; Moore, Michael

    2017-01-01

    (Preliminary) The book presents the state of the art in the emerging field of molecular and nanoscale communication. It gives special attention to fundamental models, and advanced methodologies and tools used in the field. It covers a wide range of applications, e.g. nanomedicine, nanorobot communication, bioremediation and environmental management. It addresses advanced graduate students, academics and professionals working at the forefront of their fields and at the interfaces between different areas of research, such as engineering, computer science, biology and nanotechnology.

  11. Non-Perturbative Self-Consistent Model in SU(N) Gauge Field Theory

    Directory of Open Access Journals (Sweden)

    Koshelkin A.V.

    2012-06-01

    A non-perturbative quasi-classical model in a gauge theory with the Yang-Mills (YM) field is developed. The Dirac equation in the SU(N) gauge field, taken in the eikonal approximation, and the YM equations containing the external fermion current are solved self-consistently. It is shown that the developed model has self-consistent solutions of the Dirac and Yang-Mills equations at N ≥ 3. In this way, the solutions take place provided that the fermion and gauge fields exist simultaneously, so that the fermion current completely compensates the current generated by the gauge field due to its self-interaction.

  12. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    Science.gov (United States)

    Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel

    2017-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new eddy-viscosity-type model, based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
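
    As a hedged illustration of the velocity-gradient-based closures analyzed above (a generic Smagorinsky-type baseline, not the paper's vortex-stretching model): given a resolved velocity gradient, the subgrid stress is modeled through an eddy viscosity. All values below are illustrative.

        import numpy as np

        def eddy_viscosity_stress(grad_u, delta, C_s=0.17):
            # Smagorinsky-type subgrid stress tau_ij = -2 nu_e S_ij, computed from a
            # 3x3 resolved velocity gradient grad_u[i, j] = du_i/dx_j (illustrative)
            S = 0.5 * (grad_u + grad_u.T)            # resolved rate-of-strain tensor
            S_mag = np.sqrt(2.0 * np.sum(S * S))     # |S| = sqrt(2 S_ij S_ij)
            nu_e = (C_s * delta) ** 2 * S_mag        # eddy viscosity
            return -2.0 * nu_e * S

        grad_u = np.array([[0.0, 1.0, 0.0],
                           [0.0, 0.0, 0.5],
                           [0.2, 0.0, 0.0]])
        print(eddy_viscosity_stress(grad_u, delta=0.01))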

  13. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Institute of Scientific and Technical Information of China (English)

    John Jack P. RIEGEL III; David DAVISON

    2016-01-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D=10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a baseline with a full

  14. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  15. Asymptotic normality and strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YIN; Changming; ZHAO; Lincheng; WEI; Chengdong

    2006-01-01

    In a generalized linear model with $q \times 1$ responses, bounded and fixed (or adaptive) $p \times q$ regressors $Z_i$, and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i'$, a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates for the regression parameter vector are asymptotically normal and strongly consistent.
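
    A hedged sketch of maximum quasi-likelihood estimation in one simple special case (binary responses, canonical logit link), solved by Fisher scoring; the data and function names are illustrative, not from the paper.

        import numpy as np

        def fisher_scoring(Z, y, iters=25):
            # Maximum quasi-likelihood for a binary GLM with logit link:
            # solve sum_i Z_i (y_i - mu_i(beta)) = 0 by Fisher scoring
            beta = np.zeros(Z.shape[1])
            for _ in range(iters):
                mu = 1.0 / (1.0 + np.exp(-Z @ beta))   # mean function
                W = mu * (1.0 - mu)                    # variance function
                score = Z.T @ (y - mu)
                info = (Z * W[:, None]).T @ Z          # Fisher information
                beta += np.linalg.solve(info, score)
            return beta

        rng = np.random.default_rng(2)
        Z = rng.normal(size=(500, 3))
        beta_true = np.array([1.0, -0.5, 0.25])
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-Z @ beta_true)))
        print(fisher_scoring(Z, y))   # approaches beta_true as n grows (consistency)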

  16. A thermodynamically consistent model of the post-translational Kai circadian clock

    Science.gov (United States)

    Lubensky, David K.; ten Wolde, Pieter Rein

    2017-01-01

    The principal pacemaker of the circadian clock of the cyanobacterium S. elongatus is a protein phosphorylation cycle consisting of three proteins, KaiA, KaiB and KaiC. KaiC forms a homohexamer, with each monomer consisting of two domains, CI and CII. Both domains can bind and hydrolyze ATP, but only the CII domain can be phosphorylated, at two residues, in a well-defined sequence. While this system has been studied extensively, how the clock is driven thermodynamically has remained elusive. Inspired by recent experimental observations and building on ideas from previous mathematical models, we present a new, thermodynamically consistent, statistical-mechanical model of the clock. At its heart are two main ideas: i) ATP hydrolysis in the CI domain provides the thermodynamic driving force for the clock, switching KaiC between an active conformational state in which its phosphorylation level tends to rise and an inactive one in which it tends to fall; ii) phosphorylation of the CII domain provides the timer for the hydrolysis in the CI domain. The model also naturally explains how KaiA, by acting as a nucleotide exchange factor, can stimulate phosphorylation of KaiC, and how the differential affinity of KaiA for the different KaiC phosphoforms generates the characteristic temporal order of KaiC phosphorylation. As the phosphorylation level in the CII domain rises, the release of ADP from CI slows down, making the inactive conformational state of KaiC more stable. In the inactive state, KaiC binds KaiB, which not only stabilizes this state further, but also leads to the sequestration of KaiA, and hence to KaiC dephosphorylation. Using a dedicated kinetic Monte Carlo algorithm, which makes it possible to efficiently simulate this system consisting of more than a billion reactions, we show that the model can describe a wealth of experimental data. PMID:28296888

  17. ICFD modeling of final settlers - developing consistent and effective simulation model structures

    DEFF Research Database (Denmark)

    Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham

    … analysis exercises is kept to a minimum (4). Consequently, detailed information related to, for instance, design boundaries may be ignored, and their effects may only be accounted for through calibration of model parameters used as catch-alls, and by arbitrary amendments of structural uncertainty … of (6). Further details are shown in (5). Results and discussions: factor screening is carried out by imposing statistically designed moderate (under-loaded) and extreme (under-, critical and overloaded) operational boundary conditions on the 2-D CFD SST model (8). Results obtained …

  18. Self-consistent Dark Matter simplified models with an s-channel scalar mediator

    Science.gov (United States)

    Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.

    2017-03-01

    We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.

  19. Self-Consistent Ring Current/Electromagnetic Ion Cyclotron Waves Modeling

    Science.gov (United States)

    Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.

    2006-01-01

    The self-consistent treatment of the RC ion dynamics and EMIC waves, which are thought to exert important influences on the ion dynamical evolution, is an important missing element in our understanding of the storm- and recovery-time ring current evolution. For example, the EMIC waves cause the RC decay on a time scale of about one hour or less during the main phase of storms. The oblique EMIC waves damp due to Landau resonance with the thermal plasmaspheric electrons, and subsequent transport of the dissipating wave energy into the ionosphere below causes an ionosphere temperature enhancement. Under certain conditions, relativistic electrons, with energies ≥1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves that has been developed by Khazanov et al. [2002, 2003] to include heavy ions and EMIC wave propagation effects in the global dynamics of the self-consistent RC-EMIC wave coupling. The results of our newly developed model will be presented at the meeting, focusing mainly on the dynamics of EMIC waves and on a comparison of these results with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.

  20. Quantal self-consistent cranking model for monopole excitations in even-even light nuclei

    CERN Document Server

    Gulshani, P

    2014-01-01

    In this article, we derive a quantal self-consistent time-reversal invariant cranking model for isoscalar monopole excitation coupled to intrinsic motion in even-even light nuclei. The model uses a wavefunction that is a product of monopole and intrinsic wavefunctions and a constrained variational method to derive, from a many-particle Schrödinger equation, a pair of coupled self-consistent cranking-type Schrödinger equations for the monopole and intrinsic systems. The monopole and intrinsic wavefunctions are coupled to each other by the two cranking equations and their associated parameters and by two constraints imposed on the intrinsic system. For an isotropic Nilsson shell model and an effective residual two-body interaction, the two coupled cranking equations are solved in the Tamm-Dancoff approximation. The strength of the interaction is determined from a Hartree-Fock self-consistency argument. The excitation energy of the first excited state is determined and found to agree closely with those observed ...

  1. Predicting giant magnetoresistance using a self-consistent micromagnetic diffusion model

    CERN Document Server

    Abert, Claas; Bruckner, Florian; Vogler, Christoph; Praetorius, Dirk; Suess, Dieter

    2015-01-01

    We propose a self-consistent micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. The model and its finite-element implementation are validated by current-driven motion of a magnetic vortex structure. Potential calculations for a magnetic multilayer structure with perpendicular current flow confirm experimental findings of a non-sinusoidal dependence of the resistivity on the tilting angle of the magnetization in the different layers. While the sinusoidal dependence is observed in certain material-parameter limits, a realistic choice of these parameters leads to a notably narrower distribution.

  2. Self-consistent tight-binding atomic-relaxation model of titanium dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Schelling, P.K.; Yu, N.; Halley, J.W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States)

    1998-07-01

    We report a self-consistent tight-binding atomic-relaxation model for titanium dioxide. We fit the parameters of the model to first-principles electronic structure calculations of the band structure and energy as a function of lattice parameters in bulk rutile. We report the method and results for the surface structures and energies of relaxed (110), (100), and (001) surfaces of rutile TiO$_2$ as well as work functions for these surfaces. Good agreement with first-principles calculations and experiments, where available, is found for these surfaces. We find significant charge transfer (increased covalency) at the surfaces.

  3. A Self-Consistent Model for Thermal Oxidation of Silicon at Low Oxide Thickness

    Directory of Open Access Journals (Sweden)

    Gerald Gerlach

    2016-01-01

    Thermal oxidation of silicon belongs to the most decisive steps in microelectronic fabrication because it allows creating electrically insulating areas which enclose electrically conductive devices and device areas, respectively. Deal and Grove developed the first model (DG-model) for the thermal oxidation of silicon, describing the oxide thickness versus oxidation time relationship with very good agreement for oxide thicknesses of more than 23 nm. Their approach, named the general relationship, is the basis of many similar investigations. However, measurement results show that the DG-model does not apply to very thin oxides in the range of a few nm. Additionally, it is inherently not self-consistent. The aim of this paper is to develop a self-consistent model that is based on the continuity equation instead of Fick's law, as the DG-model is. As literature data show, the relationship between silicon oxide thickness and oxidation time is governed, down to oxide thicknesses of just a few nm, by a power-of-time law. Given the time-independent surface concentration of oxidants at the oxide surface, Fickian diffusion seems to be negligible for oxidant migration. The oxidant flux is revealed to be carried by non-Fickian flux processes depending on sites able to lodge dopants (oxidants), the so-called DOCC sites, as well as on the dopant jump rate.
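
    For reference, the classical DG-model general relationship that this paper takes as its point of departure is x_o^2 + A*x_o = B*(t + tau). A minimal sketch of evaluating it follows; the coefficients are merely of the right order for wet oxidation near 1000 °C, not values from the paper.

        import numpy as np

        def deal_grove_thickness(t, A, B, tau=0.0):
            # Oxide thickness x_o from the Deal-Grove relation x_o^2 + A*x_o = B*(t + tau)
            # Units: t, tau in hours; A in um; B in um^2/h; returns x_o in um
            return 0.5 * A * (np.sqrt(1.0 + (t + tau) / (A**2 / (4.0 * B))) - 1.0)

        t = np.array([0.1, 0.5, 1.0, 4.0])   # oxidation times in hours
        print(deal_grove_thickness(t, A=0.226, B=0.287))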

  4. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
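
    A hedged, much-simplified numerical sketch in the spirit of the methodology described above: an explicit finite-volume scheme for the hindered-settling part only, dC/dt + d(f(C))/dz = 0, with a Vesilind-type settling flux and a robust local Lax-Friedrichs numerical flux. The full scheme in the paper also treats compression, dispersion and the feed source; all parameters below are illustrative.

        import numpy as np

        def flux(C, v0=4.0, r=0.4):
            # Vesilind hindered-settling flux f(C) = v0*exp(-r*C)*C (illustrative values)
            return v0 * np.exp(-r * C) * C

        def settle(C, dz, dt, steps):
            # Explicit finite-volume update with a local Lax-Friedrichs numerical flux
            for _ in range(steps):
                f = flux(C)
                a = np.abs(flux(C + 1e-6) - f) / 1e-6       # crude local wave-speed estimate
                amax = np.maximum(a[:-1], a[1:])
                F = 0.5 * (f[:-1] + f[1:]) - 0.5 * amax * (C[1:] - C[:-1])
                F = np.concatenate(([0.0], F, [0.0]))       # closed top and bottom (batch)
                C = C - dt / dz * (F[1:] - F[:-1])
            return C

        z = np.linspace(0.0, 4.0, 200)        # depth below surface, m
        C0 = np.where(z < 1.0, 3.0, 0.0)      # initial sludge blanket, kg/m^3
        C = settle(C0, dz=z[1] - z[0], dt=1e-3, steps=5000)  # dt in hours
        print(C.max())                        # solids accumulate at the bottom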

  5. Towards a self-consistent halo model for the nonlinear large-scale structure

    CERN Document Server

    Schmidt, Fabian

    2015-01-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: $(i)$ they do not enforce the stress-energy conservation of matter; $(ii)$ they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model ("EHM") that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed, and results of perturbation theory and the effective field theory can in principle be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written he...

  6. A technique for generating consistent ice sheet initial conditions for coupled ice-sheet/climate models

    Directory of Open Access Journals (Sweden)

    J. G. Fyke

    2013-04-01

    A new technique for generating preindustrial (1850) ice sheet initial conditions for coupled ice-sheet/climate models is developed and demonstrated over the Greenland Ice Sheet using the Community Earth System Model (CESM). Paleoclimate end-member simulations and ice core data are used to derive continuous surface mass balance fields, which then force a long transient ice sheet model simulation. The procedure accounts for the evolution of climate through the last glacial period and converges to a simulated preindustrial 1850 ice sheet that is geometrically and thermodynamically consistent with the simulated 1850 preindustrial CESM state, yet contains a transient memory of past climate that compares well to observations and independent model studies. This allows future coupled ice-sheet/climate projections of climate change that include ice sheets to integrate the effect of past climate conditions on the state of the Greenland Ice Sheet, while maintaining system-wide continuity between past and future climate simulations.

  7. Inflation Model (with doublet scalar field) consistent with Lambda CDM and WMAP cosmological observations

    CERN Document Server

    Amruth, B. R.; Patwardhan, Ajay

    2006-01-01

    Modifying cosmological inflation models to include recent cosmological observations has been an active area of research since the WMAP 3 results, which have given us high-precision information about the composition of dark matter, normal matter and dark energy, and the anisotropy at the 300,000-year horizon. We work on the inflation models of Guth and Linde and modify them by introducing a doublet scalar field to give normal matter particles and their supersymmetric partners, which result in the normal and dark matter of our universe. We include the cosmological constant term, as the vacuum expectation value of the stress-energy tensor, for the dark energy. We calibrate the parameters of our model using recent observations of density fluctuations. We develop a model that fits consistently with the recent observations.

  8. SALT Spectropolarimetry and Self-Consistent SED and Polarization Modeling of Blazars

    Science.gov (United States)

    Böttcher, Markus; van Soelen, Brian; Britto, Richard; Buckley, David; Marais, Johannes; Schutte, Hester

    2017-09-01

    We report on recent results from a target-of-opportunity program to obtain spectropolarimetry observations with the Southern African Large Telescope (SALT) on flaring gamma-ray blazars. SALT spectropolarimetry and contemporaneous multi-wavelength spectral energy distribution (SED) data are being modelled self-consistently with a leptonic single-zone model. Such modeling provides an accurate estimate of the degree of order of the magnetic field in the emission region and the thermal contributions (from the host galaxy and the accretion disk) to the SED, thus putting strong constraints on the physical parameters of the gamma-ray emitting region. For the specific case of the $\\gamma$-ray blazar 4C+01.02, we demonstrate that the combined SED and spectropolarimetry modeling constrains the mass of the central black hole in this blazar to $M_{\\rm BH} \\sim 10^9 \\, M_{\\odot}$.

  9. Numerical experiments on consistent horizontal and vertical resolution for atmospheric models and observing systems

    Science.gov (United States)

    Fox-Rabinovitz, Michael S.; Lindzen, Richard S.

    1993-01-01

    Simple numerical experiments are performed in order to determine the effects of inconsistent combinations of horizontal and vertical resolution in both atmospheric models and observing systems. In both cases, we find that inconsistent spatial resolution is associated with enhanced noise generation. A rather fine horizontal resolution in a satellite-data observing system seems to be excessive when combined with the usually available, relatively coarse vertical resolution. Using horizontal filters of different strengths, adjusted in such a way as to render the effective horizontal resolution more consistent with the vertical resolution of the observing system, may improve analysis accuracy. When the vertical resolution of a satellite-data observing system is increased, i.e., with better vertically resolved data, the results are different in that little or no horizontal filtering is needed to make the spatial resolution of the system consistent. The experimental estimates of consistent vertical and effective horizontal resolution obtained here are in general agreement with consistent-resolution estimates previously derived theoretically by the authors.

  10. Relativistic Consistent Angular-Momentum Projected Shell-Model:Relativistic Mean Field

    Institute of Scientific and Technical Information of China (English)

    LI Yan-Song; LONG Gui-Lu

    2004-01-01

    We develop a relativistic nuclear structure model, the relativistic consistent angular-momentum projected shell model (RECAPS), which combines relativistic mean-field theory with the angular-momentum projection method. In this new model, nuclear ground-state properties are first calculated consistently using relativistic mean-field (RMF) theory. The angular-momentum projection method is then used to project out states with good angular momentum from a few important configurations. By diagonalizing the Hamiltonian, the energy levels and wave functions are obtained. This model is a new attempt at understanding the nuclear structure of normal nuclei and at predicting the properties of nuclei far from stability. In this paper, we describe the treatment of the relativistic mean field. A computer code, RECAPS-RMF, is developed. It solves the relativistic mean field with axially symmetric deformation in a spherical harmonic oscillator basis. Comparisons between our calculations and existing relativistic mean-field calculations are made to test the model. These include the ground-state properties of the spherical nuclei $^{16}$O and $^{208}$Pb and the deformed nucleus $^{20}$Ne. Good agreement is obtained.

  11. Communication: Consistent interpretation of molecular simulation kinetics using Markov state models biased with external information

    Science.gov (United States)

    Rudzinski, Joseph F.; Kremer, Kurt; Bereau, Tristan

    2016-02-01

    Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically improves the time scale separation of the slowest processes. Additionally, constraining the forward and backward rates between metastable states leads to slight improvement of their relative stabilities and, thus, refined equilibrium properties of the resulting model. Finally, we find that difficulties in simultaneously describing both the simulated data and the provided constraints can help identify specific limitations of the underlying simulation approach.
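
    A hedged sketch of the Markov state modeling machinery the method builds on: estimating a transition matrix from a discretized trajectory at lag tau and computing implied timescales t_i = -tau / ln(lambda_i). The reweighting toward external constraints is not shown; data and names are illustrative.

        import numpy as np

        def msm_timescales(dtraj, n_states, lag):
            # Row-normalized transition matrix at lag tau and implied timescales
            counts = np.zeros((n_states, n_states))
            for a, b in zip(dtraj[:-lag], dtraj[lag:]):
                counts[a, b] += 1.0
            T = counts / counts.sum(axis=1, keepdims=True)
            evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
            return T, -lag / np.log(evals[1:])   # skip the stationary eigenvalue 1

        # Illustrative two-state toy trajectory with rare switches
        rng = np.random.default_rng(3)
        dtraj, state = [], 0
        for _ in range(100000):
            if rng.random() < 0.001:             # switching probability per step
                state = 1 - state
            dtraj.append(state)
        T, its = msm_timescales(np.array(dtraj), n_states=2, lag=10)
        print(T, its)                            # slowest timescale ~ 1/(2*0.001) steps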

  12. Ring current Atmosphere interactions Model with Self-Consistent Magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    2016-09-09

    The Ring current Atmosphere interactions Model with Self-Consistent magnetic field (B) is a unique code that combines a kinetic model of ring current plasma with a three-dimensional force-balanced model of the terrestrial magnetic field. The kinetic portion, RAM, solves the kinetic equation to yield the bounce-averaged distribution function as a function of azimuth, radial distance, energy and pitch angle for three ion species (H+, He+, and O+) and, optionally, electrons. The domain is a circle in the Solar-Magnetic (SM) equatorial plane with a radial span of 2 to 6.5 RE. It has an energy range of approximately 100 eV to 500 keV. The 3-D force-balanced magnetic field model, SCB, balances the J×B force with the divergence of the general pressure tensor to calculate the magnetic field configuration within its domain. The domain ranges from near the Earth's surface, where the field is assumed dipolar, to the shell created by field lines passing through the SM equatorial plane at a radial distance of 6.5 RE. The two codes work in tandem, with RAM providing anisotropic pressure to SCB and SCB returning the self-consistent magnetic field through which RAM plasma is advected.

  13. Multiple Servers - Queue Model for Agent Based Technology in Cache Consistence Maintenance of Mobile Environment

    Directory of Open Access Journals (Sweden)

    G. Shanmugarathinam

    2013-01-01

    Caching is an important technique in mobile computing: frequently accessed data is stored on mobile clients to reduce network traffic and improve performance. As the number of mobile users grows, clients request updates from the server, but the server is often busy, so clients may wait a long time, and maintaining cache consistency becomes difficult for both the client and the server. This paper proposes a technique, based on agent technology, that uses a queuing system consisting of one or more servers providing service to arriving mobile hosts. The service mechanism of the queuing system is specified by the number of servers, each having its own queue; the agent technology maintains cache consistency between the client and the server. This model saves wireless bandwidth, reduces network traffic and reduces the workload on the server. The simulation results were compared with the previous technique, and the proposed model shows significantly better performance than the earlier approach.
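
    A hedged sketch of the underlying multi-server queue arithmetic (the classical M/M/c Erlang C formula for the probability that an arriving request must wait; the paper's agent-based consistency layer is not modeled here, and all rates are illustrative):

        from math import factorial

        def erlang_c(c, lam, mu):
            # P(wait) for an M/M/c queue with arrival rate lam and per-server rate mu
            a = lam / mu                      # offered load (Erlangs)
            assert a < c, "queue is unstable unless lam < c*mu"
            top = (a**c / factorial(c)) * (c / (c - a))
            bottom = sum(a**k / factorial(k) for k in range(c)) + top
            return top / bottom

        def mean_wait(c, lam, mu):
            # Mean waiting time in queue: W_q = P(wait) / (c*mu - lam)
            return erlang_c(c, lam, mu) / (c * mu - lam)

        # Example: 4 servers, 30 requests/s arriving, each server handles 10/s
        print(erlang_c(4, 30.0, 10.0), mean_wait(4, 30.0, 10.0))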

  14. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    CERN Document Server

    Silvis, Maurits H; Verstappen, Roel

    2016-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...

  15. RNA secondary structure modeling at consistent high accuracy using differential SHAPE.

    Science.gov (United States)

    Rice, Greggory M; Leonard, Christopher W; Weeks, Kevin M

    2014-06-01

    RNA secondary structure modeling is a challenging problem, and recent successes have raised the standards for accuracy, consistency, and tractability. Large increases in accuracy have been achieved by including data on reactivity toward chemical probes: Incorporation of 1M7 SHAPE reactivity data into an mfold-class algorithm results in median accuracies for base pair prediction that exceed 90%. However, a few RNA structures are modeled with significantly lower accuracy. Here, we show that incorporating differential reactivities from the NMIA and 1M6 reagents--which detect noncanonical and tertiary interactions--into prediction algorithms results in highly accurate secondary structure models for RNAs that were previously shown to be difficult to model. For these RNAs, 93% of accepted canonical base pairs were recovered in SHAPE-directed models. Discrepancies between accepted and modeled structures were small and appear to reflect genuine structural differences. Three-reagent SHAPE-directed modeling scales concisely to structurally complex RNAs to resolve the in-solution secondary structure analysis problem for many classes of RNA.

  16. Gas cooling in semi-analytic models and SPH simulations: are results consistent?

    CERN Document Server

    Saro, A; Borgani, S; Dolag, K

    2010-01-01

    We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical SPH simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: a) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; b) while all stars associated with the BCG were formed in its progenitors i...

  17. A Fully Nonlinear, Dynamically Consistent Numerical Model for Ship Maneuvering in a Seaway

    Directory of Open Access Journals (Sweden)

    Ray-Qing Lin

    2011-01-01

    This is the continuation of our research on the development of a fully nonlinear, dynamically consistent, numerical ship motion model (DiSSEL). In this paper we report our results on modeling ship maneuvering in an arbitrary seaway, one of the most challenging and important problems in seakeeping. In our modeling, we developed an adaptive algorithm to maintain dynamical balances numerically as the encounter frequencies (the wave frequencies as measured on the ship) vary with the ship maneuvering state. The key to this new algorithm is to evaluate the encounter frequency variation differently in the physical domain and in the frequency domain, thus effectively eliminating possible numerical dynamical imbalances. We have tested this algorithm with several well-documented maneuvering experiments, and our results agree very well with experimental data. In particular, the numerical time series of roll and pitch motions and the numerical ship tracks (i.e., surge, sway, and yaw) are nearly identical to those of the experiments.

  18. On Multiscale Modeling: Preserving Energy Dissipation Across the Scales with Consistent Handshaking Methods

    Science.gov (United States)

    Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.; Waas, Anthony M.

    2013-01-01

    A mesh objective crack band model was implemented within the generalized method of cells micromechanics theory. This model was linked to a macroscale finite element model to predict post-peak strain softening in composite materials. Although a mesh objective theory was implemented at the microscale, this does not preclude pathological mesh dependence at the macroscale. To ensure mesh objectivity at both scales, the energy density and the energy release rate must be preserved identically across the two scales. This requires a consistent characteristic length, or localization limiter. The effects of scaling (or not scaling) the dimensions of the microscale repeating unit cell (RUC) according to the macroscale element size in a multiscale analysis were investigated using two examples. Additionally, the ramifications of the macroscale element shape, compared to the RUC, were studied.

  19. Consistent neutron star models with magnetic field dependent equations of state

    CERN Document Server

    Chatterjee, Debarati; Novak, Jerome; Oertel, Micaela

    2014-01-01

    We present a self-consistent model for the study of the structure of a neutron star in strong magnetic fields. Starting from a microscopic Lagrangian, this model includes the effect of the magnetic field on the equation of state, the interaction of the electromagnetic field with matter (magnetisation), and anisotropies in the energy-momentum tensor, as well as general relativistic aspects. We build numerical axisymmetric stationary models and show the applicability of the approach with one example quark matter equation of state (EoS) often employed in the recent literature for studies of strongly magnetised neutron stars. For this EoS, the inclusion of the magnetic field dependence or the magnetisation does not increase the maximum mass significantly, in contrast to what has been claimed in previous studies.

  20. A self consistent chemically stratified atmosphere model for the roAp star 10 Aquilae

    CERN Document Server

    Nesvacil, Nicole; Ryabchikova, Tanya A; Kochukhov, Oleg; Akberov, Artur; Weiss, Werner W

    2012-01-01

    Context: Chemically peculiar A type (Ap) stars are a subgroup of the CP2 stars which exhibit anomalous overabundances of numerous elements, e.g. Fe, Cr, Sr and rare earth elements. The pulsating subgroup of the Ap stars, the roAp stars, present ideal laboratories to observe and model pulsational signatures as well as the interplay of the pulsations with strong magnetic fields and vertical abundance gradients. Aims: Based on high resolution spectroscopic observations and observed stellar energy distributions we construct a self consistent model atmosphere, that accounts for modulations of the temperature-pressure structure caused by vertical abundance gradients, for the roAp star 10 Aquilae (HD 176232). We demonstrate that such an analysis can be used to determine precisely the fundamental atmospheric parameters required for pulsation modelling. Methods: Average abundances were derived for 56 species. For Mg, Si, Ca, Cr, Fe, Co, Sr, Pr, and Nd vertical stratification profiles were empirically derived using the...

  1. A self-consistent model for a longitudinal discharge excited He-Sr recombination laser

    Energy Technology Data Exchange (ETDEWEB)

    Carman, R.J. (Centre for Lasers and Applications, Macquarie University, Sydney NSW 2109 (AU))

    1990-09-01

    A computer model has been developed to simulate the plasma kinetics in a high-repetition-frequency, discharge-excited He-Sr recombination laser. A detailed rate equation analysis, incorporating about 80 collisional and radiative processes, is used to determine the temporal and spatial (radial) behavior of the discharge parameters and the intracavity laser field during the current pulse, recombination phase, and afterglow periods. The set of coupled first-order ordinary differential equations used to describe the plasma and external electrical circuit is integrated over multiple discharge cycles to yield fully self-consistent results. The computer model has been used to simulate the behavior of the laser for a set of standard conditions corresponding to typical operating conditions. The species population densities predicted by the model are compared with radial and time-dependent Hook measurements determined experimentally for the same set of standard conditions.

  2. A heterogeneous traffic flow model consisting of two types of vehicles with different sensitivities

    Science.gov (United States)

    Li, Zhipeng; Xu, Xun; Xu, Shangzhi; Qian, Yeqing

    2017-01-01

    A heterogeneous car-following model is constructed for traffic flow consisting of low- and high-sensitivity vehicles. The stability criterion of the new model is obtained by using linear stability theory. We derive the neutral stability diagram for the proposed model, with five distinct regions, and determine the effect of the percentage of low-sensitivity vehicles on traffic stability in each region. In addition, we further consider the special case in which the number of low-sensitivity vehicles equals that of the high-sensitivity ones. We explore the dependence of traffic stability on the average value and the standard deviation of the two sensitivities characterizing the two vehicle types. Direct numerical simulation results verify the conclusions of the theoretical analysis.
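
    A hedged sketch of a heterogeneous optimal-velocity car-following model in the spirit of the abstract (a Bando-type optimal velocity function with two sensitivity values on a ring road). The paper's specific equations and parameters are not reproduced; everything below is illustrative.

        import numpy as np

        def optimal_velocity(h, v_max=2.0, h_c=2.0):
            # Bando-type optimal velocity as a function of headway h
            return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

        def simulate(n=50, frac_low=0.5, a_low=0.5, a_high=2.0,
                     L=100.0, dt=0.05, steps=4000):
            # Cars on a ring of length L; the sensitivity a_i differs by vehicle type
            rng = np.random.default_rng(4)
            a = np.where(rng.random(n) < frac_low, a_low, a_high)  # mixed sensitivities
            x = np.linspace(0.0, L, n, endpoint=False) + 0.01 * rng.normal(size=n)
            v = optimal_velocity(L / n) * np.ones(n)
            for _ in range(steps):
                h = (np.roll(x, -1) - x) % L                 # headway to the car ahead
                dv = a * (optimal_velocity(h) - v)           # dv_i/dt = a_i (V(h_i) - v_i)
                v += dv * dt
                x = (x + v * dt) % L
            return h.std()                                   # headway spread: > 0 means jams

        # More low-sensitivity vehicles -> stronger instability (larger headway spread)
        print(simulate(frac_low=0.2), simulate(frac_low=0.8))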

  3. Modeling and Simulation Tools for Heavy Lift Airships

    Science.gov (United States)

    Hochstetler, Ron; Chachad, Girish; Hardy, Gordon; Blanken, Matthew; Melton, John

    2016-01-01

    For conventional fixed-wing and rotary-wing aircraft, a variety of modeling and simulation tools have been developed to provide designers the means to thoroughly investigate proposed designs and operational concepts. However, lighter-than-air (LTA) airships, hybrid air vehicles, and aerostats have some important aspects that are different from heavier-than-air (HTA) vehicles. In order to account for these differences, modifications are required to the standard design tools to fully characterize the LTA vehicle design and performance parameters. To address these LTA design and operational factors, LTA development organizations have created unique proprietary modeling tools, often at their own expense. An expansion of this limited LTA tool set could be accomplished by leveraging existing modeling and simulation capabilities available in the National laboratories and public research centers. Development of an expanded set of publicly available LTA modeling and simulation tools for LTA developers would mitigate the reliance on proprietary LTA design tools in use today. A set of well-researched, open-source, high-fidelity LTA design modeling and simulation tools would advance LTA vehicle development and also provide the analytical basis for accurate LTA operational cost assessments. This paper presents the modeling and analysis tool capabilities required for LTA vehicle design, analysis of operations, and full life-cycle support. The tools currently available are surveyed to identify the gaps between their capabilities and the LTA industry's needs. Options for development of new modeling and analysis capabilities to supplement contemporary tools are also presented.

  4. Modeling of etch profile evolution including wafer charging effects using self consistent ion fluxes

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, R.J.; Kushner, M.J. [Univ. of Illinois, Urbana, IL (United States). Dept. of Electrical and Computer Engineering

    1996-12-31

    As high-density plasma reactors become more prevalent in industry, the need has intensified for computer-aided design tools which address both equipment issues, such as ion flux uniformity onto the wafer, and process issues, such as etch feature profile evolution. A hierarchy of models has been developed to address these issues with the goal of producing a comprehensive plasma processing design capability. The Hybrid Plasma Equipment Model (HPEM) produces ion and neutral densities, and electric fields in the reactor. The Plasma Chemistry Monte Carlo Model (PCMC) determines the angular and energy distributions of ion and neutral fluxes to the wafer using species source functions, time-dependent bulk electric fields, and sheath potentials from the HPEM. These fluxes are then used by the Monte Carlo Feature Profile Model (MCFP) to determine the time evolution of etch feature profiles. Using this hierarchy, the effects of physical modifications of the reactor, such as changing wafer clamps or electrode structures, on etch profiles can be evaluated. The effects of wafer charging on feature evolution are examined by calculating the fields produced by the charge deposited by ions and electrons within the features. The effects of radial variations and nonuniformity in the angular and energy distributions of the reactive fluxes on feature profiles and feature charging will be discussed for p-Si etching in inductively coupled plasmas (ICP) sustained in chlorine gas mixtures. The effects of over- and under-wafer topography on etch profiles will also be discussed.

  5. nIFTy cosmology: the clustering consistency of galaxy formation models

    Science.gov (United States)

    Pujol, Arnau; Skibba, Ramin A.; Gaztañaga, Enrique; Benson, Andrew; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofia A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; De Lucia, Gabriella; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Garcia-Bellido, Juan; Gargiulo, Ignacio D.; Gonzalez-Perez, Violeta; Helly, John; Henriques, Bruno M. B.; Hirschmann, Michaela; Knebe, Alexander; Lee, Jaehyun; Mamon, Gary A.; Monaco, Pierluigi; Onions, Julian; Padilla, Nelson D.; Pearce, Frazer R.; Power, Chris; Somerville, Rachel S.; Srisawat, Chaichalit; Thomas, Peter A.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.

    2017-07-01

    We present a clustering comparison of 12 galaxy formation models [including semi-analytic models (SAMs) and halo occupation distribution (HOD) models] all run on halo catalogues and merger trees extracted from a single Λ cold dark matter N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the two-point correlation functions (2PCF). We also study the implications of the different treatments of orphan (galaxies not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions but they disagree significantly between HOD and SAMs for the orphan satellites. Although there is a very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 h-1 Mpc. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present a higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites have an important role on galaxy clustering and they are the main cause of the differences in the clustering between HOD models and SAMs.

  6. Reproduction of consistent pulse-waveform changes using a computational model of the cerebral circulatory system.

    Science.gov (United States)

    Connolly, Mark; He, Xing; Gonzalez, Nestor; Vespa, Paul; DiStefano, Joe; Hu, Xiao

    2014-03-01

    Due to the inaccessibility of the cranial vault, it is difficult to study cerebral blood flow dynamics directly. A mathematical model can be useful to study these dynamics. The model presented here is a novel combination of a one-dimensional fluid flow model representing the major vessels of the circle of Willis (CoW), with six individually parameterized auto-regulatory models of the distal vascular beds. This model has the unique ability to simulate high temporal resolution flow and velocity waveforms, amenable to pulse-waveform analysis, as well as sophisticated phenomena such as auto-regulation. Previous work with human patients has shown that vasodilation induced by CO2 inhalation causes 12 consistent pulse-waveform changes as measured by the morphological clustering and analysis of intracranial pressure algorithm. To validate this model, we simulated vasodilation and successfully reproduced 9 out of the 12 pulse-waveform changes. A subsequent sensitivity analysis found that these 12 pulse-waveform changes were most affected by the parameters associated with the shape of the smooth muscle tension response and vessel elasticity, providing insight into the physiological mechanisms responsible for observed changes in the pulse-waveform shape.

  7. Consistent multi-internal-temperature models for vibrational and electronic nonequilibrium in hypersonic nitrogen plasma flows

    Energy Technology Data Exchange (ETDEWEB)

    Guy, Aurélien, E-mail: aurelien.guy@onera.fr; Bourdon, Anne, E-mail: anne.bourdon@lpp.polytechnique.fr; Perrin, Marie-Yvonne, E-mail: marie-yvonne.perrin@ecp.fr [CNRS, UPR 288, Laboratoire d' Énergétique Moléculaire et Macroscopique, Combustion (EM2C), Grande Voie des Vignes, 92295 Châtenay-Malabry (France); Ecole Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry (France)

    2015-04-15

    In this work, a state-to-state vibrational and electronic collisional model is developed to investigate nonequilibrium phenomena behind a shock wave in an ionized nitrogen flow. In the ionization dynamics behind the shock wave, the electron energy budget is of key importance; the main depletion term is found to correspond to the electronic excitation of N atoms, while the major creation term is initially the electron-vibration term, later replaced by the electron-ion elastic exchange term. Based on these results, a macroscopic multi-internal-temperature model for the vibration of N$_2$ and the electronic levels of N atoms is derived, with several groups of vibrational levels of N$_2$ and electronic levels of N, each with its own internal temperature, to model the shape of the vibrational distribution of N$_2$ and of the electronic excitation of N, respectively. In this model, energy and chemistry source terms are calculated self-consistently from the rate coefficients of the state-to-state database. For the shock wave condition studied, good agreement is observed in the ionization dynamics as well as in the atomic bound-bound radiation between the state-to-state model and the macroscopic multi-internal-temperature model with only one group of vibrational levels of N$_2$ and two groups of electronic levels of N.

  8. Validity test and its consistency in the construction of patient loyalty model

    Science.gov (United States)

    Yanuar, Ferra

    2016-04-01

    The main objective of this study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method is then applied to empirical data on the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty are determined simultaneously, and each factor is measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents who had complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) is valid; that is, all hypothesized indicator variables were significant measures of their corresponding latent variables. Service quality is measured most strongly by tangibles, patient satisfaction by satisfaction with service, and patient loyalty by good service quality. In the structural equations, patient loyalty is affected positively and directly by patient satisfaction, while service quality affects patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also showed that the validity values obtained here are consistent, based on a simulation study using a bootstrap approach.
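
    A hedged sketch of the bootstrap consistency check mentioned above, here applied to a simple composite-reliability-style statistic (Cronbach's alpha) over hypothetical indicator data; the paper's actual SEM estimation is not reproduced, and all names and numbers are illustrative.

        import numpy as np

        def cronbach_alpha(X):
            # Cronbach's alpha for an (n_respondents, k_items) indicator matrix
            k = X.shape[1]
            item_var = X.var(axis=0, ddof=1).sum()
            total_var = X.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)

        rng = np.random.default_rng(5)
        latent = rng.normal(size=394)                          # e.g., patient satisfaction
        X = latent[:, None] + 0.8 * rng.normal(size=(394, 4))  # four noisy indicators

        alphas = []
        for _ in range(2000):                                  # nonparametric bootstrap
            idx = rng.integers(0, len(X), len(X))
            alphas.append(cronbach_alpha(X[idx]))
        lo, hi = np.percentile(alphas, [2.5, 97.5])
        print(f"alpha = {cronbach_alpha(X):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")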

  9. No consistent bioenergetic defects in presynaptic nerve terminals isolated from mouse models of Alzheimer's disease.

    Science.gov (United States)

    Choi, Sung W; Gerencser, Akos A; Ng, Ryan; Flynn, James M; Melov, Simon; Danielson, Steven R; Gibson, Bradford W; Nicholls, David G; Bredesen, Dale E; Brand, Martin D

    2012-11-21

    Depressed cortical energy supply and impaired synaptic function are predominant associations of Alzheimer's disease (AD). To test the hypothesis that presynaptic bioenergetic deficits are associated with the progression of AD pathogenesis, we compared bioenergetic variables of cortical and hippocampal presynaptic nerve terminals (synaptosomes) from commonly used mouse models with AD-like phenotypes (J20 age 6 months, Tg2576 age 16 months, and APP/PS age 9 and 14 months) to age-matched controls. No consistent bioenergetic deficiencies were detected in synaptosomes from the three models; only APP/PS cortical synaptosomes from 14-month-old mice showed an increase in respiration associated with proton leak. J20 mice were chosen for a highly stringent investigation of mitochondrial function and content. There were no significant differences in the quality of the synaptosomal preparations or the mitochondrial volume fraction. Furthermore, respiratory variables, calcium handling, and membrane potentials of synaptosomes from symptomatic J20 mice under calcium-imposed stress were not consistently impaired. The recovery of marker proteins during synaptosome preparation was the same, ruling out the possibility that the lack of functional bioenergetic defects in synaptosomes from J20 mice was due to the selective loss of damaged synaptosomes during sample preparation. Our results support the conclusion that the intrinsic bioenergetic capacities of presynaptic nerve terminals are maintained in these symptomatic AD mouse models.

  10. Consistently modeling the same movement strategy is more important than model skill level in observational learning contexts.

    Science.gov (United States)

    Buchanan, John J; Dean, Noah

    2014-02-01

    The experiment undertaken was designed to elucidate the impact of model skill level on observational learning processes. The task was bimanual circle tracing with a 90° relative phase lead of one hand over the other hand. Observer groups watched videos of either an instruction model, a discovery model, or a skilled model. The instruction and skilled model always performed the task with the same movement strategy, the right-arm traced clockwise and the left-arm counterclockwise around circle templates with the right-arm leading. The discovery model used several movement strategies (tracing-direction/hand-lead) during practice. Observation of the instruction and skilled model provided a significant benefit compared to the discovery model when performing the 90° relative phase pattern in a post-observation test. The observers of the discovery model had significant room for improvement and benefited from post-observation practice of the 90° pattern. The benefit of a model is found in the consistency with which that model uses the same movement strategy, and not within the skill level of the model. It is the consistency in strategy modeled that allows observers to develop an abstract perceptual representation of the task that can be implemented into a coordinated action. Theoretically, the results show that movement strategy information (relative motion direction, hand lead) and relative phase information can be detected through visual perception processes and be successfully mapped to outgoing motor commands within an observational learning context.

  11. A Globally Consistent Methodology for an Exposure Model for Natural Catastrophe Risk Assessment

    Science.gov (United States)

    Gunasekera, Rashmin; Ishizawa, Oscar; Pandey, Bishwa; Saito, Keiko

    2013-04-01

    There is a high demand for the development of a globally consistent and robust exposure data model employing a top-down approach, to be used in national-level catastrophe risk profiling for public sector liability. To this effect, there are currently several initiatives, such as the UN-ISDR Global Assessment Report (GAR) and the Global Exposure Database for the Global Earthquake Model (GED4GEM). However, their consistency and granularity differ from region to region, a problem that the proposed approach overcomes by using national datasets, for example in the Latin America and the Caribbean Region (LCR). The methodology proposed in this paper aims to produce a global open exposure dataset based upon population, country-specific building type distribution and other global/economic indicators, such as World Bank indices, that are suitable for natural catastrophe risk modelling purposes. The output would be a GIS raster grid at approximately 1 km spatial resolution which would characterize urbanness (building typology distribution, occupancy and use) for each cell at sub-national level, compatible with other global initiatives and datasets. It would make use of datasets on population, census, demographics, buildings and land use/land cover which are largely available in the public domain. The resultant exposure dataset could be used in conjunction with hazard and vulnerability components to create views of risk for multiple hazards, including earthquake, flood and windstorms. We hope the model would also be a step towards future initiatives for open, interchangeable and compatible databases for catastrophe risk modelling. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.

  12. Self-consistent Spectral Functions in the $O(N)$ Model from the FRG

    CERN Document Server

    Strodthoff, Nils

    2016-01-01

    We present the first self-consistent direct calculation of a spectral function in the framework of the Functional Renormalization Group. The study is carried out in the relativistic $O(N)$ model, where the full momentum dependence of the propagators in the complex plane as well as momentum-dependent vertices are considered. The analysis is supplemented by a comparative study of the Euclidean momentum dependence and of the complex momentum dependence on the level of spectral functions. This work lays the groundwork for the computation of full spectral functions in more complex systems.
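
    For orientation, the spectral function computed in such studies is related to the propagator by the standard definition (a textbook relation, not a formula quoted from the paper):

        \rho(\omega) = -\frac{1}{\pi} \lim_{\epsilon \to 0^{+}} \operatorname{Im} G(\omega + i\epsilon)

    Evaluating the right-hand side requires the propagator at complex arguments, which is why the full momentum dependence of the propagators in the complex plane is essential here.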

  13. Self-consistent description of $\\Lambda$ hypernuclei in the quark-meson coupling model

    CERN Document Server

    Tsushima, K; Thomas, A W

    1997-01-01

    The quark-meson coupling model, which has been successfully used to describe the properties of both finite nuclei and infinite nuclear matter, is applied to a study of $\Lambda$ hypernuclei. With the assumption that the (self-consistently) exchanged scalar and vector mesons couple only to the u and d quarks, a very weak spin-orbit force in the $\Lambda$-nucleus interaction is achieved automatically. This can be interpreted as a direct consequence of the quark structure of the $\Lambda$ hyperon. Possible implications and extensions of the present investigation are also discussed.

  14. Premixed Combustion Simulations with a Self-Consistent Plasma Model for Initiation

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran; Grout, Ray

    2016-01-08

    Combustion simulations of H2-O2 ignition are presented here, with a self-consistent plasma fluid model for ignition initiation. The plasma fluid equations for a nanosecond pulsed discharge are solved and coupled with the governing equations of combustion. The discharge operates through the propagation of a cathode-directed streamer, with radical species produced at the streamer head. These radical species play an important role in the ignition process. The streamer propagation speeds and radical production rates were found to be sensitive to gas temperature and fuel-oxidizer equivalence ratio. The oxygen radical production rate depends strongly on the equivalence ratio, which subsequently results in faster ignition of leaner mixtures.

  15. Supporting Consistency in Linked Specialized Engineering Models Through Bindings and Updating

    Institute of Scientific and Technical Information of China (English)

    Albertus H. Olivier; Gert C. van Rooyen; Berthold Firmenich; Karl E. Beucke

    2008-01-01

    Currently, some commercial software applications allow users to work in an integrated environment. However, this is limited to the suite of models provided by the software vendor and consequently forces all parties to use the same software. In contrast, the research described in this paper investigates ways of using standard software applications, which may be specialized for different professional domains. These are linked for effective transfer of information, and a binding mechanism is provided to support consistency. The proposed solution was implemented using a CAD application and an independent finite element application in order to verify the theoretical aspects of this work.

  16. A “Minsky crisis” in a Stock-Flow Consistent model

    OpenAIRE

    Mouakil, Tarik

    2014-01-01

    This study uses the Stock-Flow Consistent modelling approach to assess the relevance of Minsky’s demonstration of his financial instability hypothesis. We show that this demonstration, based on the assumption of a pro-cyclical leverage ratio, is incompatible with the Kaleckian analysis of profits endorsed by Minsky. Therefore we suggest replacing the assumption of a pro-cyclical leverage ratio with one of pro-cyclical short-term borrowing, which also appears in Minsky’s work.

  17. MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models

    OpenAIRE

    2016-01-01

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes...

  18. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    Science.gov (United States)

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
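
    Sum of ranking differences itself is simple to compute; the sketch below assumes the row-average consensus as the reference ranking, a common choice in the SRD literature, and uses hypothetical merit values:

        import numpy as np

        def srd(values, reference):
            """Sum of ranking differences between a candidate and a reference.

            Both inputs score the same objects (e.g., QSAR models); each is
            converted to ranks and SRD is the L1 distance between the rankings.
            """
            cand_ranks = np.argsort(np.argsort(values))
            ref_ranks = np.argsort(np.argsort(reference))
            return int(np.abs(cand_ranks - ref_ranks).sum())

        # Hypothetical merits of six models under three performance parameters
        merits = np.array([
            [0.91, 0.88, 0.79, 0.85, 0.70, 0.95],   # e.g., r2
            [0.89, 0.90, 0.75, 0.80, 0.72, 0.93],   # e.g., external Q2
            [0.93, 0.85, 0.80, 0.88, 0.68, 0.94],   # e.g., concordance
        ])
        reference = merits.mean(axis=0)              # consensus reference ranking
        for i, row in enumerate(merits):
            print(f"performance parameter {i}: SRD = {srd(row, reference)}")

    The smaller the SRD, the closer a performance parameter ranks the models to the consensus; parameters with outlying SRD values, such as the external-validation merits discussed above, group separately.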

  19. Tides, Rotation Or Anisotropy? Self-consistent Nonspherical Models For Globular Clusters

    Science.gov (United States)

    Varri, Anna L.; Bertin, G.

    2011-01-01

    Spherical models of quasi-relaxed stellar systems provide a successful zeroth-order description of globular clusters. Yet the great progress made in recent years in the acquisition of detailed information on the structure of these stellar systems calls for a renewed effort on the side of modeling. In particular, more general analytical models would make it possible to address the long-standing issue of the physical origin of the deviations from spherical symmetry of globular clusters, which can now be properly measured. In fact, it remains to be established whether the observed flattening is caused by external tides, internal rotation, or pressure anisotropy. In this paper we focus on the first two physical ingredients. We start by briefly describing a recently studied family of triaxial models that incorporate in a self-consistent way the tidal effects of the host galaxy, as a collisionless analogue of the Roche problem (Varri & Bertin, ApJ, 2009). We then present two new families of axisymmetric models in which the deviations from spherical symmetry are induced by the presence of internal rotation. The first is an extension of the well-known family of King models to the case of axisymmetric equilibria flattened by solid-body rotation. The second family is characterized by differential rotation, designed to be rigid in the center and to vanish in the outer parts, where the imposed truncation in phase space becomes effective. For possible application to globular clusters, the models of interest should be those, in both families, characterized by low values of the rotation strength parameter and quasi-spherical shape. For general interest in stellar dynamics, we show that, for high values of that parameter, the differentially rotating models may exhibit unexpected morphologies, even with a toroidal core.

  20. A new self-consistent hybrid chemistry model for Mars and cometary environments

    Science.gov (United States)

    Wedlund, Cyril Simon; Kallio, Esa; Jarvinen, Riku; Dyadechkin, Sergey; Alho, Markku

    2014-05-01

    Over the last 15 years, a 3-D hybrid-PIC planetary plasma interaction modelling platform named HYB has been developed and applied to several planetary environments, such as those of Mars, Venus, Mercury and, more recently, the Moon. We present here another evolution of HYB that includes a fully consistent ionospheric-chemistry package designed to reproduce the main ions at the lower boundary of the model. This evolution, also made possible by the increase in computing power and the switch to spherical coordinates for higher spatial resolution (Dyadechkin et al., 2013), is motivated by the imminent arrival of the Rosetta spacecraft in the vicinity of comet 67P/Churyumov-Gerasimenko. In this presentation we show the application of the new HYB-ionosphere model to 1D and 2D hybrid simulations at Mars above 100 km altitude and demonstrate that, with a limited number of chemical reactions, good agreement with 1D kinetic models may be found. This is a first validation step before applying the model to the 67P/CG comet environment, which, like Mars, is expected to be rich in carbon oxide compounds.

  1. [THE MODEL OF NEUROVASCULAR UNIT IN VITRO CONSISTING OF THREE CELLS TYPES].

    Science.gov (United States)

    Khilazheva, E D; Boytsova, E B; Pozhilenkova, E A; Solonchuk, Yu R; Salmina, A B

    2015-01-01

    There are many ways to model the blood-brain barrier and the neurovascular unit in vitro. All existing models have their disadvantages, advantages and peculiarities of preparation and usage. We obtained a three-cell-type neurovascular unit model in vitro using progenitor cells isolated from rat embryo brains (Wistar, embryonic day 14-16). After isolation of the progenitor cells, the neurospheres were cultured with subsequent differentiation into astrocytes and neurons. Endothelial cells were also isolated from embryonic brain. During differentiation of the progenitor cells, an astrocyte monolayer forms after 7-9 days, a neuron monolayer after 10-14 days, and an endothelial cell monolayer after 7 days. Our protocol for simultaneous isolation and cultivation of neurons, astrocytes and endothelial cells reduces the time needed to obtain a neurovascular unit model in vitro consisting of three cell types and reduces the number of animals used. It is also important to note the cerebral origin of all cell types, which is a further advantage of our in vitro model.

  2. Application of a Multigrid Method to a Mass-Consistent Diagnostic Wind Model.

    Science.gov (United States)

    Wang, Yansen; Williamson, Chatt; Garvey, Dennis; Chang, Sam; Cogan, James

    2005-07-01

    A multigrid numerical method has been applied to a three-dimensional, high-resolution diagnostic model for flow over complex terrain using a mass-consistent approach. The theoretical background for the model is based on a variational analysis using mass conservation as a constraint. The model was designed for diagnostic wind simulation at the microscale in complex terrain and in urban areas. The numerical implementation takes advantage of a multigrid method that greatly improves the computation speed. Three preliminary test cases for the model's numerical efficiency and its accuracy are given. The model results are compared with an analytical solution for flow over a hemisphere. Flow over a bell-shaped hill is computed to demonstrate that the numerical method is applicable in the case of parameterized lee vortices. A simulation of the mean wind field in an urban domain has also been carried out and compared with observational data. The comparison indicated that the multigrid method takes only 3%-5% of the time that is required by the traditional Gauss-Seidel method.
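
    As background, mass-consistent models of this type follow a Sasaki-style variational analysis: the adjusted wind field minimizes a weighted distance to the observed field subject to mass conservation, which leads to an elliptic equation for the Lagrange multiplier, the part the multigrid solver accelerates. A standard statement of the problem (not quoted from the paper) is

        J(\mathbf{u}) = \int_V \left[ \alpha_1^2 (u - u_0)^2 + \alpha_1^2 (v - v_0)^2
                        + \alpha_2^2 (w - w_0)^2 \right] \mathrm{d}V
        \quad \text{subject to} \quad \nabla \cdot \mathbf{u} = 0,

    whose Euler-Lagrange equations reduce to a Poisson-type equation for the multiplier:

        \frac{\partial^2 \lambda}{\partial x^2} + \frac{\partial^2 \lambda}{\partial y^2}
        + \frac{\alpha_1^2}{\alpha_2^2} \frac{\partial^2 \lambda}{\partial z^2}
        = -2 \alpha_1^2 \, \nabla \cdot \mathbf{u}_0 .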

  3. Self-Consistent Model for Pulsed Direct-Current N2 Glow Discharge

    Institute of Scientific and Technical Information of China (English)

    Liu Chengsen; Wang Dezhen

    2005-01-01

    A self-consistent analysis of a pulsed direct-current (DC) N2 glow discharge is presented. The model is based on a numerical solution of the continuity equations for electron and ions coupled with Poisson's equation. The spatial-temporal variations of ionic and electronic densities and electric field are obtained. The electric field structure exhibits all the characteristic regions of a typical glow discharge (the cathode fall, the negative glow, and the positive column).Current-voltage characteristics of the discharge can be obtained from the model. The calculated current-voltage results using a constant secondary electron emission coefficient for the gas pressure 133.32 Pa are in reasonable agreement with experiment.
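
    The model class named here, continuity equations for electrons and ions coupled to Poisson's equation with drift-diffusion fluxes, takes the generic textbook form

        \frac{\partial n_{e,i}}{\partial t} + \nabla \cdot \boldsymbol{\Gamma}_{e,i} = S_{e,i},
        \qquad
        \boldsymbol{\Gamma}_{e,i} = \pm \mu_{e,i} n_{e,i} \mathbf{E} - D_{e,i} \nabla n_{e,i},
        \qquad
        \nabla^2 \phi = -\frac{e}{\varepsilon_0} \left( n_i - n_e \right),

    with \mathbf{E} = -\nabla\phi, the plus sign for ions and the minus sign for electrons; the ionization source terms S and the secondary electron emission coefficient at the cathode close the system.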

  4. Consistency in Regularizations of the Gauged NJL Model at One Loop Level

    CERN Document Server

    Battistel, O A

    1999-01-01

    In this work we revisit questions recently raised in the literature associated with relevant but divergent amplitudes in the gauged NJL model. The questions raised involve ambiguities and symmetry violations which concern the model's predictive power at the one-loop level. Our study shows, by means of an alternative prescription for handling divergent amplitudes, that it is possible to obtain unambiguous and symmetry-preserving amplitudes. The procedure adopted makes use solely of {\it general} properties of an eventual regulator, thus avoiding an explicit form. After a thorough analysis of the problem, we find that there are well-established conditions to be fulfilled by any consistent regularization prescription in order to avoid the problems of concern at the one-loop level.

  5. Macro-particle FEL model with self-consistent spontaneous radiation

    CERN Document Server

    Litvinenko, Vladimir N

    2015-01-01

    Spontaneous radiation plays an important role in SASE FELs and in storage-ring FELs operating in giant-pulse mode. It defines the correlation function of the FEL radiation as well as many of its spectral features. Simulations of these systems using randomly distributed macro-particles, with charge much higher than that of a single electron, create the problem of anomalously strong spontaneous radiation, limiting the capabilities of many FEL codes. In this paper we present a self-consistent macro-particle model which provides statistically exact simulation of multi-mode, multi-harmonic and multi-frequency short-wavelength 3-D FELs, including high-power and saturation effects. The use of macro-particle clones allows both spontaneous and induced radiation to be treated in the same fashion. Simulations using this model do not require a seed and provide the complete temporal and spatial structure of the FEL optical field.

  6. Scale-consistent two-way coupling of land-surface and atmospheric models

    Science.gov (United States)

    Schomburg, A.; Venema, V.; Ament, F.; Simmer, C.

    2009-04-01

    Processes at the land surface and in the atmosphere act on different spatial scales. While in the atmosphere small-scale heterogeneity is smoothed out quickly by turbulent mixing, this is not the case at the land surface, where the small-scale variability of orography, land cover, soil texture, soil moisture, etc. varies only slowly in time. For modelling the fluxes between the land surface and the atmosphere, it is consequently more scale-consistent to model the surface processes at a higher spatial resolution than the atmospheric processes. The mosaic approach is one way to deal with this problem. Using this technique, the Soil Vegetation Atmosphere Transfer (SVAT) scheme is solved at a higher resolution than the atmosphere, which is possible since a SVAT module generally demands considerably less computation time than the atmospheric part. The upscaling of the turbulent fluxes of sensible and latent heat at the interface to the atmosphere is realized by averaging; due to the nonlinearities involved, this is a more sensible approach than averaging the soil properties and computing the fluxes in a second step. The atmospheric quantities are usually assumed to be homogeneous for all soil sub-pixels pertaining to one coarse atmospheric grid box. In this work, the aim is to develop a downscaling approach in which the atmospheric quantities at the lowest model layer are disaggregated before they enter the SVAT module at the higher mosaic resolution. The overall aim is a better simulation of the heat fluxes, which play an important role in the energy and moisture budgets at the surface. The disaggregation rules for the atmospheric variables will depend on high-resolution surface properties and the current atmospheric conditions. To reduce biases due to nonlinearities, we will add small-scale variability according to such rules, as well as noise for the variability we cannot explain. The model used in this work is the COSMO model, the weather forecast model (and regional

  7. Microwave air plasmas in capillaries at low pressure I. Self-consistent modeling

    Science.gov (United States)

    Coche, P.; Guerra, V.; Alves, L. L.

    2016-06-01

    This work presents the self-consistent modeling of micro-plasmas generated in dry air using microwaves (2.45 GHz excitation frequency) within capillaries. The model couples the system of rate balance equations for the most relevant neutral and charged species of the plasma to the homogeneous electron Boltzmann equation. The maintenance electric field is self-consistently calculated adopting a transport theory for low to intermediate pressures, taking into account the presence of O- ions in addition to several positive ions, the dominant species being O2+, NO+ and O+. The low-pressure, small-radius conditions considered yield very intense reduced electric fields (~600-1500 Td), coherent with species losses controlled by transport and wall recombination, and kinetic mechanisms strongly dependent on electron-impact collisions. The charged-particle transport losses are strongly influenced by the presence of the negative ion, despite its low density (~10% of the electron density). For electron densities in the range (1-4) × 10^12 cm^-3, the system exhibits high dissociation degrees for O2 (~20-70%, depending on the working conditions, in contrast with the ~0.1% dissociation obtained for N2), a high concentration of O2(a) (~10^14 cm^-3) and NO(X) (5 × 10^14 cm^-3), and low ozone production (<10^-3 %).

  9. On some problems of weak consistency of quasi-maximum likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    ZHANG SanGuo; LIAO Yuan

    2008-01-01

    In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation $\sum_{i=1}^{n} X_i (y_i - \mu(X_i'\beta)) = 0$ for the univariate generalized linear model $E(y\,|\,X) = \mu(X'\beta)$. Given uncorrelated residuals $\{e_i = Y_i - \mu(X_i'\beta_0),\ 1 \le i \le n\}$ and other conditions, we prove that $\hat{\beta}_n - \beta_0 = O_p(\lambda_n^{-1/2})$ holds, where $\hat{\beta}_n$ is a root of the above equation, $\beta_0$ is the true value of the parameter $\beta$, and $\lambda_n$ denotes the smallest eigenvalue of the matrix $S_n = \sum_{i=1}^{n} X_i X_i'$. We also show that this convergence rate is sharp, provided an independent, non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is $S_n^{-1} \to 0$ as the sample size $n \to \infty$.

  10. SELF-CONSISTENT FIELD MODEL OF BRUSHES FORMED BY ROOT-TETHERED DENDRONS

    Directory of Open Access Journals (Sweden)

    E. B. Zhulina

    2015-05-01

    We present an analytical self-consistent field (SCF) theory that describes planar brushes formed by regularly branched root-tethered dendrons of the second and third generations. The developed approach makes it possible to calculate the SCF molecular potential acting on monomers of the tethered chains. In the linear elasticity regime for stretched polymers, the molecular potential has a parabolic shape, with a parameter k that depends on the architectural parameters of the tethered macromolecules: the polymerization degrees of spacers, the branching functionalities, and the number of generations. For dendrons of the second generation, we formulate a general equation for the parameter k and analyze how variations in the architectural parameters of these dendrons affect the molecular potential. For dendrons of the third generation, an analytical expression for the parameter k is available only for symmetric macromolecules with equal lengths of all spacers and equal branching functionalities in all generations. We analyze how the thickness of a dendron brush in a good solvent is affected by variations in the chain architecture. Results of the developed SCF theory are compared with predictions of a boxlike scaling model. We demonstrate that in the limit of high branching functionalities, the results of both approaches become consistent if the value of the exponent b in the boxlike model is set to unity. In conclusion, we briefly discuss the systems to which the developed SCF theory is applicable: planar and concave spherical and cylindrical brushes under various solvent conditions (including solvent-free melted brushes) and brush-like layers of ionic (polyelectrolyte) dendrons.

  11. Multipseudopotential interaction: A consistent study of cubic equations of state in lattice Boltzmann models.

    Science.gov (United States)

    Khajepor, Sorush; Chen, Baixin

    2016-01-01

    A method is developed to analytically and consistently implement cubic equations of state into the recently proposed multipseudopotential interaction (MPI) scheme in the class of two-phase lattice Boltzmann (LB) models [S. Khajepor, J. Wen, and B. Chen, Phys. Rev. E 91, 023301 (2015); doi:10.1103/PhysRevE.91.023301]. An MPI forcing term is applied to reduce the constraints on the mathematical shape of the thermodynamically consistent pseudopotentials; this allows the parameters of the MPI forces to be determined analytically, without the need for curve fitting or trial-and-error methods. The attraction and repulsion parts of equations of state (EOSs), representing the underlying molecular interactions, are modeled by individual pseudopotentials. Four EOSs (van der Waals, Carnahan-Starling, Peng-Robinson, and Soave-Redlich-Kwong) are investigated, and the results show that the developed MPI-LB system can satisfactorily recover the thermodynamic states of interest. The phase interface is predicted analytically and controlled independently via the EOS parameters, and its effect on the vapor-liquid equilibrium system is studied. The scheme is highly stable up to very high density ratios, and the accuracy of the results can be enhanced by increasing the interface resolution. The MPI drop is evaluated with regard to surface tension, spurious velocities, isotropy, dynamic behavior, and the dependence of stability on the relaxation time.
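
    For flavor, the simpler single-pseudopotential mapping from which MPI generalizes can be sketched as follows; this is a Yuan-Schaefer-style construction, not the MPI scheme itself, and the lattice-unit EOS constants and the prefactor kappa are convention-dependent assumptions:

        import numpy as np

        R, a, b = 1.0, 9.0 / 49.0, 2.0 / 21.0   # illustrative vdW constants (lattice units)
        CS2 = 1.0 / 3.0                          # lattice speed of sound squared (D2Q9)
        G = -1.0                                 # attractive interaction strength

        def p_vdw(rho, T):
            """van der Waals equation of state."""
            return rho * R * T / (1.0 - b * rho) - a * rho**2

        def psi(rho, T, kappa=6.0):
            """Pseudopotential chosen so a single Shan-Chen force reproduces p_vdw.

            kappa encodes the lattice convention (6.0 is one common D2Q9 choice).
            The square-root argument is clamped at zero; for G < 0 it is positive
            wherever the EOS pressure falls below the ideal lattice pressure.
            """
            arg = kappa * (p_vdw(rho, T) - CS2 * rho) / G
            return np.sqrt(np.maximum(arg, 0.0))

        Tc = 8.0 * a / (27.0 * b * R)            # vdW critical temperature
        rho = np.linspace(0.05, 3.0, 7)
        print(psi(rho, T=0.85 * Tc))

    The MPI scheme instead assigns one pseudopotential to each attraction/repulsion term of the cubic EOS, which is what allows its force parameters to be fixed analytically.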

  12. Towards self-consistent modelling of the Sgr A* accretion flow: linking theory and observation

    Science.gov (United States)

    Roberts, Shawn R.; Jiang, Yan-Fei; Wang, Q. Daniel; Ostriker, Jeremiah P.

    2017-04-01

    The interplay between supermassive black holes (SMBHs) and their environments is believed to play an essential role in galaxy evolution. The majority of these SMBHs are in the radiatively inefficient accretion phase, where this interplay remains elusive, though suggestively important, because of the few observational constraints. To remedy this, we directly fit 2D hydrodynamic simulations to Chandra observations of Sgr A* with Markov chain Monte Carlo sampling, self-consistently modelling the 2D inflow-outflow solution for the first time. We find the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, rc ≈ 0.056 rb ≈ 8 × 10^-3 pc, where rb is the Bondi radius. Less than 1 per cent of the inflowing gas accretes on to the SMBH, the remainder being ejected in a polar outflow. We decouple the quiescent point-like emission from the spatially extended flow. We find this point-like emission, accounting for ~4 per cent of the quiescent flux, is spectrally too steep to be explained by unresolved flares or bremsstrahlung, but is likely a combination of a relatively steep synchrotron power law and the high-energy tail of inverse-Compton emission. With this self-consistent model of the accretion flow structure, we make predictions for the flow dynamics and discuss how future X-ray spectroscopic observations can further our understanding of the Sgr A* accretion flow.

  13. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly, as done in sharp-interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems encountered by their predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free-energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed conserves mass, guarantees energy stability and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and are second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
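
    For context, the phase-field crystal equation referred to above is conventionally written as a conserved gradient flow of a Swift-Hohenberg-type free energy (standard form from the literature, not quoted from the thesis):

        F[\phi] = \int \left[ \frac{\phi}{2} \left( r + \left( 1 + \nabla^2 \right)^2 \right) \phi
                  + \frac{\phi^4}{4} \right] \mathrm{d}\mathbf{x},
        \qquad
        \frac{\partial \phi}{\partial t} = \nabla^2 \frac{\delta F}{\delta \phi},

    where the Laplacian in the dynamics enforces mass conservation, and an energy-stable scheme must guarantee a non-increasing discrete free energy; these are precisely the two properties the algorithm above preserves.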

  14. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper;

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high-quality results.

  15. Self-consistent mean-field model for palmitoyloleoylphosphatidylcholine-palmitoyl sphingomyelin-cholesterol lipid bilayers

    Science.gov (United States)

    Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.

    2011-03-01

    The connection between membrane inhomogeneity and the structural basis of lipid rafts has sparked interest in the lateral organization of model lipid bilayers of two and three components. In an effort to investigate anisotropic lipid distribution in mixed bilayers, a self-consistent mean-field theoretical model is applied to palmitoyloleoylphosphatidylcholine (POPC)-palmitoyl sphingomyelin (PSM)-cholesterol mixtures. The compositional dependence of lateral organization in these mixtures is mapped onto a ternary plot. The model utilizes molecular dynamics simulations to estimate interaction parameters and to construct chain conformation libraries. We find that at some concentration ratios the bilayers separate spatially into regions of higher and lower chain order coinciding with areas enriched with PSM and POPC, respectively. To examine the effect of the asymmetric chain structure of POPC on bilayer lateral inhomogeneity, we consider POPC-lipid interactions with and without angular dependence. Results are compared with experimental data and with results from a similar model for mixtures of dioleoylphosphatidylcholine, steroyl sphingomyelin, and cholesterol.

  16. A parameter study of self-consistent disk models around Herbig AeBe stars

    CERN Document Server

    Meijer, J; De Koter, A; Dullemond, C P; Van Boekel, R; Waters, L B F M

    2008-01-01

    We present a parameter study of self-consistent models of protoplanetary disks around Herbig AeBe stars. We use the code developed by Dullemond and Dominik, which solves the 2D radiative transfer problem including an iteration for the vertical hydrostatic structure of the disk. This grid of models will be used for several studies on disk emission and mineralogy in follow-up papers. In this paper we take a first look at the new models, compare them with previous modeling attempts, and focus on the effects of various parameters on the overall structure of the SED that leads to the classification of Herbig AeBe stars into two groups, with a flaring (group I) or self-shadowed (group II) SED. We find that the parameter of overriding importance to the SED is the total mass in grains smaller than 25 μm, confirming the earlier results by Dullemond and Dominik. All other parameters studied have only minor influence, and will alter the SED type only in borderline cases. We find that there is no natural dichotomy between ...

  17. A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints

    Directory of Open Access Journals (Sweden)

    L. Kantha

    2016-01-01

    The prevailing constant Λ-G cosmological model agrees with observational evidence, including the observed redshift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy, represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^-52 m^-2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest redshift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^-2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.
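
    As background, the governing equation generalizes in the obvious way once Λ and G are allowed to vary; in the standard Friedmann-Robertson-Walker form (stated here for reference, with k the curvature parameter):

        H^2 \equiv \left( \frac{\dot{a}}{a} \right)^2
        = \frac{8 \pi G(t)}{3} \rho + \frac{\Lambda(t) c^2}{3} - \frac{k c^2}{a^2},

    so the proposed scalings Λ ~ τ^-2 and G ~ τ^n enter directly through the two source terms on the right-hand side.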

  18. Self-consistent modeling of terahertz waveguide and cavity with frequency-dependent conductivity

    Science.gov (United States)

    Huang, Y. J.; Chu, K. R.; Thumm, M.

    2015-01-01

    The surface resistance of metals, and hence the Ohmic dissipation per unit area, scales with the square root of the frequency of an incident electromagnetic wave. As is well recognized, this can lead to excessive wall losses at terahertz (THz) frequencies. On the other hand, high-frequency oscillatory motion of conduction electrons tends to mitigate the collisional damping. As a result, the classical theory predicts that metals behave more like a transparent medium at frequencies above the ultraviolet. Such a behavior difference is inherent in the AC conductivity, a frequency-dependent complex quantity commonly used to treat electromagnetics of metals at optical frequencies. The THz region falls in the gap between microwave and optical frequencies. However, metals are still commonly modeled by the DC conductivity in currently active vacuum electronics research aimed at the development of high-power THz sources (notably the gyrotron), although a small reduction of the DC conductivity due to surface roughness is sometimes included. In this study, we present a self-consistent modeling of the gyrotron interaction structures (a metallic waveguide or cavity) with the AC conductivity. The resulting waveguide attenuation constants and cavity quality factors are compared with those of the DC-conductivity model. The reduction in Ohmic losses under the AC-conductivity model is shown to be increasingly significant as the frequency reaches deeper into the THz region. Such effects are of considerable importance to THz gyrotrons for which the minimization of Ohmic losses constitutes a major design consideration.
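
    The frequency-dependent conductivity in question is normally the Drude form, with the DC value recovered for ωτ ≪ 1; the textbook expressions, stated here for orientation, are

        \sigma(\omega) = \frac{\sigma_0}{1 - i \omega \tau}, \qquad
        \sigma_0 = \frac{n e^2 \tau}{m}, \qquad
        R_s \approx \sqrt{\frac{\omega \mu_0}{2 \sigma_0}} \quad (\omega \tau \ll 1),

    where τ is the electron relaxation (collision) time. As ωτ grows through the THz range, the imaginary part of σ(ω) becomes significant and the Ohmic loss falls below the DC-conductivity prediction, which is the effect quantified above.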

  19. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

  20. Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model

    Science.gov (United States)

    Borges Sebastião, Israel; Alexeenko, Alina

    2016-10-01

    The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.

  1. Thermocline Storage Filled with Structured Ceramics. Numerical Consistency of the Developed Numerical Model and First Observations

    Science.gov (United States)

    Motte, Fabrice; Bugler-Lamb, Samuel L.; Falcoz, Quentin

    2015-07-01

    The attraction of solar energy is greatly enhanced by the possibility of it being used during times of reduced or non-existent solar flux, such as weather-induced intermittences or the darkness of the night. Optimizing thermal storage is therefore crucial for the success of solar energy plants. Here we present a study of a structured bed filler dedicated to Thermocline-type thermal storage, believed to offer financial and thermal benefits beyond those of other systems currently in use, such as packed-bed Thermocline tanks. Several criteria, such as Thermocline thickness and Thermocline centering, are defined to facilitate the assessment of the efficiency of the tank, complementing the standard concepts of power output. A numerical model is developed that reduces the modeling of such a tank to two dimensions. The structure within the tank is designed to be built from simple bricks harboring rectangular channels through which the solar heat-transfer and storage fluid flows. The model is scrutinized and tested for physical robustness, and the results are presented in this paper. The consistency of the model is achieved within particular ranges of each physical variable.

  2. Advanced reach tool (ART) : Development of the mechanistic model

    NARCIS (Netherlands)

    Fransman, W.; Tongeren, M. van; Cherrie, J.W.; Tischer, M.; Schneider, T.; Schinkel, J.; Kromhout, H.; Warren, N.; Goede, H.; Tielemans, E.

    2011-01-01

    This paper describes the development of the mechanistic model within a collaborative project, referred to as the Advanced REACH Tool (ART) project, to develop a tool to model inhalation exposure for workers sharing similar operational conditions across different industries and locations in Europe.

  3. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    Science.gov (United States)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT), is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  4. Development of a real-time simulation tool towards self-consistent scenario of plasma start-up and sustainment on helical fusion reactor FFHR-d1

    Science.gov (United States)

    Goto, T.; Miyazawa, J.; Sakamoto, R.; Suzuki, Y.; Suzuki, C.; Seki, R.; Satake, S.; Huang, B.; Nunami, M.; Yokoyama, M.; Sagara, A.; the FFHR Design Group

    2017-06-01

    This study closely investigates the plasma operation scenario for the LHD-type helical reactor FFHR-d1 in view of MHD equilibrium/stability, neoclassical transport, alpha energy loss and impurity effects. Using a 1D calculation code that reproduces the typical pellet discharges in LHD experiments, we identified a self-consistent plasma operation scenario that achieves steady-state sustainment of the burning plasma with a fusion gain of Q ~ 10, within an operation regime that has already been confirmed in LHD experiments. The developed calculation tool enables systematic analysis of the operation regime in real time.

  5. A consistent model for \\pi N transition distribution amplitudes and backward pion electroproduction

    CERN Document Server

    Lansberg, J P; Semenov-Tian-Shansky, K; Szymanowski, L

    2011-01-01

    The extension of the concept of generalized parton distributions leads to the introduction of baryon to meson transition distribution amplitudes (TDAs), non-diagonal matrix elements of the nonlocal three quark operator between a nucleon and a meson state. We present a general framework for modelling nucleon to pion ($\\pi N$) TDAs. Our main tool is the spectral representation for \\pi N TDAs in terms of quadruple distributions. We propose a factorized Ansatz for quadruple distributions with input from the soft-pion theorem for \\pi N TDAs. The spectral representation is complemented with a D-term like contribution from the nucleon exchange in the cross channel. We then study backward pion electroproduction in the QCD collinear factorization approach in which the non-perturbative part of the amplitude involves \\pi N TDAs. Within our two component model for \\pi N TDAs we update previous leading-twist estimates of the unpolarized cross section. Finally, we compute the transverse target single spin asymmetry as a fu...

  6. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
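
    A minimal sketch of the revised Monod rate law described here, pairing a kinetic factor with a thermodynamic driving-force factor; the parameter values below are illustrative assumptions, and the hand-off of the resulting uptake rate to an FBA solver such as the COBRA Toolbox is omitted:

        import numpy as np

        R_GAS = 8.314e-3  # gas constant, kJ/(mol*K)

        def respiration_rate(k_max, S, K_S, dG_cat, dG_ATP, m_ATP, chi, T=298.15):
            """Revised Monod rate = kinetic factor x thermodynamic factor.

            k_max  : maximum specific rate
            S, K_S : substrate concentration and half-saturation constant
            dG_cat : free energy of the catabolic reaction (kJ/mol, < 0 favorable)
            dG_ATP : energy conserved per ATP synthesized (kJ/mol, > 0)
            m_ATP  : ATP yield per reaction turnover
            chi    : average stoichiometric number of the limiting step
            """
            F_K = S / (K_S + S)                                    # Monod kinetic factor
            F_T = 1.0 - np.exp((dG_cat + m_ATP * dG_ATP) / (chi * R_GAS * T))
            return k_max * F_K * max(F_T, 0.0)                     # no net reverse flux

        # Acetoclastic methanogenesis, illustrative numbers only
        v = respiration_rate(k_max=1.0e-5, S=5.0, K_S=1.2,
                             dG_cat=-31.0, dG_ATP=45.0, m_ATP=0.5, chi=2.0)
        print(f"respiration rate = {v:.3e}")

    The substrate uptake rate derived from v then replaces the enzyme-kinetic uptake bound in the FBA problem, which is the coupling that links the geochemical calculation (PHREEQC) to the constraint-based model.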

  7. Modeling Extreme Solar Energetic Particle Acceleration with Self-Consistent Wave Generation

    Science.gov (United States)

    Arthur, A. D.; le Roux, J. A.

    2015-12-01

    Observations of extreme solar energetic particle (SEP) events associated with coronal mass ejection driven shocks have detected particle energies up to a few GeV at 1 AU within the first ~10 minutes to 1 hour of shock acceleration. Whether or not acceleration by a single shock is sufficient in these events or if some combination of multiple shocks or solar flares is required is currently not well understood. Furthermore, the observed onset times of the extreme SEP events place the shock in the corona when the particles escape upstream. We have updated our focused transport theory model that has successfully been applied to the termination shock and traveling interplanetary shocks in the past to investigate extreme SEP acceleration in the solar corona. This model solves the time-dependent Focused Transport Equation including particle preheating due to the cross shock electric field and the divergence, adiabatic compression, and acceleration of the solar wind flow. Diffusive shock acceleration of SEPs is included via the first-order Fermi mechanism for parallel shocks. To investigate the effects of the solar corona on the acceleration of SEPs, we have included an empirical model for the plasma number density, temperature, and velocity. The shock acceleration process becomes highly time-dependent due to the rapid variation of these coronal properties with heliocentric distance. Additionally, particle interaction with MHD wave turbulence is modeled in terms of gyroresonant interactions with parallel propagating Alfven waves. However, previous modeling efforts suggest that the background amplitude of the solar wind turbulence is not sufficient to accelerate SEPs to extreme energies over the short time scales observed. To account for this, we have included the transport and self-consistent amplification of MHD waves by the SEPs through wave-particle gyroresonance. We will present the results of this extended model for a single fast quasi-parallel CME driven shock in the

  8. Bibliographic Relationships in MARC and Consistent with FRBR Model According to RDA Rules

    Directory of Open Access Journals (Sweden)

    Mahsa Fardehoseiny

    2013-03-01

    This study was conducted to investigate the bibliographic relationships in MARC and their consistency with the FRBR model. By establishing the necessary relationships between bibliographic records, users can retrieve the information they need faster and more easily, so good linking of existing bibliographic records is important. The purpose of this study was to define the relationships between bibliographic records in the National Library's OPAC database; the method was a descriptive content-analysis approach. The online catalog (OPAC) of the National Library of Iran was used to collect information. All records meeting the criteria listed in IFLA's final report on bibliographic relationships concerning the first-group entities of the FRBR model, together with the RDA rules, were examined and analyzed. According to this study, if software were developed to transfer the MARC data already in the National Library's bibliographic database into the conceptual model, these relationships would not be transferable. In addition, the FRBR-MARC correspondences established in this study required human judgment; a machine is unable to detect them automatically. The results showed that the relationships conveyed from MARC to FRBR covered about 47.70 percent of the MARC fields, whereas from FRBR to MARC, even with careful identification of the MARC relationships, only 31.38 percent of the relationships could be covered. Based on real data and the usable fields of Boostan-e-Saadi records in the MARC pattern at the National Library of Iran, this figure was reduced to 16.95 percent.

  9. Formulation of a self-consistent model for quantum well pin solar cells

    Science.gov (United States)

    Ramey, S.; Khoie, R.

    1997-04-01

    A self-consistent numerical simulation model for a pin single-cell solar cell is formulated. The solar cell device consists of a p-AlGaAs region, an intrinsic i-AlGaAs/GaAs region with several quantum wells, and an n-AlGaAs region. Our simulator solves a field-dependent Schrödinger equation self-consistently with Poisson and Drift-Diffusion equations. The emphasis is given to the study of the capture of electrons by the quantum wells, the escape of electrons from the quantum wells, and the absorption and recombination within the quantum wells. We believe this would be the first such comprehensive model ever reported. The field-dependent Schrödinger equation is solved using the transfer matrix method. The eigenfunctions and eigenenergies obtained are used to calculate the escape rate of electrons from the quantum wells, and the non-radiative recombination rates of electrons at the boundaries of the quantum wells. These rates, together with the capture rates of electrons by the quantum wells, are then used in a self-consistent numerical Poisson-Drift-Diffusion solver. The resulting field profiles are then used in the field-dependent Schrödinger solver, and the iteration process is repeated until convergence is reached. In a p-AlGaAs i-AlGaAs/GaAs n-AlGaAs cell with an aluminum mole fraction of 0.3, with one 100 Å-wide, 284 meV-deep quantum well, the eigenenergies with zero field are 36 meV, 136 meV, and 267 meV for the first, second and third subbands, respectively. With an electric field of 50 kV/cm, the eigenenergies are shifted to 58 meV, 160 meV, and 282 meV, respectively. With these eigenenergies, the thermionic escape time of electrons from the GaAs Γ-valley varies from 220 ps to 90 ps for electric fields ranging from 10 to 50 kV/cm. These preliminary results are in good agreement with those reported by other researchers.
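
    Thermionic escape times of this kind are conventionally estimated from an expression of the following form (a commonly used approximation in the quantum-well literature, not necessarily the exact one used in this work):

        \frac{1}{\tau_{\mathrm{esc}}} = \frac{1}{L_w} \sqrt{\frac{k_B T}{2 \pi m^*}}
        \exp\!\left( -\frac{E_b - E_n}{k_B T} \right),

    where L_w is the well width, m^* the effective mass, E_b the barrier height and E_n the confined-level energy; the applied field enters by lowering the effective barrier and shifting E_n, which is why the quoted escape time drops from 220 ps to 90 ps as the field increases.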

  10. Dynamic wind turbine models in power system simulation tool

    DEFF Research Database (Denmark)

    Hansen, A.; Jauch, Clemens; Soerensen, P.

    The present report describes the dynamic wind turbine models implemented in the power system simulation tool DIgSILENT. The developed models are a part of the results of a national research project, whose overall objective is to create a model database in different simulation tools. The report...... provides a description of the wind turbine modelling, both at a component level and at a system level....

  11. Modeling Languages: metrics and assessing tools

    OpenAIRE

    Fonte, Daniela; Boas, Ismael Vilas; Azevedo, José; Peixoto, José João; Faria, Pedro; Silva, Pedro; Sá, Tiago de, 1990-; Costa, Ulisses; da Cruz, Daniela; Henriques, Pedro Rangel

    2012-01-01

    Any traditional engineering field has metrics to rigorously assess the quality of its products. Engineers know that the output must satisfy the requirements, must comply with the production and market rules, and must be competitive. Professionals in the new field of software engineering started a few years ago to define metrics to appraise their product: individual programs and software systems. This concern motivates the need to assess not only the outcome but also the process and tools employed...

  12. A new validation-assessment tool for health-economic decision models

    NARCIS (Netherlands)

    Mauskopf, J.; Vemer, P.; Voorn, van G.A.K.; Corro Ramos, I.

    2014-01-01

    A validation-assessment tool is being developed for decision makers to transparently and consistently evaluate the validation status of different health-economic decision models. It is designed as a list of validation techniques covering all relevant aspects of model validation to be filled in by

  13. The mathematical and computer modeling of the worm tool shaping

    Science.gov (United States)

    Panchuk, K. L.; Lyashkov, A. A.; Ayusheev, T. V.

    2017-06-01

    Traditionally, mathematical profiling of the worm tool is carried out by T. Olivier's first method, known in the theory of gearing, which requires an intermediate surface of the generating rack. This complicates the profiling process and its realization by means of computer 3D-modeling. The purpose of this work is to improve the mathematical model of profiling and to realize it with 3D-modeling methods. The research problems are: obtaining a mathematical model of profiling that excludes the generating rack; realizing the obtained model by means of wireframe and surface modeling; and developing and testing a solid-modeling technology for solving the profiling problem. The kinematic method for studying mutually enveloping surfaces is taken as the basis. The computer study is carried out with CAD methods of 3D-modeling. We have developed a mathematical model of profiling of the worm tool; wireframe, surface and solid models of the shaping of the mutually enveloping surfaces of the workpiece and the tool are obtained. The proposed mathematical models and 3D-modeling technologies of shaping are tools for theoretical and experimental profiling of the worm tool. The results can be used in the design of metal-cutting tools.
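
    For context (the standard kinematic condition of the enveloping-surface method, not spelled out in the record): the tool surface is found from the equation of meshing,

        $\mathbf{n} \cdot \mathbf{v}^{(12)} = 0,$

    where $\mathbf{n}$ is the common surface normal and $\mathbf{v}^{(12)}$ is the relative velocity of the two surfaces at the contact point; solving it together with the surface equation yields the contact lines that sweep out the mutually enveloping surface.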

  14. Many-Task Computing Tools for Multiscale Modeling

    OpenAIRE

    Katz, Daniel S.; Ripeanu, Matei; Wilde, Michael

    2011-01-01

    This paper discusses the use of many-task computing tools for multiscale modeling. It defines multiscale modeling and places different examples of it on a coupling spectrum, discusses the Swift parallel scripting language, describes three multiscale modeling applications that could use Swift, and then talks about how the Swift model is being extended to cover more of the multiscale modeling coupling spectrum.

  16. Alterations in striatal synaptic transmission are consistent across genetic mouse models of Huntington's disease

    Directory of Open Access Journals (Sweden)

    Damian M Cummings

    2010-06-01

    Full Text Available Since the identification of the gene responsible for HD (Huntington's disease), many genetic mouse models have been generated. Each employs a unique approach for delivery of the mutated gene and has a different CAG repeat length and background strain. The resultant diversity in the genetic context and phenotypes of these models has led to extensive debate regarding the relevance of each model to the human disorder. Here, we compare and contrast the striatal synaptic phenotypes of two models of HD, namely the YAC128 mouse, which carries the full-length huntingtin gene on a yeast artificial chromosome, and the CAG140 KI (knock-in) mouse, which carries a human/mouse chimaeric gene that is expressed in the context of the mouse genome, with our previously published data obtained from the R6/2 mouse, which is transgenic for exon 1 mutant huntingtin. We show that striatal MSNs (medium-sized spiny neurons) in YAC128 and CAG140 KI mice have similar electrophysiological phenotypes to that of the R6/2 mouse. These include a progressive increase in membrane input resistance, a reduction in membrane capacitance, a lower frequency of spontaneous excitatory postsynaptic currents and a greater frequency of spontaneous inhibitory postsynaptic currents in a subpopulation of striatal neurons. Thus, despite differences in the context of the inserted gene between these three models of HD, the primary electrophysiological changes observed in striatal MSNs are consistent. The outcomes suggest that the changes are due to the expression of mutant huntingtin and such alterations can be extended to the human condition.

  17. Effective rates from thermodynamically consistent coarse-graining of models for molecular motors with probe particles.

    Science.gov (United States)

    Zimmermann, Eva; Seifert, Udo

    2015-02-01

    Many single-molecule experiments for molecular motors comprise not only the motor but also large probe particles coupled to it. The theoretical analysis of these assays, however, often takes into account only the degrees of freedom representing the motor. We present a coarse-graining method that maps a model comprising two coupled degrees of freedom, which represent motor and probe particle, to such an effective one-particle model by eliminating the dynamics of the probe particle in a thermodynamically and dynamically consistent way. The coarse-grained rates obey a local detailed balance condition and reproduce the net currents. Moreover, the average entropy production as well as the thermodynamic efficiency is invariant under this coarse-graining procedure. Our analysis reveals that only by assuming unrealistically fast probe particles do the coarse-grained transition rates coincide with the transition rates of the traditionally used one-particle motor models. Additionally, we find that for multicyclic motors the stall force can depend on the probe size. We apply this coarse-graining method to specific case studies of the F1-ATPase and the kinesin motor.
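
    For orientation (a schematic form in our notation, not quoted from the paper), a local detailed balance condition for forward and backward coarse-grained rates $w_{ij}$, $w_{ji}$ between states $i$ and $j$ reads

        $\frac{w_{ij}}{w_{ji}} = \exp\!\left(\frac{\Delta s_{ij}^{\mathrm{med}}}{k_B}\right),$

    where $\Delta s_{ij}^{\mathrm{med}}$ is the entropy change of the surrounding medium (released heat over temperature) associated with the transition.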

  18. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    Science.gov (United States)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, astronomical archives have been providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA, to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.
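
    As a concrete illustration of the standardized access this archive model targets (not from the record itself; the endpoint URL below is a placeholder, and pyvo is just one generic IVOA client):

        import pyvo

        # Placeholder TAP endpoint; substitute the URL of an actual archive service.
        tap = pyvo.dal.TAPService("https://archive.example.nasa.gov/tap")

        # ADQL query against the standard IVOA ObsCore table.
        results = tap.search("SELECT TOP 10 obs_id, s_ra, s_dec FROM ivoa.obscore")
        for row in results:
            print(row["obs_id"], row["s_ra"], row["s_dec"])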

  19. A three-dimensional PEM fuel cell model with consistent treatment of water transport in MEA

    Science.gov (United States)

    Meng, Hua

    In this paper, a three-dimensional PEM fuel cell model with a consistent water transport treatment in the membrane electrode assembly (MEA) has been developed. In this new PEM fuel cell model, the conservation equation of the water concentration is solved in the gas channels, gas diffusion layers, and catalyst layers, while a conservation equation of the water content is established in the membrane. These two equations are connected using a set of internal boundary conditions based on the thermodynamic phase equilibrium and flux equality at the interface of the membrane and the catalyst layer. The existing fictitious water concentration treatment, which assumes thermodynamic phase equilibrium between the water content in the membrane phase and the water concentration, is applied in the two catalyst layers to consider water transport in the membrane phase. Since all the other conservation equations are still developed and solved in the single-domain framework without resort to interfacial boundary conditions, the present new PEM fuel cell model is termed a mixed-domain method. Results from this mixed-domain approach have been compared extensively with those from the single-domain method, showing good accuracy in terms of not only cell performance and current distributions but also water content variations in the membrane.
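
    Schematically (our notation, not the paper's), the internal boundary conditions coupling the two water equations at the membrane/catalyst-layer (CL) interface express phase equilibrium and flux continuity:

        $\lambda\big|_{\mathrm{mem}} = \lambda_{\mathrm{eq}}(a)\big|_{\mathrm{CL}}, \qquad \mathbf{N}_w^{\mathrm{mem}}\cdot\mathbf{n} = \mathbf{N}_w^{\mathrm{CL}}\cdot\mathbf{n},$

    where $\lambda$ is the membrane water content, $a$ the water activity computed from the CL water concentration, $\mathbf{N}_w$ the water flux, and $\mathbf{n}$ the interface normal.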

  20. Consistency of non-flat $\\Lambda$CDM model with the new result from BOSS

    CERN Document Server

    Kumar, Suresh

    2015-01-01

    Using 137,562 quasars in the redshift range $2.1\leq z\leq3.5$ from the Data Release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of Sloan Digital Sky Survey (SDSS)-III, the BOSS-SDSS collaboration estimated the expansion rate $H(z=2.34)=222\pm7$ km/s/Mpc of the Universe, and reported that this value is in tension with the predictions of the flat $\Lambda$CDM model at around the 2.5$\sigma$ level. In this letter, we briefly describe some attempts made in the literature to relieve the tension, and show that the tension can naturally be alleviated in a non-flat $\Lambda$CDM model with positive curvature. However, this idea conflicts with the inflation paradigm, which predicts an almost spatially flat Universe. Nevertheless, the theoretical consistency of the non-flat $\Lambda$CDM model with the new result from BOSS deserves the attention of the community.
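
    For reference (standard cosmology, not specific to this letter), the expansion rate in a non-flat $\Lambda$CDM model is

        $H(z) = H_0\sqrt{\Omega_m (1+z)^3 + \Omega_k (1+z)^2 + \Omega_\Lambda}, \qquad \Omega_k = 1 - \Omega_m - \Omega_\Lambda,$

    so positive spatial curvature ($\Omega_k < 0$) lowers the predicted $H(z)$ at intermediate redshifts relative to the flat case; this is the degree of freedom exploited above.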

  1. Cepheid models based on self-consistent stellar evolution and pulsation calculations the right answer?

    CERN Document Server

    Baraffe, I; Méra, D; Chabrier, G; Beaulieu, J P

    1998-01-01

    We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables ($3 \le M/M_\odot \le 12$) and derived self-consistent mass-period-luminosity relations. The period-luminosity relation as a function of metallicity is analysed and compared to the recent EROS observations in the Magellanic Clouds. The models reproduce the observed width of the instability strips for the SMC and LMC. We determine a statistical P-L relationship, taking into account the evolutionary timescales and a mass distribution given by a Salpeter mass function. Excellent agreement is found with the SMC PL relationship determined by Sasselov et al. (1997). The models reproduce the change of slope in the P-L relationship near $P\sim 2.5$ days discovered recently by the EROS collaboration (Bauer 1997; Bauer et al. 1998) and ...

  2. Methodology and consistency of slant and vertical assessments for ionospheric electron content models

    Science.gov (United States)

    Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül

    2017-05-01

    A summary of the main concepts on global ionospheric map(s) [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter, and difference of slant TEC based on independent Global Positioning System data, dSTEC-GPS) is performed. It is based on 26 worldwide-distributed GPS receivers, mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical error RMS values of 2 TECU for VTEC-altimeter and 0.5 TECU for dSTEC-GPS assessments are found. And, as expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, most evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and, in general, for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
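
    For readers new to the dSTEC test (a schematic definition consistent with common IGS usage): for a continuous receiver-satellite phase arc, the slant TEC referenced to the highest-elevation epoch $t_{\mathrm{ref}}$,

        $\mathrm{dSTEC}(t) = \mathrm{STEC}(t) - \mathrm{STEC}(t_{\mathrm{ref}}),$

    is measured almost unambiguously from the carrier phase and compared with the same difference predicted by the GIM, providing a truly external slant-domain check.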

  3. Towards Self-Consistent Modelling of the Sgr A* Accretion Flow: Linking Theory and Observation

    CERN Document Server

    Roberts, Shawn R; Jiang, Yan-Fei; Ostriker, Jeremiah P

    2016-01-01

    The interplay between supermassive black holes (SMBHs) and their environments is believed to play an essential role in galaxy evolution. The majority of these SMBHs are in the radiatively inefficient accretion phase, where this interplay remains elusive, but suggestively important, due to few observational constraints. To remedy this, we directly fit 2-D hydrodynamic simulations to Chandra observations of Sgr A* with Markov Chain Monte Carlo sampling, self-consistently modelling the 2-D inflow-outflow solution for the first time. We find the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, r$_c$ $\approx$ 0.056 r$_b$ $\approx8\times10^{-3}$ pc, where r$_b$ is the Bondi radius. Less than 1\% of the inflowing gas accretes onto the SMBH, the remainder being ejected in a polar outflow. For the first time...
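
    For context (the standard definition, not restated in the record): the Bondi radius appearing above is

        $r_b = \frac{2\,G M_{\mathrm{BH}}}{c_s^{2}},$

    with $c_s$ the ambient sound speed, so the quoted centrifugal radius is only a small fraction of the gravitational capture scale.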

  4. A self-consistent linear-mode model of stellar convection

    Science.gov (United States)

    Macauslan, J.

    1985-01-01

    A normal-mode expansion of the linearized fluid equations in terms of a small subset of spherical harmonics can provide a foundation for a physically motivated, self-consistent description of a solar-type convection zone. In the absence of dissipation, a second-order differential equation governs the radial dependence of the modes, so that interpretation of the effects on convection quantities of the normal-form 'potential well' is straightforward. The philosophy is quite different from the more recent work of Narasimha and Antia (1982): all envelopes presented here differ substantially from MLT envelopes, and therefore from theirs, which are constructed to be consistent with MLT. The amplitude of all modes is set by a Kelvin-Helmholtz ('shear') instability argument unrelated to solar observations, with the result that the convection description may be considered to arise from 'first heuristic principles'. The thermodynamics modelled vaguely resemble the sun's, and more vigorously convective envelopes show some phenomena qualitatively like solar observations (e.g., atmospheric velocity spectra).

  5. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    Full Text Available On the basis of a microscopic model of the self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is constructed and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction makes no contribution to the self-consistent field, so that the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of the contribution from three-body forces. The effective mass and pressure are numerically calculated for a potential of the "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, an interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, while an attractive interaction raises the effective mass. The question of the thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of the system with interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.

  6. Self-consistent 2-phase AGN torus models: SED library for observers

    CERN Document Server

    Siebenmorgen, Ralf; Efstathiou, Andreas

    2015-01-01

    We assume that dust near active galactic nuclei (AGN) is distributed in a torus-like geometry, which may be described by a clumpy medium or a homogeneous disk or as a combination of the two (i.e. a 2-phase medium). The dust particles considered are fluffy and have higher submillimeter emissivities than grains in the diffuse ISM. The dust-photon interaction is treated in a fully self-consistent three-dimensional radiative transfer code. We provide an AGN library of spectral energy distributions (SEDs). Its purpose is to quickly obtain estimates of the basic parameters of the AGN, such as the intrinsic luminosity of the central source, the viewing angle, the inner radius, the volume filling factor and optical depth of the clouds, and the optical depth of the disk midplane, and to predict the flux at yet unobserved wavelengths. The procedure is simple and consists of finding an element in the library that matches the observations. We discuss the general properties of the models and in particular the 10 μm silicate...
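
    The library-matching step described above amounts to a nearest-template search. A minimal sketch (our own illustration, with synthetic arrays standing in for the SED library and the photometry):

        import numpy as np

        # Toy stand-ins: a library of model SEDs sampled at common wavelengths,
        # plus one observed SED with uncertainties (all values are synthetic).
        rng = np.random.default_rng(0)
        library = rng.lognormal(0.0, 0.5, size=(500, 30))   # 500 models x 30 bands
        observed = library[42] * (1 + 0.05 * rng.standard_normal(30))
        sigma = 0.05 * observed

        # For each model, fit a free luminosity scale analytically, then rank by chi^2.
        scale = (library * observed / sigma**2).sum(axis=1) / (library**2 / sigma**2).sum(axis=1)
        chi2 = (((scale[:, None] * library - observed) / sigma)**2).sum(axis=1)
        best = int(np.argmin(chi2))
        print(f"best-matching library element: {best}, reduced chi^2 = {chi2[best] / 30:.2f}")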

  7. Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech

    Directory of Open Access Journals (Sweden)

    Petráš Rudolf

    2014-06-01

    Full Text Available Height-diameter models define the general relationship between tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), Silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and three-parameter Korf functions. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, so the final growth model has only one or two parameters to be estimated. These "free" parameters are then expressed over the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5 to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. As many as 9 regression models were derived for each species. Although the goodness of fit of all models showed that they were generally well suited to the data, the best results were obtained for Silver fir. The coefficient of determination ranged from 0.946 to 0.948, RMSE (m) was in the interval 1.94-1.97 and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the regression parameter estimates obtained for beech were the least precise. The coefficient of determination for beech was 0.854-0.860, RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf's formula produced slightly better estimations than Michailoff's, and it proved immaterial which estimated parameter was fixed and which parameters
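
    As an illustration of the two function families mentioned (a sketch using the common textbook forms, which may differ in detail from the paper's generalized versions), both curves can be fitted with scipy:

        import numpy as np
        from scipy.optimize import curve_fit

        def michailoff(d, a, b):
            # Two-parameter Michailoff curve: height tends to 1.3 + a for large d.
            return 1.3 + a * np.exp(-b / d)

        def korf(d, a, b, c):
            # Three-parameter Korf growth function.
            return 1.3 + a * np.exp(-b * d**(-c))

        # Synthetic diameter/height sample standing in for plot measurements.
        rng = np.random.default_rng(1)
        d = rng.uniform(8, 60, 200)
        h = michailoff(d, 32.0, 14.0) + rng.normal(0, 1.5, d.size)

        p_mich, _ = curve_fit(michailoff, d, h, p0=(30.0, 10.0))
        p_korf, _ = curve_fit(korf, d, h, p0=(40.0, 10.0, 0.8), maxfev=5000)
        print("Michailoff a,b:", p_mich, " Korf a,b,c:", p_korf)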

  8. Motion of the Philippine Sea plate consistent with the NUVEL-1A model

    Science.gov (United States)

    Zang, Shao Xian; Chen, Qi Yong; Ning, Jie Yuan; Shen, Zheng Kang; Liu, Yong Gang

    2002-09-01

    We determine Euler vectors for 12 plates, including the Philippine Sea plate (PH), relative to the fixed Pacific plate (PA) by inverting the earthquake slip vectors along the boundaries of the Philippine Sea plate, GPS-observed velocities, and 1122 data from the NUVEL-1 and NUVEL-1A global plate motion models, respectively. This analysis thus also yields Euler vectors for the Philippine Sea plate relative to adjacent plates. Our results are consistent with observed data and can satisfy the geological and geophysical constraints along the Caroline (CR)-PH and PA-CR boundaries. The results also give insight into internal deformation of the Philippine Sea plate. The area enclosed by the Ryukyu Trench-Nankai Trough, Izu-Bonin Trench and GPS stations S102, S063 and Okino Torishima moves uniformly as a rigid plate, but the areas near the Philippine Trench, Mariana Trough and Yap-Palau Trench have obvious deformation.

  9. Plasma Processes : A self-consistent kinetic modeling of a 1-D, bounded, plasma in equilibrium

    Indian Academy of Sciences (India)

    Monojoy Goswami; H Ramachandran

    2000-11-01

    A self-consistent kinetic treatment is presented here, where the Boltzmann equation is solved for a particle-conserving Krook collision operator. The resulting equations have been implemented numerically. The treatment solves for the entire quasineutral column, making no assumptions about $\lambda_{\mathrm{mfp}}/L$, where $\lambda_{\mathrm{mfp}}$ is the ion-neutral collision mean free path and $L$ the size of the device. Coulomb collisions are neglected in favour of collisions with neutrals, and the particle source is modeled as a uniform Maxwellian. Electrons are treated as an inertialess but collisional fluid. The ion distribution function for the trapped and the transiting orbits is obtained. Interesting findings include the anomalous heating of ions as they approach the presheath, the development of strongly non-Maxwellian features near the last mean free path, and strong modifications of the sheath criterion.
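
    For reference (the standard form; the particle-conserving variant used in the paper fixes the reference Maxwellian's density from the local density), the Krook (BGK) operator replaces the full Boltzmann collision integral by a relaxation term,

        $\left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}} = -\,\nu\,\bigl(f - f_M\bigr),$

    where $\nu$ is the collision frequency and $f_M$ a Maxwellian reference distribution.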

  10. Scratch as a computational modelling tool for teaching physics

    Science.gov (United States)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-05-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling programs. In this article, we briefly discuss why Scratch could be a useful tool for computational modelling in the primary or secondary physics classroom, and we present practical examples of how it can be used to build a model.

  11. Shape: A 3D Modeling Tool for Astrophysics.

    Science.gov (United States)

    Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus

    2011-04-01

    We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.

  12. Self-consistent seismic cycle simulation in a three-dimensional continuum model: methodology and examples.

    Science.gov (United States)

    Pranger, C. C.; Le Pourhiet, L.; May, D.; van Dinther, Y.; Gerya, T.

    2016-12-01

    Subduction zones evolve over millions of years. The state of stress, the distribution of materials, and the strength and structure of the interface between the two plates are intricately tied to a host of time-dependent physical processes, such as damage, friction, (nonlinear) viscous relaxation, and fluid migration. In addition, the subduction interface has a complex three-dimensional geometry that evolves with time and can adjust in response to a changing stress environment or in response to impinging topographical features, and can even branch off as a splay fault. All in all, the behaviour of (large) earthquakes at the millisecond to minute timescale is heavily dependent on the pattern of stress accumulation during the 100-year inter-seismic period, the events occurring on or near the interface in the past thousands of years, as well as the extended geological history of the region. We address the aforementioned modeling requirements by developing a self-consistent 3D staggered-grid finite-difference continuum description of motion, thermal advection-diffusion, and poro-visco-elastic two-phase flow. Faults are modelled as plastic shear bands that can develop and evolve in response to a changing stress environment without having a prescribed geometry. They obey a Mohr-Coulomb or Drucker-Prager yield criterion and a rate-and-state friction law. For a sound treatment of plasticity, we borrow elements from mechanical engineering, and extend these with high-quality nonlinear iteration schemes and adaptive time-stepping to resolve the rupture process at all time scales. We will present these techniques together with proof-of-concept examples of self-consistently developing seismic cycles in 2D and 3D, including phases of stress accumulation, fault nucleation, dynamic rupture, and healing.
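
    For reference (the standard Dieterich "aging-law" form of rate-and-state friction, which the record does not spell out):

        $\mu = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},$

    where $V$ is the slip velocity, $\theta$ a state variable, $D_c$ a characteristic slip distance, and the sign of $a-b$ controls velocity weakening versus strengthening.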

  13. The Consistent Kinetics Porosity (CKP) Model: A Theory for the Mechanical Behavior of Moderately Porous Solids

    Energy Technology Data Exchange (ETDEWEB)

    BRANNON,REBECCA M.

    2000-11-01

    A theory is developed for the response of moderately porous solids (no more than ≈20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths. As will be discussed in a companion

  14. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  15. Incorporating sediment compaction into a gravitationally self-consistent model for ice age sea-level change

    Science.gov (United States)

    Ferrier, Ken L.; Austermann, Jacqueline; Mitrovica, Jerry X.; Pico, Tamara

    2017-10-01

    Sea-level changes are of wide interest because they regulate coastal hazards, shape the sedimentary geologic record and are sensitive to climate change. In areas where rivers deliver sediment to marine deltas and fans, sea-level changes are strongly modulated by the deposition and compaction of marine sediment. Deposition affects sea level by increasing the elevation of the seafloor, by perturbing crustal elevation and gravity fields and by reducing the volume of seawater through the incorporation of water into sedimentary pore space. In a similar manner, compaction affects sea level by lowering the elevation of the seafloor and by purging water out of sediments and into the ocean. Here we incorporate the effects of sediment compaction into a gravitationally self-consistent global sea-level model by extending the approach of Dalca et al. (2013). We show that incorporating compaction requires accounting for two quantities that are not included in the Dalca et al. (2013) analysis: the mean porosity of the sediment and the degree of saturation in the sediment. We demonstrate the effects of compaction by modelling sea-level responses to two simplified 122-kyr sediment transfer scenarios for the Amazon River system, one including compaction and one neglecting compaction. These simulations show that the largest effect of compaction is on the thickness of the compacting sediment, an effect that is largest where deposition rates are fastest. Compaction can also produce minor sea-level changes in coastal regions by influencing shoreline migration and the location of seawater loading, which perturbs crustal elevations. By providing a tool for modelling gravitationally self-consistent sea-level responses to sediment compaction, this work offers an improved approach for interpreting the drivers of past sea-level changes.
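
    As a point of reference (a common empirical description, not necessarily the relation used by the authors), porosity loss with burial depth is often approximated by Athy's law,

        $\phi(z) = \phi_0\, e^{-z/z_0},$

    where $\phi_0$ is the depositional porosity and $z_0$ a lithology-dependent e-folding depth; integrating the pore-space loss over the column gives the water volume expelled to the ocean.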

  16. A Self-consistent and Spatially Dependent Model of the Multiband Emission of Pulsar Wind Nebulae

    Science.gov (United States)

    Lu, Fang-Wu; Gao, Quan-Gui; Zhang, Li

    2017-01-01

    A self-consistent and spatially dependent model is presented to investigate the multiband emission of pulsar wind nebulae (PWNe). In this model, a spherically symmetric system is assumed and the dynamical evolution of the PWN is included. The processes of convection, diffusion, adiabatic loss, radiative loss, and photon–photon pair production are taken into account in the electron's evolution equation, and the processes of synchrotron radiation, inverse Compton scattering, synchrotron self-absorption, and pair production are included for the photon's evolution equation. Both coupled equations are simultaneously solved. The model is applied to explain observed results of the PWN in MSH 15–52. Our results show that the spectral energy distributions (SEDs) of both electrons and photons are all a function of distance. The observed photon SED of MSH 15–52 can be well reproduced in this model. With the parameters obtained by fitting the observed SED, the spatial variations of photon index and surface brightness observed in the X-ray band can also be well reproduced. Moreover, it can be derived that the present-day diffusion coefficient of MSH 15–52 at the termination shock is $\kappa_0 = 6.6\times10^{24}\,\mathrm{cm^2\,s^{-1}}$, with a spatial average of $\bar{\kappa} = 1.4\times10^{25}\,\mathrm{cm^2\,s^{-1}}$; the present-day magnetic field at the termination shock is $B_0 = 26.6\,\mu\mathrm{G}$ and the spatially averaged magnetic field is $\bar{B} = 14.9\,\mu\mathrm{G}$. The spatial changes of the spectral index and surface brightness at different bands are predicted.
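
    Schematically (our shorthand for the class of equation solved, not the authors' exact form), the electron spectrum $N(\gamma, r, t)$ evolves according to a convection-diffusion equation with energy losses and injection,

        $\frac{\partial N}{\partial t} = \nabla\cdot\left(\kappa\,\nabla N\right) - \nabla\cdot\left(\mathbf{v}\,N\right) + \frac{\partial}{\partial\gamma}\bigl(\dot{\gamma}\,N\bigr) + Q,$

    where $\dot{\gamma}$ collects adiabatic, synchrotron, and inverse-Compton losses and $Q$ injects particles at the termination shock.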

  17. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    Science.gov (United States)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
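
    In schematic form (our notation), mode decomposition expands a subgrid-scale field over a chosen basis,

        $\varphi(\mathbf{x}, t) = \sum_i a_i(t)\,\chi_i(\mathbf{x}),$

    with $\chi_i$ taken as POD modes, wavelets, or segmentally constant functions; the mass-flux convection parameterization corresponds to the last choice.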

  18. Development of hydrogeological modelling tools based on NAMMU

    Energy Technology Data Exchange (ETDEWEB)

    Marsic, N. [Kemakta Konsult AB, Stockholm (Sweden); Hartley, L.; Jackson, P.; Poole, M. [AEA Technology, Harwell (United Kingdom); Morvik, A. [Bergen Software Services International AS, Bergen (Norway)

    2001-09-01

    A number of relatively sophisticated hydrogeological models were developed within the SR 97 project to handle issues such as nesting of scales and the effects of salinity. However, these issues and others are considered of significant importance and generality to warrant further development of the hydrogeological methodology. Several such developments based on the NAMMU package are reported here: - Embedded grid: nesting of the regional- and site-scale models within the same numerical model has given greater consistency in the structural model representation and in the flow between scales. Since there is a continuous representation of the regional- and site-scales the modelling of pathways from the repository no longer has to be contained wholly by the site-scale region. This allows greater choice in the size of the site-scale. - Implicit Fracture Zones (IFZ): this method of incorporating the structural model is very efficient and allows changes to either the mesh or fracture zones to be implemented quickly. It also supports great flexibility in the properties of the structures and rock mass. - Stochastic fractures: new functionality has been added to IFZ to allow arbitrary combinations of stochastic or deterministic fracture zones with the rock-mass. Whether a fracture zone is modelled deterministically or stochastically its statistical properties can be defined independently. - Stochastic modelling: efficient methods for Monte-Carlo simulation of stochastic permeability fields have been implemented and tested on SKB's computers. - Visualisation: the visualisation tool Avizier for NAMMU has been enhanced such that it is efficient for checking models and presentation. - PROPER interface: NAMMU outputs pathlines in PROPER format so that it can be included in PA workflow. The developed methods are illustrated by application to stochastic nested modelling of the Beberg site using data from SR 97. The model properties were in accordance with the regional- and site

  19. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Science.gov (United States)

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.
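
    To make the hybrid-framework idea concrete (a toy sketch only; the actual tool couples NetLogo and MATLAB, and the metabolic layer uses genome-scale constraint-based models rather than the stub below):

        import random

        def metabolic_growth_rate(oxygen, nitrate):
            # Stub standing in for a constraint-based (FBA) model evaluation:
            # aerobic growth when oxygen is available, slower anaerobic
            # (nitrate-respiring) growth otherwise.
            if oxygen > 0.1:
                return 0.5 * min(1.0, oxygen)
            return 0.3 * min(1.0, nitrate)

        # Agent-based layer: cells at integer depths in a biofilm column,
        # with oxygen decaying with depth and nitrate well mixed.
        random.seed(0)
        cells = [{"depth": random.randint(0, 9), "biomass": 1.0} for _ in range(50)]
        for step in range(20):
            for cell in cells:
                oxygen = max(0.0, 1.0 - 0.2 * cell["depth"])
                cell["biomass"] *= 1.0 + 0.1 * metabolic_growth_rate(oxygen, nitrate=1.0)

        print("mean biomass:", sum(c["biomass"] for c in cells) / len(cells))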

  1. Self-consistent modeling of CFETR baseline scenarios for steady-state operation

    Science.gov (United States)

    Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team

    2017-07-01

    Integrated modeling for core plasma is performed to increase confidence in the proposed baseline scenario in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through the consistent iterative calculation of equilibrium, transport, auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain the scenarios with $q_{\min} > 2$ and fusion power of ~70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in the 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of RF current drive for the RF-only scenario is also presented. The simulation workflow for core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.

  2. XLISP-Stat Tools for Building Generalised Estimating Equation Models

    Directory of Open Access Journals (Sweden)

    Thomas Lumley

    1996-12-01

    Full Text Available This paper describes a set of Lisp-Stat tools for building Generalised Estimating Equation models to analyse longitudinal or clustered measurements. The user interface is based on the built-in regression and generalised linear model prototypes, with the addition of object-based error functions, correlation structures and model formula tools. Residual and deletion diagnostic plots are available on the cluster and observation level and use the dynamic graphics capabilities of Lisp-Stat.
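
    The same class of models is available in modern toolkits; a minimal Python analogue using statsmodels (an illustration with synthetic data, not the Lisp-Stat interface described above):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Synthetic longitudinal data: 40 clusters, 5 repeated measurements each.
        rng = np.random.default_rng(2)
        n_clusters, n_obs = 40, 5
        cluster = np.repeat(np.arange(n_clusters), n_obs)
        x = rng.standard_normal(n_clusters * n_obs)
        cluster_effect = np.repeat(rng.normal(0, 0.5, n_clusters), n_obs)
        y = 1.0 + 2.0 * x + cluster_effect + rng.standard_normal(n_clusters * n_obs)
        df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

        # GEE with an exchangeable working correlation within clusters.
        model = sm.GEE.from_formula("y ~ x", groups="cluster", data=df,
                                    family=sm.families.Gaussian(),
                                    cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())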

  3. The Ising model as a pedagogical tool

    Science.gov (United States)

    Smith, Ryan; Hart, Gus L. W.

    2010-10-01

    Though originally developed to analyze ferromagnetic systems, the Ising model also provides an excellent framework for modeling alloys. The original Ising model represented magnetic moments (up or down) by a +1 or -1 at each point on a lattice and allowed only nearest-neighbor interactions to be non-zero. In alloy modeling, the values ±1 represent A and B atoms. The Ising Hamiltonian can be used in a Monte Carlo approach to simulate the thermodynamics of the system (e.g., an order-disorder transition occurring as the temperature is lowered). The simplicity of the model makes it an ideal starting point for a qualitative understanding of magnetism or configuration ordering in a metal. I will demonstrate the application of the Ising model in simple, two-dimensional ferromagnetic systems and alloys.
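
    A minimal sketch of the Monte Carlo approach mentioned (standard Metropolis dynamics for the 2D nearest-neighbor model with J = 1 and no field; below the critical temperature $T_c \approx 2.27$ the magnetization per site should approach $\pm 1$):

        import numpy as np

        rng = np.random.default_rng(3)
        L, T, n_sweeps = 32, 2.0, 200
        spins = rng.choice([-1, 1], size=(L, L))

        for _ in range(n_sweeps * L * L):
            i, j = rng.integers(L, size=2)
            # Energy change from flipping spin (i, j), with periodic boundaries.
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

        print("magnetization per site:", spins.mean())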

  4. A Components Library System Model and the Support Tool

    Institute of Scientific and Technical Information of China (English)

    MIAO Huai-kou; LIU Hui; LIU Jing; LI Xiao-bo

    2004-01-01

    Component-based development needs a well-designed components library and a set of support tools. This paper presents the design and implementation of a components library system model and its support tool UMLCASE. A set of practical CASE tools is constructed. UMLCASE can use UML to design Use Case Diagrams, Class Diagrams etc., and it integrates with the components library system.

  5. Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars

    Science.gov (United States)

    Airapetian, V. S.; Leake, J. E.; Carpenter, Kenneth G.

    2015-01-01

    We present the first magnetohydrodynamic model of the stellar chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using α Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfvén wave dissipation and wave reflection. We have demonstrated that due to inclusion of the effects of ion-neutral collisions in magnetized weakly ionized chromospheric plasma on resistivity and the appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfvén waves can explain non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of α Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by upward propagating non-linear Alfvén waves, are consistent with observational constraints on the net radiative losses in UV lines and the continuum from α Tau. At the top of the chromosphere, Alfvén waves experience significant reflection, producing downward propagating transverse waves that interact with upward propagating waves and produce velocity shear in the chromosphere. Our simulations also suggest that momentum deposition by non-linear Alfvén waves becomes significant in the outer chromosphere at 1 stellar radius from the photosphere. The calculated terminal velocity and the mass loss rate are consistent with the observationally derived wind properties of α Tau.

  6. Hazard-consistent ground motions generated with a stochastic fault-rupture model

    Energy Technology Data Exchange (ETDEWEB)

    Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-12-15

    Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s² peak acceleration, which corresponds to a 10⁻⁴ to 10⁻⁵ annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground-motion acceleration generated for the 1000-1100 cm/s² range are 1.5 to 3.0, where the deviation is evaluated with peak ground-motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to
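
    The generate-and-select loop described above can be sketched as rejection sampling (a toy stand-in: a lognormal attenuation-style PGA model with invented coefficients replaces the physics-based ground-motion simulation):

        import numpy as np

        rng = np.random.default_rng(4)

        def simulate_pga(stress_drop):
            # Toy ground-motion model: PGA (cm/s^2) grows with stress drop,
            # with lognormal aleatory scatter (all coefficients are invented).
            return 400.0 * (stress_drop / 10.0) ** 0.5 * rng.lognormal(0.0, 0.5)

        target_lo, target_hi = 700.0, 1100.0
        accepted, trials = [], 0
        while len(accepted) < 200:
            trials += 1
            stress_drop = rng.uniform(5.0, 40.0)  # stochastic source parameter (MPa)
            pga = simulate_pga(stress_drop)
            if target_lo <= pga <= target_hi:
                accepted.append((stress_drop, pga))

        print(f"kept {len(accepted)} motions after {trials} simulations")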

  7. Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation

    Science.gov (United States)

    Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras

    2016-04-01

    into one algorithm. We are working towards presenting the first benchmarked 3D dynamic rupture models as an important step towards seismic cycle modelling of megathrust segmentation in a three-dimensional subduction setting with slow tectonic loading, self consistent fault development, and spontaneous seismicity.

  8. Gas cooling in semi-analytic models and smoothed particle hydrodynamics simulations: are results consistent?

    Science.gov (United States)

    Saro, A.; De Lucia, G.; Borgani, S.; Dolag, K.

    2010-08-01

    We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical smoothed particle hydrodynamics (SPH) simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. This simplified comparison is thus not meant to be compared with observational data, but is aimed at understanding the level of agreement, at the stripped-down level considered, between two techniques that are widely used to model galaxy formation in a cosmological framework and which present complementary advantages and disadvantages. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: (i) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; (ii) while all stars associated with the BCG were formed in its progenitors in the SAM used here, this holds true only for half of the final BCG stellar mass in the SPH simulation, the remaining half being contributed by tidal stripping of stars from the diffuse stellar component associated with galaxies accreted on the cluster halo; (iii) SPH satellites can lose up to 90 per cent of their stellar mass at the time of accretion, due to tidal stripping, a process not included in the SAM used in this paper; (iv) in the SPH simulation, significant cooling occurs on the most massive satellite galaxies and this lasts for up to 1 Gyr after accretion. This physical process is

  9. Advanced REACH tool: A Bayesian model for occupational exposure assessment

    NARCIS (Netherlands)

    McNally, K.; Warren, N.; Fransman, W.; Entink, R.K.; Schinkel, J.; Van Tongeren, M.; Cherrie, J.W.; Kromhout, H.; Schneider, T.; Tielemans, E.

    2014-01-01

    This paper describes a Bayesian model for the assessment of inhalation exposures in an occupational setting; the methodology underpins a freely available web-based application for exposure assessment, the Advanced REACH Tool (ART). The ART is a higher tier exposure tool that combines disparate sourc

  10. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  11. Scratch as a Computational Modelling Tool for Teaching Physics

    Science.gov (United States)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-01-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling…

  13. Aligning building information model tools and construction management methods

    NARCIS (Netherlands)

    Hartmann, Timo; van Meerveld, H.J.; Vossebeld, N.; Adriaanse, Adriaan Maria

    2012-01-01

    Few empirical studies exist that can explain how different Building Information Model (BIM) based tool implementation strategies work in practical contexts. To help overcome this gap, this paper describes the implementation of two BIM based tools, the first to support the activities at an estimat

  14. The Functional Segregation and Integration Model: Mixture Model Representations of Consistent and Variable Group-Level Connectivity in fMRI.

    Science.gov (United States)

    Churchill, Nathan W; Madsen, Kristoffer; Mørup, Morten

    2016-10-01

    The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize these functional relationships, although it is an ongoing challenge to develop robust, interpretable models for high-dimensional fMRI data. Gaussian mixture models (GMMs) are a powerful tool for parcellating the brain, based on the similarity of voxel time series. However, conventional GMMs have limited parametric flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension of the GMM framework simultaneously estimates spatial clustering and the most consistent group functional connectivity structure. It also explicitly models network variability, based on voxel- and subject-specific network scaling profiles. We compared the FSIM to the standard GMM in a predictive cross-validation framework and examined the importance of different model parameters, using both simulated and experimental resting-state data. The reliability of parcellations is not significantly altered by the flexibility of the FSIM, whereas voxel- and subject-specific network scaling profiles significantly improve the ability to predict functional connectivity in independent test data. Moreover, the FSIM provides a set of interpretable parameters to characterize both consistent and variable aspects of functional connectivity structure. As an example of its utility, we use subject-specific network profiles to identify brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group
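
    For contrast with the conventional GMM baseline discussed above (a minimal illustration with synthetic "voxel time series"; the FSIM itself is not available in scikit-learn):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic data: 3 "networks", each voxel's time series a noisy copy
        # of one of 3 latent time courses (200 time points).
        rng = np.random.default_rng(5)
        latent = rng.standard_normal((3, 200))
        labels_true = rng.integers(3, size=900)
        voxels = latent[labels_true] + 0.8 * rng.standard_normal((900, 200))

        # Parcellate voxels by fitting a Gaussian mixture to the time series.
        gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
        labels_est = gmm.fit_predict(voxels)
        print("cluster sizes:", np.bincount(labels_est))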

  15. Model atmospheres - Tool for identifying interstellar features

    Science.gov (United States)

    Frisch, P. C.; Slojkowski, S. E.; Rodriguez-Bell, T.; York, D.

    1993-01-01

    Model atmosphere parameters are derived from optical spectra for 14 early A stars with rotation velocities in excess of 80 km/s. The models are compared with IUE observations of the stars in regions where interstellar lines are expected. In general, with the assumption of solar abundances, excellent fits are obtained in regions longward of 2580 A, and accurate interstellar equivalent widths can be derived using models to establish the continuum. The fits are poorer at shorter wavelengths, particularly at 2026-2062 A, where the stellar model parameters seem inadequate. Features indicating mass flows are evident in stars with known infrared excesses. In gamma TrA, variability in the Mg II lines is seen over the 5-year interval of these data, and also over timescales as short as 26 days. The present technique should be useful in systematic studies of episodic mass flows in A stars and for stellar abundance studies, as well as interstellar features.

  16. Applying computer simulation models as learning tools in fishery management

    Science.gov (United States)

    Johnson, B.L.

    1995-01-01

    Computer models can be powerful tools for addressing many problems in fishery management, but uncertainty about how to apply models and how they should perform can lead to a cautious approach to modeling. Within this approach, we expect models to make quantitative predictions but only after all model inputs have been estimated from empirical data and after the model has been tested for agreement with an independent data set. I review the limitations to this approach and show how models can be more useful as tools for organizing data and concepts, learning about the system to be managed, and exploring management options. Fishery management requires deciding what actions to pursue to meet management objectives. Models do not make decisions for us but can provide valuable input to the decision-making process. When empirical data are lacking, preliminary modeling with parameters derived from other sources can help determine priorities for data collection. When evaluating models for management applications, we should attempt to define the conditions under which the model is a useful, analytical tool (its domain of applicability) and should focus on the decisions made using modeling results, rather than on quantitative model predictions. I describe an example of modeling used as a learning tool for the yellow perch Perca flavescens fishery in Green Bay, Lake Michigan.

  17. Self-consistent modeling of radio-frequency plasma generation in stellarators

    Science.gov (United States)

    Moiseenko, V. E.; Stadnik, Yu. S.; Lysoivan, A. I.; Korovin, V. B.

    2013-11-01

    A self-consistent model of radio-frequency (RF) plasma generation in stellarators in the ion cyclotron frequency range is described. The model includes equations for the particle and energy balance and boundary conditions for Maxwell's equations. The equation of charged particle balance takes into account the influx of particles due to ionization and their loss via diffusion and convection. The equation of electron energy balance takes into account the RF heating power source, as well as energy losses due to the excitation and electron-impact ionization of gas atoms, energy exchange via Coulomb collisions, and plasma heat conduction. The deposited RF power is calculated by solving the boundary problem for Maxwell's equations. When describing the dissipation of the energy of the RF field, collisional absorption and Landau damping are taken into account. At each time step, Maxwell's equations are solved for the current profiles of the plasma density and plasma temperature. The calculations are performed for a cylindrical plasma. The plasma is assumed to be axisymmetric and homogeneous along the plasma column. The system of balance equations is solved using the Crank-Nicolson scheme. Maxwell's equations are solved in a one-dimensional approximation by using the Fourier transformation along the azimuthal and longitudinal coordinates. Results of simulations of RF plasma generation in the Uragan-2M stellarator by using a frame antenna operating at frequencies lower than the ion cyclotron frequency are presented. The calculations show that the slow wave generated by the antenna is efficiently absorbed at the periphery of the plasma column, due to which only a small fraction of the input power reaches the confinement region. As a result, the temperature on the axis of the plasma column remains low, whereas at the periphery it is substantially higher. This leads to strong absorption of the RF field at the periphery via the Landau mechanism.
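
    A minimal illustration of the time-stepping scheme named in the abstract (a sketch for a generic 1-D diffusion-type balance equation with placeholder grid, coefficients, and boundary values; not the Uragan-2M model):

    ```python
    # One Crank-Nicolson step for du/dt = D * d2u/dx2 + S on a uniform grid,
    # with fixed (Dirichlet) boundary values.
    import numpy as np

    n, dx, dt, D = 50, 0.02, 1e-4, 1.0
    u = np.exp(-np.linspace(0.0, 1.0, n) ** 2)   # initial profile (placeholder)
    S = np.zeros(n)                              # source term (placeholder)
    a = D * dt / (2.0 * dx ** 2)

    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # discrete Laplacian
    A = np.eye(n) - a * L                        # implicit half of the scheme
    B = np.eye(n) + a * L                        # explicit half of the scheme
    rhs = B @ u + dt * S
    A[0, :], A[-1, :] = 0.0, 0.0                 # enforce boundary rows
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0], rhs[-1] = u[0], u[-1]
    u = np.linalg.solve(A, rhs)                  # profile after one time step
    ```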

  18. Toward self-consistent tectono-magmatic numerical model of rift-to-ridge transition

    Science.gov (United States)

    Gerya, Taras; Bercovici, David; Liao, Jie

    2017-04-01

    Natural data from modern and ancient lithospheric extension systems suggest a three-dimensional (3D) character of deformation and a complex relationship between magmatism and tectonics during the entire rift-to-ridge transition. Therefore, self-consistent high-resolution 3D magmatic-thermomechanical numerical approaches stand as a minimum complexity requirement for modeling and understanding of this transition. Here we present results from our new high-resolution 3D finite-difference marker-in-cell rift-to-ridge models, which account for magmatic accretion of the crust and use non-linear strain-weakened visco-plastic rheology of rocks that couples brittle/plastic failure and ductile damage caused by grain size reduction. Numerical experiments suggest that nucleation of rifting and ridge-transform patterns are decoupled in both space and time. At intermediate stages, the two patterns can coexist and interact, which triggers development of detachment faults, failed rift arms, hyper-extended margins and oblique proto-transforms. En echelon rift patterns typically develop in the brittle upper-middle crust, whereas proto-ridge and proto-transform structures nucleate in the lithospheric mantle. These deep proto-structures propagate upward, inter-connect and rotate toward mature orthogonal ridge-transform patterns on the timescale of millions of years during incipient thermal-magmatic accretion of the new oceanic-like lithosphere. Ductile damage of the extending lithospheric mantle caused by grain size reduction assisted by Zener pinning plays a critical role in the rift-to-ridge transition by stabilizing detachment faults and transform structures. Numerical results compare well with observations from incipient spreading regions and passive continental margins.

  19. Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

    Science.gov (United States)

    Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen

    2016-07-01

    The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of the ensemble-based statistical consistency testing is to use a qualitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our…
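
    The spatial standard-score test described above reduces to a few lines (a sketch with assumed array shapes, not the pyCECT interface):

    ```python
    # An ensemble of simulated fields defines a mean and standard deviation at
    # each grid point; a new run is flagged at points where its standard score
    # exceeds a threshold, and the flagged fraction drives the pass/fail call.
    import numpy as np

    def fraction_extreme(ensemble, new_run, threshold=3.0):
        """ensemble: (n_members, ny, nx); new_run: (ny, nx)."""
        mu = ensemble.mean(axis=0)
        sigma = ensemble.std(axis=0, ddof=1) + 1e-12   # guard zero spread
        z = (new_run - mu) / sigma
        return np.mean(np.abs(z) > threshold)

    rng = np.random.default_rng(1)
    ens = rng.standard_normal((30, 64, 128))           # synthetic ensemble
    run = rng.standard_normal((64, 128))               # synthetic new solution
    print(fraction_extreme(ens, run))  # compare against a chosen cutoff fraction
    ```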

  20. A self-consistent first-principle based approach to model carrier mobility in organic materials

    Energy Technology Data Exchange (ETDEWEB)

    Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2015-12-31

    Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory varying over several orders of magnitude in the mobility without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional theory with semi-empirical tight-binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.

  1. Self-consistent Keldysh approach to quenches in the weakly interacting Bose-Hubbard model

    Science.gov (United States)

    Lo Gullo, N.; Dell'Anna, L.

    2016-11-01

    We present a nonequilibrium Green's-functional approach to study the dynamics following a quench in the weakly interacting Bose-Hubbard model (BHM). The technique is based on the self-consistent solution of a set of equations which represents a particular case of the most general set of Hedin's equations for the interacting single-particle Green's function. We use the ladder approximation as a skeleton diagram for the two-particle scattering amplitude, which, through the self-energy in the Dyson equation, determines the interacting single-particle Green's function. This scheme is then implemented numerically by a parallelized code. We exploit this approach to study the correlation propagation after a quench in the interaction parameter, in one and two dimensions. In particular, we show how our approach is able to recover the crossover from the ballistic to the diffusive regime with increasing boson-boson interaction. Finally we also discuss the role of a thermal initial state on the dynamics for both one- and two-dimensional BHMs, finding that, surprisingly, at high temperature a ballistic evolution is restored.
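
    Schematically, the self-consistent loop described above closes as follows (symbols assumed from standard many-body notation: G_0 the bare and G the dressed single-particle Green's function, Σ the self-energy, V the boson-boson interaction, and T the ladder two-particle scattering amplitude):

    ```latex
    G = G_0 + G_0\,\Sigma[T]\,G, \qquad T = V + V\,(GG)\,T ,
    ```

    iterated until G reproduces itself.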

  2. Self-consistent model of a solid for the description of lattice and magnetic properties

    Science.gov (United States)

    Balcerzak, T.; Szałowski, K.; Jaščur, M.

    2017-03-01

    In the paper a self-consistent theoretical description of the lattice and magnetic properties of a model system with magnetoelastic interaction is presented. The dependence of magnetic exchange integrals on the distance between interacting spins is assumed, which couples the magnetic and the lattice subsystems. The framework is based on summation of the Gibbs free energies for the lattice and magnetic subsystems. On the basis of the minimization principle for the Gibbs energy, a set of equations of state for the system is derived. These equations of state combine the parameters describing the elastic properties (relative volume deformation) and the magnetic properties (magnetization changes). The formalism is extensively illustrated with numerical calculations performed for a system of ferromagnetically coupled spins S=1/2 localized at the sites of a simple cubic lattice. In particular, the significant influence of the magnetic subsystem on the elastic properties is demonstrated. It manifests itself in significant modification of such quantities as the relative volume deformation, thermal expansion coefficient or isothermal compressibility, in particular in the vicinity of the magnetic phase transition. On the other hand, the influence of the lattice subsystem on the magnetic one is also evident. It takes, for example, the form of a dependence of the critical (Curie) temperature and of the magnetization itself on the external pressure, which is thoroughly investigated.
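
    In schematic form (symbols assumed, following the abstract), the coupled equations of state follow from minimizing the total Gibbs energy with respect to the lattice and magnetic order parameters:

    ```latex
    G(T,p;\varepsilon,m) = G_{\mathrm{lat}}(T,p;\varepsilon) + G_{\mathrm{mag}}(T;\varepsilon,m),
    \qquad
    \frac{\partial G}{\partial \varepsilon} = 0, \quad \frac{\partial G}{\partial m} = 0 ,
    ```

    where ε is the relative volume deformation, m the magnetization, and the distance dependence of the exchange integral, J = J(ε), is what couples the two conditions.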

  3. How consistent is cloudiness over Canada from satellite observations and modeling data?

    Science.gov (United States)

    Trishchenko, A. P.; Khlopenkov, K.; Latifovic, R.

    2004-05-01

    Being one of the major modulators of the radiation budget and the hydrological cycle, clouds remain a significant challenge for modeling and satellite retrievals. For example, our analysis shows that for Western Canada the systematic difference in total cloud amounts between NCAR/NCEP Reanalysis-2 and ISCCP reaches 20-30 per cent. Satellite retrievals are especially difficult for Northern climate regions over snow-covered surfaces and during night-time. To better understand these differences and their influence on the earth radiation budget at Northern latitudes, we are attempting to undertake a re-analysis of satellite AVHRR data over Canada using improved data processing and cloud detection algorithms. Details of the cloud detection algorithm for day-time and night-time conditions over snow-free and snow-covered surfaces are discussed. Selected results of satellite retrievals for typical summer and winter conditions over Canada are compared to previous analyses, such as the ISCCP and Pathfinder projects. Consistency between our cloud retrievals using AVHRR data and those available from MODIS will also be considered.

  4. Consistent assimilation of MERIS FAPAR and atmospheric CO2 into a terrestrial vegetation model and interactive mission benefit analysis

    Directory of Open Access Journals (Sweden)

    P.-P. Mathieu

    2012-08-01

    Full Text Available The terrestrial biosphere is currently a strong sink for anthropogenic CO2 emissions. Through the radiative properties of CO2, the strength of this sink has a direct influence on the radiative budget of the global climate system. The accurate assessment of this sink and its evolution under a changing climate is, hence, paramount for any efficient management strategies of the terrestrial carbon sink to avoid dangerous climate change. Unfortunately, simulations of carbon and water fluxes with terrestrial biosphere models exhibit large uncertainties. A considerable fraction of this uncertainty reflects uncertainty in the parameter values of the process formulations within the models. This paper describes the systematic calibration of the process parameters of a terrestrial biosphere model against two observational data streams: remotely sensed FAPAR (fraction of absorbed photosynthetically active radiation) provided by the MERIS (ESA's Medium Resolution Imaging Spectrometer) sensor and in situ measurements of atmospheric CO2 provided by the GLOBALVIEW flask sampling network. We use the Carbon Cycle Data Assimilation System (CCDAS) to systematically calibrate some 70 parameters of the terrestrial BETHY (Biosphere Energy Transfer Hydrology) model. The simultaneous assimilation of all observations provides parameter estimates and uncertainty ranges that are consistent with the observational information. In a subsequent step these parameter uncertainties are propagated through the model to uncertainty ranges for predicted carbon fluxes. We demonstrate the consistent assimilation at global scale, where the global MERIS FAPAR product and atmospheric CO2 are used simultaneously. The assimilation improves the match to independent observations. We quantify how MERIS data improve the accuracy of the current and future (net and gross) carbon flux estimates (within and beyond the assimilation period). We further demonstrate the use of an interactive mission benefit…

  5. Consistent assimilation of MERIS FAPAR and atmospheric CO2 into a terrestrial vegetation model and interactive mission benefit analysis

    Directory of Open Access Journals (Sweden)

    P.-P. Mathieu

    2011-11-01

    Full Text Available The terrestrial biosphere is currently a strong sink for anthropogenic CO2 emissions. Through the radiative properties of CO2, the strength of this sink has a direct influence on the radiative budget of the global climate system. The accurate assessment of this sink and its evolution under a changing climate is, hence, paramount for any efficient management strategies of the terrestrial carbon sink to avoid dangerous climate change. Unfortunately, simulations of carbon and water fluxes with terrestrial biosphere models exhibit large uncertainties. A considerable fraction of this uncertainty reflects uncertainty in the parameter values of the process formulations within the models. This paper describes the systematic calibration of the process parameters of a terrestrial biosphere model against two observational data streams: remotely sensed FAPAR provided by the MERIS sensor and in situ measurements of atmospheric CO2 provided by the GLOBALVIEW flask sampling network. We use the Carbon Cycle Data Assimilation System (CCDAS) to systematically calibrate some 70 parameters of the terrestrial biosphere model BETHY. The simultaneous assimilation of all observations provides parameter estimates and uncertainty ranges that are consistent with the observational information. In a subsequent step these parameter uncertainties are propagated through the model to uncertainty ranges for predicted carbon fluxes. We demonstrate the consistent assimilation for two different set-ups: first at site scale, where MERIS FAPAR observations at a range of sites are used as simultaneous constraints, and second at global scale, where the global MERIS FAPAR product and atmospheric CO2 are used simultaneously. On both scales the assimilation improves the match to independent observations. We quantify how MERIS data improve the accuracy of the current and future (net and gross) carbon flux estimates (within and beyond the assimilation period). We further demonstrate the…

  6. Multidisciplinary Modelling Tools for Power Electronic Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad

    This thesis presents multidisciplinary modelling techniques in a Design For Reliability (DFR) approach for power electronic circuits. With increasing penetration of renewable energy systems, the demand for reliable power conversion systems is becoming critical. Since a large part of electricity… in reliability assessment of power modules, a three-dimensional lumped thermal network is proposed for fast, accurate and detailed temperature estimation of power modules in dynamic operation and under different boundary conditions. Since an important issue in the reliability of power electronics… are generic and valid for use in circuit simulators or any programming software. These models are important building blocks for the reliable design process or performance assessment of power electronic circuits. The models can save time and cost in power electronics packaging and power converter to evaluate…

  7. Predictions of titanium alloy properties using thermodynamic modeling tools

    Science.gov (United States)

    Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.

    2005-12-01

    Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.

  8. A community diagnostic tool for chemistry climate model validation

    Directory of Open Access Journals (Sweden)

    A. Gettelman

    2012-09-01

    Full Text Available This technical note presents an overview of the Chemistry-Climate Model Validation Diagnostic (CCMVal-Diag) tool for model evaluation. The CCMVal-Diag tool is a flexible and extensible open source package that facilitates the complex evaluation of global models. Models can be compared to other models, ensemble members (simulations with the same model), and/or many types of observations. The initial construction and application is to coupled chemistry-climate models (CCMs) participating in CCMVal, but the evaluation of climate models that submitted output to the Coupled Model Intercomparison Project (CMIP) is also possible. The package has been used to assist with analysis of simulations for the 2010 WMO/UNEP Scientific Ozone Assessment and the SPARC Report on the Evaluation of CCMs. The CCMVal-Diag tool is described and examples of how it functions are presented, along with links to detailed descriptions, instructions and source code. The CCMVal-Diag tool supports model development as well as quantifies model changes, both for different versions of individual models and for different generations of community-wide collections of models used in international assessments. The code allows further extensions by different users for different applications and types, e.g. to other components of the Earth system. User modifications are encouraged and easy to perform with minimum coding.

  9. An internally consistent inverse model to calculate ridge-axis hydrothermal fluxes

    Science.gov (United States)

    Coogan, L. A.; Dosso, S.

    2010-12-01

    Fluid and chemical fluxes from high-temperature, on-axis, hydrothermal systems at mid-ocean ridges have been estimated in a number of ways. These generally use simple mass balances based on either vent fluid compositions or the compositions of altered sheeted dikes. Here we combine these approaches in an internally consistent model. Seawater is assumed to enter the crust and react with the sheeted dike complex at high temperatures. Major element fluxes for both the rock and fluid are calculated from balanced stoichiometric reactions. These reactions include end-member components of the minerals plagioclase, pyroxene, amphibole, chlorite and epidote along with pure anhydrite, quartz, pyrite, pyrrhotite, titanite, magnetite, ilmenite and ulvospinel and the fluid species H2O, Mg2+, Ca2+, Fe2+, Na+, Si4+, H2S, H+ and H2. Trace element abundances (Li, B, K, Rb, Cs, Sr, Ba, U, Tl, Mn, Cu, Zn, Co, Ni, Pb and Os) and isotopic ratios (Li, B, O, Sr, Tl, Os) are calculated from simple mass balance of a fluid-rock reaction. A fraction of the Cu, Zn, Pb, Co, Ni, Os and Mn in the fluid after fluid-rock reaction is allowed to precipitate during discharge before the fluid reaches the seafloor. S-isotopes are tied to mineralogical reactions involving S-bearing phases. The free parameters in the model are the amounts of each mineralogical reaction that occurs, the amounts of the metals precipitated during discharge, and the water-to-rock ratio. These model parameters, and their uncertainties, are constrained by: (i) mineral abundances and mineral major element compositions in altered dikes from ODP Hole 504B and the Pito and Hess Deep tectonic windows (EPR crust); (ii) changes in dike bulk-rock trace element and isotopic compositions from these locations relative to fresh MORB glass compositions; and (iii) published vent fluid compositions from basalt-hosted high-temperature ridge axis hydrothermal systems. Using a numerical inversion algorithm, the probability density of different…
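
    The core of such an inversion can be illustrated with a toy linear system (made-up numbers; the study itself uses a probabilistic inversion with uncertainty estimates, not this bare nonnegative least squares):

    ```python
    # Solve for the amounts of each mineralogical reaction that best reproduce
    # observed rock/fluid composition changes, with nonnegative reaction progress.
    import numpy as np
    from scipy.optimize import nnls

    # Rows: observed composition changes (e.g., Mg, Ca, Sr, Li); columns: reactions.
    G = np.array([[ 1.0, 0.2, 0.0],
                  [-0.5, 1.0, 0.1],
                  [ 0.0, 0.3, 1.0],
                  [ 0.2, 0.0, 0.5]])
    d = np.array([0.8, 0.4, 0.9, 0.5])   # observed changes (illustrative)
    amounts, residual = nnls(G, d)        # reaction amounts and misfit norm
    print(amounts, residual)
    ```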

  10. Modular target acquisition model & visualization tool

    NARCIS (Netherlands)

    Bijl, P.; Hogervorst, M.A.; Vos, W.K.

    2008-01-01

    We developed EO-VISTA, a software framework for image-based simulation models covering the chain: scene - atmosphere - sensor - image enhancement - display - human observer. The goal is to visualize the steps and to quantify (Target Acquisition) task performance. EO-VISTA provides an excellent means to systematical…

  11. Student Model Tools Code Release and Documentation

    DEFF Research Database (Denmark)

    Johnson, Matthew; Bull, Susan; Masci, Drew

    This document contains a wealth of information about the design and implementation of the Next-TELL open learner model. Information is included about the final specification (Section 3), the interfaces and features (Section 4), its implementation and technical design (Section 5) and also a summary...

  12. Functional connectivity modeling of consistent cortico-striatal degeneration in Huntington's disease

    Directory of Open Access Journals (Sweden)

    Imis Dogan

    2015-01-01

    Full Text Available Huntington's disease (HD) is a progressive neurodegenerative disorder characterized by a complex neuropsychiatric phenotype. In a recent meta-analysis we identified core regions of consistent neurodegeneration in premanifest HD in the striatum and middle occipital gyrus (MOG). For early manifest HD convergent evidence of atrophy was most prominent in the striatum, motor cortex (M1) and inferior frontal junction (IFJ). The aim of the present study was to functionally characterize this topography of brain atrophy and to investigate differential connectivity patterns formed by consistent cortico-striatal atrophy regions in HD. Using areas of striatal and cortical atrophy at different disease stages as seeds, we performed task-free resting-state and task-based meta-analytic connectivity modeling (MACM). MACM utilizes the large data source of the BrainMap database and identifies significant areas of above-chance co-activation with the seed-region via the activation-likelihood-estimation approach. In order to delineate functional networks formed by cortical as well as striatal atrophy regions we computed the conjunction between the co-activation profiles of striatal and cortical seeds in the premanifest and manifest stages of HD, respectively. Functional characterization of the seeds was obtained using the behavioral meta-data of BrainMap. Cortico-striatal atrophy seeds of the premanifest stage of HD showed common co-activation with a rather cognitive network including the striatum, anterior insula, lateral prefrontal, premotor, supplementary motor and parietal regions. A similar but more pronounced co-activation pattern, additionally including the medial prefrontal cortex and thalamic nuclei, was found with striatal and IFJ seeds at the manifest HD stage. The striatum and M1 were functionally connected mainly to premotor and sensorimotor areas, posterior insula, putamen and thalamus. Behavioral characterization of the seeds confirmed that experiments…

  13. Fluid Survival Tool: A Model Checker for Hybrid Petri Nets

    NARCIS (Netherlands)

    Postema, Björn; Remke, Anne; Haverkort, Boudewijn R.; Ghasemieh, Hamed

    2014-01-01

    Recently, algorithms for model checking Stochastic Time Logic (STL) on Hybrid Petri nets with a single general one-shot transition (HPNG) have been introduced. This paper presents a tool for model checking HPNG models against STL formulas. A graphical user interface (GUI) not only helps to demonstra…

  14. Engineering tools for robust creep modelling

    OpenAIRE

    Holmström, Stefan

    2010-01-01

    High-temperature creep is often dealt with using simplified models to assess and predict the future behavior of materials and components. Also, for most applications the creep properties of interest require costly long-term testing that limits the available data to support design and life assessment. Such test data sets are even smaller for welded joints, which are often the weakest links of structures. It is of considerable interest to be able to reliably predict and extrapolate long term creep beha...

  15. Theme E: disabilities: analysis models and tools

    OpenAIRE

    Vigouroux, Nadine; Gorce, Philippe; Roby-Brami, Agnès; Rémi-Néris, Olivier

    2013-01-01

    International audience; This paper presents the topics and the activity of the theme E “disabilities: analysis models and tools” within the GDR STIC Santé. This group has organized a conference and a workshop during the period 2011–2012. The conference has focused on technologies for cognitive, sensory and motor impairments, assessment and use study of assistive technologies, user centered method design and the place of ethics in these research topics. The objective of “bodily integration of ...

  16. Constructing an advanced software tool for planetary atmospheric modeling

    Science.gov (United States)

    Keller, Richard M.; Sims, Michael; Podolak, Ester; Mckay, Christopher

    1990-01-01

    Scientific model building can be an intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot be easily distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. In this paper, we describe a prototype for a scientific modeling software tool that serves as an aid to the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities. Our prototype has been developed in the domain of planetary atmospheric modeling, and is being used to construct models of Titan's atmosphere.

  17. Consistent Two-Equation Closure Modelling for Atmospheric Research: Buoyancy and Vegetation Implementations

    DEFF Research Database (Denmark)

    Sogachev, Andrey; Kelly, Mark C.; Leclerc, Monique Y.

    2012-01-01

    A self-consistent two-equation closure treating buoyancy and plant drag effects has been developed, through consideration of the behaviour of the supplementary equation for the length-scale-determining variable in homogeneous turbulent flow. Being consistent with the canonical flow regimes of gri...

  18. A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection

    Directory of Open Access Journals (Sweden)

    Chia-Hua Cheng

    2017-08-01

    Full Text Available In today's rapidly changing economy, technology companies have to make decisions on research and development (R&D) project investment on a routine basis, with such decisions having a direct impact on the company's profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors, such as resource limitations and differences in assessment, with consideration of both qualitative and quantitative criteria. Often, differences in perception by the various stakeholders hinder the attainment of a consensus of opinion and coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of the complex criteria, taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause-and-effect relations representing the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combination approach is then used to assist an international brand-name company to prioritize projects and make project decisions that will maximize returns and ensure sustainability for the company.
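
    The consistency construction at the heart of CFPR can be sketched as follows (assuming the additive-transitivity formulation of Herrera-Viedma et al.; numbers are illustrative):

    ```python
    # Build a full, consistent fuzzy preference relation from only the n-1
    # adjacent judgments p[i][i+1], via p_ik = p_ij + p_jk - 0.5, reciprocity
    # p_ji = 1 - p_ij, and a linear rescale back into [0, 1] if needed.
    # This is how CFPR cuts the comparisons from n(n-1)/2 down to n-1.
    import numpy as np

    def cfpr(adjacent):
        n = len(adjacent) + 1
        P = np.full((n, n), 0.5)
        for i, p in enumerate(adjacent):
            P[i, i + 1] = p
        for gap in range(2, n):
            for i in range(n - gap):
                P[i, i + gap] = P[i, i + gap - 1] + P[i + gap - 1, i + gap] - 0.5
        iu = np.triu_indices(n, 1)
        P[(iu[1], iu[0])] = 1.0 - P[iu]        # reciprocity for the lower triangle
        a = max(0.0, -P.min(), P.max() - 1.0)
        return (P + a) / (1.0 + 2.0 * a)       # rescale into [0, 1]

    print(cfpr([0.7, 0.6, 0.8]))   # 4 criteria from 3 pairwise judgments
    ```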

  19. Self-consistent modelling of Mercury’s exosphere by sputtering, micro-meteorite impact and photon-stimulated desorption

    Science.gov (United States)

    Wurz, P.; Whitby, J. A.; Rohner, U.; Martín-Fernández, J. A.; Lammer, H.; Kolb, C.

    2010-10-01

    A Monte-Carlo model of exospheres (Wurz and Lammer, 2003) was extended by treating the ion-induced sputtering process, photon-stimulated desorption, and micro-meteorite impact vaporisation quantitatively in a self-consistent way, starting with the actual release of particles from the mineral surface of Mercury. Based on available literature data we established a global model for the surface mineralogy of Mercury and from that derived the average elemental composition of the surface. This model serves as a tool to estimate densities of species in the exosphere depending on the release mechanism and the associated physical parameters quantitatively describing the particle release from the surface. Our calculation shows that the total contribution to the exospheric density at the Hermean surface by solar wind sputtering is about 4×10^7 m^-3, which is much less than the experimental upper limit of the exospheric density of 10^12 m^-3. The total calculated exospheric density from micro-meteorite impact vaporisation is about 1.6×10^8 m^-3, also much less than the observed value. We conclude that solar wind sputtering and micro-meteorite impact vaporisation contribute only a small fraction of Mercury's exosphere, at least close to the surface. Because of the considerably larger scale height of atoms released via sputtering into the exosphere, sputtered atoms start to dominate the exosphere at altitudes exceeding around 1000 km, with the exception of some light and abundant species released thermally, e.g. H2 and He. Because of Mercury's strong gravitational field not all particles released by sputtering and micro-meteorite impact escape. Over extended time scales this will lead to an alteration of the surface composition.

  20. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    Energy Technology Data Exchange (ETDEWEB)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)

    2014-02-14

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄<0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs for membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m≫0, especially at low ionic…
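
    For reference, the mechanical parameters named above are those of the standard Helfrich curvature energy (schematic form; symbols as in the abstract):

    ```latex
    f_{\mathrm{curv}} = \frac{k_c}{2}\,\bigl(J - J_0^{m}\bigr)^{2} + \bar{k}\,K ,
    ```

    where J = c_1 + c_2 is the total curvature and K = c_1 c_2 the Gaussian curvature; a sign change of k̄ to positive values favours saddle-rich (cubic) topologies, while strongly negative J_0^m points toward inverse hexagonal phases, consistent with the trends reported.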

  1. Modeling Tools for Drilling, Reservoir Navigation, and Formation Evaluation

    Directory of Open Access Journals (Sweden)

    Sushant Dutta

    2012-06-01

    Full Text Available The oil and gas industry routinely uses borehole tools for measuring or logging rock and fluid properties of geologic formations to locate hydrocarbons and maximize their production. Pore fluids in formations of interest are usually hydrocarbons or water. Resistivity logging is based on the fact that oil and gas have a substantially higher resistivity than water. The first resistivity log was acquired in 1927, and resistivity logging is still the foremost measurement used for drilling and evaluation. However, the acquisition and interpretation of resistivity logging data has grown in complexity over the years. Resistivity logging tools operate in a wide range of frequencies (from DC to GHz) and encounter extremely high (several orders of magnitude) conductivity contrast between the metal mandrel of the tool and the geologic formation. Typical challenges include arbitrary angles of tool inclination, full tensor electric and magnetic field measurements, and interpretation of complicated anisotropic formation properties. These challenges combine to form some of the most intractable computational electromagnetic problems in the world. Reliable, fast, and convenient numerical modeling of logging tool responses is critical for tool design, sensor optimization, virtual prototyping, and log data inversion. This spectrum of applications necessitates both depth and breadth of modeling software—from blazing fast one-dimensional (1-D) modeling codes to advanced three-dimensional (3-D) modeling software, and from in-house developed codes to commercial modeling packages. In this paper, with the help of several examples, we demonstrate our approach for using different modeling software to address different drilling and evaluation applications. In one example, fast 1-D modeling provides proactive geosteering information from a deep-reading azimuthal propagation resistivity measurement. In the second example, a 3-D model with multiple vertical resistive fractures…

  3. Rasp Tool on Phoenix Robotic Arm Model

    Science.gov (United States)

    2008-01-01

    This close-up photograph taken at the Payload Interoperability Testbed at the University of Arizona, Tucson, shows the motorized rasp protruding from the bottom of the scoop on the engineering model of NASA's Phoenix Mars Lander's Robotic Arm. The rasp will be placed against the hard Martian surface to cut into the hard material and acquire an icy soil sample for analysis by Phoenix's scientific instruments. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  4. M4AST - A Tool for Asteroid Modelling

    Science.gov (United States)

    Birlan, Mirel; Popescu, Marcel; Irimiea, Lucian; Binzel, Richard

    2016-10-01

    M4AST (Modelling for asteroids) is an online tool devoted to the analysis and interpretation of reflection spectra of asteroids in the visible and near-infrared spectral intervals. It consists of a spectral database of individual objects and a set of analysis routines that address scientific aspects such as taxonomy, curve matching with laboratory spectra, space-weathering models, and mineralogical diagnosis. Spectral data were obtained using ground-based facilities; some of these data are compiled from the literature [1]. The database is composed of permanent and temporary files. Each permanent file contains a header and two or three columns (wavelength, spectral reflectance, and the error on spectral reflectance). Temporary files can be uploaded anonymously and are purged to protect the ownership of submitted data. The computing routines are organized around several scientific objectives: visualize spectra, compute the asteroid taxonomic class, compare an asteroid spectrum with similar spectra of meteorites, and compute mineralogical parameters. A facility for using the Virtual Observatory protocols was also developed. A new version of the service was released in June 2016. This release of M4AST contains a database and facilities to model more than 6,000 spectra of asteroids, and a newly designed web interface brings new functionalities in a user-friendly environment. A bridge system for accessing and exploiting the SMASS-MIT database (http://smass.mit.edu) allows the treatment and analysis of these data in the framework of the M4AST environment. Reference: [1] M. Popescu, M. Birlan, and D.A. Nedelcu, "Modeling of asteroids: M4AST," Astronomy & Astrophysics 544, A130, 2012.
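
    Curve matching of the kind listed above can be sketched in a few lines (an assumed chi-square criterion on a common wavelength grid; not M4AST's actual routines):

    ```python
    # Score an asteroid reflectance spectrum against laboratory meteorite
    # spectra with a scaled chi-square and keep the best match.
    import numpy as np

    def chi2(asteroid, lab):
        scale = np.sum(asteroid * lab) / np.sum(lab ** 2)  # best-fit scaling
        return np.mean((asteroid - scale * lab) ** 2)

    wl = np.linspace(0.45, 2.45, 100)                      # microns, shared grid
    ast = 1.0 + 0.10 * wl                                  # placeholder spectra
    labs = {"H5 chondrite": 1.0 + 0.12 * wl, "L6 chondrite": 1.0 + 0.05 * wl}
    best = min(labs, key=lambda name: chi2(ast, labs[name]))
    print(best)
    ```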

  5. A Hybrid Tool for User Interface Modeling and Prototyping

    Science.gov (United States)

    Trætteberg, Hallvard

    Although many methods have been proposed, model-based development methods have only to some extent been adopted for UI design. In particular, they are not easy to combine with user-centered design methods. In this paper, we present a hybrid UI modeling and GUI prototyping tool, which is designed to fit better with IS development and UI design traditions. The tool includes a diagram editor for domain and UI models and an execution engine that integrates UI behavior, live UI components and sample data. Thus, both model-based user interface design and prototyping-based iterative design are supported

  6. The Twente lower extremity model : consistent dynamic simulation of the human locomotor apparatus

    NARCIS (Netherlands)

    Klein Horsman, Martijn Dirk

    2007-01-01

    Orthopedic interventions such as tendon transfers have been shown to be successful in the treatment of gait disorders. Still, in many cases dysfunctions remained or worsened. To assist clinicians, an interactive tool will be useful that allows evaluation of if-then scenarios with respect to treatment met…

  7. Pre-Processing and Modeling Tools for Bigdata

    Directory of Open Access Journals (Sweden)

    Hashem Hadi

    2016-09-01

    Full Text Available Modeling tools and operators help the user/developer identify the processing field at the top of the sequence and send into the computing module only the data related to the requested result. The remaining data are not relevant and would only slow down the processing. The biggest challenge nowadays is to obtain high-quality processing results with reduced computing time and cost. To do so, the processing sequence must be reviewed and extended with several modeling tools. Existing processing models do not take this aspect into consideration and focus on achieving high calculation performance, which increases computing time and cost. In this paper we provide a study of the main modeling tools for BigData and a new model based on pre-processing.
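
    The idea reduces to pruning the data stream before the expensive computation (a minimal sketch with made-up fields):

    ```python
    # Send into the computing step only the records relevant to the requested
    # result; everything else would just slow the processing down.
    records = [{"region": "EU", "sales": 10},
               {"region": "US", "sales": 7},
               {"region": "EU", "sales": 3}]
    relevant = (r["sales"] for r in records if r["region"] == "EU")  # prune early
    print(sum(relevant))   # compute on the pruned stream only
    ```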

  8. Self-Consistent Approach to Global Charge Neutrality in Electrokinetics: A Surface Potential Trap Model

    Science.gov (United States)

    Wan, Li; Xu, Shixin; Liao, Maijia; Liu, Chun; Sheng, Ping

    2014-01-01

    In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the "charge regulation" behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied electric field, a…
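
    In schematic form (a symmetric 1:1 electrolyte is assumed; symbols follow standard notation), the reformulated equation and screening length read:

    ```latex
    \nabla^{2}\psi \;=\; \frac{2\,n_{0}\,e}{\varepsilon}\,
    \sinh\!\left(\frac{e\,(\psi-\mu)}{k_{B}T}\right),
    \qquad
    \lambda_{D} \;=\; \sqrt{\frac{\varepsilon\,k_{B}T}{2\,n_{0}\,e^{2}}} ,
    ```

    where the chemical potential μ shifts the potential so that global charge neutrality holds; for channel widths much larger than λ_D this reduces to the classical Poisson-Boltzmann screening profile.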

  9. Maier-Saupe model of polymer nematics: Comparing free energies calculated with Self Consistent Field theory and Monte Carlo simulations

    Science.gov (United States)

    Greco, Cristina; Jiang, Ying; Chen, Jeff Z. Y.; Kremer, Kurt; Daoulas, Kostas Ch.

    2016-11-01

    Self Consistent Field (SCF) theory serves as an efficient tool for studying mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of the TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate the missing correlations. Increasing σ reduces correlations and SCF reproduces well the free energy in MC simulations.
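
    The underlying identity is the generic thermodynamic integration formula (schematic; H(λ) here interpolates the Hamiltonian as the external aligning field is switched on, so the integration path bypasses the first-order transition):

    ```latex
    \Delta F \;=\; \int_{0}^{1}
    \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda}
    d\lambda ,
    ```

    evaluated from ensemble averages at a sequence of λ values.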

  10. MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models

    Directory of Open Access Journals (Sweden)

    Maike Kathrin Aurich

    2016-08-01

    Full Text Available Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. This computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.

  11. MetaboTools: A Comprehensive Toolbox for Analysis of Genome-Scale Metabolic Models.

    Science.gov (United States)

    Aurich, Maike K; Fleming, Ronan M T; Thiele, Ines

    2016-01-01

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. This computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.

  12. Tools and Models for Integrating Multiple Cellular Networks

    Energy Technology Data Exchange (ETDEWEB)

    Gerstein, Mark [Yale Univ., New Haven, CT (United States). Gerstein Lab.

    2015-11-06

    In this grant, we have systematically investigated the integrated networks, which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open source, available for download from GitHub, and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding their dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we have developed several tools and carried out analyses for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized the E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed…
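
    A toy version of the hierarchy idea (not the published algorithm; networkx and the node names are assumed for illustration):

    ```python
    # Assign a hierarchy level to each node of an acyclic regulatory network:
    # top regulators get level 0, and each target sits one level below its
    # highest regulator, tracing the downward flow of information.
    import networkx as nx

    g = nx.DiGraph([("tf1", "tf2"), ("tf1", "tf3"),
                    ("tf2", "gene1"), ("tf3", "gene1")])
    assert nx.is_directed_acyclic_graph(g)

    level = {}
    for node in nx.topological_sort(g):
        level[node] = 1 + max((level[p] for p in g.predecessors(node)), default=-1)
    print(level)   # {'tf1': 0, 'tf2': 1, 'tf3': 1, 'gene1': 2}
    ```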

  13. System capacity and economic modeling computer tool for satellite mobile communications systems

    Science.gov (United States)

    Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.

    1988-01-01

    A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain whether a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 1-2-3 are used for the model in order to provide as universal an application as possible, so that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.
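
    The link calculations mentioned above follow standard link-budget arithmetic. As a rough illustration (a generic sketch, not the paper's Lotus 1-2-3 model; all values and function names here are invented), a downlink carrier-to-noise-density ratio can be computed as follows:

        # Hypothetical downlink budget sketch; values are illustrative only.
        import math

        def free_space_loss_db(freq_hz, dist_m):
            """Free-space path loss: 20*log10(4*pi*d/lambda)."""
            wavelength = 3e8 / freq_hz
            return 20 * math.log10(4 * math.pi * dist_m / wavelength)

        def cn0_dbhz(eirp_dbw, path_loss_db, gt_dbk):
            """C/N0 = EIRP - Lp + G/T - 10*log10(Boltzmann constant)."""
            boltzmann_db = -228.6  # dBW/K/Hz
            return eirp_dbw - path_loss_db + gt_dbk - boltzmann_db

        loss = free_space_loss_db(1.5e9, 36_000_000.0)  # L-band, GEO slant range
        print(f"path loss {loss:.1f} dB, C/N0 {cn0_dbhz(40.0, loss, -10.0):.1f} dB-Hz")

    Channel capacity per carrier then follows from the achieved C/N0 and the required quality of service, which is the chain of reasoning the engineering portion of such a model automates.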

  14. Designer Modeling for Personalized Game Content Creation Tools

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2013-01-01

    With the growing use of automated content creation and computer-aided design tools in game development, there is potential for enhancing the design process through personalized interactions between the software and the game developer. This paper proposes designer modeling for capturing the designer's preferences, goals and processes from their interaction with a computer-aided design tool, and suggests methods and domains within game development where such a model can be applied. We describe how designer modeling could be integrated with current work on automated and mixed-initiative content creation...

  15. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    Science.gov (United States)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.

  16. Business intelligence tools for radiology: creating a prototype model using open-source tools.

    Science.gov (United States)

    Prevedello, Luciano M; Andriole, Katherine P; Hanson, Richard; Kelly, Pauline; Khorasani, Ramin

    2010-04-01

    Digital radiology departments could benefit from the ability to integrate and visualize data (e.g. information reflecting complex workflow states) from all of their imaging and information management systems in one composite presentation view. Leveraging data warehousing tools developed in the business world may be one way to achieve this capability. The practice of managing and analyzing the information available in such a data repository is known as Business Intelligence (BI). This paper describes the concepts used in Business Intelligence, their importance to modern radiology, and the steps used in the creation of a prototype model of a data warehouse for BI using open-source tools.

  17. Ensuring consistency and persistence to the Quality Information Model - The role of the GeoViQua Broker

    Science.gov (United States)

    Bigagli, Lorenzo; Papeschi, Fabrizio; Nativi, Stefano; Bastin, Lucy; Masó, Joan

    2013-04-01

    GeoViQua (QUAlity aware VIsualisation for the Global Earth Observation System of Systems) is an FP7 project aiming at complementing the Global Earth Observation System of Systems (GEOSS) with rigorous data quality specifications and quality-aware capabilities, in order to improve reliability in scientific studies and policy decision-making. GeoViQua's main scientific and technical objective is to enhance the GEOSS Common Infrastructure (GCI), providing the user community with innovative quality-aware search and visualization tools, which will be integrated in the GEOPortal as well as made available to other end-user interfaces. To this end, GeoViQua will promote the extension of the current standard metadata for geographic information with accurate and expressive quality indicators. The project will also contribute to the definition of a quality label, the GEOLabel, reflecting scientific relevance, quality, acceptance and societal needs. The concept of Quality Information is very broad. When talking about the quality of a product, this is not limited to geophysical quality but also includes concepts like mission quality (e.g. data coverage with respect to planning). In general, it provides an indication of the overall fitness for use of a specific type of product. Employing and extending several ISO standards such as 19115, 19157 and 19139, a common set of data quality indicators has been selected to be used within the project. The resulting work, in the form of a data model, is expressed in XML Schema Language and encoded in XML. Quality information can be stated both by data producers and by data users, actually resulting in two conceptually distinct data models, the Producer Quality model and the User Quality model (or User Feedback model). A very important issue concerns the association between the quality reports and the affected products that are the target of the report. This association is usually achieved by means of a Product Identifier (PID), but actually just...

  18. Modeling aerosol-cloud interactions with a self-consistent cloud scheme in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ming, Y; Ramaswamy, V; Donner, L J; Phillips, V T; Klein, S A; Ginoux, P A; Horowitz, L H

    2005-05-02

    This paper describes a self-consistent prognostic cloud scheme that is able to predict cloud liquid water, amount and droplet number (N_d) from the same updraft velocity field, and is suitable for modeling aerosol-cloud interactions in general circulation models (GCMs). In the scheme, the evolution of droplets fully interacts with the model meteorology. An explicit treatment of cloud condensation nuclei (CCN) activation allows the scheme to take into account the contributions to N_d of multiple types of aerosol (i.e., sulfate, organic and sea-salt aerosols) and kinetic limitations of the activation process. An implementation of the prognostic scheme in the Geophysical Fluid Dynamics Laboratory (GFDL) AM2 GCM yields a vertical distribution of N_d characterized by maxima in the lower troposphere, differing from that obtained through diagnosing N_d empirically from sulfate mass concentrations. As a result, the agreement of model-predicted present-day cloud parameters with satellite measurements is improved compared to using diagnosed N_d. The simulations with pre-industrial and present-day aerosols show that the combined first and second indirect effects of anthropogenic sulfate and organic aerosols give rise to a global annual mean flux change of -1.8 W m^-2, consisting of -2.0 W m^-2 in shortwave and 0.2 W m^-2 in longwave, as the model response alters the cloud field and subsequently longwave radiation. Liquid water path (LWP) and total cloud amount increase by 19% and 0.6%, respectively. Largely owing to high sulfate concentrations from fossil fuel burning, the Northern Hemisphere mid-latitude land and oceans experience strong cooling, as does the tropical land, which is dominated by biomass burning organic aerosol. The Northern/Southern Hemisphere and land/ocean ratios are 3.1 and 1.4, respectively. The calculated annual zonal mean flux changes are determined to be statistically significant, exceeding the model's natural...

  19. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics

    NARCIS (Netherlands)

    Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.

    2012-01-01

    Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free

  20. Pedagogical Approaches Used by Faculty in Holland's Model Environments: The Role of Environmental Consistency

    Science.gov (United States)

    Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.

    2009-01-01

    This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…

  1. Self-consistent tight-binding model of B and N doping in graphene

    DEFF Research Database (Denmark)

    Pedersen, Thomas Garm; Pedersen, Jesper Goor

    2013-01-01

    Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization...

  3. Achieving consistent multiple daily low-dose Bacillus anthracis spore inhalation exposures in the rabbit model

    Directory of Open Access Journals (Sweden)

    Roy E Barnewall

    2012-06-01

    Full Text Available Repeated low-level exposures to Bacillus anthracis could occur before or after the remediation of an environmental release. This is especially true for persistent agents such as Bacillus anthracis spores, the causative agent of anthrax. Studies were conducted to examine the aerosol methods needed to deliver consistent daily low aerosol concentrations and low doses (less than 10^6 colony forming units (CFU)) of B. anthracis spores; they included a pilot feasibility characterization study, an acute exposure study, and a multiple fifteen-day exposure study. This manuscript focuses on the state-of-the-science aerosol methodologies used to generate and aerosolize consistent daily low aerosol concentrations and resultant low inhalation doses. The pilot feasibility characterization study determined that the aerosol system was consistent and capable of producing very low aerosol concentrations. In the acute, single-day exposure experiment, targeted inhaled doses of 1 x 10^2, 1 x 10^3, 1 x 10^4, and 1 x 10^5 CFU were used. In the multiple daily exposure experiment, rabbits were exposed on multiple days to targeted inhaled doses of 1 x 10^2, 1 x 10^3, and 1 x 10^4 CFU. In all studies, targeted inhaled doses remained fairly consistent from rabbit to rabbit and day to day. The aerosol system produced aerosolized spores within the optimal mass median aerodynamic diameter particle size range to reach deep lung alveoli. Consistency of the inhaled dose was aided by monitoring and recording respiratory parameters during the exposure with real-time plethysmography. Overall, the presented results show that the animal aerosol system was stable and highly reproducible between different studies and multiple exposure days.

  4. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  5. A Suite of Tools for ROC Analysis of Spatial Models

    Directory of Open Access Journals (Sweden)

    Hermann Rodrigues

    2013-09-01

    Full Text Available The Receiver Operating Characteristic (ROC) is widely used for assessing the performance of classification algorithms. In GIScience, ROC has been applied to assess models aimed at predicting events, such as land use/cover change (LUCC), species distribution and disease risk. However, GIS software packages offer few statistical tests and guidance tools for ROC analysis and interpretation. This paper presents a suite of GIS tools designed to facilitate ROC curve analysis for GIS users by applying proper statistical tests and analysis procedures. The tools are freely available as models and submodels of Dinamica EGO freeware. The tools give the ROC curve, the area under the curve (AUC), partial AUC, lower and upper AUCs, the confidence interval of the AUC, the density of events in probability bins, and tests to evaluate the difference between the AUCs of two models. We present first the procedures and statistical tests implemented in Dinamica EGO, then the application of the tools to assess LUCC and species distribution models. Finally, we interpret and discuss the ROC-related statistics resulting from various case studies.
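
    For readers who want to reproduce such statistics outside Dinamica EGO, the core computations (ROC curve, AUC, and a bootstrap interval for the difference between two models' AUCs) can be sketched in a few lines of Python; the data below are synthetic and purely illustrative:

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(0)
        truth = rng.integers(0, 2, 500)                  # observed event / no event
        model_a = np.clip(truth * 0.6 + rng.random(500) * 0.5, 0, 1)  # informative
        model_b = rng.random(500)                        # uninformative baseline

        fpr, tpr, _ = roc_curve(truth, model_a)          # points of the ROC curve
        auc_a = roc_auc_score(truth, model_a)
        auc_b = roc_auc_score(truth, model_b)

        # Bootstrap confidence interval for the difference between the two AUCs.
        diffs = []
        for _ in range(1000):
            idx = rng.integers(0, len(truth), len(truth))
            if len(np.unique(truth[idx])) < 2:
                continue                                  # resample had one class only
            diffs.append(roc_auc_score(truth[idx], model_a[idx])
                         - roc_auc_score(truth[idx], model_b[idx]))
        lo, hi = np.percentile(diffs, [2.5, 97.5])
        print(f"AUC A={auc_a:.3f}, B={auc_b:.3f}, diff 95% CI [{lo:.3f}, {hi:.3f}]")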

  6. A Delay Model of Multiple-Valued Logic Circuits Consisting of Min, Max, and Literal Operations

    Science.gov (United States)

    Takagi, Noboru

    Delay models for binary logic circuits have been proposed and their mathematical properties clarified. Kleene's ternary logic is one of the simplest delay models for expressing the transient behavior of binary logic circuits. Goto first applied Kleene's ternary logic to hazard detection in binary logic circuits in 1948. Besides Kleene's ternary logic, there are many delay models of binary logic circuits, such as Lewis's 5-valued logic. On the other hand, multiple-valued logic circuits have recently come to play an important role in realizing digital circuits, because, for example, they can reduce the size of a chip dramatically. Although multiple-valued logic circuits are becoming more important, there has been little discussion of delay models for them. In this paper, we therefore introduce a delay model for multiple-valued logic circuits constructed from Min, Max, and Literal operations, and we show some of the mathematical properties of this delay model.
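
    The flavor of such a delay model can be sketched in Python. The sketch assumes interval-valued signals (lo, hi) that sweep monotonically through intermediate levels while settling, which is one simple way to represent transient behavior; it illustrates the gate semantics only and is not the authors' formal model:

        R = 4  # number of logic levels: 0..3

        def v_min(x, y):           # Min gate on interval-valued signals
            return (min(x[0], y[0]), min(x[1], y[1]))

        def v_max(x, y):           # Max gate
            return (max(x[0], y[0]), max(x[1], y[1]))

        def literal(x, a, b):      # Literal gate X^{a,b}
            if a <= x[0] and x[1] <= b:    # transient range entirely inside [a, b]
                return (R - 1, R - 1)
            if x[1] < a or x[0] > b:       # entirely outside the window
                return (0, 0)
            return (0, R - 1)              # straddles the window: potential hazard

        stable = (2, 2)            # signal settled at level 2
        changing = (1, 3)          # monotone transition from level 1 to level 3
        print(v_max(stable, changing))    # (2, 3): output may move during transition
        print(literal(changing, 2, 2))    # (0, 3): hazard flagged at the literal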

  7. Physically-consistent wall boundary conditions for the k-ω turbulence model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl

    2010-01-01

    A model solving the Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used for model validation, with k measured directly from all three components of the fluctuating velocity...

  8. Extending the entry consistency model to enable efficient visualization for code-coupling grid applications

    OpenAIRE

    Antoniu, Gabriel; Cudennec, Loïc; Monnet, Sébastien

    2006-01-01

    This paper addresses the problem of efficient visualization of shared data within code coupling grid applications. These applications are structured as a set of distributed, autonomous, weakly-coupled codes. We focus on the case where the codes are able to interact using the abstraction of a shared data space. We propose an efficient visualization scheme by adapting the mechanisms used to maintain the data consistency. We introduce a new operation called relaxed read, as an extension to the e...

  9. Modeling and Simulation Tools: From Systems Biology to Systems Medicine.

    Science.gov (United States)

    Olivier, Brett G; Swat, Maciej J; Moné, Martijn J

    2016-01-01

    Modeling is an integral component of modern biology. In this chapter we look into the role of the model, as it pertains to Systems Medicine, and the software that is required to instantiate and run it. We do this by comparing the development, implementation, and characteristics of tools that have been developed to work with two divergent methodologies: Systems Biology and Pharmacometrics. From the Systems Biology perspective we consider the concept of "Software as a Medical Device" and what this may imply for the migration of research-oriented simulation software into the domain of human health. In our second perspective, we see how in practice hundreds of computational tools already accompany drug discovery and development at every stage of the process. Standardized exchange formats are required to streamline model exchange between tools, which would minimize translation errors and reduce the required time. With the emergence, almost 15 years ago, of the SBML standard, a large part of the domain of interest is already covered, and models can be shared and passed from software to software without recoding them. Until recently the last stage of the process, the pharmacometric analysis used in clinical studies carried out on subject populations, lacked such an exchange medium. We describe a new emerging exchange format in Pharmacometrics which covers non-linear mixed effects models, the standard statistical model type used in this area. By interfacing these two formats, the entire domain can be covered by complementary standards and, subsequently, by the corresponding tools.
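
    As a small illustration of the exchange-format idea, the following sketch reads an SBML file with the python-libsbml bindings and walks its reactions; the file name is hypothetical:

        # Minimal round-trip through the SBML exchange format (sketch).
        import libsbml

        doc = libsbml.readSBML("glycolysis.xml")   # hypothetical model file
        if doc.getNumErrors() > 0:
            doc.printErrors()                      # report parse/validation issues
        model = doc.getModel()
        print(model.getId(), model.getNumSpecies(), model.getNumReactions())
        for r in model.getListOfReactions():
            reactants = [s.getSpecies() for s in r.getListOfReactants()]
            products = [s.getSpecies() for s in r.getListOfProducts()]
            print(r.getId(), reactants, "->", products)

    Because the model travels as a standard document rather than as tool-specific code, any SBML-aware simulator can consume the same file without recoding.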

  10. Examining an important urban transportation management tool: subarea modeling

    Directory of Open Access Journals (Sweden)

    Xueming CHEN

    2009-12-01

    Full Text Available At present, customized subarea models are widely used in local transportation planning throughout the United States. The biggest strengths of a subarea model lie in its more detailed and accurate modeling outputs, which better meet local planning requirements. In addition, a subarea model can substantially reduce database size and model running time. In spite of these advantages, subarea models remain quite weak in maintaining consistency with a regional model, modeling transit projects, smart growth measures, air quality conformity, and other areas. Both opportunities and threats exist for subarea modeling. In addition to examining subarea models, this paper introduces the decision-making process in choosing a proper subarea modeling approach (windowing versus focusing) and software package. This study concludes that subarea modeling will become more popular in the future. More GIS applications, travel surveys, transit modeling, microsimulation software utilization, and other modeling improvements are expected to be incorporated into the subarea modeling process.

  11. Consistent dust and gas models for protoplanetary disks. I. Disk shape, dust settling, opacities, and PAHs

    NARCIS (Netherlands)

    Woitke, P.; Min, M.; Pinte, C.; Thi, W. -F; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.

    2016-01-01

    We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on

  12. Hydrological hysteresis and its value for assessing process consistency in catchment conceptual models

    Science.gov (United States)

    O. Fovet; L. Ruiz; M. Hrachowitz; M. Faucheux; C. Gascuel-Odoux

    2015-01-01

    While most hydrological models reproduce the general flow dynamics, they frequently fail to adequately mimic system-internal processes. In particular, the relationship between storage and discharge, which often follows annual hysteretic patterns in shallow hard-rock aquifers, is rarely considered in modelling studies. One main reason is that catchment storage is...

  13. Vertical Equating: An Empirical Study of the Consistency of Thurstone and Rasch Model Approaches.

    Science.gov (United States)

    Schratz, Mary K.

    To explore the appropriateness of the Rasch model for the vertical equating of a multi-level, multi-form achievement test series, both the Rasch model and the traditional Thurstone procedures were applied to the Listening Comprehension subtest scores of the Stanford Achievement Test. Two adjacent levels of these tests were administered in 1981 to…

  15. Self-consistent modelling of hot plasmas within non-extensive Tsallis' thermostatistics

    CERN Document Server

    Pain, Jean-Christophe; Gilleron, Franck

    2011-01-01

    A study of the effects of non-extensivity on the modelling of atomic physics in hot dense plasmas is proposed within Tsallis' statistics. The electronic structure of the plasma is calculated through an average-atom model based on the minimization of the non-extensive free energy.

  16. CONSISTENT USE OF THE KALMAN FILTER IN CHEMICAL TRANSPORT MODELS (CTMS) FOR DEDUCING EMISSIONS

    Science.gov (United States)

    Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...
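
    The idea can be sketched with a scalar Kalman filter in which the emission rate is the state and the CTM supplies the sensitivity of the observed concentration to that emission; all numbers below are illustrative, not taken from the study:

        # Inverse-modeling sketch: estimate an emission rate E from observed
        # concentrations, with H = d(concentration)/d(emission) from a CTM.
        import numpy as np

        rng = np.random.default_rng(1)
        E_true = 50.0               # true emission rate (arbitrary units)
        H = 0.8                     # CTM-derived observable/emission sensitivity
        R = 4.0                     # observation error variance
        Q = 0.5                     # process noise on the emission estimate

        E_est, P = 20.0, 100.0      # prior mean and variance
        for t in range(30):
            obs = H * E_true + rng.normal(0, np.sqrt(R))  # synthetic observation
            P = P + Q                                     # predict
            K = P * H / (H * P * H + R)                   # Kalman gain
            E_est = E_est + K * (obs - H * E_est)         # update with innovation
            P = (1 - K * H) * P
        print(f"estimated emission {E_est:.1f} (true {E_true})")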

  17. A consistent turbulence formulation for the dynamic wake meandering model in the atmospheric boundary layer

    DEFF Research Database (Denmark)

    Keck, Rolf-Erik; Veldkamp, Dick; Wedel-Heinen, Jens Jakob

    This thesis describes the further development and validation of the dynamic wake meandering model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling capability... intensity. This power drop is comparable to measurements from the North Hoyle and OWEZ wind farms.

  18. Risk Assessment in Fractured Clayey Tills - Which Modeling Tools?

    DEFF Research Database (Denmark)

    Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup; Binning, Philip John

    2012-01-01

    The article presents different tools available for risk assessment in fractured clayey tills, and their advantages and limitations are discussed. Because of the complex processes occurring during contaminant transport through fractured media, the development of simple practical tools for risk assessment is challenging and the inclusion of the relevant processes is difficult. Furthermore, the lack of long-term monitoring data prevents verification of the accuracy of the different conceptual models. Further investigations based on long-term data and numerical modeling are needed to accurately...

  19. Consistent approach to edge detection using multiscale fuzzy modeling analysis in the human retina

    Directory of Open Access Journals (Sweden)

    Mehdi Salimian

    2012-06-01

    Full Text Available Today, many widely used image processing algorithms based on the human visual system have been developed. In this paper, a smart edge detector based on modeling the performance of simple and complex cells, together with multi-scale image processing as performed in the primary visual cortex, is presented. A method for adjusting the parameters of the Gabor filters (mathematical models of simple cells) and a non-linear threshold response are presented in order to model simple and complex cells. Also, owing to the multi-scale analysis modeled on the human retina, the proposed algorithm detects and localizes the edges of both small and large structures with high precision. Comparing the results of the proposed method on a reliable database with those of conventional methods shows the higher performance (about 4-13%) and reliability of the proposed method in edge detection and localization.
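
    The Gabor-energy stage of such a detector can be sketched with off-the-shelf filters; the scales, orientations, and threshold below are illustrative choices, not the parameters proposed in the paper:

        # Multi-scale Gabor filtering with a complex-cell style energy response.
        import numpy as np
        from skimage import data, filters

        image = data.camera().astype(float) / 255.0
        energy = np.zeros_like(image)
        for frequency in (0.1, 0.2, 0.4):                 # coarse-to-fine scales
            for theta in np.linspace(0, np.pi, 4, endpoint=False):
                real, imag = filters.gabor(image, frequency=frequency, theta=theta)
                # quadrature-pair magnitude, analogous to complex-cell energy
                energy = np.maximum(energy, np.hypot(real, imag))

        edges = energy > np.percentile(energy, 90)        # crude nonlinear threshold
        print(edges.mean())                               # fraction of edge pixels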

  20. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models.

    Science.gov (United States)

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-10-07

    Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations.
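
    One of the structural tests can be sketched in exact rational arithmetic with sympy: under the simplifying assumption that all reactions are reversible, a reaction is blocked exactly when its coordinate vanishes in every null-space basis vector of the stoichiometric matrix (MONGOOSE itself also handles irreversibility constraints, which this toy sketch ignores):

        # Exact-arithmetic detection of blocked reactions in a toy network.
        from sympy import Matrix

        S = Matrix([
            [1, -1,  0, 0],   # metabolite A: produced by v0, consumed by v1
            [0,  1, -1, 0],   # metabolite B: produced by v1, consumed by v2
            [0,  0,  0, 1],   # metabolite C: dead end, only produced by v3
        ])

        basis = S.nullspace()   # exact rational null-space basis, no rounding
        blocked = [j for j in range(S.cols)
                   if all(v[j] == 0 for v in basis)]
        print("blocked reactions:", blocked)   # -> [3]: v3 can never carry flux

    Because every entry stays an exact rational, the blocked/unblocked verdict cannot flip with the floating-point round-off that the paper identifies as the source of software-dependent results.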

  1. Open source Modeling and optimization tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  2. Inclusion of a Direct and Inverse Energy-Consistent Hysteresis Model in Dual Magnetostatic Finite Element Formulations

    OpenAIRE

    Jacques, Kevin; Sabariego, Ruth,; Geuzaine, Christophe; GYSELINCK Johan

    2015-01-01

    This paper deals with the implementation of an energy-consistent ferromagnetic hysteresis model in 2D finite element computations. This vector hysteresis model relies on a strong thermodynamic foundation and ensures the closure of minor hysteresis loops. The model accuracy can be increased by controlling the number of intrinsic cell components while parameters can be easily fitted on common material measurements. Here, the native h-based material model is inverted using the Newton-Raphson met...

  3. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    Science.gov (United States)

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.

    2009-01-01

    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large-scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, the effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  4. A consistent turbulence formulation for the dynamic wake meandering model in the atmospheric boundary layer

    Energy Technology Data Exchange (ETDEWEB)

    Keck, R.-E.

    2013-07-15

    This thesis describes the further development and validation of the dynamic wake meandering model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling capability of the dynamic wake meandering model to a level where it is sufficiently mature to be applied in industrial applications and for an augmentation of the IEC standard for wind turbine wake modelling. Based on a comparison of the capabilities of the dynamic wake meandering model with the requirements of the wind industry, four areas were identified as high priorities for further research: 1. the turbulence distribution in a single wake; 2. multiple wake deficits and the build-up of turbulence over a row of turbines; 3. the effect of the atmospheric boundary layer on wake turbulence and wake deficit evolution; 4. atmospheric stability effects on wake deficit evolution and meandering. The conducted research is to a large extent based on detailed wake investigations and reference data generated through computational fluid dynamics simulations, where the wind turbine rotor has been represented by an actuator line model. As a consequence, part of the research also targets the performance of the actuator line model when generating wind turbine wakes in the atmospheric boundary layer. Highlights of the conducted research: 1. A description is given of using the dynamic wake meandering model as a standalone flow solver for the velocity and turbulence distribution, and the power production, in a wind farm. The performance of the standalone implementation is validated against field data, higher-order computational fluid dynamics models, as well as the most common engineering wake models in the wind industry. 2. The EllipSys3D actuator line model, including the synthetic methods used to model atmospheric boundary layer shear and turbulence, is verified for modelling the evolution of wind...

  5. Integrating decision management with UML modeling concepts and tools

    DEFF Research Database (Denmark)

    Könemann, Patrick

    2009-01-01

    Numerous design decisions, including architectural decisions, are made while developing a software system; they influence the architecture of the system as well as subsequent decisions. Several tools already exist for managing design decisions, i.e. capturing, documenting, and maintaining them, but also for guiding the user by proposing subsequent decisions. In model-based software development, many decisions directly affect the structural and behavioral models used to describe and develop a software system and its architecture. However, the decisions are typically not connected to these models... to enforce design decisions (modify the models). We define tool-independent concepts and architecture building blocks supporting these requirements and present first ideas on how this can be implemented in the IBM Rational Software Modeler and the Architectural Decision Knowledge Wiki. This seamless integration...

  6. A general thermal model of machine tool spindle

    Directory of Open Access Journals (Sweden)

    Yanfang Dong

    2017-01-01

    Full Text Available As the core component of a machine tool, the spindle has thermal characteristics with a significant influence on the machine tool's running status. The lack of an accurate model of the spindle system, particularly of the load–deformation coefficient between the bearing rolling elements and rings, severely limits the precision of thermal error analysis for the spindle. In this article, bearing internal loads, especially the functional relationships between the principal curvature difference F(ρ) and the auxiliary parameter nδ, semi-major axis a, and semi-minor axis b, have been determined; furthermore, heat generation is calculated with high precision by combining the heat sinks in the spindle system; finally, an accurate thermal model of the spindle is established. Moreover, a conventional spindle with embedded fiber Bragg grating temperature sensors has been developed. Comparison of the experimental results with the simulation indicates that the model has good accuracy, which verifies the reliability of the modeling process.

  7. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients for chemical kinetics, using sparse matrix technology, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines for the model equation, the sensitivity coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
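
    The direct method itself is compact: the sensitivities s = dc/dk obey ds/dt = J s + df/dk and are integrated together with the model equation. A toy sketch for a single A -> B reaction, with scipy's LSODA standing in for the Gear integrator:

        # Direct method for first-order sensitivities of a toy A -> B kinetics.
        import numpy as np
        from scipy.integrate import solve_ivp

        k = 0.5  # rate constant

        def rhs(t, y):
            cA, cB, sA, sB = y                 # concentrations and s = dc/dk
            f = np.array([-k * cA, k * cA])    # model equation dc/dt = f(c, k)
            J = np.array([[-k, 0.0], [k, 0.0]])  # Jacobian df/dc
            dfdk = np.array([-cA, cA])            # partial derivative df/dk
            ds = J @ np.array([sA, sB]) + dfdk    # auxiliary sensitivity equation
            return [f[0], f[1], ds[0], ds[1]]

        sol = solve_ivp(rhs, (0, 5), [1.0, 0.0, 0.0, 0.0], method="LSODA", rtol=1e-8)
        cA, sA = sol.y[0, -1], sol.y[2, -1]
        print(f"cA(5)={cA:.5f}, dcA/dk={sA:.5f}, analytic={-5*np.exp(-k*5):.5f}")

    For cA(t) = exp(-kt), the sensitivity is dcA/dk = -t exp(-kt), so the integrated value can be checked against the analytic result directly, as in the final line.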

  8. Using the IEA ETSAP modelling tools for Denmark

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    ...signed the agreement and contributed to some early annexes. This project is motivated by an invitation to participate in ETSAP Annex X, "Global Energy Systems and Common Analyses: Climate friendly, Secure and Productive Energy Systems", for the period 2005 to 2007. The main activity is semi-annual workshops focusing on presentations of model analyses and use of the ETSAP tools (the MARKAL/TIMES family of models). The project was also planned to benefit from the EU project "NEEDS - New Energy Externalities Developments for Sustainability". ETSAP is contributing to a part of NEEDS that develops ..., Environment and Health (CEEH), starting from January 2007. This report summarises the activities under ETSAP Annex X and related projects, emphasising the development of modelling tools that will be useful for modelling the Danish energy system. It is also a status report for the development of a model...

  9. Designing tools for oil exploration using nuclear modeling

    Directory of Open Access Journals (Sweden)

    Mauborgne Marie-Laure

    2017-01-01

    Full Text Available When designing nuclear tools for oil exploration, one of the first steps is typically nuclear modeling for concept evaluation and initial characterization. Having an accurate model, including the availability of accurate cross sections, is essential to reduce or avoid time-consuming and costly design iterations. During tool response characterization, modeling is benchmarked with experimental data and then used to complement and expand the database, making it more detailed and inclusive of more measurement environments which are difficult or impossible to reproduce in the laboratory. We present comparisons of our modeling results obtained using the ENDF/B-VI and ENDF/B-VII cross section databases, focusing on the response to a few elements found in the tool, borehole and subsurface formation. For neutron-induced inelastic and capture gamma ray spectroscopy, major obstacles may be caused by missing or inaccurate cross sections for essential materials. We show examples of the benchmarking of modeling results against experimental data obtained during tool characterization and discuss observed discrepancies.

  10. Toward a self-consistent, high-resolution absolute plate motion model for the Pacific

    Science.gov (United States)

    Wessel, Paul; Harada, Yasushi; Kroenke, Loren W.

    2006-03-01

    The hot spot hypothesis postulates that linear volcanic trails form as lithospheric plates move relative to stationary or slowly moving plumes. Given geometry and ages from several trails, one can reconstruct absolute plate motions (APM) that provide valuable information about past and present tectonism, paleogeography, and volcanism. Most APM models have been designed by fitting small circles to coeval volcanic chain segments and determining stage rotation poles, opening angles, and time intervals. Unlike relative plate motion (RPM) models, such APM models suffer from oversimplicity, self-inconsistencies, inadequate fits to data, and lack of rigorous uncertainty estimates; in addition, they work only for fixed hot spots. Newer methods are now available that overcome many of these limitations. We present a technique that provides high-resolution APM models derived from stationary or moving hot spots (given prescribed paths). The simplest model assumes stationary hot spots, and an example of such a model is presented. Observations of geometry and chronology on the Pacific plate appear well explained by this type of model. Because it is a one-plate model, it does not discriminate between hot spot drift or true polar wander as explanations for inferred paleolatitudes from the Emperor chain. Whether there was significant relative motion within the hot spots under the Pacific plate during the last ˜70 m.y. is difficult to quantify, given the paucity and geological uncertainty of age determinations. Evidence in support of plume drift appears limited to the period before the 47 Ma Hawaii-Emperor Bend and, apart from the direct paleolatitude determinations, may have been somewhat exaggerated.
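
    The forward problem underlying such APM fitting is simply rotation about an Euler pole. A sketch follows (the pole location and opening rate are invented, not the paper's Pacific model):

        # Predict older seamount positions by rotating a present-day hotspot
        # location about a stage pole (Rodrigues rotation on the unit sphere).
        import numpy as np

        def rotate(point_lonlat, pole_lonlat, angle_deg):
            def to_xyz(lon, lat):
                lon, lat = np.radians([lon, lat])
                return np.array([np.cos(lat) * np.cos(lon),
                                 np.cos(lat) * np.sin(lon),
                                 np.sin(lat)])
            p, k = to_xyz(*point_lonlat), to_xyz(*pole_lonlat)
            a = np.radians(angle_deg)
            r = (p * np.cos(a) + np.cross(k, p) * np.sin(a)
                 + k * np.dot(k, p) * (1 - np.cos(a)))
            return (np.degrees(np.arctan2(r[1], r[0])),   # lon
                    np.degrees(np.arcsin(r[2])))          # lat

        hawaii = (-155.3, 19.4)
        for age in (0, 10, 20, 30):                        # Ma
            print(age, rotate(hawaii, (68.0, -69.0), 0.95 * age))  # made-up pole

    Fitting an APM model inverts this computation: given dated seamount positions, one seeks the poles and opening angles whose predicted tracks best match the observed chains.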

  11. Achieving Consistent Multiple Daily Low-Dose Bacillus anthracis Spore Inhalation Exposures in the Rabbit Model

    Science.gov (United States)

    2012-06-13

    Achieving consistent multiple daily low-dose Bacillus anthracis spore inhalation exposures in the rabbit model. Roy E. Barnewall, Jason E. Comer, Brian D. Miller, Bradford W... multiple exposure days. Keywords: Bacillus anthracis, inhalation exposures, low-dose, subchronic exposures, spores, anthrax, aerosol system

  12. Towards Automatic and Topologically Consistent 3D Regional Geological Modeling from Boundaries and Attitudes

    Directory of Open Access Journals (Sweden)

    Jiateng Guo

    2016-02-01

    Full Text Available Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and is not necessarily robust. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data. This method could also be used in other fields of study, including mining geology and urban geotechnical investigations.
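
    The implicit-surface step can be imitated with ordinary RBF interpolation: interpolate a scalar field that vanishes on boundary points and takes small positive/negative values at points offset along the attitude normals, then contour its zero level. The sketch below uses plain (not Hermite) RBFs and made-up 2D data:

        # Implicit geological interface from boundary points and normals (sketch).
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        boundary = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0]])  # mapped contacts
        normals = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])   # attitude data
        eps = 0.1
        pts = np.vstack([boundary,
                         boundary + eps * normals,    # just above the interface
                         boundary - eps * normals])   # just below the interface
        vals = np.hstack([np.zeros(3), np.full(3, eps), np.full(3, -eps)])

        field = RBFInterpolator(pts, vals, kernel="thin_plate_spline")
        grid = np.array([[x, y] for x in np.linspace(-0.5, 2.5, 7)
                         for y in np.linspace(-0.5, 0.7, 5)])
        inside = field(grid) < 0          # below the interpolated interface
        print(inside.sum(), "of", len(grid), "sample points lie below the surface")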

  13. A consistent hamiltonian treatment of the Thirring-Wess and Schwinger model in the covariant gauge

    Science.gov (United States)

    Martinovič, L'ubomír

    2014-06-01

    We present a unified Hamiltonian treatment of the massless Schwinger model in the Landau gauge and of its non-gauge counterpart, the Thirring-Wess (TW) model. The operator solution of the Dirac equation has the same structure in both models and identifies free fields as the true dynamical degrees of freedom. The coupled boson field equations (Maxwell and Proca, respectively) can also be solved exactly. The Hamiltonian in the Fock representation is derived for the TW model and its diagonalization via a Bogoliubov transformation is suggested. The axial anomaly is derived in both models directly from the operator solution using a Hermitian version of the point-splitting regularization. A subtlety of the residual gauge freedom in the covariant gauge is shown to modify the usual definition of the "gauge-invariant" currents. The consequence is that the axial anomaly and the boson mass generation are restricted to the zero-mode sector only. Finally, we discuss quantization of the unphysical gauge-field components in terms of ghost modes in an indefinite-metric space and sketch the next steps within the finite-volume treatment necessary to fully reveal the physical content of the model in our Hamiltonian formulation.

  14. Large information plus noise random matrix models and consistent subspace estimation in large sensor networks

    CERN Document Server

    Hachem, Walid; Mestre, Xavier; Najim, Jamal; Vallet, Pascal

    2011-01-01

    In array processing, a common problem is to estimate the angles of arrival of $K$ deterministic sources impinging on an array of $M$ antennas, from $N$ observations of the source signal corrupted by Gaussian noise. The problem reduces to estimating a quadratic form (called the "localization function") of a certain projection matrix related to the source signal empirical covariance matrix. Recently, a new subspace estimation method (called "G-MUSIC") has been proposed, in the context where the number of available samples $N$ is of the same order of magnitude as the number of sensors $M$. In this context, the traditional subspace methods tend to fail because the empirical covariance matrix of the observations is a poor estimate of the source signal covariance matrix. The G-MUSIC method is based on a new consistent estimator of the localization function in the regime where $M$ and $N$ tend to $+\infty$ at the same rate. However, the consistency of the angle estimator was not addressed. The purpose of this paper is...
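
    For orientation, the classical MUSIC localization function that G-MUSIC refines can be sketched as follows (one source, synthetic data; this is the traditional estimator, not the improved G-MUSIC one):

        # Classical MUSIC pseudospectrum from a sample covariance matrix.
        import numpy as np

        M, N, theta_true = 8, 64, 20.0
        rng = np.random.default_rng(2)

        def steering(theta_deg):
            k = np.pi * np.sin(np.radians(theta_deg))  # half-wavelength spacing
            return np.exp(1j * k * np.arange(M)) / np.sqrt(M)

        s = rng.normal(size=N) + 1j * rng.normal(size=N)        # source signal
        X = np.outer(steering(theta_true), s) + 0.3 * (
            rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
        R_hat = X @ X.conj().T / N                              # sample covariance

        eigval, eigvec = np.linalg.eigh(R_hat)                  # ascending order
        En = eigvec[:, :-1]             # noise subspace (K = 1 source assumed)
        grid = np.linspace(-90, 90, 721)
        pseudo = [1 / np.linalg.norm(En.conj().T @ steering(t))**2 for t in grid]
        print("estimated DoA:", grid[int(np.argmax(pseudo))])

    When $N$ is not much larger than $M$, the sample covariance degrades and this estimator loses accuracy, which is precisely the regime the G-MUSIC correction targets.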

  15. Consistent dust and gas models for protoplanetary disks. I. Disk shape, dust settling, opacities, and PAHs

    CERN Document Server

    Woitke, P; Pinte, C; Thi, W -F; Kamp, I; Rab, C; Anthonioz, F; Antonellini, S; Baldovin-Saavedra, C; Carmona, A; Dominik, C; Dionatos, O; Greaves, J; Güdel, M; Ilee, J D; Liebhart, A; Ménard, F; Rigon, L; Waters, L B F M; Aresu, G; Meijerink, R; Spaans, M

    2015-01-01

    We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. We propose new standard dust opacities for disk models, we present a simplified treatment of PAHs sufficient to reproduce the PAH emission features, and we suggest using a simple treatment of dust settling. We roughly adjust parameters to obtain a model that predicts typical Class II T Tauri star continuum and line observations. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63um, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties (large grains...

  16. Consistent parameter fixing in the quark-meson model with vacuum fluctuations

    Science.gov (United States)

    Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael

    2016-08-01

    We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.

  17. Shell Effect of Superheavy Nuclei in Self-consistent Mean-Field Models

    Institute of Scientific and Technical Information of China (English)

    RENZhong-Zhou; TAIFei; XUChang; CHENDing-Han; ZHANGHu-Yong; CAIXiang-Zhou; SHENWen-Qing

    2004-01-01

    We analyze in detail the numerical results of superheavy nuclei in the deformed relativistic mean-field model and the deformed Skyrme-Hartree-Fock model. The common points and differences of both models are systematically compared and discussed. Their consequences for the stability of superheavy nuclei are explored and explained. The theoretical results are compared with new data on superheavy nuclei from GSI and from Dubna, and reasonable agreement is reached. The nuclear shell effect in the superheavy region is analyzed and discussed. The spherical shell effect disappears in some cases due to the appearance of deformation or superdeformation in the ground states of nuclei, where valence nucleons significantly occupy the intruder levels of nuclei. It is shown for the first time that the significant occupation of valence nucleons on the intruder states plays an important role for the ground state properties of superheavy nuclei. Nuclei are stable in the deformed or superdeformed configurations. We further point out that one cannot obtain the octupole deformation of even-even nuclei in the present relativistic mean-field model with the σ, ω and ρ mesons, because there is no parity-violating interaction and the conservation of parity of even-even nuclei is a basic assumption of the present relativistic mean-field model.

  18. Consistent treatment of viscoelastic effects at junctions in one-dimensional blood flow models

    Science.gov (United States)

    Müller, Lucas O.; Leugering, Günter; Blanco, Pablo J.

    2016-06-01

    While the numerical discretization of one-dimensional blood flow models for vessels with viscoelastic wall properties is widely established, there is still no clear approach on how to couple one-dimensional segments that compose a network of viscoelastic vessels. In particular for Voigt-type viscoelastic models, assumptions with regard to boundary conditions have to be made, which normally result in neglecting the viscoelastic effect at the edge of vessels. Here we propose a coupling strategy that takes advantage of a hyperbolic reformulation of the original model and the inherent information of the resulting system. We show that applying proper coupling conditions is fundamental for preserving the physical coherence and numerical accuracy of the solution in both academic and physiologically relevant cases.

  19. A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.

    Science.gov (United States)

    Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N

    2014-01-01

    Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.
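
    The hybrid idea can be sketched with an in-memory SQLite database: stable entities stay in conventional relational tables, while study-specific variables live in an entity-attribute-value table so that adding a variable needs no schema change. Table and column names below are illustrative, not CURe's actual schema:

        # Hybrid EAV-relational sketch using Python's built-in sqlite3 module.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE patient (            -- stable core: plain relational
            id INTEGER PRIMARY KEY,
            date_of_birth TEXT NOT NULL
        );
        CREATE TABLE attribute (          -- metadata: the allowed EAV attributes
            id INTEGER PRIMARY KEY,
            name TEXT UNIQUE NOT NULL,
            value_type TEXT NOT NULL
        );
        CREATE TABLE observation (        -- EAV: one row per (entity, attribute)
            patient_id INTEGER REFERENCES patient(id),
            attribute_id INTEGER REFERENCES attribute(id),
            value TEXT
        );
        """)
        db.execute("INSERT INTO patient VALUES (1, '1970-01-01')")
        db.execute("INSERT INTO attribute VALUES (1, 'smoking_status', 'text')")
        db.execute("INSERT INTO observation VALUES (1, 1, 'former')")
        for row in db.execute("""
                SELECT p.id, a.name, o.value
                FROM observation o
                JOIN patient p ON p.id = o.patient_id
                JOIN attribute a ON a.id = o.attribute_id"""):
            print(row)

    A new study variable becomes a row in the attribute table rather than an ALTER TABLE migration, which is what keeps the back-end schema static as requirements change.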

  20. THIRD ORDER SHEAR DEFORMATION MODEL FOR LAMINATED SHELLS WITH FINITE ROTATIONS:FORMULATION AND CONSISTENT LINEARIZATION

    Institute of Scientific and Technical Information of China (English)

    Mohamed BALAH; Hamdan Naser AL-GHAMEDY

    2004-01-01

    The paper presents an approach for the formulation of general laminated shells based on a third order shear deformation theory. These shells undergo finite (unlimited in size) rotations and large overall motions but with small strains. A singularity-free parametrization of the rotation field is adopted. The constitutive equations, derived with respect to laminate curvilinear coordinates,are applicable to shell elements with an arbitrary number of orthotropic layers and where the material principal axes can vary from layer to layer. A careful consideration of the consistent linearization procedure pertinent to the proposed parametrization of finite rotations leads to symmetric tangent stiffness matrices. The matrix formulation adopted here makes it possible to implement the present formulation within the framework of the finite element method as a straightforward task.

  1. Integrated landscape/hydrologic modeling tool for semiarid watersheds

    Science.gov (United States)

    Mariano Hernandez; Scott N. Miller

    2000-01-01

    An integrated hydrologic modeling/watershed assessment tool is being developed to aid in determining the susceptibility of semiarid landscapes to natural and human-induced changes across a range of scales. Watershed processes are by definition spatially distributed and are highly variable through time, and this approach is designed to account for their spatial and...

  2. Designer Modeling for Personalized Game Content Creation Tools

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2013-01-01

    With the growing use of automated content creation and computer-aided design tools in game development, there is potential for enhancing the design process through personalized interactions between the software and the game developer. This paper proposes designer modeling for capturing the designer's preferences, goals and processes from their interaction with a computer-aided design tool..., and envisions future directions which focus on personalizing the processes to a designer's particular wishes.

  3. Simulation modeling: a powerful tool for process improvement.

    Science.gov (United States)

    Boxerman, S B

    1996-01-01

    Simulation modeling provides an efficient means of examining the operation of a system under a variety of alternative conditions. This tool can potentially enhance a benchmarking project by providing a means for evaluating proposed modifications to the system or process under study.

  4. Designing a Training Tool for Imaging Mental Models

    Science.gov (United States)

    1990-11-01

    ...about how to weave together their disparate fields into a seamless web of knowledge. Learners often cannot visualize how the concepts and skills they... a seamless web of knowledge? How does the availability of a mental modeling tool enhance the ability of instructional designers to prepare...

  5. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models

    National Research Council Canada - National Science Library

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-01-01

    .... Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic...

  6. Application of a Mass-Consistent Wind Model to Chinook Windstorms

    Science.gov (United States)

    1988-06-01

    Meteor., 6, 837-344. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1980: A practical method for estimating wind characteristics at... Project 8349, Menlo Park, CA. 94025. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1982: A diagnostic model for estimating winds...

  7. Cepheid models based on self-consistent stellar evolution and pulsation calculations : The right answer?

    NARCIS (Netherlands)

    Baraffe, [No Value; Alibert, Y; Mera, D; Charbrier, G; Beaulieu, JP

    1998-01-01

    We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables (3

  8. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev

    2015-02-01

    Full Text Available In computer science, the metamodelling approach is becoming increasingly popular for software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. This analysis reveals the invariant physical structures that we propose to use as the basic abstractions of the physical metamodel: a system of geometrical objects, allowing a spatial structure of physical models to be built and a distribution of physical properties to be set. Different mathematical methods can then be applied to such a geometry of distributed physical properties. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.

  9. Object-Oriented MDAO Tool with Aeroservoelastic Model Tuning Capability

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley; Lung, Shun-fat

    2008-01-01

    An object-oriented multi-disciplinary analysis and optimization (MDAO) tool has been developed at the NASA Dryden Flight Research Center to automate the design and analysis process and leverage existing commercial as well as in-house codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic and hypersonic aircraft. Once the structural analysis discipline is finalized and integrated completely into the MDAO process, other disciplines such as aerodynamics and flight controls will be integrated as well. Simple and efficient model tuning capabilities, formulated as an optimization problem, have been successfully integrated with the MDAO tool. Better synchronization of all phases of experimental testing (ground and flight), analytical model updating, high-fidelity simulation for model validation, and integrated design may reduce uncertainties in the aeroservoelastic model and increase flight safety.

  10. HMMEditor: a visual editing tool for profile hidden Markov model

    Directory of Open Access Journals (Sweden)

    Cheng Jianlin

    2008-03-01

    Full Text Available Abstract Background Profile Hidden Markov Model (HMM) is a powerful statistical model to represent a family of DNA, RNA, and protein sequences. Profile HMM has been widely used in bioinformatics research such as sequence alignment, gene structure prediction, motif identification, protein structure prediction, and biological database search. However, few comprehensive, visual editing tools for profile HMM are publicly available. Results We develop a visual editor for profile Hidden Markov Models (HMMEditor). HMMEditor can visualize the profile HMM architecture, transition probabilities, and emission probabilities. Moreover, it provides functions to edit and save HMM and parameters. Furthermore, HMMEditor allows users to align a sequence against the profile HMM and to visualize the corresponding Viterbi path. Conclusion HMMEditor provides a set of unique functions to visualize and edit a profile HMM. It is a useful tool for biological sequence analysis and modeling. Both HMMEditor software and web service are freely available.
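
    Since the record highlights Viterbi-path visualization, a minimal sketch of the underlying recursion may help. The toy HMM below is hypothetical (a real profile HMM has match/insert/delete states per column), but the dynamic program is the same one such a tool displays:

```python
# Minimal Viterbi sketch on a toy two-state HMM (not HMMEditor's code).
import math

states = ["M1", "M2"]                       # hypothetical match states
start  = {"M1": 0.9, "M2": 0.1}
trans  = {"M1": {"M1": 0.2, "M2": 0.8}, "M2": {"M1": 0.4, "M2": 0.6}}
emit   = {"M1": {"A": 0.7, "C": 0.3}, "M2": {"A": 0.1, "C": 0.9}}

def viterbi(seq):
    V = [{s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}]
    back = []
    for ch in seq[1:]:
        prev, ptr, cur = V[-1], {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + math.log(trans[p][s]))
            cur[s] = prev[best] + math.log(trans[best][s]) + math.log(emit[s][ch])
            ptr[s] = best
        V.append(cur)
        back.append(ptr)
    # Trace the most probable state path backwards.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi("ACCA"))   # -> ['M1', 'M2', 'M2', 'M1']
```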

  11. Tool Steel Heat Treatment Optimization Using Neural Network Modeling

    Science.gov (United States)

    Podgornik, Bojan; Belič, Igor; Leskovšek, Vojteh; Godec, Matjaz

    2016-11-01

    Optimization of tool steel properties and the corresponding heat treatment is mainly based on a trial and error approach, which requires tremendous experimental work and resources. Therefore, there is a huge need for tools allowing prediction of the mechanical properties of tool steels as a function of composition and heat treatment process variables. The aim of the present work was to explore the potential and possibilities of artificial neural network-based modeling to select and optimize vacuum heat treatment conditions depending on the hot work tool steel composition and required properties. In the current case, training of a four-layer (8-20-20-2) feedforward neural network with an error backpropagation training scheme was based on the experimentally obtained tempering diagrams for ten different hot work tool steel compositions and at least two austenitizing temperatures. Results show that this type of modeling can be successfully used for detailed and multifunctional analysis of different influential parameters as well as to optimize the heat treatment process of hot work tool steels depending on the composition. In terms of composition, V was found to be the most beneficial alloying element, increasing hardness and fracture toughness of hot work tool steel; Si, Mn, and Cr increase hardness but lead to reduced fracture toughness, while Mo has the opposite effect. The optimum composition providing high KIc/HRC ratios would include 0.75 pct Si, 0.4 pct Mn, 5.1 pct Cr, 1.5 pct Mo, and 0.5 pct V, with the optimum heat treatment performed at lower austenitizing and intermediate tempering temperatures.
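
    The 8-20-20-2 topology mentioned above maps eight inputs (composition plus heat-treatment variables) through two hidden layers of 20 neurons each to two outputs (hardness and fracture toughness). A hedged sketch of such a network, using scikit-learn as a stand-in since the authors' training code is not given, with synthetic data in place of the tempering diagrams:

```python
# Hedged sketch of the 8-20-20-2 network topology; the inputs, outputs
# and data here are synthetic stand-ins, not the paper's dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 8))                      # 8 input variables
y = np.column_stack([X[:, :4].sum(axis=1),          # stand-in "hardness"
                     1.0 - X[:, 4:].sum(axis=1) / 4])  # stand-in "toughness"

net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="logistic",
                   solver="adam", max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict(X[:3]))   # predicted (hardness, toughness) pairs
```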

  12. Consistent dust and gas models for protoplanetary disks. I. Disk shape, dust settling, opacities, and PAHs

    Science.gov (United States)

    Woitke, P.; Min, M.; Pinte, C.; Thi, W.-F.; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.

    2016-02-01

    We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on the assumptions about the shape of the disk, the dust opacities, dust settling, and polycyclic aromatic hydrocarbons (PAHs). In particular, we propose new standard dust opacities for disk models, we present a simplified treatment of PAHs in radiative equilibrium which is sufficient to reproduce the PAH emission features, and we suggest using a simple yet physically justified treatment of dust settling. We roughly adjust parameters to obtain a model that predicts continuum and line observations that resemble typical multi-wavelength continuum and line observations of Class II T Tauri stars. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all mainstream continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63 μm, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties, i.e. large grains, often needed to fit the SED, have important consequences for disk chemistry and heating/cooling balance, leading to stronger near- to far-IR emission lines in general. Strong dust settling and missing disk flaring have similar effects on continuum observations, but opposite effects on far-IR gas emission lines. PAH molecules can efficiently shield the gas from stellar UV radiation because of their strong absorption and negligible scattering opacities in comparison to evolved dust. The observable millimetre-slope of the SED can become significantly more gentle in the case of cold disk midplanes, which we find regularly in our T Tauri models

  13. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    Science.gov (United States)

    Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.

    2015-01-01

    This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
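
    The Roger approximation step described above replaces tabulated generalized forces Q(ik) with a rational function whose coefficients are found by linear least squares once the lag roots are fixed. A scalar-valued sketch under those assumptions (synthetic data and assumed lag roots, not the tool's implementation):

```python
# Hedged sketch of a Roger-type rational function approximation for one
# generalized-force term Q(ik): Q ~ A0 + A1(ik) + A2(ik)^2 + sum_j A_{2+j} ik/(ik+b_j).
import numpy as np

k  = np.linspace(0.01, 1.0, 25)                 # reduced frequencies
ik = 1j * k
Q  = 1.0 + 0.5 * ik + 0.2 * ik / (ik + 0.3)     # synthetic "tabulated" data
bj = np.array([0.2, 0.6])                       # assumed lag roots

# Regressors: [1, ik, (ik)^2, ik/(ik+b1), ik/(ik+b2)]
cols = [np.ones_like(ik), ik, ik**2] + [ik / (ik + b) for b in bj]
M = np.column_stack(cols)

# Stack real and imaginary parts so the unknown coefficients stay real.
A, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                        np.concatenate([Q.real, Q.imag]), rcond=None)
print(A)                          # A0, A1, A2 and the two lag coefficients
print(np.abs(M @ A - Q).max())    # fit residual over the k-table
```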

  14. Genome scale models of yeast: towards standardized evaluation and consistent omic integration

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Nielsen, Jens

    2015-01-01

    Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published...... and are currently used for metabolic engineering and elucidating biological interactions. Here we review the history of yeast's GEMs, focusing on recent developments. We study how these models are typically evaluated, using both descriptive and predictive metrics. Additionally, we analyze the different ways...... in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted....

  15. Advancing Nucleosynthesis in Self-consistent, Multidimensional Models of Core-Collapse Supernovae

    CERN Document Server

    Harris, J Austin; Chertkow, Merek A; Bruenn, Stephen W; Lentz, Eric J; Messer, O E Bronson; Mezzacappa, Anthony; Blondin, John M; Marronetti, Pedro; Yakunin, Konstantin N

    2014-01-01

    We investigate core-collapse supernova (CCSN) nucleosynthesis in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species $\alpha$-network. Such a simplified network limits the ability to accurately evolve detailed composition, neutronization and the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks in post-processing nucleosynthesis calculations. Limitations such as poor spatial resolution of the tracer particles, estimation of the expansion timescales, and determination of the "mass-cut" at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of these uncertainties on post-processing nucleosynthesis calculations and implications for future models.

  16. Thermodynamically consistent modeling for dissolution/growth of bubbles in an incompressible solvent

    CERN Document Server

    Bothe, Dieter

    2014-01-01

    We derive mathematical models of the elementary process of dissolution/growth of bubbles in a liquid under pressure control. The modeling starts with a fully compressible version, both for the liquid and the gas phase so that the entropy principle can be easily evaluated. This yields a full PDE system for a compressible two-phase fluid with mass transfer of the gaseous species. Then the passage to an incompressible solvent in the liquid phase is discussed, where a carefully chosen equation of state for the liquid mixture pressure allows for a limit in which the solvent density is constant. We finally provide a simplification of the PDE system in case of a dilute solution.

  17. Direct Measurements of Local Coupling between Myosin Molecules Are Consistent with a Model of Muscle Activation.

    Directory of Open Access Journals (Sweden)

    Sam Walcott

    2015-11-01

    Full Text Available Muscle contracts due to ATP-dependent interactions of myosin motors with thin filaments composed of the proteins actin, troponin, and tropomyosin. Contraction is initiated when calcium binds to troponin, which changes conformation and displaces tropomyosin, a filamentous protein that wraps around the actin filament, thereby exposing myosin binding sites on actin. Myosin motors interact with each other indirectly via tropomyosin, since myosin binding to actin locally displaces tropomyosin and thereby facilitates binding of nearby myosin. Defining and modeling this local coupling between myosin motors is an open problem in muscle modeling and, more broadly, a requirement to understanding the connection between muscle contraction at the molecular and macro scale. It is challenging to directly observe this coupling, and such measurements have only recently been made. Analysis of these data suggests that two myosin heads are required to activate the thin filament. This result contrasts with a theoretical model, which reproduces several indirect measurements of coupling between myosin, that assumes a single myosin head can activate the thin filament. To understand this apparent discrepancy, we incorporated the model into stochastic simulations of the experiments, which generated simulated data that were then analyzed identically to the experimental measurements. By varying a single parameter, good agreement between simulation and experiment was established. The conclusion that two myosin molecules are required to activate the thin filament arises from an assumption, made during data analysis, that the intensity of the fluorescent tags attached to myosin varies depending on experimental condition. We provide an alternative explanation that reconciles theory and experiment without assuming that the intensity of the fluorescent tags varies.

  18. Self-Consistent, Axisymmetric Two-Integral Models of Elliptical Galaxies with Embedded Nuclear Discs

    OpenAIRE

    van den Bosch, P. P. J.; de Zeeuw, W.

    1996-01-01

    Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study th...

  19. Energy regeneration model of self-consistent field of electron beams into electric power

    Science.gov (United States)

    Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.

    2016-04-01

    We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation for delivery into the user's electric power grid (the onboard spacecraft network). We perform computer simulations that validate the high energy efficiency of the studied processes for application in electric power technology, as well as in electric power plants and propulsion installations on spacecraft.

  20. Flood damage: a model for consistent, complete and multipurpose scenarios

    Directory of Open Access Journals (Sweden)

    S. Menoni

    2016-12-01

    implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.

  1. A consistent model for leptogenesis, dark matter and the IceCube signal

    Energy Technology Data Exchange (ETDEWEB)

    Fiorentin, M. Re [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Niro, V. [Departamento de Física Teórica, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Instituto de Física Teórica UAM/CSIC,Calle Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Fornengo, N. [Dipartimento di Fisica, Università di Torino,via P. Giuria, 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Torino,via P. Giuria, 1, 10125 Torino (Italy)

    2016-11-04

    We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino $N_1$, thus fixing its mass and lifetime, while the production of $N_1$ in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional $SU(2)_R$ interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the B−L asymmetry dominantly produced by the next-to-lightest neutrino $N_2$. Further consequences and predictions of the model are that: the $N_1$ production implies a specific power-law relation between the reheating temperature of the Universe and the vacuum expectation value of the $SU(2)_R$ triplet; leptogenesis imposes a lower bound on the reheating temperature of the Universe at $7\times 10^{9}$ GeV. Additionally, the model requires a vanishing absolute neutrino mass scale $m_1 \simeq 0$.

  2. Consistent negative response of US crops to high temperatures in observations and crop models

    Science.gov (United States)

    Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja

    2017-04-01

    High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.

  3. Consistent negative response of US crops to high temperatures in observations and crop models

    Science.gov (United States)

    Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja

    2017-01-01

    High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day >30 °C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures >30 °C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.

  4. Demonstration of a geostatistical approach to physically consistent downscaling of climate modeling simulations

    KAUST Repository

    Jha, Sanjeev Kumar

    2013-01-01

    A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential not only to bridge issues of spatial resolution in regional and global climate model simulations but also to support feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables against available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.

  5. PEGASE; 2, a metallicity-consistent spectral evolution model of galaxies the documentation and the code

    CERN Document Server

    Fioc, Michel; Rocca-Volmerange, Brigitte

    1999-01-01

    We provide here the documentation of the new version of the spectral evolution model PEGASE. PEGASE computes synthetic spectra of galaxies in the UV to near-IR range from 0 to 20 Gyr, for a given stellar IMF and evolutionary scenario (star formation law, infall, galactic winds). The radiation emitted by stars from the main sequence to the pre-supernova or white dwarf stage is calculated, as well as the extinction by dust. A simple modeling of the nebular emission (continuum and lines) is also proposed. PEGASE may be used to model starbursts as well as old galaxies. The main improvements of PEGASE.2 relative to PEGASE.1 (Fioc & Rocca-Volmerange 1997) are the following: (1) The stellar evolutionary tracks of the Padova group for metallicities between 0.0001 and 0.1 have been included; (2) The evolution of the metallicity of the interstellar medium (ISM) due to SNII, SNIa and AGB stars is followed. Stars are formed with the same metallicity as the ISM (instead of a solar metallicity in PEGASE.1), providing thu...

  6. The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture

    Directory of Open Access Journals (Sweden)

    Parna Kazemian

    2014-07-01

    Full Text Available The identification of the climate of a particular place and the analysis of the climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of the climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general examination of the obtained information, on the one hand, the range of changes is identified, and, on the other hand, their practical uses in the future are selected. In the present paper, the bioclimatic conditions of Bahar city, according to the 29-year statistics of the synoptic station between 1976 and 2005, were examined using the Olgyay and Mahoney indexes. It should be added that, because of the short distance between Bahar and Hamedan, they share a single synoptic station. The results indicate that Bahar city has dominantly cold weather during most of the months. Therefore, based on the implications of each method, the principles of the suggested architectural designs can be integrated and improved in order to achieve sustainable development.

  7. A self-consistent impedance method for electromagnetic surface impedance modeling

    Science.gov (United States)

    Thiel, David V.; Mittra, Raj

    2001-01-01

    A two-dimensional, self-consistent impedance method has been derived and used to calculate the electromagnetic surface impedance above buried objects at very low frequencies. The earth half space is discretized using an array of impedance elements. Inhomogeneities in the complex permittivity of the earth are reflected in variations in these impedance elements. The magnetic field is calculated for each cell in the solution space using a difference equation derived from Faraday's and Ampere's laws. It is necessary to include an air layer above the earth's surface to allow the scattered magnetic field to be calculated at the surface. The source field is applied above the earth's surface as a Dirichlet boundary condition, whereas the Neumann condition is employed at all other boundaries in the solution space. This, in turn, enables users to use both finite and infinite magnetic field sources as excitations. The technique is shown to be computationally efficient and yields reasonably accurate results when applied to a number of one- and two-dimensional earth structures with a known surface impedance distribution.

  8. Self-consistent physical parameters for 5 intermediate-age SMC stellar clusters from CMD modelling

    CERN Document Server

    Dias, Bruno; Barbuy, Beatriz; Santiago, Basilio; Ortolani, Sergio; Balbinot, Eduardo

    2013-01-01

    Context. Stellar clusters in the Small Magellanic Cloud (SMC) are useful probes to study the chemical and dynamical evolution of this neighbouring dwarf galaxy, enabling inspection of a large period covering over 10 Gyr. Aims. The main goals of this work are the derivation of age, metallicity, distance modulus, reddening, core radius and central density profile for six sample clusters, in order to place them in the context of the Small Cloud evolution. The studied clusters are: AM 3, HW 1, HW 34, HW 40, Lindsay 2, and Lindsay 3, where HW 1, HW 34, and Lindsay 2 are studied for the first time. Methods. Optical Colour-Magnitude Diagrams (V, B-V CMDs) and radial density profiles were built from images obtained with the 4.1m SOAR telescope, reaching V~23. The determination of structural parameters was carried out by applying King profile fitting. The other parameters were derived in a self-consistent way by means of isochrone fitting, which uses the likelihood statistics to identify the synthetic CMDs that best rep...

  9. Greenhouse gases from wastewater treatment - A review of modelling tools.

    Science.gov (United States)

    Mannina, Giorgio; Ekama, George; Caniani, Donatella; Cosenza, Alida; Esposito, Giovanni; Gori, Riccardo; Garrido-Baserba, Manel; Rosso, Diego; Olsson, Gustaf

    2016-05-01

    Nitrous oxide, carbon dioxide and methane are greenhouse gases (GHG) emitted from wastewater treatment that contribute to its carbon footprint. As a result of the increasing awareness of GHG emissions from wastewater treatment plants (WWTPs), new modelling, design, and operational tools have been developed to address and reduce GHG emissions at the plant-wide scale and beyond. This paper reviews the state of the art and the recently developed tools used to understand and manage GHG emissions from WWTPs, and discusses open problems and research gaps. The literature review reveals that knowledge of the processes related to N2O formation, especially due to autotrophic biomass, is still incomplete. The literature review also shows that a plant-wide modelling approach that includes GHG is the best option for understanding how to reduce the carbon footprint of WWTPs. Indeed, several studies have confirmed that a wide vision of the WWTPs has to be considered in order to make them as sustainable as possible. Mechanistic dynamic models have been demonstrated to be the most comprehensive and reliable tools for GHG assessment. Very few plant-wide GHG modelling studies have been applied to real WWTPs due to the huge difficulties related to data availability and model complexity. For further improvement in GHG plant-wide modelling, and to favour its use at large real-world scale, knowledge of the mechanisms involved in GHG formation and release, as well as data acquisition, must be enhanced.

  10. AgMIP Training in Multiple Crop Models and Tools

    Science.gov (United States)

    Boote, Kenneth J.; Porter, Cheryl H.; Hargreaves, John; Hoogenboom, Gerrit; Thornburn, Peter; Mutter, Carolyn

    2015-01-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has the goal of using multiple crop models to evaluate climate impacts on agricultural production and food security in developed and developing countries. There are several major limitations that must be overcome to achieve this goal, including the need to train AgMIP regional research team (RRT) crop modelers to use models other than the ones they are currently familiar with, plus the need to harmonize and interconvert the disparate input file formats used for the various models. Two activities were followed to address these shortcomings among AgMIP RRTs to enable them to use multiple models to evaluate climate impacts on crop production and food security. We designed and conducted courses in which participants trained on two different sets of crop models, with emphasis on the model of least experience. In a second activity, the AgMIP IT group created templates for inputting data on soils, management, weather, and crops into AgMIP harmonized databases, and developed translation tools for converting the harmonized data into files that are ready for multiple crop model simulations. The strategies for creating and conducting the multi-model course and developing entry and translation tools are reviewed in this chapter.
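
    The translation step described above is, at heart, a format-mapping exercise: read harmonized records, write model-specific input files. A minimal sketch with invented field names (not AgMIP's actual schema or tools):

```python
# Illustrative sketch of a harmonized-to-model weather translator.
# The column names and the fixed-width target format are hypothetical,
# invented for illustration only.
import csv

def translate_weather(harmonized_csv, model_file):
    with open(harmonized_csv, newline="") as src, open(model_file, "w") as dst:
        dst.write("@DATE  SRAD  TMAX  TMIN  RAIN\n")   # hypothetical header
        for row in csv.DictReader(src):
            dst.write("{date}  {srad:>4}  {tmax:>4}  {tmin:>4}  {rain:>4}\n"
                      .format(date=row["date"], srad=row["solar_mj_m2"],
                              tmax=row["tmax_c"], tmin=row["tmin_c"],
                              rain=row["precip_mm"]))

# translate_weather("harmonized.csv", "station.wth")
```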

  11. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...... of the framework. The issue of commercial simulators or software providing the necessary features for product-process synthesis-design as opposed to their development by the academic PSE community will also be discussed. An example of a successful collaboration between academia-industry for the development...

  12. A self-consistent 3D model of fluctuations in the helium-ionizing background

    Science.gov (United States)

    Davies, Frederick B.; Furlanetto, Steven R.; Dixon, Keri L.

    2017-03-01

    Large variations in the effective optical depth of the He II Lyα forest have been observed at z ≳ 2.7, but the physical nature of these variations is uncertain: either the Universe is still undergoing the process of He II reionization, or the Universe is highly ionized but the He II-ionizing background fluctuates significantly on large scales. In an effort to build upon our understanding of the latter scenario, we present a novel model for the evolution of ionizing background fluctuations. Previous models have assumed the mean free path of ionizing photons to be spatially uniform, ignoring the dependence of that scale on the local ionization state of the intergalactic medium (IGM). This assumption is reasonable when the mean free path is large compared to the average distance between the primary sources of He II-ionizing photons, ≳ L⋆ quasars. However, when this is no longer the case, the background fluctuations become more severe, and an accurate description of the average propagation of ionizing photons through the IGM requires additionally accounting for the fluctuations in opacity. We demonstrate the importance of this effect by constructing 3D semi-analytic models of the helium-ionizing background from z = 2.5-3.5 that explicitly include a spatially varying mean free path of ionizing photons. The resulting distribution of effective optical depths at large scales in the He II Lyα forest is very similar to the latest observations with HST/COS at 2.5 ≲ z ≲ 3.5.

  13. Consistency of different tropospheric models and mapping functions for precise GNSS processing

    Science.gov (United States)

    Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio

    2017-04-01

    The TOmographic Model of the IONospheric electron content (TOMION) software implements simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In a first scenario, only forward processing was activated and the coordinates of the Wide Area GNSS network were loosely constrained, without fixing the carrier phase ambiguities, for both reference and rover receivers. In a second one, precise point positioning (PPP) was implemented, iterating for a fixed coordinate set for the second day. Comparisons between TOMION, IGS and GIPSY estimates have been performed; for the first, IGS clocks and orbits were considered. The agreement with GIPSY results seems to be 10 times better than with the IGS final ZTD product, despite IGS products having been used for the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP, in the worst cases. Moreover, the Vienna mapping function showed in general fairly better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm in all studied situations, with a slightly better fit for the Niell one.

  14. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YUE Li; CHEN Xiru

    2004-01-01

    Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, together with some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the LIL (law of the iterated logarithm) for iid partial sums and thus cannot be improved.
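
    For orientation, the quasi-likelihood equation referred to here has the standard GLM form (notation assumed, not taken from the paper):

```latex
% Standard quasi-likelihood (estimating) equation for a GLM with
% regressors x_i, responses y_i and inverse link \mu:
\sum_{i=1}^{n} x_i \,\bigl( y_i - \mu(x_i^{\top}\beta) \bigr) = 0 .
```

    The cited result concerns the existence of a root of this equation for large n and the rate at which that root converges to the true parameter.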

  15. Strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    Yin, Changming; Zhao, Lincheng

    2005-01-01

    In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors $Z_i$ and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_iZ_i'$, a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution $\beta_n$ for all large sample sizes n, which converges to the true regression parameter $\beta_0$. This result is an essential improvement over the relevant results in the literature.

  16. A self-consistent model for the study of electronic properties of hot dense plasmas in the superconfiguration approximation

    Energy Technology Data Exchange (ETDEWEB)

    Pain, J.C. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France)]. E-mail: jean-christophe.pain@cea.fr; Dejonghe, G. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France); Blenski, T. [CEA/DSM/DRECAM/SPAM, Centre d' Etudes de Saclay, 91191 Gif-sur-Yvette Cedex (France)

    2006-05-15

    We propose a thermodynamically consistent model involving detailed screened ions, described by superconfigurations, in plasmas. In the present work, the electrons, bound and free, are treated quantum-mechanically so that resonances are carefully taken into account in the self-consistent calculation of the electronic structure of each superconfiguration. The procedure is in some sense similar to the one used in the Inferno code developed by D.A. Liberman; however, here we perform this calculation in the ion-sphere model for each superconfiguration. The superconfiguration approximation allows rapid calculation of the necessary averages over all possible configurations representing excited states of bound electrons. The model enables a fully quantum-mechanical self-consistent calculation of the electronic structure of ions and provides the relevant thermodynamic quantities (e.g., internal energy, Helmholtz free energy and pressure), together with an improved treatment of pressure ionization. It should therefore give better insight into the impact of plasma effects on photoabsorption spectra.

  17. Experiences & Tools from Modeling Instruction Applied to Earth Sciences

    Science.gov (United States)

    Cervenec, J.; Landis, C. E.

    2012-12-01

    The Framework for K-12 Science Education calls for stronger curricular connections within the sciences, greater depth in understanding, and tasks higher on Bloom's Taxonomy. Understanding atmospheric sciences draws on core knowledge traditionally taught in physics, chemistry, and in some cases, biology. If this core knowledge is not conceptually sound, well retained, and transferable to new settings, understanding the causes and consequences of climate changes become a task in memorizing seemingly disparate facts to a student. Fortunately, experiences and conceptual tools have been developed and refined in the nationwide network of Physics Modeling and Chemistry Modeling teachers to build necessary understanding of conservation of mass, conservation of energy, particulate nature of matter, kinetic molecular theory, and particle model of light. Context-rich experiences are first introduced for students to construct an understanding of these principles and then conceptual tools are deployed for students to resolve misconceptions and deepen their understanding. Using these experiences and conceptual tools takes an investment of instructional time, teacher training, and in some cases, re-envisioning the format of a science classroom. There are few financial barriers to implementation and students gain a greater understanding of the nature of science by going through successive cycles of investigation and refinement of their thinking. This presentation shows how these experiences and tools could be used in an Earth Science course to support students developing conceptually rich understanding of the atmosphere and connections happening within.

  18. A New Algorithm for Self-Consistent 3-D Modeling of Collisions in Dusty Debris Disks

    CERN Document Server

    Stark, Christopher C

    2009-01-01

    We present a new "collisional grooming" algorithm that enables us to model images of debris disks where the collision time is less than the Poynting-Robertson time for the dominant grain size. Our algorithm uses the output of a collisionless disk simulation to iteratively solve the mass flux equation for the density distribution of a collisional disk containing planets in 3 dimensions. The algorithm can be run on a single processor in ~1 hour. Our preliminary models of disks with resonant ring structures caused by terrestrial-mass planets show that the collision rate for background particles in a ring structure is enhanced by a factor of a few compared to the rest of the disk, and that dust grains in or near resonance have even higher collision rates. We show how collisions can alter the morphology of a resonant ring structure by reducing the sharpness of a resonant ring's inner edge and by smearing out azimuthal structure. We implement a simple prescription for particle fragmentation and show how Poynting-Ro...

  19. A consistent model for leptogenesis, dark matter and the IceCube signal

    CERN Document Server

    Fiorentin, M Re; Fornengo, N

    2016-01-01

    We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino $N_1$, thus fixing its mass and lifetime, while the production of $N_1$ in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional $SU(2)_R$ interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the $B-L$ asymmetry dominantly produced by the next-to-lightest neutrino $N_2$. Further consequences and predictions of the model are that: the $N_1$ production implies a specific power-law relation between the reheating temperature of the Universe and the va...

  20. Thermal X-ray emission from a baryonic jet: a self-consistent multicolour spectral model

    CERN Document Server

    Khabibullin, Ildar; Sazonov, Sergey

    2015-01-01

    We present a publicly-available spectral model for thermal X-ray emission from a baryonic jet in an X-ray binary system, inspired by the microquasar SS 433. The jet is assumed to be strongly collimated (half-opening angle $\Theta\sim 1^\circ$) and mildly relativistic (bulk velocity $\beta=V_{b}/c\sim 0.03-0.3$). Its X-ray spectrum is found by integrating over thin slices of constant temperature, radiating in the optically thin coronal regime. The temperature profile along the jet and the corresponding differential emission measure distribution are calculated with full account for gas cooling due to expansion and radiative losses. Since the model predicts both the spectral shape and luminosity of the jet's emission, its normalisation is not a free parameter if the source distance is known. We also explore the possibility of using simple X-ray observables (such as flux ratios in different energy bands) to constrain physical parameters of the jet (e.g. gas temperature and density at its base) without broad-band fitting of...

  1. Interface Consistency

    DEFF Research Database (Denmark)

    Staunstrup, Jørgen

    1998-01-01

    This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very...... significant source of design errors. A wide range of interface specifications are possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks....

  2. redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models.

    Science.gov (United States)

    Ataman, Meric; Hernandez Gardiol, Daniel F; Fengos, Georgios; Hatzimanikatis, Vassily

    2017-07-01

    Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to different criteria and levels of detail, which can compromise the transferability of findings and also the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently-reduced" models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models.
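
    The graph-based search mentioned above can be pictured as expanding outward from the reactions of a chosen subsystem across shared metabolites. A toy sketch of that step (not the redGEM implementation; the distance limit D and the network are illustrative):

```python
# Toy sketch of a graph-based reduction step: starting from the reactions
# of a chosen subsystem, expand over a metabolite-reaction graph up to a
# distance D and keep everything reached. Network and D are illustrative.
from collections import deque

def core_reactions(reactions, seeds, D):
    """reactions: {rxn: set(metabolites)}; seeds: subsystem reactions."""
    met2rxn = {}
    for r, mets in reactions.items():
        for m in mets:
            met2rxn.setdefault(m, set()).add(r)
    kept, frontier = set(seeds), deque((r, 0) for r in seeds)
    while frontier:
        r, d = frontier.popleft()
        if d == D:
            continue
        for m in reactions[r]:
            for nxt in met2rxn[m]:
                if nxt not in kept:
                    kept.add(nxt)
                    frontier.append((nxt, d + 1))
    return kept

toy = {"R1": {"A", "B"}, "R2": {"B", "C"}, "R3": {"C", "D"}, "R4": {"E"}}
print(core_reactions(toy, {"R1"}, D=1))   # -> {'R1', 'R2'}: one step out
```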

  3. redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models.

    Directory of Open Access Journals (Sweden)

    Meric Ataman

    2017-07-01

    Full Text Available Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to different criteria and levels of detail, which can compromise the transferability of findings and also the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently-reduced" models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models.

  4. Scenario Evaluator for Electrical Resistivity survey pre-modeling tool

    Science.gov (United States)

    Terry, Neil; Day-Lewis, Frederick D.; Robinson, Judith L.; Slater, Lee D; Halford, Keith J.; Binley, Andrew; Lane, John; Werkema, Dale

    2017-01-01

    Geophysical tools have much to offer users in environmental, water resource, and geotechnical fields; however, techniques such as electrical resistivity imaging (ERI) are often oversold and/or overinterpreted due to a lack of understanding of the limitations of the techniques, such as the appropriate depth intervals or resolution of the methods. The relationship between ERI data and resistivity is nonlinear; therefore, these limitations depend on site conditions and survey design and are best assessed through forward and inverse modeling exercises prior to field investigations. In this approach, proposed field surveys are first numerically simulated given the expected electrical properties of the site, and the resulting hypothetical data are then analyzed using inverse models. Performing ERI forward/inverse modeling, however, requires substantial expertise and can take many hours to implement. We present a new spreadsheet-based tool, the Scenario Evaluator for Electrical Resistivity (SEER), which features a graphical user interface that allows users to manipulate a resistivity model and instantly view how that model would likely be interpreted by an ERI survey. The SEER tool is intended for use by those who wish to determine the value of including ERI to achieve project goals, and is designed to have broad utility in industry, teaching, and research.

  5. DsixTools: the standard model effective field theory toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Celis, Alejandro [Ludwig-Maximilians-Universitaet Muenchen, Fakultaet fuer Physik, Arnold Sommerfeld Center for Theoretical Physics, Munich (Germany); Fuentes-Martin, Javier; Vicente, Avelino [Universitat de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Virto, Javier [University of Bern, Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics, Bern (Switzerland)

    2017-06-15

    We present DsixTools, a Mathematica package for the handling of the dimension-six standard model effective field theory. Among other features, DsixTools allows the user to perform the full one-loop renormalization group evolution of the Wilson coefficients in the Warsaw basis. This is achieved thanks to the SMEFTrunner module, which implements the full one-loop anomalous dimension matrix previously derived in the literature. In addition, DsixTools also contains modules devoted to the matching to the ΔB = ΔS = 1, 2 and ΔB = ΔC = 1 operators of the Weak Effective Theory at the electroweak scale, and their QCD and QED Renormalization group evolution below the electroweak scale. (orig.)
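
    The one-loop running performed by the SMEFTrunner module solves the standard renormalization group equation for the Wilson coefficients (generic form, not the package's notation):

```latex
% Generic one-loop RGE for Wilson coefficients C_i in the Warsaw basis;
% \gamma is the anomalous dimension matrix referred to in the record.
\mu \frac{\mathrm{d}C_i}{\mathrm{d}\mu} \;=\; \frac{1}{16\pi^{2}} \sum_j \gamma_{ij}\, C_j .
```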

  6. Gauge propagator and physical consistency of the CPT-even part of the standard model extension

    Science.gov (United States)

    Casana, Rodolfo; Ferreira, Manoel M., Jr.; Gomes, Adalto R.; Pinheiro, Paulo R. D.

    2009-12-01

    In this work, we explicitly evaluate the gauge propagator of the Maxwell theory supplemented by the CPT-even term of the standard model extension. First, we specialize our evaluation to the parity-odd sector of the tensor Wμνρσ, using a parametrization that retains only the three nonbirefringent coefficients. From the poles of the propagator, it is shown that the physical modes of this electrodynamics are stable, noncausal and unitary. In the sequel, we evaluate the parity-even gauge propagator using a parametrization that allows us to work with only the isotropic nonbirefringent element. In this case, we show that the physical modes of the parity-even sector of the tensor W are causal, stable and unitary for a limited range of the isotropic coefficient.

  7. Consistency and normality of Huber-Dutter estimators for partial linear model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    For the partial linear model Y = Xτβ0 + g0(T) + ε with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, of the scale σ of the errors, and of the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with a rate of convergence of $n^{-1/2}$, and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without a scale parameter and with the ordinary least squares estimator.
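
    For reference, the ρ function at the core of Huber-type estimation has the standard form below (the exact Huber-Dutter criterion, which couples it to a simultaneous scale estimate, is not reproduced here):

```latex
% Huber's loss with tuning constant c (standard definition):
\rho_c(u) =
\begin{cases}
  \tfrac{1}{2}u^{2},            & |u| \le c,\\
  c\,|u| - \tfrac{1}{2}c^{2},   & |u| > c,
\end{cases}
```

    with the tuning constant c controlling the transition from quadratic to linear loss, which is what gives the estimator its robustness to outlying errors.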

  8. Consistency and normality of Huber-Dutter estimators for partial linear model

    Institute of Scientific and Technical Information of China (English)

    TONG XingWei; CUI HengJian; YU Peng

    2008-01-01

    For the partial linear model Y = Xτβ0 + g0(T) + ε with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, the scale σ for the errors, and the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with the rate of convergence $n^{-1/2}$ and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without a scale parameter and the ordinary least squares estimator.

  9. A Lane consistent optical model potential for nucleon scattering on actinide nuclei with extended coupling

    Science.gov (United States)

    Quesada, José Manuel; Capote, Roberto; Soukhovitski, Efrem S.; Chiba, Satoshi

    2016-03-01

    An extension for odd-A actinides of a previously derived dispersive coupled-channel optical model potential (OMP) for 238U and 232Th nuclei is presented. It is used to fit simultaneously all the available experimental databases, including neutron strength functions, for nucleon scattering on 232Th, 233,235,238U and 239Pu nuclei. Quasi-elastic (p,n) scattering data on 232Th and 238U to the isobaric analogue states of the target nucleus are also used to constrain the isovector part of the optical potential. For even-even (odd) actinides almost all low-lying collective levels below 1 MeV (0.5 MeV) of excitation energy are coupled. OMP parameters show a smooth energy dependence and an energy-independent geometry.

  10. Bringing consistency to simulation of population models--Poisson simulation as a bridge between micro and macro simulation.

    Science.gov (United States)

    Gustafsson, Leif; Sternad, Mikael

    2007-10-01

    Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, much larger and more complicated models can be built and executed with Poisson Simulation than is possible with the Markov approach.
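
    A minimal sketch (ours, not the paper's formulation) of the technique in code: each flow of a macro-level model is drawn, per time step, from a Poisson distribution whose mean is the instantaneous rate times the step length:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def poisson_simulation(x0, birth_rate, death_rate, dt, steps):
        """Macro-level birth-death population model stepped with Poisson draws.

        Each flow over a time step dt is drawn from Po(rate * dt), so the
        macro-model reproduces the stochastics of the corresponding micro-model.
        """
        x, path = x0, [x0]
        for _ in range(steps):
            births = rng.poisson(birth_rate * x * dt)   # inflow this step
            deaths = rng.poisson(death_rate * x * dt)   # outflow this step
            x = max(x + births - deaths, 0)             # population stays non-negative
            path.append(x)
        return path

    # Example: 100 individuals, per-capita rates 0.1 and 0.12 per unit time.
    trajectory = poisson_simulation(100, 0.1, 0.12, dt=0.1, steps=500)
    ```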

  11. Towards a Self Consistent Model of the Thermal Structure of the Venus Atmosphere

    Science.gov (United States)

    Limaye, Sanjay; Vandaele, Ann C.; Wilson, Colin

    Nearly three decades ago, an international effort led to the adoption of the Venus International Reference Atmosphere (VIRA), published in 1985 after the significant data returned by the Pioneer Venus Orbiter and Probes and the earlier Venera missions (Kliore et al., 1985). The vertical thermal structure is one component of the reference model, which relied primarily on the three Pioneer Venus Small Probe and the Large Probe profiles, as well as several hundred temperature profiles retrieved from Pioneer Venus Orbiter radio occultation data collected during 1978-1982. Since then, a large amount of thermal structure data has been obtained from multiple instruments on ESA's Venus Express (VEX) orbiter mission. The VEX data come from retrieval of temperature profiles from SPICAV/SOIR stellar/solar occultations, VeRa radio occultations, and passive remote sensing by the VIRTIS instrument. The results of these three experiments vary in their intrinsic properties: altitude coverage, spatial and temporal sampling, resolution, and accuracy. An international team has been formed with support from the International Space Science Institute (Bern, Switzerland) to consider the observations of the Venus atmospheric structure obtained since the data used for the COSPAR Venus International Reference Atmosphere (Kliore et al., 1985). We report on the progress made by comparing the newer data with the VIRA model and also between different experiments where they overlap. Kliore, A.J., V.I. Moroz, and G.M. Keating, Eds. 1985, VIRA: Venus International Reference Atmosphere, Advances in Space Research, Volume 5, Number 11, 307 pages.

  12. Reduced fertility in patients' families is consistent with the sexual selection model of schizophrenia and schizotypy.

    Directory of Open Access Journals (Sweden)

    Marco Del Giudice

    Full Text Available BACKGROUND: Schizophrenia is a mental disorder marked by an evolutionarily puzzling combination of high heritability, reduced reproductive success, and a remarkably stable prevalence. Recently, it has been proposed that sexual selection may be crucially involved in the evolution of schizophrenia. In the sexual selection model (SSM) of schizophrenia and schizotypy, schizophrenia represents the negative extreme of a sexually selected indicator of genetic fitness and condition. Schizotypal personality traits are hypothesized to increase the sensitivity of the fitness indicator, thus conferring mating advantages on high-fitness individuals but increasing the risk of schizophrenia in low-fitness individuals; the advantages of successful schizotypy would be mediated by enhanced courtship-related traits such as verbal creativity. Thus, schizotypy-increasing alleles would be maintained by sexual selection, and could be selectively neutral or even beneficial, at least in some populations. However, most empirical studies find that the reduction in fertility experienced by schizophrenic patients is not compensated for by increased fertility in their unaffected relatives. This finding has been interpreted as indicating strong negative selection on schizotypy-increasing alleles, and as providing evidence against sexual selection on schizotypy. METHODOLOGY: A simple mathematical model is presented, showing that reduced fertility in the families of schizophrenic patients can coexist with selective neutrality of schizotypy-increasing alleles, or even with positive selection on schizotypy in the general population. If the SSM is correct, studies of patients' families can be expected to underestimate the true fertility associated with schizotypy. SIGNIFICANCE: This paper formally demonstrates that reduced fertility in the families of schizophrenic patients does not constitute evidence against sexual selection on schizotypy-increasing alleles. Furthermore, it suggests

  13. Evaluating EML Modeling Tools for Insurance Purposes: A Case Study

    Directory of Open Access Journals (Sweden)

    Mikael Gustavsson

    2010-01-01

    Full Text Available As with any situation that involves economic risk, refineries may share their risk with insurers. The decision process generally includes modelling to determine to what extent the process area can be damaged. On the extreme end of modelling, the so-called Estimated Maximum Loss (EML) scenarios are found. These scenarios predict the maximum loss a particular installation can sustain. Unfortunately, no standard model for this exists, so insurers reach different results by applying different models and different assumptions. Therefore, a study has been conducted on a case in a Swedish refinery where several scenarios had previously been modelled by two different insurance brokers using two different software tools, ExTool and SLAM. This study reviews the concept of EML and analyses the models used to see which parameters are most uncertain. A third model, EFFECTS, was also employed in an attempt to reach a conclusion with higher reliability.

  14. Nephrectomized and hepatectomized animal models as tools in preclinical pharmacokinetics.

    Science.gov (United States)

    Vestergaard, Bill; Agersø, Henrik; Lykkesfeldt, Jens

    2013-08-01

    Early understanding of the pharmacokinetics and metabolic patterns of new drug candidates is essential for selecting optimal candidates to move further into the drug development process. In vitro methodologies can be used to investigate metabolic patterns, but in general they lack several aspects of whole-body physiology. In contrast, the complexity of intact animals does not necessarily allow individual processes to be identified. Animal models lacking a major excretion organ can be used to investigate these individual metabolic processes. Animal models of nephrectomy and hepatectomy have considerable potential as tools in preclinical pharmacokinetics to assess which organs are important for drug clearance, and thereby which metabolic processes might be manipulated to improve the pharmacokinetic properties of the molecules. Detailed knowledge of anatomy and surgical techniques is crucial to successfully establish the models, and well-balanced anaesthesia and adequate monitoring of the animals are also of major importance. An obvious drawback of animal models lacking an organ is the disruption of normal homoeostasis and the induction of dramatic and ultimately mortal systemic changes in the animals. Refining the surgical techniques and the post-operative supportive care of the animals can increase the value of these models by minimizing the systemic changes induced, and thorough validation of nephrectomy and hepatectomy models is needed before using such models as a tool in preclinical pharmacokinetics. The present MiniReview discusses pros and cons of the available techniques associated with establishing nephrectomy and hepatectomy models.

  15. Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars

    CERN Document Server

    Airapetian, V S; Carpenter, K G

    2014-01-01

    We present the first magnetohydrodynamic model of stellar chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using alpha Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfven wave dissipation and wave reflection. We demonstrate that, with the effects of ion-neutral collisions on the resistivity of the magnetized, weakly ionized chromospheric plasma included and with appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfven waves can explain the non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of alpha Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by ...
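
    Schematically, the kind of generalized Ohm's law such partially ionized MHD models use (an illustrative form in our notation, not the paper's exact equation): ohmic dissipation plus an ambipolar term acting on the current component perpendicular to the field, with ξ_n the neutral fraction and α_n the ion-neutral friction coefficient:

    ```latex
    % Schematic generalized Ohm's law for a weakly ionized plasma (illustrative):
    % ion-neutral collisions enhance the dissipation of perpendicular currents.
    \vec{E} + \vec{v}\times\vec{B}
      = \eta\,\vec{J} + \eta_{\mathrm{AD}}\,\vec{J}_{\perp},
    \qquad
    \eta_{\mathrm{AD}} \sim \frac{\xi_n^{2}\,|\vec{B}|^{2}}{\alpha_n}.
    ```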

  16. Complementarity of DM searches in a consistent simplified model: the case of Z′

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, Thomas [SISSA and INFN, via Bonomea 265, 34136 Trieste (Italy); Katz, Andrey [Theory Division, CERN, CH-1211 Geneva 23 (Switzerland); Département de Physique Théorique and Center for Astroparticle Physics (CAP), Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Morgante, Enrico; Racco, Davide [Département de Physique Théorique and Center for Astroparticle Physics (CAP), Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Rameez, Mohamed [Département de Physique Nucléaire et Corpusculaire, Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Riotto, Antonio [Département de Physique Théorique and Center for Astroparticle Physics (CAP), Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland)

    2016-10-14

    We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, and direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z′ mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while heavy DM annihilation is completely dominated by the Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.

  17. Complementarity of DM Searches in a Consistent Simplified Model: the Case of Z'

    CERN Document Server

    Jacques, Thomas; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio

    2016-01-01

    We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC and direct detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy $Z'$ mediates the interactions between the SM and the DM. We find that in most cases IceCube provides the strongest bounds on this scenario, while the LHC constraints are only meaningful for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun are either $b \\bar b$ or $t \\bar t$, while the heavy DM annihilation is completely dominated by $Zh$ channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast Ice...

  18. Self-consistent, axisymmetric two-integral models of elliptical galaxies with embedded nuclear discs

    CERN Document Server

    Van den Bosch, F C; van den Bosch, Frank C; de Zeeuw, P Tim

    1996-01-01

    Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study the influence of seeing convolution. The rotation curve of a nuclear disc gives an excellent measure of the central mass-to-light ratio whenever the VPs clearly reveal the narrow, rapidly rotating component associated with the nuclear disc. Steep cusps and seeing convolution both result in central VPs that are dominated by the bulge light, and these VPs barely show the presence of the nuclear disc, impeding measurements of the central rotation velocities of the disc stars. However, if a massive BH is present, the disc compo...

  19. Dynamic wind turbine models in power system simulation tool

    DEFF Research Database (Denmark)

    Hansen, Anca D.; Iov, Florin; Sørensen, Poul

    This report presents a collection of models and control strategies developed and implemented in the power system simulation tool PowerFactory DIgSILENT for different wind turbine concepts. It is the second edition of Risø-R-1400(EN) and it gathers and describes a whole wind turbine model database...... strategies have different goals e.g. fast response over disturbances, optimum power efficiency over a wider range of wind speeds, voltage ride-through capability including grid support. A dynamic model of a DC connection for active stall wind farms to the grid including the control is also implemented...

  20. Hypermedia as an experiential learning tool: a theoretical model

    Directory of Open Access Journals (Sweden)

    Jose Miguel Baptista Nunes

    1996-01-01

    Full Text Available The process of methodical design and development is of extreme importance in the production of educational software. However, this process will only be effective if it is based on a theoretical model that explicitly defines what educational approach is being used and how specific features of the technology can best support it. This paper proposes a theoretical model of how hypermedia can be used as an experiential learning tool. The development of the model was based on an experiential learning approach and simultaneously aims at minimising the inherent problems of hypermedia as the underlying support technology.

  1. The consistency evaluation of the climate version of the Eta regional forecast model developed for regional climate downscaling

    CERN Document Server

    Pisnichenko, I A

    2007-01-01

    The regional climate model prepared from the Eta WS (workstation) forecast model has been integrated over South America with a horizontal resolution of 40 km for the period 1961-1977. The model was forced at its lateral boundaries by the outputs of HadAMP. The HadAMP data represent a simulation of the modern climate with a resolution of about 150 km. In order to prepare the regional climate model from the Eta forecast model, new blocks were added and multiple modifications and corrections were made in the original model. The climate Eta model was run on the SX-6 supercomputer. The detailed analysis of the results of the dynamical downscaling experiment includes an investigation of the consistency between the regional and AGCM models, as well as of the ability of the regional model to resolve important features of climate fields on a finer scale than that resolved by the AGCM. In this work we show the results of our investigation of the consistency of the output fields of the Eta model and HadAMP. We have analysed geo...

  2. Solid consistency

    Science.gov (United States)

    Bordin, Lorenzo; Creminelli, Paolo; Mirbabayi, Mehrdad; Noreña, Jorge

    2017-03-01

    We argue that isotropic scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by ζ and ℛ, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed-limit consistency relation holds after angular average over the long mode. The correlation functions of a long-wavelength spherical scalar mode with several short scalar or tensor modes are fixed by the scaling behavior of the correlators of short modes, independently of the solid inflation action or the dynamics of reheating.
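
    For reference, a schematic statement of the squeezed-limit relation in question (standard single-field notation, ours; per the abstract, in solid inflation the equality holds only after angular averaging over the direction of the long mode):

    ```latex
    % Maldacena-type squeezed-limit consistency relation (schematic; the prime
    % denotes stripping the overall momentum-conserving delta function).
    \lim_{q \to 0}\,
    \langle \zeta_{\vec q}\,\zeta_{\vec k}\,\zeta_{-\vec k} \rangle'
      = -(n_s - 1)\, P_\zeta(q)\, P_\zeta(k)
    ```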

  3. Self consistent model of core formation and the effective metal-silicate partitioning

    Science.gov (United States)

    Ichikawa, H.; Labrosse, S.; Kameyama, M.

    2010-12-01

    It has long been known that the formation of the core transforms gravitational energy into heat and is able to heat up the whole Earth by about 2000 K. However, the distribution of this energy within the Earth is still debated and depends on the core formation process considered. Iron rain in the surface magma ocean is thought to be the first separation mechanism for large planets; iron then coalesces to form a pond at the base of the magma ocean [Stevenson 1990]. The time scale of the separation can be estimated from the falling velocity of the iron phase, which numerical simulation [Ichikawa et al., 2010] puts at ~10 cm/s for centimetre-scale iron droplets. A simple estimate of the metal-silicate partition from the P-T condition at the base of the magma ocean, which in a single-stage model must lie between the peridotite liquidus and solidus, is inconsistent with the Earth's core-mantle partition: P-T conditions where silicate equilibrated with metal are beyond the liquidus or solidus temperature by about 700 K. For example, estimated P-T conditions are 40 GPa at 3750 K for Wade and Wood, 2005; T ≥ 3600 K for Chabot and Agee, 2003; and 35 GPa at T ≥ 3300 K for Gessmann and Rubie, 2000. Meanwhile, Rubie et al., 2003 showed that metal could not equilibrate with silicate at the base of the magma ocean before crystallization of the silicate. On the other hand, metal-silicate equilibration is achieved in only ~5 s in the iron-rain state. Therefore metal and silicate separate and equilibrate simultaneously at the P-T conditions encountered on the way to the iron pond. Taking into account the release of gravitational energy, the temperature in the middle of the magma ocean would be higher than the liquidus. Estimating the thermal structure during iron-silicate separation requires the development of a planetary-sized calculation model. However, because of the huge disparity of scales between the cm-sized drops and the magma ocean, a direct

  4. A practical tool for modeling biospecimen user fees.

    Science.gov (United States)

    Matzke, Lise; Dee, Simon; Bartlett, John; Damaraju, Sambasivarao; Graham, Kathryn; Johnston, Randal; Mes-Masson, Anne-Marie; Murphy, Leigh; Shepherd, Lois; Schacter, Brent; Watson, Peter H

    2014-08-01

    The question of how best to attribute the unit costs of the annotated biospecimen product that is provided to a research user is a common issue for many biobanks. Some of the factors influencing user fees are capital and operating costs, internal and external demand and market competition, and moral standards that dictate that fees must have an ethical basis. It is therefore important to establish a transparent and accurate costing tool that can be utilized by biobanks and aid them in establishing biospecimen user fees. To address this issue, we built a biospecimen user fee calculator tool, accessible online at www.biobanking.org. The tool was built to allow input of: i) annual operating and capital costs; ii) costs categorized by the major core biobanking operations; iii) specimen products requested by a biobank user; iv) services provided by the biobank beyond core operations (e.g., histology, tissue micro-array); as well as v) several user-defined variables to allow the calculator to be adapted to different biobank operational designs. To establish default values for variables within the calculator, we first surveyed the members of the Canadian Tumour Repository Network (CTRNet) management committee. We then enrolled four different participants from CTRNet biobanks to test the hypothesis that the calculator tool could change approaches to user fees. Participants were first asked to estimate user fee pricing for three hypothetical user scenarios based on their biobanking experience (estimated pricing) and then to calculate fees for the same scenarios using the calculator tool (calculated pricing). Results demonstrated significant variation in estimated pricing that was reduced by calculated pricing, and that higher user fees are consistently derived when using the calculator. We conclude that adoption of this online calculator for user fee determination is an important first step towards harmonization and realistic user fees.
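
    A minimal sketch of the costing logic such a calculator implements (hypothetical structure, category names and numbers are ours; the actual tool and its defaults live at www.biobanking.org):

    ```python
    def user_fee(annual_operating_cost, annual_capital_cost,
                 core_cost_shares, annual_output_units,
                 requested_units, service_addons=()):
        """Hypothetical biospecimen user-fee calculation (illustrative only).

        core_cost_shares: fraction of total annual cost attributed to each core
        biobanking operation (shares must sum to 1); annual_output_units: number
        of specimen units distributed per year; service_addons: flat fees for
        services beyond core operations (e.g. histology, tissue micro-array).
        """
        total_cost = annual_operating_cost + annual_capital_cost
        assert abs(sum(core_cost_shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
        # Per-unit cost attributed to each core operation, then summed.
        per_unit = {op: total_cost * share / annual_output_units
                    for op, share in core_cost_shares.items()}
        return requested_units * sum(per_unit.values()) + sum(service_addons)

    # Example: $500k operating + $100k amortised capital, 2,000 units/year,
    # a request for 50 units plus one add-on histology service at $750.
    fee = user_fee(500_000, 100_000,
                   {"collection": 0.4, "processing": 0.25,
                    "storage": 0.2, "retrieval": 0.15},
                   2_000, 50, service_addons=(750,))
    ```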

  5. Models of vertical coordination consistent with the development of bio-energetics

    Directory of Open Access Journals (Sweden)

    Gianluca Nardone

    2009-04-01

    Full Text Available To foster the development of biomasses for solid fuel, it is fundamental to build up a strategy at a local level in which farms as well as industrial firms co-exist. To this aim, it is necessary to implement effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behaviour and guarantees the industrial investments constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to fix in an eventual contract so as to retain the loyalty of the farmers. The farmers have greater flexibility, since they can choose the most convenient crop; therefore, their loyalty can be obtained by tying the contractual payments to the price of the main alternative crop to the energetic one. The results of the study seem to indicate the opportunity to fix a purchase price of the raw materials linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises in five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energetic crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for the farmers to grow sorghum. When the

  6. Models of vertical coordination consistent with the development of bio-energetics

    Directory of Open Access Journals (Sweden)

    Rosaria Viscecchia

    2011-02-01

    Full Text Available To foster the development of biomasses for solid fuel, it is fundamental to build up a strategy at a local level in which farms as well as industrial firms co-exist. To this aim, it is necessary to implement effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behaviour and guarantees the industrial investments constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to fix in an eventual contract so as to retain the loyalty of the farmers. The farmers have greater flexibility, since they can choose the most convenient crop; therefore, their loyalty can be obtained by tying the contractual payments to the price of the main alternative crop to the energetic one. The results of the study seem to indicate the opportunity to fix a purchase price of the raw materials linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises in five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energetic crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for the farmers to grow sorghum. When the

  7. A simple and self-consistent geostrophic-force-balance model of the thermohaline circulation with boundary mixing

    Directory of Open Access Journals (Sweden)

    J. Callies

    2011-08-01

    Full Text Available A simple model of the thermohaline circulation (THC) is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.

    This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.

    Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
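
    In symbols, the power laws reported above for the overturning strength Ψ (our notation; κ_v is the vertical diffusivity and Δρ the imposed meridional surface density difference):

    ```latex
    % Scaling of the overturning strength stated in the abstract
    \Psi \;\propto\; \kappa_v^{2/3}\,(\Delta\rho)^{1/3}
    ```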

  8. A simple and self-consistent geostrophic-force-balance model of the thermohaline circulation with boundary mixing

    Directory of Open Access Journals (Sweden)

    J. Callies

    2012-01-01

    Full Text Available A simple model of the thermohaline circulation (THC) is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.

    This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.

    Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.

  9. Measuring the adoption of consistent use of condoms using the stages of change model. AIDS Community Demonstration Projects.

    Science.gov (United States)

    Schnell, D J; Galavotti, C; Fishbein, M; Chan, D K

    1996-01-01

    The stages of behavior change model has been used to understand a variety of health behaviors. Since consistent condom use has been promoted as a risk-reduction behavior for prevention of human immunodeficiency virus (HIV) infection, an algorithm for staging the adoption of consistent condom use during vaginal sex was empirically developed using three considerations: HIV prevention efficacy, analogy with work on staging other health-related behaviors, and condom use data from groups at high risk for HIV infection. This algorithm suggests that the adoption of consistent condom use among persons at high risk can be meaningfully measured with the model. However, variations in the algorithm details affect both the interpretation of stages and apportionment of persons across stages.
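
    As a purely hypothetical illustration of what a staging algorithm of this kind can look like (stage names follow the transtheoretical model; the fields and thresholds below are our invention, not the algorithm developed for the AIDS Community Demonstration Projects):

    ```python
    def stage_of_change(always_uses_condoms: bool,
                        months_consistent: int,
                        intends_within_30_days: bool,
                        intends_within_6_months: bool) -> str:
        """Classify a respondent into a stage of change for consistent condom use.

        Hypothetical thresholds: the 'action' vs 'maintenance' split at six
        months follows common stages-of-change conventions.
        """
        if always_uses_condoms:
            return "maintenance" if months_consistent >= 6 else "action"
        if intends_within_30_days:
            return "preparation"
        if intends_within_6_months:
            return "contemplation"
        return "precontemplation"
    ```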

  10. The relativistic consistent angular-momentum projected shell model study of the N=Z nucleus 52Fe

    Institute of Scientific and Technical Information of China (English)

    LI YanSong; LONG GuiLu

    2009-01-01

    The relativistic consistent angular-momentum projected shell model (RECAPS) is used in the study of the structure and electromagnetic transitions of the low-lying states in the N=Z nucleus 52Fe. The model calculations show a reasonably good agreement with the data. The backbending at 12+ is reproduced and the energy level structure suggests that neutron-proton interactions play important roles.

  11. Rate of strong consistency of the maximum quasi-likelihood estimator in quasi-likelihood nonlinear models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) is obtained in QLNM. In an important case, this rate is O(n^(-1/2)(log log n)^(1/2)), which is exactly the law-of-the-iterated-logarithm (LIL) rate of partial sums for i.i.d. variables, and thus cannot be improved.
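
    For context, this is exactly the rate given by the classical law of the iterated logarithm for the mean of i.i.d. variables (standard result, stated in our notation):

    ```latex
    % LIL for S_n = X_1 + \cdots + X_n, i.i.d., E X_i = 0, Var X_i = \sigma^2 < \infty:
    \limsup_{n\to\infty} \frac{|S_n|}{\sqrt{2 n \log\log n}} = \sigma \quad \text{a.s.}
    \quad\Longrightarrow\quad
    \frac{S_n}{n} = O\!\bigl(n^{-1/2} (\log\log n)^{1/2}\bigr) \ \text{a.s.}
    ```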

  12. The relativistic consistent angular-momentum projected shell model study of the N=Z nucleus 52Fe

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The relativistic consistent angular-momentum projected shell model (ReCAPS) is used in the study of the structure and electromagnetic transitions of the low-lying states in the N=Z nucleus 52Fe. The model calculations show a reasonably good agreement with the data. The backbending at 12+ is reproduced and the energy level structure suggests that neutron-proton interactions play important roles.

  13. Attitudes, Learning and Human-Computer Interaction: An Application of the Fishbein and Ajzen Model of Attitude-Behavior Consistency.

    Science.gov (United States)

    Yeaman, Andrew R. J.

    The Fishbein and Ajzen model of attitude-behavior consistency was applied to 56 undergraduates learning to use a microcomputer. Two levels of context for this act were compared: the students' beliefs about themselves, and their beliefs about people in general. The results indicated that students' beliefs were good predictors of their behavioral…

  14. A self-consistent transport model for molecular conduction based on extended Huckel theory with full three-dimensional electrostatics

    DEFF Research Database (Denmark)

    Zahid, F.; Paulsson, Magnus; Polizzi, E.;

    2005-01-01

    We present a transport model for molecular conduction involving an extended Huckel theoretical treatment of the molecular chemistry combined with a nonequilibrium Green's function treatment of quantum transport. The self-consistent potential is approximated by CNDO (complete neglect of differential...

  15. Self-consistent field modeling of non-ionic surfactants at the silica-water interface: Incorporating molecular detail

    NARCIS (Netherlands)

    Postmus, B.R.; Leermakers, F.A.M.; Cohen Stuart, M.A.

    2008-01-01

    We have constructed a model to predict the properties of non-ionic (alkyl-ethylene oxide) (C(n)E(m)) surfactants, both in aqueous solutions and near a silica surface, based upon the self-consistent field theory using the Scheutjens-Fleer discretisation scheme. The system has the pH and the ionic

  16. Model Fusion Tool - the Open Environmental Modelling Platform Concept

    Science.gov (United States)

    Kessler, H.; Giles, J. R.

    2010-12-01

    The vision of an Open Environmental Modelling Platform - seamlessly linking geoscience data, concepts and models to aid decision making in times of environmental change. Governments and their executive agencies across the world are facing increasing pressure to make decisions about the management of resources in light of population growth and environmental change. In the UK, for example, groundwater is becoming a scarce resource for large parts of its most densely populated areas. At the same time, river and groundwater flooding resulting from high rainfall events is increasing in scale and frequency, and sea level rise is threatening the defences of coastal cities. There is also a need for affordable housing, improved transport infrastructure and waste disposal, as well as sources of renewable energy and sustainable food production. These challenges can only be resolved if solutions are based on sound scientific evidence. Although we have knowledge and understanding of many individual processes in the natural sciences, it is clear that a single science discipline is unable to answer the questions and their inter-relationships. Modern science increasingly employs computer models to simulate the natural, economic and human systems. Management and planning require scenario modelling, forecasts and ‘predictions’. Although the outputs are often impressive in terms of apparent accuracy and visualisation, they are inherently not suited to simulating the response to feedbacks from other models of the earth system, such as the impact of human actions. Geological Survey Organisations (GSOs) are increasingly employing advances in Information Technology to visualise and improve their understanding of geological systems. Instead of 2-dimensional paper maps and reports, many GSOs now produce 3-dimensional geological framework models and groundwater flow models as their standard output. Additionally, the British Geological Survey has developed standard routines to link geological

  17. Using the IEA ETSAP modelling tools for Denmark

    Energy Technology Data Exchange (ETDEWEB)

    Grohnheit, Poul Erik

    2008-12-15

    An important part of the cooperation within the IEA (International Energy Agency) is organised through national contributions to 'Implementation Agreements' on energy technology and energy analyses. One of them is ETSAP (Energy Technology Systems Analysis Programme), started in 1976. Denmark has signed the agreement and contributed to some early annexes. This project is motivated by an invitation to participate in ETSAP Annex X, 'Global Energy Systems and Common Analyses: Climate friendly, Secure and Productive Energy Systems', for the period 2005 to 2007. The main activity is semi-annual workshops focusing on presentations of model analyses and the use of the ETSAP tools (the MARKAL/TIMES family of models). The project was also planned to benefit from the EU project 'NEEDS - New Energy Externalities Developments for Sustainability'. ETSAP contributes to a part of NEEDS that develops the TIMES model for 29 European countries with assessment of future technologies. An additional project, 'Monitoring and Evaluation of the RES directives: implementation in EU27 and policy recommendations for 2020' (RES2020) under Intelligent Energy Europe, was added, as well as the Danish 'Centre for Energy, Environment and Health' (CEEH), starting from January 2007. This report summarises the activities under ETSAP Annex X and related projects, emphasising the development of modelling tools that will be useful for modelling the Danish energy system. It is also a status report for the development of a model for Denmark, focusing on the tools and features that allow comparison with other countries and, particularly, evaluation of assumptions and results in international models covering Denmark. (au)

  18. Application of Process Modeling Tools to Ship Design

    Science.gov (United States)

    2011-05-01

    [Slide extract: process data must be viewable in multiple formats - DSM, Gantt charts, IDEF diagrams and spreadsheets - with scheduling, spreadsheet and information-modeling software linked through a DSM schema. Distribution Statement A: Approved for Public Release; Distribution is unlimited.]

  19. Analysis of Utility and Use of a Web-Based Tool for Digital Signal Processing Teaching by Means of a Technological Acceptance Model

    Science.gov (United States)

    Toral, S. L.; Barrero, F.; Martinez-Torres, M. R.

    2007-01-01

    This paper presents an exploratory study of the development of a structural and measurement model for the technological acceptance (TAM) of a web-based educational tool. The aim is to measure not only the use of this tool but also the external variables that significantly influence its use, in order to plan future improvements. The tool,…

  20. Schistosomiasis japonica: modelling as a tool to explore transmission patterns.

    Science.gov (United States)

    Xu, Jun-Fang; Lv, Shan; Wang, Qing-Yun; Qian, Men-Bao; Liu, Qin; Bergquist, Robert; Zhou, Xiao-Nong

    2015-01-01

    Modelling is an important tool for the exploration of Schistosoma japonicum transmission patterns. It provides a general theoretical framework for decision-makers and lends itself specifically to assessing the progress of the national control programme by following the outcome of surveys. The challenge of keeping up with the many changes in social, ecological and environmental factors involved in control activities is greatly facilitated by modelling, which can also indicate which activities are critical and which are less important. This review examines the application of modelling tools in the epidemiological study of schistosomiasis japonica during the last 20 years and explores the application of enhanced models for surveillance and response. Updated and timely information for decision-makers in the national elimination programme is provided but, in spite of the new modelling techniques introduced, many questions remain. Issues in the application of modelling are discussed with a view to improving the current situation with respect to schistosomiasis japonica. Copyright © 2014 Elsevier B.V. All rights reserved.