WorldWideScience

Sample records for modeling tool consists

  1. Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.

    2003-01-01

    The amount of data available on the Internet is continuously increasing; consequently, there is a growing need for tools that help to analyse the data. Testing of consistency among data received from different sources is made difficult by the number of different languages and schemas being used... In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling..., an important part of this technique is the attachment of OCL expressions to special boolean class attributes that we call consistency attributes. The resulting integration model can be used for automatic consistency testing of two instances of the legacy models by automatically instantiating the whole integration...

  2. Consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  3. A Spectral Unmixing Model for the Integration of Multi-Sensor Imagery: A Tool to Generate Consistent Time Series Data

    Directory of Open Access Journals (Sweden)

    Georgia Doxani

    2015-10-01

    Full Text Available The Sentinel missions have been designed to support the operational services of the Copernicus program, ensuring long-term availability of data for a wide range of spectral, spatial and temporal resolutions. In particular, Sentinel-2 (S-2) data, with improved high spatial resolution and higher revisit frequency (five days with the pair of satellites in operation), will play a fundamental role in recording land cover types and monitoring land cover changes at regular intervals. Nevertheless, cloud coverage usually hinders time series availability and consequently continuous land surface monitoring. In an attempt to alleviate this limitation, the synergistic use of instruments with different features is investigated, aiming at the future synergy of the S-2 MultiSpectral Instrument (MSI) and the Sentinel-3 (S-3) Ocean and Land Colour Instrument (OLCI). To that end, an unmixing model is proposed with the intention of integrating the benefits of the two Sentinel missions, when both are in orbit, in one composite image. The main goal is to fill the data gaps in the S-2 record, based on the more frequent information of the S-3 time series. The proposed fusion model has been applied on MODIS (MOD09GA L2G) and SPOT4 (Take 5) data, and the experimental results have demonstrated that the approach has high potential. However, the different acquisition characteristics of the sensors, i.e. illumination and viewing geometry, should be taken into consideration, and bidirectional effects correction has to be performed in order to reduce noise in the reflectance time series.

  4. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult... This paper addresses the need for new techniques that enable the modeling and consistency checking of legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  5. Consistent ranking of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2006-01-01

    We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can... variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable.
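
The proxy problem the abstract describes can be sketched in a few lines. The sketch below uses invented numbers and a mean-absolute-error loss (one of the non-robust criteria under which the ranking flips); it illustrates the general point rather than reproducing the paper's framework:

```python
import math
import random

random.seed(0)
n = 200_000

# Simulate returns r_t = sqrt(h_t) * z_t with a known conditional variance h_t.
h = [0.5 + 0.45 * math.sin(0.0002 * t) ** 2 for t in range(n)]
z = [random.gauss(0.0, 1.0) for _ in range(n)]
r2 = [ht * zt ** 2 for ht, zt in zip(h, z)]  # squared return: noisy proxy for h_t

def mae(forecast, target):
    return sum(abs(f - y) for f, y in zip(forecast, target)) / len(target)

f_true = h                          # "model A": the true conditional variance
f_shrunk = [0.455 * x for x in h]   # "model B": biased toward the proxy's median

# Ranking against the TRUE variance: model A wins, as it should.
print(mae(f_true, h), mae(f_shrunk, h))

# Ranking against the squared-return PROXY under MAE flips: MAE rewards the
# conditional median of the proxy, which is roughly 0.455 * h_t.
print(mae(f_true, r2), mae(f_shrunk, r2))
```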

  6. A parameter optimization tool for evaluating the physical consistency of the plot-scale water budget of the integrated eco-hydrological model GEOtop in complex terrain

    Science.gov (United States)

    Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg

    2017-04-01

    In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically-based eco-hydrological models are used in mountain areas, a large number of parameters and topographic and boundary conditions need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving the calibration of plot-scale soil moisture and evapotranspiration (ET); (II) to identify the most sensitive parameters and the relevant factors controlling temporal and spatial differences among sites; and (III) to identify possible model structural deficiencies or uncertainties in boundary conditions. Simulations have been performed with the GEOtop 2.0 model, which is a physically-based, fully distributed, integrated eco-hydrological model that has been specifically designed for mountain...

  7. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  8. Consistent spectroscopy for an extended gauge model

    International Nuclear Information System (INIS)

    Oliveira Neto, G. de.

    1990-11-01

    Consistent spectroscopy was obtained with a Lagrangian constructed from vector fields with an extended U(1) group symmetry. By consistent spectroscopy is understood the determination of the quantum physical properties described by the model in a manner independent of the possible parametrizations adopted in their description. (L.C.J.A.)

  9. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  10. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, when n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
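
A toy sketch of the idea behind the model: estimate empirical transition rows from a realization and merge states whose rows coincide, recovering the partition L. The tolerance-based merge below is a crude stand-in for the paper's consistent selection criterion, and the chain itself is invented:

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# A 3-state chain over alphabet {a, b, c}: states 'a' and 'b' share the same
# transition row, so the minimal partition is L = {{a, b}, {c}}.
P = {'a': {'a': 0.2, 'b': 0.3, 'c': 0.5},
     'b': {'a': 0.2, 'b': 0.3, 'c': 0.5},
     'c': {'a': 0.6, 'b': 0.3, 'c': 0.1}}

def step(s):
    u, acc = random.random(), 0.0
    for t, p in P[s].items():
        acc += p
        if u < acc:
            return t
    return t  # guard against float rounding

seq = ['a']
for _ in range(50_000):
    seq.append(step(seq[-1]))

# Empirical transition rows.
counts = defaultdict(Counter)
for s, t in zip(seq, seq[1:]):
    counts[s][t] += 1
rows = {s: {t: counts[s][t] / sum(counts[s].values()) for t in 'abc'} for s in 'abc'}

# Greedily merge states with near-identical rows -- a crude stand-in for the
# paper's consistent model-selection criterion.
def close(r1, r2, tol=0.05):
    return all(abs(r1[t] - r2[t]) < tol for t in 'abc')

parts = []
for s in 'abc':
    for part in parts:
        if close(rows[s], rows[part[0]]):
            part.append(s)
            break
    else:
        parts.append([s])

print(parts)  # [['a', 'b'], ['c']]
```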

  11. Financial model calibration using consistency hints.

    Science.gov (United States)

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.
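
A minimal sketch of the hint idea: augment an ordinary curve-fitting error with a Kullback-Leibler penalty that keeps a model parameter near a plausible law. The quotes, the prior sigma0 and the weight lam are all invented, and the grid search stands in for the paper's EM-type algorithm:

```python
import math

# Hypothetical "market" quotes that the calibrated model value should match.
observed = [1.9, 2.1, 2.0, 2.2, 1.8]

def fit_error(theta):
    # Plain curve fitting: squared error of the model value against the quotes.
    return sum((theta - y) ** 2 for y in observed)

def kl_hint(sigma, sigma0=1.0):
    # KL distance between zero-mean Gaussians N(0, sigma^2) || N(0, sigma0^2):
    # a consistency hint penalizing parameters that imply an implausible law.
    return math.log(sigma0 / sigma) + sigma ** 2 / (2 * sigma0 ** 2) - 0.5

def objective(theta, sigma, lam=5.0):
    return fit_error(theta) + lam * kl_hint(sigma)

# Crude grid search in place of the paper's EM-type optimization.
best = min(((t / 100, s / 100) for t in range(100, 300) for s in range(50, 200)),
           key=lambda p: objective(*p))
print(best)  # (2.0, 1.0): theta at the sample mean, sigma held at sigma0
```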

  12. Self-consistent asset pricing models

    Science.gov (United States)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the

  13. Consistent biokinetic models for the actinide elements

    International Nuclear Information System (INIS)

    Leggett, R.W.

    2001-01-01

    The biokinetic models for Th, Np, Pu, Am and Cm currently recommended by the International Commission on Radiological Protection (ICRP) were developed within a generic framework that depicts gradual burial of skeletal activity in bone volume, depicts recycling of activity released to blood and links excretion to retention and translocation of activity. For other actinide elements such as Ac, Pa, Bk, Cf and Es, the ICRP still uses simplistic retention models that assign all skeletal activity to bone surface and depicts one-directional flow of activity from blood to long-term depositories to excreta. This mixture of updated and older models in ICRP documents has led to inconsistencies in dose estimates and interpretation of bioassay for radionuclides with reasonably similar biokinetics. This paper proposes new biokinetic models for Ac, Pa, Bk, Cf and Es that are consistent with the updated models for Th, Np, Pu, Am and Cm. The proposed models are developed within the ICRP's generic model framework for bone-surface-seeking radionuclides, and an effort has been made to develop parameter values that are consistent with results of comparative biokinetic data on the different actinide elements. (author)

  14. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Full Text Available Abstract. Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function, as well as to estimate thermodynamically feasible values for the parameters of new...
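
One concrete thermodynamic constraint of the kind such a method enforces is the Wegscheider cycle condition: around any closed reaction cycle, the product of forward-to-reverse rate-constant ratios must equal one. The sketch below uses hypothetical rate constants and a naive log-space repair in place of TCMC's constrained optimization:

```python
import math

# A 3-reaction cycle A<->B<->C<->A. Detailed balance requires the Wegscheider
# condition (k1f/k1r) * (k2f/k2r) * (k3f/k3r) == 1: no net driving force
# around a closed cycle. All rate constants here are invented.
fitted = {'k1f': 2.0, 'k1r': 1.0,
          'k2f': 3.0, 'k2r': 1.5,
          'k3f': 0.5, 'k3r': 1.0}

def cycle_product(k):
    return (k['k1f'] / k['k1r']) * (k['k2f'] / k['k2r']) * (k['k3f'] / k['k3r'])

print(cycle_product(fitted))  # 2.0 -> thermodynamically infeasible

# Naive repair: spread the log-excess evenly over the three forward constants
# (a stand-in for TCMC's constrained optimization, which would also weigh
# the fit to data).
excess = math.log(cycle_product(fitted)) / 3
repaired = dict(fitted)
for name in ('k1f', 'k2f', 'k3f'):
    repaired[name] = fitted[name] * math.exp(-excess)

print(cycle_product(repaired))  # 1.0 up to rounding
```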

  15. Toward a consistent model for glass dissolution

    International Nuclear Information System (INIS)

    Strachan, D.M.; McGrail, B.P.; Bourcier, W.L.

    1994-01-01

    Understanding the process of glass dissolution in aqueous media has advanced significantly over the last 10 years through the efforts of many scientists around the world. Mathematical models describing the glass dissolution process have also advanced from simple empirical functions to structured models based on fundamental principles of physics, chemistry, and thermodynamics. Although borosilicate glass has been selected as the waste form for disposal of high-level wastes in at least 5 countries, there is no international consensus on the fundamental methodology for modeling glass dissolution that could be used in assessing the long-term performance of waste glasses in a geologic repository setting. Each repository program is developing its own model and supporting experimental data. In this paper, we critically evaluate a selected set of these structured models and show that a consistent methodology for modeling glass dissolution processes is available. We also propose a strategy for a future coordinated effort to obtain the model input parameters that are needed for long-term performance assessments of glass in a geologic repository. (author) 4 figs., tabs., 75 refs

  16. Self-consistent model of confinement

    International Nuclear Information System (INIS)

    Swift, A.R.

    1988-01-01

    A model of the large-spatial-distance, zero-three-momentum limit of QCD is developed from the hypothesis that there is an infrared singularity. Single quarks and gluons do not propagate because they have infinite energy after renormalization. The Hamiltonian formulation of the path integral is used to quantize QCD with physical, nonpropagating fields. Perturbation theory in the infrared limit is simplified by the absence of self-energy insertions and by the suppression of large classes of diagrams due to vanishing propagators. Remaining terms in the perturbation series are resummed to produce a set of nonlinear, renormalizable integral equations which fix both the confining interaction and the physical propagators. Solutions demonstrate the self-consistency of the concepts of an infrared singularity and nonpropagating fields. The Wilson loop is calculated to provide a general proof of confinement. Bethe-Salpeter equations for quark-antiquark pairs and for two gluons have finite-energy solutions in the color-singlet channel. The choice of gauge is addressed in detail. Large classes of corrections to the model are discussed and shown to support self-consistency.

  17. Simplified models for dark matter face their consistent completions

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    2017-03-01

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.

  18. Developing consistent pronunciation models for phonemic variants

    CSIR Research Space (South Africa)

    Davel, M

    2006-09-01

    Full Text Available Pronunciation lexicons often contain pronunciation variants. This can create two problems: It can be difficult to define these variants in an internally consistent way and it can also be difficult to extract generalised grapheme-to-phoneme rule sets...

  19. Consistent Alignment of Word Embedding Models

    Science.gov (United States)

    2017-03-02

    propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space leveraging carefully...representations of linguistic entities, most often referred to as embeddings. This includes techniques that rely on matrix factorization (Levy & Goldberg ...higher, the variation is much higher as well. As we increase the size of the neighborhood, or improve the quality of our sample by only picking the most

  20. Self-consistent modelling of ICRH

    International Nuclear Information System (INIS)

    Hellsten, T.; Hedin, J.; Johnson, T.; Laxaaback, M.; Tennfors, E.

    2001-01-01

    The performance of ICRH is often sensitive to the shape of the high energy part of the distribution functions of the resonating species. This requires self-consistent calculations of the distribution functions and the wave-field. In addition to the wave-particle interactions and Coulomb collisions the effects of the finite orbit width and the RF-induced spatial transport are found to be important. The inward drift dominates in general even for a symmetric toroidal wave spectrum in the centre of the plasma. An inward drift does not necessarily produce a more peaked heating profile. On the contrary, for low concentrations of hydrogen minority in deuterium plasmas it can even give rise to broader profiles. (author)

  1. String consistency for unified model building

    International Nuclear Information System (INIS)

    Chaudhuri, S.; Chung, S.W.; Hockney, G.; Lykken, J.

    1995-01-01

    We explore the use of real fermionization as a test case for understanding how specific features of phenomenological interest in the low-energy effective superpotential are realized in exact solutions to heterotic superstring theory. We present pedagogic examples of models which realize SO(10) as a level two current algebra on the world-sheet, and discuss in general how higher level current algebras can be realized in the tensor product of simple constituent conformal field theories. We describe formal developments necessary to compute couplings in models built using real fermionization. This allows us to isolate cases of spin structures where the standard prescription for real fermionization may break down. (orig.)

  2. REPFLO model evaluation, physical and numerical consistency

    International Nuclear Information System (INIS)

    Wilson, R.N.; Holland, D.H.

    1978-11-01

    This report contains a description of some suggested changes and an evaluation of the REPFLO computer code, which models ground-water flow and nuclear-waste migration in and about a nuclear-waste repository. The discussion contained in the main body of the report is supplemented by a flow chart, presented in the Appendix of this report. The suggested changes are of four kinds: (1) technical changes to make the code compatible with a wider variety of digital computer systems; (2) changes to fill gaps in the computer code, due to missing proprietary subroutines; (3) changes to (a) correct programming errors, (b) correct logical flaws, and (c) remove unnecessary complexity; and (4) changes in the computer code logical structure to make REPFLO a more viable model from the physical point of view

  3. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e+e- colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ∼10^8 produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions.

  4. A model for cytoplasmic rheology consistent with magnetic twisting cytometry.

    Science.gov (United States)

    Butler, J P; Kelly, S M

    1998-01-01

    Magnetic twisting cytometry is gaining wide applicability as a tool for the investigation of the rheological properties of cells and the mechanical properties of receptor-cytoskeletal interactions. Current technology involves the application and release of magnetically induced torques on small magnetic particles bound to or inside cells, with measurements of the resulting angular rotation of the particles. The properties of purely elastic or purely viscous materials can be determined by the angular strain and strain rate, respectively. However, the cytoskeleton and its linkage to cell surface receptors display elastic, viscous, and even plastic deformation, and the simultaneous characterization of these properties using only elastic or viscous models is internally inconsistent. Data interpretation is complicated by the fact that in current technology, the applied torques are not constant in time, but decrease as the particles rotate. This paper describes an internally consistent model consisting of a parallel viscoelastic element in series with a parallel viscoelastic element, and one approach to quantitative parameter evaluation. The unified model reproduces all essential features seen in data obtained from a wide variety of cell populations, and contains the pure elastic, viscoelastic, and viscous cases as subsets.
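
The series arrangement of two parallel (Kelvin-Voigt) viscoelastic elements admits a closed-form creep response under a constant applied torque: each element relaxes exponentially toward its elastic limit, and the angles add. The sketch below assumes a constant torque (the abstract notes the applied torque actually decays as the particle rotates) and uses invented parameter values:

```python
import math

# Two Kelvin-Voigt elements (spring k_i in parallel with dashpot c_i) coupled
# in series, loaded at t = 0 by a constant torque T. Each element's angle
# creeps exponentially toward T/k_i with time constant c_i/k_i; series
# coupling means the angles add. All parameter values are invented.
T = 1.0
k1, c1 = 4.0, 2.0   # stiff, fast element
k2, c2 = 1.0, 5.0   # soft, slow element

def angle(t):
    e1 = (T / k1) * (1.0 - math.exp(-k1 * t / c1))
    e2 = (T / k2) * (1.0 - math.exp(-k2 * t / c2))
    return e1 + e2

# Elastic-like response at short times, creep toward T/k1 + T/k2 = 1.25 later.
for t in (0.0, 0.5, 2.0, 10.0, 50.0):
    print(f"t={t:5.1f}  angle={angle(t):.3f}")
```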

  5. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories, including other ARPA contracting sites, doing research on LVCSR. Another goal of the consistency modeling project is to attack difficult modeling problems, when there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones and additive noise. We were able to either develop new, or transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  6. Standard Model Vacuum Stability and Weyl Consistency Conditions

    DEFF Research Database (Denmark)

    Antipin, Oleg; Gillioz, Marc; Krog, Jens

    2013-01-01

    At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them... order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions.

  7. Consistency and Reconciliation Model In Regional Development Planning

    Directory of Open Access Journals (Sweden)

    Dina Suryawati

    2016-10-01

    Full Text Available The aim of this study was to identify the problems and determine a conceptual model of regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models that were successfully constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that is well integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in the region involves both a technocratic (top-down) and a participatory (bottom-up) system. Both must be balanced, and must neither overlap nor dominate each other. Keywords: regional, development, planning, consistency, reconciliation.

  8. Modeling a Consistent Behavior of PLC-Sensors

    Directory of Open Access Journals (Sweden)

    E. V. Kuzmin

    2014-01-01

    Full Text Available The article extends the cycle of papers dedicated to programming and verification of PLC programs by LTL specification. This approach provides for correctness analysis of PLC programs by the model checking method. The model checking method requires the construction of a finite model of a PLC program. For successful verification of the required properties, it is important to take into consideration that not all combinations of input signals from the sensors can occur while the PLC works with a control object. This fact requires closer attention to the construction of the PLC-program model. In this paper we propose to describe the consistent behavior of sensors by three groups of LTL formulas. They will affect the program model, approximating it to the actual behavior of the PLC program. The idea of the LTL requirements is shown by an example. A PLC program is a description of reactions to input signals from sensors, switches and buttons. In constructing a PLC-program model, the approach to modeling the consistent behavior of PLC sensors allows one to focus on modeling precisely these reactions, without extending the program model by additional structures for realizing a realistic behavior of sensors. The consistent behavior of sensors is taken into account only at the stage of checking the conformity of the program model to the required properties, i.e. a property satisfaction proof for the constructed model occurs under the condition that the model contains only those executions of the program that comply with the consistent behavior of the sensors.

  9. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods are significantly better than best-first and conflict-directed A* search methods.
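The conflict/consistency machinery above can be miniaturized. The following sketch (the circuit and all names are hypothetical and far simpler than the paper's LTMS) enumerates fault-set candidates in order of cardinality and keeps those consistent with an observation:

```python
from itertools import combinations

# Toy consistency-based diagnosis: two inverters in series, each
# either 'ok' (inverts its input) or 'stuck' (passes it through).
def predict(modes, x):
    for mode in modes:
        x = 1 - x if mode == 'ok' else x
    return x

def minimal_diagnoses(obs_in, obs_out, n=2):
    # Enumerate fault sets by increasing cardinality and keep those
    # consistent with the observation; stop at the first (minimal) size.
    for size in range(n + 1):
        hits = [set(faulty) for faulty in combinations(range(n), size)
                if predict(['stuck' if i in faulty else 'ok'
                            for i in range(n)], obs_in) == obs_out]
        if hits:
            return hits
    return []

# Feeding 0 through two healthy inverters returns 0; observing 1
# therefore implicates exactly one stuck stage.
print(minimal_diagnoses(0, 1))  # → [{0}, {1}]
```

The paper's contribution is to perform this kind of candidate search efficiently over a CNF encoding with an LTMS rather than by brute-force enumeration.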

  10. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
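The ridge idea behind regularized PLSc can be illustrated in isolation. A minimal sketch, assuming the usual normal-equation form with hypothetical numbers (this is not the authors' estimator, only the generic ridge device it builds on):

```python
import numpy as np

# Ordinary path coefficients solve R b = r, where R is the correlation
# matrix among latent predictors and r their correlations with the
# outcome; the ridge version adds lam * I to stabilize a near-singular R.
def ridge_path_coefficients(R, r, lam):
    R = np.asarray(R, dtype=float)
    r = np.asarray(r, dtype=float)
    return np.linalg.solve(R + lam * np.eye(R.shape[0]), r)

# Two almost collinear predictors (correlation 0.99).
R = [[1.0, 0.99], [0.99, 1.0]]
r = [0.5, 0.5]

b_plain = ridge_path_coefficients(R, r, 0.0)   # unregularized
b_ridge = ridge_path_coefficients(R, r, 0.1)   # ridge-stabilized
print(b_plain, b_ridge)
```

The ridge penalty shrinks the coefficients and keeps the solve well conditioned, which is the mechanism by which regularized PLSc recovers power and accuracy under multicollinearity.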

  11. Consistent partnership formation: application to a sexually transmitted disease model.

    Science.gov (United States)

    Artzrouni, Marc; Deuchert, Eva

    2012-02-01

    We apply a consistent sexual partnership formation model which hinges on the assumption that one gender's choices drive the process (male- or female-dominant model). The other gender's behavior is imputed. The model is fitted to UK sexual behavior data and applied to a simple incidence model of HSV-2. With a male-dominant model (which assumes accurate male reports on numbers of partners) the modeled incidences of HSV-2 are 77% higher for men and 50% higher for women than with a female-dominant model (which assumes accurate female reports). Although highly stylized, our simple incidence model sheds light on the inconsistent results one can obtain with misreported data on sexual activity and age preferences. Copyright © 2011 Elsevier Inc. All rights reserved.
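The balancing constraint behind one-gender-dominant models can be sketched directly (the numbers below are hypothetical, not the UK data): the total number of heterosexual partnerships reported by one gender must equal the number attributed to the other, so the non-dominant gender's mean rate is imputed.

```python
# Consistency requires: contacts made by the dominant gender equal
# contacts received by the other gender, so the other rate is imputed.
def impute_partner_rate(n_dominant, rate_dominant, n_other):
    return n_dominant * rate_dominant / n_other

# Male-dominant model: trust male reports, impute the female rate.
female_rate = impute_partner_rate(1000, 1.2, 900)
print(round(female_rate, 4))  # → 1.3333
```

Running the same calculation with female reports taken as accurate generally yields a different imputed rate, which is the source of the divergent incidence estimates described in the abstract.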

  12. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

    Full Text Available The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods are significantly better than best-first and conflict-directed A* search methods.

  13. Comparison of two different modelling tools

    DEFF Research Database (Denmark)

    Brix, Wiebke; Elmegaard, Brian

    2009-01-01

    In this paper a test case is solved using two different modelling tools, Engineering Equation Solver (EES) and WinDali, in order to compare the tools. The system of equations solved, is a static model of an evaporator used for refrigeration. The evaporator consists of two parallel channels......, and it is investigated how a non-uniform airflow influences the refrigerant mass flow rate distribution and the total cooling capacity of the heat exchanger. It is shown that the cooling capacity decreases significantly with increasing maldistribution of the airflow. Comparing the two simulation tools it is found...

  14. Consistent Conformal Extensions of the Standard Model arXiv

    CERN Document Server

    Loebbert, Florian; Plefka, Jan

    The question of whether classically conformal modifications of the standard model are consistent with experimental observations has recently been subject to renewed interest. The method of Gildener and Weinberg provides a natural framework for the study of the effective potential of the resulting multi-scalar standard model extensions. This approach relies on the assumption of the ordinary loop hierarchy $\lambda_\text{s} \sim g^2_\text{g}$ of scalar and gauge couplings. On the other hand, Andreassen, Frost and Schwartz recently argued that in the (single-scalar) standard model, gauge invariant results require the consistent scaling $\lambda_\text{s} \sim g^4_\text{g}$. In the present paper we contrast these two hierarchy assumptions and illustrate the differences in the phenomenological predictions of minimal conformal extensions of the standard model.

  15. Final Report Fermionic Symmetries and Self consistent Shell Model

    International Nuclear Information System (INIS)

    Zamick, Larry

    2008-01-01

    In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.

  16. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits the expressive power of ERGMs. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  17. Consistent three-equation model for thin films

    Science.gov (United States)

    Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul

    2017-11-01

    Numerical simulations of thin films of Newtonian fluids down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging the physical equations of fluid mechanics over the fluid depth with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either the momentum balance equation or the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, speed of kinematic waves and of solitary waves, and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.

  18. Self-consistent mean-field models for nuclear structure

    International Nuclear Information System (INIS)

    Bender, Michael; Heenen, Paul-Henri; Reinhard, Paul-Gerhard

    2003-01-01

    The authors review the present status of self-consistent mean-field (SCMF) models for describing nuclear structure and low-energy dynamics. These models are presented as effective energy-density functionals. The three most widely used variants of SCMF, based on a Skyrme energy functional, a Gogny force, and a relativistic mean-field Lagrangian, are considered side by side. The crucial role of the treatment of pairing correlations is pointed out in each case. The authors discuss other related nuclear structure models and present several extensions beyond the mean-field model which are currently used. Phenomenological adjustment of the model parameters is discussed in detail. The performance quality of the SCMF model is demonstrated for a broad range of typical applications.

  19. Detection and quantification of flow consistency in business process models

    DEFF Research Database (Denmark)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel

    2017-01-01

Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics...

  20. A consistent transported PDF model for treating differential molecular diffusion

    Science.gov (United States)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  1. Consistency checks in beam emission modeling for neutral beam injectors

    International Nuclear Information System (INIS)

    Punyapu, Bharathi; Vattipalle, Prahlad; Sharma, Sanjeev Kumar; Baruah, Ujjwal Kumar; Crowley, Brendan

    2015-01-01

    In positive neutral beam systems, the beam parameters such as ion species fractions, power fractions and beam divergence are routinely measured using the Doppler-shifted beam emission spectrum. The accuracy with which these parameters are estimated depends on the accuracy of the atomic modeling involved in these estimations. In this work, an effective procedure to check the consistency of the beam emission modeling in neutral beam injectors is proposed. As a first consistency check, at a constant beam voltage and current, the intensity of the beam emission spectrum is measured by varying the pressure in the neutralizer. Then, the scaling of the measured intensities of the un-shifted (target) and Doppler-shifted (projectile) components of the beam emission spectrum with these pressure values is studied. If the un-shifted component scales with pressure, then the intensity of this component is used as a second consistency check on the beam emission modeling. As a further check, the modeled beam fractions and the emission cross sections of projectile and target are used to predict the intensity of the un-shifted component, which is then compared with the measured target intensity. The agreement between the predicted and measured target intensities provides the degree of discrepancy in the beam emission modeling. In order to test this methodology, a systematic analysis of Doppler shift spectroscopy data obtained on the JET neutral beam test stand was carried out

  2. Consistency of the tachyon warm inflationary universe models

    International Nuclear Information System (INIS)

    Zhang, Xiao-Min; Zhu, Jian-Yang

    2014-01-01

    This study concerns the consistency of the tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied, when the dissipative coefficient Γ = Γ_0 and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ_0) is usually not a suitable assumption for a warm inflationary model

  3. Modeling self-consistent multi-class dynamic traffic flow

    Science.gov (United States)

    Cho, Hsun-Jung; Lo, Shih-Ching

    2002-09-01

    In this study, we present a systematic self-consistent multiclass multilane traffic model derived from the vehicular Boltzmann equation and the traffic dispersion model. The multilane domain is considered as a two-dimensional space and the interaction among vehicles in the domain is described by a dispersion model. The reason we consider a multilane domain as a two-dimensional space is that the driving behavior of road users may not be restricted by lanes, especially for motorcyclists. The dispersion model, which is a nonlinear Poisson equation, is derived from car-following theory and the equilibrium assumption. Under the concept that all kinds of users share the finite section, the density is distributed on a road by the dispersion model. In addition, the dynamic evolution of the traffic flow is determined by the systematic gas-kinetic model derived from the Boltzmann equation. Multiplying the Boltzmann equation by the zeroth-, first- and second-order moment functions, integrating both sides of the equation and using chain rules, we can derive the continuity, motion and variance equations, respectively. However, the second-order moment function, which is the square of the individual velocity and was employed in previous research, does not have physical meaning in traffic flow. Although the second-order expansion results in the velocity variance equation, additional terms may be generated. The velocity variance equation we propose is derived by multiplying the Boltzmann equation by the individual velocity variance. It modifies the previous model and presents a new gas-kinetic traffic flow model. By coupling the gas-kinetic model and the dispersion model, a self-consistent system is presented.
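The moment operations described above follow the standard gas-kinetic pattern; in conventional notation (the symbols here are assumed, not taken from the paper), the macroscopic fields are velocity moments of the phase-space density f:

```latex
% Standard gas-kinetic moment definitions (notation assumed):
\rho(x,t) = \int f(x,v,t)\,dv, \qquad
\rho\, u = \int v\, f(x,v,t)\,dv, \qquad
\rho\,\Theta = \int (v-u)^2\, f(x,v,t)\,dv
```

Multiplying the Boltzmann equation by 1, v, and (v-u)^2 and integrating over v then yields the continuity, motion, and variance equations, matching the construction described in the abstract; using (v-u)^2 rather than v^2 is what removes the extra terms of the earlier second-order expansion.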

  4. Detection and quantification of flow consistency in business process models.

    Science.gov (United States)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations, describing the performance and the computational capabilities of our metrics, are reported as well.
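One very simple way to turn flow-direction consistency into a number (a hypothetical toy metric, not one of the paper's three) is the share of edges that follow the layout's dominant direction:

```python
from collections import Counter

# edges: list of ((x1, y1), (x2, y2)) coordinate pairs in the layout.
def flow_consistency(edges):
    def direction(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        if abs(dx) >= abs(dy):
            return 'right' if dx >= 0 else 'left'
        return 'down' if dy >= 0 else 'up'
    dirs = [direction(p, q) for p, q in edges]
    dominant_count = Counter(dirs).most_common(1)[0][1]
    return dominant_count / len(dirs)

# Three edges flow left-to-right, one flows back.
edges = [((0, 0), (2, 0)), ((2, 0), (4, 0)),
         ((4, 0), (6, 1)), ((6, 1), (3, 0))]
print(flow_consistency(edges))  # → 0.75
```

Even this toy version exposes the design questions the paper discusses, such as how to bin edge angles into directions and whether back-edges of loops should count against consistency.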

  5. MultiSIMNRA: A computational tool for self-consistent ion beam analysis using SIMNRA

    International Nuclear Information System (INIS)

    Silva, T.F.; Rodrigues, C.L.; Mayer, M.; Moro, M.V.; Trindade, G.F.; Aguirre, F.R.; Added, N.; Rizzutto, M.A.; Tabacniks, M.H.

    2016-01-01

    Highlights: • MultiSIMNRA enables the self-consistent analysis of multiple ion beam techniques. • Self-consistent analysis enables unequivocal and reliable modeling of the sample. • Four different computational algorithms available for model optimizations. • Definition of constraints enables to include prior knowledge into the analysis. - Abstract: SIMNRA is widely adopted by the scientific community of ion beam analysis for the simulation and interpretation of nuclear scattering techniques for material characterization. Taking advantage of its recognized reliability and quality of the simulations, we developed a computer program that uses multiple parallel sessions of SIMNRA to perform self-consistent analysis of data obtained by different ion beam techniques or in different experimental conditions of a given sample. In this paper, we present a result using MultiSIMNRA for a self-consistent multi-elemental analysis of a thin film produced by magnetron sputtering. The results demonstrate the potentialities of the self-consistent analysis and its feasibility using MultiSIMNRA.
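The self-consistent idea can be sketched as a single objective summed over techniques. Everything below (the toy detector responses, the single free "thickness" parameter, the grid search) is hypothetical and is not MultiSIMNRA's actual interface:

```python
# One sample model must fit all measurements at once, so a single
# objective sums the misfit of each simulated spectrum.
def chi2(simulated, measured):
    return sum((s - m) ** 2 for s, m in zip(simulated, measured))

def combined_objective(thickness, experiments):
    # experiments: list of (simulate, measured) pairs, one per technique
    return sum(chi2(simulate(thickness), measured)
               for simulate, measured in experiments)

# Two toy "techniques" whose responses share one thickness parameter.
rbs = (lambda t: [2.0 * t, 1.0 * t], [200.0, 100.0])
pixe = (lambda t: [0.5 * t], [50.0])

best = min(range(1, 200),
           key=lambda t: combined_objective(t, [rbs, pixe]))
print(best)  # → 100
```

Because both toy spectra constrain the same parameter, the combined fit is better determined than either fit alone, which is the benefit the self-consistent analysis provides; MultiSIMNRA replaces the grid search here with its four optimization algorithms and supports user-defined constraints.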

  6. Consistency Across Standards or Standards in a New Business Model

    Science.gov (United States)

    Russo, Dane M.

    2010-01-01

    Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, the balance needed between prescriptive and general standards, enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides include: NASA Procedural Requirements 8705.2B, which identifies human rating standards and requirements; draft health and medical standards for human rating; what's been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of government and non-governmental human factors standards.

  7. Self-consistent approach for neutral community models with speciation

    Science.gov (United States)

    Haegeman, Bart; Etienne, Rampal S.

    2010-03-01

    Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event in a birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed such as the random-fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.

  8. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  9. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    We propose a modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...

  10. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    2013-01-01

    to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...

  11. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically observed properties of variance swap dynamics and allows for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for options on variance swaps as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options...

  12. Development of a Consistent and Reproducible Porcine Scald Burn Model

    Science.gov (United States)

    Kempf, Margit; Kimble, Roy; Cuttle, Leila

    2016-01-01

    There are very few porcine burn models that replicate scald injuries similar to those encountered by children. We have developed a robust porcine burn model capable of creating reproducible scald burns for a wide range of burn conditions. The study was conducted with juvenile Large White pigs, creating replicates of burn combinations: 50°C for 1, 2, 5 and 10 minutes and 60°C, 70°C, 80°C and 90°C for 5 seconds. Visual wound examination, biopsies and Laser Doppler Imaging were performed at 1 and 24 hours and at 3 and 7 days post-burn. A consistent water temperature was maintained within the scald device for long durations (49.8 ± 0.1°C when set at 50°C). The macroscopic and histologic appearance was consistent between replicates of burn conditions. For 50°C water, 10-minute burns showed significantly deeper tissue injury than all shorter durations at 24 hours post-burn (p ≤ 0.0001), with damage seen to increase until day 3 post-burn. For 5-second burns, by day 7 post-burn the 80°C and 90°C scalds had damage detected significantly deeper in the tissue than the 70°C scalds (p ≤ 0.001). A reliable and safe model of porcine scald burn injury has been successfully developed. The novel apparatus with continually refreshed water improves the consistency of scald creation for long exposure times. This model allows the pathophysiology of scald burn wound creation and progression to be examined. PMID:27612153

  13. Thermodynamically consistent model of brittle oil shales under overpressure

    Science.gov (United States)

    Izvekov, Oleg

    2016-04-01

    The concept of dual porosity is a common way to simulate oil shale production. In the frame of this concept the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during pressure reduction in production. In this work a new thermodynamically consistent model which generalizes the model of dual porosity is proposed. The particularities of the model are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to simulate the development of secondary fractures in the low-permeability matrix. Slip along natural fractures is simulated in the frame of plasticity theory with the Drucker-Prager criterion.

  14. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    Monte Carlo sampling schemes of available evaluation methods. The second improvement concerns Bayesian evaluation methods based on a certain simplification of the nuclear model. These methods were restricted to the consistent evaluation of tens of thousands of observables. In this thesis, a new evaluation scheme has been developed, which is mathematically equivalent to existing methods, but allows the consistent evaluation of dozens of millions of observables. The new scheme is suited for the implementation as a database application. The realization of such an application with public access can help to accelerate the production of reliable nuclear data sets. Furthermore, in combination with the novel treatment of model deficiencies, problems of the model and the experimental data can be tracked down without user interaction. This feature can foster the development of nuclear models with high predictive power. (author) [de

  15. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    Science.gov (United States)

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    Human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data which collectively provide consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
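The Tikhonov fusion step admits a one-line closed form in the simplest, identity-operator case. A sketch with hypothetical displacement values (the paper's actual regularization operator and weighting are not given in the abstract):

```python
import numpy as np

# Fuse a registration-derived displacement u_dir with a physics-based
# displacement u_cfd by minimizing
#   ||u - u_dir||^2 + lam * ||u - u_cfd||^2,
# whose closed-form minimizer blends the two fields.
def tikhonov_fuse(u_dir, u_cfd, lam):
    u_dir = np.asarray(u_dir, dtype=float)
    u_cfd = np.asarray(u_cfd, dtype=float)
    return (u_dir + lam * u_cfd) / (1.0 + lam)

u_dir = [1.0, 2.0, 3.0]   # displacement from the 4D lung DIR
u_cfd = [1.2, 1.8, 3.4]   # displacement from the poro-elastic CFD model
print(tikhonov_fuse(u_dir, u_cfd, 1.0))  # halfway between the two for lam = 1
```

The regularization weight lam controls how strongly the optimal displacement is pulled toward the physics-based solution; lam = 0 recovers the DIR result unchanged.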

  16. A self-consistent model of an isothermal tokamak

    Science.gov (United States)

    McNamara, Steven; Lilley, Matthew

    2014-10-01

    Continued progress in liquid lithium coating technologies has made the development of a beam-driven tokamak with minimal edge recycling a feasible possibility. Such devices are characterised by improved confinement due to their inherent stability and the suppression of thermal conduction. Particle and energy confinement become intrinsically linked, and the plasma thermal energy content is governed by the injected beam. A self-consistent model of a purely beam-fuelled isothermal tokamak is presented, including calculations of the density profile, bulk species temperature ratios and the fusion output. Stability considerations constrain the operating parameters; regions of stable operation are identified and their suitability for potential reactor applications discussed.

  17. Mean-field theory and self-consistent dynamo modeling

    International Nuclear Information System (INIS)

    Yoshizawa, Akira; Yokoi, Nobumitsu

    2001-12-01

    Mean-field dynamo theory is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and on the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, the generation of planetary magnetic fields, such as the geomagnetic field and sunspots, and the occurrence of magnetically driven flow in planetary and fusion phenomena are discussed. (author)

  18. A self-consistent spin-diffusion model for micromagnetics

    KAUST Repository

    Abert, Claas; Ruggeri, Michele; Bruckner, Florian; Vogler, Christoph; Manchon, Aurelien; Praetorius, Dirk; Suess, Dieter

    2016-01-01

    We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure in dependence of the tilting angle of the magnetization in the different layers is investigated. Both examples show good agreement with reference simulations and experiments respectively.

  19. Self-consistent modeling of amorphous silicon devices

    International Nuclear Information System (INIS)

    Hack, M.

    1987-01-01

    The authors developed a computer model to describe the steady-state behaviour of a range of amorphous silicon devices. It is based on the complete set of transport equations and takes into account the important role played by the continuous distribution of localized states in the mobility gap of amorphous silicon. Using one set of parameters, they have been able to self-consistently simulate the current-voltage characteristics of p-i-n (or n-i-p) solar cells under illumination, the dark behaviour of field-effect transistors, p-i-n diodes and n-i-n diodes in both the ohmic and space-charge-limited regimes. This model also describes the steady-state photoconductivity of amorphous silicon, in particular its dependence on temperature, doping and illumination intensity.

  20. A self-consistent spin-diffusion model for micromagnetics

    KAUST Repository

    Abert, Claas

    2016-12-17

    We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure in dependence of the tilting angle of the magnetization in the different layers is investigated. Both examples show good agreement with reference simulations and experiments respectively.

  1. Self-consistent modeling of electron cyclotron resonance ion sources

    International Nuclear Information System (INIS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lecot, C.

    2004-01-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model accurately the different parts of these sources: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, in either two or three dimensions (the latter to take into account the shape of the plasma at the extraction, which is influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  2. Self-consistent modeling of electron cyclotron resonance ion sources

    Science.gov (United States)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model accurately the different parts of these sources: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, in either two or three dimensions (the latter to take into account the shape of the plasma at the extraction, which is influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  3. A self-consistent upward leader propagation model

    International Nuclear Information System (INIS)

    Becerra, Marley; Cooray, Vernon

    2006-01-01

    Knowledge of the initiation and propagation of an upward-moving connecting leader in the presence of a downward-moving lightning stepped leader is essential for determining the lateral attraction distance of a lightning flash to any grounded structure. Even though different models that simulate this phenomenon are available in the literature, they do not take into account the latest developments in the physics of leader discharges. The leader model proposed here simulates the advancement of positive upward leaders by appealing to the presently understood physics of that process. The model properly simulates the continuous upward progression of the positive connecting leader from its inception to the final connection with the downward stepped leader (final jump). Thus, the main physical properties of upward leaders, namely the charge per unit length, the injected current, the channel gradient and the leader velocity, are obtained self-consistently. The results are compared with an altitude-triggered lightning experiment, and there is good agreement between the model predictions and the measured leader current and the experimentally inferred spatial and temporal location of the final jump. It is also found that the usual assumption of constant charge per unit length, based on laboratory experiments, is not valid for lightning upward connecting leaders.

  4. Green Infrastructure Models and Tools

    Science.gov (United States)

    The objective of this project is to modify and refine existing models and develop new tools to support decision making for the complete green infrastructure (GI) project lifecycle, including the planning and implementation of stormwater control in urban and agricultural settings,...

  5. Classical and Quantum Consistency of the DGP Model

    CERN Document Server

    Nicolis, A; Nicolis, Alberto; Rattazzi, Riccardo

    2004-01-01

    We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode π consistently describes physics in a wide range of regimes, both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance the spherical source in asymptotically flat space. However, we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space π becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed the tower of higher-dimensional operators which is expected by a generic UV completion of the model limits predictivity at even larger length scales. We outline ...

  6. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V

    2014-09-27

    Background Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  7. Tool wear modeling using abductive networks

    Science.gov (United States)

    Masory, Oren

    1992-09-01

    A tool wear model based on an abductive network, which consists of interconnected 'polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.
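    The building block of such a network is a low-order polynomial node fitted by least squares. A single node relating two inputs to flank wear can be sketched as below; the inputs, units and synthetic data are hypothetical, and the paper's network training and node-selection logic is not reproduced.

```python
import numpy as np

def fit_poly_node(u, v, y):
    """Fit one quadratic 'polynomial node':
    y ~ a0 + a1*u + a2*v + a3*u*v + a4*u^2 + a5*v^2 (least squares)."""
    X = np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def eval_poly_node(coef, u, v):
    X = np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])
    return X @ coef

# Synthetic wear data: wear grows with speed and with speed*time (illustrative only).
rng = np.random.default_rng(1)
speed = rng.uniform(50, 200, 100)   # cutting speed, m/min (hypothetical)
time = rng.uniform(1, 30, 100)      # machining time, min (hypothetical)
wear = 0.01 * speed + 0.002 * speed * time + rng.normal(0, 0.05, 100)

coef = fit_poly_node(speed, time, wear)
pred = eval_poly_node(coef, speed, time)
```

    In a full abductive (GMDH-style) network, many such nodes would be fitted, the best-performing ones kept, and their outputs fed to the next layer.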

  8. Self-Consistent Dynamical Model of the Broad Line Region

    Energy Technology Data Exchange (ETDEWEB)

    Czerny, Bozena [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Li, Yan-Rong [Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing (China); Sredzinska, Justyna; Hryniewicz, Krzysztof [Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Panda, Swayam [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Wildy, Conor [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Karas, Vladimir, E-mail: bcz@cft.edu.pl [Astronomical Institute, Czech Academy of Sciences, Prague (Czech Republic)

    2017-06-22

    We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.

  9. Self-Consistent Dynamical Model of the Broad Line Region

    Directory of Open Access Journals (Sweden)

    Bozena Czerny

    2017-06-01

    We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.

  10. Consistent constraints on the Standard Model Effective Field Theory

    International Nuclear Information System (INIS)

    Berthier, Laure; Trott, Michael

    2016-01-01

    We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEPI and LEP II, as well as low energy precision data. We fit one hundred and three observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cut off scale is assumed to be large, Λ≳ 3 TeV. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S,T analysis is modified by the theory errors we include as an illustrative example.
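    The effect of a theory-error metric can be shown schematically: adding a theoretical covariance to the experimental one inflates the uncertainty on a fitted coefficient, i.e. bounds relax. The one-parameter linear fit below is a toy construction with invented numbers, not the 103-observable fit of the paper.

```python
import numpy as np

def coeff_variance(a, cov):
    """Gauss-Markov variance of a single coefficient c in the model o_i = a_i * c:
    var(c_hat) = 1 / (a^T cov^-1 a)."""
    w = np.linalg.solve(cov, a)
    return 1.0 / (a @ w)

a = np.array([1.0, 0.5, 2.0])            # linear response of each observable (toy)
cov_exp = np.diag([0.01, 0.02, 0.01])    # experimental covariance (toy)
cov_th = 0.02 * np.outer(a, a)           # correlated theory-error covariance (toy)

var_no_th = coeff_variance(a, cov_exp)
var_with_th = coeff_variance(a, cov_exp + cov_th)
```

    Here `var_with_th` exceeds `var_no_th`, mirroring the qualitative conclusion that consistently included theory errors relax the bounds on leading parameters.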

  11. Thermodynamically consistent mesoscopic model of the ferro/paramagnetic transition

    Czech Academy of Sciences Publication Activity Database

    Benešová, Barbora; Kružík, Martin; Roubíček, Tomáš

    2013-01-01

    Roč. 64, Č. 1 (2013), s. 1-28 ISSN 0044-2275 R&D Projects: GA AV ČR IAA100750802; GA ČR GA106/09/1573; GA ČR GAP201/10/0357 Grant - others:GA ČR(CZ) GA106/08/1397; GA MŠk(CZ) LC06052 Program:GA; LC Institutional support: RVO:67985556 Keywords : ferro-para-magnetism * evolution * thermodynamics Subject RIV: BA - General Mathematics; BA - General Mathematics (UT-L) Impact factor: 1.214, year: 2013 http://library.utia.cas.cz/separaty/2012/MTR/kruzik-thermodynamically consistent mesoscopic model of the ferro-paramagnetic transition.pdf

  12. Creation of Consistent Burn Wounds: A Rat Model

    Directory of Open Access Journals (Sweden)

    Elijah Zhengyang Cai

    2014-07-01

    Background: Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods: Ten male Sprague-Dawley rats were anesthetized and the dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested on its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results: Average wound size was 0.9957 cm2 (standard deviation [SD] 0.1845; n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined and uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for durations of 5, 10, and 20 seconds respectively. Burn duration of 5 seconds resulted in partial-thickness damage. Burn durations of 10 seconds and 20 seconds resulted in full-thickness damage, involving subjacent skeletal muscle. Conclusions: This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model.

  13. Consistent model reduction of polymer chains in solution in dissipative particle dynamics: Model description

    KAUST Repository

    Moreno Chaparro, Nicolas; Nunes, Suzana Pereira; Calo, Victor M.

    2015-01-01

    considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling

  14. Integrating a Decision Management Tool with UML Modeling Tools

    DEFF Research Database (Denmark)

    Könemann, Patrick

    by proposing potential subsequent design issues. In model-based software development, many decisions directly affect the structural and behavioral models used to describe and develop a software system and its architecture. However, these decisions are typically not connected to the models created during...... integration of formerly disconnected tools improves tool usability as well as decision maker productivity....

  15. Self-consistent Modeling of Elastic Anisotropy in Shale

    Science.gov (United States)

    Kanitpanyacharoen, W.; Wenk, H.; Matthies, S.; Vasin, R.

    2012-12-01

    Elastic anisotropy in clay-rich sedimentary rocks has increasingly received attention because of its significance for the prospecting of petroleum deposits, as well as for seals in the context of nuclear waste and CO2 sequestration. The orientation of component minerals and pores/fractures is a critical factor that influences elastic anisotropy. In this study, we investigate lattice and shape preferred orientation (LPO and SPO) of three shales from the North Sea in the UK, the Qusaiba Formation in Saudi Arabia, and the Officer Basin in Australia (referred to as N1, Qu3, and L1905, respectively) to calculate elastic properties and compare them with experimental results. Synchrotron hard X-ray diffraction and microtomography experiments were performed to quantify LPO, weight proportions, and three-dimensional SPO of constituent minerals and pores. Our preliminary results show that the degree of LPO and the total amount of clays are highest in Qu3 (3.3-6.5 m.r.d. and 74 vol%), moderately high in N1 (2.4-5.6 m.r.d. and 70 vol%), and lowest in L1905 (2.3-2.5 m.r.d. and 42 vol%). In addition, porosity in Qu3 is as low as 2%, while it is up to 6% in L1905 and 8% in N1. Based on this information and the single-crystal elastic properties of the mineral components, we apply a self-consistent averaging method to calculate macroscopic elastic properties and corresponding seismic velocities for the different shales. The elastic model is then compared with acoustic velocities measured on the same samples. The P-wave velocities measured from Qu3 (4.1-5.3 km/s, 26.3% anisotropy) are faster than those obtained from L1905 (3.9-4.7 km/s, 18.6%) and N1 (3.6-4.3 km/s, 17.7%). By making adjustments for pore structure (aspect ratio) and the single-crystal elastic properties of clay minerals, good agreement between our calculation and the ultrasonic measurements is obtained.
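    The averaging idea behind such homogenization can be illustrated with the much simpler Voigt-Reuss-Hill bounds for an isotropic two-phase aggregate; the full self-consistent scheme of the abstract additionally accounts for orientation distributions and pore shapes. The volume fractions and moduli below are illustrative numbers only.

```python
import numpy as np

def voigt_reuss_hill(f, m):
    """Voigt (arithmetic), Reuss (harmonic) and Hill averages for a modulus.

    f: volume fractions (summing to 1); m: corresponding phase moduli
    (e.g. bulk moduli in GPa). Voigt and Reuss bound the effective modulus;
    Hill is their mean.
    """
    f, m = np.asarray(f, float), np.asarray(m, float)
    voigt = np.sum(f * m)
    reuss = 1.0 / np.sum(f / m)
    return voigt, reuss, 0.5 * (voigt + reuss)

# Illustrative clay/quartz mix: 70 vol% at 25 GPa, 30 vol% at 37 GPa (assumed values).
KV, KR, KH = voigt_reuss_hill([0.7, 0.3], [25.0, 37.0])
```

    Any acceptable effective modulus must lie between the Reuss and Voigt values; the self-consistent estimate of the paper falls in the same interval.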

  16. Self-consistent modelling of resonant tunnelling structures

    DEFF Research Database (Denmark)

    Fiig, T.; Jauho, A.P.

    1992-01-01

    We report a comprehensive study of the effects of self-consistency on the I-V-characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated...... applied voltages and carrier densities at the emitter-barrier interface. We include the two-dimensional accumulation layer charge and the quantum well charge in our self-consistent scheme. We discuss the evaluation of the current contribution originating from the two-dimensional accumulation layer charges......, and our qualitative estimates seem consistent with recent experimental studies. The intrinsic bistability of resonant tunnelling diodes is analyzed within several different approximation schemes....
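    The simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation is usually organized as a fixed-point iteration with potential mixing. The 1-D sketch below (natural units, hard-wall box, a single occupied level, and a hypothetical coupling constant) shows only the loop structure; it is not the authors' device model.

```python
import numpy as np

n_pts, length = 200, 1.0           # interior grid points, box length (natural units)
h = length / (n_pts + 1)
coupling = 0.01                    # hypothetical charge-coupling strength
# Dirichlet finite-difference Laplacian.
lap = (np.diag(-2.0 * np.ones(n_pts)) + np.diag(np.ones(n_pts - 1), 1)
       + np.diag(np.ones(n_pts - 1), -1)) / h**2

v_hartree = np.zeros(n_pts)
for _ in range(100):
    # Schrödinger step: ground state of H = -1/2 d^2/dx^2 + V.
    ham = -0.5 * lap + np.diag(v_hartree)
    energies, states = np.linalg.eigh(ham)
    psi = states[:, 0] / np.sqrt(h)          # normalize so that sum |psi|^2 h = 1
    rho = coupling * psi**2
    # Poisson step: d^2 phi / dx^2 = -rho, with phi = 0 at the walls.
    phi = np.linalg.solve(lap, -rho)
    # Linear mixing of old and new potentials for stability.
    v_new = 0.9 * v_hartree + 0.1 * phi
    residual = np.max(np.abs(v_new - v_hartree))
    v_hartree = v_new
    if residual < 1e-10:
        break

e0 = energies[0]   # ground-state energy; close to pi^2/2 for weak coupling
```

    A production code would add the accumulation-layer and quantum-well charges to `rho` and evaluate the current from the converged potential profile; the mixing factor controls the trade-off between stability and convergence speed.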

  17. Self-consistent approach for neutral community models with speciation

    NARCIS (Netherlands)

    Haegeman, Bart; Etienne, Rampal S.

    Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is

  18. Exotic nuclei in self-consistent mean-field models

    International Nuclear Information System (INIS)

    Bender, M.; Rutz, K.; Buervenich, T.; Reinhard, P.-G.; Maruhn, J. A.; Greiner, W.

    1999-01-01

    We discuss two widely used nuclear mean-field models, the relativistic mean-field model and the (nonrelativistic) Skyrme-Hartree-Fock model, and their capability to describe exotic nuclei with emphasis on neutron-rich tin isotopes and superheavy nuclei. (c) 1999 American Institute of Physics

  19. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    Science.gov (United States)

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the ongoing effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of geometric design consistency, a value which serves as a surrogate measure of the safety level of a two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-based data collection method that records continuous operating speed profiles from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also indexes that consider both local speed decelerations and speeds over the posted speed. For the development of the consistency model, the crash frequency for each study site was considered, which allows the number of crashes on a road segment to be estimated from its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
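    Indices of the kind described (global dispersion, local decelerations, speeds over the posted limit) can be computed directly from a sampled operating-speed profile. The function below is a hypothetical illustration; the speed values, stationing and thresholds are not those calibrated in the paper.

```python
import numpy as np

def consistency_indices(v85, posted_limit):
    """Surrogate consistency indices from an operating-speed profile.

    v85: 85th-percentile operating speeds (km/h), one value per station.
    Returns the global dispersion, the largest station-to-station speed drop,
    and the fraction of stations exceeding the posted speed.
    """
    v85 = np.asarray(v85, float)
    dispersion = float(np.std(v85))
    drops = np.maximum(-np.diff(v85), 0.0)          # only decelerations
    max_deceleration = float(drops.max()) if drops.size else 0.0
    frac_over_limit = float(np.mean(v85 > posted_limit))
    return dispersion, max_deceleration, frac_over_limit

profile = [95, 97, 92, 78, 85, 96, 99]   # illustrative v85 profile (km/h)
disp, max_drop, over = consistency_indices(profile, posted_limit=90)
```

    A calibrated model would then map such indices to an expected crash frequency for the segment.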

  20. Is the island universe model consistent with observations?

    OpenAIRE

    Piao, Yun-Song

    2005-01-01

    We study the island universe model, in which the universe is initially in a cosmological constant sea; local quantum fluctuations violating the null energy condition then create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.

  1. Thermodynamically consistent description of criticality in models of correlated electrons

    Czech Academy of Sciences Publication Activity Database

    Janiš, Václav; Kauch, Anna; Pokorný, Vladislav

    2017-01-01

    Roč. 95, č. 4 (2017), s. 1-14, č. článku 045108. ISSN 2469-9950 R&D Projects: GA ČR GA15-14259S Institutional support: RVO:68378271 Keywords : conserving approximations * Anderson model * Hubbard model * parquet equations Subject RIV: BM - Solid Matter Physics ; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 3.836, year: 2016

  2. Consistent Evolution of Software Artifacts and Non-Functional Models

    Science.gov (United States)

    2014-11-14

    (Report documentation page residue; only fragments of the abstract survive.) Subject terms: EOARD, Nano particles, Photo-Acoustic Sensors, Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. Principal investigator: Vittorio Cortellessa, Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy. Email: vittorio.cortellessa@univaq.it.

  3. Phase models of galaxies consisting of disk and halo

    International Nuclear Information System (INIS)

    Osipkov, L.P.; Kutuzov, S.A.

    1987-01-01

    A method of finding the phase density of a two-component model of a mass distribution is developed. The equipotential surfaces and the potential law are given. The equipotentials are lens-like surfaces with a sharp edge in the equatorial plane, which ensures the existence of a thin disk embedded in the halo. The equidensity surfaces of the halo coincide with the equipotentials. Phase models for the halo and the disk are constructed separately on the basis of the spatial and surface mass densities by solving the corresponding integral equations. In particular, models for a halo of finite dimensions can be constructed. Only the part of the phase density even with respect to the velocities is found. For the halo it depends on the energy integral as its single argument.

  4. Phase models of galaxies consisting of a disk and halo

    International Nuclear Information System (INIS)

    Osipkov, L.P.; Kutuzov, S.A.

    1988-01-01

    A method is developed for finding the phase density of a two-component model of a distribution of masses. The equipotential surfaces and potential law are given. The equipotentials are lenslike surfaces with a sharp edge in the equatorial plane, this ensuring the existence of a vanishingly thin embedded disk. The equidensity surfaces of the halo coincide with the equipotentials. Phase models are constructed separately for the halo and for the disk on the basis of the spatial and surface mass densities by the solution of the corresponding integral equations. In particular, models with a halo having finite dimensions can be constructed. For both components, the part of the phase density even with respect to the velocities is found. For the halo, it depends only on the energy integral. Two examples, for which exact solutions are found, are considered

  5. A thermodynamically consistent model of shape-memory alloys

    Czech Academy of Sciences Publication Activity Database

    Benešová, Barbora

    2011-01-01

    Roč. 11, č. 1 (2011), s. 355-356 ISSN 1617-7061 R&D Projects: GA ČR GAP201/10/0357 Institutional research plan: CEZ:AV0Z20760514 Keywords : shape memory alloys * model based on relaxation * thermomechanical coupling Subject RIV: BA - General Mathematics http://onlinelibrary.wiley.com/doi/10.1002/pamm.201110169/abstract

  6. Flood damage: a model for consistent, complete and multipurpose scenarios

    Science.gov (United States)

    Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia

    2016-12-01

    Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods, as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the basis for addressing the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. This way post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of the resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming the financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.

  7. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    Science.gov (United States)

    Kutepov, A L

    2015-08-12

Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solution of HE and when the QP approximation is not applied. The results obtained with the exact vertex bear directly on the open question of which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to those of the exact vertex combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the overall accuracy of the schemes is provided.
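The two-site Hubbard model used as the benchmark above is exactly solvable: at half filling the singlet ground-state energy is E0 = U/2 − sqrt((U/2)² + 4t²). A minimal pure-Python sketch of that exact reference (the parameter values are illustrative, and the 2×2 singlet block shown is the standard textbook reduction, not the paper's Hedin machinery):

```python
import math

def hubbard2_ground_energy(t, U):
    """Closed-form ground-state energy of the half-filled two-site
    Hubbard model: E0 = U/2 - sqrt((U/2)^2 + 4 t^2)."""
    return U / 2.0 - math.sqrt((U / 2.0) ** 2 + 4.0 * t ** 2)

def hubbard2_singlet_block(t, U):
    """Eigenvalues of the 2x2 singlet block [[0, -2t], [-2t, U]]
    spanned by the covalent singlet and the symmetric doubly
    occupied state, sorted ascending."""
    mean, disc = U / 2.0, math.sqrt((U / 2.0) ** 2 + 4.0 * t ** 2)
    return [mean - disc, mean + disc]

if __name__ == "__main__":
    t, U = 1.0, 8.0
    print(hubbard2_ground_energy(t, U), hubbard2_singlet_block(t, U)[0])
```

In the strong-coupling limit U >> t the expression reduces to the superexchange energy -4t²/U, a convenient sanity check for any approximate self-consistent scheme tested against this model.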

  8. miRiadne: a web tool for consistent integration of miRNA nomenclature.

    Science.gov (United States)

    Bonnal, Raoul J P; Rossi, Riccardo L; Carpi, Donatella; Ranzani, Valeria; Abrignani, Sergio; Pagani, Massimiliano

    2015-07-01

The miRBase is the official miRNA repository which keeps the annotation updated on newly discovered miRNAs: it is also used as a reference for the design of miRNA profiling platforms. Nomenclature ambiguities generated by loosely updated platforms and design errors lead to incompatibilities among platforms, even from the same vendor. Published miRNA lists are thus generated with different profiling platforms that refer to diverse and outdated annotations. This greatly compromises searches, comparisons and analyses that rely on miRNA names only without taking into account the mature sequences, which is particularly critical when such analyses are carried out automatically. In this paper we introduce miRiadne, a web tool to harmonize miRNA nomenclature, which takes into account the original miRBase versions from 10 up to 21, and annotations of 40 common profiling platforms from nine brands that we manually curated. miRiadne uses the miRNA mature sequence to link miRBase versions and/or platforms to prevent nomenclature ambiguities. miRiadne was designed to simplify and support biologists and bioinformaticians in re-annotating their own miRNA lists and/or data sets. As Ariadne helped Theseus in escaping the mythological maze, miRiadne will help the miRNA researcher in escaping the nomenclature maze. miRiadne is freely accessible from the URL http://www.miriadne.org. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
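The key design idea, keying on the mature sequence rather than the (version-dependent) name, can be sketched in a few lines; the lookup tables and sequences below are invented for illustration and do not reproduce miRiadne's curated data:

```python
# Illustrative sequence-keyed harmonization: the name is looked up in
# the source annotation, and the stable mature sequence is then used
# to find the name in the target annotation.
V20_TO_SEQ = {"hsa-miR-199a-3p": "ACAGUAGUCUGCACAUUGGUUA"}
V21_BY_SEQ = {"ACAGUAGUCUGCACAUUGGUUA": "hsa-miR-199a-3p",
              "UGGAAUGUAAAGAAGUAUGUAU": "hsa-miR-1-3p"}

def harmonize(name, src_to_seq, seq_to_dst):
    """Map a miRNA name between annotations via its mature sequence,
    which (unlike the name) is stable across miRBase releases."""
    seq = src_to_seq.get(name)
    if seq is None:
        return None  # name unknown in the source annotation
    return seq_to_dst.get(seq)

print(harmonize("hsa-miR-199a-3p", V20_TO_SEQ, V21_BY_SEQ))
```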

  9. Alien wavelength modeling tool and field trial

    DEFF Research Database (Denmark)

    Sambo, N.; Sgambelluri, A.; Secondini, M.

    2015-01-01

    A modeling tool is presented for pre-FEC BER estimation of PM-QPSK alien wavelength signals. A field trial is demonstrated and used as validation of the tool's correctness. A very close correspondence between the performance of the field trial and the one predicted by the modeling tool has been...

  10. Spectrally-consistent regularization modeling of turbulent natural convection flows

    International Nuclear Information System (INIS)

    Trias, F Xavier; Gorobets, Andrey; Oliva, Assensi; Verstappen, Roel

    2012-01-01

    The incompressible Navier-Stokes equations constitute an excellent mathematical modelization of turbulence. Unfortunately, attempts at performing direct simulations are limited to relatively low-Reynolds numbers because of the almost numberless small scales produced by the non-linear convective term. Alternatively, a dynamically less complex formulation is proposed here. Namely, regularizations of the Navier-Stokes equations that preserve the symmetry and conservation properties exactly. To do so, both convective and diffusive terms are altered in the same vein. In this way, the convective production of small scales is effectively restrained whereas the modified diffusive term introduces a hyperviscosity effect and consequently enhances the destruction of small scales. In practice, the only additional ingredient is a self-adjoint linear filter whose local filter length is determined from the requirement that vortex-stretching must stop at the smallest grid scale. In the present work, the performance of the above-mentioned recent improvements is assessed through application to turbulent natural convection flows by means of comparison with DNS reference data.

  11. On the internal consistency of holographic dark energy models

    International Nuclear Information System (INIS)

    Horvat, R

    2008-01-01

    Holographic dark energy (HDE) models, underpinned by an effective quantum field theory (QFT) with a manifest UV/IR connection, have become convincing candidates for providing an explanation of the dark energy in the universe. On the other hand, the maximum number of quantum states that a conventional QFT for a box of size L is capable of describing relates to those boxes which are on the brink of experiencing a sudden collapse to a black hole. Another restriction on the underlying QFT is that the UV cut-off, which cannot be chosen independently of the IR cut-off and therefore becomes a function of time in a cosmological setting, should stay the largest energy scale even in the standard cosmological epochs preceding a dark energy dominated one. We show that, irrespective of whether one deals with the saturated form of HDE or takes a certain degree of non-saturation in the past, the above restrictions cannot be met in a radiation dominated universe, an epoch in the history of the universe which is expected to be perfectly describable within conventional QFT

  12. Vertically integrated simulation tools for self-consistent tracking and analysis

    International Nuclear Information System (INIS)

    Forest, E.; Nishimura, H.

    1989-03-01

A modeling, simulation and analysis code complex, the Gemini Package, was developed for the study of single-particle dynamics in the Advanced Light Source (ALS), a 1–2 GeV synchrotron radiation source now being built at Lawrence Berkeley Laboratory. The purpose of this paper is to describe the philosophy behind the package, with special emphasis on our vertical approach. 8 refs., 2 figs

  13. Study of mango endogenous pectinases as a tool to engineer mango purée consistency.

    Science.gov (United States)

    Jamsazzadeh Kermani, Zahra; Shpigelman, Avi; Houben, Ken; ten Geuzendam, Belinda; Van Loey, Ann M; Hendrickx, Marc E

    2015-04-01

    The objective of this work was to evaluate the possibility of using mango endogenous pectinases to change the viscosity of mango purée. Hereto, the structure of pectic polysaccharide and the presence of sufficiently active endogenous enzymes of ripe mango were determined. Pectin of mango flesh had a high molecular weight and was highly methoxylated. Pectin methylesterase showed a negligible activity which is related to the confirmed presence of a pectin methylesterase inhibitor. Pectin contained relatively high amounts of galactose and considerable β-galactosidase (β-Gal) activity was observed. The possibility of stimulating β-Gal activity during processing (temperature/pressure, time) was investigated. β-Gal of mango was rather temperature labile but pressure stable relatively to the temperature and pressure levels used to inactivate destructive enzymes in industry. Creating processing conditions allowing endogenous β-Gal activity did not substantially change the consistency of mango purée. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  15. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  16. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This particularly becomes of interest as the state-of-the-art practice is moving towards plant-wide modelling. Then all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. Time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models. Copyright © 2011 Elsevier Ltd. All rights reserved.
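The CMM's two-step approximation chain (real process → process model → simulation model) can be illustrated on the simplest conversion process of the kind mentioned for the biological reactor; the first-order decay model and all numbers below are invented for illustration:

```python
import math

def euler_decay(s0, k, dt, steps):
    """Explicit Euler simulation model for the toy process model
    dS/dt = -k*S, a stand-in for a simple conversion process in a
    biological reactor (rate constant k and values are illustrative)."""
    s = s0
    for _ in range(steps):
        s += dt * (-k * s)
    return s

s0, k, t_end = 10.0, 0.5, 2.0
exact = s0 * math.exp(-k * t_end)  # the process model's exact solution
for n in (10, 100, 1000):
    print(n, abs(euler_decay(s0, k, t_end / n, n) - exact))
```

The point of the methodology is visible even here: the numerical error shrinks roughly in proportion to the step size, so the simulation model provably approximates the process model rather than silently replacing it.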

  17. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
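The Bayesian Model Averaging point estimator recommended above is simply a posterior-probability-weighted average of the model-specific estimates; a minimal sketch with hypothetical numbers (the coefficients and probabilities are invented, not the paper's results):

```python
def bma_estimate(estimates, posterior_probs):
    """Bayesian Model Averaging point estimator: average the
    model-specific estimates weighted by posterior model
    probabilities, which must sum to one."""
    assert abs(sum(posterior_probs) - 1.0) < 1e-9
    return sum(p * e for e, p in zip(estimates, posterior_probs))

# Hypothetical autoregressive-coefficient estimates from three
# candidate specifications, with their posterior model probabilities.
rho_hat = bma_estimate([0.80, 0.72, 0.65], [0.5, 0.3, 0.2])
print(rho_hat)
```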

  18. An evaluation of BPMN modeling tools

    NARCIS (Netherlands)

    Yan, Z.; Reijers, H.A.; Dijkman, R.M.; Mendling, J.; Weidlich, M.

    2010-01-01

    Various BPMN modeling tools are available and it is close to impossible to understand their functional differences without simply trying them out. This paper presents an evaluation framework and presents the outcomes of its application to a set of five BPMN modeling tools. We report on various

  19. Aggregated wind power plant models consisting of IEC wind turbine models

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela

    2015-01-01

    The common practice regarding the modelling of large generation components has been to make use of models representing the performance of the individual components with a required level of accuracy and details. Owing to the rapid increase of wind power plants comprising large number of wind...... turbines, parameters and models to represent each individual wind turbine in detail makes it necessary to develop aggregated wind power plant models considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable...... speed wind turbine models (type 3 and type 4) with a power plant controller is presented. The performance of the detailed benchmark wind power plant model and the aggregated model are compared by means of simulations for the specified test cases. Consequently, the results are summarized and discussed...

  20. Software Engineering Tools for Scientific Models

    Science.gov (United States)

    Abrams, Marc; Saboo, Pallabi; Sonsini, Mike

    2013-01-01

    Software tools were constructed to address issues the NASA Fortran development community faces, and they were tested on real models currently in use at NASA. These proof-of-concept tools address the High-End Computing Program and the Modeling, Analysis, and Prediction Program. Two examples are the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) atmospheric model in Cell Fortran on the Cell Broadband Engine, and the Goddard Institute for Space Studies (GISS) coupled atmosphere- ocean model called ModelE, written in fixed format Fortran.

  1. Consistent model reduction of polymer chains in solution in dissipative particle dynamics: Model description

    KAUST Repository

    Moreno Chaparro, Nicolas

    2015-06-30

We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria such as the characteristic size of the chain, compressibility, density, and temperature are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥20 times fewer than the original system). We show that our methodology has potential applications modeling systems of high molecular weight molecules at large scales, such as diblock copolymer and DNA.
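The flavour of power-law-consistent scaling can be sketched as follows: with a characteristic chain size R ~ b N^ν, merging k fine beads into one coarse bead requires rescaling the bead size so that R is preserved. The rule and the ideal-chain exponent ν = 0.5 below are a generic illustration, not the paper's calibrated parameters:

```python
def coarse_grain(n_fine, b_fine, k, nu=0.5):
    """Illustrative model reduction: merge k fine-grained beads per
    coarse bead while preserving the characteristic chain size
    R ~ b * N**nu (nu = 0.5 for an ideal chain). A sketch of the
    scaling idea, not the paper's actual parameterization."""
    n_coarse = n_fine // k
    b_coarse = b_fine * k ** nu
    return n_coarse, b_coarse

def chain_size(n, b, nu=0.5):
    return b * n ** nu

n_c, b_c = coarse_grain(n_fine=200, b_fine=1.0, k=20)
print(chain_size(200, 1.0), chain_size(n_c, b_c))  # sizes match
```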

  2. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  3. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
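In the neutral limit, MOST-consistent inlet profiles for a k-ε model are commonly taken in the Richards–Hoxey form below; this is the standard neutral set, shown for context only, while the paper's contribution extends consistency to non-neutral stability:

```python
import math

KAPPA = 0.40   # von Karman constant
C_MU = 0.09    # standard k-epsilon model constant

def most_neutral_profiles(z, u_star, z0):
    """Neutral-limit inlet profiles that keep the standard k-epsilon
    model consistent with the logarithmic law (Richards-Hoxey form):
    log-law velocity, constant k, and eps decaying with height."""
    u = (u_star / KAPPA) * math.log((z + z0) / z0)
    k = u_star ** 2 / math.sqrt(C_MU)
    eps = u_star ** 3 / (KAPPA * (z + z0))
    return u, k, eps

u, k, eps = most_neutral_profiles(z=10.0, u_star=0.4, z0=0.05)
# Consistency check: nu_t = C_mu*k^2/eps equals kappa*u_star*(z+z0)
print(C_MU * k ** 2 / eps, KAPPA * 0.4 * 10.05)
```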

  4. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders

  5. Spatial Modeling Tools for Cell Biology

    National Research Council Canada - National Science Library

    Przekwas, Andrzej; Friend, Tom; Teixeira, Rodrigo; Chen, Z. J; Wilkerson, Patrick

    2006-01-01

    .... Scientific potentials and military relevance of computational biology and bioinformatics have inspired DARPA/IPTO's visionary BioSPICE project to develop computational framework and modeling tools for cell biology...

  6. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.
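The "stochastic exploration with greedy conflict management" strategy reduces to: randomly propose toggling an element of the configuration and keep the change only if the global score improves. The sketch below uses a toy scoring function as a stand-in for the paper's probabilistic point-cloud/prior-model fit:

```python
import random

def greedy_stochastic_search(candidates, score, iters=200, seed=0):
    """Sketch of random-insertion search with greedy acceptance:
    toggle a randomly chosen candidate in or out of the configuration
    and keep the move only if the score improves."""
    rng = random.Random(seed)
    config, best = set(), score(set())
    for _ in range(iters):
        c = rng.choice(candidates)
        trial = config ^ {c}        # toggle: insert or remove c
        s = score(trial)
        if s > best:
            config, best = trial, s
    return config, best

# Toy objective: reward elements 0-4, penalize the rest.
good = {0, 1, 2, 3, 4}
cfg, best = greedy_stochastic_search(
    list(range(10)), lambda s: len(s & good) - 2 * len(s - good))
print(sorted(cfg), best)
```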

  7. Cockpit System Situational Awareness Modeling Tool

    Science.gov (United States)

    Keller, John; Lebiere, Christian; Shay, Rick; Latorella, Kara

    2004-01-01

    This project explored the possibility of predicting pilot situational awareness (SA) using human performance modeling techniques for the purpose of evaluating developing cockpit systems. The Improved Performance Research Integration Tool (IMPRINT) was combined with the Adaptive Control of Thought-Rational (ACT-R) cognitive modeling architecture to produce a tool that can model both the discrete tasks of pilots and the cognitive processes associated with SA. The techniques for using this tool to predict SA were demonstrated using the newly developed Aviation Weather Information (AWIN) system. By providing an SA prediction tool to cockpit system designers, cockpit concepts can be assessed early in the design process while providing a cost-effective complement to the traditional pilot-in-the-loop experiments and data collection techniques.

  8. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V; Prudhomme, Serge; van der Zee, Kris G; Carey, Graham F

    2014-01-01

    Models based on the Helmholtz `slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint

  9. Graph and model transformation tools for model migration : empirical results from the transformation tool contest

    NARCIS (Netherlands)

    Rose, L.M.; Herrmannsdoerfer, M.; Mazanek, S.; Van Gorp, P.M.E.; Buchwald, S.; Horn, T.; Kalnina, E.; Koch, A.; Lano, K.; Schätz, B.; Wimmer, M.

    2014-01-01

    We describe the results of the Transformation Tool Contest 2010 workshop, in which nine graph and model transformation tools were compared for specifying model migration. The model migration problem—migration of UML activity diagrams from version 1.4 to version 2.2—is non-trivial and practically

  10. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

, called ForSyDe. ForSyDe is available under the open source approach, which allows small and medium enterprises (SMEs) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  11. The european Trans-Tools transport model

    NARCIS (Netherlands)

    Rooijen, T. van; Burgess, A.

    2008-01-01

The paper presents the use of ArcGIS in the Transtools Transport Model, TRANS-TOOLS, created by an international consortium for the European Commission. The model describes passenger as well as freight transport in Europe with all medium and long distance modes (cars, vans, trucks, train, inland

  12. The Cryosphere Model Comparison Tool (CmCt): Ice Sheet Model Validation and Comparison Tool for Greenland and Antarctica

    Science.gov (United States)

    Simon, E.; Nowicki, S.; Neumann, T.; Tyahla, L.; Saba, J. L.; Guerber, J. R.; Bonin, J. A.; DiMarzio, J. P.

    2017-12-01

The Cryosphere model Comparison tool (CmCt) is a web-based ice sheet model validation tool that is being developed by NASA to facilitate direct comparison between observational data and various ice sheet models. The CmCt allows the user to take advantage of several decades worth of observations from Greenland and Antarctica. Currently, the CmCt can be used to compare ice sheet models provided by the user with remotely sensed satellite data from ICESat (Ice, Cloud, and land Elevation Satellite) laser altimetry, GRACE (Gravity Recovery and Climate Experiment) satellite, and radar altimetry (ERS-1, ERS-2, and Envisat). One or more models can be uploaded through the CmCt website and compared with observational data, or compared to each other or other models. The CmCt calculates statistics on the differences between the model and observations, and other quantitative and qualitative metrics, which can be used to evaluate the different model simulations against the observations. The qualitative metrics consist of a range of visual outputs and the quantitative metrics consist of several whole-ice-sheet scalar values that can be used to assign an overall score to a particular simulation. The comparison results from CmCt are useful in quantifying improvements within a specific model (or within a class of models) as a result of differences in model dynamics (e.g., shallow vs. higher-order dynamics approximations), model physics (e.g., representations of ice sheet rheological or basal processes), or model resolution (mesh resolution and/or changes in the spatial resolution of input datasets). The framework and metrics could also be used as a model-to-model intercomparison tool, simply by swapping outputs from another model as the observational datasets. Future versions of the tool will include comparisons with other datasets that are of interest to the modeling community, such as ice velocity, ice thickness, and surface mass balance.
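The whole-ice-sheet scalar metrics described above boil down to statistics on co-located model-minus-observation differences; a minimal sketch (the metric set, variable names, and sample values are illustrative, not CmCt's actual output):

```python
import math

def comparison_stats(model, obs):
    """Scalar comparison metrics of the kind a model-observation
    comparison tool reports: sample count, mean difference (bias),
    and RMSE over co-located points, skipping observation gaps."""
    diffs = [m - o for m, o in zip(model, obs) if o is not None]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return {"n": n, "bias": bias, "rmse": rmse}

# Toy surface-elevation samples (metres); None marks a data gap.
model = [102.0, 250.5, 301.0, 99.0]
obs = [100.0, 252.0, None, 100.0]
print(comparison_stats(model, obs))
```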

  13. An online model composition tool for system biology models.

    Science.gov (United States)

    Coskun, Sarp A; Cicek, A Ercument; Lai, Nicola; Dash, Ranjan K; Ozsoyoglu, Z Meral; Ozsoyoglu, Gultekin

    2013-09-05

There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) Model Simulation Interface that generates a visual plot of the simulation according to user's input, (2) iModel Tool as a platform for users to upload their own models to compose, and (3) SimCom Tool that provides a side by side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners. And, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well.
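The central bookkeeping problem in composing two models is merging their entities while flagging identifier collisions; a toy sketch of that step (dict-based, not PathCase-SB's actual merge algorithm, and the species names are invented):

```python
def compose(model_a, model_b):
    """Toy composition of two models given as {species_id: initial
    amount} dicts: shared ids are merged once, and ids that collide
    with different values are flagged for user review. A sketch of
    the bookkeeping only, not a full SBML merge."""
    merged = dict(model_a)
    conflicts = []
    for sid, amount in model_b.items():
        if sid in merged and merged[sid] != amount:
            conflicts.append(sid)      # needs manual resolution
        merged.setdefault(sid, amount)
    return merged, conflicts

a = {"glucose": 5.0, "atp": 1.0}
b = {"atp": 2.0, "lactate": 0.0}
merged, conflicts = compose(a, b)
print(sorted(merged), conflicts)
```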

  14. Self-consistent imbedding and the ellipsoidal model for porous rocks

    International Nuclear Information System (INIS)

    Korringa, J.; Brown, R.J.S.; Thompson, D.D.; Runge, R.J.

    1979-01-01

Equations are obtained for the effective elastic moduli for a model of an isotropic, heterogeneous, porous medium. The mathematical model used for computation is abstract in that it is not simply a rigorous computation for a composite medium of some idealized geometry, although the computation contains individual steps which are just that. Both the solid part and pore space are represented by ellipsoidal or spherical 'grains' or 'pores' of various sizes and shapes. The strain of each grain, caused by external forces applied to the medium, is calculated in a self-consistent imbedding (SCI) approximation, which replaces the true surrounding of any given grain or pore by an isotropic medium defined by the effective moduli to be computed. The ellipsoidal nature of the shapes allows us to use Eshelby's theoretical treatment of a single ellipsoidal inclusion in an infinite homogeneous medium. Results are compared with the literature, and discrepancies are found with all published accounts of this problem. Deviations from the work of Wu, of Walsh, and of O'Connell and Budiansky are attributed to a substitution made by these authors which, though an identity for the exact quantities involved, is only approximate in the SCI calculation. This reduces the validity of the equations to first-order effects only. Differences with the results of Kuster and Toksoez are attributed to the fact that the computation of these authors is not self-consistent in the sense used here. A result seems to be the stiffening of the medium as if the pores are held apart. For spherical grains and pores, their calculated moduli are those given by the Hashin-Shtrikman upper bounds. Our calculation reproduces, in the case of spheres, an early result of Budiansky. An additional feature of our work is that the algebra is simpler than in earlier work. We also incorporate into the theory the possibility that fluid-filled pores are interconnected

  15. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-01-01

    of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure

  16. A CVAR scenario for a standard monetary model using theory-consistent expectations

    DEFF Research Database (Denmark)

    Juselius, Katarina

    2017-01-01

    A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...

  17. Behavioral Consistency of C and Verilog Programs Using Bounded Model Checking

    National Research Council Canada - National Science Library

    Clarke, Edmund; Kroening, Daniel; Yorav, Karen

    2003-01-01

.... We present an algorithm that checks behavioral consistency between an ANSI-C program and a circuit given in Verilog using Bounded Model Checking. We describe experimental results on various reactive ...

  18. QUALITY SERVICES EVALUATION MODEL BASED ON DEDICATED SOFTWARE TOOL

    Directory of Open Access Journals (Sweden)

    ANDREEA CRISTINA IONICĂ

    2012-10-01

Full Text Available In this paper we introduce a new model, called Service Quality (SQ), which combines the QFD and SERVQUAL methods. This model takes from the SERVQUAL method the five dimensions of requirements and three of characteristics, and from the QFD method the application methodology. The originality of the SQ model consists in computing a global index that reflects the customers' requirements accomplishment level by the quality characteristics. In order to prove the viability of the SQ model, a software tool was developed and applied for the evaluation of a health care services provider.
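A global index of this kind is, in essence, a weight-normalized aggregate of per-dimension accomplishment scores; the sketch below uses SERVQUAL's five requirement dimensions with invented scores and QFD-style importance weights (not the paper's data or exact formula):

```python
def sq_global_index(scores, weights):
    """Global quality index in the spirit of the SQ model: a
    weight-normalized aggregate of how well each quality dimension
    meets the customers' requirements (0-1 scores). Illustrative
    only; the paper's exact aggregation may differ."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# SERVQUAL's five requirement dimensions with hypothetical values.
dims = ["tangibles", "reliability", "responsiveness",
        "assurance", "empathy"]
scores = [0.70, 0.90, 0.60, 0.85, 0.75]
weights = [1.0, 2.0, 1.5, 2.0, 1.0]
print(sq_global_index(scores, weights))
```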

  19. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    Science.gov (United States)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
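The "thorough grid resolution studies" called for above are usually quantified by the observed order of accuracy computed from three systematically refined grids; a minimal sketch of that standard check (the manufactured solution values are illustrative):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from solutions on three grids with
    constant refinement ratio r:
        p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r)
    Used to verify that an implementation converges at the model's
    formal order as the grid is refined."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Manufactured example: f(h) = 1.0 + 0.3*h**2 on grids h = 0.4, 0.2, 0.1,
# mimicking a second-order discretization.
f = lambda h: 1.0 + 0.3 * h ** 2
p = observed_order(f(0.4), f(0.2), f(0.1), r=2.0)
print(p)  # approaches 2.0 for a second-order scheme
```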

  20. Self-consistent one-gluon exchange in soliton bag models

    International Nuclear Information System (INIS)

    Dodd, L.R.; Adelaide Univ.; Williams, A.G.

    1988-01-01

The treatment of soliton bag models as two-point boundary value problems is extended to include self-consistent one-gluon exchange interactions. The colour-magnetic contribution to the nucleon-delta mass splitting is calculated self-consistently in the mean-field, one-gluon-exchange approximation for the Friedberg-Lee and Nielsen-Patkos models. Small glueball mass parameters (m_GB ≈ 500 MeV) are favoured. Comparisons with previous calculations are made.

  1. Development Life Cycle and Tools for XML Content Models

    Energy Technology Data Exchange (ETDEWEB)

Kulvatunyou, Boonserm [ORNL]; Morris, Katherine [National Institute of Standards and Technology (NIST)]; Jeong, Buhwan [POSTECH University, South Korea]; Goyal, Puja [National Institute of Standards and Technology (NIST)]

    2004-11-01

Many integration projects today rely on shared semantic models based on standards represented using Extensible Markup Language (XML) technologies. Shared semantic models typically evolve and require maintenance. In addition, to promote interoperability and reduce integration costs, the shared semantics should be reused as much as possible. Semantic components must be consistent and valid in terms of agreed-upon standards and guidelines. In this paper, we describe an activity model for the creation, use, and maintenance of a shared semantic model that is coherent and supports efficient enterprise integration. We then use this activity model to frame our research and the development of tools to support those activities. We provide overviews of these tools, primarily in the context of the W3C XML Schema. At present, we focus our work on the W3C XML Schema as the representation of choice, due to its extensive adoption by industry.
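As a toy illustration of checking semantic components against agreed-upon guidelines, the sketch below scans element names in an XML Schema for conformance to an UpperCamelCase naming convention. The schema fragment, the rule, and the check are ours, not the paper's tools:

```python
import re
import xml.etree.ElementTree as ET

# Illustrative schema fragment; real shared semantic models are far larger.
XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="PurchaseOrder" type="xs:string"/>
  <xs:element name="shipDate" type="xs:date"/>
</xs:schema>"""

tree = ET.fromstring(XSD)
ns = "{http://www.w3.org/2001/XMLSchema}"
# Hypothetical guideline: element names must be UpperCamelCase.
bad = [e.get("name") for e in tree.iter(f"{ns}element")
       if not re.fullmatch(r"(?:[A-Z][a-z0-9]*)+", e.get("name"))]
print(bad)  # ['shipDate'] violates the naming guideline
```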

  2. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases presents new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing, and secure access.

  3. Programming Models and Tools for Intelligent Embedded Systems

    DEFF Research Database (Denmark)

    Sørensen, Peter Verner Bojsen

Design automation and analysis tools targeting embedded platforms, developed using a component-based design approach, must be able to reason about the capabilities of the platforms. In the general case where nothing is assumed about the components comprising a platform or the platform topology...... is used for checking the consistency of a design with respect to the availability of services and resources. In the second application, a tool for automatically implementing the communication infrastructure of a process network application, the Service Relation Model is used for analyzing the capabilities...

  4. Self-consistent assessment of Englert-Schwinger model on atomic properties

    Science.gov (United States)

    Lehtomäki, Jouko; Lopez-Acevedo, Olga

    2017-12-01

Our manuscript investigates a self-consistent solution of the statistical atom model proposed by Berthold-Georg Englert and Julian Schwinger (the ES model) and benchmarks it against atomic Kohn-Sham and two orbital-free models of the Thomas-Fermi-Dirac (TFD)-λvW family. Results show that the ES model generally offers the same accuracy as the well-known TFD-1/5 vW model; however, the ES model corrects the failure of the Pauli potential in the near-nucleus region. We also point to the model's inability to describe low-Z atoms as the foremost concern in improving it.

  5. Animal models: an important tool in mycology.

    Science.gov (United States)

    Capilla, Javier; Clemons, Karl V; Stevens, David A

    2007-12-01

Animal models of fungal infections are, and will remain, a key tool in the advancement of medical mycology. Many different types of animal models of fungal infection have been developed, with murine models the most frequently used, for studies of pathogenesis, virulence, immunology, diagnosis, and therapy. The ability to control numerous variables in performing the model allows us to mimic human disease states and quantitatively monitor the course of the disease. However, no single model can answer all questions, and different animal species or different routes of infection can show somewhat different results. Thus, the choice of which animal model to use must be made carefully, addressing the type of human disease to mimic, the parameters to follow, and the collection of appropriate data to answer the questions being asked. This review addresses a variety of uses for animal models in medical mycology. It focuses on the most clinically important diseases affecting humans and cites various examples of the different types of studies that have been performed. Overall, animal models of fungal infection will continue to be valuable tools in addressing questions concerning fungal infections and will contribute to our deeper understanding of how these infections occur, progress, and can be controlled and eliminated.

  6. Estimating long-term volatility parameters for market-consistent models

    African Journals Online (AJOL)

    Contemporary actuarial and accounting practices (APN 110 in the South African context) require the use of market-consistent models for the valuation of embedded investment derivatives. These models have to be calibrated with accurate and up-to-date market data. Arguably, the most important variable in the valuation of ...

  7. Self-consistent model calculations of the ordered S-matrix and the cylinder correction

    International Nuclear Information System (INIS)

    Millan, J.

    1977-11-01

The multiperipheral ordered bootstrap of Rosenzweig and Veneziano is studied by using dual triple Regge couplings exhibiting the required threshold behavior. In the interval −0.5 ≤ t ≤ 0.8 GeV², self-consistent reggeon couplings and propagators are obtained for values of Regge slopes and intercepts consistent with the physical values for the leading natural-parity Regge trajectories. Cylinder effects on planar pole positions and couplings are calculated. By use of an unsymmetrical planar π-ρ reggeon loop model, self-consistent solutions are obtained for the unnatural-parity mesons in the interval −0.5 ≤ t ≤ 0.6 GeV². The effects of other Regge poles being neglected, the model gives a value of the π-η splitting consistent with experiment. 24 figures, 1 table, 25 references

  8. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a General Risk Model with Diffusion

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

We mainly study a general risk model and investigate the precommitted strategy and the time-consistent strategy under the mean-variance criterion, respectively. A Lagrange method is proposed to derive the precommitted investment strategy. Meanwhile, from the game-theoretical perspective, we find the time-consistent investment strategy by solving the extended Hamilton-Jacobi-Bellman equations. By comparing the precommitted strategy with the time-consistent strategy, we find that under the time-consistent strategy the company has to give up part of its current utility in order to keep a consistent satisfaction over the whole time horizon. Furthermore, we theoretically and numerically examine the effect of the parameters on these two optimal strategies and the corresponding value functions.
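The source of the time inconsistency discussed above can be made explicit. In standard mean-variance notation (ours, not the paper's), the objective is

```latex
J(t,x;\pi) = \mathbb{E}_{t,x}\!\left[X_T^{\pi}\right] - \frac{\gamma}{2}\,\operatorname{Var}_{t,x}\!\left[X_T^{\pi}\right],
\qquad
\operatorname{Var}_{t,x}\!\left[X_T^{\pi}\right] = \mathbb{E}_{t,x}\!\left[(X_T^{\pi})^{2}\right] - \left(\mathbb{E}_{t,x}\!\left[X_T^{\pi}\right]\right)^{2}.
```

Because the variance contains the square of a conditional expectation, the tower property fails and Bellman's optimality principle does not hold. One must therefore either optimize once at t = 0 and commit (the precommitted strategy), or treat the problem as an intra-personal game whose subgame-perfect equilibrium is characterized by the extended Hamilton-Jacobi-Bellman system (the time-consistent strategy).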

  9. Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

    Science.gov (United States)

    Cartar, William; Mørk, Jesper; Hughes, Stephen

    2017-08-01

    We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping is added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) is contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N =9 , which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. 
Using a statistical modal analysis of the average decay rates, we also ...

  10. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    Science.gov (United States)

    Trase, Kathryn; Fink, Eric

    2014-01-01

Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to easily use. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST) that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from an MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is both consistent with the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views, improves the value of the system as a whole, as data becomes information.

  11. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    Energy Technology Data Exchange (ETDEWEB)

Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of)]; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)]

    2017-09-01

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
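The residual-based consistency test described above can be sketched in a few lines of NumPy. Everything below (kernel choice, hyperparameters, and the synthetic residuals) is illustrative and not taken from the paper: a GP is fitted to the residuals of the data with respect to a best-fit model, and structure is flagged when the posterior mean strays far outside its uncertainty band.

```python
import numpy as np

def rbf(x1, x2, amp=1.0, ell=0.1):
    # Squared-exponential covariance; amp and ell are illustrative choices.
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_fit(x, y, xs, noise=0.1):
    """GP posterior mean and stddev at xs, fitted to noisy data (x, y)."""
    K = rbf(x, x) + noise**2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = rbf(xs, x) @ alpha
    v = np.linalg.solve(L, rbf(xs, x).T)
    var = np.clip(np.diag(rbf(xs, xs)) - np.sum(v**2, axis=0), 1e-12, None)
    return mean, np.sqrt(var)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
xs = np.linspace(0.0, 1.0, 200)
noise = rng.normal(0.0, 0.1, size=x.size)

# Residuals of a consistent model are pure noise; an inconsistent model
# leaves structure behind (here an oscillation of amplitude 0.5).
mean_n, sd_n = gp_fit(x, noise, xs)
mean_s, sd_s = gp_fit(x, 0.5 * np.sin(6 * np.pi * x) + noise, xs)
z_noise = np.max(np.abs(mean_n) / sd_n)
z_struct = np.max(np.abs(mean_s) / sd_s)
print(z_struct > z_noise)  # structured residuals flag an inconsistency
```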

  12. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transitions in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
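As a hedged illustration of the second derivative criterion (the constitutive law and all parameters below are invented for the example, not taken from the paper): generate a flow curve with a DRX peak, compute the strain-hardening rate θ = dσ/dε, fit a cubic to θ(σ) up to the peak, and take the DRX onset where the fit's second derivative vanishes, as in common Poliak-Jonas practice.

```python
import numpy as np

# Synthetic flow curve: Voce-type hardening minus a sigmoidal DRX softening.
# All parameters are invented for illustration.
eps = np.linspace(0.0, 0.4, 2000)
sigma = 200.0 * (1.0 - np.exp(-10.0 * eps)) \
        - 40.0 / (1.0 + np.exp(-50.0 * (eps - 0.3)))

theta = np.gradient(sigma, eps)          # strain-hardening rate d(sigma)/d(eps)
peak = int(np.argmax(theta <= 0.0))      # first index past the stress peak
s, th = sigma[:peak], theta[:peak]

# Poliak-Jonas practice: cubic fit of theta(sigma) up to the peak; the
# onset of DRX is read off where the fit's second derivative vanishes.
a3, a2, _, _ = np.polyfit(s, th, 3)
sigma_c = -a2 / (3.0 * a3)
sigma_peak = s[-1]
print(0.0 < sigma_c < sigma_peak)        # critical stress lies below the peak
```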

  13. Self-consistent atmosphere modeling with cloud formation for low-mass stars and exoplanets

    Science.gov (United States)

    Juncher, Diana; Jørgensen, Uffe G.; Helling, Christiane

    2017-12-01

Context. Low-mass stars and extrasolar planets have ultra-cool atmospheres where a rich chemistry occurs and clouds form. The increasing amount of spectroscopic observations for extrasolar planets requires self-consistent model atmosphere simulations to consistently include the formation processes that determine cloud formation and their feedback onto the atmosphere. Aims: Our aim is to complement the MARCS model atmosphere suite with simulations applicable to low-mass stars and exoplanets in preparation for E-ELT, JWST, PLATO and other upcoming facilities. Methods: The MARCS code calculates stellar atmosphere models, providing self-consistent solutions of the radiative transfer and the atmospheric structure and chemistry. We combine MARCS with a kinetic model that describes cloud formation in ultra-cool atmospheres (seed formation, growth/evaporation, gravitational settling, convective mixing, element depletion). Results: We present a small grid of self-consistently calculated atmosphere models for Teff = 2000-3000 K with solar initial abundances and log (g) = 4.5. Cloud formation in stellar and sub-stellar atmospheres appears for Teff ... day-night energy transport and no temperature inversion.

  14. A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems

    Directory of Open Access Journals (Sweden)

    R. Dimitri

    2014-07-01

Due to their simplicity, cohesive zone models (CZMs) are very attractive to describe mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is here adopted, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.

  15. Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2008-01-01

A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs) using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
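A minimal sketch of the predictive-contract idea underlying such entity state update schemes, using dead reckoning with an error threshold (the trajectory and thresholds are illustrative, not from the paper): the threshold trades transmitted updates against state fidelity loss.

```python
import math

def simulate(threshold, n=600, dt=0.05):
    """Dead-reckoning state updates: transmit only when the receiver's
    extrapolation drifts from the true state by more than `threshold`."""
    last_t, last_p, last_v = 0.0, 0.0, 1.0   # state at last transmitted update
    sent, total_err = 1, 0.0
    for i in range(n):
        t = i * dt
        true_p, true_v = math.sin(t), math.cos(t)
        pred = last_p + last_v * (t - last_t)   # receiver-side extrapolation
        if abs(pred - true_p) > threshold:
            last_t, last_p, last_v = t, true_p, true_v
            pred = true_p                        # update resets the error
            sent += 1
        total_err += abs(pred - true_p)
    return sent, total_err / n

sent_fine, err_fine = simulate(0.01)
sent_coarse, err_coarse = simulate(0.10)
# Tighter thresholds buy fidelity (lower error) at the cost of more updates.
print(sent_fine > sent_coarse, err_fine < err_coarse)
```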

  16. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest into a risk-free asset and a risky asset. Short-selling and borrowing money are allowed. Due to the lack of the iterated-expectation property, the Bellman Optimization Principle does not hold. Thus we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that these different strategies have different advantages: the former makes the value function maximized at the original time t=0, and the latter is time-consistent for the whole time horizon. Finally, numerical analysis is presented for our results.

  17. A Simplified "Benchmark" Stock-flow Consistent (SFC) Post-Keynesian Growth Model

    OpenAIRE

    Claudio H. Dos Santos; Gennaro Zezza

    2007-01-01

    Despite being arguably one of the most active areas of research in heterodox macroeconomics, the study of the dynamic properties of stock-flow consistent (SFC) growth models of financially sophisticated economies is still in its early stages. This paper attempts to offer a contribution to this line of research by presenting a simplified Post-Keynesian SFC growth model with well-defined dynamic properties, and using it to shed light on the merits and limitations of the current heterodox SFC li...

  18. Self consistent MHD modeling of the solar wind from coronal holes with distinct geometries

    Science.gov (United States)

    Stewart, G. A.; Bravo, S.

    1995-01-01

    Utilizing an iterative scheme, a self-consistent axisymmetric MHD model for the solar wind has been developed. We use this model to evaluate the properties of the solar wind issuing from the open polar coronal hole regions of the Sun, during solar minimum. We explore the variation of solar wind parameters across the extent of the hole and we investigate how these variations are affected by the geometry of the hole and the strength of the field at the coronal base.

  19. A pedestal temperature model with self-consistent calculation of safety factor and magnetic shear

    International Nuclear Information System (INIS)

    Onjun, T; Siriburanon, T; Onjun, O

    2008-01-01

A pedestal model based on theory-motivated models for the pedestal width and the pedestal pressure gradient is developed for the temperature at the top of the H-mode pedestal. The pedestal width model based on magnetic shear and flow shear stabilization is used in this study, where the pedestal pressure gradient is assumed to be limited by the first stability of the infinite-n ballooning mode instability. This pedestal model is implemented in the 1.5D BALDUR integrated predictive modeling code, where the safety factor and magnetic shear are solved self-consistently in both core and pedestal regions. With this self-consistent approach for calculating the safety factor and magnetic shear, the effect of the bootstrap current can be correctly included in the pedestal model. The pedestal model is used to provide the boundary conditions in the simulations, and the Multi-mode core transport model is used to describe the core transport. This new integrated modeling procedure of the BALDUR code is used to predict the temperature and density profiles of 26 H-mode discharges. Simulations are carried out for 13 discharges in the Joint European Torus and 13 discharges in the DIII-D tokamak. The average root-mean-square deviation between experimental data and the predicted profiles of the temperature and the density, normalized by their central values, is found to be about 14%
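The normalized root-mean-square deviation used above as a figure of merit can be stated compactly; the profile shape and the 10% offset below are toy values, not BALDUR results:

```python
import numpy as np

# Normalized RMS deviation between a predicted profile and measurements,
# normalized by the central (on-axis) value, as in the abstract.
def rms_dev(pred, exp):
    return np.sqrt(np.mean((pred - exp) ** 2)) / exp[0]  # exp[0]: central value

rho = np.linspace(0.0, 1.0, 6)           # normalized radius (toy grid)
exp_T = 4.0 * (1.0 - rho**2) + 1.0       # "experimental" temperature profile
pred_T = exp_T * 1.1                      # prediction with a uniform 10% offset
print(round(rms_dev(pred_T, exp_T), 3))  # 0.076
```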

  20. Self-consistent approximation for muffin-tin models of random substitutional alloys with environmental disorder

    International Nuclear Information System (INIS)

    Kaplan, T.; Gray, L.J.

    1984-01-01

    The self-consistent approximation of Kaplan, Leath, Gray, and Diehl is applied to models for substitutional random alloys with muffin-tin potentials. The particular advantage of this approximation is that, in addition to including cluster scattering, the muffin-tin potentials in the alloy can depend on the occupation of the surrounding sites (i.e., environmental disorder is included)

  1. A new self-consistent model for thermodynamics of binary solutions

    Czech Academy of Sciences Publication Activity Database

    Svoboda, Jiří; Shan, Y. V.; Fischer, F. D.

    2015-01-01

Vol. 108, Nov (2015), pp. 27-30. ISSN 1359-6462. R&D Projects: GA ČR(CZ) GA14-24252S. Institutional support: RVO:68081723. Keywords: Thermodynamics * Analytical methods * CALPHAD * Phase diagram * Self-consistent model. Subject RIV: BJ - Thermodynamics. Impact factor: 3.305, year: 2015

  2. Comment on self-consistent model of black hole formation and evaporation

    International Nuclear Information System (INIS)

    Ho, Pei-Ming

    2015-01-01

    In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.

  3. Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution

    Science.gov (United States)

    Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.

    2017-10-01

    Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
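A minimal version of such a topological consistency check on an incidence graph (our toy data model, not DB4GeO's): every edge must join two distinct existing vertices, and every face boundary must close.

```python
# Toy incidence graph for a 2-D cell complex: faces -> edges -> vertices.
COMPLEX = {
    "vertices": {"v1", "v2", "v3"},
    "edges": {"e1": ("v1", "v2"), "e2": ("v2", "v3"), "e3": ("v3", "v1")},
    "faces": {"f1": ["e1", "e2", "e3"]},
}

def consistent(cx):
    edges, verts = cx["edges"], cx["vertices"]
    # Each edge joins two distinct, existing vertices.
    for a, b in edges.values():
        if a == b or a not in verts or b not in verts:
            return False
    # Each face boundary must be a closed cycle: every vertex appears an
    # even number of times among the boundary edges' endpoints.
    for boundary in cx["faces"].values():
        count = {}
        for e in boundary:
            if e not in edges:
                return False
            for v in edges[e]:
                count[v] = count.get(v, 0) + 1
        if any(c % 2 for c in count.values()):
            return False
    return True

print(consistent(COMPLEX))  # True: the triangle is a valid closed face
broken = {**COMPLEX, "faces": {"f1": ["e1", "e2"]}}  # open boundary
print(consistent(broken))   # False: f1's boundary does not close
```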

  4. ICFD modeling of final settlers - developing consistent and effective simulation model structures

    DEFF Research Database (Denmark)

    Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham

CFD concept. The case of secondary settling tanks (SSTs) is used to demonstrate the methodological steps using the validated CFD model with the hindered-transient-compression settling velocity model by (10). Factor screening and Latin hypercube sampling (LHS) are used to degenerate a 2-D axi-symmetrical CFD...... of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Results suggest that the iCFD model developed...... the feed-layer. These scenarios were inspired by literature (1; 2; 9). As for the D0--iCFD model, values of SSRE obtained are below 1 with an average SSRE=0.206. The simulation model can thus predict the solids distribution inside the tank with satisfactory accuracy. Averaged relative errors of 8.1 %, 3...

  5. WMT: The CSDMS Web Modeling Tool

    Science.gov (United States)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged
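The JSON-encoded component metadata described above suggests a simple pre-coupling check: every "uses" port must be matched by some component's "provides" port. The metadata layout and port names below are hypothetical, not WMT's actual schema:

```python
import json

# Hypothetical component metadata in the spirit of WMT's JSON-encoded
# descriptions (the actual wmt-db schema may differ).
METADATA = json.loads("""
{
  "components": [
    {"name": "hydrotrend", "provides": ["water_discharge", "sediment_load"],
     "uses": []},
    {"name": "sedflux", "provides": ["seafloor_elevation"],
     "uses": ["water_discharge", "sediment_load"]}
  ]
}
""")

def unmet_uses(components):
    """Return {component: [ports]} for 'uses' ports no component provides."""
    provided = {p for c in components for p in c["provides"]}
    return {c["name"]: [p for p in c["uses"] if p not in provided]
            for c in components if any(p not in provided for p in c["uses"])}

missing = unmet_uses(METADATA["components"])
print(missing)  # {} -> every 'uses' port is satisfied; the coupling is valid
```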

  6. The Science Consistency Review A Tool To Evaluate the Use of Scientific Information in Land Management Decisionmaking

    Science.gov (United States)

    James M. Guldin; David Cawrse; Russell Graham; Miles Hemstrom; Linda Joyce; Steve Kessler; Ranotta McNair; George Peterson; Charles G. Shaw; Peter Stine; Mark Twery; Jeffrey Walter

    2003-01-01

    The paper outlines a process called the science consistency review, which can be used to evaluate the use of scientific information in land management decisions. Developed with specific reference to land management decisions in the U.S. Department of Agriculture Forest Service, the process involves assembling a team of reviewers under a review administrator to...

  7. Comparison of BrainTool to other UML modeling and model transformation tools

    Science.gov (United States)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

In the last 30 years, numerous model-generated software systems have been offered, targeting problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) is making similar arguments in regard to Unified Modeling Language (UML) models at different levels of abstraction. It is being said that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features, starting with a model editor and a model repository for the traditional ones, and ending with a code generator (that could be using a scripting or domain-specific (DSL) language), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor to define new transformations for the most advanced ones. The present paper contains the results of a comparison of CASE tools (mainly UML editors) against the level of automation they offer.

  8. Self-consistency in the phonon space of the particle-phonon coupling model

    Science.gov (United States)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2018-04-01

    This paper presents the nonlinear generalization of the time blocking approximation (TBA). The TBA is one of the versions of the extended random-phase approximation (RPA) developed within the Green-function method and the particle-phonon coupling model. In the generalized version of the TBA, the self-consistency principle is extended onto the phonon space of the model. Numerical examples show that this nonlinear version of the TBA leads to convergence of the results with respect to enlarging the phonon space of the model.

  9. Consistency maintenance for constraint in role-based access control model

    Institute of Scientific and Technical Information of China (English)

    韩伟力; 陈刚; 尹建伟; 董金祥

    2002-01-01

    Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraints in the RBAC model. Based on research into constraints among roles and the types of inconsistency among constraints, this paper introduces corresponding formal rules and rule-based reasoning methods to detect, avoid and resolve these inconsistencies. Finally, the paper briefly introduces the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.

  11. Wave and Wind Model Performance Metrics Tools

    Science.gov (United States)

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to the assurance of battlespace environment predictability of ocean surface wave and surf conditions in support of Naval global operations. Thus, constant verification and validation of model performance is equally essential to assure the progress of model developments and maintain confidence in the predictions. Global- and regional-scale model evaluations may require large areas and long periods of time. As observational data to compare against, altimeter winds and waves along the tracks of past and current operational satellites, as well as moored/drifting buoys, can be used for global and regional coverage. Using data and model runs from previous trials, such as the Dynamics of the Adriatic in Real Time (DART) experiment, we demonstrated the use of altimeter wind and wave data accumulated over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment provided detailed performance of the wind and wave models using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index to summarize model performance. Such a methodology is easily generalized to other regions and to global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. The recent inclusion of altimeter and buoy data into a common format through the Naval Oceanographic Office's (NAVOCEANO) quality control system, together with the netCDF standards applicable to all model output, makes the fusion of these data and direct model verification possible. In addition, procedures were developed for the accumulation of match-ups of modelled and observed parameters to form a database.
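
    The cell-averaged verification statistics mentioned in the abstract (bias, slope, correlation, scatter index) can be sketched as below. This is an illustrative computation using common textbook definitions from wave-model validation with synthetic numbers; it is not NAVOCEANO's actual tooling, and the helper name is hypothetical.

```python
import numpy as np

def verification_stats(model, obs):
    """Common wave/wind verification statistics (hypothetical helper;
    definitions follow standard practice in wave-model validation)."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    scatter_index = rmse / np.mean(obs)           # RMSE normalised by mean obs
    correlation = np.corrcoef(model, obs)[0, 1]
    slope = np.sum(model * obs) / np.sum(obs ** 2)  # regression through the origin
    return {"bias": bias, "rmse": rmse, "si": scatter_index,
            "r": correlation, "slope": slope}

# Example: a model that slightly overestimates significant wave height
obs = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
model = obs * 1.1 + 0.05
stats = verification_stats(model, obs)
print({k: round(v, 3) for k, v in stats.items()})
```

    Computed per grid cell over accumulated satellite/buoy match-ups, these quantities yield exactly the kind of spatial statistics maps the abstract describes.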

  12. Continued development of modeling tools and theory for RF heating

    International Nuclear Information System (INIS)

    1998-01-01

    Mission Research Corporation (MRC) is pleased to present the Department of Energy (DOE) with its renewal proposal to the Continued Development of Modeling Tools and Theory for RF Heating program. The objective of the program is to continue and extend the earlier work done by the proposed principal investigator in the field of modeling radio-frequency (RF) heating experiments in the large tokamak fusion experiments, particularly the Tokamak Fusion Test Reactor (TFTR) device located at Princeton Plasma Physics Laboratory (PPPL). An integral part of this work is the investigation and, in some cases, resolution of theoretical issues which pertain to accurate modeling. MRC is nearing the successful completion of the specified tasks of the Continued Development of Modeling Tools and Theory for RF Heating project. The following tasks are either completed or nearing completion: (1) anisotropic temperature and rotation upgrades; (2) modeling for relativistic ECRH; (3) further documentation of SHOOT and SPRUCE. As a result of the progress achieved under this project, MRC has been urged to continue this effort. Specifically, during the performance of this project two topics were identified by PPPL personnel as new applications of the existing RF modeling tools. These two topics concern (a) future fast-wave current drive experiments on the large tokamaks, including TFTR, and (b) the interpretation of existing and future RF probe data from TFTR. Addressing each of these topics requires some modification or enhancement of the existing modeling tools, and the first topic requires resolution of certain theoretical issues to produce self-consistent results. This work falls within the scope of the original project and is more suited to the project's renewal than to the initiation of a new project

  13. Metrics and tools for consistent cohort discovery and financial analyses post-transition to ICD-10-CM.

    Science.gov (United States)

    Boyd, Andrew D; Li, Jianrong John; Kenost, Colleen; Joese, Binoy; Yang, Young Min; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A

    2015-05-01

    In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex as 78% of their diagnosis codes are labeled as "convoluted" by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: "identity" (reciprocal), "class-to-subclass," "subclass-to-class," "convoluted," or "no mapping." These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/ Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
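
    The translation-complexity levels named in the abstract can be sketched as a classification over forward (ICD-9 to ICD-10) and backward maps. The mapping tables below are illustrative toys, not real General Equivalence Mapping data, and the decision logic is a simplified reading of the paper's categories.

```python
# Hypothetical sketch: bin ICD-9-CM <-> ICD-10-CM translations into the
# complexity levels named in the paper. The toy maps are illustrative only.

def classify(icd9, fwd, bwd):
    """fwd: icd9 -> set of icd10 codes; bwd: icd10 -> set of icd9 codes."""
    targets = fwd.get(icd9, set())
    if not targets:
        return "no mapping"
    if len(targets) == 1:
        (t,) = targets
        if bwd.get(t, set()) == {icd9}:
            return "identity"            # one-to-one and reciprocal
        return "subclass-to-class"       # several ICD-9 codes collapse into one ICD-10
    # one ICD-9 code maps to several ICD-10 codes
    if all(bwd.get(t, set()) == {icd9} for t in targets):
        return "class-to-subclass"       # clean one-to-many split
    return "convoluted"                  # overlapping, nonreciprocal mappings

fwd = {"250.00": {"E11.9"}, "493.90": {"J45.909"},
       "V72.0": {"Z01.00", "Z01.01"}, "E880.0": {"W10.0XXA", "W10.8XXA"}}
bwd = {"E11.9": {"250.00"}, "J45.909": {"493.90", "493.92"},
       "Z01.00": {"V72.0"}, "Z01.01": {"V72.0"},
       "W10.0XXA": {"E880.0", "E880.1"}, "W10.8XXA": {"E880.0"}}

for code in ["250.00", "493.90", "V72.0", "E880.0", "999.99"]:
    print(code, "->", classify(code, fwd, bwd))
```

    Running such a classifier over a full dually-coded dataset is what makes the proportions of "convoluted" codes quoted in the abstract computable.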

  14. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    Science.gov (United States)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
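
    The core reduction step mentioned above, balanced truncation, can be sketched for a stable system as follows. This is the textbook square-root algorithm under the assumption of a stable, minimal LTI system; the paper's framework additionally handles unstable systems and applies congruence transformations for state consistency, which this sketch omits.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by balanced truncation
    (square-root method). Returns reduced matrices and Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # Hankel singular values in s
    S1 = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S1                         # right projection
    Ti = S1 @ U[:, :r].T @ Lo.T                    # left projection (Ti @ T = I)
    return Ti @ A @ T, Ti @ B, C @ T, s

# Example: reduce a random stable 6-state system to 2 states
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) - 7 * np.eye(6)    # shifted to ensure stability
B = rng.standard_normal((6, 1))
C = rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print(Ar.shape, np.all(np.diff(hsv) <= 0))
```

    The truncation order r is usually chosen where the Hankel singular values drop sharply, which is the decision the paper automates with a genetic algorithm.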

  15. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  16. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.

  17. Toward a consistent modeling framework to assess multi-sectoral climate impacts.

    Science.gov (United States)

    Monier, Erwan; Paltsev, Sergey; Sokolov, Andrei; Chen, Y-H Henry; Gao, Xiang; Ejaz, Qudsia; Couzo, Evan; Schlosser, C Adam; Dutkiewicz, Stephanie; Fant, Charles; Scott, Jeffery; Kicklighter, David; Morris, Jennifer; Jacoby, Henry; Prinn, Ronald; Haigh, Martin

    2018-02-13

    Efforts to estimate the physical and economic impacts of future climate change face substantial challenges. To enrich the currently popular approaches to impact analysis, which involve evaluation of a damage function or multi-model comparisons based on a limited number of standardized scenarios, we propose integrating a geospatially resolved physical representation of impacts into a coupled human-Earth system modeling framework. Large internationally coordinated exercises cannot easily respond to new policy targets, and the implementation of standard scenarios across models, institutions and research communities can yield inconsistent estimates. Here, we argue for a shift toward the use of a self-consistent integrated modeling framework to assess climate impacts, and discuss ways the integrated assessment modeling community can move in this direction. We then demonstrate the capabilities of such a modeling framework by conducting a multi-sectoral assessment of climate impacts under a range of consistent and integrated economic and climate scenarios that are responsive to new policies and business expectations.

  18. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  19. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, R.W.

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more common place there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
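
    The selection step described above, classifying proteins while controlling the false discovery rate, can be sketched with the standard direct-posterior-probability rule once posterior probabilities are in hand. This is a generic Bayesian FDR sketch, not the paper's exact procedure; in the paper, the semi-nonparametric mixture model is what supplies the posteriors.

```python
import numpy as np

def select_by_bayesian_fdr(post_consistent, alpha=0.05):
    """Given posterior probabilities that each protein is 'consistent',
    select the largest set whose estimated Bayesian FDR stays below alpha."""
    p = np.asarray(post_consistent, dtype=float)
    order = np.argsort(-p)                       # most confident proteins first
    local_fdr = 1.0 - p[order]                   # prob. of being inconsistent
    running_fdr = np.cumsum(local_fdr) / np.arange(1, len(p) + 1)
    k = np.searchsorted(running_fdr, alpha, side="right")  # largest prefix under alpha
    return set(order[:k])

posteriors = [0.99, 0.97, 0.95, 0.80, 0.40, 0.10]
print(sorted(select_by_bayesian_fdr(posteriors, alpha=0.05)))  # → [0, 1, 2]
```

    A symmetric rule on 1 - p yields the non-discovery side; balancing the two thresholds is the trade-off the paper controls jointly.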

  20. Model Consistent Pseudo-Observations of Precipitation and Their Use for Bias Correcting Regional Climate Models

    Directory of Open Access Journals (Sweden)

    Peter Berg

    2015-01-01

    Full Text Available Lack of suitable observational data makes bias correction of high space and time resolution regional climate models (RCMs) problematic. We present a method to construct pseudo-observational precipitation data by merging a large-scale-constrained RCM reanalysis downscaling simulation with coarse time and space resolution observations. The large scale constraint synchronizes the inner domain solution to the driving reanalysis model, such that the simulated weather is similar to observations on a monthly time scale. Monthly biases for each single month are corrected to the corresponding month of the observational data, and applied to the finer temporal resolution of the RCM. A low-pass filter is applied to the correction factors to retain the small spatial scale information of the RCM. The method is applied to a 12.5 km RCM simulation and proven successful in producing a reliable pseudo-observational data set. Furthermore, the constructed data set is applied as reference in a quantile mapping bias correction, and is proven skillful in retaining small scale information of the RCM, while still correcting the large scale spatial bias. The proposed method allows bias correction of high resolution model simulations without changing the fine scale spatial features, i.e., retaining the very information required by many impact models.
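
    The monthly correction-plus-low-pass step described above can be sketched as follows. This is an illustrative multiplicative bias correction with a simple spatial smoother and synthetic fields; the paper's full method also involves the large-scale-constrained reanalysis downscaling, which is not reproduced here, and the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def monthly_bias_correct(rcm_fine, rcm_monthly, obs_monthly, filter_size=5):
    """Multiplicative monthly-mean bias correction for precipitation (sketch).
    rcm_fine:               (t, y, x) fine-time-resolution RCM field for one month
    rcm_monthly, obs_monthly: (y, x) monthly means on the same grid
    """
    eps = 1e-6                                    # avoid division by zero in dry cells
    factor = obs_monthly / np.maximum(rcm_monthly, eps)
    # Low-pass filter the correction factors so only the large-scale bias is
    # corrected and the RCM's fine spatial structure is retained.
    factor = uniform_filter(factor, size=filter_size, mode="nearest")
    return rcm_fine * factor[None, :, :]

# Toy example: the RCM has a uniform 20% wet bias
rcm_fine = np.full((24, 10, 10), 1.2)
corrected = monthly_bias_correct(rcm_fine,
                                 np.full((10, 10), 1.2),
                                 np.full((10, 10), 1.0))
print(float(corrected.mean()))
```

    With a spatially varying bias, the filter width controls which scales of the RCM field are treated as bias and which as genuine fine-scale signal.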

  1. Collaboro: a collaborative (meta) modeling tool

    Directory of Open Access Journals (Sweden)

    Javier Luis Cánovas Izquierdo

    2016-10-01

    Full Text Available Software development is becoming more and more collaborative, emphasizing the role of end-users in the development process to make sure the final product will satisfy customer needs. This is especially relevant when developing Domain-Specific Modeling Languages (DSMLs), which are modeling languages specifically designed to carry out the tasks of a particular domain. While end-users are actually the experts of the domain for which a DSML is developed, their participation in the DSML specification process is still rather limited nowadays. In this paper, we propose a more community-aware language development process by enabling the active participation of all community members (both developers and end-users) from the very beginning. Our proposal, called Collaboro, is based on a DSML itself, enabling the representation of change proposals during the language design and the discussion (and trace back) of possible solutions, comments and decisions arising during the collaboration. Collaboro also incorporates a metric-based recommender system to help community members to define high-quality notations for the DSMLs. We also show how Collaboro can be used at the model level to facilitate the collaborative specification of software models. Tool support is available both as an Eclipse plug-in and as a web-based solution.

  2. Self-consistent electronic structure of a model stage-1 graphite acceptor intercalate

    International Nuclear Information System (INIS)

    Campagnoli, G.; Tosatti, E.

    1981-04-01

    A simple but self-consistent LCAO scheme is used to study the π-electronic structure of an idealized stage-1 ordered graphite acceptor intercalate, modeled approximately on C8AsF5. The resulting non-uniform charge population within the carbon plane, band structure, optical and energy loss properties are discussed and compared with available spectroscopic evidence. The calculated total energy is used to estimate migration energy barriers and the intercalate vibration mode frequency. (author)

  3. Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers

    OpenAIRE

    ACHOUR, Nadia; CHATZIGEORGIOU, George; MERAGHNI, Fodil; CHEMISKY, Yves; FITOUSSI, Joseph

    2015-01-01

    In this work, the phenomenological viscoplastic DSGZ model (Duan et al., 2001 [13]), developed for glassy or semi-crystalline polymers, is numerically implemented in a three-dimensional framework, following an implicit formulation. The computational methodology is based on the radial return mapping algorithm. This implicit formulation leads to the definition of the consistent tangent modulus which permits the implementation in incremental micromechanical scale transition analysis. The extende...

  4. Self-consistent Dark Matter simplified models with an s-channel scalar mediator

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W., E-mail: n.bell@unimelb.edu.au, E-mail: giorgio.busoni@unimelb.edu.au, E-mail: isanderson@student.unimelb.edu.au [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Melbourne, Victoria 3010 (Australia)

    2017-03-01

    We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.

  5. Self-consistent Dark Matter simplified models with an s-channel scalar mediator

    International Nuclear Information System (INIS)

    Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.

    2017-01-01

    We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.

  6. Consistent robustness analysis (CRA) identifies biologically relevant properties of regulatory network models.

    Science.gov (United States)

    Saithong, Treenut; Painter, Kevin J; Millar, Andrew J

    2010-12-16

    A number of studies have previously demonstrated that "goodness of fit" is insufficient in reliably classifying the credibility of a biological model. Robustness and/or sensitivity analysis is commonly employed as a secondary method for evaluating the suitability of a particular model. The results of such analyses invariably depend on the particular parameter set tested, yet many parameter values for biological models are uncertain. Here, we propose a novel robustness analysis that aims to determine the "common robustness" of the model with multiple, biologically plausible parameter sets, rather than the local robustness for a particular parameter set. Our method is applied to two published models of the Arabidopsis circadian clock (the one-loop [1] and two-loop [2] models). The results reinforce current findings suggesting the greater reliability of the two-loop model and pinpoint the crucial role of TOC1 in the circadian network. Consistent Robustness Analysis can indicate both the relative plausibility of different models and also the critical components and processes controlling each model.
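
    The "common robustness" idea above, evaluating robustness over many plausible parameter sets rather than one, can be sketched generically. The model and parameter ranges below are toys, not the Arabidopsis clock models from the paper, and the sensitivity measure is one simple choice among many.

```python
import numpy as np

def consistent_robustness(model, base_sets, delta=0.05):
    """Average robustness of `model` over many plausible parameter sets.
    For each set, perturb each parameter by +/-delta and record the relative
    change in output; robustness is the inverse of the mean sensitivity.
    A generic sketch of the idea, not the paper's exact CRA algorithm."""
    sensitivities = []
    for params in base_sets:
        y0 = model(params)
        s = []
        for i in range(len(params)):
            for sign in (+1, -1):
                p = np.array(params, dtype=float)
                p[i] *= 1.0 + sign * delta
                s.append(abs(model(p) - y0) / (abs(y0) + 1e-12))
        sensitivities.append(np.mean(s))
    return 1.0 / (np.mean(sensitivities) + 1e-12)

# Toy "clock" model: the output period depends weakly on p[0], strongly on p[1]
model = lambda p: 24.0 + 0.1 * p[0] + 5.0 * p[1]
rng = np.random.default_rng(1)
base_sets = rng.uniform(0.5, 1.5, size=(20, 2))   # 20 plausible parameter sets
print(round(consistent_robustness(model, base_sets), 2))
```

    Averaging over many sampled sets is what distinguishes this from a local robustness analysis anchored at a single fitted parameter set.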

  7. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    Science.gov (United States)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), while providing a consistent means of including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium in developing the model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.

  8. Alfven-wave particle interaction in finite-dimensional self-consistent field model

    International Nuclear Information System (INIS)

    Padhye, N.; Horton, W.

    1998-01-01

    A low-dimensional Hamiltonian model is derived for the acceleration of ions in finite-amplitude Alfvén waves in a finite-pressure plasma sheet. The reduced low-dimensional wave-particle Hamiltonian is useful for describing the reaction of the accelerated ions on the wave amplitudes and phases through the self-consistent fields within the envelope approximation. As an example, the authors show, for a single Alfvén wave in the central plasma sheet of the Earth's geotail, modeled by the linear pinch geometry called the Harris sheet, the time variation of the wave amplitude during the acceleration of fast protons

  9. Interstellar turbulence model : A self-consistent coupling of plasma and neutral fluids

    International Nuclear Information System (INIS)

    Shaikh, Dastgeer; Zank, Gary P.; Pogorelov, Nikolai

    2006-01-01

    We present results of a preliminary investigation of interstellar turbulence based on a self-consistent two-dimensional fluid simulation model. Our model describes a partially ionized magnetofluid interstellar medium (ISM) that couples a neutral hydrogen fluid to a plasma through charge exchange interactions and assumes that the ISM turbulent correlation scales are much bigger than the shock characteristic length-scales, but smaller than the charge exchange mean free path length-scales. The shocks have no influence on the ISM turbulent fluctuations. We find that nonlinear interactions in coupled plasma-neutral ISM turbulence are influenced substantially by charge exchange processes

  10. Self-consistent nonlinearly polarizable shell-model dynamics for ferroelectric materials

    International Nuclear Information System (INIS)

    Mkam Tchouobiap, S.E.; Kofane, T.C.; Ngabireng, C.M.

    2002-11-01

    We investigate the dynamical properties of the polarizable shell model with a symmetric double Morse-type electron-ion interaction in one ionic species. A variational calculation based on the Self-Consistent Einstein Model (SCEM) shows that a theoretical ferroelectric (FE) transition temperature can be derived which demonstrates the presence of a first-order phase transition for the potassium selenate (K2SeO4) crystal around Tc ≈ 91.5 K. Comparison of the model calculation with the experimental critical temperature yields satisfactory agreement. (author)

  11. Development of a self-consistent lightning NOx simulation in large-scale 3-D models

    Science.gov (United States)

    Luo, Chao; Wang, Yuhang; Koshak, William J.

    2017-03-01

    We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.
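
    The predictor-screening step described above (finding which convective variables best estimate observed flash rates) can be sketched as a simple regression ranking. All data below are synthetic and the variable relationships are invented for illustration; the real study compared parameterizations against July 2010 ground-based lightning network observations.

```python
import numpy as np

# Synthetic candidate predictors and "observed" flash rates (illustrative only;
# by construction cloud-top height is the dominant predictor here).
rng = np.random.default_rng(42)
n = 200
cape = rng.uniform(100, 3000, n)        # J/kg
cloud_top = rng.uniform(4, 16, n)       # km
shear = rng.uniform(0, 30, n)           # m/s (deliberately uninformative)
flash_rate = 0.002 * cape + 1.5 * cloud_top + rng.normal(0, 2, n)

def r_squared(x, y):
    """Coefficient of determination of a 1-D least-squares fit y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1.0 - resid.var() / y.var()

scores = {name: r_squared(x, flash_rate)
          for name, x in [("CAPE", cape), ("cloud_top", cloud_top), ("shear", shear)]}
best = max(scores, key=scores.get)
print(best, {k: round(v, 2) for k, v in scores.items()})
```

    Ranking a suite of such candidate variables against observations is the screening exercise from which the paper singles out CAPE and cloud-top height.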

  12. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
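The hyperbolic (hindered-settling) part of the PDE above can be discretized with a first-order monotone Godunov scheme, in the spirit of the reliable methods described; the sketch below uses a Vesilind-type batch flux in a closed column and omits the singular feed term, compression, and dispersion handled by the full method.

```python
import numpy as np

def simulate_settling(c0, v0=1e-3, rh=0.4, dz=0.1, dt=10.0, steps=200):
    """Godunov finite-volume scheme for the hindered-settling conservation law
    dc/dt + d f(c)/dz = 0 in a closed column (z increasing downwards).
    f(c) = v0*c*exp(-rh*c) is a Vesilind-type flux, one illustrative choice
    among the constitutive laws the consistent methodology admits."""
    f = lambda c: v0 * c * np.exp(-rh * c)
    cstar = 1.0 / rh                       # location of the flux maximum
    c = np.asarray(c0, dtype=float).copy()
    F = np.zeros(c.size + 1)               # interface fluxes; closed tank: F[0] = F[-1] = 0
    for _ in range(steps):
        # Godunov flux for a unimodal f: min(f(min(cl, c*)), f(max(cr, c*)))
        F[1:-1] = np.minimum(f(np.minimum(c[:-1], cstar)),
                             f(np.maximum(c[1:], cstar)))
        c -= (dt / dz) * (F[1:] - F[:-1])  # conservative update, mass-preserving
    return c
```

Because the scheme is conservative and the CFL number here is about 0.1, total mass is preserved to round-off and concentrations stay non-negative while solids accumulate at the bottom.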

  13. Collaborative Inquiry Learning: Models, tools, and challenges

    Science.gov (United States)

    Bell, Thorsten; Urhahne, Detlef; Schanze, Sascha; Ploetzner, Rolf

    2010-02-01

    Collaborative inquiry learning is one of the most challenging and exciting ventures for today's schools. It aims at bringing a new and promising culture of teaching and learning into the classroom where students in groups engage in self-regulated learning activities supported by the teacher. It is expected that this way of learning fosters students' motivation and interest in science, that they learn to perform steps of inquiry similar to scientists and that they gain knowledge on scientific processes. Starting from general pedagogical reflections and science standards, the article reviews some prominent models of inquiry learning. This comparison results in a set of inquiry processes being the basis for cooperation in the scientific network NetCoIL. Inquiry learning is conceived in several ways with emphasis on different processes. For an illustration of the spectrum, some main conceptions of inquiry and their focuses are described. In the next step, the article describes exemplary computer tools and environments from within and outside the NetCoIL network that were designed to support processes of collaborative inquiry learning. These tools are analysed by describing their functionalities as well as effects on student learning known from the literature. The article closes with challenges for further developments elaborated by the NetCoIL network.

  14. Metabolic engineering tools in model cyanobacteria.

    Science.gov (United States)

    Carroll, Austin L; Case, Anna E; Zhang, Angela; Atsumi, Shota

    2018-03-26

    Developing sustainable routes for producing chemicals and fuels is one of the most important challenges in metabolic engineering. Photoautotrophic hosts are particularly attractive because of their potential to utilize light as an energy source and CO2 as a carbon substrate through photosynthesis. Cyanobacteria are unicellular organisms capable of photosynthesis and CO2 fixation. While engineering in heterotrophs, such as Escherichia coli, has resulted in a plethora of tools for strain development and hosts capable of producing valuable chemicals efficiently, these techniques are not always directly transferable to cyanobacteria. However, recent efforts have led to an increase in the scope and scale of chemicals that cyanobacteria can produce. Adaptations of important metabolic engineering tools have also been optimized to function in photoautotrophic hosts, which include Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-Cas9, 13C Metabolic Flux Analysis (MFA), and Genome-Scale Modeling (GSM). This review explores innovations in cyanobacterial metabolic engineering, and highlights how photoautotrophic metabolism has shaped their development. Copyright © 2018 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  15. Overlap function and Regge cut in a self-consistent multi-Regge model

    International Nuclear Information System (INIS)

    Banerjee, H.; Mallik, S.

    1977-01-01

    A self-consistent multi-Regge model with unit intercept for the input trajectory is presented. Violation of unitarity is avoided in the model by assuming the vanishing of the pomeron-pomeron-hadron vertex, as the mass of either pomeron tends to zero. The model yields an output Regge pole in the inelastic overlap function which for t>0 lies on the r.h.s. of the moving branch point in the complex J-plane, but for t<0 moves to unphysical sheets. The leading Regge-cut contribution to the forward diffraction amplitude can be negative, so that the total cross section predicted by the model attains a limiting value from below.

  16. Overlap function and Regge cut in a self-consistent multi-Regge model

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, H [Saha Inst. of Nuclear Physics, Calcutta (India); Mallik, S [Bern Univ. (Switzerland). Inst. fuer Theoretische Physik

    1977-04-21

    A self-consistent multi-Regge model with unit intercept for the input trajectory is presented. Violation of unitarity is avoided in the model by assuming the vanishing of the pomeron-pomeron-hadron vertex, as the mass of either pomeron tends to zero. The model yields an output Regge pole in the inelastic overlap function which for t>0 lies on the r.h.s. of the moving branch point in the complex J-plane, but for t<0 moves to unphysical sheets. The leading Regge-cut contribution to the forward diffraction amplitude can be negative, so that the total cross section predicted by the model attains a limiting value from below.

  17. An Ice Model That is Consistent with Composite Rheology in GIA Modelling

    Science.gov (United States)

    Huang, P.; Patrick, W.

    2017-12-01

    There are several popular approaches in constructing ice history models. One of them is mainly based on thermo-mechanical ice models with forcing or boundary conditions inferred from paleoclimate data. The second one is mainly based on the observed response of the Earth to glacial loading and unloading, a process called Glacial Isostatic Adjustment or GIA. The third approach is a hybrid version of the first and second approaches. In this presentation, we will follow the second approach, which also uses geological data such as ice flow and terminal moraine data and simple ice dynamics for the ice sheet re-construction (Peltier & Andrews 1976). The global ice model ICE-6G (Peltier et al. 2015) and all its predecessors (Tushingham & Peltier 1991, Peltier 1994, 1996, 2004, Lambeck et al. 2014) are constructed this way with the assumption that mantle rheology is linear. However, high temperature creep experiments on mantle rocks show that non-linear creep laws can also operate in the mantle. Since both linear (e.g. diffusion creep) and non-linear (e.g. dislocation creep) laws can operate simultaneously in the mantle, mantle rheology is likely composite, where the total creep is the sum of both linear and non-linear creep. Preliminary GIA studies found that composite rheology can fit regional RSL observations better than linear rheology (e.g. van der Wal et al. 2010). The aim of this paper is to construct ice models in Laurentia and Fennoscandia using this second approach, but with composite rheology, so that its predictions can fit GIA observations such as global RSL data, land uplift rate and g-dot simultaneously, in addition to geological data and simple ice dynamics. The g-dot or gravity-rate-of-change data is from the GRACE gravity mission but with the effects of hydrology removed. Our GIA model is based on the Coupled Laplace-Finite Element method as described in Wu (2004) and van der Wal et al. (2010).
It is found that composite rheology generally supports a thicker

  18. Possible world based consistency learning model for clustering and classifying uncertain data.

    Science.gov (United States)

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    Possible worlds have been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, so their effectiveness relies heavily on the post-processing method and their efficiency is also limited. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and then improve the clustering and classification performance. Meanwhile, this model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be directly obtained without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive the efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real world uncertain datasets show that the proposed model outperforms the state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
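The rank constraint on the Laplacian rests on a standard spectral fact: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components of the affinity graph. A minimal illustration of that fact (not the paper's optimization algorithm):

```python
import numpy as np

def n_components(W, tol=1e-8):
    """Count connected components of the graph with affinity matrix W via
    the multiplicity of the zero eigenvalue of its unnormalized Laplacian."""
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian L = D - W
    eigvals = np.linalg.eigvalsh(L)         # L is symmetric, eigenvalues >= 0
    return int(np.sum(eigvals < tol))

# Block-diagonal affinity: two components, so cluster labels can be read off
# directly with no post-processing, which is the point of the rank constraint.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
```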

  19. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media.

    Science.gov (United States)

    Ma, Manman; Xu, Zhenli

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of the self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. Asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict the charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond the mean-field theory, and also show a strong effect on the double-layer structure due to the space- or field-dependent dielectric permittivity.

  20. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Manman, E-mail: mmm@sjtu.edu.cn; Xu, Zhenli, E-mail: xuzl@sjtu.edu.cn [Department of Mathematics, Institute of Natural Sciences, and MoE Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of the self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. Asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict the charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond the mean-field theory, and also show a strong effect on the double-layer structure due to the space- or field-dependent dielectric permittivity.

  1. Traffic Multiresolution Modeling and Consistency Analysis of Urban Expressway Based on Asynchronous Integration Strategy

    Directory of Open Access Journals (Sweden)

    Liyan Zhang

    2017-01-01

    Full Text Available The paper studies a multiresolution traffic flow simulation model of an urban expressway. Firstly, compared with the two-level hybrid model, a three-level multiresolution hybrid model has been chosen. Then, the multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on Set Theory, which includes three submodels: macromodel, mesomodel, and micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a “simple simulation scenario” of the North-South Elevated Expressway in Shanghai has been established. The simulation results showed the following. (1) Volume-density relationships of the three submodels are consistent with detector data. (2) When traffic density is high, the macromodel has high precision with smaller error, and the dispersion of results is smaller. Compared with the macromodel, the simulation accuracies of the micromodel and mesomodel are lower and their errors are bigger. (3) The multiresolution model can simulate characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transition. Finally, the results showed that the novel multiresolution model can achieve higher simulation accuracy and is feasible and effective in a real traffic simulation scenario.
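The volume-density relationship mentioned in the results can be illustrated with the classic Greenshields fundamental diagram; this particular form is an assumption for illustration, since the paper does not specify the exact diagrams used in its submodels.

```python
def greenshields_flow(k, vf=60.0, kj=120.0):
    """Greenshields fundamental diagram: q = k * vf * (1 - k/kj).

    k:  density (veh/km), vf: free-flow speed (km/h), kj: jam density (veh/km)
    Returns flow q in veh/h; flow is zero at k = 0 and k = kj and peaks
    at the critical density k = kj/2.
    """
    return k * vf * (1.0 - k / kj)
```

A macroscopic submodel calibrated against detector data would fit vf and kj so that simulated volume-density pairs fall on such a curve.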

  2. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

    Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiative developing tools for clinical information modeling identified, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.
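Scoring tools against a set of conformance criteria, as in this evaluation, reduces to a simple weighted aggregation; the criterion names and weights in this sketch are hypothetical, not the 50 criteria defined in the study.

```python
def conformance_score(results, weights=None):
    """Aggregate pass/fail results on conformance criteria into a 0-100 score.

    results: dict mapping criterion name -> bool (criterion satisfied)
    weights: optional dict mapping criterion name -> weight (default 1.0)
    """
    weights = weights or {}
    total = sum(weights.get(name, 1.0) for name in results)
    passed = sum(weights.get(name, 1.0) for name, ok in results.items() if ok)
    return 100.0 * passed / total if total else 0.0

# Hypothetical criteria for one tool under evaluation
score = conformance_score({"data_types": True, "terminology_binding": True,
                           "metadata": False, "versioning": True})
```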

  3. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is in high demand. A first principles kinetic model was developed to describe and understand the biological, physical, and chemical mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available and an initial parameter estimation of the complete set of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analysis were completed and a relevant identifiable subset of parameters was determined for a new…
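A local sensitivity analysis of the kind used to select an identifiable parameter subset can be sketched with central finite differences; the logistic growth model below is a simple stand-in for the full fermentation model, and the parameter values are illustrative.

```python
import math

def logistic(t, mu, x0, xmax):
    """Logistic biomass growth x(t); a simple stand-in for the kinetic model."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * math.exp(-mu * t))

def rel_sensitivity(t, params, name, h=1e-6):
    """Central finite-difference relative sensitivity (dx/dp)*(p/x) of the
    model output at time t with respect to a single parameter."""
    up, down = dict(params), dict(params)
    up[name] *= 1.0 + h
    down[name] *= 1.0 - h
    dy = logistic(t, **up) - logistic(t, **down)
    return dy / (2.0 * h * params[name]) * params[name] / logistic(t, **params)
```

Ranking such sensitivities across the experiment horizon is one common way to pick a subset of parameters that the data can actually identify: here, mid-exponential output responds far more strongly to the growth rate mu than to the capacity xmax.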

  4. Consistent phase-change modeling for CO2-based heat mining operation

    DEFF Research Database (Denmark)

    Singh, Ashok Kumar; Veje, Christian

    2017-01-01

    The accuracy of mathematical modeling of phase-change phenomena is limited if a simple, less accurate equation of state completes the governing partial differential equation. However, fluid properties (such as density, dynamic viscosity and compressibility) and saturation state are calculated using a highly accurate, complex equation of state. This leads to unstable and inaccurate simulation as the equation of state and governing partial differential equations are mutually inconsistent. In this study, the volume-translated Peng–Robinson equation of state was used with emphasis to model the liquid–gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state were based on the volume-translated Peng–Robinson equation of state and results verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper…
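The volume-translated Peng–Robinson equation of state can be sketched as follows; the constant Péneloux-style shift c is an illustrative translation, not the specific correlation used in the paper, and the CO2 critical constants are standard textbook values.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pr_molar_volume(T, P, Tc, Pc, omega, c=0.0):
    """Molar volume (m^3/mol) from the Peng-Robinson EOS with a constant
    volume translation c (Peneloux-style shift; illustrative assumption)."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # Cubic in the compressibility factor Z:
    # Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0*B**2 - 2.0*B, -(A*B - B**2 - B**3)]
    Z = max(z.real for z in np.roots(coeffs) if abs(z.imag) < 1e-10)  # gas root
    return Z * R * T / P - c  # translated molar volume

# CO2 (Tc = 304.13 K, Pc = 7.377 MPa, omega = 0.224) at 350 K and 1 bar,
# where the gas should be close to ideal (v near RT/P ~ 0.029 m^3/mol)
v = pr_molar_volume(350.0, 1e5, 304.13, 7.377e6, 0.224)
```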

  5. Simulation of recrystallization textures in FCC materials based on a self consistent model

    International Nuclear Information System (INIS)

    Bolmaro, R.E; Roatta, A; Fourty, A.L; Signorelli, J.W; Bertinetti, M.A

    2004-01-01

    The development of recrystallization textures in FCC polycrystalline materials has been a long-standing scientific problem. The appearance of the so-called cube component in rolled high stacking-fault-energy FCC materials is not an entirely understood phenomenon. This work approaches the problem using a self-consistent homogenization simulation technique. The information on first preferential neighbors is used in the model to account for grain boundary energies and intragranular misorientations and to treat grain growth and grain boundary mobility. The stored deformation energies are taken as the driving energies for nucleation, and the subsequent growth is statistically governed by the grain boundary energies. The model shows the correct trend for recrystallization textures obtained from previously simulated deformation textures for high and low stacking-fault-energy FCC materials. The model's topological representation is discussed. (CW)

  6. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Shafieloo, Arman, E-mail: amir@apctp.org, E-mail: shafieloo@kasi.re.kr [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of)

    2015-06-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit ΛCDM model at 95% (∼ 2σ) confidence distance from the center of the nonparametric confidence set, while repeating the analysis excluding the Planck 217 × 217 GHz spectrum data, the best fit ΛCDM model shifts to 70% (∼ 1σ) confidence distance. The most prominent features in the data deviating from the best fit ΛCDM model seem to be at low multipoles 18 < ℓ < 26 at greater than 2σ, ℓ ∼ 750 at ∼1 to 2σ, and ℓ ∼ 1800 at greater than 2σ level. Excluding the 217×217 GHz spectrum, the feature at ℓ ∼ 1800 becomes substantially less significant, at the ∼1 to 2σ confidence level. Results of our analysis based on the new approach we propose in this work are in agreement with other analyses done using alternative methods.

  7. Commensurate comparisons of models with energy budget observations reveal consistent climate sensitivities

    Science.gov (United States)

    Armour, K.

    2017-12-01

    Global energy budget observations have been widely used to constrain the effective, or instantaneous climate sensitivity (ICS), producing median estimates around 2°C (Otto et al. 2013; Lewis & Curry 2015). A key question is whether the comprehensive climate models used to project future warming are consistent with these energy budget estimates of ICS. Yet, performing such comparisons has proven challenging. Within models, values of ICS robustly vary over time, as surface temperature patterns evolve with transient warming, and are generally smaller than the values of equilibrium climate sensitivity (ECS). Naively comparing values of ECS in CMIP5 models (median of about 3.4°C) to observation-based values of ICS has led to the suggestion that models are overly sensitive. This apparent discrepancy can partially be resolved by (i) comparing observation-based values of ICS to model values of ICS relevant for historical warming (Armour 2017; Proistosescu & Huybers 2017); (ii) taking into account the "efficacies" of non-CO2 radiative forcing agents (Marvel et al. 2015); and (iii) accounting for the sparseness of historical temperature observations and differences in sea-surface temperature and near-surface air temperature over the oceans (Richardson et al. 2016). Another potential source of discrepancy is a mismatch between observed and simulated surface temperature patterns over recent decades, due to either natural variability or model deficiencies in simulating historical warming patterns. The nature of the mismatch is such that simulated patterns can lead to more positive radiative feedbacks (higher ICS) relative to those engendered by observed patterns. The magnitude of this effect has not yet been addressed. Here we outline an approach to perform fully commensurate comparisons of climate models with global energy budget observations that take all of the above effects into account. We find that when apples-to-apples comparisons are made, values of ICS in models are

  8. Are water simulation models consistent with steady-state and ultrafast vibrational spectroscopy experiments?

    International Nuclear Information System (INIS)

    Schmidt, J.R.; Roberts, S.T.; Loparo, J.J.; Tokmakoff, A.; Fayer, M.D.; Skinner, J.L.

    2007-01-01

    Vibrational spectroscopy can provide important information about structure and dynamics in liquids. In the case of liquid water, this is particularly true for isotopically dilute HOD/D2O and HOD/H2O systems. Infrared and Raman line shapes for these systems were measured some time ago. Very recently, ultrafast three-pulse vibrational echo experiments have been performed on these systems, which provide new, exciting, and important dynamical benchmarks for liquid water. There has been tremendous theoretical effort expended on the development of classical simulation models for liquid water. These models have been parameterized from experimental structural and thermodynamic measurements. The goal of this paper is to determine if representative simulation models are consistent with steady-state, and especially with these new ultrafast, experiments. Such a comparison provides information about the accuracy of the dynamics of these simulation models. We perform this comparison using theoretical methods developed in previous papers, and calculate the experimental observables directly, without making the Condon and cumulant approximations, and taking into account molecular rotation, vibrational relaxation, and finite excitation pulses. On the whole, the simulation models do remarkably well; perhaps the best overall agreement with experiment comes from the SPC/E model.

  9. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C.; Vignoles, Vivian L.

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias. PMID:29681878

  10. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Directory of Open Access Journals (Sweden)

    Jenny Roth

    2018-04-01

    Full Text Available The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  11. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles.

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C; Vignoles, Vivian L

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  12. A Time consistent model for monetary value of man-sievert

    International Nuclear Information System (INIS)

    Na, S.H.; Kim, Sun G.

    2008-01-01

    Full text: Performing a cost-benefit analysis to establish optimum levels of radiation protection under the ALARA principle, we introduce a discrete stepwise model to evaluate the man-sievert monetary value for Korea. The model formula, which is unique and country-specific, is composed of GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and the average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values optimized with respect to various ranges of individual dose, which is more realistic and applicable in the radiation protection area. Employing GDP in economically constant terms, we show the real values of man-sievert by year, which remain consistent in time-series comparison even under price-level fluctuation. The GDP deflator of an economy has to be applied to measure a consistent national value of radiation protection by year. In addition, we recommend that the concept of purchasing power parity be adopted when international comparison of alpha values in real terms is needed. Finally, we explain how this stepwise model can be generalized simply to other countries without normalizing any country-specific factors. (author)
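The discrete stepwise structure described in this abstract can be illustrated with a toy sketch. The dose bands, aversion factors, and the multiplicative form below are hypothetical stand-ins, not the paper's country-specific formula (which also involves average life expectancy):

```python
# Hypothetical annual individual-dose bands (mSv) and aversion factors;
# the real model's bands and values are country-specific and not given here.
AVERSION_STEPS = [(1.0, 1.0), (10.0, 2.0), (20.0, 5.0), (float("inf"), 10.0)]

def alpha_value(gdp_per_capita, risk_coeff, dose_msv):
    """Stepwise monetary value of one man-sievert: a base value (GDP per
    capita times the nominal risk coefficient) scaled by an aversion factor
    that grows with the individual-dose band."""
    base = gdp_per_capita * risk_coeff
    for upper, aversion in AVERSION_STEPS:
        if dose_msv <= upper:
            return base * aversion
    raise ValueError("unreachable: last band is open-ended")

# Higher individual doses fall in higher bands and get a larger alpha
low = alpha_value(30000.0, 0.057, 0.5)    # first band
high = alpha_value(30000.0, 0.057, 15.0)  # third band
```

The stepwise lookup is what distinguishes this from a single fixed alpha: the same economy-wide base value is scaled differently depending on which dose band an exposure falls into.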

  13. Self-consistent nonlinear transmission line model of standing wave effects in a capacitive discharge

    International Nuclear Information System (INIS)

    Chabert, P.; Raimbault, J.L.; Rax, J.M.; Lieberman, M.A.

    2004-01-01

    It has been shown previously [Lieberman et al., Plasma Sources Sci. Technol. 11, 283 (2002)], using a non-self-consistent model based on solutions of Maxwell's equations, that several electromagnetic effects may compromise capacitive discharge uniformity. Among these, the standing wave effect dominates at low and moderate electron densities when the driving frequency is significantly greater than the usual 13.56 MHz. In the present work, two different global discharge models have been coupled to a transmission line model and used to obtain the self-consistent characteristics of the standing wave effect. An analytical solution for the wavelength λ was derived for the lossless case and compared to the numerical results. For typical plasma etching conditions (pressure 10-100 mTorr), a good approximation of the wavelength is λ/λ0 ≅ 40 V0^(1/10) l^(-1/2) f^(-2/5), where λ0 is the wavelength in vacuum, V0 is the rf voltage magnitude in volts at the discharge center, l is the electrode spacing in meters, and f is the driving frequency in hertz.
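The approximate wavelength scaling quoted in this abstract is easy to evaluate numerically. A minimal sketch, with illustrative operating values that are not taken from the paper:

```python
def wavelength_ratio(V0, l, f):
    """Approximate ratio of the in-plasma standing-wave wavelength to the
    vacuum wavelength, lambda/lambda0 ~= 40 * V0**(1/10) * l**(-1/2) * f**(-2/5),
    quoted for typical etching conditions (10-100 mTorr).

    V0 : rf voltage magnitude at the discharge center [V]
    l  : electrode spacing [m]
    f  : driving frequency [Hz]
    """
    return 40.0 * V0**0.1 * l**-0.5 * f**-0.4

# Illustrative values (not from the paper): 100 V, 3 cm gap, 60 MHz drive
ratio = wavelength_ratio(100.0, 0.03, 60e6)
```

Note the weak dependence on voltage (exponent 1/10) compared with frequency (exponent -2/5): doubling the drive frequency shortens the in-plasma wavelength noticeably, which is why the standing wave effect worsens at very high frequency excitation.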

  14. Validity test and its consistency in the construction of patient loyalty model

    Science.gov (United States)

    Yanuar, Ferra

    2016-04-01

    The main objective of the present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was then applied to empirical data for the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, and each factor was measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents with complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant in measuring their corresponding latent variable. Service quality is best measured by tangibles, patient satisfaction is best measured by satisfaction with service, and patient loyalty is best measured by good service quality. Meanwhile, in the structural equations, this study found that patient loyalty was affected positively and directly by patient satisfaction. Service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also showed that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.

  15. Dynamic consistency of leader/fringe models of exhaustible resource markets

    International Nuclear Information System (INIS)

    Pelot, R.P.

    1990-01-01

    A dynamic feedback pricing model is developed for a leader/fringe supply market of exhaustible resources. The discrete game optimization model includes marginal costs which may be quadratic functions of cumulative production, a linear demand curve and variable length periods. The multiperiod formulation is based on the nesting of later periods' Kuhn-Tucker conditions into earlier periods' optimizations. This procedure leads to dynamically consistent solutions where the leader's strategy is credible as he has no incentive to alter his original plan at some later stage. A static leader-fringe model may yield multiple local optima. This can result in the leader forcing the fringe to produce at their capacity constraint, which would otherwise be non-binding if it is greater than the fringe's unconstrained optimal production rate. Conditions are developed where the optimal solution occurs at a corner where constraints meet, of which limit pricing is a special case. The 2-period leader/fringe feedback model is compared to the computationally simpler open-loop model. Under certain conditions, the open-loop model yields the same result as the feedback model. A multiperiod feedback model of the world oil market with OPEC as price-leader and the remaining world oil suppliers comprising the fringe is compared with the open-loop solution. The optimal profits and prices are very similar, but large differences in production rates may occur. The exhaustion date predicted by the open-loop model may also differ from the feedback outcome. Some numerical tests result in non-contiguous production periods for a player or limit pricing phases. 85 refs., 60 figs., 30 tabs

  16. Numerical modelling of tool wear in turning with cemented carbide cutting tools

    Science.gov (United States)

    Franco, P.; Estrems, M.; Faura, F.

    2007-04-01

    A numerical model is proposed for analysing the flank and crater wear resulting from the loss of material on cutting tool surface in turning processes due to wear mechanisms of adhesion, abrasion and fracture. By means of this model, the material loss along cutting tool surface can be analysed, and the worn surface shape during the workpiece machining can be determined. The proposed model analyses the gradual degradation of cutting tool during turning operation, and tool wear can be estimated as a function of cutting time. Wear-land width (VB) and crater depth (KT) can be obtained for description of material loss on cutting tool surface, and the effects of the distinct wear mechanisms on surface shape can be studied. The parameters required for the tool wear model are obtained from the literature and from experimental observation for AISI 4340 steel turning with WC-Co cutting tools.

  17. Numerical modelling of tool wear in turning with cemented carbide cutting tools

    International Nuclear Information System (INIS)

    Franco, P.; Estrems, M.; Faura, F.

    2007-01-01

    A numerical model is proposed for analysing the flank and crater wear resulting from the loss of material on cutting tool surface in turning processes due to wear mechanisms of adhesion, abrasion and fracture. By means of this model, the material loss along cutting tool surface can be analysed, and the worn surface shape during the workpiece machining can be determined. The proposed model analyses the gradual degradation of cutting tool during turning operation, and tool wear can be estimated as a function of cutting time. Wear-land width (VB) and crater depth (KT) can be obtained for description of material loss on cutting tool surface, and the effects of the distinct wear mechanisms on surface shape can be studied. The parameters required for the tool wear model are obtained from the literature and from experimental observation for AISI 4340 steel turning with WC-Co cutting tools
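The idea of estimating tool wear as a function of cutting time can be sketched as a simple rate-law integration. The functional form and constants below are hypothetical placeholders, not the authors' adhesion/abrasion/fracture laws:

```python
import math

def wear_land_width(t_end, dt, k_abrasion, k_adhesion, e_act, t_tool):
    """Integrate a generic flank-wear rate law, dVB/dt = k_ab + k_ad*exp(-E/T),
    over the cutting time by explicit Euler stepping. VB grows monotonically
    with cutting time, mirroring the gradual degradation the abstract describes.

    t_end      : total cutting time [s]
    dt         : integration step [s]
    k_abrasion : abrasive wear rate constant (hypothetical) [mm/s]
    k_adhesion : adhesive wear pre-factor (hypothetical) [mm/s]
    e_act      : activation temperature for adhesion (hypothetical) [K]
    t_tool     : tool-chip interface temperature (hypothetical) [K]
    """
    vb, t = 0.0, 0.0
    rate = k_abrasion + k_adhesion * math.exp(-e_act / t_tool)
    while t < t_end - 1e-12:
        vb += rate * dt
        t += dt
    return vb

# Longer cutting time -> wider wear land (all values illustrative only)
vb_5min = wear_land_width(300.0, 1.0, 1e-4, 5e-2, 9000.0, 1100.0)
vb_10min = wear_land_width(600.0, 1.0, 1e-4, 5e-2, 9000.0, 1100.0)
```

A real implementation would make the rate depend on the evolving contact geometry and temperature at each step; the constant-rate loop above only shows the time-integration skeleton.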

  18. Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics

    Science.gov (United States)

    Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.

    2017-11-01

    We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.

  19. A self-consistent model of the three-phase interstellar medium in disk galaxies

    International Nuclear Information System (INIS)

    Wang, Z.

    1989-01-01

    In the present study the author analyzes a number of physical processes concerning velocity and spatial distributions, ionization structure, pressure variation, mass and energy balance, and equation of state of the diffuse interstellar gas in a three phase model. He also considers the effects of this model on the formation of molecular clouds and the evolution of disk galaxies. The primary purpose is to incorporate self-consistently the interstellar conditions in a typical late-type galaxy, and to relate these to various observed large-scale phenomena. He models idealized situations both analytically and numerically, and compares the results with observational data of the Milky Way Galaxy and other nearby disk galaxies. Several main conclusions of this study are: (1) the highly ionized gas found in the lower Galactic halo is shown to be consistent with a model in which the gas is photoionized by the diffuse ultraviolet radiation; (2) in a quasi-static and self-regulatory configuration, the photoelectric effects of interstellar grains are primarily responsible for heating the cold (T ≅ 100K) gas; the warm (T ≅ 8,000K) gas may be heated by supernova remnants and other mechanisms; (3) the large-scale atomic and molecular gas distributions in a sample of 15 disk galaxies can be well explained if molecular cloud formation and star formation follow a modified Schmidt Law; a scaling law for the radial gas profiles is proposed based on this model, and it is shown to be applicable to the nearby late-type galaxies where radio mapping data is available; for disk galaxies of earlier type, the effect of their massive central bulges may have to be taken into account

  20. RPA method based on the self-consistent cranking model for 168Er and 158Dy

    International Nuclear Information System (INIS)

    Kvasil, J.; Cwiok, S.; Chariev, M.M.; Choriev, B.

    1983-01-01

    The low-lying nuclear states in 168Er and 158Dy are analysed within the random phase approximation (RPA) method based on the self-consistent cranking model (SCCM). The moment of inertia, the value of the chemical potential, and the strength constant k1 have been obtained from the symmetry condition. The pairing strength constants G_τ have been determined from the experimental values of neutron and proton pairing energies for nonrotating nuclei. Quite good agreement with experimental energies of positive-parity states was obtained without introducing two-phonon vibrational states

  1. Quest for consistent modelling of statistical decay of the compound nucleus

    Science.gov (United States)

    Banerjee, Tathagata; Nath, S.; Pal, Santanu

    2018-01-01

    A statistical model description of heavy ion induced fusion-fission reactions is presented where shell effects, collective enhancement of level density, tilting away effect of compound nuclear spin and dissipation are included. It is shown that the inclusion of all these effects provides a consistent picture of fission where fission hindrance is required to explain the experimental values of both pre-scission neutron multiplicities and evaporation residue cross-sections in contrast to some of the earlier works where a fission hindrance is required for pre-scission neutrons but a fission enhancement for evaporation residue cross-sections.

  2. A self-consistent model for thermodynamics of multicomponent solid solutions

    International Nuclear Information System (INIS)

    Svoboda, J.; Fischer, F.D.

    2016-01-01

    The self-consistent concept recently published in this journal (108, 27–30, 2015) is extended from a binary to a multicomponent system. This is possible by exploiting the trapping concept as a basis for including the interaction of atoms in terms of pairs (e.g. A–A, B–B, C–C…) and couples (e.g. A–B, B–C, …) in a multicomponent system with A as solvent and B, C, … as dilute solutes. The model results in a formulation of the Gibbs energy, which can be minimized. Examples show that couple and pair formation may influence the equilibrium Gibbs energy markedly.

  3. A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne

    2018-02-01

    The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution as well as the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations albeit early after-slip effects and wedge faulting that cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth extending northward to 39.7°N indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.

  4. Comparison of squashing and self-consistent input-output models of quantum feedback

    Science.gov (United States)

    Peřinová, V.; Lukš, A.; Křepelka, J.

    2018-03-01

    The paper (Yanagisawa and Hope, 2010) opens with two ways of analysing a measurement-based quantum feedback. The scheme of the feedback includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by the quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of two models related to the same scheme. The first one admits that, in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators; as a consequence, squashing of the light occurs in the feedback loop. The second one arrives at a description of the feedback loop via unitary transformations. However, the unitary transformation which describes the modulator changes even the annihilation operator of the mode that passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second the "self-consistent model". Although the predictions of the two models differ only a little and both ways of analysis have their advantages, they also have drawbacks, and further investigation is possible.

  5. A comprehensive, consistent and systematic mathematical model of PEM fuel cells

    International Nuclear Information System (INIS)

    Baschuk, J.J.; Li Xianguo

    2009-01-01

    This paper presents a comprehensive, consistent and systematic mathematical model for PEM fuel cells that can be used as the general formulation for the simulation and analysis of PEM fuel cells. As an illustration, the model is applied to an isothermal, steady state, two-dimensional PEM fuel cell. Water is assumed to be in either the gas phase or as a liquid phase in the pores of the polymer electrolyte. The model includes the transport of gas in the gas flow channels, electrode backing and catalyst layers; the transport of water and hydronium in the polymer electrolyte of the catalyst and polymer electrolyte layers; and the transport of electrical current in the solid phase. Water and ion transport in the polymer electrolyte was modeled using the generalized Stefan-Maxwell equations, based on non-equilibrium thermodynamics. Model simulations show that the bulk, convective gas velocity facilitates hydrogen transport from the gas flow channels to the anode catalyst layers, but inhibits oxygen transport. While some of the water required by the anode is supplied by the water produced in the cathode, the majority of water must be supplied by the anode gas phase, making operation with fully humidified reactants necessary. The length of the gas flow channel has a significant effect on the current production of the PEM fuel cell, with a longer channel length having a lower performance relative to a shorter channel length. This lower performance is caused by a greater variation in water content within the longer channel length

  6. Consistent modelling of wind turbine noise propagation from source to receiver

    DEFF Research Database (Denmark)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling, and the corresponding flow fields are used to simulate sound propagation. The noise propagation of a 5 MW wind turbine is investigated: sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  7. Self-consistent model for pulsed direct-current N2 glow discharge

    International Nuclear Information System (INIS)

    Liu Chengsen

    2005-01-01

    A self-consistent analysis of a pulsed direct-current (DC) N2 glow discharge is presented. The model is based on a numerical solution of the continuity equations for electrons and ions coupled with Poisson's equation. The spatial-temporal variations of the ionic and electronic densities and the electric field are obtained. The electric field structure exhibits all the characteristic regions of a typical glow discharge (the cathode fall, the negative glow, and the positive column). Current-voltage characteristics of the discharge can be obtained from the model. The calculated current-voltage results, using a constant secondary electron emission coefficient at a gas pressure of 133.32 Pa, are in reasonable agreement with experiment. (authors)
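The field-solver step that closes such a fluid model, solving Poisson's equation for the potential given the net space charge, can be sketched on a uniform 1-D grid with fixed electrode potentials. This is a generic finite-difference illustration, not the paper's code:

```python
import numpy as np

def poisson_1d(rho_over_eps, dx, v_left, v_right):
    """Solve d^2V/dx^2 = -rho/eps0 on a uniform 1-D grid with Dirichlet
    boundary conditions (the electrode potentials). rho_over_eps holds
    rho/eps0 at the interior nodes; the tridiagonal system comes from the
    standard second-order central difference of the Laplacian."""
    n = len(rho_over_eps)
    A = np.zeros((n, n))
    b = -dx**2 * np.asarray(rho_over_eps, dtype=float)
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    b[0] -= v_left      # fold boundary potentials into the right-hand side
    b[-1] -= v_right
    return np.linalg.solve(A, b)

# Zero space charge between a cathode at -200 V and a grounded anode:
# the potential should vary linearly across the gap
V = poisson_1d(np.zeros(9), dx=0.001, v_left=-200.0, v_right=0.0)
```

In a full discharge model this solve is repeated every time step, with the charge density updated from the electron and ion continuity equations, which is what makes the field structure (cathode fall, negative glow, positive column) emerge self-consistently.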

  8. A self-consistent model for polycrystal deformation. Description and implementation

    International Nuclear Information System (INIS)

    Clausen, B.; Lorentzen, T.

    1997-04-01

    This report is a manual for the ANSI C implementation of an incremental elastic-plastic rate-insensitive self-consistent polycrystal deformation model based on (Hutchinson 1970). The model is furthermore described in the Ph.D. thesis by Clausen (Clausen 1997). The structure of the main program, sc_model.c, and its subroutines are described with flow-charts. Likewise the pre-processor, sc_ini.c, is described with a flowchart. Default values of all the input parameters are given in the pre-processor, but the user is able to select from other pre-defined values or enter new values. A sample calculation is made and the results are presented as plots and examples of the output files are shown. (au) 4 tabs., 28 ills., 17 refs

  9. A self-consistent model for polycrystal deformation. Description and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Clausen, B.; Lorentzen, T.

    1997-04-01

    This report is a manual for the ANSI C implementation of an incremental elastic-plastic rate-insensitive self-consistent polycrystal deformation model based on (Hutchinson 1970). The model is furthermore described in the Ph.D. thesis by Clausen (Clausen 1997). The structure of the main program, sc_model.c, and its subroutines are described with flow-charts. Likewise the pre-processor, sc_ini.c, is described with a flowchart. Default values of all the input parameters are given in the pre-processor, but the user is able to select from other pre-defined values or enter new values. A sample calculation is made and the results are presented as plots and examples of the output files are shown. (au) 4 tabs., 28 ills., 17 refs.

  10. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...

  11. Self-Consistent Generation of Primordial Continental Crust in Global Mantle Convection Models

    Science.gov (United States)

    Jain, C.; Rozel, A.; Tackley, P. J.

    2017-12-01

    We present the generation of primordial continental crust (TTG rocks) using self-consistent and evolutionary thermochemical mantle convection models (Tackley, PEPI 2008). Numerical modelling commonly shows that mantle convection and continents have strong feedbacks on each other. However in most studies, continents are inserted a priori while basaltic (oceanic) crust is generated self-consistently in some models (Lourenco et al., EPSL 2016). Formation of primordial continental crust happened by fractional melting and crystallisation in episodes of relatively rapid growth from late Archean to late Proterozoic eras (3-1 Ga) (Hawkesworth & Kemp, Nature 2006) and it has also been linked to the onset of plate tectonics around 3 Ga. It takes several stages of differentiation to generate Tonalite-Trondhjemite-Granodiorite (TTG) rocks or proto-continents. First, the basaltic magma is extracted from the pyrolitic mantle which is both erupted at the surface and intruded at the base of the crust. Second, it goes through eclogitic transformation and then partially melts to form TTGs (Rudnick, Nature 1995; Herzberg & Rudnick, Lithos 2012). TTGs account for the majority of the Archean continental crust. Based on the melting conditions proposed by Moyen (Lithos 2011), the feasibility of generating TTG rocks in numerical simulations has already been demonstrated by Rozel et al. (Nature, 2017). Here, we have developed the code further by parameterising TTG formation. We vary the ratio of intrusive (plutonic) and extrusive (volcanic) magmatism (Crisp, Volcanol. Geotherm. 1984) to study the relative volumes of three petrological TTG compositions as reported from field data (Moyen, Lithos 2011). Furthermore, we systematically vary parameters such as friction coefficient, initial core temperature and composition-dependent viscosity to investigate the global tectonic regime of early Earth. Continental crust can also be destroyed by subduction or delamination. We will investigate

  12. Self-consistent modeling of plasma response to impurity spreading from intense localized source

    International Nuclear Information System (INIS)

    Koltunov, Mikhail

    2012-07-01

    Non-hydrogen impurities unavoidably exist in the hot plasmas of present fusion devices. They enter intrinsically, through plasma interaction with the wall of the vacuum vessel, and are also deliberately seeded for various purposes. Normally, the spots where injected particles enter the plasma are much smaller than its total surface. Under such conditions one has to expect a significant modification of local plasma parameters through various physical mechanisms, which, in turn, affect the impurity spreading. Self-consistent modeling of the interaction between impurity and plasma is therefore not possible with linear approaches. A model based on the fluid description of electrons, main and impurity ions, taking into account plasma quasi-neutrality, Coulomb collisions of background and impurity charged particles, radiation losses, and particle transport to bounding surfaces, is elaborated in this work. To describe the impurity spreading and the plasma response self-consistently, fluid equations for the particle, momentum and energy balances of the various plasma components are solved by reducing them to ordinary differential equations for the time evolution of several parameters characterizing the solution in its principal details: the magnitudes of the plasma density and plasma temperatures in the regions of impurity localization and the spatial scales of these regions. The results of calculations for plasma conditions typical of tokamak experiments with impurity injection are presented. A new mechanism for the condensation phenomenon and the formation of cold dense plasma structures is proposed.

  13. Consistent initial conditions for the Saint-Venant equations in river network modeling

    Directory of Open Access Journals (Sweden)

    C.-W. Yu

    2017-09-01

    Full Text Available Initial conditions for flows and depths (cross-sectional areas) throughout a river network are required for any time-marching (unsteady) solution of the one-dimensional (1-D) hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths). These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1) the pseudo time-marching method (PTM) that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2) the steady-solution method (SSM) that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
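The graph-theory step of the SSM, computing steady initial flow rates by accumulating lateral runoff down the network, can be sketched as a topological sweep over the reach graph. The network and runoff values below are hypothetical, and the momentum/cross-sectional-area solve is omitted:

```python
from collections import defaultdict, deque

def initial_flows(downstream, runoff):
    """Steady-state initial flow at the outlet of each reach: its own lateral
    runoff plus the flow delivered by all upstream reaches (mass balance only).

    downstream : dict mapping each reach to its downstream reach (outlet -> None)
    runoff     : dict mapping each reach to its lateral inflow [m^3/s]
    """
    indeg = defaultdict(int)
    for reach, dn in downstream.items():
        if dn is not None:
            indeg[dn] += 1
    flow = dict(runoff)
    # Kahn-style topological sweep from the headwaters toward the outlet
    queue = deque(r for r in downstream if indeg[r] == 0)
    while queue:
        r = queue.popleft()
        dn = downstream[r]
        if dn is not None:
            flow[dn] += flow[r]
            indeg[dn] -= 1
            if indeg[dn] == 0:
                queue.append(dn)
    return flow

# Hypothetical 4-reach network: headwaters A and B join reach C, which feeds outlet D
net = {"A": "C", "B": "C", "C": "D", "D": None}
q = initial_flows(net, {"A": 1.0, "B": 2.0, "C": 0.5, "D": 0.25})
```

Because the flows come from a single pass in topological order, this step is fast even for large networks, which is consistent with the abstract's observation that the SSM scales better than pseudo time-marching.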

  14. Towards a consistent geochemical model for prediction of uranium(VI) removal from groundwater by ferrihydrite

    International Nuclear Information System (INIS)

    Gustafsson, Jon Petter; Daessman, Ellinor; Baeckstroem, Mattias

    2009-01-01

    Uranium(VI), which is often elevated in granitoidic groundwaters, is known to adsorb strongly to Fe (hydr)oxides under certain conditions. This process can be used in water treatment to remove U(VI). To develop a consistent geochemical model for U(VI) adsorption to ferrihydrite, batch experiments were performed and previous data sets were reviewed to optimize a set of surface complexation constants using the 3-plane CD-MUSIC model. To consider the effect of dissolved organic matter (DOM) on U(VI) speciation, new parameters for the Stockholm Humic Model (SHM) were optimized using previously published data. The model, which was constrained by available X-ray absorption fine structure (EXAFS) spectroscopy evidence, fitted the data well when the surface sites were divided into low- and high-affinity binding sites. Application of the model concept to other published data sets revealed differences in the reactivity of different ferrihydrites towards U(VI). Use of the optimized SHM parameters for U(VI)-DOM complexation showed that this process is important for U(VI) speciation at low pH. However, in neutral to alkaline waters with substantial carbonate present, Ca-U-CO3 complexes predominate. The calibrated geochemical model was used to simulate U(VI) adsorption to ferrihydrite for a hypothetical groundwater in the presence of several competitive ions. The results showed that U(VI) adsorption was strong between pH 5 and 8. Even near the calcite saturation limit, where U(VI) adsorption was weakest according to the model, the adsorption percentage was predicted to be >80%. Hence U(VI) adsorption to ferrihydrite-containing sorbents may be used as a method to bring U(VI) concentrations down to acceptable levels in groundwater

  15. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly, as done in sharp-interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems encountered by their predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free-energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation, a model put forward to describe microstructure evolution. The algorithm developed conserves mass, guarantees energy stability, and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature on energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and are second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis.
The codes are
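    The adaptive time-stepping idea described above — a second-order step whose local error is estimated against a first-order backward approximation — can be sketched on a scalar ODE. This is an illustrative stand-in, not the thesis' phase-field solver; the trapezoidal/backward-Euler pairing, tolerance, safety factor, and fixed-point solves are all assumptions.

```python
import math

def adaptive_integrate(f, y0, t_end, dt0, tol=1e-6):
    """Second-order (trapezoidal) time stepping with step-size adaptivity.

    The local error is estimated as the difference between the second-order
    update and a first-order backward (implicit Euler) approximation, in the
    spirit of the backward-approximation estimator above. Scalar ODE y' = f(y);
    implicit stages solved by fixed-point iteration (illustrative only).
    """
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        y1 = y                               # trapezoidal: y1 = y + dt/2 (f(y)+f(y1))
        for _ in range(50):
            y1 = y + 0.5 * dt * (f(y) + f(y1))
        yb = y                               # backward Euler: yb = y + dt f(yb)
        for _ in range(50):
            yb = y + dt * f(yb)
        err = abs(y1 - yb)                   # local error estimate
        if err <= tol or dt < 1e-12:
            t, y = t + dt, y1                # accept the step
        fac = 0.9 * math.sqrt(tol / max(err, 1e-30))
        dt *= min(2.0, max(0.2, fac))        # adapt (clamped growth/shrink)
    return y
```

    Rejected steps simply shrink `dt` and retry, so accuracy is controlled per step rather than by a single global time-step choice.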

  16. Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model

    Science.gov (United States)

    Borges Sebastião, Israel; Alexeenko, Alina

    2016-10-01

    The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
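    The two-step approach can be sketched as follows: given state-specific forward-reaction probabilities per vibrational level (hypothetically pre-tabulated from equilibrium 0-D DSMC runs, step one), the post-reaction vibrational level of the corresponding reverse reaction is drawn from those normalized weights (step two). A minimal sampling sketch under that assumption:

```python
import random
import bisect

def sample_post_reaction_level(forward_prob, rng=random.random):
    """Sample a post-reaction vibrational level for a reverse reaction.

    forward_prob[v] holds the state-specific forward-reaction probability for
    vibrational level v, assumed pre-tabulated from equilibrium 0-D
    simulations. Normalising these weights and sampling from them assigns the
    reverse reaction's post-collision level, which is what restores detailed
    balance at equilibrium. Illustrative sketch, not a DSMC implementation.
    """
    total = sum(forward_prob)
    cdf, acc = [], 0.0
    for p in forward_prob:
        acc += p / total
        cdf.append(acc)
    # clamp guards against floating-point round-off in the last cdf entry
    return min(bisect.bisect_left(cdf, rng()), len(cdf) - 1)
```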

  17. Consistency of climate change projections from multiple global and regional model intercomparison projects

    Science.gov (United States)

    Fernández, J.; Frías, M. D.; Cabos, W. D.; Cofiño, A. S.; Domínguez, M.; Fita, L.; Gaertner, M. A.; García-Díez, M.; Gutiérrez, J. M.; Jiménez-Guerrero, P.; Liguori, G.; Montávez, J. P.; Romera, R.; Sánchez, E.

    2018-03-01

    We present an unprecedented ensemble of 196 future climate projections arising from different global and regional model intercomparison projects (MIPs): CMIP3, CMIP5, ENSEMBLES, ESCENA, EURO- and Med-CORDEX. This multi-MIP ensemble includes all regional climate model (RCM) projections publicly available to date, along with their driving global climate models (GCMs). We illustrate consistent and conflicting messages using continental Spain and the Balearic Islands as the target region. The study considers near-future (2021-2050) changes and their dependence on several uncertainty sources sampled in the multi-MIP ensemble: GCM, future scenario, internal variability, RCM, and spatial resolution. This initial work focuses on mean seasonal precipitation and temperature changes. The results show that the potential GCM-RCM combinations have been explored very unevenly, with favoured GCMs and large ensembles of a few RCMs that do not respond to any ensemble design. Therefore, the grand ensemble is weighted towards a few models. The selection of a balanced, credible sub-ensemble is challenged in this study by illustrating several conflicting responses between the RCM and its driving GCM and among different RCMs. Sub-ensembles from different initiatives are dominated by different uncertainty sources, with the driving GCM being the main contributor to uncertainty in the grand ensemble. For this analysis of near-future changes, the emission scenario does not contribute strongly to the uncertainty. Despite the extra computational effort, for mean seasonal changes, the increase in resolution does not lead to substantial changes.

  18. Physically-consistent wall boundary conditions for the k-ω turbulence model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl

    2010-01-01

    A model solving the Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used as model validation, with k measured directly from all three components of the fluctuating velocity signal. Both the conventional k = 0 and dk/dy = 0 wall boundary conditions are considered. Results indicate that either condition can provide accurate solutions, for the bulk of the flow, over both smooth and rough beds. It is argued, however, that the zero-gradient condition is more consistent with the near-wall physics, as it allows direct integration through a viscous sublayer near smooth walls, while avoiding a viscous sublayer near rough walls. This is in contrast to the conventional k = 0 wall boundary condition, which forces resolution of a viscous sublayer in all circumstances.

  19. Consistency and discrepancy in the atmospheric response to Arctic sea-ice loss across climate models

    Science.gov (United States)

    Screen, James A.; Deser, Clara; Smith, Doug M.; Zhang, Xiangdong; Blackport, Russell; Kushner, Paul J.; Oudar, Thomas; McCusker, Kelly E.; Sun, Lantao

    2018-03-01

    The decline of Arctic sea ice is an integral part of anthropogenic climate change. Sea-ice loss is already having a significant impact on Arctic communities and ecosystems. Its role as a cause of climate changes outside of the Arctic has also attracted much scientific interest. Evidence is mounting that Arctic sea-ice loss can affect weather and climate throughout the Northern Hemisphere. The remote impacts of Arctic sea-ice loss can only be properly represented using models that simulate interactions among the ocean, sea ice, land and atmosphere. A synthesis of six such experiments with different models shows consistent hemispheric-wide atmospheric warming, strongest in the mid-to-high-latitude lower troposphere; an intensification of the wintertime Aleutian Low and, in most cases, the Siberian High; a weakening of the Icelandic Low; and a reduction in strength and southward shift of the mid-latitude westerly winds in winter. The atmospheric circulation response seems to be sensitive to the magnitude and geographic pattern of sea-ice loss and, in some cases, to the background climate state. However, it is unclear whether current-generation climate models respond too weakly to sea-ice change. We advocate for coordinated experiments that use different models and observational constraints to quantify the climate response to Arctic sea-ice loss.

  20. Consistent modelling of wind turbine noise propagation from source to receiver.

    Science.gov (United States)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  1. Model for ICRF fast wave current drive in self-consistent MHD equilibria

    International Nuclear Information System (INIS)

    Bonoli, P.T.; Englade, R.C.; Porkolab, M.; Fenstermacher, M.E.

    1993-01-01

    Recently, a model for fast wave current drive in the ion cyclotron radio frequency (ICRF) range was incorporated into the current drive and MHD equilibrium code ACCOME. The ACCOME model combines a free-boundary solution of the Grad-Shafranov equation with the calculation of driven currents due to neutral beam injection, lower hybrid (LH) waves, bootstrap effects, and ICRF fast waves. The equilibrium and current drive packages iterate between each other to obtain an MHD equilibrium which is consistent with the profiles of driven current density. The ICRF current drive package combines a toroidal full-wave code (FISIC) with a parameterization of the current drive efficiency obtained from an adjoint solution of the Fokker-Planck equation. The electron absorption calculation in the full-wave code properly accounts for the combined effects of electron Landau damping (ELD) and transit-time magnetic pumping (TTMP), assuming a Maxwellian (or bi-Maxwellian) electron distribution function. Furthermore, the current drive efficiency includes the effects of particle trapping, momentum-conserving corrections to the background Fokker-Planck collision operator, and toroidally induced variations in the parallel wavenumbers of the injected ICRF waves. This model has been used to carry out detailed studies of advanced physics scenarios in the proposed Tokamak Physics Experiment (TPX). Results are shown, for example, which demonstrate the possibility of achieving stable equilibria at high beta and high bootstrap current fraction in TPX. Model results are also shown for the proposed ITER device.

  2. Development of a 3D consistent 1D neutronics model for reactor core simulation

    International Nuclear Information System (INIS)

    Lee, Ki Bog; Joo, Han Gyu; Cho, Byung Oh; Zee, Sung Quun

    2001-02-01

    In this report a 3D-consistent 1D model based on the nonlinear analytic nodal method is developed to reproduce 3D results. During the derivation, the current conservation factor (CCF) is introduced, which guarantees that the axial neutron currents obtained from the 1D equation equal the 3D reference values. Furthermore, in order to properly use 1D group constants, a new 1D group constant representation scheme employing tables for the fuel temperature, moderator density, and boron concentration is developed and functionalized for the control rod tip position. To test the 1D kinetics model with the CCF, several steady-state and transient calculations were performed and compared with 3D reference values. The errors in K-eff were reduced to about one tenth when using the CCF, without significant computational overhead. The errors in the power distribution at steady state were reduced to between one fifth and one tenth. The 1D kinetics model with the CCF, together with the 1D group constant functionalization employing tables as a function of control rod tip position, provides more precise results in steady-state and transient calculations. Thus it is expected that the 1D kinetics model derived in this report can be used in safety analysis, real-time reactor simulation coupled with a system analysis code, operator support systems, etc.
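    The role of the CCF can be illustrated with a toy calculation: if the 1D solution is to reproduce the 3D axial interface currents, the correction factor at each axial interface is the ratio of the two. This is a simplified sketch of the idea, not the report's derivation within the nonlinear analytic nodal method.

```python
def current_conservation_factors(j3d_axial, j1d_axial, eps=1e-12):
    """Current conservation factors (CCFs) per axial interface.

    j3d_axial[k]: radially integrated axial net current at interface k from
    the 3D reference solution; j1d_axial[k]: the current the uncorrected 1D
    nodal solution produces there. Multiplying the 1D interface current by
    f[k] forces the 1D model to reproduce the 3D axial currents. Where the
    1D current vanishes, the factor falls back to 1. Names and the
    ratio-based form are illustrative assumptions.
    """
    return [j3 / j1 if abs(j1) > eps else 1.0
            for j3, j1 in zip(j3d_axial, j1d_axial)]
```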

  3. A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints

    Directory of Open Access Journals (Sweden)

    L. Kantha

    2016-01-01

    The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed red shift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^-52 m^-2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest red shift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases as Λ ~ τ^-2, where τ is the normalized cosmic time, and G increases as G ~ τ^n with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.

  4. Modeling, methodologies and tools for molecular and nano-scale communications modeling, methodologies and tools

    CERN Document Server

    Nakano, Tadashi; Moore, Michael

    2017-01-01

    (Preliminary) The book presents the state of art in the emerging field of molecular and nanoscale communication. It gives special attention to fundamental models, and advanced methodologies and tools used in the field. It covers a wide range of applications, e.g. nanomedicine, nanorobot communication, bioremediation and environmental managements. It addresses advanced graduate students, academics and professionals working at the forefront in their fields and at the interfaces between different areas of research, such as engineering, computer science, biology and nanotechnology.

  5. Developing a Modeling Tool Using Eclipse

    NARCIS (Netherlands)

    Kirtley, Nick; Waqas Kamal, Ahmad; Avgeriou, Paris

    2008-01-01

    Tool development using an open source platform provides autonomy to users to change, use, and develop cost-effective software with freedom from licensing requirements. However, open source tool development poses a number of challenges, such as poor documentation and continuous evolution. In this

  6. A paradigm shift toward a consistent modeling framework to assess climate impacts

    Science.gov (United States)

    Monier, E.; Paltsev, S.; Sokolov, A. P.; Fant, C.; Chen, H.; Gao, X.; Schlosser, C. A.; Scott, J. R.; Dutkiewicz, S.; Ejaz, Q.; Couzo, E. A.; Prinn, R. G.; Haigh, M.

    2017-12-01

    Estimates of the physical and economic impacts of future climate change are subject to substantial challenges. To enrich the currently popular approaches of assessing climate impacts by evaluating a damage function or by multi-model comparisons based on the Representative Concentration Pathways (RCPs), we focus here on integrating impacts into a self-consistent coupled human and Earth system modeling framework that includes modules representing multiple physical impacts. In a sample application we show that this framework is capable of investigating the physical impacts of climate change and socio-economic stressors. The projected climate impacts vary dramatically across the globe in a set of scenarios with global mean warming ranging between 2.4°C and 3.6°C above pre-industrial by 2100. Unabated emissions lead to substantial sea level rise, acidification that impacts the base of the oceanic food chain, air pollution that exceeds health standards tenfold, water stress that impacts an additional 1 to 2 billion people globally, and agricultural productivity that decreases substantially in many parts of the world. We compare the outcomes from these forward-looking scenarios against the common goal described by the target-driven scenario of 2°C, which results in much smaller impacts. It is challenging for large internationally coordinated exercises to respond quickly to new policy targets. We propose that a paradigm shift toward a self-consistent modeling framework to assess climate impacts is needed to produce information relevant to evolving global climate policy and mitigation strategies in a timely way.

  7. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
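    The revised Monod rate law described above — standard Monod kinetics multiplied by a thermodynamic potential factor — can be sketched as follows. The factor form follows the published Jin–Bethke rate law; all parameter values in the example are hypothetical, and this is not the PHREEQC/COBRA coupling itself.

```python
import math

R_GAS = 8.314  # J/(mol K)

def respiration_rate(k_max, conc, K_half, dG_redox, dG_ATP, m_ATP, chi, T=298.15):
    """Revised Monod rate with a thermodynamic potential factor.

    r = k_max * S/(K + S) * max(0, 1 - exp((dG_redox + m_ATP*dG_ATP)/(chi*R*T)))

    The first factor is standard Monod kinetics; the second drives the rate
    to zero as the catabolic energy yield (dG_redox < 0 for a favourable
    reaction, in J/mol) approaches the cost of synthesizing m_ATP ATP.
    chi is the average stoichiometric number. Illustrative sketch.
    """
    kinetic = conc / (K_half + conc)
    thermo = 1.0 - math.exp((dG_redox + m_ATP * dG_ATP) / (chi * R_GAS * T))
    return k_max * kinetic * max(0.0, thermo)
```

    The resulting rate can then be used to cap the substrate uptake flux handed to FBA, which is the constraint the abstract describes.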

  8. Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation

    OpenAIRE

    Biggs, Matthew B.; Papin, Jason A.

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid mod...

  9. Net Rotation of the Lithosphere in Mantle Convection Models with Self-consistent Plate Generation

    Science.gov (United States)

    Gerault, M.; Coltice, N.

    2017-12-01

    Lateral variations in the viscosity structure of the lithosphere and the mantle give rise to a discordant motion between the two. In a deep mantle reference frame, this motion is called the net rotation of the lithosphere. Plate motion reconstructions, mantle flow computations, and inferences from seismic anisotropy all indicate some amount of net rotation using different mantle reference frames. While the direction of rotation is somewhat consistent across studies, the predicted amplitudes range from 0.1 deg/Myr to 0.3 deg/Myr at the present day. How net rotation rates could have differed in the past is also a subject of debate, and strong geodynamic arguments are missing from the discussion. This study provides the first net rotation calculations in 3-D spherical mantle convection models with self-consistent plate generation. We run the computations for billions of years of numerical integration. We look into how sensitive the net rotation is to major tectonic events, such as subduction initiation, continental breakup and plate reorganisations, and whether some governing principles from the models could guide plate motion reconstructions. The mantle convection problem is solved with the finite volume code StagYY using a visco-pseudo-plastic rheology. Mantle flow velocities are solely driven by buoyancy forces internal to the system, with free-slip upper and lower boundary conditions. We investigate how the yield stress, the mantle viscosity structure and the properties of continents affect the net rotation over time. Models with large lateral viscosity variations from continents predict net rotations that are at least threefold faster than those without continents. Models where continents cover a third of the surface produce net rotation rates that vary from nearly zero to over 0.3 deg/Myr, with rapid increases during continental breakup. The pole of rotation appears to migrate along no particular path. For all models, regardless of the yield stress and the
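    Net rotation itself is a well-defined diagnostic: it is the rigid-body component of the surface velocity field, ω = 3/(8πR⁴) ∮ r × v dA. A minimal quadrature sketch (not the StagYY post-processing) that recovers a prescribed rigid rotation:

```python
import math

def net_rotation(vel_fn, R=1.0, nlat=90, nlon=180):
    """Net rotation vector ω of a spherical surface velocity field.

    Evaluates ω = 3/(8 π R^4) ∮ r × v dA on a midpoint lat-lon grid.
    vel_fn(x, y, z) returns the Cartesian surface velocity at (x, y, z) on
    the sphere of radius R. For a rigid rotation v = ω0 × r this recovers
    ω0 up to quadrature error. Illustrative sketch.
    """
    wx = wy = wz = 0.0
    dlat, dlon = math.pi / nlat, 2.0 * math.pi / nlon
    for i in range(nlat):
        lat = -math.pi / 2 + (i + 0.5) * dlat
        dA = R * R * math.cos(lat) * dlat * dlon       # cell area
        for j in range(nlon):
            lon = (j + 0.5) * dlon
            x = R * math.cos(lat) * math.cos(lon)
            y = R * math.cos(lat) * math.sin(lon)
            z = R * math.sin(lat)
            vx, vy, vz = vel_fn(x, y, z)
            wx += (y * vz - z * vy) * dA               # components of r × v
            wy += (z * vx - x * vz) * dA
            wz += (x * vy - y * vx) * dA
    c = 3.0 / (8.0 * math.pi * R**4)
    return c * wx, c * wy, c * wz
```

    Applied to model snapshots through time, this single integral is what yields the net rotation histories discussed in the abstract.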

  10. Isotopes as validation tools for global climate models

    International Nuclear Information System (INIS)

    Henderson-Sellers, A.

    2001-01-01

    Global Climate Models (GCMs) are the predominant tool with which we predict the future climate. In order that people can have confidence in such predictions, GCMs require validation. As almost every available item of meteorological data has been exploited in the construction and tuning of GCMs to date, independent validation is very difficult. This paper explores the use of isotopes as a novel and fully independent means of evaluating GCMs. The focus is the Amazon Basin which has a long history of isotope collection and analysis and also of climate modelling: both having been reported for over thirty years. Careful consideration of the results of GCM simulations of Amazonian deforestation and climate change suggests that the recent stable isotope record is more consistent with the predicted effects of greenhouse warming, possibly combined with forest removal, than with GCM predictions of the effects of deforestation alone

  11. Self-consistent Random Phase Approximation applied to a schematic model of the field theory

    International Nuclear Information System (INIS)

    Bertrand, Thierry

    1998-01-01

    The self-consistent Random Phase Approximation (SCRPA) is a method allowing, within mean-field theory, the inclusion of correlations in the ground and excited states. It has the advantage of not violating the Pauli principle, in contrast to RPA, which is based on the quasi-bosonic approximation; in addition, numerous applications in different domains of physics suggest a possible variational character, although the latter remains to be formally demonstrated. The first model studied with SCRPA is the anharmonic oscillator in the region where one of its symmetries is spontaneously broken. The ground state energy is reproduced by SCRPA more accurately than by RPA, with no violation of the Ritz variational principle, which is not the case for the latter approximation. SCRPA is equally successful for the ground state energy of a model mixing bosons and fermions. At the transition point SCRPA corrects RPA drastically, but far from this region the correction becomes negligible, both methods being of similar precision. In the deformed region, a spurious mode occurred in RPA due to the microscopic character of the model. SCRPA reproduces this mode very accurately, and it actually coincides with an excitation in the exact spectrum.

  12. Self-Consistent Atmosphere Models of the Most Extreme Hot Jupiters

    Science.gov (United States)

    Lothringer, Joshua; Barman, Travis

    2018-01-01

    We present a detailed look at self-consistent PHOENIX atmosphere models of the most highly irradiated hot Jupiters known to exist. These hot Jupiters typically have equilibrium temperatures approaching, and sometimes exceeding, 3000 K, orbiting A, F, and early-G type stars on orbits smaller than 0.03 AU (10x closer than Mercury is to the Sun). The most extreme example, KELT-9b, is the hottest known hot Jupiter, with a measured dayside temperature of 4600 K. Many of the planets we model have recently attracted attention with high-profile discoveries, including temperature inversions in WASP-33b and WASP-121b, changing phase-curve offsets possibly caused by magnetohydrodynamic effects in HAT-P-7b, and TiO in WASP-19b. Our modeling provides a look at the a priori expectations for these planets and helps us understand these recent discoveries. We show that, in the hottest cases, all molecules are dissociated down to relatively high pressures. These planets may have detectable temperature inversions, more akin to thermospheres than stratospheres, in that an optical absorber like TiO or VO is not needed. Instead, the inversions are created by a lack of cooling in the IR combined with heating from atoms and ions at UV and blue optical wavelengths. We also reevaluate some of the assumptions that have been made in retrieval analyses of these planets.

  13. Methodology and consistency of slant and vertical assessments for ionospheric electron content models

    Science.gov (United States)

    Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül

    2017-12-01

    A summary of the main concepts of global ionospheric maps [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter, and difference of slant TEC based on independent global positioning system (GPS) data, dSTEC-GPS) is performed. It is based on 26 GPS receivers distributed worldwide and mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessments are found. And, as expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, mainly evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and, in general, for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
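    The dSTEC-GPS assessment can be sketched for a single phase-continuous arc: every slant TEC value is differenced against the value at the epoch of maximum elevation, the same differencing is applied to the model STEC, and the residuals are RMSed. A minimal sketch under those assumptions (function and variable names are illustrative):

```python
import math

def dstec_rms(stec_obs, stec_model, elevations):
    """RMS of a dSTEC-style assessment for one continuous phase arc.

    dSTEC references every slant TEC in the arc to its value at the epoch of
    maximum elevation, which cancels the (constant) phase ambiguity and
    inter-frequency biases; the same differencing is applied to the model
    STEC and the residuals are RMSed. Inputs are equal-length sequences over
    the arc (TECU for the TEC series, degrees for elevations).
    """
    ref = max(range(len(elevations)), key=lambda i: elevations[i])
    res = [(stec_obs[i] - stec_obs[ref]) - (stec_model[i] - stec_model[ref])
           for i in range(len(stec_obs))]
    return math.sqrt(sum(r * r for r in res) / len(res))
```

    Because only differences along the arc enter, any constant offset between observed and modeled STEC cancels, which is what makes the assessment independent of absolute calibration.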

  14. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    Science.gov (United States)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, astronomical archives have begun providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating which optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data, and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and the needs of the community change.

  15. First results of GERDA Phase II and consistency with background models

    Science.gov (United States)

    Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode1, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gooch, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schulz, O.; Schütz, A.-K.; Schwingenheuer, B.; Selivanenko, O.; Shevzik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2017-01-01

GERDA (GERmanium Detector Array) is an experiment searching for the neutrinoless double beta decay (0νββ) of 76Ge, located at the Laboratori Nazionali del Gran Sasso of INFN (Italy). GERDA operates bare high-purity germanium detectors submersed in liquid argon (LAr). Phase II of data taking started in December 2015 and is currently ongoing. In Phase II, 35 kg of germanium detectors enriched in 76Ge, including thirty newly produced Broad Energy Germanium (BEGe) detectors, are operated to reach an exposure of 100 kg·yr within about 3 years of data taking. The design goal of Phase II is to reduce the background by one order of magnitude and thereby reach a sensitivity of T_1/2^0ν = O(10^26) yr. To achieve the necessary background reduction, the setup was complemented with a LAr veto. Analysis of the Phase II background spectrum demonstrates consistency with the background models; furthermore, the 226Ra and 232Th contamination levels are consistent with screening results. In the first Phase II data release we found no hint of a 0νββ decay signal and place a limit on this process of T_1/2^0ν > 5.3·10^25 yr (90% C.L., sensitivity 4.0·10^25 yr). First results of GERDA Phase II are presented.

  16. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

On the basis of a microscopic model of the self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with three-body interactions taken into account is constructed and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution to the self-consistent field, so that the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion effective mass and the system's equation of state including the contribution from three-body forces. The effective mass and pressure are numerically calculated for a potential of the "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, when only pair forces are taken into account, a repulsive interaction reduces the quasiparticle effective mass relative to the mass of a free particle, while an attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of a system with interparticle pair attraction. The quasiparticle energy spectrum is calculated with three-body forces taken into account.

  17. Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech

    Directory of Open Access Journals (Sweden)

    Petráš Rudolf

    2014-06-01

Height-diameter models define the general relationship between the tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and three-parameter Korf functions. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, so the final growth model has only one or two parameters to be estimated. These “free” parameters are then expressed over the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5- to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. As many as 9 regression models were derived for each species. Although the goodness of fit of all models showed that they were generally well suited to the data, the best results were obtained for silver fir: the coefficient of determination ranged from 0.946 to 0.948, RMSE (m) was in the interval 1.94-1.97 and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the estimates obtained for beech were the least precise: the coefficient of determination for beech was 0.854-0.860, RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf’s formula produced slightly better estimations than Michailoff’s, and it proved immaterial which estimated parameter was fixed and which parameters
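The two growth functions named in the abstract have simple closed forms. A minimal sketch of the Michailoff and Korf height curves together with the RMSE and bias statistics quoted above (the parameter values used here are illustrative, not the fitted Slovak values):

```python
import math

def michailoff(d, a, b):
    # Two-parameter Michailoff curve: h(d) = 1.3 + a * exp(-b / d),
    # where d is the diameter (cm) and 1.3 m is breast height.
    return 1.3 + a * math.exp(-b / d)

def korf(d, a, b, c):
    # Three-parameter Korf curve: h(d) = 1.3 + a * exp(-b * d**(-c))
    return 1.3 + a * math.exp(-b * d ** (-c))

def fit_stats(observations, model):
    # RMSE and bias (m), the goodness-of-fit measures quoted in the abstract.
    residuals = [h - model(d) for d, h in observations]
    rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    bias = sum(residuals) / len(residuals)
    return rmse, bias
```

Both curves rise monotonically with diameter and approach the asymptote 1.3 + a, which is why one of the two (or three) parameters can be fixed by forcing the curve through the mean stand diameter and height.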

  18. Geometry and time scales of self-consistent orbits in a modified SU(2) model

    International Nuclear Information System (INIS)

    Jezek, D.M.; Hernandez, E.S.; Solari, H.G.

    1986-01-01

We investigate the time-dependent Hartree-Fock flow pattern of a two-level many-fermion system interacting via a two-body interaction which does not preserve the parity symmetry of standard SU(2) models. The geometrical features of the time-dependent Hartree-Fock energy surface are analyzed and a phase instability is clearly recognized. The time evolution of one-body observables along self-consistent and exact trajectories is examined together with the overlaps between both orbits. Typical time scales for the determinantal motion can be set, and the validity of the time-dependent Hartree-Fock approach in the various regions of quasispin phase space is discussed.

  19. Self-consistent model of the Rayleigh--Taylor instability in ablatively accelerated laser plasma

    International Nuclear Information System (INIS)

    Bychkov, V.V.; Golberg, S.M.; Liberman, M.A.

    1994-01-01

A self-consistent approach to the problem of the growth rate of the Rayleigh--Taylor instability in laser accelerated targets is developed. The analytical solution of the problem is obtained by solving the complete system of hydrodynamical equations, which include both thermal conductivity and energy release due to absorption of the laser light. The developed theory provides a rigorous justification for the supplementary boundary condition in the limiting case of the discontinuity model. An analysis of the suppression of the Rayleigh--Taylor instability by the ablation flow is carried out, and a good agreement is found between the obtained solution and the approximate formula σ = 0.9√(gk) − 3u₁k, where g is the acceleration, k is the perturbation wavenumber, and u₁ is the ablation velocity. This paper discusses different regimes of the ablative stabilization and compares them with previous analytical and numerical works.
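The quoted fit can be evaluated directly. A small sketch (sample values are illustrative) of the ablative growth rate σ = 0.9√(gk) − 3u₁k and of the cutoff wavenumber above which ablation stabilizes the mode:

```python
import math

def rt_growth_rate(g, k, u1):
    # Ablative Rayleigh-Taylor growth rate from the approximate formula
    # in the abstract: sigma = 0.9 * sqrt(g k) - 3 * u1 * k.
    return 0.9 * math.sqrt(g * k) - 3.0 * u1 * k

def cutoff_wavenumber(g, u1):
    # sigma = 0 when 0.9 sqrt(g k) = 3 u1 k, i.e. k_c = 0.09 g / u1**2;
    # modes with k > k_c are stabilized by the ablation flow.
    return 0.09 * g / u1 ** 2
```

Short-wavelength perturbations (large k) are damped by the −3u₁k term, which is the ablative stabilization discussed in the paper.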

  20. Self-consistent finite-temperature model of atom-laser coherence properties

    International Nuclear Information System (INIS)

    Fergusson, J.R.; Geddes, A.J.; Hutchinson, D.A.W.

    2005-01-01

    We present a mean-field model of a continuous-wave atom laser with Raman output coupling. The noncondensate is pumped at a fixed input rate which, in turn, pumps the condensate through a two-body scattering process obeying the Fermi golden rule. The gas is then coupled out by a Gaussian beam from the system, and the temperature and particle number are self-consistently evaluated against equilibrium constraints. We observe the dependence of the second-order coherence of the output upon the width of the output-coupling beam, and note that even in the presence of a highly coherent trapped gas, perfect coherence of the output matter wave is not guaranteed

  1. Homogenization of linearly anisotropic scattering cross sections in a consistent B1 heterogeneous leakage model

    International Nuclear Information System (INIS)

    Marleau, G.; Debos, E.

    1998-01-01

One of the main problems encountered in cell calculations is that of spatial homogenization, where one associates with a heterogeneous cell a homogeneous set of cross sections. The homogenization process is in fact trivial when a totally reflected cell without leakage is fully homogenized, since it involves only a flux-volume weighting of the isotropic cross sections. When anisotropic leakage models are considered, in addition to homogenizing the isotropic cross sections, the anisotropic scattering cross section must also be considered. The simple option, which consists of using the same homogenization procedure for both the isotropic and anisotropic components of the scattering cross section, leads to inconsistencies between the homogeneous and homogenized transport equations. Here we present a method for homogenizing the anisotropic scattering cross sections that resolves these inconsistencies. (author)
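The flux-volume weighting mentioned above is a one-liner. A minimal sketch (the region data are made up for illustration) of homogenizing an isotropic cross section over the regions of a cell:

```python
def flux_volume_homogenize(sigma, flux, volume):
    # Sigma_hom = sum_i(phi_i * V_i * Sigma_i) / sum_i(phi_i * V_i):
    # each region's cross section is weighted by its flux-volume product.
    num = sum(s * f * v for s, f, v in zip(sigma, flux, volume))
    den = sum(f * v for f, v in zip(flux, volume))
    return num / den
```

As the abstract notes, applying this same recipe to the anisotropic scattering component is the "simple option" that breaks consistency between the homogeneous and homogenized transport equations; the paper's point is that a different weighting is needed there.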

  2. Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel

    Science.gov (United States)

    Outeiro, José C.; Umbrello, Domenico; Pina, José C.; Rizzuti, Stefania

    2007-05-01

Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly as it affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing since these affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and the residual stresses in machining are absolutely necessary. In this work, a two-dimensional Finite Element model using an implicit Lagrangian formulation with automatic remeshing was applied to simulate the orthogonal cutting process of AISI H13 tool steel. To validate the model, the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses on the machined affected layers were compared. The proposed FE model allowed us to investigate the influence of tool geometry, cutting regime parameters and tool wear on residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The obtained results permit the conclusion that, in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced, and machining with honed tools having large cutting-edge radii produces better results than with chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.

  3. The Consistent Kinetics Porosity (CKP) Model: A Theory for the Mechanical Behavior of Moderately Porous Solids

    Energy Technology Data Exchange (ETDEWEB)

    BRANNON,REBECCA M.

    2000-11-01

A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths.
As will be discussed in a companion
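The projection described in the abstract (the "no plastic work" part of the stress rate) is plain linear algebra. A minimal NumPy sketch of the stated decomposition, not the actual MIG implementation:

```python
import numpy as np

def no_plastic_work_part(stress_rate, stress):
    # Project the stress rate orthogonally to span{S, I} under the
    # inner product A:B = sum_ij A_ij B_ij, where S is the stress
    # deviator and I the identity; this part does no plastic work.
    I = np.eye(3)
    S = stress - (np.trace(stress) / 3.0) * I            # deviator, tr(S) = 0
    T = stress_rate - (np.trace(stress_rate) / 3.0) * I  # remove component along I
    ss = np.tensordot(S, S)                              # S:S (S and I are orthogonal)
    if ss > 0.0:
        T = T - (np.tensordot(T, S) / ss) * S            # remove component along S
    return T
```

Because tr(S) = 0, the deviator and the identity are already orthogonal under the tensor inner product, so the two components can be removed independently.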

  4. SIMULATION TOOLS FOR ELECTRICAL MACHINES MODELLING ...

    African Journals Online (AJOL)

    Dr Obe

ABSTRACT. Simulation tools are used both for research and teaching to allow a good ... The solution provides an easy way of determining the dynamic ... incorporate an in-built numerical algorithm, ... to learn, versatile in application, enhanced.

  5. Integrating decision management with UML modeling concepts and tools

    DEFF Research Database (Denmark)

    Könemann, Patrick

    2009-01-01

    , but also for guiding the user by proposing subsequent decisions. In model-based software development, many decisions directly affect the structural and behavioral models used to describe and develop a software system and its architecture. However, the decisions are typically not connected to these models...... of formerly disconnected tools could improve tool usability as well as decision maker productivity....

  6. Dynamic wind turbine models in power system simulation tool

    DEFF Research Database (Denmark)

    Hansen, A.; Jauch, Clemens; Soerensen, P.

    The present report describes the dynamic wind turbine models implemented in the power system simulation tool DIgSILENT. The developed models are a part of the results of a national research project, whose overall objective is to create a model database in different simulation tools. The report...

  7. Self consistent solution of the tJ model in the overdoped regime

    Science.gov (United States)

    Shastry, B. Sriram; Hansen, Daniel

    2013-03-01

Detailed results from a recent microscopic theory of extremely correlated Fermi liquids, applied to the t-J model in two dimensions, are presented. The theory is to second order in a parameter λ, and is valid in the overdoped regime of the t-J model. The solution reported here is from Ref, where the relevant equations given in Ref are self-consistently solved for the square lattice. Thermodynamic variables and the resistivity are displayed at various densities and temperatures for two sets of band parameters. The momentum distribution function and the renormalized electronic dispersion, its width and asymmetry, are reported along principal directions of the zone. The optical conductivity is calculated. The electronic spectral function A(k, ω), probed in ARPES, is detailed with different elastic scattering parameters to account for the distinction between laser and synchrotron ARPES. A high (binding) energy waterfall feature, sensitively dependent on the band hopping parameter t', is noted. This work was supported by DOE under Grant No. FG02-06ER46319.

  8. Study of impurity effects on CFETR steady-state scenario by self-consistent integrated modeling

    Science.gov (United States)

    Shi, Nan; Chan, Vincent S.; Jian, Xiang; Li, Guoqiang; Chen, Jiale; Gao, Xiang; Shi, Shengyu; Kong, Defeng; Liu, Xiaoju; Mao, Shifeng; Xu, Guoliang

    2017-12-01

Impurity effects on the fusion performance of the China Fusion Engineering Test Reactor (CFETR) due to extrinsic seeding are investigated. An integrated 1.5D modeling workflow evolves the plasma equilibrium and all transport channels to steady state. The One Modeling Framework for Integrated Tasks (OMFIT) is used to couple the transport solver, MHD equilibrium solver, and source and sink calculations. A self-consistent impurity profile constructed using a steady-state background plasma, which satisfies quasi-neutrality and true steady state, is presented for the first time. Studies are performed based on an optimized fully non-inductive scenario with varying concentrations of argon (Ar) seeding. It is found that fusion performance improves before dropping off with increasing Z_eff, while the confinement remains at a high level. Further analysis of transport for these plasmas shows that low-k ion temperature gradient modes dominate the turbulence. The decrease in linear growth rate and the resultant fluxes of all channels with increasing Z_eff can be traced to the impurity profile change by transport. The improvement in confinement levels off at higher Z_eff. Over the regime of study there is a competition between the suppressed transport and increasing radiation that leads to a peak in the fusion performance at Z_eff ~ 2.78 for CFETR. Extrinsic impurity seeding to control the divertor heat load will need to be optimized around this value for best fusion performance.
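The trade-off behind the Z_eff optimum can be made concrete with a back-of-envelope dilution estimate. A sketch assuming a single fully stripped impurity species (an optimistic assumption for Ar in a reactor core) and quasi-neutrality:

```python
def impurity_fraction(z_eff, z_imp):
    # For one impurity of charge Z in a Z=1 fuel plasma, quasi-neutrality
    # and the Z_eff definition give Z_eff = 1 + Z(Z-1) * n_imp/n_e.
    return (z_eff - 1.0) / (z_imp * (z_imp - 1.0))

def fuel_dilution(z_eff, z_imp):
    # Fraction of electrons supplied by the D-T fuel; fusion power
    # scales roughly as the square of this fraction.
    return 1.0 - z_imp * impurity_fraction(z_eff, z_imp)
```

At the paper's optimum Z_eff ~ 2.78 with Ar (Z = 18), this estimate gives roughly 10% fuel dilution, i.e. about a 20% reduction in fusion power before transport suppression and radiation losses are counted, which is the competition the abstract describes.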

  9. Modeling of LH current drive in self-consistent elongated tokamak MHD equilibria

    International Nuclear Information System (INIS)

    Blackfield, D.T.; Devoto, R.S.; Fenstermacher, M.E.; Bonoli, P.T.; Porkolab, M.; Yugo, J.

    1989-01-01

    Calculations of non-inductive current drive typically have been used with model MHD equilibria which are independently generated from an assumed toroidal current profile or from a fit to an experiment. Such a method can lead to serious errors since the driven current can dramatically alter the equilibrium and changes in the equilibrium B-fields can dramatically alter the current drive. The latter effect is quite pronounced in LH current drive where the ray trajectories are sensitive to the local values of the magnetic shear and the density gradient. In order to overcome these problems, we have modified a LH simulation code to accommodate elongated plasmas with numerically generated equilibria. The new LH module has been added to the ACCOME code which solves for current drive by neutral beams, electric fields, and bootstrap effects in a self-consistent 2-D equilibrium. We briefly describe the model in the next section and then present results of a study of LH current drive in ITER. 2 refs., 6 figs., 2 tabs

  10. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    International Nuclear Information System (INIS)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-01-01

To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses self-consistent field theory. This numerical model accurately predicts mechanical parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and the parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄ < 0, both k̄ and J_0^m can become strongly positive, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes, as these become vulnerable to pore formation or disintegration into lipid disks.
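The packing-parameter argument invoked above is easy to state. A minimal sketch (the regime thresholds are the textbook ones; any specific lipid numbers would be illustrative):

```python
def packing_parameter(v_tail, a_head, l_tail):
    # Israelachvili surfactant packing parameter p = v / (a0 * lc),
    # with tail volume v, head group area a0, and tail length lc.
    return v_tail / (a_head * l_tail)

def preferred_aggregate(p):
    # Textbook regimes; the sign changes of kbar and J_0^m reported in
    # the abstract track p moving through these bands.
    if p < 1.0 / 3.0:
        return "spherical micelles"
    if p < 0.5:
        return "cylindrical micelles"
    if p <= 1.0:
        return "bilayers / vesicles"
    return "inverted phases"
```

Bulky head groups (large a0, e.g. at low ionic strength for charged PG) push p down and favour positively curved aggregates, consistent with the reported tendency toward pore formation or disintegration into lipid disks.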

  11. Self consistent MHD modeling of the solar wind from polar coronal holes

    International Nuclear Information System (INIS)

    Stewart, G. A.; Bravo, S.

    1996-01-01

We have developed a 2D self-consistent MHD model for solar wind flow from antisymmetric magnetic geometries. We present results for the case of a photospheric magnetic field with a dipolar configuration, in order to investigate some of the general characteristics of the wind at solar minimum. As in previous studies, we find that the magnetic configuration is that of a closed field region (a coronal helmet belt) around the solar equator, extending up to about 1.6 R_⊙, and two large open field regions centred over the poles (polar coronal holes), whose magnetic and plasma fluxes expand to fill both hemispheres in interplanetary space. In addition, we find that the different geometries of the magnetic field lines across each hole (from the almost radial central polar lines to the highly curved border equatorial lines) cause the solar wind to have greatly different properties depending on which region it flows from. We find that, even though our simplified model cannot produce realistic wind values, we obtain a polar wind that is faster, less dense and hotter than the equatorial wind, and that, close to the Sun, there exists a sharp transition between the two wind types. As these characteristics coincide with observations, we conclude that both fast and slow solar wind can originate from coronal holes: fast wind from the centre, slow wind from the border.

  12. Quantum self-consistency of AdSxΣ brane models

    International Nuclear Information System (INIS)

    Flachi, Antonino; Pujolas, Oriol

    2003-01-01

Continuing our previous work, we consider a class of higher dimensional brane models with the topology of AdS_{D1+1} × Σ, where Σ is a one-parameter compact manifold and two branes of codimension one are located at the orbifold fixed points. We consider a setup where such a solution arises from Einstein-Yang-Mills theory and evaluate the one-loop effective potential induced by gauge fields and by a generic bulk scalar field. We show that this type of brane model resolves the gauge hierarchy between the Planck and electroweak scales through redshift effects due to the warp factor a = e^{-πkr}. The value of a is then fixed by minimizing the effective potential. We find that, as in the Randall-Sundrum case, the gauge field contribution to the effective potential stabilizes the hierarchy without fine-tuning as long as the Laplacian Δ_Σ on Σ has a zero eigenvalue. Scalar fields can stabilize the hierarchy depending on the mass and the nonminimal coupling. We also address the quantum self-consistency of the solution, showing that the classical brane solution is not spoiled by quantum effects.

  13. Modeling and Tool Wear in Routing of CFRP

    International Nuclear Information System (INIS)

    Iliescu, D.; Fernandez, A.; Gutierrez-Orrantia, M. E.; Lopez de Lacalle, L. N.; Girot, F.

    2011-01-01

This paper presents the prediction and evaluation of the feed force in routing of carbon composite material. In order to extend tool life and improve the quality of the machined surface, a better understanding of uncoated and coated tool behavior is required. This work describes (1) the optimization of the geometry of multiple-teeth tools minimizing the tool wear and the feed force, (2) the optimization of the tool coating and (3) the development of a phenomenological model linking the feed force to the routing parameters and the tool wear. The experimental results indicate that the feed rate, the cutting speed and the tool wear are the most significant factors affecting the feed force. In the case of multiple-teeth tools, a particular geometry with 14 teeth right helix right cut and 11 teeth left helix right cut gives the best results. A thick AlTiN coating or a diamond coating can dramatically improve the tool life while minimizing the axial force, roughness and delamination. A wear model has then been developed based on the abrasive behavior of the tool. The model links the feed force to the tool geometry parameters (tool diameter), to the process parameters (feed rate, cutting speed and depth of cut) and to the wear. The model presented has been verified by experimental tests.

  14. Self-consistent model of the low-latitude boundary layer

    International Nuclear Information System (INIS)

    Phan, T.D.; Sonnerup, B.U.Oe.; Lotko, W.

    1989-01-01

A simple two-dimensional, steady state, viscous model of the dawnside and duskside low-latitude boundary layer (LLBL) has been developed. It incorporates coupling to the ionosphere via field-aligned currents and associated field-aligned potential drops, governed by a simple conductance law, and it describes boundary layer currents, magnetic fields, and plasma flow in a self-consistent manner. The magnetic field induced by these currents leads to two effects: (1) a diamagnetic depression of the magnetic field in the equatorial region and (2) bending of the field lines into parabolas in the xz plane with their vertices in the equatorial plane, at z = 0, and pointing in the flow direction, i.e., tailward. Both effects are strongest at the magnetopause edge of the boundary layer and vanish at the magnetospheric edge. The diamagnetic depression corresponds to an excess of plasma pressure in the equatorial boundary layer near the magnetopause. The boundary layer structure is governed by a fourth-order, nonlinear ordinary differential equation in which one nondimensional parameter, the Hartmann number M, appears. A second parameter, introduced via the boundary conditions, is a nondimensional flow velocity v_0^* at the magnetopause. Numerical results from the model are presented and the possible use of observations to determine the model parameters is discussed. The main new contribution of the study is to provide a better description of the field and plasma configuration in the LLBL itself and to clarify in quantitative terms the circumstances in which induced magnetic fields become important.

  15. Electron beam charging of insulators: A self-consistent flight-drift model

    International Nuclear Information System (INIS)

    Touzin, M.; Goeuriot, D.; Guerret-Piecourt, C.; Juve, D.; Treheux, D.; Fitting, H.-J.

    2006-01-01

Electron beam irradiation and the self-consistent charge transport in bulk insulating samples are described by means of a new flight-drift model and an iterative computer simulation. Ballistic secondary electron and hole transport is followed by electron and hole drift, their possible recombination and/or trapping in shallow and deep traps. The trap capture cross sections are of the Poole-Frenkel type, i.e., temperature and field dependent. As a main result, the spatial distributions of the currents j(x,t), charges ρ(x,t), the field F(x,t), and the potential slope V(x,t) are obtained in a self-consistent procedure, as well as the time-dependent secondary electron emission rate σ(t) and the surface potential V_0(t). For bulk insulating samples the time-dependent distributions approach the final stationary state with j(x,t) = const = 0 and σ = 1. Especially for low electron beam energies E_0, the surface potential can be controlled by means of the potential V_G of a vacuum grid in front of the target surface. For high beam energies E_0 = 10, 20, and 30 keV, high negative surface potentials V_0 = -4, -14, and -24 kV are obtained, respectively. Besides open nonconductive samples, positive-ion-covered samples and targets with a conducting and grounded layer (metal or carbon) on the surface have also been considered, as used in environmental scanning electron microscopy and common SEM in order to prevent charging. Indeed, the potential distributions V(x) are then considerably small in magnitude and do not affect the incident electron beam, either by retarding field effects in front of the surface or within the bulk insulating sample. Thus the spatial scattering and excitation distributions are almost unaffected.
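The Poole-Frenkel temperature and field dependence named above has a standard closed form. A sketch of the textbook barrier-lowering factor (not the authors' exact parametrization; the relative permittivity value in the test is an illustrative assumption):

```python
import math

E_CHARGE = 1.602176634e-19  # C
K_B = 1.380649e-23          # J/K
EPS0 = 8.8541878128e-12     # F/m

def poole_frenkel_factor(field, temperature, eps_r):
    # Field-dependent enhancement of trap escape: the Coulombic trap
    # barrier is lowered by beta * sqrt(F), giving a factor
    # exp(beta * sqrt(F) / (k_B T)) relative to the zero-field rate.
    beta = math.sqrt(E_CHARGE ** 3 / (math.pi * EPS0 * eps_r))
    return math.exp(beta * math.sqrt(field) / (K_B * temperature))
```

The factor is 1 at zero field, grows steeply with the local field F(x,t), and weakens at higher temperature, which is why the trapping balance must be recomputed self-consistently as the internal field builds up.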

  16. Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation

    Science.gov (United States)

    Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras

    2016-04-01

into one algorithm. We are working towards presenting the first benchmarked 3D dynamic rupture models as an important step towards seismic cycle modelling of megathrust segmentation in a three-dimensional subduction setting with slow tectonic loading, self-consistent fault development, and spontaneous seismicity.

  17. Development of a real-time simulation tool towards self-consistent scenario of plasma start-up and sustainment on helical fusion reactor FFHR-d1

    Science.gov (United States)

    Goto, T.; Miyazawa, J.; Sakamoto, R.; Suzuki, Y.; Suzuki, C.; Seki, R.; Satake, S.; Huang, B.; Nunami, M.; Yokoyama, M.; Sagara, A.; the FFHR Design Group

    2017-06-01

This study closely investigates the plasma operation scenario for the LHD-type helical reactor FFHR-d1 in view of MHD equilibrium/stability, neoclassical transport, alpha energy loss and impurity effects. Using a 1D calculation code that reproduces the typical pellet discharges in LHD experiments, a self-consistent plasma operation scenario that achieves steady-state sustainment of the burning plasma with a fusion gain of Q ~ 10 was identified within the operation regime that has already been confirmed in LHD experiments. The developed calculation tool enables systematic analysis of the operation regime in real time.

  18. Student Model Tools Code Release and Documentation

    DEFF Research Database (Denmark)

    Johnson, Matthew; Bull, Susan; Masci, Drew

    of its strengths and areas of improvement (Section 6). Several key appendices are attached to this report including user manuals for teacher and students (Appendix 3). Fundamentally, all relevant information is included in the report for those wishing to do further development work with the tool...

  19. The Devil in the Dark: A Fully Self-Consistent Seismic Model for Venus

    Science.gov (United States)

    Unterborn, C. T.; Schmerr, N. C.; Irving, J. C. E.

    2017-12-01

    The bulk composition and structure of Venus is unknown despite accounting for 40% of the mass of all the terrestrial planets in our Solar System. As we expand the scope of planetary science to include those planets around other stars, the lack of measurements of basic planetary properties such as moment of inertia, core-size and thermal profile for Venus hinders our ability to compare the potential uniqueness of the Earth and our Solar System to other planetary systems. Here we present fully self-consistent, whole-planet density and seismic velocity profiles calculated using the ExoPlex and BurnMan software packages for various potential Venusian compositions. Using these models, we explore the seismological implications of the different thermal and compositional initial conditions, taking into account phase transitions due to changes in pressure, temperature as well as composition. Using mass-radius constraints, we examine both the centre frequencies of normal mode oscillations and the waveforms and travel times of body waves. Seismic phases which interact with the core, phase transitions in the mantle, and shallower parts of Venus are considered. We also consider the detectability and transmission of these seismic waves from within the dense atmosphere of Venus. Our work provides coupled compositional-seismological reference models for the terrestrial planet in our Solar System of which we know the least. Furthermore, these results point to the potential wealth of fundamental scientific insights into Venus and Earth, as well as exoplanets, which could be gained by including a seismometer on future planetary exploration missions to Venus, the devil in the dark.

  20. Self-consistent modeling of radio-frequency plasma generation in stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Moiseenko, V. E., E-mail: moiseenk@ipp.kharkov.ua; Stadnik, Yu. S., E-mail: stadnikys@kipt.kharkov.ua [National Academy of Sciences of Ukraine, National Science Center Kharkov Institute of Physics and Technology (Ukraine); Lysoivan, A. I., E-mail: a.lyssoivan@fz-juelich.de [Royal Military Academy, EURATOM-Belgian State Association, Laboratory for Plasma Physics (Belgium); Korovin, V. B. [National Academy of Sciences of Ukraine, National Science Center Kharkov Institute of Physics and Technology (Ukraine)

    2013-11-15

    A self-consistent model of radio-frequency (RF) plasma generation in stellarators in the ion cyclotron frequency range is described. The model includes equations for the particle and energy balance and boundary conditions for Maxwell’s equations. The equation of charged particle balance takes into account the influx of particles due to ionization and their loss via diffusion and convection. The equation of electron energy balance takes into account the RF heating power source, as well as energy losses due to the excitation and electron-impact ionization of gas atoms, energy exchange via Coulomb collisions, and plasma heat conduction. The deposited RF power is calculated by solving the boundary problem for Maxwell’s equations. When describing the dissipation of the energy of the RF field, collisional absorption and Landau damping are taken into account. At each time step, Maxwell’s equations are solved for the current profiles of the plasma density and plasma temperature. The calculations are performed for a cylindrical plasma. The plasma is assumed to be axisymmetric and homogeneous along the plasma column. The system of balance equations is solved using the Crank-Nicolson scheme. Maxwell’s equations are solved in a one-dimensional approximation by using the Fourier transformation along the azimuthal and longitudinal coordinates. Results of simulations of RF plasma generation in the Uragan-2M stellarator by using a frame antenna operating at frequencies lower than the ion cyclotron frequency are presented. The calculations show that the slow wave generated by the antenna is efficiently absorbed at the periphery of the plasma column, due to which only a small fraction of the input power reaches the confinement region. As a result, the temperature on the axis of the plasma column remains low, whereas at the periphery it is substantially higher. This leads to strong absorption of the RF field at the periphery via the Landau mechanism.
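
    The Crank-Nicolson update used for the balance equations can be illustrated on a single 1-D diffusion equation. The sketch below (Python) is not the paper's coupled RF model; the grid size, diffusivity, and fixed-edge boundary conditions are illustrative assumptions.

```python
import numpy as np

def crank_nicolson_step(T, D, dx, dt):
    """One Crank-Nicolson step for dT/dt = D * d2T/dx2 with fixed edges.

    The scheme averages the explicit and implicit second differences,
    which makes it unconditionally stable and second-order in time.
    """
    n = len(T)
    r = D * dt / (2.0 * dx**2)
    # Build (I - r*L) T_new = (I + r*L) T_old; boundary rows hold T fixed.
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = B[0, 0] = B[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
        B[i, i - 1], B[i, i], B[i, i + 1] = r, 1 - 2 * r, r
    return np.linalg.solve(A, B @ T)

# A heat pulse relaxing between cold walls
x = np.linspace(0.0, 1.0, 51)
T = np.exp(-100.0 * (x - 0.5) ** 2)
for _ in range(200):
    T = crank_nicolson_step(T, D=1e-3, dx=x[1] - x[0], dt=0.01)
```

    In the paper's setting the same implicit time averaging is applied to the coupled particle and energy balance equations rather than to a single scalar field.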

  1. Consistency of land surface reflectance data: presentation of a new tool and case study with Formosat-2, SPOT-4 and Landsat-5/7/8 data

    Science.gov (United States)

    Claverie, M.; Vermote, E.; Franch, B.; Huc, M.; Hagolle, O.; Masek, J.

    2013-12-01

    Maintaining a consistent dataset of Surface Reflectance (SR) data derived from the large panel of in-orbit sensors is an important challenge for ensuring long-term analysis of Earth observation data. Continuous validation of such SR products through comparison with a reference dataset is therefore essential. Validating against in situ or airborne SR data is not easy, since those sensors rarely match the spectral, spatial, and directional characteristics of the satellite measurement. Inter-comparison between satellite sensors thus appears to be a valuable tool for maintaining long-term consistency of the data. However, satellite data are acquired at various times of the day (i.e., under varying atmospheric content) and within a relatively large range of geometries (view and sun angles). Moreover, even when the band-to-band spectral characteristics of optical sensors are close, they rarely have identical spectral responses. As a result, direct comparisons that do not account for these differences are poorly suitable. In this study, we propose a new systematic method to assess land optical SR data from high- to medium-resolution sensors. We used MODIS SR products (MO/YD09CMG), which benefit from a long-term calibration/validation process, to assess SR from three sensors: Formosat-2 (280 scenes, 24x24 km, 5 sites), SPOT-4 (62 scenes, 120x60 km, 1 site) and Landsat-5/7 (104 scenes, 180x180 km, 50 sites). The main issue is the difference in acquisition geometry between MODIS and the compared sensor data. We used the VJB model (Vermote et al. 2009, TGRS) to correct MODIS SR for BRDF effects and to simulate SR at the corresponding geometry (view and sun angles) of each pixel of the compared sensor data. The comparison is done at the CMG spatial resolution (0.05°), which ensures a constant field-of-view and negligible geometric errors. Figure 1 displays a summary of the NIR results through APU graphs, where the metrics A, P, and U stand for Accuracy, Precision, and Uncertainty.
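
    The APU metrics used in such comparisons are conventionally the mean bias (A, accuracy), the standard deviation of the bias (P, precision), and the RMSE (U, uncertainty). A minimal sketch under that assumption (Python, with made-up reflectance pairs, not the study's data):

```python
import numpy as np

def apu_metrics(reference, measured):
    """APU validation metrics for surface reflectance comparisons.

    A (accuracy)    = mean bias
    P (precision)   = standard deviation of the bias
    U (uncertainty) = root-mean-square error, so U^2 = A^2 + P^2
    """
    diff = np.asarray(measured) - np.asarray(reference)
    a = diff.mean()
    p = diff.std(ddof=0)
    u = np.sqrt(np.mean(diff**2))
    return a, p, u

# Hypothetical NIR reflectance pairs (reference vs. sensor under test)
ref = np.array([0.30, 0.32, 0.28, 0.35, 0.31])
obs = np.array([0.31, 0.33, 0.27, 0.36, 0.32])
a, p, u = apu_metrics(ref, obs)
```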

  2. Reproducibility and consistency of proteomic experiments on natural populations of a non-model aquatic insect.

    Science.gov (United States)

    Hidalgo-Galiana, Amparo; Monge, Marta; Biron, David G; Canals, Francesc; Ribera, Ignacio; Cieslak, Alexandra

    2014-01-01

    Population proteomics has a great potential to address evolutionary and ecological questions, but its use in wild populations of non-model organisms is hampered by uncontrolled sources of variation. Here we compare the response to temperature extremes of two geographically distant populations of a diving beetle species (Agabus ramblae) using 2-D DIGE. After one week of acclimation in the laboratory under standard conditions, a third of the specimens of each population were placed at either 4 or 27°C for 12 h, with another third left as a control. We then compared the protein expression level of three replicated samples of 2-3 specimens for each treatment. Within each population, variation between replicated samples of the same treatment was always lower than variation between treatments, except for some control samples that retained a wider range of expression levels. The two populations had a similar response, without significant differences in the number of protein spots over- or under-expressed in the pairwise comparisons between treatments. We identified exemplary proteins among those differently expressed between treatments, which proved to be proteins known to be related to thermal response or stress. Overall, our results indicate that specimens collected in the wild are suitable for proteomic analyses, as the additional sources of variation were not enough to mask the consistency and reproducibility of the response to the temperature treatments.

  3. A consistent model for the equilibrium thermodynamic functions of partially ionized flibe plasma with Coulomb corrections

    International Nuclear Information System (INIS)

    Zaghloul, Mofreh R.

    2003-01-01

    Flibe (2LiF-BeF2) is a molten salt that has been chosen as the coolant and breeding material in many design studies of the inertial confinement fusion (ICF) chamber. Flibe plasmas are to be generated in the ICF chamber over a wide range of temperatures and densities. These plasmas are more complex than the plasma of any single chemical species. Nevertheless, the composition and thermodynamic properties of the resulting flibe plasmas are needed for the gas dynamics calculations and the determination of other design parameters in the ICF chamber. In this paper, a simple consistent model for determining the detailed plasma composition and thermodynamic functions of high-temperature, fully dissociated and partially ionized flibe gas is presented and used to calculate different thermodynamic properties of interest to fusion applications. The computed properties include the average ionization state, kinetic pressure, internal energy, specific heats, and adiabatic exponent, as well as the sound speed. The presented results are computed under the assumptions of local thermodynamic equilibrium (LTE) and electro-neutrality. A criterion for the validity of the LTE assumption is presented and applied to the computed results. Other attempts in the literature are assessed, and their implied inaccuracies are pointed out and discussed.
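
    The LTE composition calculation described above rests on Saha-type ionization balance. A minimal single-species sketch (Python) is given below; the hydrogen-like ionization energy, equal statistical weights, and single ionization stage are simplifying assumptions relative to the paper's multi-species flibe model.

```python
import math

# Physical constants (SI)
KB = 1.380649e-23    # J/K
ME = 9.1093837e-31   # kg
H  = 6.62607015e-34  # J*s
EV = 1.602176634e-19 # J

def saha_ionization_fraction(n_total, T_K, chi_eV, iters=200):
    """Ionization fraction x of a single species in LTE.

    Solves x^2 * n_total / (1 - x) = S(T) by bisection, where S is the
    Saha function for one ionization stage (partition-function ratio
    taken as unity for simplicity).
    """
    S = (2.0 * math.pi * ME * KB * T_K / H**2) ** 1.5 \
        * 2.0 * math.exp(-chi_eV * EV / (KB * T_K))
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        x = 0.5 * (lo + hi)
        if x * x * n_total / (1.0 - x) > S:
            hi = x
        else:
            lo = x
    return 0.5 * (lo + hi)

# Hydrogen-like species, chi = 13.6 eV, n = 1e20 m^-3:
x_cool = saha_ionization_fraction(1e20, 5_000.0, 13.6)   # weakly ionized
x_hot  = saha_ionization_fraction(1e20, 30_000.0, 13.6)  # nearly fully ionized
```

    Once the stage populations are known, quantities such as the average ionization state and kinetic pressure follow by summing over species.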

  4. A fully kinetic, self-consistent particle simulation model of the collisionless plasma--sheath region

    International Nuclear Information System (INIS)

    Procassini, R.J.; Birdsall, C.K.; Morse, E.C.

    1990-01-01

    A fully kinetic particle-in-cell (PIC) model is used to self-consistently determine the steady-state potential profile in a collisionless plasma that contacts a floating, absorbing boundary. To balance the flow of particles to the wall, a distributed source region is used to inject particles into the one-dimensional system. The effect of the particle source distribution function on the source region and collector sheath potential drops, and particle velocity distributions is investigated. The ion source functions proposed by Emmert et al. [Phys. Fluids 23, 803 (1980)] and Bissell and Johnson [Phys. Fluids 30, 779 (1987)] (and various combinations of these) are used for the injection of both ions and electrons. The values of the potential drops obtained from the PIC simulations are compared to those from the theories of Emmert et al., Bissell and Johnson, and Scheuer and Emmert [Phys. Fluids 31, 3645 (1988)], all of which assume that the electron density is related to the plasma potential via the Boltzmann relation. The values of the source region and total potential drop are found to depend on the choice of the electron source function, as well as the ion source function. The question of an infinite electric field at the plasma--sheath interface, which arises in the analyses of Bissell and Johnson and Scheuer and Emmert, is also addressed
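
    As a toy illustration of the PIC cycle underlying such simulations (deposit charge, solve for the field, push particles), here is a minimal periodic 1-D electrostatic sketch in Python. The paper's bounded system with absorbing walls and distributed source injection is substantially more involved; the grid, weighting, and normalized units below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy periodic 1-D electrostatic PIC cycle: deposit -> solve -> push.
L, ng, npart, dt = 2.0 * np.pi, 64, 20_000, 0.1
dx = L / ng
x = rng.uniform(0.0, L, npart)    # electron macro-particle positions
v = rng.normal(0.0, 1.0, npart)   # thermal velocity spread
q_over_m = -1.0                   # electrons; ions form a fixed background
weight = L / npart                # macro-particle weight -> mean density 1

for _ in range(10):
    # Nearest-grid-point charge deposition against a neutralizing background
    cells = (x / dx).astype(int) % ng
    rho = 1.0 - weight * np.bincount(cells, minlength=ng) / dx
    # Spectral Poisson solve: d2phi/dx2 = -rho, then E = -dphi/dx
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))
    # Leapfrog push with the grid field gathered at each particle's cell
    v += q_over_m * E[cells] * dt
    x = (x + v * dt) % L
```

    In the bounded problem of the paper, particles reaching the wall are absorbed and re-injected from a source region instead of being wrapped periodically, and the wall potential floats to balance the fluxes.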

  5. Comprehensive and fully self-consistent modeling of modern semiconductor lasers

    International Nuclear Information System (INIS)

    Nakwaski, W.; Sarzał, R. P.

    2016-01-01

    The fully self-consistent model of modern semiconductor lasers used to design their advanced structures and to understand more deeply their properties is given in the present paper. Operation of semiconductor lasers depends not only on many optical, electrical, thermal, recombination, and sometimes mechanical phenomena taking place within their volumes but also on numerous mutual interactions between these phenomena. Their experimental investigation is quite complex, mostly because of miniature device sizes. Therefore, the most convenient and exact method to analyze expected laser operation and to determine laser optimal structures for various applications is to examine the details of their performance with the aid of a simulation of laser operation in various considered conditions. Such a simulation of an operation of semiconductor lasers is presented in this paper in a full complexity of all mutual interactions between the above individual physical processes. In particular, the hole-burning effect has been discussed. The impacts on laser performance introduced by oxide apertures (their sizes and localization) have been analyzed in detail. Also, some important details concerning the operation of various types of semiconductor lasers are discussed. The results of some applications of semiconductor lasers are shown for successive laser structures. (paper)

  6. A self-consistent first-principle based approach to model carrier mobility in organic materials

    International Nuclear Information System (INIS)

    Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang

    2015-01-01

    Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model by using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight-binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high-performance computing, developed within the EU project MMM@HPC.
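
    Hopping rates built from electronic couplings, site-energy differences, and reorganization energies are typically of Marcus form. A minimal sketch (Python; the Marcus expression is standard, but its use here and all parameter values are illustrative assumptions, not the authors' exact workflow):

```python
import math

KB_EV = 8.617333262e-5      # Boltzmann constant, eV/K
HBAR_EV = 6.582119569e-16   # reduced Planck constant, eV*s

def marcus_rate(J_eV, lam_eV, dE_eV, T=300.0):
    """Marcus hopping rate between two molecular sites (1/s).

    J: electronic coupling, lam: reorganization energy,
    dE: site-energy difference (final minus initial).
    """
    kT = KB_EV * T
    prefac = (J_eV**2 / HBAR_EV) * math.sqrt(math.pi / (lam_eV * kT))
    return prefac * math.exp(-(dE_eV + lam_eV) ** 2 / (4.0 * lam_eV * kT))

# Downhill hops are faster than uphill hops of the same magnitude,
# and the two obey detailed balance: k_down / k_up = exp(dE / kT).
k_down = marcus_rate(0.01, 0.2, -0.05)
k_up = marcus_rate(0.01, 0.2, +0.05)
```

    In a full mobility calculation such rates feed either a kinetic Monte Carlo simulation or, as in the paper, an analytic mobility expression.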

  7. Multiscale Modeling at Nanointerfaces: Polymer Thin Film Materials Discovery via Thermomechanically Consistent Coarse Graining

    Science.gov (United States)

    Hsu, David D.

    Due to the high nanointerfacial area to volume ratio, the properties of "nanoconfined" polymer thin films, blends, and composites become highly altered compared to their bulk homopolymer analogues. Understanding the structure-property mechanisms underlying this effect is an active area of research. However, despite extensive work, a fundamental framework for predicting the local and system-averaged thermomechanical properties as a function of configuration and polymer species has yet to be established. Towards bridging this gap, here we present a novel, systematic coarse-graining (CG) method which is able to capture, quantitatively, the thermomechanical properties of real polymer systems in bulk and in nanoconfined geometries. This method, which we call thermomechanically consistent coarse-graining (TCCG), is a two-bead-per-monomer CG hybrid approach in which bonded interactions are optimized to match the atomistic structure via the Iterative Boltzmann Inversion (IBI) method, and nonbonded interactions are tuned to macroscopic targets through parametric studies. We validate the TCCG method by systematically developing coarse-grain models for a group of five specialized methacrylate-based polymers including poly(methyl methacrylate) (PMMA). Good correlation with bulk all-atom (AA) simulations and experiments is found for the Flory-Fox scaling relationships of the glass transition temperature (Tg), self-diffusion coefficients of liquid monomers, and modulus of elasticity. We apply this TCCG method also to bulk polystyrene (PS) using a comparable CG bead-mapping strategy. The model demonstrates chain stiffness commensurate with experiments, and we utilize a density-correction term to improve the transferability of the elastic modulus over a 500 K range. Additionally, the PS and PMMA models capture the unexplained, characteristically dissimilar scaling of Tg with the thickness of free-standing films as seen in experiments. Using vibrational
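
    The IBI step used for the bonded interactions can be sketched in a few lines. The update raises the potential wherever the CG radial distribution function (RDF) overshoots the atomistic target, so the structure relaxes toward the target over iterations (Python; the target RDF and damping factor below are illustrative assumptions):

```python
import numpy as np

def ibi_update(V, g_cg, g_target, kT=1.0, alpha=1.0):
    """One Iterative Boltzmann Inversion step.

    V_{i+1}(r) = V_i(r) + alpha * kT * ln(g_i(r) / g_target(r)),
    nudging the CG pair potential until its RDF matches the target.
    alpha < 1 damps the update for stability.
    """
    g_cg = np.clip(g_cg, 1e-8, None)       # guard against log(0)
    g_target = np.clip(g_target, 1e-8, None)
    return V + alpha * kT * np.log(g_cg / g_target)

# If the CG model over-structures at contact (g too high), the potential
# is raised there, pushing beads apart on the next iteration.
r = np.linspace(0.9, 2.0, 12)
V = np.zeros_like(r)
g_target = np.ones_like(r)
g_cg = np.where(r < 1.2, 1.5, 1.0)   # excess first-shell structure
V_new = ibi_update(V, g_cg, g_target)
```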

  8. Functional connectivity modeling of consistent cortico-striatal degeneration in Huntington's disease

    Directory of Open Access Journals (Sweden)

    Imis Dogan

    2015-01-01

    Full Text Available Huntington's disease (HD is a progressive neurodegenerative disorder characterized by a complex neuropsychiatric phenotype. In a recent meta-analysis we identified core regions of consistent neurodegeneration in premanifest HD in the striatum and middle occipital gyrus (MOG. For early manifest HD convergent evidence of atrophy was most prominent in the striatum, motor cortex (M1 and inferior frontal junction (IFJ. The aim of the present study was to functionally characterize this topography of brain atrophy and to investigate differential connectivity patterns formed by consistent cortico-striatal atrophy regions in HD. Using areas of striatal and cortical atrophy at different disease stages as seeds, we performed task-free resting-state and task-based meta-analytic connectivity modeling (MACM. MACM utilizes the large data source of the BrainMap database and identifies significant areas of above-chance co-activation with the seed-region via the activation-likelihood-estimation approach. In order to delineate functional networks formed by cortical as well as striatal atrophy regions we computed the conjunction between the co-activation profiles of striatal and cortical seeds in the premanifest and manifest stages of HD, respectively. Functional characterization of the seeds was obtained using the behavioral meta-data of BrainMap. Cortico-striatal atrophy seeds of the premanifest stage of HD showed common co-activation with a rather cognitive network including the striatum, anterior insula, lateral prefrontal, premotor, supplementary motor and parietal regions. A similar but more pronounced co-activation pattern, additionally including the medial prefrontal cortex and thalamic nuclei was found with striatal and IFJ seeds at the manifest HD stage. The striatum and M1 were functionally connected mainly to premotor and sensorimotor areas, posterior insula, putamen and thalamus. Behavioral characterization of the seeds confirmed that experiments

  9. Advanced reach tool (ART) : Development of the mechanistic model

    NARCIS (Netherlands)

    Fransman, W.; Tongeren, M. van; Cherrie, J.W.; Tischer, M.; Schneider, T.; Schinkel, J.; Kromhout, H.; Warren, N.; Goede, H.; Tielemans, E.

    2011-01-01

    This paper describes the development of the mechanistic model within a collaborative project, referred to as the Advanced REACH Tool (ART) project, to develop a tool to model inhalation exposure for workers sharing similar operational conditions across different industries and locations in Europe.

  10. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    Science.gov (United States)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  11. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr

    2015-01-01

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics

  12. MARKET EVALUATION MODEL: TOOL FOR BUSINESS DECISIONS

    OpenAIRE

    Porlles Loarte, José; Yenque Dedios, Julio; Lavado Soto, Aurelio

    2014-01-01

    In the present work, the concepts of potential market and global market are analyzed as the basis for strategic market decisions with long-term perspectives, when the establishment of a business in a certain geographic area is evaluated. On this conceptual frame, a methodological tool is proposed for evaluating a commercial decision, taking as reference the case of the brewing industry in Peru, considering that this industry faces in the region entrepreneurial reorderings withi...

  13. A self-consistent kinetic modeling of a 1-D, bounded, plasma in ...

    Indian Academy of Sciences (India)

    ions, consistent with the idea of scattering off a random collection of stationary scattering points, while it yields a constant for slow ions, consistent with the idea of collisions experienced by a stationary particle in an ideal gas. For this treatment, o has been assumed independent of position. Pramana – J. Phys., Vol. 55, Nos 5 ...

  14. A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection

    Directory of Open Access Journals (Sweden)

    Chia-Hua Cheng

    2017-08-01

    Full Text Available In today’s rapidly changing economy, technology companies have to make decisions on investment in research and development (R&D) projects on a routine basis, with such decisions having a direct impact on the company’s profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors, such as resource limitations and differences in assessment, taking into account both qualitative and quantitative criteria. Often, differences in perception among the various stakeholders hinder the attainment of a consensus of opinion and coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of the complex criteria, taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause-and-effect relations among the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combination approach is then used to assist an international brand-name company to prioritize projects and make project decisions that will maximize returns and ensure sustainability for the company.
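
    The reduction from n(n-1)/2 to n-1 pairwise comparisons in CFPR follows from additive transitivity: given only the judgments between consecutive alternatives, the remaining preference values are implied. A minimal sketch (Python; the normalization that CFPR applies when chained values fall outside [0, 1] is omitted here):

```python
def cfpr_matrix(adjacent):
    """Complete a fuzzy preference relation from n-1 adjacent judgments.

    adjacent[i] = p(i, i+1) in [0, 1]. Additive transitivity
    p_ik = p_ij + p_jk - 0.5 fills in every other entry, so only n-1
    pairwise comparisons need to be elicited instead of n(n-1)/2.
    """
    n = len(adjacent) + 1
    P = [[0.5] * n for _ in range(n)]
    for i in range(n):
        for k in range(i + 1, n):
            # Chain the adjacent judgments i -> i+1 -> ... -> k
            p = 0.5
            for j in range(i, k):
                p += adjacent[j] - 0.5
            P[i][k] = p
            P[k][i] = 1.0 - p  # additive reciprocity
    return P

# Decision-maker mildly prefers alternative 0 over 1, 1 over 2, 2 over 3
P = cfpr_matrix([0.7, 0.6, 0.55])
```

    The completed matrix is consistent by construction, which is why CFPR sidesteps the consistency-ratio checks required by conventional pairwise comparison matrices.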

  15. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    Energy Technology Data Exchange (ETDEWEB)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)

    2014-02-14

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich’s mean and Gaussian bending moduli k{sub c} and k{sup ¯} and the preferred monolayer curvature J{sub 0}{sup m}, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and phosphatidylethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k{sub c} and the area compression modulus k{sub A} are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k{sup ¯} and J{sub 0}{sup m} can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k{sup ¯} and J{sub 0}{sup m} change sign with relevant parameter changes. Although typically k{sup ¯}<0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J{sub 0}{sup m}≫0, especially at low ionic
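
    The packing-parameter argument invoked above can be made concrete with a small sketch. The classification below uses Israelachvili's p = v/(a0*lc); the numerical volumes, areas, and category boundaries are textbook-style illustrative values, not results from this paper (Python):

```python
def packing_parameter(v_tail, a_head, l_tail):
    """Israelachvili's surfactant packing parameter p = v / (a0 * lc)."""
    return v_tail / (a_head * l_tail)

def preferred_aggregate(p):
    """Crude geometry classification (boundary values are approximate)."""
    if p < 1.0 / 3.0:
        return "spherical micelle"
    if p < 0.5:
        return "cylindrical micelle"
    if p <= 1.0:
        return "bilayer"
    return "inverted phase (e.g. inverse hexagonal)"

# Double-tailed PC-like lipid: near-cylindrical shape -> bilayer
shape_pc = preferred_aggregate(packing_parameter(1.0, 1.05, 1.0))
# Small PE head with long tails: inverted cone -> negative curvature
shape_pe = preferred_aggregate(packing_parameter(1.2, 0.9, 1.0))
```

    This mirrors the trend reported above: shrinking the head group or lengthening the tails pushes p past 1 and favours inverse hexagonal topologies over bilayers.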

  16. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  17. Self-consistent tight-binding model of B and N doping in graphene

    DEFF Research Database (Denmark)

    Pedersen, Thomas Garm; Pedersen, Jesper Goor

    2013-01-01

    Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization. The impurity potential depends sensitively on the impurity occupancy, leading to a self-consistency requirement. We solve this problem using the impurity Green's function and determine the self-consistent local density of states at the impurity site and, thereby, identify acceptor and donor energy resonances.

  18. Development of hydrogeological modelling tools based on NAMMU

    Energy Technology Data Exchange (ETDEWEB)

    Marsic, N. [Kemakta Konsult AB, Stockholm (Sweden); Hartley, L.; Jackson, P.; Poole, M. [AEA Technology, Harwell (United Kingdom); Morvik, A. [Bergen Software Services International AS, Bergen (Norway)

    2001-09-01

    A number of relatively sophisticated hydrogeological models were developed within the SR 97 project to handle issues such as nesting of scales and the effects of salinity. However, these issues and others are considered of significant importance and generality to warrant further development of the hydrogeological methodology. Several such developments based on the NAMMU package are reported here: - Embedded grid: nesting of the regional- and site-scale models within the same numerical model has given greater consistency in the structural model representation and in the flow between scales. Since there is a continuous representation of the regional- and site-scales the modelling of pathways from the repository no longer has to be contained wholly by the site-scale region. This allows greater choice in the size of the site-scale. - Implicit Fracture Zones (IFZ): this method of incorporating the structural model is very efficient and allows changes to either the mesh or fracture zones to be implemented quickly. It also supports great flexibility in the properties of the structures and rock mass. - Stochastic fractures: new functionality has been added to IFZ to allow arbitrary combinations of stochastic or deterministic fracture zones with the rock-mass. Whether a fracture zone is modelled deterministically or stochastically its statistical properties can be defined independently. - Stochastic modelling: efficient methods for Monte-Carlo simulation of stochastic permeability fields have been implemented and tested on SKB's computers. - Visualisation: the visualisation tool Avizier for NAMMU has been enhanced such that it is efficient for checking models and presentation. - PROPER interface: NAMMU outputs pathlines in PROPER format so that it can be included in PA workflow. The developed methods are illustrated by application to stochastic nested modelling of the Beberg site using data from SR 97. The model properties were in accordance with the regional- and site
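
    The Monte-Carlo simulation of stochastic permeability fields mentioned above can be sketched as follows. The lognormal statistics, the moving-average correlation model, and all parameter values are illustrative assumptions, not NAMMU's algorithm (Python):

```python
import numpy as np

rng = np.random.default_rng(42)

def lognormal_permeability_field(n, mean_log10_k=-14.0, sigma_log10=1.0,
                                 corr_cells=4):
    """One Monte Carlo realization of a correlated lognormal permeability field.

    White noise is smoothed with a moving average to impose a crude spatial
    correlation, renormalized, and exponentiated so that
    log10(k) ~ N(mean_log10_k, sigma_log10^2) along the 1-D grid.
    """
    noise = rng.standard_normal(n + corr_cells - 1)
    kernel = np.ones(corr_cells) / corr_cells
    smooth = np.convolve(noise, kernel, mode="valid")     # length n
    smooth = (smooth - smooth.mean()) / smooth.std()      # renormalize
    return 10.0 ** (mean_log10_k + sigma_log10 * smooth)

# Ensemble of realizations for Monte Carlo flow simulation
fields = [lognormal_permeability_field(200) for _ in range(50)]
```

    In practice each realization would be handed to the flow solver, and pathline or flux statistics would be accumulated over the ensemble.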

  19. Development of hydrogeological modelling tools based on NAMMU

    International Nuclear Information System (INIS)

    Marsic, N.; Hartley, L.; Jackson, P.; Poole, M.; Morvik, A.

    2001-09-01

    A number of relatively sophisticated hydrogeological models were developed within the SR 97 project to handle issues such as nesting of scales and the effects of salinity. However, these issues and others are considered of significant importance and generality to warrant further development of the hydrogeological methodology. Several such developments based on the NAMMU package are reported here: - Embedded grid: nesting of the regional- and site-scale models within the same numerical model has given greater consistency in the structural model representation and in the flow between scales. Since there is a continuous representation of the regional- and site-scales the modelling of pathways from the repository no longer has to be contained wholly by the site-scale region. This allows greater choice in the size of the site-scale. - Implicit Fracture Zones (IFZ): this method of incorporating the structural model is very efficient and allows changes to either the mesh or fracture zones to be implemented quickly. It also supports great flexibility in the properties of the structures and rock mass. - Stochastic fractures: new functionality has been added to IFZ to allow arbitrary combinations of stochastic or deterministic fracture zones with the rock-mass. Whether a fracture zone is modelled deterministically or stochastically its statistical properties can be defined independently. - Stochastic modelling: efficient methods for Monte-Carlo simulation of stochastic permeability fields have been implemented and tested on SKB's computers. - Visualisation: the visualisation tool Avizier for NAMMU has been enhanced such that it is efficient for checking models and presentation. - PROPER interface: NAMMU outputs pathlines in PROPER format so that it can be included in PA workflow. The developed methods are illustrated by application to stochastic nested modelling of the Beberg site using data from SR 97. The model properties were in accordance with the regional- and site

  20. Shape: A 3D Modeling Tool for Astrophysics.

    Science.gov (United States)

    Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus

    2011-04-01

    We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.

  1. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2016-01-01

    We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
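
    The two-step adaptive Lasso idea can be sketched as follows: first-stage OLS estimates define penalty weights, and a weighted Lasso then shrinks small coefficients exactly to zero while leaving large ones nearly unpenalized. This is a plain coordinate-descent illustration with simulated data (Python/NumPy), not the paper's estimator, tuning, or autoregressive setting:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam=0.1, gamma=1.0, iters=500):
    """Two-step adaptive Lasso sketch.

    1. First-stage OLS gives weights w_j = 1 / |b_ols_j|^gamma.
    2. The weighted Lasso is solved by cyclic coordinate descent on the
       rescaled columns X_j / w_j, then mapped back to original scale.
    """
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    scale = np.abs(b_ols) ** gamma      # = 1 / w_j
    Xs = X * scale                      # rescaled design
    n, p = Xs.shape
    b = np.zeros(p)
    col_sq = (Xs**2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            r = y - Xs @ b + Xs[:, j] * b[j]   # partial residual
            b[j] = soft_threshold(Xs[:, j] @ r, n * lam) / col_sq[j]
    return b * scale                    # back to original coordinates

# Sparse truth: only the first two of five predictors matter
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
beta_hat = adaptive_lasso(X, y, lam=0.05)
```

    The oracle property discussed in the abstract is exactly this behaviour in the limit: irrelevant coefficients are set to zero with probability approaching one, while the relevant ones are estimated as efficiently as if the true sparsity pattern were known.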

  2. Slab2 - Updated Subduction Zone Geometries and Modeling Tools

    Science.gov (United States)

    Moore, G.; Hayes, G. P.; Portner, D. E.; Furtney, M.; Flamme, H. E.; Hearne, M. G.

    2017-12-01

    The U.S. Geological Survey database of global subduction zone geometries (Slab1.0) is a highly utilized dataset that has been applied to a wide range of geophysical problems. In 2017, these models were improved and expanded upon as part of the Slab2 modeling effort. With a new data-driven approach that can be applied to a broader range of tectonic settings and geophysical data sets, we have generated a model set that will serve as a more comprehensive, reliable, and reproducible resource for three-dimensional slab geometries at all of the world's convergent margins. The newly developed framework of Slab2 is guided by: (1) a large integrated dataset, consisting of a variety of geophysical sources (e.g., earthquake hypocenters, moment tensors, active-source seismic survey images of the shallow slab, tomography models, receiver functions, bathymetry, trench ages, and sediment thickness information); (2) a dynamic filtering scheme aimed at constraining incorporated seismicity to only slab-related events; (3) a 3-D data interpolation approach which captures both high-resolution shallow geometries and instances of slab rollback and overlap at depth; and (4) an algorithm which incorporates uncertainties of contributing datasets to identify the most probable surface depth over the extent of each subduction zone. Further layers will also be added to the base geometry dataset, such as historic moment release, earthquake tectonic provenance, and interface coupling. Along with access to several queryable data formats, all components have been wrapped into an open-source library in Python, such that suites of updated models can be released as further data become available. This presentation will discuss the extent of Slab2 development, as well as the current availability of the model and modeling tools.

  3. Self-Consistent Approach to Global Charge Neutrality in Electrokinetics: A Surface Potential Trap Model

    Directory of Open Access Journals (Sweden)

    Li Wan

    2014-03-01

    In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so that the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the "charge regulation" behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied
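For reference, the classical Poisson-Boltzmann equation that the reformulated CCPB reduces to, written here in its standard form for a symmetric 1:1 electrolyte (a textbook expression, not one taken from the paper), reads

```latex
\nabla^{2}\psi = \frac{2 e n_{0}}{\varepsilon}\,\sinh\!\left(\frac{e\psi}{k_{B}T}\right),
```

where \(\psi\) is the electrostatic potential, \(n_{0}\) the bulk ion concentration, \(\varepsilon\) the permittivity, and \(k_{B}T\) the thermal energy. The Debye length governing the screening-layer overlap discussed in the abstract is \(\lambda_{D} = \sqrt{\varepsilon k_{B} T / (2 e^{2} n_{0})}\).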

  4. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    Science.gov (United States)

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  5. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Directory of Open Access Journals (Sweden)

    Matthew B Biggs

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  6. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Science.gov (United States)

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.
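The constraint-based half of such a hybrid model boils down to a linear program (flux balance analysis, FBA). The toy two-metabolite, three-reaction network below is a hypothetical illustration solved with scipy, in no way the actual P. aeruginosa genome-scale model; it only shows the core computation that a coupling tool like MatNet would invoke for each agent's growth update:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: uptake (v1), conversion (v2), growth (v3)
S = np.array([[1, -1,  0],    # metabolite A: produced by v1, consumed by v2
              [0,  1, -1]])   # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, None), (0, None)]   # uptake capacity caps v1 at 10

# FBA: maximize growth flux v3 subject to steady state S v = 0
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x)   # optimal flux distribution
```

At the optimum every flux equals the uptake bound, so growth is uptake-limited, a cartoon of the oxygen-limited behaviour the abstract describes; in the hybrid scheme the bound itself would come from the agent's local environment.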

  7. New tools for aquatic habitat modeling

    Science.gov (United States)

    D. Tonina; J. A. McKean; C. Tang; P. Goodwin

    2011-01-01

    Modeling of aquatic microhabitat in streams has been typically done over short channel reaches using one-dimensional simulations, partly because of a lack of high-resolution subaqueous topographic data to better define model boundary conditions. The Experimental Advanced Airborne Research Lidar (EAARL) is an airborne aquatic-terrestrial sensor that allows simultaneous...

  8. Jack Human Modelling Tool: A Review

    Science.gov (United States)

    2010-01-01

    design and evaluation [8] and evolved into the Computerised Biomechanical Man Model (Combiman), shown in Figure 2. Combiman was developed at the...unrealistic arrangement of tetrahedra (Figure 7) to a highly realistic human model based on current anthropometric, anatomical and biomechanical data...has long legs and a short torso may find it difficult to adjust the seat and rudder pedals to achieve the required over the nose vision, reach to

  9. A comparison of tools for modeling freshwater ecosystem services.

    Science.gov (United States)

    Vigerstol, Kari L; Aukema, Juliann E

    2011-10-01

    Interest in ecosystem services has grown tremendously among a wide range of sectors, including government agencies, NGOs and the business community. Ecosystem services entailing freshwater (e.g. flood control, the provision of hydropower, and water supply), as well as carbon storage and sequestration, have received the greatest attention in both scientific and on-the-ground applications. Given the newness of the field and the variety of tools for predicting water-based services, it is difficult to know which tools to use for different questions. There are two types of freshwater-related tools: traditional hydrologic tools and newer ecosystem services tools. Here we review two of the most prominent tools of each type and their possible applications. In particular, we compare the data requirements, ease of use, questions addressed, and interpretability of results among the models. We discuss the strengths, challenges and most appropriate applications of the different models. Traditional hydrological tools provide more detail whereas ecosystem services tools tend to be more accessible to non-experts and can provide a good general picture of these ecosystem services. We also suggest gaps in the modeling toolbox that would provide the greatest advances by improving existing tools.

  10. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    Science.gov (United States)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
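The information-criterion side of this argument is easy to sketch: fit candidate models of increasing complexity and score each one with AIC, which trades goodness of fit against parameter count. The toy example below uses polynomial regressions on synthetic data, not the water balance model structures of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x + 0.05 * rng.standard_normal(60)   # the "true" process is linear

def aic(rss, n, k):
    # Gaussian AIC up to an additive constant: n * ln(RSS/n) + 2k
    return n * np.log(rss / n) + 2 * k

scores = {}
for degree in range(6):                              # candidate complexities
    coef = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coef, x) - y) ** 2)
    scores[degree] = aic(rss, len(x), degree + 1)

best = min(scores, key=scores.get)                   # parsimonious winner
print(best)
```

The point of the paper is that such a likelihood-based score alone is not enough: the selected structure must additionally pass hydrologic-consistency checks (e.g. reproducing flow signatures) before it is accepted.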

  11. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics

    NARCIS (Netherlands)

    Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.

    2012-01-01

    Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free slip)

  12. Studying the Consistency between and within the Student Mental Models for Atomic Structure

    Science.gov (United States)

    Zarkadis, Nikolaos; Papageorgiou, George; Stamovlasis, Dimitrios

    2017-01-01

    Science education research has revealed a number of student mental models for atomic structure, among which, the one based on Bohr's model seems to be the most dominant. The aim of the current study is to investigate the coherence of these models when students apply them for the explanation of a variety of situations. For this purpose, a set of…

  13. Pedagogical Approaches Used by Faculty in Holland's Model Environments: The Role of Environmental Consistency

    Science.gov (United States)

    Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.

    2009-01-01

    This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…

  14. Aligning building information model tools and construction management methods

    NARCIS (Netherlands)

    Hartmann, Timo; van Meerveld, H.J.; Vossebeld, N.; Adriaanse, Adriaan Maria

    2012-01-01

    Few empirical studies exist that can explain how different Building Information Model (BIM) based tool implementation strategies work in practical contexts. To help overcoming this gap, this paper describes the implementation of two BIM based tools, the first, to support the activities at an

  15. Scratch as a Computational Modelling Tool for Teaching Physics

    Science.gov (United States)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-01-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling…

  16. Advanced REACH tool: A Bayesian model for occupational exposure assessment

    NARCIS (Netherlands)

    McNally, K.; Warren, N.; Fransman, W.; Entink, R.K.; Schinkel, J.; Van Tongeren, M.; Cherrie, J.W.; Kromhout, H.; Schneider, T.; Tielemans, E.

    2014-01-01

    This paper describes a Bayesian model for the assessment of inhalation exposures in an occupational setting; the methodology underpins a freely available web-based application for exposure assessment, the Advanced REACH Tool (ART). The ART is a higher tier exposure tool that combines disparate

  17. Agent Based Modeling as an Educational Tool

    Science.gov (United States)

    Fuller, J. H.; Johnson, R.; Castillo, V.

    2012-12-01

    Motivation is a key element in high school education. One way to improve motivation and provide content, while helping address critical thinking and problem solving skills, is to have students build and study agent based models in the classroom. This activity visually connects concepts with their applied mathematical representation. "Engaging students in constructing models may provide a bridge between frequently disconnected conceptual and mathematical forms of knowledge." (Levy and Wilensky, 2011) We wanted to discover the feasibility of implementing a model based curriculum in the classroom given current and anticipated core and content standards. (Figure captions: simulation using California GIS data; simulation of high school student lunch popularity using an aerial photograph on top of a terrain value map.)

  18. Models and Modelling Tools for Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    2016-01-01

    The design, development and reliability of a chemical product and the process to manufacture it, need to be consistent with the end-use characteristics of the desired product. One of the common ways to match the desired product-process characteristics is through trial and error based experiments......-based framework is that in the design, development and/or manufacturing of a chemical product-process, the knowledge of the applied phenomena together with the product-process design details can be provided with diverse degrees of abstractions and details. This would allow the experimental resources...... to be employed for validation and fine-tuning of the solutions from the model-based framework, thereby, removing the need for trial and error experimental steps. Also, questions related to economic feasibility, operability and sustainability, among others, can be considered in the early stages of design. However...

  19. Ensuring consistency and persistence to the Quality Information Model - The role of the GeoViQua Broker

    Science.gov (United States)

    Bigagli, Lorenzo; Papeschi, Fabrizio; Nativi, Stefano; Bastin, Lucy; Masó, Joan

    2013-04-01

    GeoViQua (QUAlity aware VIsualisation for the Global Earth Observation System of Systems) is an FP7 project aiming at complementing the Global Earth Observation System of Systems (GEOSS) with rigorous data quality specifications and quality-aware capabilities, in order to improve reliability in scientific studies and policy decision-making. GeoViQua's main scientific and technical objective is to enhance the GEOSS Common Infrastructure (GCI), providing the user community with innovative quality-aware search and visualization tools, which will be integrated in the GEOPortal, as well as made available to other end-user interfaces. To this end, GeoViQua will promote the extension of the current standard metadata for geographic information with accurate and expressive quality indicators. The project will also contribute to the definition of a quality label, the GEOLabel, reflecting scientific relevance, quality, acceptance and societal needs. The concept of Quality Information is very broad. When talking about the quality of a product, this is not limited to geophysical quality but also includes concepts like mission quality (e.g. data coverage with respect to planning). In general, it provides an indication of the overall fitness for use of a specific type of product. Employing and extending several ISO standards such as 19115, 19157 and 19139, a common set of data quality indicators has been selected to be used within the project. The resulting work, in the form of a data model, is expressed in XML Schema Language and encoded in XML. Quality information can be stated both by data producers and by data users, actually resulting in two conceptually distinct data models, the Producer Quality model and the User Quality model (or User Feedback model). A very important issue concerns the association between the quality reports and the affected products that are the target of the report. This association is usually achieved by means of a Product Identifier (PID), but actually just

  20. Macroscopic self-consistent model for external-reflection near-field microscopy

    International Nuclear Information System (INIS)

    Berntsen, S.; Bozhevolnaya, E.; Bozhevolnyi, S.

    1993-01-01

    The self-consistent macroscopic approach based on the Maxwell equations in two-dimensional geometry is developed to describe tip-surface interaction in external-reflection near-field microscopy. The problem is reduced to a single one-dimensional integral equation in terms of the Fourier components of the field at the plane of the sample surface. This equation is extended to take into account a pointlike scatterer placed on the sample surface. The power of light propagating toward the detector as the fiber mode is expressed by using the self-consistent field at the tip surface. Numerical results for trapezium-shaped tips are presented. The authors show that the sharper tip and the more confined fiber mode result in better resolution of the near-field microscope. Moreover, it is found that the tip-surface distance should not be too small so that better resolution is ensured. 14 refs., 10 figs

  1. The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture

    OpenAIRE

    Parna Kazemian

    2014-01-01

    The identification of the climate of a particular place and the analysis of the climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general exam...

  2. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    OpenAIRE

    Estève , D. ,; Sarazin , Y.; Garbet , X.; Grandgirard , V.; Breton , S. ,; Donnel , P. ,; Asahi , Y. ,; Bourdelle , C.; Dif-Pradalier , G; Ehrlacher , C.; Emeriau , C.; Ghendrih , Ph; Gillot , C.; Latu , G.; Passeron , C.

    2018-01-01

    Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code [V. Grandgirard et al., Comp. Phys. Commun. 207, 35 (2016)]. A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime likely relevant for tungsten, the standard expression of the neoclassical impurity flux is shown t...

  3. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    Science.gov (United States)

    Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.

    2018-03-01

    Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code (Grandgirard et al 2016 Comput. Phys. Commun. 207 35). A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime that is probably relevant for tungsten, the standard expression for the neoclassical impurity flux is shown to be recovered from gyrokinetics with the employed collision operator. Purely neoclassical simulations of deuterium plasma with trace impurities of helium, carbon and tungsten lead to impurity diffusion coefficients, inward pinch velocities due to density peaking, and thermo-diffusion terms which quantitatively agree with neoclassical predictions and NEO simulations (Belli et al 2012 Plasma Phys. Control. Fusion 54 015015). The thermal screening factor appears to be less than predicted analytically in the Pfirsch-Schlüter regime, which can be detrimental to fusion performance. Finally, self-consistent nonlinear simulations have revealed that the tungsten impurity flux is not the sum of turbulent and neoclassical fluxes computed separately, as is usually assumed. The synergy partly results from the turbulence-driven in-out poloidal asymmetry of tungsten density. This result suggests the need for self-consistent simulations of impurity transport, i.e. including both turbulence and neoclassical physics, in view of quantitative predictions for ITER.

  4. Predictions of titanium alloy properties using thermodynamic modeling tools

    Science.gov (United States)

    Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.

    2005-12-01

    Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.

  5. CONSISTENT USE OF THE KALMAN FILTER IN CHEMICAL TRANSPORT MODELS (CTMS) FOR DEDUCING EMISSIONS

    Science.gov (United States)

    Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...
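The inverse-modeling idea in this record reduces, in its simplest form, to a scalar Kalman filter: an unknown emission rate is estimated recursively from noisy concentration observations through a linear "observable" relationship. The sensitivity h, the noise level, and the prior below are all invented for illustration, not taken from the CTM study:

```python
import numpy as np

rng = np.random.default_rng(1)
true_E = 4.0                        # hypothetical emission rate to be recovered
h = 0.8                             # assumed sensitivity: concentration = h * E + noise
obs = h * true_E + 0.2 * rng.standard_normal(50)

E, P = 0.0, 10.0                    # prior mean and variance of the emission estimate
R = 0.2 ** 2                        # observation-error variance
for z in obs:
    K = P * h / (h * h * P + R)     # Kalman gain
    E = E + K * (z - h * E)         # measurement update of the emission estimate
    P = (1.0 - K * h) * P           # updated uncertainty

print(round(E, 2))
```

In the full application the scalar h is replaced by sensitivities computed with the CTM, but the recursion has exactly this structure.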

  6. A tool for model based diagnostics of the AGS Booster

    International Nuclear Information System (INIS)

    Luccio, A.

    1993-01-01

    A model-based algorithmic tool was developed to search for lattice errors by a systematic analysis of orbit data in the AGS Booster synchrotron. The algorithm employs transfer matrices calculated with MAD between points in the ring. Iterative model fitting of the data allows one to find and eventually correct magnet displacements and angles or field errors. The tool, implemented on an HP-Apollo workstation system, has proved very general and of immediate physical interpretation.
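Transfer matrices of the kind this tool manipulates are simple to sketch: each lattice element maps the transverse phase-space vector (x, x') linearly, and a beamline segment is the matrix product of its elements. The drift lengths and focal length below are arbitrary toy values, MAD-style linear optics in miniature rather than the Booster lattice:

```python
import numpy as np

def drift(L):
    # field-free drift of length L acting on the (x, x') vector
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    # thin-lens quadrupole of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# compose a drift-quad-drift segment (arbitrary toy parameters)
M = drift(2.0) @ thin_quad(5.0) @ drift(2.0)

x0 = np.array([0.001, 0.0])   # 1 mm offset, zero slope
print(M @ x0)                 # orbit at the end of the segment
```

Fitting the model to measured orbits then amounts to adjusting element parameters (kicks, gradients, misalignments) until the matrix-propagated orbit matches the data.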

  7. Fluid Survival Tool: A Model Checker for Hybrid Petri Nets

    NARCIS (Netherlands)

    Postema, Björn Frits; Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Ghasemieh, Hamed

    2014-01-01

    Recently, algorithms for model checking Stochastic Time Logic (STL) on Hybrid Petri nets with a single general one-shot transition (HPNG) have been introduced. This paper presents a tool for model checking HPNG models against STL formulas. A graphical user interface (GUI) not only helps to

  8. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied for testing modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have been first used to identify the parts of the model that can be really tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement has been finally obtained by optimisation techniques. This example of application shows how model parameters space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  9. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics

    Directory of Open Access Journals (Sweden)

    M. V. Chertova

    2012-10-01

    Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free slip) sidewalls while comparing results for two model aspect ratios of 3:1 and 6:1. Slab buoyancy driven subduction with open boundaries and free plates immediately develops into strong rollback with high trench retreat velocities and predominantly laminar asthenospheric flow. In contrast, free-slip sidewalls prove highly restrictive on subduction rollback evolution, unless the lithosphere plates are allowed to move away from the sidewalls. This initiates return flows pushing both plates toward the subduction zone, speeding up subduction. Increasing the aspect ratio to 6:1 does not change the overall flow pattern when using open sidewalls but only the flow magnitude. In contrast, for free-slip boundaries, the slab evolution does change with respect to the 3:1 aspect ratio model and does not resemble the evolution obtained with open boundaries using the 6:1 aspect ratio. For models with open side boundaries, we could develop a flow-speed scaling based on energy dissipation arguments to convert between flow fields of different model aspect ratios. We have also investigated incorporating the effect of far-field generated lithosphere stress in our open boundary models. By applying realistic normal stress conditions to the strong part of the overriding plate at the sidewalls, we can transfer intraplate stress to influence subduction dynamics varying from slab roll-back, stationary subduction, to advancing subduction. The relative independence of the flow field on model aspect ratio allows for a smaller modelling domain. Open boundaries allow for subduction to evolve freely and avoid the adverse effects (e.g. forced return flows) of free-slip boundaries. We

  10. Transparent Model Transformation: Turning Your Favourite Model Editor into a Transformation Tool

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2015-01-01

    Current model transformation languages are supported by dedicated editors, often closely coupled to a single execution engine. We introduce Transparent Model Transformation, a paradigm enabling modelers to specify transformations using a familiar tool: their model editor. We also present VMTL, th...... model transformation tool sharing the model editor’s benefits, transparently....

  11. A simple model of the plasma deflagration gun including self-consistent electric and magnetic fields

    International Nuclear Information System (INIS)

    Enloe, C.L.; Reinovsky, R.E.

    1985-01-01

    At the Air Force Weapons Laboratory, interest has continued for some time in energetic plasma injectors. A possible scheme for such a device is the plasma deflagration gun. When the question arose whether it would be possible to scale a deflagration gun to the multi-megajoule energy level, it became clear that a scaling law which described the gun as a circuit element and allowed one to confidently scale gun parameters would be required. The authors sought to develop a scaling law which self-consistently described the current, magnetic field, and velocity profiles in the gun. They based this scaling law on plasma parameters exclusively, abandoning the fluid approach.

  12. Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

    DEFF Research Database (Denmark)

    Cartar, William; Mørk, Jesper; Hughes, Stephen

    2017-01-01

    -level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping is added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) is contained self-consistently within......-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also show how the average radiative decay rate decreases as a function of cavity size. In addition, we investigate the role of structural disorder...

  13. Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers

    OpenAIRE

    ACHOUR-RENAULT, Nadia; CHATZIGEORGIOU, George; MERAGHNI, Fodil; CHEMISKY, Yves; FITOUSSI, Joseph

    2015-01-01

    In this work, the phenomenological viscoplastic DSGZ model[Duan, Y., Saigal, A., Greif, R., Zimmerman, M. A., 2001. A Uniform Phenomenological Constitutive Model for Glassy and Semicrystalline Polymers. Polymer Engineering and Science 41 (8), 1322-1328], developed for glassy or semi-crystalline polymers, is numerically implemented in a three dimensional framework, following an implicit formulation. The computational methodology is based on the radial return mapping algorithm. This implicit fo...

  14. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    Science.gov (United States)

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.

    2009-01-01

    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  15. Multidisciplinary Modelling Tools for Power Electronic Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad

    in reliability assessment of power modules, a three-dimensional lumped thermal network is proposed to be used for fast, accurate and detailed temperature estimation of power module in dynamic operation and different boundary conditions. Since an important issue in the reliability of power electronics...... environment to be used for optimization of cooling system layout with respect to thermal resistance and pressure drop reductions. Finally extraction of electrical parasitics in the multi-chip power modules will be investigated. As the switching frequency of power devices increases, the size of passive...... components are reduced considerably that leads to increase of power density and cost reduction. However, electrical parasitics become more challenging with increasing the switching frequency and paralleled chips in the integrated and denser packages. Therefore, electrical parasitic models are analyzed based...

  16. Modeling Tools for Drilling, Reservoir Navigation, and Formation Evaluation

    Directory of Open Access Journals (Sweden)

    Sushant Dutta

    2012-06-01

The oil and gas industry routinely uses borehole tools for measuring or logging rock and fluid properties of geologic formations to locate hydrocarbons and maximize their production. Pore fluids in formations of interest are usually hydrocarbons or water. Resistivity logging is based on the fact that oil and gas have a substantially higher resistivity than water. The first resistivity log was acquired in 1927, and resistivity logging is still the foremost measurement used for drilling and evaluation. However, the acquisition and interpretation of resistivity logging data has grown in complexity over the years. Resistivity logging tools operate in a wide range of frequencies (from DC to GHz) and encounter extremely high (several orders of magnitude) conductivity contrast between the metal mandrel of the tool and the geologic formation. Typical challenges include arbitrary angles of tool inclination, full tensor electric and magnetic field measurements, and interpretation of complicated anisotropic formation properties. These challenges combine to form some of the most intractable computational electromagnetic problems in the world. Reliable, fast, and convenient numerical modeling of logging tool responses is critical for tool design, sensor optimization, virtual prototyping, and log data inversion. This spectrum of applications necessitates both depth and breadth of modeling software—from blazing fast one-dimensional (1-D) modeling codes to advanced three-dimensional (3-D) modeling software, and from in-house developed codes to commercial modeling packages. In this paper, with the help of several examples, we demonstrate our approach for using different modeling software to address different drilling and evaluation applications. In one example, fast 1-D modeling provides proactive geosteering information from a deep-reading azimuthal propagation resistivity measurement. In the second example, a 3-D model with multiple vertical resistive fractures

  17. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.......Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...

  18. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

    Directory of Open Access Journals (Sweden)

    R. P. Gerber

    2013-03-01

Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the class of models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44 as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.
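As a side note on the comparison metric, the mean absolute error in ln-units used in the record above is straightforward to compute. A minimal sketch (with made-up activity-coefficient data, not the paper's 2236-mixture set):

```python
import math

def mae_ln_units(gamma_exp, gamma_pred):
    """Mean absolute error between predicted and experimental
    infinite-dilution activity coefficients, in ln-units."""
    if len(gamma_exp) != len(gamma_pred):
        raise ValueError("mismatched data lengths")
    return sum(abs(math.log(p) - math.log(e))
               for e, p in zip(gamma_exp, gamma_pred)) / len(gamma_exp)

# Made-up experimental vs. predicted gamma-infinity values
gamma_exp = [2.0, 5.0, 10.0]
gamma_pred = [2.2, 4.5, 11.0]
print(round(mae_ln_units(gamma_exp, gamma_pred), 3))  # → 0.099
```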

  19. Thermodynamically self-consistent theory for the Blume-Capel model.

    Science.gov (United States)

    Grollau, S; Kierlik, E; Rosinberg, M L; Tarjus, G

    2001-04-01

    We use a self-consistent Ornstein-Zernike approximation to study the Blume-Capel ferromagnet on three-dimensional lattices. The correlation functions and the thermodynamics are obtained from the solution of two coupled partial differential equations. The theory provides a comprehensive and accurate description of the phase diagram in all regions, including the wing boundaries in a nonzero magnetic field. In particular, the coordinates of the tricritical point are in very good agreement with the best estimates from simulation or series expansion. Numerical and analytical analysis strongly suggest that the theory predicts a universal Ising-like critical behavior along the lambda line and the wing critical lines, and a tricritical behavior governed by mean-field exponents.
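For readers unfamiliar with the model itself, the Blume-Capel Hamiltonian is easy to evaluate directly. A minimal sketch for a 1-D chain with periodic boundaries (illustrating the energy function only, not the paper's self-consistent Ornstein-Zernike solution method):

```python
def bc_energy(spins, J=1.0, D=0.0, h=0.0):
    """Energy of a 1-D Blume-Capel chain with periodic boundaries:
    H = -J * sum_i s_i s_{i+1} + D * sum_i s_i^2 - h * sum_i s_i,
    with spins s_i in {-1, 0, +1}."""
    n = len(spins)
    bond = sum(spins[i] * spins[(i + 1) % n] for i in range(n))
    return -J * bond + D * sum(s * s for s in spins) - h * sum(spins)

print(bc_energy([1, 1, 1, 1]))  # ferromagnetic ground state at D=0, h=0 → -4.0
```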

  20. The standard lateral gene transfer model is statistically consistent for pectinate four-taxon trees

    DEFF Research Database (Denmark)

    Sand, Andreas; Steel, Mike

    2013-01-01

Evolutionary events such as incomplete lineage sorting and lateral gene transfers constitute major problems for inferring species trees from gene trees, as they can sometimes lead to gene trees which conflict with the underlying species tree. One particularly simple and efficient way to infer...... species trees from gene trees under such conditions is to combine three-taxon analyses for several genes using a majority vote approach. For incomplete lineage sorting this method is known to be statistically consistent; however, for lateral gene transfers it was recently shown that a zone...... of inconsistency exists for a specific four-taxon tree topology, and it was posed as an open question whether inconsistencies could exist for other four-taxon tree topologies. In this letter we analyze all remaining four-taxon topologies and show that no other inconsistencies exist....
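The majority-vote combination of three-taxon analyses described above can be sketched in a few lines (the triplet encoding and the gene data are hypothetical illustrations):

```python
from collections import Counter

def majority_triplet(gene_triplets):
    """Combine three-taxon (rooted triplet) analyses across genes by
    majority vote: the most frequent triplet topology is taken as the
    species-tree triplet for those three taxa."""
    counts = Counter(gene_triplets)
    topology, _ = counts.most_common(1)[0]
    return topology

# Hypothetical gene-tree triplets on taxa {A, B, C}; "AB|C" means
# A and B are sisters relative to C.
genes = ["AB|C", "AB|C", "AC|B", "AB|C", "BC|A"]
print(majority_triplet(genes))  # → AB|C
```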

  1. A self-consistent model for low-high transitions in tokamaks

    International Nuclear Information System (INIS)

    Guzdar, P.N.; Hassam, A.B.

    1996-01-01

A system of equations that couples the rapidly varying fluctuations of resistive ballooning modes to the slowly varying transport of the density, vorticity and parallel momentum has been derived and solved numerically. Only a single toroidal mode number is retained in the present work. The low-mode (L-mode) phase consists of strong poloidally asymmetric particle transport driven by resistive ballooning modes, with larger flux on the outboard side compared to the inboard side. With the onset of shear flow driven by a combination of toroidal drive mechanisms as well as the Reynolds stress, the fluctuations associated with the resistive ballooning modes are attenuated, leading to a strong reduction in the particle transport. The drop in the particle transport results in steepening of the density profile, leading to the high-mode (H-mode). copyright 1996 American Institute of Physics

  2. Coulomb displacement energies in relativistic and non-relativistic self-consistent models

    International Nuclear Information System (INIS)

    Marcos, S.; Savushkin, L.N.; Giai, N. van.

    1992-03-01

    Coulomb displacement energies in mirror nuclei are comparatively analyzed in Dirac-Hartree and Skyrme-Hartree-Fock models. Using a non-linear effective Lagrangian fitted on ground state properties of finite nuclei, it is found that the predictions of relativistic models are lower than those of Hartree-Fock calculations with Skyrme force. The main sources of reduction are the kinetic energy and the Coulomb-nuclear interference potential. The discrepancy with the data is larger than in the Skyrme-Hartree-Fock case. (author) 24 refs., 3 tabs

  3. Is the thermal-spike model consistent with experimentally determined electron temperature?

    International Nuclear Information System (INIS)

    Ajryan, Eh.A.; Fedorov, A.V.; Kostenko, B.F.

    2000-01-01

Carbon K-Auger electron spectra from amorphous carbon foils induced by fast heavy ions are theoretically investigated. The high-energy tail of the Auger structure, showing a clear projectile charge dependence, is analyzed within the thermal-spike model framework as well as in the frame of another model taking into account some kinetic features of the process. The poor agreement between theoretically and experimentally determined temperatures is suggested to be due to an improper account of double electron excitations, or due to shake-up processes which leave the system in a more energetic initial state than a statically screened core hole

  4. Investigating the consistency between proxy-based reconstructions and climate models using data assimilation: a mid-Holocene case study

    NARCIS (Netherlands)

    A. Mairesse; H. Goosse; P. Mathiot; H. Wanner; S. Dubinkina (Svetlana)

    2013-01-01

The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of

  5. Pre-Processing and Modeling Tools for Bigdata

    Directory of Open Access Journals (Sweden)

    Hashem Hadi

    2016-09-01

Modeling tools and operators help the user / developer to identify the processing field on top of the sequence and to send into the computing module only the data related to the requested result. The remaining data is not relevant and will slow down the processing. The biggest challenge nowadays is to get high quality processing results with reduced computing time and costs. To do so, we must review the processing sequence by adding several modeling tools. The existing processing models do not take this aspect into consideration and focus on getting high calculation performance, which increases the computing time and costs. In this paper we provide a study of the main modeling tools for BigData and a new model based on pre-processing.

  6. Strong time-consistency in the cartel-versus-fringe model

    NARCIS (Netherlands)

    Groot, F.; Withagen, C.A.A.M.; Zeeuw, de A.J.

    2003-01-01

    Due to developments on the oil market in the 1970s, the theory of exhaustible resources was extended with the cartel-versus-fringe model to characterize markets with one big coherent cartel and a large number of small suppliers called the fringe. Because cartel and fringe are leader and follower,

  7. Self-consistent semi-analytic models of the first stars

    Science.gov (United States)

    Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.

    2018-04-01

    We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.

  8. A consistent multigroup model for radiative transfer and its underlying mean opacities

    International Nuclear Information System (INIS)

    Turpault, Rodolphe

    2005-01-01

In some regimes, such as in plasma physics or in super orbital atmospheric entry of space objects, the effects of radiation are crucial and can tremendously modify the hydrodynamics of the gas. In such cases, it is therefore important to have a good prediction of the radiative variables. However, full transport solutions of these multi-dimensional, time-dependent problems are too expensive to be used in a coupled configuration. It is hence necessary to develop other models for radiation that are cheap, yet accurate enough to give good predictions of the radiative effects. We will herein introduce the multigroup-M1 model and look at its characteristics, and in particular try to separate the angular error from the frequential one, since these two approximations play very different roles. The angular behaviour of the model will be tested on a case proposed by Su and Olson and used by Olson et al. to compare various moments and (flux-limited) diffusion models. For the frequency behaviour, we use a simplified flame test-case and show the importance of taking good mean opacities
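To illustrate what a "good mean opacity" involves, a group-mean opacity is typically a Planck-weighted average of the spectral opacity over each frequency group. A toy sketch under simplifying assumptions (dimensionless frequencies, physical constants dropped, made-up Kramers-like opacity law):

```python
import math

def planck(nu, T):
    """Planck spectral weight: only the nu^3 / (exp(nu/T) - 1) shape
    matters for a weighted mean (constants cancel; nu in units of kT/h)."""
    return nu**3 / math.expm1(nu / T)

def planck_mean_opacity(kappa, group, T, n=1000):
    """Planck-weighted mean opacity over one frequency group
    [nu_lo, nu_hi]: kappa_g = int kappa B dnu / int B dnu,
    using simple midpoint quadrature."""
    lo, hi = group
    dnu = (hi - lo) / n
    num = den = 0.0
    for i in range(n):
        nu = lo + (i + 0.5) * dnu
        w = planck(nu, T)
        num += kappa(nu) * w
        den += w
    return num / den

# Hypothetical opacity falling as nu^-3 (Kramers-like)
kappa = lambda nu: nu**-3
kg = planck_mean_opacity(kappa, (1.0, 5.0), T=2.0)
print(round(kg, 4))
```

A sanity check on the design: for a frequency-independent opacity the weighted mean must return that same constant, which makes the quadrature easy to verify.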

  10. Self-consistent model for the radial current generation during fishbone activity

    International Nuclear Information System (INIS)

    Lutsenko, V.V.; Marchenko, V.S.

    2002-01-01

    Line broadened quasilinear burst model, originally developed for the bump-on-tail instability [H. L. Berk et al., Nucl. Fusion 35, 1661 (1995)], is extended to the problem of sheared flow generation by the fishbone burst. It is supposed that the radial current of the resonant fast ions can be sufficient to create the transport barrier

  11. Latent state-trait models for longitudinal family data : Investigating consistency in perceived support

    NARCIS (Netherlands)

    Loncke, Justine; Mayer, Axel; Eichelsheim, Veroni I.; Branje, Susan J.T.; Meeus, Wim H.J.; Koot, Hans M.; Buysse, Ann; Loeys, Tom

    2017-01-01

    Support is key to healthy family functioning. Using the family social relations model (SRM), it has already been shown that variability in perceived support is mostly attributed to individual perceiver effects. Little is known, however, as to whether those effects are stable or occasion-specific.

  13. A self-consistent model for the Galactic cosmic ray, antiproton and positron spectra

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    In this talk I will present the escape model of Galactic cosmic rays. This model explains the measured cosmic ray spectra of individual groups of nuclei from TeV to EeV energies. It predicts an early transition to extragalactic cosmic rays, in agreement with recent Auger data. The escape model also explains the soft neutrino spectrum 1/E^2.5 found by IceCube in concordance with Fermi gamma-ray data. I will show that within the same model one can explain the excess of positrons and antiprotons above 20 GeV found by PAMELA and AMS-02, the discrepancy in the slopes of the spectra of cosmic ray protons and heavier nuclei in the TeV-PeV energy range and the plateau in cosmic ray dipole anisotropy in the 2-50 TeV energy range by adding the effects of a 2 million year old nearby supernova.

  14. Consistent stress-strain ductile fracture model as applied to two grades of beryllium

    International Nuclear Information System (INIS)

    Priddy, T.G.; Benzley, S.E.; Ford, L.M.

    1980-01-01

    Published yield and ultimate biaxial stress and strain data for two grades of beryllium are correlated with a more complete method of characterizing macroscopic strain at fracture initiation in ductile materials. Results are compared with those obtained from an exponential, mean stress dependent, model. Simple statistical methods are employed to illustrate the degree of correlation for each method with the experimental data

  15. Self-consistent collisional-radiative model for hydrogen atoms: Atom–atom interaction and radiation transport

    International Nuclear Information System (INIS)

    Colonna, G.; Pietanza, L.D.; D’Ammando, G.

    2012-01-01

    Graphical abstract: Self-consistent coupling between radiation, state-to-state kinetics, electron kinetics and fluid dynamics. Highlight: ► A CR model of shock-wave in hydrogen plasma has been presented. ► All equations have been coupled self-consistently. ► Non-equilibrium electron and level distributions are obtained. ► The results show non-local effects and non-equilibrium radiation. - Abstract: A collisional-radiative model for the hydrogen atom, coupled self-consistently with the Boltzmann equation for free electrons, has been applied to model a shock tube. The kinetic model has been completed considering atom–atom collisions and the vibrational kinetics of the ground state of hydrogen molecules. The atomic level kinetics has also been coupled with a radiative transport equation to determine the effective absorption and emission coefficients and non-local energy transfer.

  16. Thermodynamically consistent modeling and simulation of multi-component two-phase flow with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2017-12-09

    A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is an attractive alternative recently over the NPT-based framework to model the realistic fluids. The proposed model uses the Helmholtz free energy rather than Gibbs free energy in the NPT-based framework. Different from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then we derive a transport equation of the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of multiple fluids. A relation between the pressure gradient and chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies the total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of Helmholtz free energy density and tight coupling relations between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of Helmholtz free energy density and deal well with the coupling relations between molar densities and velocity through very careful physical observations with a mathematical rigor. We prove that the proposed numerical scheme can preserve the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
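The convex-concave splitting idea mentioned above can be illustrated on a scalar toy gradient flow (not the paper's multi-component Helmholtz energy): treating the convex part of a double-well energy implicitly and the concave part explicitly yields a time step whose discrete energy never increases.

```python
def energy(x):
    """Scalar double-well free energy f(x) = x^4/4 - x^2/2."""
    return 0.25 * x**4 - 0.5 * x**2

def step(x, dt):
    """One convex-concave splitting step for dx/dt = -f'(x): the convex
    part (x^4/4) is treated implicitly, the concave part (-x^2/2)
    explicitly, giving x_new + dt*x_new^3 = x + dt*x, solved here by
    Newton iteration."""
    rhs = x + dt * x
    y = x
    for _ in range(50):
        g = y + dt * y**3 - rhs
        y -= g / (1.0 + 3.0 * dt * y**2)
    return y

x, dt = 2.0, 0.5
energies = [energy(x)]
for _ in range(20):
    x = step(x, dt)
    energies.append(energy(x))

print(round(x, 3))  # relaxes toward the well at x = 1 → 1.0
```

The point of the splitting is exactly the property asserted in the abstract: the discrete energy sequence is non-increasing regardless of the (possibly large) step size.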

  17. Self-Consistent 3D Modeling of Electron Cloud Dynamics and Beam Response

    International Nuclear Information System (INIS)

    Furman, Miguel; Furman, M.A.; Celata, C.M.; Kireeff-Covo, M.; Sonnad, K.G.; Vay, J.-L.; Venturini, M.; Cohen, R.; Friedman, A.; Grote, D.; Molvik, A.; Stoltz, P.

    2007-01-01

    We present recent advances in the modeling of beam electron-cloud dynamics, including surface effects such as secondary electron emission, gas desorption, etc, and volumetric effects such as ionization of residual gas and charge-exchange reactions. Simulations for the HCX facility with the code WARP/POSINST will be described and their validity demonstrated by benchmarks against measurements. The code models a wide range of physical processes and uses a number of novel techniques, including a large-timestep electron mover that smoothly interpolates between direct orbit calculation and guiding-center drift equations, and a new computational technique, based on a Lorentz transformation to a moving frame, that allows the cost of a fully 3D simulation to be reduced to that of a quasi-static approximation

  18. Genome scale models of yeast: towards standardized evaluation and consistent omic integration

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Nielsen, Jens

    2015-01-01

Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are curre...... in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted....

  19. Do we really use rainfall observations consistent with reality in hydrological modelling?

    Science.gov (United States)

    Ciampalini, Rossano; Follain, Stéphane; Raclot, Damien; Crabit, Armand; Pastor, Amandine; Moussa, Roger; Le Bissonnais, Yves

    2017-04-01

Spatial and temporal patterns in rainfall control how water reaches the soil surface and interacts with soil properties (i.e., soil wetting, infiltration, saturation). Once a hydrological event is defined by a rainfall with its spatiotemporal variability and by environmental parameters such as soil properties (including land use, topographic and anthropic features), the evidence shows that each parameter variation produces different, specific outputs (e.g., runoff, flooding). In this study, we focus on the effect of rainfall patterns because, owing to the difficulty of obtaining detailed data, their influence in modelling is frequently underestimated or neglected. A rainfall event affects a catchment non-uniformly: it is spatially localized and its pattern moves in space and time. How and when the water reaches the soil and saturates it, relative to the geometry of the catchment, deeply influences soil saturation, runoff, and hence sediment delivery. This research, addressing a hypothetical, simple case, aims to stimulate the debate on the quality of the rainfall data used in hydrological and soil erosion modelling. We test, on a small catchment in the south of France (Roujan, Languedoc-Roussillon), the influence of rainfall variability using a hybrid high-definition hydrological and soil erosion model, combining a kinematic wave with the St. Venant equation and a simplified "bucket" conceptual model for groundwater, able to quantify the effect of different spatiotemporal patterns of a very-high-definition synthetic rainfall. Results indicate that rainfall spatiotemporal patterns are crucial when simulating an erosive event: differences between spatially uniform rainfalls, as frequently adopted in simulations, and the hypothetical rainfall patterns applied here reveal that the outcome of a simulated event can be strongly underestimated.
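The "bucket" conceptual groundwater model mentioned in the abstract can be sketched as a single storage with saturation-excess spill and linear drainage (the parameters and rainfall series below are made up for illustration; this is not the authors' HD hybrid model):

```python
def bucket_runoff(rainfall, capacity=50.0, k=0.1, s0=0.0):
    """Minimal 'bucket' rainfall-runoff sketch: storage fills with
    rain, spills as saturation-excess runoff above `capacity`, and
    drains as baseflow at linear rate `k` per time step."""
    s, runoff = s0, []
    for p in rainfall:
        s += p                            # rain fills the bucket
        excess = max(0.0, s - capacity)   # saturation-excess spill
        s -= excess
        base = k * s                      # linear-reservoir drainage
        s -= base
        runoff.append(excess + base)
    return runoff

print([round(q, 2) for q in bucket_runoff([10.0, 60.0, 0.0])])  # → [1.0, 24.0, 4.5]
```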

  20. Flood damage: a model for consistent, complete and multipurpose scenarios

    Directory of Open Access Journals (Sweden)

    S. Menoni

    2016-12-01

    implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.

  1. A consistent model for leptogenesis, dark matter and the IceCube signal

    Energy Technology Data Exchange (ETDEWEB)

    Fiorentin, M. Re [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Niro, V. [Departamento de Física Teórica, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Instituto de Física Teórica UAM/CSIC,Calle Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Fornengo, N. [Dipartimento di Fisica, Università di Torino,via P. Giuria, 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Torino,via P. Giuria, 1, 10125 Torino (Italy)

    2016-11-04

We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino N_1, thus fixing its mass and lifetime, while the production of N_1 in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional SU(2)_R interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the B−L asymmetry dominantly produced by the next-to-lightest neutrino N_2. Further consequences and predictions of the model are that: the N_1 production implies a specific power-law relation between the reheating temperature of the Universe and the vacuum expectation value of the SU(2)_R triplet; leptogenesis imposes a lower bound on the reheating temperature of the Universe at 7×10^9 GeV. Additionally, the model requires a vanishing absolute neutrino mass scale m_1 ≃ 0.

  2. Demonstration of a geostatistical approach to physically consistent downscaling of climate modeling simulations

    KAUST Repository

    Jha, Sanjeev Kumar; Mariethoz, Gregoire; Evans, Jason P.; McCabe, Matthew

    2013-01-01

    A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential to not only bridge issues of spatial resolution in regional and global climate model simulations but also in feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables with available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.

  3. Modeling the Spray Forming of H13 Steel Tooling

    Science.gov (United States)

    Lin, Yaojun; McHugh, Kevin M.; Zhou, Yizhang; Lavernia, Enrique J.

    2007-07-01

    On the basis of a numerical model, the temperature and liquid fraction of spray-formed H13 tool steel are calculated as a function of time. Results show that a preheated substrate at the appropriate temperature can lead to very low porosity by increasing the liquid fraction in the deposited steel. The calculated cooling rate can lead to a microstructure consisting of martensite, lower bainite, retained austenite, and proeutectoid carbides in as-spray-formed material. In the temperature range between the solidus and liquidus temperatures, the calculated temperature of the spray-formed material increases with increasing substrate preheat temperature, which raises the liquid fraction of the deposited steel and thereby lowers the porosity. In the temperature region where austenite decomposition occurs, the substrate preheat temperature has a negligible influence on the cooling rate of the spray-formed material. On the basis of the calculated results, it is possible to generate sufficient liquid fraction during spray forming by using a high growth rate of the deposit without preheating the substrate, and the growth rate of the deposit has almost no influence on the cooling rate in the temperature region of austenite decomposition.

  4. Tools and Models for Integrating Multiple Cellular Networks

    Energy Technology Data Exchange (ETDEWEB)

    Gerstein, Mark [Yale Univ., New Haven, CT (United States). Gerstein Lab.

    2015-11-06

    In this grant, we have systematically investigated the integrated networks, which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source, and they are available for download from GitHub, and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding its dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore for Aim 2 of this grant, we have developed several tools and carried out analysis for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interactions networks with various dynamical interfaces [2]. We then examined the association between network topology with phenotypic effects such as gene essentiality. In particular, we have organized E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. 
In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed
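    The hierarchy-determination idea in Aim 1, assigning each regulator a level that captures the direction of information flow, can be sketched for an acyclic network as follows (a toy reconstruction, not the published algorithm [1]):

```python
def hierarchy_levels(edges):
    """Assign hierarchy levels to an acyclic regulatory network:
    nodes with no outgoing regulation sit at level 0, and each
    regulator sits one level above its highest-level target, so the
    top levels mark where regulatory information flow originates."""
    targets, nodes = {}, set()
    for src, dst in edges:
        targets.setdefault(src, []).append(dst)
        nodes.update((src, dst))

    memo = {}
    def level(n):
        if n not in memo:
            memo[n] = 1 + max((level(t) for t in targets.get(n, [])), default=-1)
        return memo[n]

    return {n: level(n) for n in nodes}
```

    A production algorithm would additionally handle feedback loops (e.g. by collapsing strongly connected components first), which this sketch omits.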

  5. LIDT-DD: A New Self-Consistent Debris Disc Model Including Radiation Pressure and Coupling Dynamical and Collisional Evolution

    Science.gov (United States)

    Kral, Q.; Thebault, P.; Charnoz, S.

    2014-01-01

    The first attempt at developing a fully self-consistent code coupling dynamics and collisions to study debris discs (Kral et al. 2013) is presented. Until now, these two crucial mechanisms have been studied separately, with N-body and statistical collisional codes, respectively, because of stringent computational constraints. We present a new model named LIDT-DD which is able to follow, over long timescales, the coupled evolution of dynamics (including radiation forces) and collisions in a self-consistent way.

  6. System capacity and economic modeling computer tool for satellite mobile communications systems

    Science.gov (United States)

    Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.

    1988-01-01

    A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.
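    The link-calculation portion of such an engineering model reduces to decibel bookkeeping. A minimal sketch (illustrative, not the spreadsheet model described in the record; the example figures are assumptions):

```python
import math

def free_space_loss_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299792458.0
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

def downlink_cn0(eirp_dbw, path_loss_db, gt_dbk):
    """Carrier-to-noise-density ratio in dB-Hz from the basic budget
    C/N0 = EIRP - L_path + G/T - 10*log10(k), k = Boltzmann's constant."""
    boltzmann_db = 10 * math.log10(1.380649e-23)  # about -228.6 dBW/(K*Hz)
    return eirp_dbw - path_loss_db + gt_dbk - boltzmann_db
```

    Channel capacity for each quality of service then follows from the achieved C/N0 and the required per-channel Eb/N0, which is where the earth-terminal configurations enter the model.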

  7. Advancing nucleosynthesis in self-consistent, multidimensional models of core-collapse supernovae

    International Nuclear Information System (INIS)

    Austin Harris, J.; Chertkow, M.A.; Blondin, J.M.; Pedro Marronetti; Florida Atlantic University, Boca Raton, FL

    2014-01-01

    We investigate CCSN in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species α-network. However, the limited capacity of the α-network to accurately evolve detailed composition, the neutronization and the nuclear energy generation rate has fettered the ability of prior CCSN simulations to accurately reproduce the chemical abundances and energy distributions as known from observations. These deficits can be partially ameliorated by 'post-processing' with a more realistic network. Lagrangian tracer particles placed throughout the star record the temporal evolution of the initial simulation and enable the extension of the nuclear network evolution by incorporating larger systems in post-processing nucleosynthesis calculations. We present post-processing results of four ab initio axisymmetric CCSN 2D models evolved with the smaller α-network, and initiated from stellar metallicity, nonrotating progenitors of mass 12, 15, 20, and 25 M ⊙ . As a test of the limitations of post-processing, we provide preliminary results from an ongoing simulation of the 15 M ⊙ model evolved with a realistic 150-species nuclear reaction network in situ. With more accurate energy generation rates and an improved determination of the thermodynamic trajectories of the tracer particles, we can better unravel the complicated multidimensional 'mass-cut' in CCSN simulations and probe for less energetically significant nuclear processes like the νp-process and the r-process, which require still larger networks. (author)

  8. Consistency of different tropospheric models and mapping functions for precise GNSS processing

    Science.gov (United States)

    Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio

    2017-04-01

    The TOmographic Model of the IONospheric electron content (TOMION) software implements a simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In a first scenario, only forward processing was activated and the coordinates of the Wide Area GNSS network were loosely constrained, without fixing the carrier phase ambiguities, for both reference and rover receivers. In a second one, precise point positioning (PPP) was implemented, iterating for a fixed coordinates set for the second day. Comparisons between TOMION, IGS and GIPSY estimates have been performed, and for the first, IGS clocks and orbits were considered. The agreement with GIPSY results seems to be 10 times better than with the IGS final ZTD product, despite having considered IGS products for the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP, in the worst cases. Moreover, the Vienna mapping function in general showed somewhat better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm for all studied situations, with slightly better performance for the Niell one. Further improvement could be achieved by computing the Vienna mapping function coefficients from raytracing, as well as by integrating comparative meteorological parameters.
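    The Niell and Vienna mapping functions share the same normalized continued-fraction form; they differ in how the coefficients a, b, c are obtained (empirical latitude/day-of-year tables for Niell, numerical-weather-model raytracing for Vienna). A sketch of the shared form, with purely illustrative coefficient magnitudes:

```python
import math

def mapping_function(elev_rad, a, b, c):
    """Normalized continued-fraction (Marini/Herring) mapping function,
    the form shared by the Niell and Vienna troposphere mapping functions:
    m(e) = (1 + a/(1 + b/(1 + c))) / (sin e + a/(sin e + b/(sin e + c)))."""
    s = math.sin(elev_rad)
    numerator = 1 + a / (1 + b / (1 + c))
    denominator = s + a / (s + b / (s + c))
    return numerator / denominator
```

    By construction m(90°) = 1, so the slant delay at any elevation is the zenith delay scaled by m(e); low elevations amplify both the delay and any coefficient error, which is why the choice of mapping function matters for ZTD estimation.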

  9. Self-consistent one-dimensional modelling of x-ray laser plasmas

    International Nuclear Information System (INIS)

    Wan, A.S.; Walling, R.S.; Scott, H.A.; Mayle, R.W.; Osterheld, A.L.

    1992-01-01

    This paper presents the simulation of a planar, one-dimensional expanding Ge x-ray laser plasma using a new code which combines hydrodynamics, laser absorption, and detailed level population calculations within the same simulation. Previously, these simulations were performed in separate steps. We will present the effect of line transfer on gains and excited level populations and compare the line transfer result with simulations using escape probabilities. We will also discuss the impact of different atomic models on the accuracy of our simulation

  10. Stretched-exponential decay functions from a self-consistent model of dielectric relaxation

    International Nuclear Information System (INIS)

    Milovanov, A.V.; Rasmussen, J.J.; Rypdal, K.

    2008-01-01

    There are many materials whose dielectric properties are described by a stretched exponential, the so-called Kohlrausch-Williams-Watts (KWW) relaxation function. Its physical origin and statistical-mechanical foundation have been a matter of debate in the literature. In this Letter we suggest a model of dielectric relaxation, which naturally leads to a stretched exponential decay function. Some essential characteristics of the underlying charge conduction mechanisms are considered. A kinetic description of the relaxation and charge transport processes is proposed in terms of equations with time-fractional derivatives
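    The KWW function itself is simple to state; a minimal sketch, with the Debye case recovered at β = 1:

```python
import math

def kww(t, tau, beta):
    """Kohlrausch-Williams-Watts stretched-exponential relaxation
    phi(t) = exp(-(t/tau)**beta), with 0 < beta <= 1. At beta = 1 it
    reduces to the simple Debye exponential, and phi(tau) = 1/e
    regardless of beta."""
    return math.exp(-((t / tau) ** beta))
```

    The stretching exponent β < 1 makes the long-time decay slower than any single exponential, which is the empirical signature the Letter's relaxation model seeks to explain.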

  11. In situ neutron diffraction and Elastic–Plastic Self-Consistent polycrystal modeling of HT-9

    International Nuclear Information System (INIS)

    Clausen, B.; Brown, D.W.; Bourke, M.A.M.; Saleh, T.A.; Maloy, S.A.

    2012-01-01

    Qualifying materials for use in reactors with fluences greater than 200 dpa (displacements per atom) requires development of advanced alloys and irradiations in fast reactors to test these alloys. Research into the mechanical behavior of these materials under reactor conditions is ongoing. In order to probe changes in deformation mechanisms due to radiation in these materials, samples of HT-9 were tested in tension in situ on the SMARTS instrument at Los Alamos Neutron Science Center. Experimental results, confirmed with modeling, show significant load sharing between the carbides and parent phase of the steel beyond yield, displaying the critical role of carbides during deformation, along with basic texture development.

  12. In situ neutron diffraction and Elastic-Plastic Self-Consistent polycrystal modeling of HT-9

    Energy Technology Data Exchange (ETDEWEB)

    Clausen, B., E-mail: clausen@lanl.gov [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Brown, D.W.; Bourke, M.A.M.; Saleh, T.A.; Maloy, S.A. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2012-06-15

    Qualifying materials for use in reactors with fluences greater than 200 dpa (displacements per atom) requires development of advanced alloys and irradiations in fast reactors to test these alloys. Research into the mechanical behavior of these materials under reactor conditions is ongoing. In order to probe changes in deformation mechanisms due to radiation in these materials, samples of HT-9 were tested in tension in situ on the SMARTS instrument at Los Alamos Neutron Science Center. Experimental results, confirmed with modeling, show significant load sharing between the carbides and parent phase of the steel beyond yield, displaying the critical role of carbides during deformation, along with basic texture development.

  13. Designer Modeling for Personalized Game Content Creation Tools

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2013-01-01

    With the growing use of automated content creation and computer-aided design tools in game development, there is potential for enhancing the design process through personalized interactions between the software and the game developer. This paper proposes designer modeling for capturing the designer’s preferences, goals and processes from their interaction with a computer-aided design tool, and suggests methods and domains within game development where such a model can be applied. We describe how designer modeling could be integrated with current work on automated and mixed-initiative content creation, and envision future directions which focus on personalizing the processes to a designer’s particular wishes.

  14. Fish habitat simulation models and integrated assessment tools

    International Nuclear Information System (INIS)

    Harby, A.; Alfredsen, K.

    1999-01-01

    Because of human development, water use is increasing in importance, and this worldwide trend is leading to a growing number of user conflicts, with a strong need for assessment tools to measure the impacts on both the ecosystem and the different users and user groups. The quantitative tools must allow a comparison of alternatives, different user groups, etc., and the tools must be integrated, since impact assessments involve different disciplines. Fish species, especially young ones, are indicators of the environmental state of a riverine system, and monitoring them is a way to follow environmental changes. The direct and indirect impacts on the ecosystem itself are measured; impacts on user groups are not included. This paper concentrates on fish habitat simulation models, with methods and examples from Norway, and includes some ideas on integrated modelling tools for impact assessment studies. One-dimensional hydraulic models are rapidly calibrated and do not require any expert knowledge in hydraulics. Two- and three-dimensional models require somewhat more skilled users, especially if the topography is very heterogeneous. The advantages of using two- and three-dimensional models include: they do not need any calibration, just validation; they are predictive; and they can be more cost-effective than traditional habitat hydraulic models when combined with modern data acquisition systems and tailored to a multi-disciplinary study. Model choice should be based on available data and possible data acquisition; available manpower, computer, and software resources; and the needed output and accuracy. 58 refs

  15. Simulation Tools for Electrical Machines Modelling: Teaching and ...

    African Journals Online (AJOL)

    Simulation tools are used both for research and teaching to allow a good comprehension of the systems under study before practical implementations. This paper illustrates the way MATLAB is used to model non-linearities in a synchronous machine. The machine is modeled in the rotor reference frame with currents as state ...

  16. A Single Neonatal Exposure to BMAA in a Rat Model Produces Neuropathology Consistent with Neurodegenerative Diseases

    Directory of Open Access Journals (Sweden)

    Laura Louise Scott

    2017-12-01

    Full Text Available Although cyanobacterial β-N-methylamino-l-alanine (BMAA has been implicated in the development of Alzheimer’s Disease (AD, Parkinson’s Disease (PD and Amyotrophic Lateral Sclerosis (ALS, no BMAA animal model has reproduced all the neuropathology typically associated with these neurodegenerative diseases. We present here a neonatal BMAA model that causes β-amyloid deposition, neurofibrillary tangles of hyper-phosphorylated tau, TDP-43 inclusions, Lewy bodies, microbleeds and microgliosis as well as severe neuronal loss in the hippocampus, striatum, substantia nigra pars compacta, and ventral horn of the spinal cord in rats following a single BMAA exposure. We also report here that BMAA exposure on PND3 in particular, but also on PND4 and 5 (the critical period of neurogenesis in the rodent brain, is substantially more toxic than exposure on G14, PND6, 7 and 10, which suggests that BMAA could potentially interfere with neonatal neurogenesis in rats. The observed selective toxicity of BMAA during neurogenesis and, in particular, the pattern of neuronal loss in BMAA-exposed rats suggest that BMAA elicits its effect by altering dopamine and/or serotonin signaling in rats.

  17. A multichannel model for the self-consistent analysis of coherent transport in graphene nanoribbons.

    Science.gov (United States)

    Mencarelli, Davide; Pierantoni, Luca; Farina, Marco; Di Donato, Andrea; Rozzi, Tullio

    2011-08-23

    In this contribution, we analyze the multichannel coherent transport in graphene nanoribbons (GNRs) by a scattering matrix approach. We consider the transport properties of GNR devices of a very general form, involving multiple bands and multiple leads. The 2D quantum transport over the whole GNR surface, described by the Schrödinger equation, is strongly nonlinear as it implies calculation of self-generated and externally applied electrostatic potentials, solutions of the 3D Poisson equation. The surface charge density is computed as a balance of carriers traveling through the channel at all of the allowed energies. Moreover, formation of bound charges corresponding to a discrete modal spectrum is observed and included in the model. We provide simulation examples by considering GNR configurations typical for transistor devices and GNR protrusions that find an interesting application as cold cathodes for X-ray generation. With reference to the latter case, a unified model is required in order to couple charge transport and charge emission. However, to a first approximation, these could be considered as independent problems, as in the example. © 2011 American Chemical Society

  18. Tool Efficiency Analysis model research in SEMI industry

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2018-01-01

    Full Text Available One of the key goals in the SEMI industry is to improve equipment throughput and ensure that equipment production efficiency is maximized. This paper is based on SEMI standards in semiconductor equipment control; it defines the transition rules between different tool states, and presents a TEA system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified successfully, yielding the parameter values used to measure equipment performance, along with suggestions for improvement.
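    The state-machine idea can be sketched as follows; the state names and transition rules here are illustrative stand-ins, not the actual SEMI-defined equipment states:

```python
# Illustrative equipment states and a finite-state machine that
# validates transitions and accumulates time per state.
ALLOWED = {
    "IDLE":             {"PRODUCTIVE", "ENGINEERING", "SCHEDULED_DOWN"},
    "PRODUCTIVE":       {"IDLE", "UNSCHEDULED_DOWN"},
    "ENGINEERING":      {"IDLE"},
    "SCHEDULED_DOWN":   {"IDLE"},
    "UNSCHEDULED_DOWN": {"IDLE", "SCHEDULED_DOWN"},
}

class ToolFSM:
    def __init__(self, state="IDLE"):
        self.state = state
        self.clock = 0.0
        self.time_in = {s: 0.0 for s in ALLOWED}

    def advance(self, hours):
        """Accumulate elapsed time in the current state."""
        self.time_in[self.state] += hours
        self.clock += hours

    def transition(self, new_state):
        """Apply a state change, rejecting transitions the rules forbid."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

    def utilization(self):
        """Fraction of tracked time spent in the productive state."""
        return self.time_in["PRODUCTIVE"] / self.clock if self.clock else 0.0
```

    Once every state change is validated and timed this way, efficiency metrics fall out of the accumulated per-state times, which is the essence of the TEA analysis described above.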

  19. Mental health courts and their selection processes: modeling variation for consistency.

    Science.gov (United States)

    Wolff, Nancy; Fabrikant, Nicole; Belenko, Steven

    2011-10-01

    Admission into mental health courts is based on a complicated and often variable decision-making process that involves multiple parties representing different expertise and interests. To the extent that eligibility criteria of mental health courts are more suggestive than deterministic, selection bias can be expected. Very little research has focused on the selection processes underpinning problem-solving courts even though such processes may dominate the performance of these interventions. This article describes a qualitative study designed to deconstruct the selection and admission processes of mental health courts. In this article, we describe a multi-stage, complex process for screening and admitting clients into mental health courts. The selection filtering model that is described has three eligibility screening stages: initial, assessment, and evaluation. The results of this study suggest that clients selected by mental health courts are shaped by the formal and informal selection criteria, as well as by the local treatment system.

  20. Interface Consistency

    DEFF Research Database (Denmark)

    Staunstrup, Jørgen

    1998-01-01

    This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces, it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications is possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
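    The simplest form mentioned above, a syntactical check of parameter types, can be sketched like this (the interface descriptions are hypothetical, not the paper's notation):

```python
def interfaces_consistent(provided, required):
    """Simplest syntactic interface-consistency check: every operation
    the client requires must be provided with identical parameter types.
    Interfaces are dicts mapping operation name -> tuple of typed params."""
    problems = []
    for op, req_params in required.items():
        if op not in provided:
            problems.append(f"missing operation: {op}")
        elif provided[op] != req_params:
            problems.append(
                f"{op}: provided {provided[op]} vs required {req_params}")
    return problems
```

    A semantic check would go further, e.g. verifying protocol ordering or value constraints, which is what the paper's more sophisticated forms address.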

  1. Business intelligence tools for radiology: creating a prototype model using open-source tools.

    Science.gov (United States)

    Prevedello, Luciano M; Andriole, Katherine P; Hanson, Richard; Kelly, Pauline; Khorasani, Ramin

    2010-04-01

    Digital radiology departments could benefit from the ability to integrate and visualize data (e.g. information reflecting complex workflow states) from all of their imaging and information management systems in one composite presentation view. Leveraging data warehousing tools developed in the business world may be one way to achieve this capability. Collectively, the concept of managing the information available in such a data repository is known as Business Intelligence or BI. This paper describes the concepts used in Business Intelligence, their importance to modern Radiology, and the steps used in the creation of a prototype model of a data warehouse for BI using open-source tools.

  2. Towards a more consistent picture of isopycnal mixing in climate models

    Science.gov (United States)

    Gnanadesikan, A.; Pradal, M. A. S.; Koszalka, I.; Abernathey, R. P.

    2014-12-01

    The stirring of tracers by mesoscale eddies along isopycnal surfaces is often represented in coarse-resolution models by the Redi diffusion parameter ARedi. Theoretical treatments of ARedi often assume it should scale as the eddy energy or the growth rate of mesoscale eddies, producing a picture where it is high in boundary currents and low (of order a few hundred m2/s) in the gyre interiors. However, observational estimates suggest that ARedi should be very large (of order thousands of m2/s) in the gyre interior. We present results of recent simulations comparing a range of spatially constant values of ARedi (400, 800, 1200 and 2400 m2/s) to a spatially resolved estimate based on altimetry and a zonally averaged version of the same estimate. In general, increasing the ARedi coefficient destratifies and warms the high latitudes. Relative to our control simulation, the spatially dependent coefficient is lower in the Southern Ocean, but high in the North Pacific, and so the temperature changes mirror this. We also examine the response of ocean hypoxia to these changes. In general, the zonally averaged version of the altimetry-based estimate of ARedi does not capture the full 2D representation.
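    The effect of a spatially varying versus constant ARedi can be caricatured in one dimension: flux-form diffusion conserves the tracer while spreading it faster where the diffusivity is large. A minimal explicit-scheme sketch (illustrative only; ocean models apply the Redi operator along isopycnals in three dimensions):

```python
def diffuse_1d(tracer, a_redi, dx, dt, steps):
    """Explicit 1D tracer diffusion in flux form,
    dC/dt = d/dx( A(x) dC/dx ), with no-flux boundaries, so the
    total tracer is conserved while spreading is faster where A is large."""
    n = len(tracer)
    c = list(tracer)
    for _ in range(steps):
        flux = [0.0] * (n + 1)          # flux[0] and flux[n] stay 0 (no-flux)
        for i in range(1, n):
            a_face = 0.5 * (a_redi[i - 1] + a_redi[i])  # diffusivity at cell face
            flux[i] = -a_face * (c[i] - c[i - 1]) / dx
        c = [c[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]
    return c
```

    The explicit scheme is stable here because dt * max(A) / dx**2 stays below 1/2; the same conservation property is what lets changes in ARedi redistribute, rather than create or destroy, heat and tracers in the simulations above.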

  3. Complementarity of DM Searches in a Consistent Simplified Model: the Case of Z'

    CERN Document Server

    Jacques, Thomas; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio

    2016-01-01

    We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC and direct detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy $Z'$ mediates the interactions between the SM and the DM. We find that in most cases IceCube provides the strongest bounds on this scenario, while the LHC constraints are only meaningful for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun are either $b \\bar b$ or $t \\bar t$, while the heavy DM annihilation is completely dominated by $Zh$ channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast Ice...

  4. Complementarity of DM searches in a consistent simplified model: the case of Z′

    International Nuclear Information System (INIS)

    Jacques, Thomas; Katz, Andrey; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio

    2016-01-01

    We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z ′ mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while the heavy DM annihilation is completely dominated by Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.

  5. Clumpy molecular clouds: A dynamic model self-consistently regulated by T Tauri star formation

    International Nuclear Information System (INIS)

    Norman, C.; Silk, J.

    1980-01-01

    A new model is proposed which can account for the longevity, energetics, and dynamical structure of dark molecular clouds. It seems clear that the kinetic and gravitational energy in macroscopic cloud motions cannot account for the energetics of many molecular clouds. A stellar energy source must evidently be tapped, and infrared observations indicate that one cannot utilize massive stars in dark clouds. Recent observations of a high space density of T Tauri stars in some dark clouds provide the basis for our assertion that high-velocity winds from these low-mass pre--main-sequence stars provide a continuous dynamic input into molecular clouds. The T Tauri winds sweep up shells of gas, the intersections or collisions of which form dense clumps embedded in a more rarefied interclump medium. Observations constrain the clumps to be ram-pressure confined, but at the relatively low Mach numbers, continuous leakage occurs. This mass input into the interclump medium leads to the existence of two phases: a dense, cold phase (clumps of density approx. 10^4--10^5 cm^-3 and temperature approx. 10 K) and a warm, more diffuse interclump medium (ICM, of density approx. 10^3--10^4 cm^-3 and temperature approx. 30 K). Clump collisions lead to coalescence, and the evolution of the mass spectrum of clumps is studied

  6. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer......-format and COM-objects, are incorporated to allow the export and import of mathematical models; 5) a user interface that provides the work-flow and data-flow to guide the user through the different modelling tasks....

  7. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    The term "model-driven" is by no means a new buzzword within the ranks of the system development community. Nevertheless, the ever-increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present the conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
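    At its core, lightweight traceability amounts to maintaining a set of directed links between model elements and answering forward and backward queries over them. A minimal sketch (the element names are hypothetical, unrelated to the specific CASE tool discussed):

```python
class TraceModel:
    """Minimal trace-link store: directed links between model elements
    (e.g. requirement -> design element -> test case), queryable both ways."""
    def __init__(self):
        self.forward = {}
        self.backward = {}

    def link(self, src, dst):
        """Record a trace link and its reverse for backward queries."""
        self.forward.setdefault(src, set()).add(dst)
        self.backward.setdefault(dst, set()).add(src)

    def trace(self, element, direction="forward"):
        """Transitive closure of links starting from `element`."""
        table = self.forward if direction == "forward" else self.backward
        seen, stack = set(), [element]
        while stack:
            for nxt in table.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen
```

    Impact analysis (what tests are affected if a requirement changes?) is a forward trace; coverage analysis (which requirement motivates this class?) is a backward one.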

  8. OISI dynamic end-to-end modeling tool

    Science.gov (United States)

    Kersten, Michael; Weidler, Alexander; Wilhelm, Rainer; Johann, Ulrich A.; Szerdahelyi, Laszlo

    2000-07-01

    The OISI Dynamic end-to-end modeling tool is tailored to end-to-end modeling and dynamic simulation of Earth- and space-based actively controlled optical instruments, such as optical stellar interferometers. `End-to-end modeling' denotes that the overall model comprises, besides optical sub-models, also structural, sensor, actuator, controller and disturbance sub-models influencing the optical transmission, so that the system-level instrument performance under disturbances and active optics can be simulated. This tool has been developed to support performance analysis and prediction as well as control loop design and fine-tuning for OISI, Germany's preparatory program for optical/infrared spaceborne interferometry initiated in 1994 by Dornier Satellitensysteme GmbH in Friedrichshafen.

  9. A Self-consistent Model of a Ray Through the Orion Complex

    Science.gov (United States)

    Abel, N. P.; Ferland, G. J.

    2003-12-01

    The Orion Complex is the best studied region of active star formation, with observational data available over the entire electromagnetic spectrum. These extensive observations give us a good idea of the physical structure of Orion, that being a thin ( ˜ 0.1 parsec) blister H II region on the face of the molecular cloud OMC-1. A PDR, where the transition from atoms & ions to molecules occurs, forms an interface between the two. Most of the physical processes are driven by starlight from the Trapezium cluster, with the star Ori C being the strongest source of radiation. Observations made towards lines of sight near Ori C reveal numerous H II and molecular line intensities. Photoionization calculations have played an important role in determining the physical properties of the regions where these lines originate, but thus far have treated the H II region and PDR as separate problems. In fact, these regions are energized by the same source of radiation, with the gas hydrodynamics providing the physical link between them. Here we present a unified physical model of a single ray through the Orion Complex. We choose a region 60'' west of Ori C, where extensive observations exist. These include lines that originate within the H II region, the background PDR, and regions deep inside OMC-1 itself. Improved treatments of the grain, molecular hydrogen, and CO physics have all been developed as part of the continuing evolution of the plasma code Cloudy, so that we can now simultaneously predict the full spectrum with few free parameters. This provides a holistic approach that will be validated in this well-studied environment and then extended to the distant starburst galaxies. Acknowledgements: We thank the NSF and NASA for support.

  10. Modeling the dielectric logging tool at high frequency

    International Nuclear Information System (INIS)

    Chew, W.C.

    1987-01-01

    The high frequency dielectric logging tool has been used widely in electromagnetic well logging because, by measuring the dielectric constant at high frequencies (1 GHz), the water saturation of rocks can be determined without measuring the water salinity in the rocks. As such, it can be used to delineate fresh-water-bearing zones, as the dielectric constant of fresh water is much higher than that of oil while the two may have the same resistivity. The authors present a computer model that simulates, through electromagnetic field analysis, the response of such a measurement tool in a well logging environment. As the measurement is performed at high frequency, usually with small separation between the transmitter and receivers, some small geological features can be measured by such a tool. The authors use the computer model to study the behavior of the tool across geological bed boundaries and across thin geological beds. Such a study can be very useful in understanding the limitations on the resolution of the tool. Furthermore, the standoff effect and the depth of investigation of the tool can be studied, which delineates the range of usefulness of the measurement.

  11. Modeling with data: tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  12. Application of the adiabatic self-consistent collective coordinate method to a solvable model of prolate-oblate shape coexistence

    International Nuclear Information System (INIS)

    Kobayasi, Masato; Matsuyanagi, Kenichi; Nakatsukasa, Takashi; Matsuo, Masayuki

    2003-01-01

    The adiabatic self-consistent collective coordinate method is applied to an exactly solvable multi-O(4) model that is designed to describe nuclear shape coexistence phenomena. The collective mass and dynamics of large amplitude collective motion in this model system are analyzed, and it is shown that the method yields a faithful description of tunneling motion through a barrier between the prolate and oblate local minima in the collective potential. The emergence of the doublet pattern is clearly described. (author)

  13. Open source modeling and optimization tools for planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  14. Analytical Modelling of Milling for Tool Design and Selection

    International Nuclear Information System (INIS)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-01-01

    This paper presents an efficient analytical model which makes it possible to simulate a large panel of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  15. Modeling of the 3RS tau protein with self-consistent field method and Monte Carlo simulation

    NARCIS (Netherlands)

    Leermakers, F.A.M.; Jho, Y.S.; Zhulina, E.B.

    2010-01-01

    Using a model with amino acid resolution of the 196 aa N-terminus of the 3RS tau protein, we performed both a Monte Carlo study and a complementary self-consistent field (SCF) analysis to obtain detailed information on conformational properties of these moieties near a charged plane (mimicking the

  16. Multinational consistency of a discrete choice model in quantifying health states for the extended 5-level EQ-5D

    NARCIS (Netherlands)

    Krabbe, P.F.M.; Devlin, N.J.; Stolk, E.A.; Shah, K.K.; Oppe, M.; Van Hout, B.; Quik, E.H.; Pickard, A.S.; Xie, F.

    2013-01-01

    Objectives: To investigate the feasibility of choice experiments for EQ-5D-5L states using computer-based data collection, and to examine the consistency of the estimated parameters values derived after modeling the stated preference data across countries in a multinational study. Methods: Similar

  17. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating first-order sensitivity coefficients, using sparse matrix technology applied to chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity-coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity-coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity-coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
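
The direct method described above can be sketched in miniature: the model ODE is integrated together with its coupled sensitivity equation dS/dt = (df/dy)S + df/dk, where S = dy/dk. The first-order decay model and forward-Euler integrator below are illustrative stand-ins for the package's chemical mechanisms and Gear-type solver, not its actual FORTRAN interfaces.

```python
def integrate_with_sensitivity(k, y0, t_end, dt=1e-4):
    """First-order decay y' = -k*y with its sensitivity S = dy/dk.

    Illustrative forward-Euler stand-in for a Gear-type procedure;
    the analytical solution is y = exp(-k*t), S = -t*exp(-k*t).
    """
    y, s, t = y0, 0.0, 0.0
    while t < t_end:
        dy = -k * y        # model equation f(y, k)
        ds = -k * s - y    # sensitivity equation: (df/dy)*S + df/dk
        y += dt * dy
        s += dt * ds
        t += dt
    return y, s

y, s = integrate_with_sensitivity(k=2.0, y0=1.0, t_end=1.0)
```

At t = 1 with k = 2 this reproduces y = e^-2 and S = -e^-2 to within the Euler truncation error.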

  18. Using the IEA ETSAP modelling tools for Denmark

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    signed the agreement and contributed to some early annexes. This project is motivated by an invitation to participate in ETSAP Annex X, "Global Energy Systems and Common Analyses: Climate friendly, Secure and Productive Energy Systems", for the period 2005 to 2007. The main activity is semi-annual workshops focusing on presentations of model analyses and use of the ETSAP tools (the MARKAL/TIMES family of models). The project was also planned to benefit from the EU project "NEEDS - New Energy Externalities Developments for Sustainability"; ETSAP is contributing to a part of NEEDS that develops..., Environment and Health (CEEH), starting from January 2007. This report summarises the activities under ETSAP Annex X and related projects, emphasising the development of modelling tools that will be useful for modelling the Danish energy system. It is also a status report for the development of a model...

  19. Designing tools for oil exploration using nuclear modeling

    Directory of Open Access Journals (Sweden)

    Mauborgne Marie-Laure

    2017-01-01

    When designing nuclear tools for oil exploration, one of the first steps is typically nuclear modeling for concept evaluation and initial characterization. Having an accurate model, including the availability of accurate cross sections, is essential to reduce or avoid time-consuming and costly design iterations. During tool response characterization, modeling is benchmarked with experimental data and then used to complement and to expand the database to make it more detailed and inclusive of more measurement environments which are difficult or impossible to reproduce in the laboratory. We present comparisons of our modeling results obtained using the ENDF/B-VI and ENDF/B-VII cross section databases, focusing on the response to a few elements found in the tool, borehole and subsurface formation. For neutron-induced inelastic and capture gamma ray spectroscopy, major obstacles may be caused by missing or inaccurate cross sections for essential materials. We show examples of the benchmarking of modeling results against experimental data obtained during tool characterization and discuss observed discrepancies.

  20. Designing tools for oil exploration using nuclear modeling

    Science.gov (United States)

    Mauborgne, Marie-Laure; Allioli, Françoise; Manclossi, Mauro; Nicoletti, Luisa; Stoller, Chris; Evans, Mike

    2017-09-01

    When designing nuclear tools for oil exploration, one of the first steps is typically nuclear modeling for concept evaluation and initial characterization. Having an accurate model, including the availability of accurate cross sections, is essential to reduce or avoid time-consuming and costly design iterations. During tool response characterization, modeling is benchmarked with experimental data and then used to complement and to expand the database to make it more detailed and inclusive of more measurement environments which are difficult or impossible to reproduce in the laboratory. We present comparisons of our modeling results obtained using the ENDF/B-VI and ENDF/B-VII cross section databases, focusing on the response to a few elements found in the tool, borehole and subsurface formation. For neutron-induced inelastic and capture gamma ray spectroscopy, major obstacles may be caused by missing or inaccurate cross sections for essential materials. We show examples of the benchmarking of modeling results against experimental data obtained during tool characterization and discuss observed discrepancies.

  1. A Modeling Tool for Household Biogas Burner Flame Port Design

    Science.gov (United States)

    Decker, Thomas J.

    Anaerobic digestion is a well-known and potentially beneficial process for rural communities in emerging markets, providing the opportunity to generate usable gaseous fuel from agricultural waste. With recent developments in low-cost digestion technology, communities across the world are gaining affordable access to the benefits of anaerobic digestion derived biogas. For example, biogas can displace conventional cooking fuels such as biomass (wood, charcoal, dung) and Liquefied Petroleum Gas (LPG), effectively reducing harmful emissions and fuel cost respectively. To support the ongoing scaling effort of biogas in rural communities, this study has developed and tested a design tool aimed at optimizing flame port geometry for household biogas-fired burners. The tool consists of a multi-component simulation that incorporates three-dimensional CAD designs with simulated chemical kinetics and computational fluid dynamics. An array of circular and rectangular port designs was developed for a widely available biogas stove (called the Lotus) as part of this study. These port designs were created through guidance from previous studies found in the literature. The three highest performing designs identified by the tool were manufactured and tested experimentally to validate tool output and to compare against the original port geometry. The experimental results aligned with the tool's prediction for the three chosen designs. Each design demonstrated improved thermal efficiency relative to the original, with one configuration of circular ports exhibiting superior performance. The results of the study indicated that designing for a targeted range of port hydraulic diameter, velocity and mixture density in the tool is a relevant way to improve the thermal efficiency of a biogas burner. Conversely, the emissions predictions made by the tool were found to be unreliable and incongruent with laboratory experiments.
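
The port quantities the tool designs around (hydraulic diameter and mean exit velocity) are simple geometric relations; the sketch below uses illustrative dimensions and flow rates, not the Lotus stove's actual geometry.

```python
import math

def hydraulic_diameter_circle(d):
    """D_h = 4*A/P reduces to the diameter itself for a circular port."""
    return d

def hydraulic_diameter_rect(w, h):
    """D_h = 4*A/P for a rectangular port of width w and height h."""
    return 4.0 * (w * h) / (2.0 * (w + h))

def port_exit_velocity(volume_flow_m3s, n_ports, port_area_m2):
    """Mean mixture velocity through n identical ports for a given total flow."""
    return volume_flow_m3s / (n_ports * port_area_m2)

# Illustrative numbers: 30 circular ports of 3 mm at ~17 L/min of mixture flow
d = 0.003
area = math.pi * d ** 2 / 4.0
v = port_exit_velocity(2.8e-4, 30, area)
```

Sweeping such parameters over candidate port arrays is one way to target the hydraulic diameter and velocity ranges the study identifies as relevant to thermal efficiency.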

  2. BPMNDiffViz : a tool for BPMN models comparison

    NARCIS (Netherlands)

    Ivanov, S.Y.; Kalenkova, A.A.; Aalst, van der W.M.P.; Daniel, F.; Zugal, S.

    2015-01-01

    Automatic comparison of business processes plays an important role in their analysis and optimization. In this paper we present the web-based tool BPMNDiffViz, that finds business processes discrepancies and visualizes them. BPMN (Business Process Model and Notation) 2.0 - one of the most commonly

  3. HMMEditor: a visual editing tool for profile hidden Markov model

    Directory of Open Access Journals (Sweden)

    Cheng Jianlin

    2008-03-01

    Background: Profile Hidden Markov Model (HMM) is a powerful statistical model to represent a family of DNA, RNA, and protein sequences. Profile HMMs have been widely used in bioinformatics research such as sequence alignment, gene structure prediction, motif identification, protein structure prediction, and biological database search. However, few comprehensive, visual editing tools for profile HMMs are publicly available. Results: We developed a visual editor for profile Hidden Markov Models (HMMEditor). HMMEditor can visualize the profile HMM architecture, transition probabilities, and emission probabilities. Moreover, it provides functions to edit and save the HMM and its parameters. Furthermore, HMMEditor allows users to align a sequence against the profile HMM and to visualize the corresponding Viterbi path. Conclusion: HMMEditor provides a set of unique functions to visualize and edit a profile HMM. It is a useful tool for biological sequence analysis and modeling. Both the HMMEditor software and web service are freely available.
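
The Viterbi path that HMMEditor visualizes can be sketched with a toy two-state HMM; the states, observations, and probabilities below are the standard textbook example, not HMMEditor's own data model or API.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the given observations."""
    # V[t][s] = (best probability of reaching state s at time t, best predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, prev = max(
                (V[-2][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            V[-1][s] = (prob, prev)
    # Backtrack from the most likely final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for step in reversed(V[1:]):
        path.append(step[path[-1]][1])
    return path[::-1]

states = ("Healthy", "Fever")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever": {"Healthy": 0.4, "Fever": 0.6}}
emit_p = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
path = viterbi(("normal", "cold", "dizzy"), states, start_p, trans_p, emit_p)
# path == ['Healthy', 'Healthy', 'Fever']
```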

  4. A thermodynamically consistent quasi-particle model without temperature-dependent infinity of the vacuum zero point energy

    International Nuclear Information System (INIS)

    Cao Jing; Jiang Yu; Sun Weimin; Zong Hongshi

    2012-01-01

    In this Letter, an improved quasi-particle model is presented. Unlike previous approaches to establishing quasi-particle models, we introduce a classical background field (allowed to depend on the temperature) to deal with the infinity of thermal vacuum energy that exists in previous quasi-particle models. After taking into account the effect of this classical background field, the partition function of the quasi-particle system can be made well defined. Based on this, and following standard ensemble theory, we construct a thermodynamically consistent quasi-particle model without the need for any reformulation of statistical mechanics or a thermodynamical consistency relation. As an application of our model, we apply it to the case of (2+1)-flavor QGP at zero chemical potential and finite temperature and obtain a good fit to the recent lattice simulation results of Borsányi et al. A comparison of the result of our model with earlier calculations using other models is also presented. It is shown that our method is general and can be generalized to the case where the effective mass depends not only on the temperature but also on the chemical potential.

  5. Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies

    Science.gov (United States)

    Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.

    2015-01-01

    This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.

  6. AgMIP Training in Multiple Crop Models and Tools

    Science.gov (United States)

    Boote, Kenneth J.; Porter, Cheryl H.; Hargreaves, John; Hoogenboom, Gerrit; Thornburn, Peter; Mutter, Carolyn

    2015-01-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has the goal of using multiple crop models to evaluate climate impacts on agricultural production and food security in developed and developing countries. There are several major limitations that must be overcome to achieve this goal, including the need to train AgMIP regional research team (RRT) crop modelers to use models other than the ones they are currently familiar with, plus the need to harmonize and interconvert the disparate input file formats used for the various models. Two activities were followed to address these shortcomings among AgMIP RRTs to enable them to use multiple models to evaluate climate impacts on crop production and food security. We designed and conducted courses in which participants trained on two different sets of crop models, with emphasis on the model of least experience. In a second activity, the AgMIP IT group created templates for inputting data on soils, management, weather, and crops into AgMIP harmonized databases, and developed translation tools for converting the harmonized data into files that are ready for multiple crop model simulations. The strategies for creating and conducting the multi-model course and developing entry and translation tools are reviewed in this chapter.

  7. Direct detection of WIMPs: implications of a self-consistent truncated isothermal model of the Milky Way's dark matter halo

    Science.gov (United States)

    Chaudhury, Soumini; Bhattacharjee, Pijushpani; Cowsik, Ramanath

    2010-09-01

    Direct detection of Weakly Interacting Massive Particle (WIMP) candidates of Dark Matter (DM) is studied within the context of a self-consistent truncated isothermal model of the finite-size dark halo of the Galaxy. The halo model, based on the ``King model'' of the phase space distribution function of collisionless DM particles, takes into account the modifications of the phase-space structure of the halo due to the gravitational influence of the observed visible matter in a self-consistent manner. The parameters of the halo model are determined by a fit to a recently determined circular rotation curve of the Galaxy that extends up to ~ 60 kpc. Unlike in the Standard Halo Model (SHM) customarily used in the analysis of the results of WIMP direct detection experiments, the velocity distribution of the WIMPs in our model is non-Maxwellian with a cut-off at a maximum velocity that is self-consistently determined by the model itself. For our halo model that provides the best fit to the rotation curve data, the 90% C.L. upper limit on the WIMP-nucleon spin-independent cross section from the recent results of the CDMS-II experiment, for example, is ~ 5.3 × 10⁻⁸ pb at a WIMP mass of ~ 71 GeV. We also find, using the original 2-bin annual modulation amplitude data on the nuclear recoil event rate seen in the DAMA experiment, that there exists a range of small WIMP masses, typically ~ 2-16 GeV, within which the DAMA collaboration's claimed annual modulation signal purportedly due to WIMPs is compatible with the null results of other experiments. These results, based as they are on a self-consistent model of the dark matter halo of the Galaxy, strengthen the possibility of low-mass (≲ 10 GeV) WIMPs as a candidate for dark matter as indicated by several earlier studies performed within the context of the SHM.
A more rigorous analysis using DAMA bins over smaller intervals should be able to better constrain the ``DAMA regions'' in the WIMP parameter space within the context of

  8. Model-based setup assistant for progressive tools

    Science.gov (United States)

    Springer, Robert; Gräler, Manuel; Homberg, Werner; Henke, Christian; Trächtler, Ansgar

    2018-05-01

    In the field of production systems, globalization and technological progress lead to increasing requirements regarding part quality, delivery time and costs. Hence, today's production is challenged much more than a few years ago: it has to be very flexible and produce economically small batch sizes to satisfy consumers' demands and avoid unnecessary stock. Furthermore, a trend towards increasing functional integration continues to lead to an ongoing miniaturization of sheet metal components. In the electric connectivity industry, for example, miniaturized connectors are manufactured by progressive tools, which are usually used for very large batches. These tools are installed in mechanical presses and then set up by a technician, who has to manually adjust a wide range of punch-bending operations. Disturbances like material thickness, temperatures, lubrication or tool wear complicate the setup procedure. In view of the increasing demand for production flexibility, this time-consuming process has to be handled more and more often. In this paper, a new approach for a model-based setup assistant is proposed as a solution, exemplarily applied in combination with a progressive tool. First, progressive tools and, more specifically, their setup process are described, and based on that, the challenges are pointed out. As a result, a systematic process to set up the machines is introduced. Next, the process is investigated with an FE analysis regarding the effects of the disturbances. In the next step, design of experiments is used to systematically develop a regression model of the system's behaviour. This model is integrated within an optimization in order to calculate optimal machine parameters and the subsequent necessary adjustment of the progressive tool under the disturbances. Finally, the assistant is tested in a production environment and the results are discussed.
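
The regression-and-inversion idea can be illustrated minimally: fit a linear behaviour model to designed-experiment data, then invert it for the machine setting that hits a target dimension. The variables (punch stroke, bend angle) and the numbers below are hypothetical, not the authors' FE-based process model.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def setting_for_target(a, b, target):
    """Invert the regression: which machine setting yields the target response?"""
    return (target - a) / b

# Simulated DoE data: bend angle (deg) vs. punch stroke (mm), illustrative only
strokes = [1.0, 1.5, 2.0, 2.5, 3.0]
angles = [88.1, 89.0, 90.2, 91.1, 91.9]
a, b = fit_line(strokes, angles)
stroke = setting_for_target(a, b, target=90.0)
```

A disturbance (e.g. a thicker coil) would shift the fitted intercept, and re-inverting the model then yields the corrective adjustment.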

  9. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    Science.gov (United States)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

    The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes, where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to describe carefully, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated against expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.

  10. Multi-Time Scale Model Order Reduction and Stability Consistency Certification of Inverter-Interfaced DG System in AC Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaoxiao Meng

    2018-01-01

    AC microgrids mainly comprise inverter-interfaced distributed generators (IIDGs), which are nonlinear complex systems with multiple time scales, including frequency control, time delay measurements, and electromagnetic transients. The droop-control-based IIDG in an AC microgrid is selected as the research object in this study; it comprises a power droop controller, voltage- and current-loop controllers, and a filter and line. The multi-time scale characteristics of the detailed IIDG model are divided based on singular perturbation theory. In addition, the IIDG model order is reduced by neglecting the system's fast dynamics. The static and transient stability consistency of the IIDG model order reduction are demonstrated by extracting features of the IIDG small-signal model and by using the quadratic approximation method of the stability region boundary, respectively. The dynamic response consistencies of the IIDG model order reduction are evaluated using the frequency, damping and amplitude features extracted by the Prony transformation. The results provide a simplified model for the dynamic characteristic analysis of IIDG systems in AC microgrids. The accuracy of the proposed method is verified by eigenvalue comparison, transient stability index comparison and dynamic time-domain simulation.
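
The order-reduction step (neglecting fast dynamics via singular perturbation) can be shown on a scalar two-time-scale system; this is a generic sketch under stated assumptions, not the IIDG model itself.

```python
# For a two-time-scale linear system
#   dx/dt = a11*x + a12*z,   eps*dz/dt = a21*x + a22*z,
# letting eps -> 0 gives the quasi-steady fast state z = -(a21/a22)*x and
# the reduced slow model dx/dt = (a11 - a12*a21/a22) * x.
def reduced_slow_pole(a11, a12, a21, a22):
    """Slow-subsystem pole after eliminating the fast state (a22 invertible)."""
    assert a22 != 0.0, "fast subsystem must be invertible (a22 != 0)"
    return a11 - a12 * a21 / a22

# Illustrative values with a stable fast subsystem (a22 < 0)
pole = reduced_slow_pole(a11=-1.0, a12=0.5, a21=2.0, a22=-10.0)
# pole = -1.0 - 0.5*2.0/(-10.0) = -0.9
```

The reduction is valid only when the neglected fast dynamics are stable and well separated in time scale, which is exactly the consistency question the paper certifies.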

  11. Colonic stem cell data are consistent with the immortal model of stem cell division under non-random strand segregation.

    Science.gov (United States)

    Walters, K

    2009-06-01

    Colonic stem cells are thought to reside towards the base of crypts of the colon, but their numbers and proliferation mechanisms are not well characterized. A defining property of stem cells is that they are able to divide asymmetrically, but it is not known whether they always divide asymmetrically (immortal model) or whether there are occasional symmetrical divisions (stochastic model). By measuring the diversity of methylation patterns in colon crypt samples, a recent study found evidence in favour of the stochastic model, assuming random segregation of stem cell DNA strands during cell division. Here, preferential segregation of the template strand is considered, consistent with the 'immortal strand hypothesis', and its effect on the conclusions of previously published results is explored. For a sample of crypts, it is shown how, under the immortal model, to calculate the mean and variance of the number of unique methylation patterns allowing for non-random strand segregation, and these are compared with those observed. The calculated mean and variance are consistent with an immortal model that incorporates non-random strand segregation for a range of stem cell numbers and levels of preferential strand segregation. Allowing for preferential strand segregation considerably alters previously published conclusions relating to stem cell numbers and turnover mechanisms. Evidence in favour of the stochastic model may not be as strong as previously thought.

  12. A consistent framework for modeling inorganic pesticides: Adaptation of life cycle inventory models to metal-base pesticides

    DEFF Research Database (Denmark)

    Peña, N.A.; Anton, A.; Fantke, Peter

    2016-01-01

    emission factors (percentages) or dynamic models based on specific application scenarios that describe only the behavior of organic pesticides. Currently, fixed emission fractions for pesticides fail to account for the influence of pesticide-specific function, crop type, and application methods... On the other hand, the dynamic models need to account for the variability of these interactions in emissions of inorganic pesticides. This lack of appropriate models to estimate emission fractions of inorganic pesticides results in lower accuracy when accounting for emissions in agriculture..., and it will influence the outcomes of the impact profile. The pesticide emission model PestLCI 2.0 is the most advanced currently available inventory model for LCA, intended to provide an estimate of organic pesticide emission fractions to the environment. We use this model as a starting point for quantifying emission...

  13. A relativistic self-consistent model for studying enhancement of space charge limited emission due to counter-streaming ions

    Science.gov (United States)

    Lin, M. C.; Verboncoeur, J.

    2016-10-01

    The maximum electron current transmitted through a planar diode gap is limited by the space charge of electrons dwelling in the gap region, the so-called space-charge-limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL emission can be dramatically raised, so electron current transmission is enhanced. In this work, we have developed a relativistic self-consistent model for studying the enhancement of the maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in the non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a benchmark for verification of simulation codes, as well as a basis for extension to higher dimensions.
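
For orientation, the ion-free, non-relativistic baseline that such a model generalizes is the classical Child-Langmuir law for a planar gap; a brief sketch follows (the relativistic, counter-streaming-ion analysis itself is beyond a few lines of code).

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Classical SCL current density J = (4*eps0/9)*sqrt(2e/m)*V^1.5/d^2 (A/m^2)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

j = child_langmuir_j(voltage=1.0e3, gap=1.0e-2)  # 1 kV across a 1 cm gap
```

Neutralizing part of the electron space charge with counter-streaming ions raises this limit; the paper's contribution is the self-consistent, relativistic treatment of that enhancement.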

  14. Evaluating EML Modeling Tools for Insurance Purposes: A Case Study

    Directory of Open Access Journals (Sweden)

    Mikael Gustavsson

    2010-01-01

    As with any situation that involves economic risk, refineries may share their risk with insurers. The decision process generally includes modelling to determine to what extent the process area can be damaged. At the extreme end of modelling, the so-called Estimated Maximum Loss (EML) scenarios are found. These scenarios predict the maximum loss a particular installation can sustain. Unfortunately, no standard model for this exists, so insurers reach different results due to applying different models and different assumptions. Therefore, a study has been conducted on a case in a Swedish refinery where several scenarios had previously been modelled by two different insurance brokers using two different software tools, ExTool and SLAM. This study reviews the concept of EML and analyses the models used to see which parameters are most uncertain. A third model, EFFECTS, was also employed in an attempt to reach a conclusion with higher reliability.

  15. Using Trait-State Models to Evaluate the Longitudinal Consistency of Global Self-Esteem From Adolescence to Adulthood

    OpenAIRE

    Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.

    2012-01-01

    The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-est...

  16. Scenario Evaluator for Electrical Resistivity survey pre-modeling tool

    Science.gov (United States)

    Terry, Neil; Day-Lewis, Frederick D.; Robinson, Judith L.; Slater, Lee D.; Halford, Keith J.; Binley, Andrew; Lane, John W.; Werkema, Dale D.

    2017-01-01

    Geophysical tools have much to offer users in environmental, water resource, and geotechnical fields; however, techniques such as electrical resistivity imaging (ERI) are often oversold and/or overinterpreted due to a lack of understanding of the limitations of the techniques, such as the appropriate depth intervals or resolution of the methods. The relationship between ERI data and resistivity is nonlinear; therefore, these limitations depend on site conditions and survey design and are best assessed through forward and inverse modeling exercises prior to field investigations. In this approach, proposed field surveys are first numerically simulated given the expected electrical properties of the site, and the resulting hypothetical data are then analyzed using inverse models. Performing ERI forward/inverse modeling, however, requires substantial expertise and can take many hours to implement. We present a new spreadsheet-based tool, the Scenario Evaluator for Electrical Resistivity (SEER), which features a graphical user interface that allows users to manipulate a resistivity model and instantly view how that model would likely be interpreted by an ERI survey. The SEER tool is intended for use by those who wish to determine the value of including ERI to achieve project goals, and is designed to have broad utility in industry, teaching, and research.
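
    The forward step in such a pre-modeling exercise ultimately reduces measured voltages and injected currents to apparent resistivities via an array-specific geometric factor. A minimal sketch for the standard Wenner array (a generic textbook relation, not part of the SEER tool itself):

```python
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array with electrode
    spacing a: rho_a = 2 * pi * a * (dV / I). Over a homogeneous
    half-space, rho_a equals the true resistivity."""
    return 2.0 * math.pi * spacing_m * delta_v / current_a
```

    Departures of rho_a from the true subsurface resistivity as spacing varies are exactly what the forward/inverse modeling exercise is designed to expose.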

  17. DsixTools: the standard model effective field theory toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Celis, Alejandro [Ludwig-Maximilians-Universitaet Muenchen, Fakultaet fuer Physik, Arnold Sommerfeld Center for Theoretical Physics, Munich (Germany); Fuentes-Martin, Javier; Vicente, Avelino [Universitat de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Virto, Javier [University of Bern, Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics, Bern (Switzerland)

    2017-06-15

    We present DsixTools, a Mathematica package for the handling of the dimension-six standard model effective field theory. Among other features, DsixTools allows the user to perform the full one-loop renormalization group evolution of the Wilson coefficients in the Warsaw basis. This is achieved thanks to the SMEFTrunner module, which implements the full one-loop anomalous dimension matrix previously derived in the literature. In addition, DsixTools also contains modules devoted to the matching to the ΔB = ΔS = 1, 2 and ΔB = ΔC = 1 operators of the Weak Effective Theory at the electroweak scale, and their QCD and QED Renormalization group evolution below the electroweak scale. (orig.)
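
    At one loop the Wilson coefficients obey a linear renormalization group equation, dC/d ln μ = γ C / (16π²). A generic numerical sketch of such a running is shown below with a toy anomalous-dimension matrix; DsixTools implements the full Warsaw-basis matrix, and the sign/transpose convention here is illustrative:

```python
import numpy as np

def run_wilson_coefficients(c0, gamma, mu0, mu, steps=1000):
    """Integrate the one-loop RG equation dC/dln(mu) = gamma @ C / (16*pi^2)
    from scale mu0 to mu with classical 4th-order Runge-Kutta.
    `gamma` is a constant (toy) anomalous-dimension matrix."""
    c = np.array(c0, dtype=float)
    h = np.log(mu / mu0) / steps
    k = np.asarray(gamma, dtype=float) / (16.0 * np.pi ** 2)
    for _ in range(steps):
        k1 = k @ c
        k2 = k @ (c + 0.5 * h * k1)
        k3 = k @ (c + 0.5 * h * k2)
        k4 = k @ (c + h * k3)
        c = c + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return c
```

    With a diagonal γ the solution reduces to power-law running, which makes the sketch easy to verify against the closed form.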

  18. Comparison of bootstrap current and plasma conductivity models applied in a self-consistent equilibrium calculation for Tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Maria Celia Ramos; Ludwig, Gerson Otto [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Plasma]. E-mail: mcr@plasma.inpe.br

    2004-07-01

    Different bootstrap current formulations are implemented in a self-consistent equilibrium calculation obtained from a direct variational technique in fixed boundary tokamak plasmas. The total plasma current profile is supposed to have contributions of the diamagnetic, Pfirsch-Schlueter, and the neoclassical Ohmic and bootstrap currents. The Ohmic component is calculated in terms of the neoclassical conductivity, compared here among different expressions, and the loop voltage determined consistently in order to give the prescribed value of the total plasma current. A comparison among several bootstrap current models for different viscosity coefficient calculations and distinct forms for the Coulomb collision operator is performed for a variety of plasma parameters of the small aspect ratio tokamak ETE (Experimento Tokamak Esferico) at the Associated Plasma Laboratory of INPE, in Brazil. We have performed this comparison for the ETE tokamak so that the differences among all the models reported here, mainly regarding plasma collisionality, can be better illustrated. The dependence of the bootstrap current ratio upon some plasma parameters in the frame of the self-consistent calculation is also analysed. We emphasize in this paper what we call the Hirshman-Sigmar/Shaing model, valid for all collisionality regimes and aspect ratios, and a fitted formulation proposed by Sauter, which has the same range of validity but is faster to compute than the previous one. The advantages or possible limitations of all these different formulations for the bootstrap current estimate are analysed throughout this work. (author)

  19. Model Fusion Tool - the Open Environmental Modelling Platform Concept

    Science.gov (United States)

    Kessler, H.; Giles, J. R.

    2010-12-01

    The vision of an Open Environmental Modelling Platform: seamlessly linking geoscience data, concepts and models to aid decision making in times of environmental change. Governments and their executive agencies across the world are facing increasing pressure to make decisions about the management of resources in light of population growth and environmental change. In the UK, for example, groundwater is becoming a scarce resource for large parts of its most densely populated areas. At the same time, river and groundwater flooding resulting from high rainfall events is increasing in scale and frequency, and sea level rise is threatening the defences of coastal cities. There is also a need for affordable housing, improved transport infrastructure and waste disposal, as well as sources of renewable energy and sustainable food production. These challenges can only be resolved if solutions are based on sound scientific evidence. Although we have knowledge and understanding of many individual processes in the natural sciences, it is clear that a single scientific discipline is unable to answer these questions and their inter-relationships. Modern science increasingly employs computer models to simulate the natural, economic and human system. Management and planning require scenario modelling, forecasts and ‘predictions’. Although the outputs are often impressive in terms of apparent accuracy and visualisation, they are inherently not suited to simulating the response to feedbacks from other models of the earth system, such as the impact of human actions. Geological Survey Organisations (GSO) are increasingly employing advances in Information Technology to visualise and improve their understanding of geological systems. Instead of 2-dimensional paper maps and reports, many GSOs now produce 3-dimensional geological framework models and groundwater flow models as their standard output. Additionally the British Geological Survey have developed standard routines to link geological

  20. ISAC: A tool for aeroservoelastic modeling and analysis

    Science.gov (United States)

    Adams, William M., Jr.; Hoadley, Sherwood Tiffany

    1993-01-01

    The capabilities of the Interaction of Structures, Aerodynamics, and Controls (ISAC) system of program modules is discussed. The major modeling, analysis, and data management components of ISAC are identified. Equations of motion are displayed for a Laplace-domain representation of the unsteady aerodynamic forces. Options for approximating a frequency-domain representation of unsteady aerodynamic forces with rational functions of the Laplace variable are shown. Linear time invariant state-space equations of motion that result are discussed. Model generation and analyses of stability and dynamic response characteristics are shown for an aeroelastic vehicle which illustrates some of the capabilities of ISAC as a modeling and analysis tool for aeroelastic applications.

  1. Graphite-MicroMégas, a tool for DNA modeling

    OpenAIRE

    Hornus , Samuel; Larivière , Damien

    2011-01-01

    MicroMégas is the current state of an ongoing effort to develop tools for modeling the biological assembly of molecules. We here present its DNA modeling part. MicroMégas is implemented as a plug-in to Graphite, a research platform for computer graphics, 3D modeling and numerical geometry developed by members of the ALICE team of INRIA. We describe the MicroMégas tool and the techniques it brings into play for the modelling of molecular assemblies, in par...

  2. Models as Tools of Analysis of a Network Organisation

    Directory of Open Access Journals (Sweden)

    Wojciech Pająk

    2013-06-01

    Full Text Available The paper presents models which may be applied as tools for the analysis of a network organisation. The starting point of the discussion is a definition of the terms supply chain and network organisation. Subsequent parts of the paper present the basic assumptions underlying the analysis of a network organisation and then characterise the best-known models used in such analysis. The purpose of the article is to define the notion and essence of network organisations and to present the models used for their analysis.

  3. Self-consistent modelling of X-ray photoelectron spectra from air-exposed polycrystalline TiN thin films

    Energy Technology Data Exchange (ETDEWEB)

    Greczynski, G., E-mail: grzgr@ifm.liu.se; Hultman, L.

    2016-11-30

    Highlights:
    • We present the first self-consistent model of TiN core-level spectra with cross-peak qualitative and quantitative agreement.
    • The model is tested on a series of TiN thin films oxidized to different extents by varying the venting temperature.
    • The conventional deconvolution process relies on reference binding energies that typically show a large spread, introducing ambiguity.
    • By imposing the requirement of quantitative cross-peak self-consistency, the reliability of the extracted chemical information is enhanced.
    • We propose that cross-peak self-consistency should be a prerequisite for reliable XPS peak modelling.
    - Abstract: We present the first self-consistent modelling of x-ray photoelectron spectroscopy (XPS) Ti 2p, N 1s, O 1s, and C 1s core-level spectra with cross-peak quantitative agreement for a series of TiN thin films grown by dc magnetron sputtering and oxidized to different extents by varying the venting temperature T{sub v} of the vacuum chamber before removing the deposited samples. The film series so obtained constitutes a model case for XPS application studies, where a certain degree of atmosphere exposure during sample transfer to the XPS instrument is unavoidable. The challenge is to extract information about surface chemistry without invoking destructive pre-cleaning with noble gas ions. All TiN surfaces are thus analyzed in the as-received state by XPS using monochromatic Al Kα radiation (hν = 1486.6 eV). Details of line shapes and relative peak areas obtained from deconvolution of the reference Ti 2p and N 1s spectra representative of a native TiN surface serve as input to model complex core-level signals from air-exposed surfaces, where contributions from oxides and oxynitrides make the task very challenging considering the influence of the whole deposition process at hand. The essential part of the presented approach is that the deconvolution process is not only guided by the comparison to the reference binding energy values that often show

  4. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    Science.gov (United States)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
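
    At the core of any input-output component lies the Leontief quantity model, x = (I − A)⁻¹ d, which relates gross output x to final demand d through the technical coefficient matrix A. A minimal sketch of this generic textbook relation (not the paper's full stock-flow consistent model):

```python
import numpy as np

def gross_output(a, final_demand):
    """Leontief quantity model: solve (I - A) x = d for gross output x,
    where A is the matrix of technical coefficients (a[i, j] = input of
    sector i needed per unit of sector j's output)."""
    a = np.asarray(a, dtype=float)
    d = np.asarray(final_demand, dtype=float)
    return np.linalg.solve(np.eye(a.shape[0]) - a, d)
```

    The defining consistency check is that output covers intermediate use plus final demand, x = A x + d, which holds for the solved x by construction.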

  5. Theoretical Modeling of Rock Breakage by Hydraulic and Mechanical Tool

    Directory of Open Access Journals (Sweden)

    Hongxiang Jiang

    2014-01-01

    Full Text Available Rock breakage by coupled mechanical and hydraulic action has been developed over the past several decades, but theoretical study of rock fragmentation by a mechanical tool with water-pressure assistance has been lacking. A theoretical model of rock breakage by a mechanical tool was developed based on rock fracture mechanics and the solution of Boussinesq's problem; it can explain the process of rock fragmentation as well as predict the peak reacting force. The theoretical model of rock breakage by coupled mechanical and hydraulic action was developed according to the superposition principle of intensity factors at the crack tip, and the reacting force of the mechanical tool assisted by hydraulic action can be reduced appreciably if a crack with a critical length is produced by mechanical or hydraulic impact. The experimental results indicated that the peak reacting force can be reduced by about 15% with the assistance of medium water pressure, and the quick reduction of the reacting force after the peak value decreases the specific energy consumption of rock fragmentation by the mechanical tool. Crack formation by mechanical or hydraulic impact is the prerequisite for improving the effectiveness of combined breakage.

  6. Self-consistent Non-LTE Model of Infrared Molecular Emissions and Oxygen Dayglows in the Mesosphere and Lower Thermosphere

    Science.gov (United States)

    Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.

    2007-01-01

    We present the new version of the ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows simultaneous self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting for the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows reevaluating the thermalization efficiency of the absorbed solar ultraviolet energy and the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.

  7. A time consistent risk averse three-stage stochastic mixed integer optimization model for power generation capacity expansion

    International Nuclear Information System (INIS)

    Pisciella, P.; Vespucci, M.T.; Bertocchi, M.; Zigrino, S.

    2016-01-01

    We propose a multi-stage stochastic optimization model for the generation capacity expansion problem of a price-taker power producer. Uncertainties regarding the evolution of electricity prices and fuel costs play a major role in long term investment decisions, therefore the objective function represents a trade-off between expected profit and risk. The Conditional Value at Risk is the risk measure used and is defined by a nested formulation that guarantees time consistency in the multi-stage model. The proposed model allows one to determine a long term expansion plan which takes into account uncertainty, while the LCoE approach, currently used by decision makers, only allows one to determine which technology should be chosen for the next power plant to be built. A sensitivity analysis is performed with respect to the risk weighting factor and budget amount. - Highlights: • We propose a time consistent risk averse multi-stage model for capacity expansion. • We introduce a case study with uncertainty on electricity prices and fuel costs. • Increased budget moves the investment from gas towards renewables and then coal. • Increased risk aversion moves the investment from coal towards renewables. • Time inconsistency leads to a profit gap between planned and implemented policies.
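
    The Conditional Value at Risk used as the risk measure above is, at confidence level α, the expected loss in the worst (1 − α) tail of the loss distribution. A minimal empirical estimator (a simple tail-mean sketch; for finite samples it only approximates the Rockafellar-Uryasev definition, and the paper uses a nested, time-consistent multi-stage formulation):

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Empirical Conditional Value at Risk: the mean loss in the worst
    (1 - alpha) tail, with the Value at Risk taken as the empirical
    alpha-quantile of the loss sample."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)  # Value at Risk
    return losses[losses >= var].mean()
```

    A risk-averse objective of the kind described then trades off expected profit against this tail statistic via a weighting factor.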

  8. Using the IEA ETSAP modelling tools for Denmark

    Energy Technology Data Exchange (ETDEWEB)

    Grohnheit, Poul Erik

    2008-12-15

    An important part of the cooperation within the IEA (International Energy Agency) is organised through national contributions to 'Implementation Agreements' on energy technology and energy analyses. One of them is ETSAP (Energy Technology Systems Analysis Programme), started in 1976. Denmark has signed the agreement and contributed to some early annexes. This project is motivated by an invitation to participate in ETSAP Annex X, 'Global Energy Systems and Common Analyses: Climate friendly, Secure and Productive Energy Systems' for the period 2005 to 2007. The main activity is semi-annual workshops focusing on presentations of model analyses and use of the ETSAP tools (the MARKAL/TIMES family of models). The project was also planned to benefit from the EU project 'NEEDS - New Energy Externalities Developments for Sustainability'. ETSAP is contributing to a part of NEEDS that develops the TIMES model for 29 European countries with assessment of future technologies. An additional project 'Monitoring and Evaluation of the RES directives: implementation in EU27 and policy recommendations for 2020' (RES2020) under Intelligent Energy Europe was added, as well as the Danish 'Centre for Energy, Environment and Health (CEEH), starting from January 2007. This report summarises the activities under ETSAP Annex X and related project, emphasising the development of modelling tools that will be useful for modelling the Danish energy system. It is also a status report for the development of a model for Denmark, focusing on the tools and features that allow comparison with other countries and, particularly, to evaluate assumptions and results in international models covering Denmark. (au)

  9. ADAS tools for collisional–radiative modelling of molecules

    Energy Technology Data Exchange (ETDEWEB)

    Guzmán, F., E-mail: francisco.guzman@cea.fr [Department of Physics, University of Strathclyde, Glasgow G4 0NG (United Kingdom); CEA, IRFM, Saint-Paul-lez-Durance 13108 (France); O’Mullane, M.; Summers, H.P. [Department of Physics, University of Strathclyde, Glasgow G4 0NG (United Kingdom)

    2013-07-15

    New theoretical and computational tools for molecular collisional–radiative models are presented, with an application to the hydrogen molecule system. At the same time, a structured database has been created in which fundamental cross sections and rates for individual processes, as well as derived data (effective coefficients), are stored. Relative populations for the vibrational states of the ground electronic state of H{sub 2} are presented, and this vibronically resolved model is compared with an electronically resolved one in which vibronic transitions are summed over vibrational sub-states. Some new reaction rates are calculated by means of the impact parameter approximation. Computational tools have been developed to automate the process and simplify data assembly. Effective (collisional–radiative) rate coefficients versus temperature and density are presented.

  10. ADAS tools for collisional-radiative modelling of molecules

    Science.gov (United States)

    Guzmán, F.; O'Mullane, M.; Summers, H. P.

    2013-07-01

    New theoretical and computational tools for molecular collisional-radiative models are presented, with an application to the hydrogen molecule system. At the same time, a structured database has been created in which fundamental cross sections and rates for individual processes, as well as derived data (effective coefficients), are stored. Relative populations for the vibrational states of the ground electronic state of H2 are presented, and this vibronically resolved model is compared with an electronically resolved one in which vibronic transitions are summed over vibrational sub-states. Some new reaction rates are calculated by means of the impact parameter approximation. Computational tools have been developed to automate the process and simplify data assembly. Effective (collisional-radiative) rate coefficients versus temperature and density are presented.

  11. Sobol Sensitivity Analysis: A Tool to Guide the Development and Evaluation of Systems Pharmacology Models

    Science.gov (United States)

    Trame, MN; Lesko, LJ

    2015-01-01

    A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
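
    First-order Sobol indices quantify the share of output variance attributable to each input parameter alone, which is what makes them useful for deciding which parameters of a large systems pharmacology model are reliably identifiable. A minimal pick-freeze Monte Carlo estimator (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for a
    vectorized model with independent U(0,1) inputs:
        S_i = Cov(f(A), f(B_i)) / Var(f(A)),
    where B_i is the independent sample B with column i copied from A."""
    rng = np.random.default_rng(seed)
    a = rng.random((n_samples, n_inputs))
    b = rng.random((n_samples, n_inputs))
    ya = model(a)
    var = ya.var()
    s = np.empty(n_inputs)
    for i in range(n_inputs):
        b_i = b.copy()
        b_i[:, i] = a[:, i]      # "freeze" input i, resample the rest
        s[i] = np.cov(ya, model(b_i))[0, 1] / var
    return s

# Additive test model Y = X1 + 2*X2 has exact indices S = (0.2, 0.8).
indices = first_order_sobol(lambda x: x[:, 0] + 2.0 * x[:, 1], 2)
```

    For purely additive models the first-order indices sum to one; any shortfall signals interaction effects, which total-order indices would capture.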

  12. Thermodynamically consistent modeling and simulation of multi-component two-phase flow model with partial miscibility

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2016-01-01

    A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is a latest

  13. Using Trait-State Models to Evaluate the Longitudinal Consistency of Global Self-Esteem From Adolescence to Adulthood.

    Science.gov (United States)

    Donnellan, M Brent; Kenny, David A; Trzesniewski, Kali H; Lucas, Richard E; Conger, Rand D

    2012-12-01

    The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-esteem assessments, whereas state factors accounted for about 16% of the variance in repeated assessments of latent self-esteem. The stability of individual differences in self-esteem increased with age consistent with the cumulative continuity principle of personality development.

  14. Using Trait-State Models to Evaluate the Longitudinal Consistency of Global Self-Esteem From Adolescence to Adulthood

    Science.gov (United States)

    Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.

    2012-01-01

    The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-esteem assessments, whereas state factors accounted for about 16% of the variance in repeated assessments of latent self-esteem. The stability of individual differences in self-esteem increased with age consistent with the cumulative continuity principle of personality development. PMID:23180899

  15. Consistent classical supergravity theories

    International Nuclear Information System (INIS)

    Muller, M.

    1989-01-01

    This book offers a presentation of both conformal and Poincare supergravity. The consistent four-dimensional supergravity theories are classified. The formulae needed for further modelling are included

  16. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    Science.gov (United States)

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial

  17. Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub

    Science.gov (United States)

    Renwick, Katherine M.; Curtis, Caroline; Kleinhesselink, Andrew R.; Schlaepfer, Daniel R.; Bradley, Bethany A.; Aldridge, Cameron L.; Poulter, Benjamin; Adler, Peter B.

    2018-01-01

    A number of modeling approaches have been developed to predict the impacts of climate change on species distributions, performance, and abundance. The stronger the agreement from models that represent different processes and are based on distinct and independent sources of information, the greater the confidence we can have in their predictions. Evaluating the level of confidence is particularly important when predictions are used to guide conservation or restoration decisions. We used a multi-model approach to predict climate change impacts on big sagebrush (Artemisia tridentata), the dominant plant species on roughly 43 million hectares in the western United States and a key resource for many endemic wildlife species. To evaluate the climate sensitivity of A. tridentata, we developed four predictive models, two based on empirically derived spatial and temporal relationships, and two that applied mechanistic approaches to simulate sagebrush recruitment and growth. This approach enabled us to produce an aggregate index of climate change vulnerability and uncertainty based on the level of agreement between models. Despite large differences in model structure, predictions of sagebrush response to climate change were largely consistent. Performance, as measured by change in cover, growth, or recruitment, was predicted to decrease at the warmest sites, but increase throughout the cooler portions of sagebrush's range. A sensitivity analysis indicated that sagebrush performance responds more strongly to changes in temperature than precipitation. Most of the uncertainty in model predictions reflected variation among the ecological models, raising questions about the reliability of forecasts based on a single modeling approach. Our results highlight the value of a multi-model approach in forecasting climate change impacts and uncertainties and should help land managers to maximize the value of conservation investments.

  18. Thermodynamically consistent modeling and simulation of multi-component two-phase flow model with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2016-11-25

    A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, a recent alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy of the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and from it a transport equation for the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of two fluids. A relation between the pressure gradient and the chemical potential gradients is established; this relation leads to a new formulation of the momentum balance equation and demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies total (free) energy dissipation in time. For numerical simulation of the proposed model, the key difficulties stem from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and treat the coupling between molar densities and velocity through careful physical observations combined with mathematical rigor. We prove that the proposed numerical scheme preserves discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.

  19. Introduction to genetic algorithms as a modeling tool

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

Genetic algorithms are search and classification techniques modeled on natural adaptive systems. This is an introduction to their use as a modeling tool, with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in genetic algorithms and to recognize those which might impact electric power engineering. Beginning with a discussion of genetic algorithms and their origin as a model of biological adaptation, their advantages and disadvantages are described in comparison with other modeling tools, such as simulation and neural networks, in order to provide guidance in selecting appropriate applications. In particular, their use for improving expert systems from actual data is described, and they are suggested as an aid in building mathematical models. Using the Thermal Performance Advisor as an example, it is suggested how genetic algorithms might be used to make a conventional expert system and mathematical model of a power plant adapt automatically to changes in the plant's characteristics.
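
The canonical selection/crossover/mutation loop described above can be sketched in a few lines. The fitness function, encoding, and parameters here are illustrative, not drawn from the record:

```python
import random

random.seed(0)

# Toy genetic algorithm: maximise f(x) = x * (31 - x) for integer x in
# [0, 31], encoding x as a 5-bit string.

def fitness(bits):
    x = int(bits, 2)
    return x * (31 - x)

def select(pop):
    # tournament selection of size 2
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    point = random.randint(1, 4)            # single-point crossover
    return p1[:point] + p2[point:]

def mutate(bits, rate=0.05):
    # flip each bit with a small probability
    return "".join(b if random.random() > rate else str(1 - int(b)) for b in bits)

pop = ["".join(random.choice("01") for _ in range(5)) for _ in range(20)]
for _ in range(60):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(len(pop))]

best = max(pop, key=fitness)  # the true optimum is x = 15 or 16 (f = 240)
```

The population adapts toward the optimum without any gradient information, which is what makes the approach attractive for the adaptive expert-system use suggested in the abstract.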

  20. A new tool for accelerator system modeling and analysis

    International Nuclear Information System (INIS)

    Gillespie, G.H.; Hill, B.W.; Jameson, R.A.

    1994-01-01

A novel computer code is being developed to generate system level designs of radiofrequency ion accelerators. The goal of the Accelerator System Model (ASM) code is to create a modeling and analysis tool that is easy to use, automates many of the initial design calculations, supports trade studies used in assessing alternate designs and yet is flexible enough to incorporate new technology concepts as they emerge. Hardware engineering parameters and beam dynamics are modeled at comparable levels of fidelity. Existing scaling models of accelerator subsystems were used to produce a prototype of ASM (version 1.0) working within the Shell for Particle Accelerator Related Codes (SPARC) graphical user interface. A small user group has been testing and evaluating the prototype for about a year. Several enhancements and improvements are now being developed. The current version (1.1) of ASM is briefly described and an example of the modeling and analysis capabilities is illustrated.

  1. Surviving the present: Modeling tools for organizational change

    International Nuclear Information System (INIS)

    Pangaro, P.

    1992-01-01

    The nuclear industry, like the rest of modern American business, is beset by a confluence of economic, technological, competitive, regulatory, and political pressures. For better or worse, business schools and management consultants have leapt to the rescue, offering the most modern conveniences that they can purvey. Recent advances in the study of organizations have led to new tools for their analysis, revision, and repair. There are two complementary tools that do not impose values or injunctions in themselves. One, called the organization modeler, captures the hierarchy of purposes that organizations and their subparts carry out. Any deficiency or pathology is quickly illuminated, and requirements for repair are made clear. The second, called THOUGHTSTICKER, is used to capture the semantic content of the conversations that occur across the interactions of parts of an organization. The distinctions and vocabulary in the language of an organization, and the relations within that domain, are elicited from the participants so that all three are available for debate and refinement. The product of the applications of these modeling tools is not the resulting models but rather the enhancement of the organization as a consequence of the process of constructing them

  2. Modelling stillbirth mortality reduction with the Lives Saved Tool

    Directory of Open Access Journals (Sweden)

    Hannah Blencowe

    2017-11-01

Abstract Background The worldwide burden of stillbirths is large, with an estimated 2.6 million babies stillborn in 2015 including 1.3 million dying during labour. The Every Newborn Action Plan set a stillbirth target of ≤12 per 1000 in all countries by 2030. Planning tools will be essential as countries set policy and plan investment to scale up interventions to meet this target. This paper summarises the approach taken for modelling the impact of scaling-up health interventions on stillbirths in the Lives Saved tool (LiST), and potential future refinements. Methods The specific application to stillbirths of the general method for modelling the impact of interventions in LiST is described. The evidence for the effectiveness of potential interventions to reduce stillbirths is reviewed, and the assumptions about the affected fraction of stillbirths who could potentially benefit from these interventions are presented. The current assumptions and their effects on stillbirth reduction are described and potential future improvements discussed. Results High-quality evidence is not available for all parameters in the LiST stillbirth model. Cause-specific mortality data are not available for stillbirths; therefore, stillbirths are modelled in LiST using an attributable-fraction approach by timing of stillbirth (antepartum/intrapartum). Of 35 potential interventions to reduce stillbirths identified, eight interventions are currently modelled in LiST. These include childbirth care, induction for prolonged pregnancy, multiple micronutrient and balanced energy supplementation, malaria prevention and detection and management of hypertensive disorders of pregnancy, diabetes and syphilis. For three of the interventions (childbirth care, detection and management of hypertensive disorders of pregnancy, and diabetes), the estimate of effectiveness is based on expert opinion through a Delphi process. Only for malaria is coverage information available, with coverage

  3. Solid-state-drives (SSDs) modeling simulation tools & strategies

    CERN Document Server

    2017-01-01

This book introduces simulation tools and strategies for complex systems of solid-state drives (SSDs), which consist of a flash multi-core microcontroller plus NAND flash memories. It provides a broad overview of the most popular simulation tools, with special focus on open source solutions. VSSIM, NANDFlashSim and DiskSim are benchmarked against the performance of real SSDs under different traffic workloads. The pros and cons of each simulator are analyzed, and it is clearly indicated which kinds of answers each of them can give and at what price. It is explained that speed and precision do not go hand in hand, and that it is important to understand when to simulate what, and with which tool. Being able to simulate SSD performance is mandatory to meet time-to-market goals, together with product cost and quality targets. Over the last few years the authors developed an advanced simulator named “SSDExplorer” which has been used to evaluate multiple phenomena with great accuracy, from QoS (Quality Of Service) to Read Retry, fr...

  4. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie

    2017-03-17

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  5. Empirical phylogenies and species abundance distributions are consistent with pre-equilibrium dynamics of neutral community models with gene flow

    KAUST Repository

    Bonnet-Lebrun, Anne-Sophie; Manica, Andrea; Eriksson, Anders; Rodrigues, Ana S.L.

    2017-01-01

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.

  6. Update on Small Modular Reactors Dynamics System Modeling Tool -- Molten Salt Cooled Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Borum, Robert C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chaleff, Ethan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogerson, Doug W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J. [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation, Canton, MI (United States)

    2014-08-01

    The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the third year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled) concepts, including the use of multiple coupled reactors at a single site. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor SMR models, ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface (ICHMI) technical area, and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the Modular Dynamic SIMulation (MoDSIM) tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the program, (2) developing a library of baseline component modules that can be assembled into full plant models using existing geometry and thermal-hydraulic data, (3) defining modeling conventions for interconnecting component models, and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.

  7. A consistent and verifiable macroscopic model for the dissolution of liquid CO2 in water under hydrate forming conditions

    International Nuclear Information System (INIS)

    Radhakrishnan, R.; Demurov, A.; Trout, B.L.; Herzog, H.

    2003-01-01

Direct injection of liquid CO2 into the ocean has been proposed as one method to reduce the emission levels of CO2 into the atmosphere. When liquid CO2 is injected (normally as droplets) at ocean depths >500 m, a solid interfacial region between the CO2 and the water is observed to form. This region consists of hydrate clathrates and hinders the rate of dissolution of CO2. It is, therefore, expected to have a significant impact on the injection of liquid CO2 into the ocean. Up until now, no consistent and predictive model for the shrinking of droplets of CO2 under hydrate-forming conditions has been proposed. This is because all models proposed to date have had too many unknowns. By computing rates of the physical and chemical processes in hydrates via molecular dynamics simulations, we have been able to determine independently some of these unknowns. We then propose the most reasonable model and use it to make independent predictions of the rates of mass transfer and thickness of the hydrate region. These predictions are compared to measurements, and implications for the rates of shrinkage of CO2 droplets under varying flow conditions are discussed. (author)

  8. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. Model/dimension reduction is performed by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  9. An ensemble model of QSAR tools for regulatory risk assessment.

    Science.gov (United States)

    Pradeep, Prachi; Povinelli, Richard J; White, Shannon; Merrill, Stephen J

    2016-01-01

Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model includes a cut-off parameter that allows selecting the desired trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8% and 80.4%, and balanced accuracy: 80.6% and 80.8%) and highest inter-rater agreement [kappa (κ): 0
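
A minimal sketch of how a Bayesian ensemble can combine binary calls from several QSAR tools, assuming independent per-tool error rates. The sensitivities, specificities, and cut-off below are illustrative assumptions, not the values from the record:

```python
# Naive-Bayes combination of 0/1 predictions from several QSAR tools.

def ensemble_posterior(predictions, sens, spec, prior=0.5):
    """Posterior probability that the chemical is active, given each
    tool's 0/1 call and its (assumed independent) sensitivity/specificity."""
    p_active, p_inactive = prior, 1.0 - prior
    for y, se, sp in zip(predictions, sens, spec):
        p_active *= se if y == 1 else (1.0 - se)
        p_inactive *= (1.0 - sp) if y == 1 else sp
    return p_active / (p_active + p_inactive)

sens = [0.80, 0.70, 0.85]  # illustrative tool sensitivities
spec = [0.75, 0.80, 0.60]  # illustrative tool specificities

post = ensemble_posterior([1, 1, 0], sens, spec)

# Lowering the cut-off trades specificity for sensitivity, i.e. fewer
# false negatives -- the regulatory preference mentioned above.
cutoff = 0.3
call = int(post > cutoff)
```

With two of three tools flagging the chemical, the posterior exceeds the (low) cut-off and the ensemble calls it active; raising the cut-off toward 0.8 would flip the call.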

  10. Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model

    Science.gov (United States)

    Hazan, Aurélien

    2017-05-01

We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several methods to compute the volume are compared, both exact and approximate, inspired by operations research methods and the analysis of metabolic networks. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
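
The volume-of-a-polytope idea can be illustrated with the simplest approximate method, rejection sampling from a bounding box. The three-variable simplex below stands in for the feasible set of a linear CSP; it is an illustrative example, not the paper's model:

```python
import random

random.seed(1)

# Monte Carlo estimate of the volume of the polytope
# {x in [0,1]^3 : x1 + x2 + x3 <= 1}, whose exact volume is 1/6.

def estimate_volume(n_samples=200_000):
    hits = sum(
        1
        for _ in range(n_samples)
        if sum(random.random() for _ in range(3)) <= 1.0
    )
    # the bounding box [0,1]^3 has volume 1, so the hit ratio is the volume
    return hits / n_samples

vol = estimate_volume()
```

Rejection sampling degrades quickly as the dimension grows (the hit rate collapses), which is why higher-dimensional studies of this kind rely on specialised exact or hit-and-run-style approximate methods such as those compared in the paper.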

  11. Using nudging to improve global-regional dynamic consistency in limited-area climate modeling: What should we nudge?

    Science.gov (United States)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2015-03-01

Regional climate modelling sometimes requires that the regional model be nudged towards the large-scale driving data to avoid the development of inconsistencies between them. These inconsistencies are known to produce large surface temperature and rainfall artefacts. Therefore, it is essential to maintain the synoptic circulation within the simulation domain consistent with the synoptic circulation at the domain boundaries. Nudging techniques, initially developed for data assimilation purposes, are increasingly used in regional climate modeling and offer a workaround to this issue. In this context, several questions on the "optimal" use of nudging are still open. In this study we focus on a specific question: which variable should we nudge in order to keep the regional model as consistent as possible with the driving fields? For that, a "Big Brother Experiment", where a reference atmospheric state is known, is conducted using the weather research and forecasting (WRF) model over the Euro-Mediterranean region. A set of 22 3-month simulations is performed with different sets of nudged variables and nudging options (no nudging, indiscriminate nudging, spectral nudging) for summer and winter. The results show that nudging clearly improves the model capacity to reproduce the reference fields. However, the skill scores depend on the set of variables used to nudge the regional climate simulations. The tropospheric horizontal wind is by far the key variable to nudge in order to correctly simulate surface temperature, wind, and rainfall. To a lesser extent, nudging tropospheric temperature also contributes significantly to improving the simulations. Indeed, nudging tropospheric wind or temperature directly impacts the simulation of the tropospheric geopotential height and thus the synoptic-scale atmospheric circulation. Nudging moisture improves the precipitation, but the impact on the other fields (wind and temperature) is not significant.
As
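
The basic mechanism of nudging is Newtonian relaxation: an extra tendency term -(x - x_ref)/tau pulls the model state back toward the driving field. The toy "model" below is a drifting scalar with illustrative parameters, not WRF:

```python
# Newtonian relaxation (indiscriminate nudging) on a scalar model whose
# own tendency is a constant bias (drift). With tau=None, no nudging.

def integrate(x0, x_ref, drift, tau, dt=0.1, n_steps=500):
    x = x0
    for _ in range(n_steps):
        tendency = drift                    # the model's own (biased) tendency
        if tau is not None:
            tendency += -(x - x_ref) / tau  # nudging toward the driving field
        x += dt * tendency
    return x

x_free = integrate(0.0, x_ref=0.0, drift=0.2, tau=None)   # drifts away
x_nudged = integrate(0.0, x_ref=0.0, drift=0.2, tau=2.0)  # stays near x_ref
```

The free run drifts linearly away from the reference, while the nudged run equilibrates at drift * tau, close to the driving field; the relaxation time tau controls how tightly the model is constrained.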

  12. A Simulation Model for Drift Resistive Ballooning Turbulence Examining the Influence of Self-consistent Zonal Flows

    Science.gov (United States)

    Cohen, Bruce; Umansky, Maxim; Joseph, Ilon

    2015-11-01

Progress is reported on including self-consistent zonal flows in simulations of drift-resistive ballooning turbulence using the BOUT++ framework. Previous published work addressed the simulation of L-mode edge turbulence in realistic single-null tokamak geometry using the BOUT three-dimensional fluid code that solves Braginskii-based fluid equations. The effects of imposed sheared ExB poloidal rotation were included, with a static radial electric field fitted to experimental data. In new work our goal is to include the self-consistent effects on the radial electric field driven by the microturbulence, which contributes to the sheared ExB poloidal rotation (zonal flow generation). We describe a model for including self-consistent zonal flows and an algorithm for maintaining underlying plasma profiles to enable the simulation of steady-state turbulence. We examine the role of Braginskii viscous forces in providing necessary dissipation when including axisymmetric perturbations. We also report on some of the numerical difficulties associated with including the axisymmetric component of the fluctuating fields. This work was performed under the auspices of the U.S. Department of Energy under contract DE-AC52-07NA27344 at the Lawrence Livermore National Laboratory (LLNL-ABS-674950).

  13. Modeling as a tool for process control: alcoholic fermentation

    Energy Technology Data Exchange (ETDEWEB)

    Tayeb, A M; Ashour, I A; Mostafa, N A [El-Minia Univ. (EG). Faculty of Engineering

    1991-01-01

The results of the alcoholic fermentation of beet sugar molasses and wheat milling residues (Akalona) were fed into a computer program, from which the kinetic parameters for these fermentation reactions were determined. These parameters were put into a kinetic model. Next, the model was tested, and the results obtained were compared with the experimental results for both beet molasses and Akalona. The deviation of the experimental results from the results obtained from the model was determined. An acceptable deviation of 1.2% for beet sugar molasses and 3.69% for Akalona was obtained. Thus, the present model could be a tool for chemical engineers working in fermentation processes, both with respect to the control of the process and the design of the fermentor. (Author).
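
The workflow described (fit kinetic parameters to fermentation data, then report the model-experiment deviation) can be sketched with a generic kinetic form. The logistic biomass model, the synthetic data, and the grid search below are illustrative assumptions, not the record's actual kinetics:

```python
import math

# Fit a single kinetic parameter mu by least squares, then report the
# mean relative deviation between model and "experiment".

def logistic(t, mu, x0=0.1, x_max=10.0):
    # logistic biomass growth, a common stand-in for fermentation kinetics
    return x_max / (1.0 + (x_max / x0 - 1.0) * math.exp(-mu * t))

# synthetic "experimental" points generated with mu = 0.30
times = [0, 2, 4, 6, 8, 10, 15, 20, 25, 30]
data = [logistic(t, 0.30) for t in times]

def sse(mu):
    return sum((logistic(t, mu) - x) ** 2 for t, x in zip(times, data))

# brute-force least squares over a parameter grid
mu_hat = min((0.01 * k for k in range(1, 101)), key=sse)

mean_dev = sum(
    abs(logistic(t, mu_hat) - x) / max(x, 1e-9) for t, x in zip(times, data)
) / len(times)
```

On real data the mean deviation plays the role of the 1.2%/3.69% figures quoted in the abstract: it is the acceptance criterion for using the fitted model in process control and fermentor design.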

  14. Logic flowgraph methodology - A tool for modeling embedded systems

    Science.gov (United States)

    Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.

    1991-01-01

The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of the use of such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.

  15. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of a human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  16. Integration and consistency testing of groundwater flow models with hydro-geochemistry in site investigations in Finland

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Loefman, J.; Korkealaakso, J.; Koskinen, L.; Ruotsalainen, P.; Hautojaervi, A.; Aeikaes, T.

    1999-01-01

In the assessment of the suitability and safety of a geological repository for radioactive waste the understanding of the fluid flow at a site is essential. In order to build confidence in the assessment of the hydrogeological performance of a site in various conditions, integration of hydrological and hydrogeochemical methods and studies provides the primary means for investigating the evolution that has taken place in the past, and for predicting future conditions at the potential disposal site. A systematic geochemical sampling campaign was started at the beginning of the 1990s in the Finnish site investigation programme. This enabled the integration and evaluation of site-scale hydrogeochemical and groundwater flow models to begin. Hydrogeochemical information has been used to screen relevant external processes and variables for definition of the initial and boundary conditions in hydrological simulations. The results obtained from interpretation and modelling of hydrogeochemical evolution have been employed in testing the hydrogeochemical consistency of conceptual flow models. Integration and testing of flow models with hydrogeochemical information are considered to significantly improve the hydrogeological understanding of a site and to increase confidence in conceptual hydrogeological models. (author)

  17. Thermodynamic consistency of viscoplastic material models involving external variable rates in the evolution equations for the internal variables

    International Nuclear Information System (INIS)

    Malmberg, T.

    1993-09-01

The objective of this study is to derive and investigate thermodynamic restrictions for a particular class of internal variable models. Their evolution equations consist of two contributions: the usual irreversible part, depending only on the present state, and a reversible but path dependent part, linear in the rates of the external variables (evolution equations of "mixed type"). In the first instance the thermodynamic analysis is based on the classical Clausius-Duhem entropy inequality and the Coleman-Noll argument. The analysis is restricted to infinitesimal strains and rotations. The results are specialized and transferred to a general class of elastic-viscoplastic material models. Subsequently, they are applied to several viscoplastic models of "mixed type", proposed or discussed in the literature (Robinson et al., Krempl et al., Freed et al.), and it is shown that some of these models are thermodynamically inconsistent. The study is closed with the evaluation of the extended Clausius-Duhem entropy inequality (concept of Mueller) where the entropy flux is governed by an assumed constitutive equation in its own right; also the constraining balance equations are explicitly accounted for by the method of Lagrange multipliers (Liu's approach). This analysis is done for a viscoplastic material model with evolution equations of the "mixed type". It is shown that this approach is much more involved than the evaluation of the classical Clausius-Duhem entropy inequality with the Coleman-Noll argument. (orig.)

  18. Gsflow-py: An integrated hydrologic model development tool

    Science.gov (United States)

    Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.

    2017-12-01

Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHMs) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model-resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorological parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHMs has required extensive GIS and computer programming expertise, which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular-grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
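
One step such toolkits automate is deriving flow directions from a gridded DEM, the basis of stream network development and cascade routing. The sketch below uses the common "D8" rule (each cell drains to its steepest downslope neighbour); it illustrates the concept and is not code from the Gsflow-py scripts:

```python
# D8 flow direction on a tiny illustrative DEM (elevations in metres).

dem = [
    [9.0, 8.0, 7.0],
    [8.0, 6.0, 5.0],
    [7.0, 5.0, 3.0],
]

def d8_direction(dem, r, c):
    """Return the (dr, dc) offset of the steepest downslope neighbour,
    or (0, 0) if the cell is a local minimum (a sink/outlet)."""
    best, best_drop = (0, 0), 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) == (0, 0):
                continue
            if not (0 <= rr < len(dem) and 0 <= cc < len(dem[0])):
                continue
            dist = (dr * dr + dc * dc) ** 0.5   # diagonal neighbours are farther
            drop = (dem[r][c] - dem[rr][cc]) / dist
            if drop > best_drop:
                best, best_drop = (dr, dc), drop
    return best

flow = {(r, c): d8_direction(dem, r, c) for r in range(3) for c in range(3)}
```

Chaining these per-cell directions yields the cascade routing network; consistency with the model-resolution DEM matters because a direction derived from a finer DEM can point "uphill" at the model's own resolution.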

  19. Consistency in PERT problems

    OpenAIRE

    Bergantiños, Gustavo; Valencia-Toledo, Alfredo; Vidal-Puga, Juan

    2016-01-01

    The program evaluation review technique (PERT) is a tool used to schedule and coordinate activities in a complex project. In assigning the cost of a potential delay, we characterize the Shapley rule as the only rule that satisfies consistency and other desirable properties.
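
The Shapley rule referred to above averages each activity's marginal cost contribution over all orderings. The sketch below uses a made-up delay-cost game (project delay = worst individual delay among late activities), not the paper's axiomatic characterization:

```python
import math
from itertools import permutations

def shapley(players, cost):
    """Average marginal cost contribution of each player over all orders."""
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = cost(frozenset(coalition))
            coalition.add(p)
            value[p] += cost(frozenset(coalition)) - before
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in value.items()}

delays = {"a": 3.0, "b": 1.0, "c": 2.0}  # hypothetical activity delays

def cost(S):
    # project delay caused when exactly the activities in S are late
    return max((delays[p] for p in S), default=0.0)

phi = shapley(list(delays), cost)

# Efficiency: the shares add up to the total delay cost c({a,b,c}) = 3.
assert abs(sum(phi.values()) - 3.0) < 1e-9
```

The allocation charges the worst offender ("a") the most while still spreading some cost to the others, which is the fairness property the consistency axioms pin down.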

  20. Modeling in the Classroom: An Evolving Learning Tool

    Science.gov (United States)

    Few, A. A.; Marlino, M. R.; Low, R.

    2006-12-01

Among the early programs (early 1990s) focused on teaching Earth System Science were the Global Change Instruction Program (GCIP) funded by NSF through UCAR and the Earth System Science Education Program (ESSE) funded by NASA through USRA. These two programs introduced modeling as a learning tool from the beginning, and they provided workshops, demonstrations and lectures for their participating universities. These programs were aimed at university-level education. Recently, classroom modeling is experiencing a revival of interest. Drs John Snow and Arthur Few conducted two workshops on modeling at the ESSE21 meeting in Fairbanks, Alaska, in August 2005. The Digital Library for Earth System Education (DLESE) at http://www.dlese.org provides web access to STELLA models and tutorials, and UCAR's Education and Outreach (EO) program holds workshops that include training in modeling. An important innovation to the STELLA modeling software by isee systems (http://www.iseesystems.com), called "isee Player", is available as a free download. The Player allows users to view and run STELLA models, change model parameters, share models with colleagues and students, and make working models available on the web. This is important because the expert can create models, and the user can learn how the modeled system works. Another aspect of this innovation is that the educational benefits of modeling concepts can be extended throughout most of the curriculum. The procedure for building a working computer model of an Earth Science System follows this general format: (1) carefully define the question(s) for which you seek the answer(s); (2) identify the interacting system components and inputs contributing to the system's behavior; (3) collect the information and data that will be required to complete the conceptual model; (4) construct a system diagram (graphic) of the system that displays all of system's central questions, components, relationships and required inputs.
At this stage
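The four-step procedure above translates directly into a stock-and-flow simulation of the kind STELLA builds graphically. A minimal sketch in Python (the single reservoir, constant inflow, and first-order drain are illustrative assumptions, not taken from any published Earth-system model):

```python
# A one-stock model: d(stock)/dt = inflow - drain_rate * stock.
# All names and parameter values here are hypothetical.

def simulate(stock0=100.0, inflow=5.0, drain_rate=0.1, dt=1.0, steps=50):
    """Euler integration of the stock-and-flow equation above."""
    stock, history = stock0, [stock0]
    for _ in range(steps):
        stock += (inflow - drain_rate * stock) * dt
        history.append(stock)
    return history

history = simulate()
# The stock relaxes toward its equilibrium, inflow / drain_rate = 50.
print(abs(history[-1] - 50.0) < 1.0)
```

The same structure scales to multiple interacting stocks; each arrow in a STELLA system diagram becomes one term in a rate equation.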

  1. Producing physically consistent and bias-free extreme precipitation events over Switzerland: Bridging gaps between meteorology and impact models

    Science.gov (United States)

    José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido

    2016-04-01

    Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. Therefore, increasing the current understanding of such exceptional situations is required, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge shall eventually lead us to produce more reliable projections of the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterisation of events with return periods exceeding a few decades. This study proposes a new approach that allows studying storms based on a synthetic, but physically consistent, database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. However, downscaling the full CESM simulation at such high resolution is infeasible nowadays. Hence, a number of case studies are selected first. This selection is carried out by examining the precipitation averaged over an area encompassing Switzerland in the ESM. Using a hydrological criterion, precipitation is accumulated over several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this data set can be used as input to hydrological models. Thus, quantile mapping is used to remove such biases. 
For this task
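The bias-correction step mentioned last, quantile mapping, replaces each model value with the observed value at the same quantile of a reference period. A minimal empirical-CDF sketch with synthetic data (the gamma-distributed "observations" and the linear bias are illustrative assumptions, not the study's data):

```python
import random
import statistics

def quantile_map(value, model_ref, obs_ref):
    """Empirical quantile mapping: find the quantile of `value` in the
    model reference sample, and return the observed value at that quantile."""
    model_sorted = sorted(model_ref)
    obs_sorted = sorted(obs_ref)
    # Rank of `value` among the model reference values (empirical CDF).
    rank = sum(1 for m in model_sorted if m <= value)
    q = rank / len(model_sorted)
    # Index into the observed sample at the same quantile.
    idx = min(int(q * len(obs_sorted)), len(obs_sorted) - 1)
    return obs_sorted[idx]

random.seed(0)
obs_ref = [random.gammavariate(2.0, 3.0) for _ in range(1000)]  # "observations"
model_ref = [x * 1.3 + 1.0 for x in obs_ref]                    # biased "model"
corrected = [quantile_map(v, model_ref, obs_ref) for v in model_ref]
bias_before = statistics.mean(model_ref) - statistics.mean(obs_ref)
bias_after = statistics.mean(corrected) - statistics.mean(obs_ref)
print(abs(bias_after) < abs(bias_before))  # correction shrinks the bias
```

A production implementation would interpolate between order statistics and handle values outside the reference range; this sketch only shows the mapping idea.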

  2. A self-consistent model of a thermally balanced quiescent prominence in magnetostatic equilibrium in a uniform gravitational field

    International Nuclear Information System (INIS)

    Lerche, I.; Low, B.C.

    1977-01-01

    A theoretical model of quiescent prominences in the form of an infinite vertical sheet is presented. Self-consistent solutions are obtained by integrating simultaneously the set of nonlinear equations of magnetostatic equilibrium and thermal balance. The basic features of the models are: (1) The prominence matter is confined to a sheet and supported against gravity by a bowed magnetic field. (2) The thermal flux is channelled along magnetic field lines. (3) The thermal flux is everywhere balanced by Low's (1975) hypothetical heat sink which is proportional to the local density. (4) A constant component of the magnetic field along the length of the prominence shields the cool plasma from the hot surroundings. It is assumed that the prominence plasma emits more radiation than it absorbs from the radiation fields of the photosphere, chromosphere and corona, and the above hypothetical heat sink is interpreted to represent the amount of radiative loss that must be balanced by a nonradiative energy input. Using a central density and temperature of 10^11 particles cm^-3 and 5000 K respectively, a magnetic field strength between 2 and 10 gauss, and a thermal conductivity that varies linearly with temperature, the physical properties implied by the model are discussed. The analytic treatment can also be carried out for a class of more complex thermal conductivities. These models provide a useful starting point for investigating the combined requirements of magnetostatic equilibrium and thermal balance in the quiescent prominence. (Auth.)

  3. Analysis of Sequence Diagram Layout in Advanced UML Modelling Tools

    Directory of Open Access Journals (Sweden)

    Ņikiforova Oksana

    2016-05-01

    Full Text Available System modelling using the Unified Modelling Language (UML) is a task that must be solved during software development. The more complex the software becomes, the higher the requirements for demonstrating the system to be developed, especially in its dynamic aspect, which UML captures with the sequence diagram. To solve this task, the main attention is devoted to the graphical presentation of the system, where diagram layout plays the central role in information perception. The UML sequence diagram, due to its specific structure, is selected for a deeper analysis of element layout. The authors' research examines the ability of modern UML modelling tools to lay out UML sequence diagrams automatically and analyses them according to criteria required for diagram perception.

  4. MODERN TOOLS FOR MODELING ACTIVITY IT-COMPANIES

    Directory of Open Access Journals (Sweden)

    Марина Петрівна ЧАЙКОВСЬКА

    2015-05-01

    Full Text Available Increasing competition in the market for web-based applications raises the importance of service quality and of optimising the processes of interaction with customers. The purpose of the article is to develop recommendations for improving the business processes of IT enterprises in the web-application segment based on technological tools for business modelling; to shape requirements for the development of an information system (IS) for customer interaction; and to analyse effective means of implementation and evaluate the economic effect of its introduction. A scheme of the business process of developing and launching a website was built based on the analysis of business process models and "swim lane" models, and requirements for a customer relationship management IS for a web studio were established. The market of software for creating such an IS was analysed, and the products matching the requirements were selected. The IS was developed, tested, and implemented in the company, and an appraisal of the economic effect was conducted.

  5. Tools and Methods for RTCP-Nets Modeling and Verification

    Directory of Open Access Journals (Sweden)

    Szpyrka Marcin

    2016-09-01

    Full Text Available RTCP-nets are high level Petri nets similar to timed colored Petri nets, but with a different time model and some structural restrictions. The paper deals with practical aspects of using RTCP-nets for modeling and verification of real-time systems. It contains a survey of software tools developed to support RTCP-nets. Verification of RTCP-nets is based on coverability graphs, which represent the set of reachable states in the form of a directed graph. Two approaches to verification of RTCP-nets are considered in the paper. The former is oriented towards states and is based on translation of a coverability graph into a nuXmv (NuSMV) finite state model. The latter is oriented towards transitions and uses the CADP toolkit to check whether requirements given as μ-calculus formulae hold for a given coverability graph. All presented concepts are discussed using illustrative examples.

  6. Dynamic wind turbine models in power system simulation tool

    DEFF Research Database (Denmark)

    Hansen, Anca D.; Iov, Florin; Sørensen, Poul

    This report presents a collection of models and control strategies developed and implemented in the power system simulation tool PowerFactory DIgSILENT for different wind turbine concepts. It is the second edition of Risø-R-1400(EN) and it gathers and describes a whole wind turbine model database..., connection of the wind turbine at different types of grid and storage systems. Different control strategies have been developed and implemented for these wind turbine concepts, their performance in normal or fault operation being assessed and discussed by means of simulations. The described control... of the interaction between the mechanical structure of the wind turbine and the electrical grid during different operational modes. The report thus provides a description of the wind turbine modelling, both at a component level and at a system level. The report contains both the description of DIgSILENT built...

  7. Mathematical modeling of physiological systems: an essential tool for discovery.

    Science.gov (United States)

    Glynn, Patric; Unudurthi, Sathya D; Hund, Thomas J

    2014-08-28

    Mathematical models are invaluable tools for understanding the relationships between components of a complex system. In the biological context, mathematical models help us understand the complex web of interrelations between various components (DNA, proteins, enzymes, signaling molecules etc.) in a biological system, gain better understanding of the system as a whole, and in turn predict its behavior in an altered state (e.g. disease). Mathematical modeling has enhanced our understanding of multiple complex biological processes like enzyme kinetics, metabolic networks, signal transduction pathways, gene regulatory networks, and electrophysiology. With recent advances in high throughput data generation methods, computational techniques and mathematical modeling have become even more central to the study of biological systems. In this review, we provide a brief history and highlight some of the important applications of modeling in biological systems with an emphasis on the study of excitable cells. We conclude with a discussion about opportunities and challenges for mathematical modeling going forward. In a larger sense, the review is designed to help answer a simple but important question that theoreticians frequently face from interested but skeptical colleagues on the experimental side: "What is the value of a model?" Copyright © 2014 Elsevier Inc. All rights reserved.
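Enzyme kinetics, the first application listed above, is a compact example of such a mathematical model. A minimal sketch (parameter values are illustrative, not from any measured enzyme) integrating the Michaelis-Menten rate law dS/dt = -Vmax·S/(Km + S) with Euler steps:

```python
def michaelis_menten(s0=10.0, vmax=1.0, km=2.5, dt=0.01, t_end=30.0):
    """Euler integration of dS/dt = -Vmax * S / (Km + S): substrate S
    is consumed at a rate that saturates at Vmax for large S."""
    s, t, trajectory = s0, 0.0, [s0]
    while t < t_end:
        s += -vmax * s / (km + s) * dt
        s = max(s, 0.0)   # numerical guard: concentration stays non-negative
        t += dt
        trajectory.append(s)
    return trajectory

traj = michaelis_menten()
# Substrate decreases monotonically toward zero.
print(traj[0] > traj[-1] >= 0.0)
```

Models of excitable cells follow the same pattern, just with coupled equations for membrane voltage and gating variables instead of a single substrate concentration.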

  8. Materials modelling - a possible design tool for advanced nuclear applications

    International Nuclear Information System (INIS)

    Hoffelner, W.; Samaras, M.; Bako, B.; Iglesias, R.

    2008-01-01

    The design of components for power plants is usually based on codes, standards and design rules or code cases. However, it is very difficult to get the necessary experimental data to prove these lifetime assessment procedures for long-term applications in environments where complex damage interactions (temperature, stress, environment, irradiation) can occur. The rules used are often very simple and do not have a basis which takes physical damage into consideration. The linear life fraction rule for creep and fatigue interaction can be taken as a prominent example. Materials modelling based on a multi-scale approach in principle provides a tool to convert microstructural findings into mechanical response and therefore has the capability of providing a set of tools for the improvement of design life assessments. The strength of current multi-scale modelling efforts is the insight they offer as regards experimental phenomena. To obtain an understanding of these phenomena it is important to focus on issues which are important at the various time and length scales of the modelling code. In this presentation the multi-scale path will be demonstrated with a few recent examples which focus on VHTR applications. (authors)

  9. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  10. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  11. Modeling energy technology choices. Which investment analysis tools are appropriate?

    International Nuclear Information System (INIS)

    Johnson, B.E.

    1994-01-01

    A variety of tools from modern investment theory appear to hold promise for unraveling observed energy technology investment behavior that often appears anomalous when analyzed using traditional investment analysis methods. This paper reviews the assumptions and important insights of the investment theories most commonly suggested as candidates for explaining the apparent "energy technology investment paradox". The applicability of each theory is considered in the light of important aspects of energy technology investment problems, such as sunk costs, uncertainty and imperfect information. The theories addressed include the capital asset pricing model, the arbitrage pricing theory, and the theory of irreversible investment. Enhanced net present value methods are also considered. (author)
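A plain net-present-value calculation, the baseline that the enhanced methods above extend, can be sketched in a few lines (the cash flows and discount rates are invented for illustration):

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow stream; cashflows[0] occurs at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# A hypothetical efficiency investment: 1000 up front, then 300/year for 5 years.
flows = [-1000.0] + [300.0] * 5

print(round(npv(0.05, flows), 2))  # → 298.84: accepted at a 5% discount rate
print(npv(0.18, flows) < 0)        # → True: rejected at an 18% hurdle rate
```

The sensitivity of the sign of the NPV to the discount rate is exactly the kind of behavior that the paper's richer theories (CAPM, irreversible investment) try to explain when observed hurdle rates look anomalously high.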

  12. Self-Consistent Numerical Modeling of E-Cloud Driven Instability of a Bunch Train in the CERN SPS

    International Nuclear Information System (INIS)

    Vay, J.-L.; Furman, M.A.; Secondo, R.; Venturini, M.; Fox, J.D.; Rivetta, C.H.

    2010-01-01

    The simulation package WARP-POSINST was recently upgraded for handling multiple bunches and modeling concurrently the electron cloud buildup and its effect on the beam, allowing for direct self-consistent simulation of bunch trains generating, and interacting with, electron clouds. We have used the WARP-POSINST package on massively parallel supercomputers to study the growth rate and frequency patterns in space-time of the electron cloud driven transverse instability for a proton bunch train in the CERN SPS accelerator. Results suggest that a positive feedback mechanism exists between the electron buildup and the e-cloud driven transverse instability, leading to a net increase in predicted electron density. Comparisons to selected experimental data are also given. Electron clouds have been shown to trigger fast-growing instabilities on proton beams circulating in the SPS and other accelerators. So far, simulations of electron cloud buildup and their effects on beam dynamics have been performed separately. This is a consequence of the large computational cost of the combined calculation due to large space and time scale disparities between the two processes. We have presented the latest improvements of the simulation package WARP-POSINST for the simulation of self-consistent e-cloud effects, including mesh refinement, and generation of electrons from gas ionization and impact at the pipe walls. We also presented simulations of two consecutive bunches interacting with electron clouds in the SPS, which included generation of secondary electrons. The distribution of electrons in front of the first beam was initialized from a dump taken from a preceding buildup calculation using the POSINST code. In this paper, we present an extension of this work where one full batch of 72 bunches is simulated in the SPS, including the entire buildup calculation and the self-consistent interaction between the bunches and the electrons. 
Comparisons to experimental data are also given.

  13. Evaluation and comparison of models and modelling tools simulating nitrogen processes in treatment wetlands

    DEFF Research Database (Denmark)

    Edelfeldt, Stina; Fritzson, Peter

    2008-01-01

    In this paper, two ecological models of nitrogen processes in treatment wetlands have been evaluated and compared. These models were implemented, simulated, and visualized using the Modelica modelling and simulation language [P. Fritzson, Principles of Object-Oriented Modelling and Simulation with Modelica 2.1 (Wiley-IEEE Press, USA, 2004).] and an associated tool. The differences and similarities between the MathModelica Model Editor and three other ecological modelling tools have also been evaluated. The results show that the models can well be modelled and simulated in the MathModelica Model Editor, and that nitrogen decrease in a constructed treatment wetland should be described and simulated using the Nitrification/Denitrification model, as this model has the highest overall quality score and provides a more variable environment.

  14. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
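The deterministic validation pattern described here, known inputs checked against hand-computable physical parameters, can be sketched with Python's unittest framework. This is not EXOSIMS code; the `working_angle` helper and its small-angle formula are a simplified stand-in for the kind of calculation under test:

```python
import unittest

def working_angle(semi_major_axis_au, distance_pc):
    """Angular separation (arcsec) of a planet at quadrature, where the
    projected separation equals the semi-major axis. Small-angle rule:
    a [AU] / d [pc] gives arcseconds, by definition of the parsec."""
    return semi_major_axis_au / distance_pc

class TestWorkingAngle(unittest.TestCase):
    def test_known_value(self):
        # 1 AU at 10 pc subtends 0.1 arcsec.
        self.assertAlmostEqual(working_angle(1.0, 10.0), 0.1)

    def test_scales_inversely_with_distance(self):
        self.assertAlmostEqual(working_angle(5.0, 20.0),
                               working_angle(5.0, 10.0) / 2.0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestWorkingAngle))
print(result.wasSuccessful())
```

Setting planets at quadrature, as the paper describes, is what makes such closed-form checks possible; away from quadrature the projected separation depends on orbital phase and the expected value is no longer a one-line calculation.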

  15. Conceptual Models as Tools for Communication Across Disciplines

    Directory of Open Access Journals (Sweden)

    Marieke Heemskerk

    2003-12-01

    Full Text Available To better understand and manage complex social-ecological systems, social scientists and ecologists must collaborate. However, issues related to language and research approaches can make it hard for researchers in different fields to work together. This paper suggests that researchers can improve interdisciplinary science through the use of conceptual models as a communication tool. The authors share lessons from a workshop in which interdisciplinary teams of young scientists developed conceptual models of social-ecological systems using data sets and metadata from Long-Term Ecological Research sites across the United States. Both the process of model building and the models that were created are discussed. The exercise revealed that the presence of social scientists in a group influenced the place and role of people in the models. This finding suggests that the participation of both ecologists and social scientists in the early stages of project development may produce better questions and more accurate models of interactions between humans and ecosystems. Although the participants agreed that a better understanding of human intentions and behavior would advance ecosystem science, they felt that interdisciplinary research might gain more by training strong disciplinarians than by merging ecology and social sciences into a new field. It is concluded that conceptual models can provide an inspiring point of departure and a guiding principle for interdisciplinary group discussions. Jointly developing a model not only helped the participants to formulate questions, clarify system boundaries, and identify gaps in existing data, but also revealed the thoughts and assumptions of fellow scientists. Although the use of conceptual models will not serve all purposes, the process of model building can help scientists, policy makers, and resource managers discuss applied problems and theory among themselves and with those in other areas.

  16. A self-consistent model of rich clusters of galaxies. I. The galactic component of a cluster

    International Nuclear Information System (INIS)

    Konyukov, M.V.

    1985-01-01

    It is shown that obtaining the distribution function for the galactic component of a cluster reduces, in the final analysis, to solving the boundary-value problem for the gravitational potential of a self-consistent field. The distribution function is determined by two main parameters. An algorithm is constructed for the solution of the problem, and a program is set up to solve it. It is used to establish the region of values of the parameters in the problem for which solutions exist. The scheme proposed is extended to the case where there exists in the cluster a separate central body with a known density distribution (for example, a cD galaxy). A method is indicated for the estimation of the parameters of the model from the results of observations of clusters of galaxies in the optical range

  17. System dynamics models as decision-making tools in agritourism

    Directory of Open Access Journals (Sweden)

    Jere Jakulin Tadeja

    2016-12-01

    Full Text Available Agritourism, as a type of niche tourism, is a complex and loosely defined phenomenon. The demand for fast and integrated decisions regarding agritourism and its interconnections with the environment, the economy (investments, traffic) and social factors (tourists) is urgent. Many different methodologies and methods address such softly structured questions and dilemmas with global and local properties. Here we present methods of systems thinking and system dynamics, which were first brought into use in education and training in the form of different computer simulations, and later as tools for decision-making and organisational re-engineering. We develop system dynamics models in order to demonstrate the accuracy of the methodology. These models are essentially simple and serve only to describe the basic mutual influences among variables. We pay particular attention to the methodology for determining model parameter values and to the so-called mental model, which is the basis of the causal connections among model variables. At the end, we restore the connection between qualitative and quantitative models in the frame of system dynamics.

  18. A SELF-CONSISTENT MODEL OF THE CIRCUMSTELLAR DEBRIS CREATED BY A GIANT HYPERVELOCITY IMPACT IN THE HD 172555 SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, B. C.; Melosh, H. J. [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Lisse, C. M. [JHU-APL, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Chen, C. H. [STScI, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Wyatt, M. C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Thebault, P. [LESIA, Observatoire de Paris, F-92195 Meudon Principal Cedex (France); Henning, W. G. [NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Gaidos, E. [Department of Geology and Geophysics, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Elkins-Tanton, L. T. [Department of Terrestrial Magnetism, Carnegie Institution for Science, Washington, DC 20015 (United States); Bridges, J. C. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); Morlok, A., E-mail: johns477@purdue.edu [Department of Physical Sciences, Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom)

    2012-12-10

    Spectral modeling of the large infrared excess in the Spitzer IRS spectra of HD 172555 suggests that there is more than 10^19 kg of submicron dust in the system. Using physical arguments and constraints from observations, we rule out the possibility of the infrared excess being created by a magma ocean planet or a circumplanetary disk or torus. We show that the infrared excess is consistent with a circumstellar debris disk or torus, located at ∼6 AU, that was created by a planetary scale hypervelocity impact. We find that radiation pressure should remove submicron dust from the debris disk in less than one year. However, the system's mid-infrared photometric flux, dominated by submicron grains, has been stable within 4% over the last 27 years, from the Infrared Astronomical Satellite (1983) to WISE (2010). Our new spectral modeling work and calculations of the radiation pressure on fine dust in HD 172555 provide a self-consistent explanation for this apparent contradiction. We also explore the unconfirmed claim that ∼10^47 molecules of SiO vapor are needed to explain an emission feature at ∼8 μm in the Spitzer IRS spectrum of HD 172555. We find that unless there are ∼10^48 atoms or 0.05 M⊕ of atomic Si and O vapor in the system, SiO vapor should be destroyed by photo-dissociation in less than 0.2 years. We argue that a second plausible explanation for the ∼8 μm feature can be emission from solid SiO, which naturally occurs in submicron silicate "smokes" created by quickly condensing vaporized silicate.

  19. A SELF-CONSISTENT MODEL OF THE CIRCUMSTELLAR DEBRIS CREATED BY A GIANT HYPERVELOCITY IMPACT IN THE HD 172555 SYSTEM

    International Nuclear Information System (INIS)

    Johnson, B. C.; Melosh, H. J.; Lisse, C. M.; Chen, C. H.; Wyatt, M. C.; Thebault, P.; Henning, W. G.; Gaidos, E.; Elkins-Tanton, L. T.; Bridges, J. C.; Morlok, A.

    2012-01-01

    Spectral modeling of the large infrared excess in the Spitzer IRS spectra of HD 172555 suggests that there is more than 10^19 kg of submicron dust in the system. Using physical arguments and constraints from observations, we rule out the possibility of the infrared excess being created by a magma ocean planet or a circumplanetary disk or torus. We show that the infrared excess is consistent with a circumstellar debris disk or torus, located at ∼6 AU, that was created by a planetary scale hypervelocity impact. We find that radiation pressure should remove submicron dust from the debris disk in less than one year. However, the system's mid-infrared photometric flux, dominated by submicron grains, has been stable within 4% over the last 27 years, from the Infrared Astronomical Satellite (1983) to WISE (2010). Our new spectral modeling work and calculations of the radiation pressure on fine dust in HD 172555 provide a self-consistent explanation for this apparent contradiction. We also explore the unconfirmed claim that ∼10^47 molecules of SiO vapor are needed to explain an emission feature at ∼8 μm in the Spitzer IRS spectrum of HD 172555. We find that unless there are ∼10^48 atoms or 0.05 M⊕ of atomic Si and O vapor in the system, SiO vapor should be destroyed by photo-dissociation in less than 0.2 years. We argue that a second plausible explanation for the ∼8 μm feature can be emission from solid SiO, which naturally occurs in submicron silicate "smokes" created by quickly condensing vaporized silicate.

  20. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi eDai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
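The winning model's three components can be written down compactly. The following is an illustrative reconstruction, not the authors' code: the parameter values are arbitrary and the paper's exact functional forms may differ in detail:

```python
import math
import random

def prospect_utility(x, alpha=0.5, lam=2.0):
    """Prospect utility: gains and losses evaluated separately,
    with losses weighted more heavily (loss aversion lam > 1)."""
    return x ** alpha if x >= 0 else -lam * (abs(x) ** alpha)

def decay_reinforcement(values, chosen, payoff, decay=0.8):
    """Decay-reinforcement updating: all deck expectancies decay each
    trial, and the chosen deck additionally gains the trial's utility."""
    new = [decay * v for v in values]
    new[chosen] += prospect_utility(payoff)
    return new

def softmax_choice(values, theta=1.0, rng=random):
    """Trial-independent choice rule: softmax over deck expectancies."""
    weights = [math.exp(theta * v) for v in values]
    r, acc = rng.random() * sum(weights), 0.0
    for deck, w in enumerate(weights):
        acc += w
        if r <= acc:
            return deck

values = [0.0] * 4                                           # four decks, as in the IGT
values = decay_reinforcement(values, chosen=0, payoff=100)   # win 100 on deck A
values = decay_reinforcement(values, chosen=0, payoff=-250)  # then lose 250 on deck A
print(values[0] < 0)   # loss aversion drives deck A's expectancy negative
```

Model fitting then amounts to finding, per participant, the parameter values (here alpha, lam, decay, theta) that maximize the likelihood of the observed choice sequence under rules like these.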

  1. Introducing Modeling Transition Diagrams as a Tool to Connect Mathematical Modeling to Mathematical Thinking

    Science.gov (United States)

    Czocher, Jennifer A.

    2016-01-01

    This study contributes a methodological tool to reconstruct the cognitive processes and mathematical activities carried out by mathematical modelers. Represented as Modeling Transition Diagrams (MTDs), individual modeling routes were constructed for four engineering undergraduate students. Findings stress the importance and limitations of using…

  2. 3D-Printed Craniosynostosis Model: New Simulation Surgical Tool.

    Science.gov (United States)

    Ghizoni, Enrico; de Souza, João Paulo Sant Ana Santos; Raposo-Amaral, Cassio Eduardo; Denadai, Rafael; de Aquino, Humberto Belém; Raposo-Amaral, Cesar Augusto; Joaquim, Andrei Fernandes; Tedeschi, Helder; Bernardes, Luís Fernando; Jardini, André Luiz

    2018-01-01

    Craniosynostosis is a complex disease, as its treatment demands deep anatomic understanding, and a minor mistake during surgery can be fatal. The objective of this report is to present novel 3-dimensional-printed polyamide craniosynostosis models that can improve the understanding and treatment of complex pathologies. The software InVesalius was used for segmentation of the anatomy images (from 3 patients between 6 and 9 months old). Afterward, the file was transferred to a 3-dimensional printing system and, with the use of an infrared laser, slices of powder PA 2200 were consecutively added to build a polyamide model of the cranial bone. The 3 craniosynostosis models allowed fronto-orbital advancement, the Pi procedure, and posterior distraction in the operating room environment. All aspects of the craniofacial anatomy could be shown on the models, as well as the most common craniosynostosis pathologic variations (sphenoid wing elevation, shallow orbits, jugular foramen stenosis). Another advantage of our model is its low cost, about 100 U.S. dollars or even less when several models are produced. Simulation is becoming an essential part of medical education for surgical training and for improving surgical safety with adequate planning. This new polyamide craniosynostosis model allowed the surgeons to have realistic tactile feedback on manipulating a child's bone and permitted execution of the main procedures for anatomic correction. It is a low-cost model. Therefore our model is an excellent option for training purposes and is potentially a new important tool to improve the quality of the management of patients with craniosynostosis. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Analysis of Cryogenic Cycle with Process Modeling Tool: Aspen HYSYS

    Science.gov (United States)

    Joshi, D. M.; Patel, H. K.

    2015-10-01

    Cryogenic engineering deals with the development and improvement of low temperature techniques, processes and equipment. A process simulator such as Aspen HYSYS, for the design, analysis, and optimization of process plants, has features that accommodate the special requirements of cryogenic processes and therefore can be used to simulate most cryogenic liquefaction and refrigeration processes. Liquefaction is the process of cooling or refrigerating a gas to a temperature below its critical temperature so that liquid can be formed at some suitable pressure which is below the critical pressure. Cryogenic processes require special attention in terms of the integration of various components like heat exchangers, the Joule-Thomson valve, turboexpanders and compressors. Here, Aspen HYSYS, a process modeling tool, is used to understand the behavior of the complete plant. This paper presents the analysis of an air liquefaction plant based on the Linde cryogenic cycle, performed using the Aspen HYSYS process modeling tool. It covers the technique used to find the optimum values for getting the maximum liquefaction of the plant considering different constraints on other parameters. The analysis results so obtained give a clear idea in deciding various parameter values before implementation of the actual plant in the field. It also gives an idea about the productivity and profitability of the given plant configuration, which leads to the design of an efficient, productive plant.
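    The liquefied fraction per pass of an idealized Linde-Hampson cycle follows from a simple first-law balance over the cold box, y = (h1 − h2)/(h1 − hf). A minimal sketch with placeholder enthalpy values, not Aspen HYSYS output:

```python
# h1: enthalpy of low-pressure gas at the warm end of the heat exchanger
# h2: enthalpy of high-pressure gas after (idealized isothermal) compression
# hf: enthalpy of saturated liquid at the separator pressure
# All enthalpies in kJ/kg; the numbers below are illustrative for air.

def linde_yield(h1, h2, hf):
    """Fraction of circulated gas liquefied per pass through the cold box."""
    return (h1 - h2) / (h1 - hf)

y = linde_yield(h1=461.0, h2=432.0, hf=29.0)
print(round(y, 3))   # 0.067 -> about 6.7% of the circulated air liquefies
```

A smaller h2 (higher compression pressure, stronger Joule-Thomson precooling) raises the yield, which is exactly the kind of parameter trade-off the simulator explores.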

  4. Analysis of Cryogenic Cycle with Process Modeling Tool: Aspen HYSYS

    International Nuclear Information System (INIS)

    Joshi, D.M.; Patel, H.K.

    2015-01-01

    Cryogenic engineering deals with the development and improvement of low temperature techniques, processes and equipment. A process simulator such as Aspen HYSYS, for the design, analysis, and optimization of process plants, has features that accommodate the special requirements of cryogenic processes and therefore can be used to simulate most cryogenic liquefaction and refrigeration processes. Liquefaction is the process of cooling or refrigerating a gas to a temperature below its critical temperature so that liquid can be formed at some suitable pressure which is below the critical pressure. Cryogenic processes require special attention in terms of the integration of various components like heat exchangers, the Joule-Thomson valve, turboexpanders and compressors. Here, Aspen HYSYS, a process modeling tool, is used to understand the behavior of the complete plant. This paper presents the analysis of an air liquefaction plant based on the Linde cryogenic cycle, performed using the Aspen HYSYS process modeling tool. It covers the technique used to find the optimum values for getting the maximum liquefaction of the plant considering different constraints on other parameters. The analysis results so obtained give a clear idea in deciding various parameter values before implementation of the actual plant in the field. It also gives an idea about the productivity and profitability of the given plant configuration, which leads to the design of an efficient, productive plant.

  5. Transposons As Tools for Functional Genomics in Vertebrate Models.

    Science.gov (United States)

    Kawakami, Koichi; Largaespada, David A; Ivics, Zoltán

    2017-11-01

    Genetic tools and mutagenesis strategies based on transposable elements are currently under development with a vision to link primary DNA sequence information to gene functions in vertebrate models. By virtue of their inherent capacity to insert into DNA, transposons can be developed into powerful tools for chromosomal manipulations. Transposon-based forward mutagenesis screens have numerous advantages including high throughput, easy identification of mutated alleles, and providing insight into genetic networks and pathways based on phenotypes. For example, the Sleeping Beauty transposon has become highly instrumental to induce tumors in experimental animals in a tissue-specific manner with the aim of uncovering the genetic basis of diverse cancers. Here, we describe a battery of mutagenic cassettes that can be applied in conjunction with transposon vectors to mutagenize genes, and highlight versatile experimental strategies for the generation of engineered chromosomes for loss-of-function as well as gain-of-function mutagenesis for functional gene annotation in vertebrate models, including zebrafish, mice, and rats. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed.
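    The derivative-augmentation idea behind compilers such as GRESS can be illustrated with forward-mode automatic differentiation, in which every value carries its own derivative so that a single model run yields the sensitivity. The dual-number class and toy model below are hypothetical stand-ins, not the actual ORNL tools:

```python
class Dual:
    """A value bundled with its derivative with respect to one input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):          # product rule
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # toy "performance measure": response = 3*k^2 + 2*k
    return 3 * k * k + 2 * k

x = Dual(2.0, 1.0)        # seed d/dk = 1 on the input of interest
y = model(x)
print(y.val, y.der)        # value 16.0, sensitivity dy/dk = 6k + 2 = 14.0
```

The model code is written once; the sensitivity falls out of the augmented arithmetic, which is the appeal of compiler-level approaches for large FORTRAN codes.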

  7. Development of tools and models for computational fracture assessment

    International Nuclear Information System (INIS)

    Talja, H.; Santaoja, K.

    1998-01-01

    The aim of the work presented in this paper has been to develop and test new computational tools and theoretically more sound methods for fracture mechanical analysis. The applicability of the engineering integrity assessment system MASI for evaluation of piping components has been extended. The most important motivation for the theoretical development has been the well-known fundamental limitations in the validity of the J-integral, which limit its applicability in many important practical safety assessment cases. Examples are extensive plastic deformation, multimaterial structures and ascending loading paths (especially warm prestress, WPS). Further, the micromechanical Gurson model has been applied to several reactor pressure vessel materials. Special attention is paid to the transferability of Gurson model parameters from tensile test results to the prediction of ductile failure behaviour of cracked structures. (author)

  8. Edge effect modeling of small tool polishing in planetary movement

    Science.gov (United States)

    Li, Qi-xin; Ma, Zhen; Jiang, Bo; Yao, Yong-sheng

    2018-03-01

    As one of the most challenging problems in Computer Controlled Optical Surfacing (CCOS), the edge effect greatly affects polishing accuracy and efficiency. CCOS relies on a stable tool influence function (TIF); however, at the edge of the mirror surface, as the grinding head moves off the mirror, the contact area and pressure distribution change, resulting in a non-linear change of the TIF that leads to tilting or sagging at the mirror edge. In order to reduce these adverse effects and improve polishing accuracy and efficiency, we used finite element simulation to analyze the pressure distribution at the mirror edge and combined it with an improved traditional method to establish a new model. The new method fully considers the non-uniformity of the pressure distribution. After modeling the TIFs at different locations, the description and prediction of the edge effects are realized, which has positive significance for the control and suppression of edge effects.
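    A crude way to see why an overhanging tool perturbs the TIF is Preston's equation, dz/dt = K·P·V: when part of the tool leaves the mirror, the same load is carried by a smaller contact area and the local pressure rises. The uniform redistribution below is a deliberately simple stand-in for the paper's finite-element pressure field; all numbers are illustrative:

```python
def removal_rate(K, pressure, velocity):
    """Preston removal rate at one point of the contact zone."""
    return K * pressure * velocity

def edge_pressure(nominal_p, overhang_fraction):
    """Mean contact pressure when a fraction of the tool overhangs
    the edge: constant load over a reduced contact area."""
    contact_fraction = 1.0 - overhang_fraction
    return nominal_p / contact_fraction

p_inner = 1.0e4                        # Pa, tool fully on the mirror
p_edge = edge_pressure(p_inner, 0.3)   # 30% of the tool off the edge
K, v = 1e-12, 0.5                      # Preston constant, relative speed

# Higher edge pressure -> locally faster removal -> turned-down edge,
# which is the "tilting or sagging" the paper models in detail.
print(removal_rate(K, p_edge, v) > removal_rate(K, p_inner, v))   # True
```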

  9. MTK: An AI tool for model-based reasoning

    Science.gov (United States)

    Erickson, William K.; Schwartz, Mary R.

    1987-01-01

    A 1988 goal for the Systems Autonomy Demonstration Project Office of the NASA Ames Research Center is to apply model-based representation and reasoning techniques in a knowledge-based system that will provide monitoring, fault diagnosis, control and trend analysis of the space station Thermal Management System (TMS). A number of issues raised during the development of the first prototype system inspired the design and construction of a model-based reasoning tool called MTK, which was used in the building of the second prototype. These issues are outlined, along with examples from the thermal system to highlight the motivating factors behind them. An overview of the capabilities of MTK is given.

  10. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, to provide a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
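    Two of the ancillary quantities named in objective (3), Froude number and stream power, are direct calculations from section data. The values below are illustrative placeholders, not Texas gauge data:

```python
import math

def froude_number(velocity, hydraulic_depth, g=9.81):
    """Fr = V / sqrt(g*D); Fr < 1 is subcritical, Fr > 1 supercritical."""
    return velocity / math.sqrt(g * hydraulic_depth)

def stream_power(discharge, slope, rho=1000.0, g=9.81):
    """Cross-section stream power per unit length (W/m): rho * g * Q * S."""
    return rho * g * discharge * slope

fr = froude_number(velocity=1.2, hydraulic_depth=2.0)   # ~0.27, subcritical
omega = stream_power(discharge=150.0, slope=0.0005)     # W/m of channel
```

A modeled velocity implying, say, Fr > 1 at a mild-slope lowland crossing is exactly the kind of "way off" result the empirical distributions are meant to flag.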

  11. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    As port performance becomes increasingly correlated with national competitiveness, the issue of port performance evaluation has gained significance. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for improving the port’s performance to be more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is further developed using MATLAB and Simulink based on queuing theory.
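    The queuing-theory core of such a microsimulation can be sketched as a stochastic multi-berth (M/M/c-style) event simulation. The paper's implementation is in MATLAB and Simulink; this Python sketch, with made-up arrival and service rates, is only illustrative:

```python
import random

def simulate_port(n_ships, berths, arrival_rate, service_rate, seed=42):
    """Mean waiting time (hours) of n_ships at a terminal with identical
    berths, Poisson arrivals, exponential service, first-come-first-served."""
    rng = random.Random(seed)
    berth_free_at = [0.0] * berths          # next time each berth is free
    t, total_wait = 0.0, 0.0
    for _ in range(n_ships):
        t += rng.expovariate(arrival_rate)  # next ship arrival
        b = min(range(berths), key=lambda i: berth_free_at[i])
        start = max(t, berth_free_at[b])    # wait if all berths busy
        total_wait += start - t
        berth_free_at[b] = start + rng.expovariate(service_rate)
    return total_wait / n_ships

# Utilisation rho = lambda / (c * mu) = 0.8: stable but congested.
w = simulate_port(n_ships=20000, berths=2, arrival_rate=0.8, service_rate=0.5)
```

Repeating the run over parameter draws taken from observed distributions, rather than fixed values, is what separates this stochastic approach from a deterministic evaluation.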

  12. MODEL CAR TRANSPORT SYSTEM - MODERN ITS EDUCATION TOOL

    Directory of Open Access Journals (Sweden)

    Karel Bouchner

    2017-12-01

    The model car transport system is a laboratory intended for practical development in the area of motor traffic. It is also an important education tool for students’ hands-on training, enabling students to test the results of their own studies. The main part of the model car transportation network is a model at a scale of 1:87 (HO), based on component units of the FALLER Car system, e.g. cars, traffic lights, carriageway, parking spaces, stop sections, branch-off junctions, sensors and control sections. The model enables the simulation of real traffic situations. It includes motor traffic in a city, in a small village, and on a carriageway between a city and a village, including a railway crossing. The traffic infrastructure includes different kinds of intersections, such as T-junctions, a classic four-way crossroad and a four-way traffic circle, with and without traffic light control. Another important part of the model is a segment of highway which includes an elevated crossing with highway approaches and exits.

  13. Thermodynamics of a Compressible Maier-Saupe Model Based on the Self-Consistent Field Theory of Wormlike Polymer

    Directory of Open Access Journals (Sweden)

    Ying Jiang

    2017-02-01

    This paper presents a theoretical formalism for describing systems of semiflexible polymers, which can have density variations due to finite compressibility and exhibit an isotropic-nematic transition. The molecular architecture of the semiflexible polymers is described by a continuum wormlike-chain model. The non-bonded interactions are described through a functional of two collective variables, the local density and local segmental orientation tensor. In particular, the functional depends quadratically on local density variations and includes a Maier–Saupe-type term to deal with the orientational ordering. The specified density dependence stems from a free energy expansion, where the free energy of an isotropic and homogeneous homopolymer melt at some fixed density serves as a reference state. Using this framework, a self-consistent field theory is developed, which produces a Helmholtz free energy that can be used for the calculation of the thermodynamics of the system. The thermodynamic properties are analysed as functions of the compressibility of the model, for values of the compressibility realizable in mesoscopic simulations with soft interactions and in actual polymeric materials.
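    The Maier–Saupe-type orientational term admits the classic self-consistency condition S = ⟨P2(cos θ)⟩, solvable by fixed-point iteration. The sketch below is the textbook single-site version of that condition, not the paper's full wormlike-chain SCFT with density coupling:

```python
import math

def p2(x):
    """Second Legendre polynomial, P2(x) = (3x^2 - 1)/2."""
    return 0.5 * (3.0 * x * x - 1.0)

def average_p2(coupling, S, n=2000):
    """<P2> under the mean-field distribution exp(coupling * S * P2(cos t)),
    integrated over cos(theta) in (0, 1) by the midpoint rule."""
    num = den = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        w = math.exp(coupling * S * p2(x))
        num += p2(x) * w
        den += w
    return num / den

def solve_order_parameter(coupling, S0=0.5, iters=100):
    """Fixed-point iteration S <- <P2>_S for the Maier-Saupe equation."""
    S = S0
    for _ in range(iters):
        S = average_p2(coupling, S)
    return S

S_nematic = solve_order_parameter(coupling=10.0)   # well above the transition
S_isotropic = solve_order_parameter(coupling=1.0)  # decays to S ~ 0
```

Above the isotropic-nematic transition the iteration settles on a finite order parameter; below it, only S = 0 survives, which is the transition the free-energy functional captures.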

  14. Three-dimensional self-consistent radiation transport model for the fluid simulation of plasma display panel cell

    International Nuclear Information System (INIS)

    Kim, H.C.; Yang, S.S.; Lee, J.K.

    2003-01-01

    In plasma display panels (PDPs), resonance radiation trapping is one of the important processes. In order to incorporate this effect in a PDP cell, a three-dimensional radiation transport model is self-consistently coupled with a fluid simulation. This model is compared with the conventional trapping factor method in gas mixtures of neon and xenon. It shows differences in the time evolutions of the spatial profile and the total number of resonant excited states, especially in the afterglow. The generation rates of UV light are also compared for the two methods. The visible photon flux reaching the output window from the phosphor layers, as well as the total UV photon flux arriving at the phosphor layer from the plasma region, are calculated for resonant and nonresonant excited species. From these calculations, the time-averaged spatial profiles of the UV flux on the phosphor layers and of the visible photon flux through the output window are obtained. Finally, the energy-efficiency diagram and the contribution of each UV component are shown.

  15. Development of a Self-Consistent Model of Plutonium Sorption: Quantification of Sorption Enthalpy and Ligand-Promoted Dissolution

    Energy Technology Data Exchange (ETDEWEB)

    Powell, Brian [Clemson Univ., SC (United States); Kaplan, Daniel I [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Arai, Yuji [Univ. of Illinois, Urbana-Champaign, IL (United States); Becker, Udo [Univ. of Michigan, Ann Arbor, MI (United States); Ewing, Rod [Stanford Univ., CA (United States)

    2016-12-29

    This university-led SBR project is a collaboration led by Dr. Brian Powell (Clemson University) with co-principal investigators Dan Kaplan (Savannah River National Laboratory), Yuji Arai (presently at the University of Illinois), Udo Becker (U of Michigan) and Rod Ewing (presently at Stanford University). Hypothesis: The underlying hypothesis of this work is that strong interactions of plutonium with mineral surfaces are due to formation of inner sphere complexes with a limited number of high-energy surface sites, which results in sorption hysteresis where Pu(IV) is the predominant sorbed oxidation state. The energetic favorability of the Pu(IV) surface complex is strongly influenced by positive sorption entropies, which are mechanistically driven by displacement of solvating water molecules from the actinide and mineral surface during sorption. Objectives: The overarching objective of this work is to examine Pu(IV) and Pu(V) sorption to pure metal (oxyhydr)oxide minerals and sediments using variable temperature batch sorption, X-ray absorption spectroscopy, electron microscopy, and quantum-mechanical and empirical-potential calculations. The data will be compiled into a self-consistent surface complexation model. The novelty of this effort lies largely in the manner the information from these measurements and calculations will be combined into a model that will be used to evaluate the thermodynamics of plutonium sorption reactions as well as predict sorption of plutonium to sediments from DOE sites using a component additivity approach.

  16. Self-consistent modelling of lattice strains during the in-situ tensile loading of twinning induced plasticity steel

    International Nuclear Information System (INIS)

    Saleh, Ahmed A.; Pereloma, Elena V.; Clausen, Bjørn; Brown, Donald W.; Tomé, Carlos N.; Gazder, Azdiar A.

    2014-01-01

    The evolution of lattice strains in a fully recrystallised Fe–24Mn–3Al–2Si–1Ni–0.06C TWinning Induced Plasticity (TWIP) steel subjected to uniaxial tensile loading up to a true strain of ∼35% was investigated via in-situ neutron diffraction. Typical of fcc elastic and plastic anisotropy, the {111} and {200} grain families record the lowest and highest lattice strains, respectively. Using modelling cases with and without latent hardening, the recently extended Elasto-Plastic Self-Consistent model successfully predicted the macroscopic stress–strain response, the evolution of lattice strains and the development of crystallographic texture. Compared to the isotropic hardening case, latent hardening did not have a significant effect on lattice strains and returned a relatively faster development of a stronger 〈111〉 and a weaker 〈100〉 double fibre parallel to the tensile axis. Close correspondence between the experimental lattice strains and those predicted using particular orientations embedded within a random aggregate was obtained. The result suggests that the exact orientations of the surrounding aggregate have a weak influence on the lattice strain evolution

  17. An artificial intelligence tool for complex age-depth models

    Science.gov (United States)

    Bradley, E.; Anderson, K. A.; de Vesine, L. R.; Lai, V.; Thomas, M.; Nelson, T. H.; Weiss, I.; White, J. W. C.

    2017-12-01

    CSciBox is an integrated software system for age modeling of paleoenvironmental records. It incorporates an array of data-processing and visualization facilities, ranging from 14C calibrations to sophisticated interpolation tools. Using CSciBox's GUI, a scientist can build custom analysis pipelines by composing these built-in components or adding new ones. Alternatively, she can employ CSciBox's automated reasoning engine, Hobbes, which uses AI techniques to perform an in-depth, autonomous exploration of the space of possible age-depth models and presents the results—both the models and the reasoning that was used in constructing and evaluating them—to the user for her inspection. Hobbes accomplishes this using a rulebase that captures the knowledge of expert geoscientists, which was collected over the course of more than 100 hours of interviews. It works by using these rules to generate arguments for and against different age-depth model choices for a given core. Given a marine-sediment record containing uncalibrated 14C dates, for instance, Hobbes tries CALIB-style calibrations using a choice of IntCal curves, with reservoir age correction values chosen from the 14CHRONO database using the lat/long information provided with the core, and finally composes the resulting age points into a full age model using different interpolation methods. It evaluates each model—e.g., looking for outliers or reversals—and uses that information to guide the next steps of its exploration, and presents the results to the user in human-readable form. The most powerful of CSciBox's built-in interpolation methods is BACON, a Bayesian sedimentation-rate algorithm—a powerful but complex tool that can be difficult to use. Hobbes adjusts BACON's many parameters autonomously to match the age model to the expectations of expert geoscientists, as captured in its rulebase. It then checks the model against the data and iteratively re-calculates until it is a good fit to the data.

  18. Watershed modeling tools and data for prognostic and diagnostic

    Science.gov (United States)

    Chambel-Leitao, P.; Brito, D.; Neves, R.

    2009-04-01

    When eutrophication is considered an important process to control, this can be accomplished by reducing nitrogen and phosphorus losses from both point and nonpoint sources and by assessing the effectiveness of the pollution reduction strategy. HARP-NUT guidelines (Guidelines on Harmonized Quantification and Reporting Procedures for Nutrients) (Borgvang & Selvik, 2000) are presented by OSPAR as the best common quantification and reporting procedures for calculating the reduction of nutrient inputs. In 2000, OSPAR adopted the HARP-NUT guidelines on a trial basis. They were intended to serve as a tool for OSPAR Contracting Parties to report, in a harmonized manner, their different commitments, present or future, with regard to nutrients under the OSPAR Convention, in particular the "Strategy to Combat Eutrophication". HARP-NUT Guidelines (Borgvang and Selvik, 2000; Schoumans, 2003) were developed to quantify and report on the individual sources of nitrogen and phosphorus discharges/losses to surface waters (Source Orientated Approach). These results can be compared to the total riverine nitrogen and phosphorus loads measured at downstream monitoring points (Load Orientated Approach), as load reconciliation. Nitrogen and phosphorus retention in river systems represents the connecting link between the "Source Orientated Approach" and the "Load Orientated Approach". Both approaches are necessary for verification purposes and both may be needed for providing the information required for the various commitments. Guidelines 2, 3, 4 and 5 are mainly concerned with source estimation. They present a set of simple calculations that allow the estimation of the origin of loads. Guideline 6 is a particular case where the application of a model is advised, in order to estimate the sources of nutrients from diffuse sources associated with land use/land cover. The model chosen for this was the SWAT model (Arnold & Fohrer, 2005) because it is suggested in guideline 6 and because it

  19. Efficient implementation of three-dimensional reference interaction site model self-consistent-field method: application to solvatochromic shift calculations.

    Science.gov (United States)

    Minezawa, Noriyuki; Kato, Shigeki

    2007-02-07

    The authors present an implementation of the three-dimensional reference interaction site model self-consistent-field (3D-RISM-SCF) method. First, they introduce a robust and efficient algorithm for solving the 3D-RISM equation. The algorithm is a hybrid of the Newton-Raphson and Picard methods. The Jacobian matrix is analytically expressed in a computationally useful form. Second, they discuss the solute-solvent electrostatic interaction. For the solute to solvent route, the electrostatic potential (ESP) map on a 3D grid is constructed directly from the electron density. The charge fitting procedure is not required to determine the ESP. For the solvent to solute route, the ESP acting on the solute molecule is derived from the solvent charge distribution obtained by solving the 3D-RISM equation. Matrix elements of the solute-solvent interaction are evaluated by the direct numerical integration. A remarkable reduction in the computational time is observed in both routes. Finally, the authors implement the first derivatives of the free energy with respect to the solute nuclear coordinates. They apply the present method to "solute" water and formaldehyde in aqueous solvent using the simple point charge model, and the results are compared with those from other methods: the six-dimensional molecular Ornstein-Zernike SCF, the one-dimensional site-site RISM-SCF, and the polarizable continuum model. The authors also calculate the solvatochromic shifts of acetone, benzonitrile, and nitrobenzene using the present method and compare them with the experimental and other theoretical results.
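    The hybrid Newton-Raphson/Picard strategy can be illustrated in miniature on a scalar fixed-point equation; in 3D-RISM the scalar unknown is replaced by correlation functions on a 3D grid and the scalar derivative by the analytic Jacobian described above. The example equation (cos x = x) is purely illustrative:

```python
import math

def hybrid_solve(g, dg, x0, picard_steps=5, tol=1e-12, max_newton=50):
    """Solve g(x) = x: damped Picard steps stabilise the iterate far from
    the solution, then Newton-Raphson on f(x) = g(x) - x finishes with
    quadratic convergence."""
    x = x0
    for _ in range(picard_steps):          # damped fixed-point iterations
        x = 0.5 * x + 0.5 * g(x)
    for _ in range(max_newton):            # Newton on f(x) = g(x) - x = 0
        f = g(x) - x
        if abs(f) < tol:
            break
        x -= f / (dg(x) - 1.0)             # scalar "Jacobian" of f
    return x

root = hybrid_solve(g=math.cos, dg=lambda x: -math.sin(x), x0=0.0)
# fixed point of cos(x) = x, approximately 0.739085
```

Pure Picard alone converges slowly near the solution, and pure Newton can diverge from a poor initial guess; the hybrid, as in the 3D-RISM implementation, takes the robust part of one and the fast part of the other.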

  20. Toward a consistent model for strain accrual and release for the New Madrid Seismic Zone, central United States

    Science.gov (United States)

    Hough, S.E.; Page, M.

    2011-01-01

    At the heart of the conundrum of seismogenesis in the New Madrid Seismic Zone is the apparently substantial discrepancy between low strain rate and high recent seismic moment release. In this study we revisit the magnitudes of the four principal 1811–1812 earthquakes using intensity values determined from individual assessments from four experts. Using these values and the grid search method of Bakun and Wentworth (1997), we estimate magnitudes around 7.0 for all four events, values that are significantly lower than previously published magnitude estimates based on macroseismic intensities. We further show that the strain rate predicted from postglacial rebound is sufficient to produce a sequence with the moment release of one Mmax 6.8 every 500 years, a rate that is much lower than previous estimates of late Holocene moment release. However, Mw 6.8 is at the low end of the uncertainty range inferred from analysis of intensities for the largest 1811–1812 event. We show that Mw 6.8 is also a reasonable value for the largest main shock given a plausible rupture scenario. One can also construct a range of consistent models that permit a somewhat higher Mmax, with a longer average recurrence rate. It is thus possible to reconcile predicted strain and seismic moment release rates with alternative models: one in which 1811–1812 sequences occur every 500 years, with the largest events being Mmax ∼6.8, or one in which sequences occur, on average, less frequently, with Mmax of ∼7.0. Both models predict that the late Holocene rate of activity will continue for the next few to 10 thousand years.
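    The moment-rate bookkeeping behind the two alternative models can be restated with the standard Hanks-Kanamori moment-magnitude relation; the sketch below only quantifies the trade-off between Mmax and recurrence interval, and the exact recurrence values are illustrative, not the paper's:

```python
# Hanks-Kanamori: Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m.

def moment_from_mw(mw):
    """Seismic moment M0 in N*m."""
    return 10.0 ** (1.5 * mw + 9.1)

def moment_rate(mw, recurrence_years):
    """Long-term moment release rate of one Mmax event per cycle."""
    return moment_from_mw(mw) / recurrence_years

# Scenario A: one Mmax ~6.8 sequence every 500 years.
rate_68 = moment_rate(6.8, 500.0)

# Recurrence interval at which one Mw 7.0 per cycle releases the same
# long-term rate: ~1000 years, since 0.2 magnitude units is a factor
# of 10**0.3 ~ 2 in moment.
equiv_recurrence = moment_from_mw(7.0) / rate_68
print(round(equiv_recurrence))
```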

  1. Quasiparticles and thermodynamical consistency

    International Nuclear Information System (INIS)

    Shanenko, A.A.; Biro, T.S.; Toneev, V.D.

    2003-01-01

    A brief and simple introduction to the problem of thermodynamical consistency is given. The thermodynamical consistency relations, which should be taken into account when constructing a quasiparticle model, are found in a general manner from the finite-temperature extension of the Hellmann-Feynman theorem. Restrictions following from these relations are illustrated by simple physical examples. (author)

  2. Physical inversion of the full IASI spectra: Assessment of atmospheric parameters retrievals, consistency of spectroscopy and forward modelling

    International Nuclear Information System (INIS)

    Liuzzi, G.; Masiello, G.; Serio, C.; Venafra, S.; Camy-Peyret, C.

    2016-01-01

    Spectra observed by the Infrared Atmospheric Sounder Interferometer (IASI) have been used to assess both retrievals and the spectral quality and consistency of current forward models and spectroscopic databases for atmospheric gas line and continuum absorption. The analysis has been performed with thousands of observed spectra over sea surface in the Pacific Ocean close to the Mauna Loa (Hawaii) validation station. A simultaneous retrieval for surface temperature, atmospheric temperature, H2O, HDO and O3 profiles, and gas average column abundances of CO2, CO, CH4, SO2, N2O, HNO3, NH3, OCS and CF4 has been performed and compared to in situ observations. The retrieval system considers the full IASI spectrum (all 8461 spectral channels in the range 645–2760 cm⁻¹). We have found that the average column amount of atmospheric greenhouse gases can be retrieved with a precision better than 1% in most cases. The analysis of spectral residuals shows that, after inversion, they are generally reduced to within the IASI radiometric noise. However, larger residuals still appear for many of the most abundant gases, namely H2O, CH4 and CO2. The H2O ν2 spectral region is in general warmer (higher radiance) than observations. The CO2 ν2 and N2O/CO2 ν3 spectral regions now show a consistent behavior for channels which are probing the troposphere. Updates in CH4 spectroscopy do not seem to improve the residuals. The effect of isotopic fractionation of HDO is evident in the 2500–2760 cm⁻¹ region and in the atmospheric window around 1200 cm⁻¹. - Highlights: • This is the first work that uses the full IASI spectrum. This aspect is new and unique. • Simultaneous retrieval of the average amounts of CO2, N2O, CO, CH4, SO2, HNO3, NH3, OCS and CF4; T, H2O, HDO and O3 profiles; and Ts. • Assessment of spectroscopy consistency over the full IASI spectrum (645 to 2760 cm⁻¹). • Two-year record of IASI retrievals are available on request, compared

  3. Nucleation, growth and transport modelling of helium bubbles under nuclear irradiation in lead–lithium with the self-consistent nucleation theory and surface tension corrections

    International Nuclear Information System (INIS)

    Fradera, J.; Cuesta-López, S.

    2013-01-01

Highlights: • The work presented in this manuscript provides a reliable computational tool to quantify the complex He phenomena in a HCLL blanket. • A model based on the self-consistent nucleation theory (SCT) is presented. It includes radiation-induced nucleation modelling and surface tension corrections. • The reported results reinforce the need for experiments to determine nucleation conditions and bubble transport parameters in LM breeders. • Our findings and model provide good qualitative insight into the helium nucleation phenomenon in LM systems for fusion technology and can be used to identify key system parameters. -- Abstract: Helium (He) nucleation in liquid metal breeding blankets of a DT fusion reactor may have a significant impact on system design, safety and operation. Large He production rates are expected due to the tritium (T) fuel self-sufficiency requirement, as both He and T are produced at the same rate. Low He solubility, local high concentrations, radiation damage and fluid discontinuities, among other phenomena, may yield the necessary conditions for He nucleation. Hence, He nucleation may have a significant impact on T inventory and may lower the T breeding ratio. A model based on the self-consistent nucleation theory (SCT) with a surface tension curvature correction model has been implemented in the OpenFOAM® CFD code. A single-parameter modification of the necessary nucleation condition is proposed in order to take into account all the nucleation-triggering phenomena, especially radiation-induced nucleation. Moreover, the kinetic growth model has been adapted to allow for the transition from a critical cluster to a macroscopic bubble through a diffusion growth process. Limitations and capabilities of the models are shown by means of zero-dimensional simulations and sensitivity analyses to key parameters under HCLL breeding unit conditions. Results provide good qualitative insight into the helium nucleation

  4. Nucleation, growth and transport modelling of helium bubbles under nuclear irradiation in lead–lithium with the self-consistent nucleation theory and surface tension corrections

    Energy Technology Data Exchange (ETDEWEB)

    Fradera, J., E-mail: jfradera@ubu.es; Cuesta-López, S., E-mail: scuesta@ubu.es

    2013-12-15

Highlights: • The work presented in this manuscript provides a reliable computational tool to quantify the complex He phenomena in a HCLL blanket. • A model based on the self-consistent nucleation theory (SCT) is presented. It includes radiation-induced nucleation modelling and surface tension corrections. • The reported results reinforce the need for experiments to determine nucleation conditions and bubble transport parameters in LM breeders. • Our findings and model provide good qualitative insight into the helium nucleation phenomenon in LM systems for fusion technology and can be used to identify key system parameters. -- Abstract: Helium (He) nucleation in liquid metal breeding blankets of a DT fusion reactor may have a significant impact on system design, safety and operation. Large He production rates are expected due to the tritium (T) fuel self-sufficiency requirement, as both He and T are produced at the same rate. Low He solubility, local high concentrations, radiation damage and fluid discontinuities, among other phenomena, may yield the necessary conditions for He nucleation. Hence, He nucleation may have a significant impact on T inventory and may lower the T breeding ratio. A model based on the self-consistent nucleation theory (SCT) with a surface tension curvature correction model has been implemented in the OpenFOAM® CFD code. A single-parameter modification of the necessary nucleation condition is proposed in order to take into account all the nucleation-triggering phenomena, especially radiation-induced nucleation. Moreover, the kinetic growth model has been adapted to allow for the transition from a critical cluster to a macroscopic bubble through a diffusion growth process. Limitations and capabilities of the models are shown by means of zero-dimensional simulations and sensitivity analyses to key parameters under HCLL breeding unit conditions. Results provide good qualitative insight into the helium

  5. Using Modeling Tools to Better Understand Permafrost Hydrology

    Directory of Open Access Journals (Sweden)

    Clément Fabre

    2017-06-01

Full Text Available Modification of the hydrological cycle and, subsequently, of other global cycles is expected in Arctic watersheds owing to global change. Future climate scenarios imply widespread permafrost degradation caused by an increase in air temperature, and the expected effect on permafrost hydrology is immense. This study aims at analyzing and quantifying daily water transfer in the largest Arctic river system, the Yenisei River in central Siberia, Russia, which is partially underlain by permafrost. The semi-distributed SWAT (Soil and Water Assessment Tool) hydrological model has been calibrated and validated at a daily time step in historical discharge simulations for the 2003–2014 period. The model parameters have been adjusted to capture the hydrological features of permafrost. SWAT is shown to be capable of estimating water fluxes at a daily time step, especially during unfrozen periods, once specific climatic and soil conditions adapted to a permafrost watershed are considered. The model simulates an average annual contribution to runoff of 263 millimeters per year (mm yr−1), distributed as 152 mm yr−1 (58%) of surface runoff, 103 mm yr−1 (39%) of lateral flow and 8 mm yr−1 (3%) of return flow from the aquifer. These results are integrated over a reduced basin area downstream of large dams and are closer to observations than previous modeling exercises.
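The runoff partition quoted above can be sanity-checked with a few lines of arithmetic. This is only a consistency check of the numbers in the abstract (component values in mm yr−1), not part of the SWAT model itself.

```python
# Runoff components for the Yenisei SWAT run, as reported in the abstract.
components = {"surface runoff": 152, "lateral flow": 103, "return flow": 8}  # mm/yr

total = sum(components.values())
shares = {name: round(100 * value / total) for name, value in components.items()}

print(total)   # 263 mm/yr, the reported average annual contribution to runoff
print(shares)  # {'surface runoff': 58, 'lateral flow': 39, 'return flow': 3}
```

The recomputed percentages reproduce the 58/39/3 split reported in the abstract.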

  6. Unleashing spatially distributed ecohydrology modeling using Big Data tools

    Science.gov (United States)

    Miles, B.; Idaszak, R.

    2015-12-01

Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well
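The asynchronous write path described above (model output buffered through a message queue and drained into a datastore) is a standard producer-consumer pattern and can be sketched in a few lines. In this minimal sketch an in-memory stub stands in for the Apache Cassandra cluster, and all names are illustrative assumptions rather than PatchDB's actual API.

```python
import queue
import threading

class StubDatastore:
    """Stands in for a Cassandra cluster; maps (patch_id, timestep) -> values."""
    def __init__(self):
        self.rows = {}
        self.lock = threading.Lock()

    def insert(self, patch_id, timestep, values):
        with self.lock:
            self.rows[(patch_id, timestep)] = values

def writer(q, store):
    """Consume queued model output until a None sentinel arrives."""
    while True:
        item = q.get()
        if item is None:
            break
        patch_id, timestep, values = item
        store.insert(patch_id, timestep, values)
        q.task_done()

store = StubDatastore()
q = queue.Queue(maxsize=1000)  # a bounded queue applies back-pressure to the model
t = threading.Thread(target=writer, args=(q, store))
t.start()

# The simulation thread enqueues output without blocking on storage latency.
for step in range(3):
    q.put(("patch-42", step, {"streamflow": 0.1 * step}))
q.put(None)  # sentinel: no more output
t.join()
print(len(store.rows))  # 3
```

Decoupling the simulation from storage this way is what lets the model keep writing at full speed while the cluster absorbs inserts at its own pace.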

  7. Uranyl adsorption and surface speciation at the imogolite-water interface: Self-consistent spectroscopic and surface complexation models

    Science.gov (United States)

    Arai, Y.; McBeath, M.; Bargar, J.R.; Joye, J.; Davis, J.A.

    2006-01-01

Macro- and molecular-scale knowledge of uranyl (U(VI)) partitioning reactions with soil/sediment mineral components is important in predicting U(VI) transport processes in the vadose zone and aquifers. In this study, U(VI) reactivity and surface speciation on a poorly crystalline aluminosilicate mineral, synthetic imogolite, were investigated using batch adsorption experiments, X-ray absorption spectroscopy (XAS), and surface complexation modeling. U(VI) uptake on imogolite surfaces was greatest at pH ~7-8 (I = 0.1 M NaNO3 solution, suspension density = 0.4 g/L, [U(VI)]_i = 0.01-30 μM, equilibration with air). Uranyl uptake decreased with increasing sodium nitrate concentration in the range from 0.02 to 0.5 M. XAS analyses show that two U(VI) inner-sphere (bidentate mononuclear coordination on outer-wall aluminol groups) and one outer-sphere surface species are present on the imogolite surface, and the distribution of the surface species is pH dependent. At pH 8.8, bis-carbonato inner-sphere and tris-carbonato outer-sphere surface species are present. At pH 7, bis- and non-carbonato inner-sphere surface species co-exist, and the fraction of bis-carbonato species increases slightly with increasing I (0.1-0.5 M). At pH 5.3, U(VI) non-carbonato bidentate mononuclear surface species predominate (69%). A triple layer surface complexation model was developed with surface species that are consistent with the XAS analyses and macroscopic adsorption data. The proton stoichiometry of surface reactions was determined from both the pH dependence of U(VI) adsorption data in pH regions of surface species predominance and from bond-valence calculations. The bis-carbonato species required a distribution of surface charge between the surface and β charge planes in order to be consistent with both the spectroscopic and macroscopic adsorption data. This research indicates that U(VI)-carbonato ternary species on poorly crystalline aluminosilicate mineral surfaces may be important in

  8. A Simple Evacuation Modeling and Simulation Tool for First Responders

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Daniel B [ORNL; Payne, Patricia W [ORNL

    2015-01-01

    Although modeling and simulation of mass evacuations during a natural or man-made disaster is an on-going and vigorous area of study, tool adoption by front-line first responders is uneven. Some of the factors that account for this situation include cost and complexity of the software. For several years, Oak Ridge National Laboratory has been actively developing the free Incident Management Preparedness and Coordination Toolkit (IMPACT) to address these issues. One of the components of IMPACT is a multi-agent simulation module for area-based and path-based evacuations. The user interface is designed so that anyone familiar with typical computer drawing tools can quickly author a geospatially-correct evacuation visualization suitable for table-top exercises. Since IMPACT is designed for use in the field where network communications may not be available, quick on-site evacuation alternatives can be evaluated to keep pace with a fluid threat situation. Realism is enhanced by incorporating collision avoidance into the simulation. Statistics are gathered as the simulation unfolds, including most importantly time-to-evacuate, to help first responders choose the best course of action.
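The simulation loop such a module runs can be illustrated with a toy grid-based sketch: agents step toward an exit, blocked cells force them to wait (a crude form of collision avoidance), and the tick counter yields time-to-evacuate. The grid, movement rule, and all names are illustrative assumptions, not IMPACT's actual implementation.

```python
import math

def step_agents(agents, exit_pos, occupied):
    """Move each agent one cell toward the exit, skipping occupied cells
    (a crude form of collision avoidance)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    new_positions = []
    for (x, y) in agents:
        # Rank candidate cells by distance to the exit.
        candidates = sorted(
            [(x + dx, y + dy) for dx, dy in moves],
            key=lambda p: math.dist(p, exit_pos),
        )
        for cell in candidates:
            if cell not in occupied and math.dist(cell, exit_pos) < math.dist((x, y), exit_pos):
                occupied.discard((x, y))
                occupied.add(cell)
                new_positions.append(cell)
                break
        else:
            new_positions.append((x, y))  # blocked: wait one tick
    return new_positions

# Two agents heading to an exit at (0, 0); time-to-evacuate is the tick count.
agents = [(3, 0), (2, 0)]
occupied = set(agents)
ticks = 0
while any(a != (0, 0) for a in agents):
    agents = step_agents(agents, (0, 0), occupied)
    # Agents that reach the exit leave the grid and free their cell.
    for a in [a for a in agents if a == (0, 0)]:
        occupied.discard(a)
    ticks += 1
print(ticks)  # 4
```

Note that the lead agent must wait one tick while the cell ahead of it is occupied, which is exactly the congestion effect that makes time-to-evacuate longer than free-flow travel time.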

  9. Using urban forest assessment tools to model bird habitat potential

    Science.gov (United States)

    Lerman, Susannah B.; Nislow, Keith H.; Nowak, David J.; DeStefano, Stephen; King, David I.; Jones-Farrand, D. Todd

    2014-01-01

The alteration of forest cover and the replacement of native vegetation with buildings, roads, exotic vegetation, and other urban features pose one of the greatest threats to global biodiversity. As more land becomes slated for urban development, identifying effective urban forest wildlife management tools becomes paramount to ensure that the urban forest provides habitat to sustain bird and other wildlife populations. The primary goal of this study was to integrate wildlife suitability indices into an existing national urban forest assessment tool, i-Tree. We quantified available habitat characteristics of urban forests for ten northeastern U.S. cities and summarized bird-habitat relationships from the literature in terms of variables that were represented in the i-Tree datasets. With these data, we generated habitat suitability equations for nine bird species, representing a range of life history traits and conservation status, that predict habitat suitability from i-Tree data. We applied these equations to the urban forest datasets to calculate the overall habitat suitability for each city and the habitat suitability for different types of land use (e.g., residential, commercial, parkland) for each bird species. The proposed habitat models will help guide wildlife managers, urban planners, and landscape designers who require specific information, such as desirable habitat conditions within an urban management project, to help improve the suitability of urban forests for birds.
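A habitat suitability equation of the kind described can be sketched as a composite of normalized sub-indices. The variables, values, and geometric-mean construction below are illustrative assumptions in the spirit of the study, not the equations published for the nine species.

```python
# Hypothetical habitat-suitability sketch driven by i-Tree-like stand variables.
def suitability(percent_canopy, large_tree_share, native_share):
    """Return a 0-1 habitat suitability index as the geometric mean of
    three 0-1 sub-indices (a common HSI construction)."""
    subs = [percent_canopy / 100.0, large_tree_share, native_share]
    prod = 1.0
    for s in subs:
        prod *= max(0.0, min(1.0, s))  # clamp each sub-index to [0, 1]
    return prod ** (1.0 / len(subs))

# A hypothetical residential plot: 45% canopy cover, 30% of trees large,
# 80% native species.
hsi = suitability(45, 0.30, 0.80)
print(round(hsi, 2))  # 0.48
```

The geometric mean is a deliberate design choice in many HSI models: a near-zero score on any one requirement drags the whole index toward zero, which an arithmetic mean would mask.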

  10. Self-consistent modeling of induced magnetic field in Titan's atmosphere accounting for the generation of Schumann resonance

    Science.gov (United States)

    Béghin, Christian

    2015-02-01

This model is worked out within the framework of the physical mechanisms proposed in previous studies to account for the generation and observation of an atypical Schumann resonance (SR) during the descent of the Huygens probe through Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an extremely low frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere, from the ionopause down to the Moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be allowed by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, a first source of diffusion of the ELF magnetic components probably feeds one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with measurements of the Huygens wave-field strength. Pending future in situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.

  11. Intramolecular structures in a single copolymer chain consisting of flexible and semiflexible blocks: Monte Carlo simulation of a lattice model

    International Nuclear Information System (INIS)

    Martemyanova, Julia A; Ivanov, Victor A; Paul, Wolfgang

    2014-01-01

    We study conformational properties of a single multiblock copolymer chain consisting of flexible and semiflexible blocks. Monomer units of different blocks are equivalent in the sense of the volume interaction potential, but the intramolecular bending potential between successive bonds along the chain is different. We consider a single flexible-semiflexible regular multiblock copolymer chain with equal content of flexible and semiflexible units and vary the length of the blocks and the stiffness parameter. We perform flat histogram type Monte Carlo simulations based on the Wang-Landau approach and employ the bond fluctuation lattice model. We present here our data on different non-trivial globular morphologies which we have obtained in our model for different values of the block length and the stiffness parameter. We demonstrate that the collapse can occur in one or in two stages depending on the values of both these parameters and discuss the role of the inhomogeneity of intraglobular distributions of monomer units of both flexible and semiflexible blocks. For short block length and/or large stiffness the collapse occurs in two stages, because it goes through intermediate (meta-)stable structures, like a dumbbell shaped conformation. In such conformations the semiflexible blocks form a cylinder-like core, and the flexible blocks form two domains at both ends of such a cylinder. For long block length and/or small stiffness the collapse occurs in one stage, and in typical conformations the flexible blocks form a spherical core of a globule while the semiflexible blocks are located on the surface and wrap around this core.
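The flat-histogram idea behind the Wang-Landau approach can be sketched on a toy system with a known density of states; here N independent two-state units stand in for the bond-fluctuation polymer model, which is far too large for a few lines of code. The step counts, flatness criterion, and modification-factor schedule are illustrative choices, not those of the study.

```python
import math
import random

random.seed(2)

# Wang-Landau sketch: estimate the density of states g(E) for N independent
# spins where E = number of "up" spins, so exactly g(E) = C(N, E).
N = 8
spins = [0] * N
log_g = [0.0] * (N + 1)  # work with ln g(E) to avoid overflow
log_f = 1.0              # ln of the modification factor, halved stage by stage

E = sum(spins)
while log_f > 1e-5:
    hist = [0] * (N + 1)
    for _ in range(100000):
        i = random.randrange(N)
        E_new = E + (1 - 2 * spins[i])  # flipping spin i changes E by +/-1
        # Accept with probability min(1, g(E)/g(E_new)): flattens the histogram in E.
        if random.random() < math.exp(min(0.0, log_g[E] - log_g[E_new])):
            spins[i] ^= 1
            E = E_new
        log_g[E] += log_f  # penalize the current energy level
        hist[E] += 1
    # Crude flatness check before reducing the modification factor.
    if min(hist) > 0.8 * (sum(hist) / len(hist)):
        log_f /= 2
# Normalize so the estimated g(E) sums to the total number of states, 2^N.
offset = max(log_g)
Z = sum(math.exp(lg - offset) for lg in log_g)
g = [math.exp(lg - offset) * (2 ** N) / Z for lg in log_g]
print([round(x, 1) for x in g])
# exact answer: [1, 8, 28, 56, 70, 56, 28, 8, 1] (binomial coefficients C(8, E))
```

The same machinery, applied to the conformational energy of a lattice chain instead of a spin count, is what lets flat-histogram simulations sample both swollen and collapsed states of the copolymer in a single run.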

  12. A model of integration among prediction tools: applied study to road freight transportation

    Directory of Open Access Journals (Sweden)

    Henrique Dias Blois

Full Text Available Abstract This study has developed a scenario-analysis model that integrates decision-making tools for investment: prospective scenarios (Grumbach Method) and system dynamics (hard modeling), combined with multivariate analysis of expert opinion. Designed through scenario analysis and simulation, the model showed which events have the greatest impact on the object of study and highlighted the actions that could redirect the future of the analyzed system. Moreover, predictions can be developed from the generated scenarios. The model has been validated empirically with road freight transport data from the state of Rio Grande do Sul, Brazil. The results showed that the model contributes to investment analysis because it identifies the probabilities of events that affect decision making and identifies priorities for action, reducing uncertainty about the future. It also allows an interdisciplinary discussion correlating different areas of knowledge, which is fundamental when greater consistency in scenario building is desired.

  13. Prototype of Automated PLC Model Checking Using Continuous Integration Tools

    CERN Document Server

    Lettrich, Michael

    2015-01-01

To deal with the complexity of operating and supervising large-scale industrial installations at CERN, Programmable Logic Controllers (PLCs) are often used. A failure in these control systems can cause a disaster in terms of economic losses, environmental damage or human losses. Therefore the requirements on software quality are very high. To provide PLC developers with a way to verify proper functionality against requirements, a Java tool named PLCverif has been developed which encapsulates, and thus simplifies, the use of third-party model checkers. One of our goals in this project is to integrate PLCverif into the development process of PLC programs. When the developer changes the program, all requirements should be verified again, as a change to the code can produce side effects and violate one or more requirements. For that reason, PLCverif has been extended to work with Jenkins CI so that the verification cases are triggered automatically when the developer changes the PLC program. This prototype has been...

  14. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing and in the production stage. In this paper, we present the design and pilot establishment of a database that will assist all researchers who want to establish an ESR system. This database employs the relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interfaces developed for a desktop application and for the Web.
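A relational schema for such an ESR training database can be sketched with SQLite: one table for sound classes and one for recordings, linked by a foreign key so that training/test splits and per-class counts fall out of ordinary SQL queries. The table and column names below are illustrative assumptions in the spirit of the paper, not the published schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sound_class (
    class_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL UNIQUE            -- e.g. 'dog bark', 'siren'
);
CREATE TABLE recording (
    recording_id INTEGER PRIMARY KEY,
    class_id     INTEGER NOT NULL REFERENCES sound_class(class_id),
    file_path    TEXT NOT NULL,
    sample_rate  INTEGER NOT NULL,           -- Hz
    duration_s   REAL NOT NULL,
    split        TEXT CHECK (split IN ('train', 'test'))
);
""")
conn.execute("INSERT INTO sound_class (name) VALUES ('dog bark'), ('siren')")
conn.execute(
    "INSERT INTO recording (class_id, file_path, sample_rate, duration_s, split) "
    "VALUES (1, 'bark_001.wav', 44100, 2.5, 'train')"
)

# Per-class recording counts: the kind of bookkeeping query an ESR pipeline
# runs before training.
row = conn.execute(
    "SELECT s.name, COUNT(*) FROM recording r "
    "JOIN sound_class s ON s.class_id = r.class_id GROUP BY s.name"
).fetchone()
print(row)  # ('dog bark', 1)
```

Normalizing classes into their own table keeps labels consistent across collection sessions, which is one practical advantage of the relational model over a flat file listing.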

  15. Computational Tools To Model Halogen Bonds in Medicinal Chemistry.

    Science.gov (United States)

    Ford, Melissa Coates; Ho, P Shing

    2016-03-10

    The use of halogens in therapeutics dates back to the earliest days of medicine when seaweed was used as a source of iodine to treat goiters. The incorporation of halogens to improve the potency of drugs is now fairly standard in medicinal chemistry. In the past decade, halogens have been recognized as direct participants in defining the affinity of inhibitors through a noncovalent interaction called the halogen bond or X-bond. Incorporating X-bonding into structure-based drug design requires computational models for the anisotropic distribution of charge and the nonspherical shape of halogens, which lead to their highly directional geometries and stabilizing energies. We review here current successes and challenges in developing computational methods to introduce X-bonding into lead compound discovery and optimization during drug development. This fast-growing field will push further development of more accurate and efficient computational tools to accelerate the exploitation of halogens in medicinal chemistry.

  16. Tool-chain for online modeling of the LHC

    International Nuclear Information System (INIS)

    Mueller, G.J.; Buffat, X.; Fuchsberger, K.; Giovannozzi, M.; Redaelli, S.; Schmidt, F.

    2012-01-01

The control of high-intensity beams in a high-energy, superconducting machine with complex optics like the CERN Large Hadron Collider (LHC) is challenging not only from the design aspect but also for operation towards physics production. To support the LHC beam commissioning, efforts were devoted to the design and implementation of a software infrastructure aimed at using the computing power of the beam dynamics code MAD-X within the Java-based LHC control and measurement environment. Alongside interfaces to measurement data as well as to settings of the control system, the best available knowledge of the machine aperture and optics models is provided. In this paper, we present the status of the tool-chain and illustrate how it has been used during commissioning and operation of the LHC. Possible future implementations will be discussed. (authors)

  17. Standalone visualization tool for three-dimensional DRAGON geometrical models

    International Nuclear Information System (INIS)

    Lukomski, A.; McIntee, B.; Moule, D.; Nichita, E.

    2008-01-01

DRAGON is a neutron transport and depletion code able to solve one-, two- and three-dimensional problems. To date, DRAGON provides two visualization modules, able to represent two- and three-dimensional geometries, respectively. The two-dimensional visualization module generates a PostScript file, while the three-dimensional visualization module generates a MATLAB M-file with instructions for drawing the tracks in the DRAGON TRACKING data structure, which implicitly provide a representation of the geometry. The current work introduces a new, standalone tool based on the open-source Visualization Toolkit (VTK) software package, which allows the visualization of three-dimensional geometrical models by reading the DRAGON GEOMETRY data structure and generating an axonometric image that can be manipulated interactively by the user. (author)

  18. The Innsbruck/ESO sky models and telluric correction tools*

    Directory of Open Access Journals (Sweden)

    Kimeswenger S.

    2015-01-01

While ground-based astronomical observatories just have to correct for the line-of-sight integral of these effects, Čerenkov telescopes use the atmosphere as the primary detector. The measured radiation originates at lower altitudes and does not pass through the entire atmosphere. Thus, a decent knowledge of the profile of the atmosphere at any given time is required. The latter cannot be achieved by photometric measurements of stellar sources. We show here the capabilities of our sky background model and data reduction tools for ground-based optical/infrared telescopes. Furthermore, we discuss the feasibility of monitoring the atmosphere above any observing site, and thus the possible application of the method to Čerenkov telescopes.

  19. Modeling of tool path for the CNC sheet cutting machines

    Science.gov (United States)

    Petunin, Aleksandr A.

    2015-11-01

In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as discrete optimization problems (a generalized travelling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. To solve the GTSP, we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
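The dynamic-programming side of this approach can be illustrated with the classic Held-Karp recursion for the plain travelling salesman problem, where each node stands for a contour to be cut. The megalopolis structure and the additional cutting constraints of the GTSP formulation are omitted here, and the cost matrix is invented for the example.

```python
from itertools import combinations

def held_karp(dist):
    """Exact minimum tour cost via the Held-Karp dynamic programme.
    dist[i][j] is the tool travel cost between contours i and j;
    the tour starts and ends at node 0 (the machine's home position)."""
    n = len(dist)
    # dp[(S, j)] = cheapest path from 0 that visits bitmask S and ends at j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = 0
            for j in subset:
                S |= 1 << j
            for j in subset:
                prev = S & ~(1 << j)
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = 0
    for j in range(1, n):
        full |= 1 << j
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Four contours; symmetric travel-cost matrix (illustrative numbers).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(held_karp(dist))  # 23
```

The O(n^2 2^n) state space is why exact DP is only practical for modest numbers of contours; the megalopolis formulation cited in the abstract groups entry points per contour to keep the recursion tractable under precedence constraints.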

  20. Planning the network of gas pipelines through modeling tools

    Energy Technology Data Exchange (ETDEWEB)

    Sucupira, Marcos L.L.; Lutif Filho, Raimundo B. [Companhia de Gas do Ceara (CEGAS), Fortaleza, CE (Brazil)

    2009-07-01

Natural gas is a non-renewable energy source used by different sectors of the economy of Ceara. Its use may be industrial, residential or commercial, as automotive fuel, in co-generation, or as a source for generating electricity from heat. Because of its practicality, this energy source enjoys strong market acceptance and serves a broad range of clients, which makes it possible to reach diverse parts of the city. Its distribution requires a complex network of pipelines that branches throughout the city to reach all potential clients interested in this energy source. To facilitate the design, analysis and expansion of the distribution network and the location of bottlenecks and breaks in it, modeling software is used that allows the network manager to handle the various kinds of information about the network. This paper presents the advantages of modeling the distribution network of natural gas companies in Ceara, showing the tool used, the steps necessary for the implementation of the models, the advantages of using the software, and the findings obtained from its use. (author)