Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models with human interaction restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so the consistency of these diagrams must be verified in order to identify requirement errors at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
The self-consistent field model for Fermi systems with account of three-body interactions
Directory of Open Access Journals (Sweden)
Yu.M. Poluektov
2015-12-01
On the basis of a microscopic model of the self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is constructed and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution to the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of the contribution from three-body forces. The effective mass and pressure are numerically calculated for a potential of the "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, an interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, while an attractive interaction raises the effective mass. The question of the thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of the system with interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.
Meek, J
1993-01-01
Purpose. The purpose of this article is to report a review and analysis of the concordance between current comprehensive corporate health promotion programs, as described in the published literature, and the systems model of health, and to explore emerging trends in the field of health promotion. Search Methods. MEDLINE, BIOSIS, and PsycINFO searches were conducted from 1985 to 1991, and the bibliographies of articles thus obtained were back-searched for additional descriptions of corporate health promotion programs. Inclusion criteria were "comprehensive" corporate programs, published in peer-reviewed journals or books, with descriptions adequate enough to permit coding in the majority of analysis matrix categories. Out of 63 identified programs, 16 met the inclusion criteria; 47 were excluded. A common reason for rejection was the limitation imposed by inadequate program descriptions in the published literature. Major Findings. On average, the comprehensive corporate programs reviewed were initiated between 1984 and 1987 and set in the context of a manufacturing firm with over 10,000 employees. A minority of programs (12.5%) consistently satisfied systems model criteria. The most common category of programs was those which were inconsistent (44%), meeting some of the criteria of a systems model of health promotion, but not all. The mechanistic medical and public health models predominated strongly (63%), with the preeminent goal being individual risk factor modification. Conclusions. The limitations of the published literature do not permit strong conclusions about the number or degree to which current corporate comprehensive programs are concordant with the systems model of health. Although mechanistic models of health predominated, there is evidence that a number of comprehensive programs were inconsistent with the mechanistic model, meeting some of its criteria but also meeting some systems model criteria. To continue the advancement of health promotion with
Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems
Garg, Vikram V
2014-09-27
Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint inconsistent and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.
Maintaining consistency in distributed systems
Birman, Kenneth P.
1991-01-01
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operation are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
Directory of Open Access Journals (Sweden)
Xiaoxiao Meng
2018-01-01
AC microgrids mainly comprise inverter-interfaced distributed generators (IIDGs), which are nonlinear complex systems with multiple time scales, including frequency control, time-delay measurements, and electromagnetic transients. The droop-control-based IIDG in an AC microgrid is selected as the research object in this study; it comprises a power droop controller, voltage- and current-loop controllers, and a filter and line. The multi-time-scale characteristics of the detailed IIDG model are separated based on singular perturbation theory, and the IIDG model order is reduced by neglecting the system's fast dynamics. The static and transient stability consistency of the reduced-order IIDG model are demonstrated by extracting features of the IIDG small-signal model and by using the quadratic approximation method of the stability region boundary, respectively. The dynamic response consistency of the reduced-order model is evaluated using the frequency, damping and amplitude features extracted by the Prony transformation. The results provide a simplified model for the dynamic characteristic analysis of IIDG systems in AC microgrids. The accuracy of the proposed method is verified by eigenvalue comparison, transient stability index comparison and dynamic time-domain simulation.
ERBE bidirectional model consistency check
Baldwin, D. G.; Coakley, J. A., Jr.
1986-01-01
A short analysis is presented of Earth Radiation Budget Experiment (ERBE) errors inherent in the directional models used for data interpretation. The models were all developed on the basis of experience with the Nimbus-7 ERB experiment, which had a spatial resolution one-third that of the ERBE instrumentation. A pseudo-directional model is defined to simulate the ERBE scanner data, using the assumption that the average radiant exitance for any particular scene is independent of the viewing geometry, geographic location, and time at which the data are collected. The directionality of the view angle and solar zenith angle is accounted for by a method of bins.
International Nuclear Information System (INIS)
Spectral modeling of the large infrared excess in the Spitzer IRS spectra of HD 172555 suggests that there is more than 10^19 kg of submicron dust in the system. Using physical arguments and constraints from observations, we rule out the possibility of the infrared excess being created by a magma ocean planet or a circumplanetary disk or torus. We show that the infrared excess is consistent with a circumstellar debris disk or torus, located at ~6 AU, that was created by a planetary-scale hypervelocity impact. We find that radiation pressure should remove submicron dust from the debris disk in less than one year. However, the system's mid-infrared photometric flux, dominated by submicron grains, has been stable within 4% over the last 27 years, from the Infrared Astronomical Satellite (1983) to WISE (2010). Our new spectral modeling work and calculations of the radiation pressure on fine dust in HD 172555 provide a self-consistent explanation for this apparent contradiction. We also explore the unconfirmed claim that ~10^47 molecules of SiO vapor are needed to explain an emission feature at ~8 μm in the Spitzer IRS spectrum of HD 172555. We find that unless there are ~10^48 atoms or 0.05 M⊕ of atomic Si and O vapor in the system, SiO vapor should be destroyed by photo-dissociation in less than 0.2 years. We argue that a second plausible explanation for the ~8 μm feature can be emission from solid SiO, which naturally occurs in submicron silicate "smokes" created by quickly condensing vaporized silicate.
Energy Technology Data Exchange (ETDEWEB)
Johnson, B. C.; Melosh, H. J. [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Lisse, C. M. [JHU-APL, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Chen, C. H. [STScI, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Wyatt, M. C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Thebault, P. [LESIA, Observatoire de Paris, F-92195 Meudon Principal Cedex (France); Henning, W. G. [NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Gaidos, E. [Department of Geology and Geophysics, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Elkins-Tanton, L. T. [Department of Terrestrial Magnetism, Carnegie Institution for Science, Washington, DC 20015 (United States); Bridges, J. C. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); Morlok, A., E-mail: johns477@purdue.edu [Department of Physical Sciences, Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom)
2012-12-10
Consistent thermodynamic properties of lipids systems
DEFF Research Database (Denmark)
Cunico, Larissa; Ceriani, Roberta; Sarup, Bent
Physical and thermodynamic properties of pure components and their mixtures are the basic requirement for process design, simulation, and optimization. In the case of lipids, our previous works [1-3] have indicated a lack of experimental data for pure components and also for their mixtures...... different pressures, with azeotrope behavior observed. Available thermodynamic consistency tests for TPx data were applied before performing parameter regressions for the Wilson, NRTL, UNIQUAC and original UNIFAC models. The relevance of enlarging the experimental databank of lipid systems data in order to improve...... the performance of predictive thermodynamic models was confirmed in this work by analyzing the calculated values of the original UNIFAC model. For solid-liquid equilibrium (SLE) data, new consistency tests have been developed [2]. Some of the developed tests were based on the quality tests proposed for VLE data...
Consistent Design of Dependable Control Systems
DEFF Research Database (Denmark)
Blanke, M.
1996-01-01
Design of fault handling in control systems is discussed, and a method for consistent design is presented.
Schröder, Tom
2009-01-01
To understand how the uptake of water by roots locally affects and is affected by the soil water distribution, 3D soil-root water transfer models are needed. Nowadays, fully coupled 3D models at the plant scale that simulate water flow along water potential gradients in the soil-root continuum are available. However, the coupling of the soil and root systems has not been investigated thoroughly. In the available models the soil water potential gradient below the soil spatial discretization is neglected...
Consistency of the MLE under mixture models
Chen, Jiahua
2016-01-01
The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...
Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen
2016-07-01
The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of the ensemble-based statistical consistency testing is to use a qualitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our
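The grid-point scoring idea described above (standard scores against an ensemble, then the fraction of points exceeding a threshold) can be sketched as follows; the array shapes, threshold, and synthetic data are assumptions for illustration, not the actual CESM-POP configuration:

```python
import numpy as np

def fraction_of_outliers(ensemble, new_run, z_threshold=3.0):
    """Score each grid point of a new simulation against an ensemble.

    ensemble: array of shape (n_members, ny, nx); new_run: (ny, nx).
    Returns the fraction of grid points whose standard score exceeds the threshold.
    """
    mean = ensemble.mean(axis=0)
    std = ensemble.std(axis=0, ddof=1)
    std = np.where(std > 0, std, np.inf)  # guard against constant grid points
    z = np.abs(new_run - mean) / std
    return float((z > z_threshold).mean())

# Illustrative check: a run drawn from the same distribution scores low,
# while a run with a uniform 4-sigma bias scores high.
rng = np.random.default_rng(42)
ens = rng.normal(15.0, 0.5, size=(30, 40, 60))   # hypothetical 30-member ensemble
ok_run = rng.normal(15.0, 0.5, size=(40, 60))
bad_run = rng.normal(17.0, 0.5, size=(40, 60))
print(fraction_of_outliers(ens, ok_run) < fraction_of_outliers(ens, bad_run))  # → True
```

A decision rule would then compare the returned fraction against a calibrated threshold to declare the new run statistically distinguishable or not.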
Consistent spectroscopy for an extended gauge model
International Nuclear Information System (INIS)
Oliveira Neto, G. de.
1990-11-01
A consistent spectroscopy was obtained with a Lagrangian constructed from vector fields with a U(1) extended group symmetry. By consistent spectroscopy we mean the determination of the quantum physical properties described by the model in a manner independent of the possible parametrizations adopted in their description. (L.C.J.A.)
Sticky continuous processes have consistent price systems
DEFF Research Database (Denmark)
Bender, Christian; Pakkanen, Mikko; Sayit, Hasanjan
Under proportional transaction costs, a price process is said to have a consistent price system, if there is a semimartingale with an equivalent martingale measure that evolves within the bid-ask spread. We show that a continuous, multi-asset price process has a consistent price system, under...... arbitrarily small proportional transaction costs, if it satisfies a natural multi-dimensional generalization of the stickiness condition introduced by Guasoni...
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...
Consistent Estimation of Partition Markov Models
Directory of Open Access Journals (Sweden)
Jesús E. García
2017-04-01
The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated. In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, when n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
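The grouping of states by shared transition law can be illustrated with a simplified stand-in for the paper's selection strategy: here states are merged whenever the total-variation distance between their empirical transition rows falls below a tolerance (the paper's actual consistent criterion differs, and the tolerance is an illustrative assumption):

```python
import numpy as np

def estimate_partition(seq, alphabet, tol=0.1):
    """Group states with (nearly) identical empirical transition rows.

    Simplified stand-in for a Partition Markov model estimator: states are
    merged when the total-variation distance between their empirical
    transition distributions is below `tol`.
    """
    k = len(alphabet)
    idx = {a: i for i, a in enumerate(alphabet)}
    counts = np.zeros((k, k))
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
    rows = counts / counts.sum(axis=1, keepdims=True)
    parts = []
    for a in alphabet:
        for part in parts:
            tv = 0.5 * np.abs(rows[idx[a]] - rows[idx[part[0]]]).sum()
            if tv < tol:   # close enough to this part's representative: merge
                part.append(a)
                break
        else:
            parts.append([a])  # no close part found: open a new one
    return parts

# Two-part chain: states 0 and 1 share one transition law, state 2 has another.
rng = np.random.default_rng(7)
P = np.array([[0.1, 0.1, 0.8],
              [0.1, 0.1, 0.8],
              [0.45, 0.45, 0.1]])
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(3, p=P[seq[-1]]))
print(estimate_partition(seq, [0, 1, 2]))  # → [[0, 1], [2]]
```

With the two-part partition recovered, only two transition rows need to be stored instead of three, which is exactly the parameter saving the model is after.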
Self-consistent asset pricing models
Malevergne, Y.; Sornette, D.
2007-08-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the
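The orthogonality/normality conditions mentioned above can be checked in-sample: when the regression factor is itself the weighted portfolio of the regressed assets, OLS algebra forces the proxy-weighted betas to sum to one and the proxy-weighted alphas to vanish. A minimal sketch with synthetic returns (all numbers illustrative, not market data):

```python
import numpy as np

# Self-consistency check: the "market" factor is the weighted portfolio
# of the very assets being regressed on it.
rng = np.random.default_rng(1)
T, n = 1000, 5
returns = rng.normal(0.001, 0.02, size=(T, n))  # hypothetical asset returns
w = np.full(n, 1.0 / n)                         # proxy portfolio weights
r_m = returns @ w                               # proxy ("market") return

# Per-asset OLS slope and intercept against the proxy
var_m = r_m.var()
betas = np.array([np.cov(returns[:, i], r_m, bias=True)[0, 1] / var_m
                  for i in range(n)])
alphas = returns.mean(axis=0) - betas * r_m.mean()

# Normality conditions hold as exact in-sample identities:
#   sum_i w_i beta_i = 1   and   sum_i w_i alpha_i = 0
print(np.isclose(w @ betas, 1.0), np.isclose(w @ alphas, 0.0))  # → True True
```

Because these identities hold by construction whenever the proxy is built from the regressed assets, departures of the measured quantities from them (with a fixed external proxy) are what the paper's diagnostics exploit.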
Developing consistent pronunciation models for phonemic variants
CSIR Research Space (South Africa)
Davel, M
2006-09-01
from a lexicon containing variants. In this paper we address both these issues by creating 'pseudo-phonemes' associated with sets of 'generation restriction rules' to model those pronunciations that are consistently realised as two or more...
Banerjee, S.; Hassenklover, E.; Kleijn, J.M.; Cohen Stuart, M.A.; Leermakers, F.A.M.
2013-01-01
This paper presents experimental and modeling results on water–CO2 interfacial tension (IFT) together with wettability studies of water on both hydrophilic and hydrophobic surfaces immersed in CO2. CO2–water interfacial tension (IFT) measurements showed that the IFT decreased with increasing
Self-consistent nuclear energy systems
International Nuclear Information System (INIS)
Shimizu, A.; Fujiie, Y.
1995-01-01
A concept of self-consistent nuclear energy systems (SCNES) has been proposed as an ultimate goal of the nuclear energy system in the coming centuries. SCNES should realize a stable and unlimited energy supply without endangering the human race and the global environment. It is defined as a system that realizes at least the following four objectives simultaneously: (a) energy generation - attain high efficiency in the utilization of fission energy; (b) fuel production - secure an inexhaustible energy source: breeding of fissile material with a breeding ratio greater than one and complete burning of transuranium through recycling; (c) burning of radionuclides - zero release of radionuclides from the system: complete burning of transuranium and elimination of radioactive fission products by neutron capture reactions through recycling; (d) system safety - achieve system safety both for the public and experts: eliminate criticality-related safety issues by using natural laws and simple logic. This paper describes the concept of SCNES and discusses the feasibility of the system. Both the "neutron balance" and the "energy balance" of the system are introduced as necessary conditions to be satisfied at least by SCNES. Evaluations made so far indicate that both the neutron balance and the energy balance can be realized by fast reactors but not by thermal reactors. Concerning system safety, two safety concepts, "self-controllability" and "self-terminability", are introduced to eliminate the criticality-related safety issues in fast reactors. (author)
Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.
2018-02-01
The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
Self-consistent model of confinement
International Nuclear Information System (INIS)
Swift, A.R.
1988-01-01
A model of the large-spatial-distance, zero-three-momentum limit of QCD is developed from the hypothesis that there is an infrared singularity. Single quarks and gluons do not propagate because they have infinite energy after renormalization. The Hamiltonian formulation of the path integral is used to quantize QCD with physical, nonpropagating fields. Perturbation theory in the infrared limit is simplified by the absence of self-energy insertions and by the suppression of large classes of diagrams due to vanishing propagators. Remaining terms in the perturbation series are resummed to produce a set of nonlinear, renormalizable integral equations which fix both the confining interaction and the physical propagators. Solutions demonstrate the self-consistency of the concepts of an infrared singularity and nonpropagating fields. The Wilson loop is calculated to provide a general proof of confinement. Bethe-Salpeter equations for quark-antiquark pairs and for two gluons have finite-energy solutions in the color-singlet channel. The choice of gauge is addressed in detail. Large classes of corrections to the model are discussed and shown to support self-consistency
The Finitistic Consistency of Heck's Predicative Fregean System
DEFF Research Database (Denmark)
Cruz-Filipe, L.; Ferreira, Fernando
2015-01-01
Frege's theory is inconsistent (Russell's paradox). However, the predicative version of Frege's system is consistent. This was proved by Richard Heck in 1996 using a model theoretic argument. In this paper, we give a finitistic proof of this consistency result. As a consequence, Heck's predicative...
A thermodynamically consistent model for magnetic hysteresis
International Nuclear Information System (INIS)
Ho, Kwangsoo
2014-01-01
A phenomenological constitutive model is presented to describe the magnetization curve within the context of thermodynamics. Due to the phenomenological analogy between the magnetic hysteresis and the stress hysteresis, the basic structure of the proposed model comes from rate-dependent plasticity in continuum mechanics, namely viscoplasticity. The total magnetic flux density is assumed to be the sum of reversible and irreversible parts. The model introduces the evolution laws of two internal state variables to incorporate the effect of the ever-changing internal microstructure on the current state. The conception originated from viscoplasticity enables the frequency dependence of the hysteresis curve to be modeled. - Highlights: • A constitutive model is proposed within the framework of thermodynamic principles. • The basic structure of formulation is originated from the rate-dependent plasticity. • Decomposition of the magnetic flux into reversible and irreversible parts is assumed. • Constitutive model reproduces the frequency dependency of magnetic hysteresis
Parametrization of model consistent expectations in the Sidrauski model
Hoogenveen, Victoria; Sterken, Elmer
1996-01-01
This paper discusses a cubic parametrisation of model consistent expectations in a nonlinear dynamic monetary growth model. The so-called Sidrauski model links money, inflation and consumption growth. Iterative least squares combined with simulation is used to address the alleged impact of inflation
Consistent Alignment of Word Embedding Models
2017-03-02
MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02421, USA. Word embedding models offer continuous vector representations that can... generated synthetic data points. This generative process is inspired by the observation that a variety of linguistic relationships is captured by simple... as images, and genomic data. In Wang et al. (2016) manifold alignment techniques are used to discover logical relationships in supervised settings. We
Self-Consistent Models of Accretion Disks
Narayan, Ramesh
2000-01-01
Research was carried out on several topics in the theory of astrophysical accretion flows around black holes, neutron stars and white dwarfs. The focus of our effort was the advection-dominated accretion flow (ADAF) model which the PI and his collaborators proposed and developed over the last several years. Our group completed a total of 46 papers, of which 36 are in refereed journals and 12 are in conference proceedings. All the papers have either already appeared in print or are in press.
Consistency test of the standard model
International Nuclear Information System (INIS)
Pawlowski, M.; Raczka, R.
1997-01-01
If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e+e- colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ∼10^8 produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions.
Diagnosing a Strong-Fault Model by Conflict and Consistency.
Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan
2018-03-29
The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidates efficiently based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods perform significantly better than best-first and conflict-directed A* search methods.
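As a minimal illustration of the conflict-based side of such diagnosis, the sketch below brute-forces minimal hitting sets of conflict sets (in the style of Reiter's framework); the component names and conflicts are hypothetical, and the paper's LTMS/CNF machinery is not reproduced.

```python
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """Enumerate minimal hitting sets of the conflict sets.

    Each conflict is a set of components that cannot all be healthy;
    a diagnosis must intersect every conflict. Candidates that contain
    a smaller diagnosis are discarded, so only minimal sets survive."""
    diagnoses = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            cand = set(cand)
            if all(cand & c for c in conflicts):
                if not any(d <= cand for d in diagnoses):
                    diagnoses.append(cand)
    return diagnoses

# Toy conflicts, e.g. from inconsistent observations of a plant model
conflicts = [{"valve", "pump"}, {"pump", "sensor"}]
print(minimal_diagnoses(["valve", "pump", "sensor"], conflicts))
```

Brute force is exponential; real diagnosis engines prune the search with the reasoning results, as the abstract describes.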
Monari, Antonio; Rivail, Jean-Louis; Assfeld, Xavier
2013-02-19
Molecular mechanics methods can efficiently compute the macroscopic properties of a large molecular system but cannot represent the electronic changes that occur during a chemical reaction or an electronic transition. Quantum mechanical methods can accurately simulate these processes, but they require considerably greater computational resources. Because electronic changes typically occur in a limited part of the system, such as the solute in a molecular solution or the substrate within the active site of enzymatic reactions, researchers can limit the quantum computation to this part of the system. Researchers take into account the influence of the surroundings by embedding this quantum computation into a calculation of the whole system described at the molecular mechanical level, a strategy known as the mixed quantum mechanics/molecular mechanics (QM/MM) approach. The accuracy of this embedding varies according to the types of interactions included, whether they are purely mechanical or classically electrostatic. This embedding can also introduce the induced polarization of the surroundings. The difficulty in QM/MM calculations comes from the splitting of the system into two parts, which requires severing the chemical bonds that link the quantum mechanical subsystem to the classical subsystem. Typically, researchers replace the quantoclassical atoms, those at the boundary between the subsystems, with a monovalent link atom. For example, researchers might add a hydrogen atom when a C-C bond is cut. This Account describes another approach, the Local Self Consistent Field (LSCF), which was developed in our laboratory. LSCF links the quantum mechanical portion of the molecule to the classical portion using a strictly localized bond orbital extracted from a small model molecule for each bond. In this scenario, the quantoclassical atom has an apparent nuclear charge of +1. To achieve correct bond lengths and force constants, we must take into account the inner shell of
Consistency conditions for data base systems: a new problem of systems analysis
International Nuclear Information System (INIS)
Schlageter, G.
1976-01-01
A data base can be seen as a model of a system in the real world. During the systems analysis conditions must be derived which guarantee a close correspondence between the real system and the data base. These conditions are called consistency constraints. The notion of consistency is analyzed; different types of consistency constraints are presented. (orig.) [de
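The idea of consistency constraints can be sketched as predicates over the database state that must hold for the data base to remain a faithful model of the real system; the inventory data and constraint names below are invented for illustration.

```python
def check_consistency(db, constraints):
    """Return the names of violated consistency constraints.

    Each constraint is a (name, predicate) pair evaluated over the
    whole database state."""
    return [name for name, pred in constraints if not pred(db)]

# Hypothetical inventory database with one deliberately bad record
db = {"fuel_assemblies": [{"id": 1, "mass_kg": 450.0},
                          {"id": 2, "mass_kg": -3.0}]}

constraints = [
    ("masses are positive",
     lambda d: all(a["mass_kg"] > 0 for a in d["fuel_assemblies"])),
    ("ids are unique",
     lambda d: len({a["id"] for a in d["fuel_assemblies"]})
               == len(d["fuel_assemblies"])),
]
print(check_consistency(db, constraints))  # the first constraint is violated
```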
Ensuring Data Consistency Over CMS Distributed Computing System
Rossman, Paul
2009-01-01
CMS utilizes a distributed infrastructure of computing centers to custodially store data, to provide organized processing resources, and to provide analysis computing resources for users. Integrated over the whole system, even in the first year of data taking, the available disk storage approaches 10 petabytes of space. Maintaining consistency between the data bookkeeping, the data transfer system, and physical storage is an interesting technical and operations challenge. In this paper we will discuss the CMS effort to ensure that data is consistently available at all computing centers. We will discuss the technical tools that monitor the consistency of the catalogs and the physical storage as well as the operations model used to find and solve inconsistencies.
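A toy version of such a catalog-versus-storage check might look like the following; the file paths are made up, and the real CMS bookkeeping and transfer systems are far more involved.

```python
def reconcile(catalog, storage):
    """Compare a bookkeeping catalog with a physical storage listing.

    Returns files known to the catalog but missing on disk ("lost")
    and files on disk unknown to the catalog ("dark")."""
    cat, phys = set(catalog), set(storage)
    return {"lost": sorted(cat - phys), "dark": sorted(phys - cat)}

catalog = ["/store/run1/a.root", "/store/run1/b.root"]
storage = ["/store/run1/b.root", "/store/run1/c.root"]
print(reconcile(catalog, storage))
```

In an operations model, "lost" entries trigger recovery or invalidation, while "dark" files are candidates for cleanup.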
Consistency checks in beam emission modeling for neutral beam injectors
International Nuclear Information System (INIS)
Punyapu, Bharathi; Vattipalle, Prahlad; Sharma, Sanjeev Kumar; Baruah, Ujjwal Kumar; Crowley, Brendan
2015-01-01
In positive neutral beam systems, beam parameters such as ion species fractions, power fractions and beam divergence are routinely measured using the Doppler-shifted beam emission spectrum. The accuracy with which these parameters are estimated depends on the accuracy of the atomic modeling involved in these estimations. In this work, an effective procedure to check the consistency of the beam emission modeling in neutral beam injectors is proposed. As a first consistency check, at constant beam voltage and current, the intensity of the beam emission spectrum is measured while varying the pressure in the neutralizer. Then the scaling with pressure of the measured intensities of the un-shifted (target) and Doppler-shifted (projectile) components of the beam emission spectrum is studied. If the un-shifted component scales with pressure, then the intensity of this component is used as a second consistency check on the beam emission modeling. As a further check, the modeled beam fractions and emission cross sections of projectile and target are used to predict the intensity of the un-shifted component, which is then compared with the measured target intensity. The agreement between the predicted and measured target intensities provides a measure of the discrepancy in the beam emission modeling. To test this methodology, a systematic analysis of Doppler shift spectroscopy data obtained on the JET neutral beam test stand was carried out.
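The first-order Doppler shift underlying such measurements can be sketched as follows; the wavelength, beam energy and viewing angle are illustrative values, not JET calibration data.

```python
import math

def doppler_shift_nm(lambda0_nm, energy_eV, mass_amu, angle_deg):
    """First-order Doppler shift of a beam emission line,
    delta_lambda = lambda0 * (v/c) * cos(theta), assuming a
    non-relativistic beam with v = sqrt(2E/m)."""
    AMU = 1.66053906660e-27        # kg per atomic mass unit
    E = energy_eV * 1.602176634e-19  # J
    v = math.sqrt(2.0 * E / (mass_amu * AMU))
    c = 299792458.0
    return lambda0_nm * (v / c) * math.cos(math.radians(angle_deg))

# H-alpha (656.28 nm) for full-, half- and third-energy species
# of a hypothetical 80 keV hydrogen beam viewed at 30 degrees
for frac in (1, 2, 3):
    shift = doppler_shift_nm(656.28, 80e3 / frac, 1.008, 30.0)
    print(f"E/{frac}: {shift:.2f} nm")
```

The three species appear as distinct shifted peaks, which is what makes species-fraction measurements from the spectrum possible.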
Structure and internal consistency of a shoulder model.
Högfors, C; Karlsson, D; Peterson, B
1995-07-01
A three-dimensional biomechanical model of the shoulder is developed for force predictions in 46 shoulder structures. The model is directed towards the analysis of static working situations where the load is low or moderate. Arbitrary static arm postures in the natural shoulder range may be considered, as well as different kinds of external loads including different force and moment directions. The model can predict internal forces for the shoulder muscles, for the glenohumeral, the acromioclavicular and the sternoclavicular joints, as well as for the coracohumeral ligament. A solution to the statically indeterminate force system is obtained by minimising an objective function. The default function chosen for this is the sum of the squared muscle stresses, but other objective functions may be used as well. The structure of the model is described and its ingredients discussed. The internal consistency of the model, its structural stability and the compatibility of the elements that go into it are investigated.
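The minimum-squared-stress criterion has a closed-form Lagrange solution when only a single moment-equilibrium equation is imposed; the two-muscle moment arms and areas below are invented and far simpler than the 46-structure shoulder model.

```python
def min_stress_forces(moment_arms, areas, required_moment):
    """Distribute a joint moment over redundant muscles by minimizing
    sum of squared stresses s_i = f_i / a_i subject to
    sum(r_i * f_i) = M. With w_i = r_i * a_i, the optimum is
    s_i = M * w_i / sum(w_j**2), i.e. stress proportional to w_i."""
    w = [r * a for r, a in zip(moment_arms, areas)]
    lam = required_moment / sum(x * x for x in w)
    stresses = [lam * x for x in w]
    return [s * a for s, a in zip(stresses, areas)]

# Two hypothetical muscles sharing a 10 N*m moment
forces = min_stress_forces([0.02, 0.04], [4e-4, 2e-4], 10.0)
print(forces)  # N; the larger moment arm does not take all the load
```

Unlike a minimum-force criterion, which would load only the most advantaged muscle, the squared-stress objective spreads the load, which is the physiological motivation for choosing it.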
Large scale Bayesian nuclear data evaluation with consistent model defects
International Nuclear Information System (INIS)
Schnabel, G
2015-01-01
The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply and accelerator-driven systems for nuclear waste incineration. The efficiency and safety of such future facilities are dependent on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, the neglect of this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows the magnitude of the model deficiency to be estimated explicitly. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
Thermodynamically consistent Bayesian analysis of closed biochemical reaction systems
Directory of Open Access Journals (Sweden)
Goutsias John
2010-11-01
Background: Estimating the rate constants of a biochemical reaction system with known stoichiometry from noisy time series measurements of molecular concentrations is an important step for building predictive models of cellular function. Inference techniques currently available in the literature may produce rate constant values that defy necessary constraints imposed by the fundamental laws of thermodynamics. As a result, these techniques may lead to biochemical reaction systems whose concentration dynamics could not possibly occur in nature. Therefore, development of a thermodynamically consistent approach for estimating the rate constants of a biochemical reaction system is highly desirable. Results: We introduce a Bayesian analysis approach for computing thermodynamically consistent estimates of the rate constants of a closed biochemical reaction system with known stoichiometry given experimental data. Our method employs an appropriately designed prior probability density function that effectively integrates fundamental biophysical and thermodynamic knowledge into the inference problem. Moreover, it takes into account experimental strategies for collecting informative observations of molecular concentrations through perturbations. The proposed method employs a maximization-expectation-maximization algorithm that provides thermodynamically feasible estimates of the rate constant values and computes appropriate measures of estimation accuracy. We demonstrate various aspects of the proposed method on synthetic data obtained by simulating a subset of a well-known model of the EGF/ERK signaling pathway, and examine its robustness under conditions that violate key assumptions. Software, coded in MATLAB®, which implements all Bayesian analysis techniques discussed in this paper, is available free of charge at http://www.cis.jhu.edu/~goutsias/CSS%20lab/software.html. Conclusions: Our approach provides an attractive statistical methodology for
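One of the thermodynamic constraints such an approach must respect is the Wegscheider cycle condition, sketched here for a three-species cycle; the rate constants are arbitrary example values, not drawn from the EGF/ERK model.

```python
def wegscheider_consistent(kf, kb, tol=1e-9):
    """Check the Wegscheider cycle condition for a closed reaction
    cycle: the product of forward/backward rate-constant ratios
    around the cycle must equal 1, otherwise the constants imply a
    perpetual net flux and violate detailed balance."""
    prod = 1.0
    for f, b in zip(kf, kb):
        prod *= f / b
    return abs(prod - 1.0) < tol

# Cycle A <-> B <-> C <-> A
print(wegscheider_consistent([2.0, 3.0, 1.0], [1.0, 2.0, 3.0]))  # True
print(wegscheider_consistent([2.0, 3.0, 1.0], [1.0, 1.0, 1.0]))  # False
```

An unconstrained fit can easily land in the second case; building the condition into the prior, as the abstract describes, rules such estimates out from the start.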
Standard Model Vacuum Stability and Weyl Consistency Conditions
DEFF Research Database (Denmark)
Antipin, Oleg; Gillioz, Marc; Krog, Jens
2013-01-01
At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions.
Modeling a Consistent Behavior of PLC-Sensors
Directory of Open Access Journals (Sweden)
E. V. Kuzmin
2014-01-01
The article extends the cycle of papers dedicated to programming and verification of PLC-programs by LTL-specification. This approach provides the availability of correctness analysis of PLC-programs by the model checking method. The model checking method needs a finite model of the PLC program. For successful verification of the required properties it is important to take into consideration that not all combinations of input signals from the sensors can occur while the PLC works with a control object. This fact requires more attention to the construction of the PLC-program model. In this paper we propose to describe a consistent behavior of sensors by three groups of LTL-formulas. They will affect the program model, approximating it to the actual behavior of the PLC program. The idea of the LTL-requirements is shown by an example. A PLC program is a description of reactions to input signals from sensors, switches and buttons. In constructing a PLC-program model, the approach to modeling a consistent behavior of PLC sensors allows focusing on modeling precisely these reactions without extending the program model by additional structures for a realistic behavior of sensors. The consistent behavior of sensors is taken into account only at the stage of checking the conformity of the program model to the required properties, i.e. a property satisfaction proof for the constructed model occurs under the condition that the model contains only such executions of the program that comply with the consistent behavior of sensors.
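The effect of such consistency requirements can be mimicked, outside LTL, by simply filtering the Boolean input combinations a model is allowed to explore; the tank-level sensors and the constraint below are illustrative, not the paper's three groups of formulas.

```python
from itertools import product

def feasible_inputs(sensors, constraints):
    """Enumerate Boolean sensor combinations and keep only those
    satisfying the consistency constraints, pruning inputs that
    cannot occur on the real control object."""
    combos = []
    for values in product([False, True], repeat=len(sensors)):
        state = dict(zip(sensors, values))
        if all(c(state) for c in constraints):
            combos.append(state)
    return combos

# A tank cannot report "low level" and "high level" at the same time
sensors = ["level_low", "level_high"]
constraints = [lambda s: not (s["level_low"] and s["level_high"])]
print(len(feasible_inputs(sensors, constraints)))  # 3 of 4 combinations remain
```

Restricting verification to feasible inputs avoids spurious counterexamples driven by physically impossible sensor states.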
Consistent partnership formation: application to a sexually transmitted disease model.
Artzrouni, Marc; Deuchert, Eva
2012-02-01
We apply a consistent sexual partnership formation model which hinges on the assumption that one gender's choices drives the process (male or female dominant model). The other gender's behavior is imputed. The model is fitted to UK sexual behavior data and applied to a simple incidence model of HSV-2. With a male dominant model (which assumes accurate male reports on numbers of partners) the modeled incidences of HSV-2 are 77% higher for men and 50% higher for women than with a female dominant model (which assumes accurate female reports). Although highly stylized, our simple incidence model sheds light on the inconsistent results one can obtain with misreported data on sexual activity and age preferences.
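The male-dominant imputation can be sketched as a simple balancing of total partnerships across genders; the numbers below are invented, chosen only to echo the kind of discrepancy discussed above.

```python
def impute_female_rate(male_mean, n_males, n_females):
    """Male-dominant imputation: take male reports as accurate and
    rescale the female partner-acquisition rate so that the total
    number of heterosexual partnerships balances:
    male_mean * n_males == female_rate * n_females."""
    return male_mean * n_males / n_females

# Illustrative survey: men report more partners than women report
male_mean, female_reported = 12.0, 8.0
imputed = impute_female_rate(male_mean, n_males=1000, n_females=1000)
print(imputed, imputed / female_reported - 1)  # imputed rate is 50% above reports
```

A female-dominant model would instead rescale the male rate downward, which is why the two variants produce such different modeled incidences.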
Consistent estimation of linear panel data models with measurement error
Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas
2017-01-01
Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the
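The attenuation bias from measurement error, and its instrumental-variable remedy using a second, independently mismeasured regressor, can be illustrated with a small simulation (synthetic cross-sectional data, not the panel moment conditions of the paper):

```python
import random

random.seed(0)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

n, beta = 20000, 1.0
x = [random.gauss(0, 1) for _ in range(n)]
y = [beta * xi + random.gauss(0, 0.5) for xi in x]
x1 = [xi + random.gauss(0, 1) for xi in x]  # mismeasured regressor
x2 = [xi + random.gauss(0, 1) for xi in x]  # second, independent measurement

ols = cov(y, x1) / cov(x1, x1)  # attenuated toward ~beta/2 with these variances
iv = cov(y, x2) / cov(x1, x2)   # instrumenting x1 with x2 is consistent
print(round(ols, 2), round(iv, 2))
```

With equal signal and error variances the OLS slope is biased toward roughly half the true value, while the IV estimate recovers it, mirroring the bias-toward-zero result stated above.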
Final Report Fermionic Symmetries and Self consistent Shell Model
International Nuclear Information System (INIS)
Zamick, Larry
2008-01-01
In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to a large extent to explain them. The importance of a self-consistent shell model was emphasized.
Detection and quantification of flow consistency in business process models
DEFF Research Database (Denmark)
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel
2017-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics
Simplified models for dark matter face their consistent completions
Energy Technology Data Exchange (ETDEWEB)
Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel
2017-03-01
Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.
The consistency service of the ATLAS Distributed Data Management system
Serfon, Cédric; Garonne, Vincent; ATLAS Collaboration
2011-12-01
With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools, DQ2 site services, or by site administrators who report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.
Consistency Across Standards or Standards in a New Business Model
Russo, Dane M.
2010-01-01
Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, a balance is needed (between prescriptive and general standards), enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides include: NASA Procedural Requirements 8705.2B identifies human rating standards and requirements, draft health and medical standards for human rating, what's been done, government oversight models, examples of consistency from anthropometry, examples of inconsistency from air quality, and appendices of government and non-governmental human factors standards.
A Self-consistent Model of the Solar Tachocline
Wood, T. S.; Brummell, N. H.
2018-02-01
We present a local but fully nonlinear model of the solar tachocline, using three-dimensional direct numerical simulations. The tachocline forms naturally as a statistically steady balance between Coriolis, pressure, buoyancy, and Lorentz forces beneath a turbulent convection zone. Uniform rotation is maintained in the radiation zone by a primordial magnetic field, which is confined by meridional flows in the tachocline and convection zone. Such balanced dynamics has previously been found in idealized laminar models, but never in fully self-consistent numerical simulations.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
Directory of Open Access Journals (Sweden)
J. C. Hargreaves
2011-08-01
We investigate the consistency of various ensembles of climate model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day. However, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
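A standard reliability test of this kind is the rank histogram: a reliable ensemble places the observation uniformly among its members. The toy ensembles and observations below are invented and, for brevity, ignore the observational-uncertainty perturbation the abstract stresses.

```python
def rank_histogram(ensembles, observations):
    """Rank of each observation within its ensemble (number of
    members below it). Approximately uniform counts across ranks
    indicate a reliable ensemble."""
    counts = [0] * (len(ensembles[0]) + 1)
    for members, obs in zip(ensembles, observations):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts

# Toy SST anomalies (degC): three 3-member ensembles vs. observations
ens = [[-2.1, -1.5, -0.8], [-3.0, -2.2, -1.0], [-1.9, -1.2, -0.4]]
obs = [-1.7, -0.5, -2.5]
print(rank_histogram(ens, obs))  # [1, 1, 0, 1]
```

In practice observational error is folded in by perturbing each observation before ranking, which flattens histograms that would otherwise look over-confident.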
Development of a Consistent and Reproducible Porcine Scald Burn Model
Kempf, Margit; Kimble, Roy; Cuttle, Leila
2016-01-01
There are very few porcine burn models that replicate scald injuries similar to those encountered by children. We have developed a robust porcine burn model capable of creating reproducible scald burns for a wide range of burn conditions. The study was conducted with juvenile Large White pigs, creating replicates of burn combinations; 50°C for 1, 2, 5 and 10 minutes and 60°C, 70°C, 80°C and 90°C for 5 seconds. Visual wound examination, biopsies and Laser Doppler Imaging were performed at 1, 24 hours and at 3 and 7 days post-burn. A consistent water temperature was maintained within the scald device for long durations (49.8 ± 0.1°C when set at 50°C). The macroscopic and histologic appearance was consistent between replicates of burn conditions. For 50°C water, 10 minute duration burns showed significantly deeper tissue injury than all shorter durations at 24 hours post-burn (p ≤ 0.0001), with damage seen to increase until day 3 post-burn. For 5 second duration burns, by day 7 post-burn the 80°C and 90°C scalds had damage detected significantly deeper in the tissue than the 70°C scalds (p ≤ 0.001). A reliable and safe model of porcine scald burn injury has been successfully developed. The novel apparatus with continually refreshed water improves consistency of scald creation for long exposure times. This model allows the pathophysiology of scald burn wound creation and progression to be examined. PMID:27612153
Consistency between 2D-3D Sediment Transport models
Villaret, Catherine; Jodeau, Magali
2017-04-01
Sediment transport models have been developed and applied by the engineering community to estimate transport rates and morphodynamic bed evolution in river flows and in coastal and estuarine conditions. Environmental modelling systems like the open-source Telemac modelling system include a hierarchy of models from 1D (Mascaret), 2D (Telemac-2D/Sisyphe) and 3D (Telemac-3D/Sedi-3D) and cover a wide range of processes to represent sediment-flow interactions in more and more complex situations (cohesive, non-cohesive and mixed sediment). Despite tremendous progress in numerical techniques and computing resources, the quality and accuracy of model results mainly depend on the numerous choices and skills of the modeler. In complex situations involving stratification effects, complex geometry, recirculating flows, etc., 2D model assumptions are no longer valid. A full 3D turbulent flow model is then required in order to capture the vertical mixing processes and to represent accurately the coupled flow/sediment distribution. However, a number of theoretical and numerical difficulties arise when dealing with sediment transport modelling in 3D, which will be highlighted: (1) the dependency of model results on the vertical grid refinement and on the choice of boundary conditions and numerical scheme; (2) the choice of turbulence model, which also determines the sediment vertical distribution, governed by a balance between the downward settling term and upward turbulent diffusion; (3) the use of different numerical schemes for hydrodynamics (mean and turbulent flow) and sediment transport modelling, which can lead to inconsistency, including a mismatch in the definition of numerical cells and of boundary conditions. We discuss these issues and present a detailed comparison between 2D and 3D simulations on a set of validation test cases which are available in the Telemac 7.2 release, using both cohesive and non-cohesive sediments.
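The settling-diffusion balance mentioned in point (2) has the classical Rouse solution for the equilibrium concentration profile; the water depth, reference height and shear parameters below are illustrative values.

```python
def rouse_profile(z, h, a, c_a, ws, kappa_u_star):
    """Equilibrium suspended-sediment concentration from the balance
    between downward settling (ws*C) and upward turbulent diffusion:
    C(z)/C_a = [((h - z)/z) * (a/(h - a))]**R, the Rouse profile,
    with Rouse number R = ws / (kappa * u_star)."""
    R = ws / kappa_u_star
    return c_a * (((h - z) / z) * (a / (h - a))) ** R

# Concentration decays upward; finer sediment (smaller ws) mixes higher
h, a, c_a = 2.0, 0.1, 1.0                       # depth, ref. height, ref. conc.
mid = rouse_profile(1.0, h, a, c_a, ws=0.01, kappa_u_star=0.4 * 0.05)
print(round(mid, 3))  # 0.229
```

The sensitivity of this profile to the eddy-diffusivity (turbulence) model is exactly why the choice of closure in point (2) matters for the vertical sediment distribution.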
Mechanistically Consistent Reduced Models of Synthetic Gene Networks
Mier-y-Terán-Romero, Luis; Silber, Mary; Hatzimanikatis, Vassily
2013-01-01
Designing genetic networks with desired functionalities requires an accurate mathematical framework that accounts for the essential mechanistic details of the system. Here, we formulate a time-delay model of protein translation and mRNA degradation by systematically reducing a detailed mechanistic model that explicitly accounts for the ribosomal dynamics and the cleaving of mRNA by endonucleases. We exploit various technical and conceptual advantages that our time-delay model offers over the mechanistic model to probe the behavior of a self-repressing gene over wide regions of parameter space. We show that a heuristic time-delay model of protein synthesis of a commonly used form yields a notably different prediction for the parameter region where sustained oscillations occur. This suggests that such heuristics can lead to erroneous results. The functional forms that arise from our systematic reduction can be used for every system that involves transcription and translation and they could replace the commonly used heuristic time-delay models for these processes. The results from our analysis have important implications for the design of synthetic gene networks and stress that such design must be guided by a combination of heuristic models and mechanistic models that include all relevant details of the process. PMID:23663853
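A minimal sketch of the kind of time-delay description discussed above, for a self-repressing gene; the Hill-type production term and all parameter values below are generic illustrations, not the functional forms derived in the paper:

```python
import numpy as np

def simulate_self_repressor(k=10.0, p0=1.0, n=4, gamma=1.0, tau=2.0,
                            t_end=100.0, dt=0.01):
    """Euler integration of the delay equation
    dp/dt = k/(1 + (p(t - tau)/p0)**n) - gamma*p,
    where tau lumps transcription/translation delays."""
    lag = int(round(tau / dt))
    steps = int(round(t_end / dt))
    p = np.zeros(steps + 1)
    p[0] = 0.1                                   # initial condition and history
    for i in range(steps):
        p_del = p[i - lag] if i >= lag else p[0]
        p[i + 1] = p[i] + dt * (k / (1.0 + (p_del / p0) ** n) - gamma * p[i])
    return p

p = simulate_self_repressor()
late = p[-2000:]                                 # discard the transient
print(late.max() - late.min() > 0.1)             # sustained oscillations
```

For these parameters the delayed negative feedback destabilizes the fixed point, so the protein level settles into a limit cycle rather than a steady state.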
Consistent approach to air-cleaning system duct design
International Nuclear Information System (INIS)
Miller, W.H.; Ornberg, S.C.; Rooney, K.L.
1981-01-01
Nuclear power plant air-cleaning system effectiveness depends on the capability of a duct system to safely convey contaminated gas to a filtration unit and subsequently to a point of discharge. This paper presents a logical and consistent design approach for selecting sheet metal ductwork construction to meet applicable criteria. The differences in design engineers' duct construction specifications are acknowledged. Typical duct construction details and suggestions for their effective use are presented. Improvements in duct design sections of ANSI/ASME N509-80 are highlighted. A detailed leakage analysis of a control room HVAC system is undertaken to illustrate the effects of conceptual design variations on duct construction requirements. Shortcomings of previously published analyses and interpretations of a current standard are included.
Mean-field theory and self-consistent dynamo modeling
International Nuclear Information System (INIS)
Yoshizawa, Akira; Yokoi, Nobumitsu
2001-12-01
Mean-field dynamo theory is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and on the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, the generation of magnetic fields such as the geomagnetic field and sunspots, and the driving of flows by magnetic fields in planetary and fusion phenomena, are discussed. (author)
Self-consistent modeling of amorphous silicon devices
International Nuclear Information System (INIS)
Hack, M.
1987-01-01
The authors developed a computer model to describe the steady-state behaviour of a range of amorphous silicon devices. It is based on the complete set of transport equations and takes into account the important role played by the continuous distribution of localized states in the mobility gap of amorphous silicon. Using one set of parameters they have been able to self-consistently simulate the current-voltage characteristics of p-i-n (or n-i-p) solar cells under illumination, the dark behaviour of field-effect transistors, p-i-n diodes and n-i-n diodes in both the ohmic and space-charge-limited regimes. This model also describes the steady-state photoconductivity of amorphous silicon, in particular its dependence on temperature, doping and illumination intensity.
A self-consistent spin-diffusion model for micromagnetics
Abert, Claas
2016-12-17
We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization-dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current-driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure is investigated as a function of the tilting angle of the magnetization in the different layers. Both examples show good agreement with reference simulations and experiments, respectively.
Moreno Chaparro, Nicolas
2015-06-30
We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria, such as the characteristic size of the chain, compressibility, density, and temperature, are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations, we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model, we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high-molecular-weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., a ≥20-fold reduction relative to the original system). We show that our methodology has potential applications in modeling systems of high-molecular-weight molecules at large scales, such as diblock copolymers and DNA.
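The size-preserving power-law scaling described above can be sketched as follows; the merging rule and the Flory exponent used here are generic polymer-scaling assumptions, not the paper's calibrated DPD parameters:

```python
def chain_size(n_beads, bond, nu=0.588):
    """Characteristic chain size R = b * N**nu (generic Flory scaling)."""
    return bond * n_beads**nu

def coarse_grain(n_beads, bond, phi, nu=0.588):
    """Merge phi fine beads into one coarse bead; rescale the bond length
    by phi**nu so the characteristic chain size R is preserved."""
    return n_beads // phi, bond * phi**nu

n, b = 200, 1.0
n_cg, b_cg = coarse_grain(n, b, phi=20)          # 20-fold particle reduction
print(n_cg, round(chain_size(n_cg, b_cg) / chain_size(n, b), 3))
```

The coarse chain has 20 times fewer particles, yet the power-law rescaled bond length keeps R unchanged, which is the spirit of the consistent parameter scaling in the reduced model.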
Classical and Quantum Consistency of the DGP Model
Nicolis, Alberto; Rattazzi, Riccardo
2004-01-01
We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode π consistently describes physics in a wide range of regimes, both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, for instance the spherical source in asymptotically flat space. However, we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space π becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed, the tower of higher-dimensional operators expected from a generic UV completion of the model limits predictivity at even larger length scales. We outline ...
Thermodynamically consistent mesoscopic model of the ferro/paramagnetic transition
Czech Academy of Sciences Publication Activity Database
Benešová, Barbora; Kružík, Martin; Roubíček, Tomáš
2013-01-01
Roč. 64, Č. 1 (2013), s. 1-28 ISSN 0044-2275 R&D Projects: GA AV ČR IAA100750802; GA ČR GA106/09/1573; GA ČR GAP201/10/0357 Grant - others:GA ČR(CZ) GA106/08/1397; GA MŠk(CZ) LC06052 Program:GA; LC Institutional support: RVO:67985556 Keywords : ferro-para-magnetism * evolution * thermodynamics Subject RIV: BA - General Mathematics; BA - General Mathematics (UT-L) Impact factor: 1.214, year: 2013 http://library.utia.cas.cz/separaty/2012/MTR/kruzik-thermodynamically consistent mesoscopic model of the ferro-paramagnetic transition.pdf
Self-Consistent Dynamical Model of the Broad Line Region
International Nuclear Information System (INIS)
Czerny, Bozena; Li, Yan-Rong; Sredzinska, Justyna; Hryniewicz, Krzysztof; Panda, Swayam; Wildy, Conor; Karas, Vladimir
2017-01-01
We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.
Consistent constraints on the Standard Model Effective Field Theory
Energy Technology Data Exchange (ETDEWEB)
Berthier, Laure; Trott, Michael [Niels Bohr International Academy, University of Copenhagen,Blegdamsvej 17, DK-2100 Copenhagen (Denmark)
2016-02-10
We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low-energy precision data. We fit one hundred and three observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level or beyond, unless the cut-off scale is assumed to be large, Λ ≳ 3 TeV. We incorporate theoretical errors more consistently in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S,T analysis is modified by the theory errors we include, as an illustrative example.
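The effect of a theory error metric on a fit can be illustrated with a toy two-observable, one-parameter linear fit; every number below is invented for illustration and none corresponds to the actual SMEFT analysis:

```python
import numpy as np

# One hypothetical Wilson coefficient c, two observables with linearized
# predictions mu_i = sm_i + a_i*c; all numbers are made up.
sm = np.array([1.00, 0.50])                  # illustrative SM predictions
a = np.array([0.20, -0.10])                  # illustrative EFT sensitivities
data = np.array([1.02, 0.48])
cov_exp = np.diag([0.01**2, 0.01**2])        # experimental covariance
cov_th = np.diag([0.02**2, 0.02**2])         # schematic theory error metric

def best_fit(cov):
    w = np.linalg.inv(cov)                   # chi^2 weight matrix
    c_hat = (a @ w @ (data - sm)) / (a @ w @ a)
    sigma = 1.0 / np.sqrt(a @ w @ a)         # 1-sigma bound on c
    return c_hat, sigma

c_exp, s_exp = best_fit(cov_exp)
c_tot, s_tot = best_fit(cov_exp + cov_th)
print(s_tot > s_exp)                         # bound relaxes with theory errors
```

Adding the theory covariance to the experimental one inflates the total uncertainty and widens the allowed interval for the coefficient, which is the mechanism by which bounds on leading parameters are relaxed.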
Self-Consistent Dynamical Model of the Broad Line Region
Energy Technology Data Exchange (ETDEWEB)
Czerny, Bozena [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Li, Yan-Rong [Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing (China); Sredzinska, Justyna; Hryniewicz, Krzysztof [Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Panda, Swayam [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Wildy, Conor [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Karas, Vladimir, E-mail: bcz@cft.edu.pl [Astronomical Institute, Czech Academy of Sciences, Prague (Czech Republic)
2017-06-22
We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.
A seismologically consistent compositional model of Earth's core.
Badro, James; Côté, Alexander S; Brodholt, John P
2014-05-27
Earth's core is less dense than iron, and therefore it must contain "light elements," such as S, Si, O, or C. We use ab initio molecular dynamics to calculate the density and bulk sound velocity in liquid metal alloys at the pressure and temperature conditions of Earth's outer core. We compare the velocity and density for any composition in the (Fe-Ni, C, O, Si, S) system to radial seismological models and find a range of compositional models that fit the seismological data. We find no oxygen-free composition that fits the seismological data, and therefore our results indicate that oxygen is always required in the outer core. An oxygen-rich core is a strong indication of high-pressure and high-temperature conditions of core differentiation in a deep magma ocean with an FeO concentration (oxygen fugacity) higher than that of the present-day mantle.
Flood damage: a model for consistent, complete and multipurpose scenarios
Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia
2016-12-01
Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the basis for addressing the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. In this way, post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
Self-consistent Modeling of Elastic Anisotropy in Shale
Kanitpanyacharoen, W.; Wenk, H.; Matthies, S.; Vasin, R.
2012-12-01
Elastic anisotropy in clay-rich sedimentary rocks has increasingly received attention because of its significance for the prospecting of petroleum deposits, as well as for seals in the context of nuclear waste and CO2 sequestration. The orientation of component minerals and pores/fractures is a critical factor that influences elastic anisotropy. In this study, we investigate lattice and shape preferred orientation (LPO and SPO) of three shales from the North Sea in the UK, the Qusaiba Formation in Saudi Arabia, and the Officer Basin in Australia (referred to as N1, Qu3, and L1905, respectively) to calculate elastic properties and compare them with experimental results. Synchrotron hard X-ray diffraction and microtomography experiments were performed to quantify LPO, weight proportions, and three-dimensional SPO of constituent minerals and pores. Our preliminary results show that the degree of LPO and the total amount of clays are highest in Qu3 (3.3-6.5 m.r.d. and 74 vol%), moderately high in N1 (2.4-5.6 m.r.d. and 70 vol%), and lowest in L1905 (2.3-2.5 m.r.d. and 42 vol%). In addition, porosity is as low as 2% in Qu3, while it is up to 6% in L1905 and 8% in N1. Based on this information and the single-crystal elastic properties of the mineral components, we apply a self-consistent averaging method to calculate macroscopic elastic properties and the corresponding seismic velocities for the different shales. The elastic model is then compared with acoustic velocities measured on the same samples. The P-wave velocities measured in Qu3 (4.1-5.3 km/s, 26.3% anisotropy) are higher than those obtained from L1905 (3.9-4.7 km/s, 18.6% anisotropy) and N1 (3.6-4.3 km/s, 17.7% anisotropy). By adjusting the pore structure (aspect ratio) and the single-crystal elastic properties of the clay minerals, good agreement between our calculation and the ultrasonic measurements is obtained.
Promoting consistent use of the communication function classification system (CFCS).
Cunningham, Barbara Jane; Rosenbaum, Peter; Hidecker, Mary Jo Cooley
2016-01-01
We developed a Knowledge Translation (KT) intervention to standardize the way speech-language pathologists working in Ontario Canada's Preschool Speech and Language Program (PSLP) used the Communication Function Classification System (CFCS). This tool was being used as part of a provincial program evaluation, and standardizing its use was critical for establishing reliability and validity within the provincial dataset. Two theoretical foundations, Diffusion of Innovations and the Communication Persuasion Matrix, were used to develop and disseminate the intervention to standardize use of the CFCS among a cohort of speech-language pathologists. A descriptive pre-test/post-test study was used to evaluate the intervention. Fifty-two participants completed an electronic pre-test survey, reviewed intervention materials online, and then immediately completed an electronic post-test survey. The intervention improved clinicians' understanding of how the CFCS should be used, their intentions to use the tool in the standardized way, and their abilities to make correct classifications using the tool. Findings from this work will be shared with representatives of the Ontario PSLP. The intervention may be disseminated to all speech-language pathologists working in the program. This study can be used as a model for developing and disseminating KT interventions for clinicians in paediatric rehabilitation. The Communication Function Classification System (CFCS) is a new tool that allows speech-language pathologists to classify children's skills into five meaningful levels of function. There is uncertainty and inconsistent practice in the field about the methods for using this tool. This study combined two theoretical frameworks to develop an intervention to standardize use of the CFCS among a cohort of speech-language pathologists. The intervention effectively increased clinicians' understanding of the methods for using the CFCS, ability to make correct classifications, and
Self-consistent approach for neutral community models with speciation
Haegeman, Bart; Etienne, Rampal S.
Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is
Self-consistent modelling of resonant tunnelling structures
DEFF Research Database (Denmark)
Fiig, T.; Jauho, A.P.
1992-01-01
We report a comprehensive study of the effects of self-consistency on the I-V-characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated...
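A schematic of the simultaneous-solution idea, in arbitrary units and not the authors' implementation: iterate between the effective-mass Schrödinger eigenproblem and the Poisson equation with damped mixing until the potential stops changing.

```python
import numpy as np

n, L = 200, 1.0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
# Dirichlet finite-difference Laplacian
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2
v_ext = 500.0 * (x - 0.5)**2                 # confining external potential
v_h = np.zeros(n)                            # Hartree potential (updated below)
charge = 10.0                                # total dimensionless charge

for it in range(100):
    H = -0.5 * lap + np.diag(v_ext + v_h)    # effective-mass Hamiltonian
    E, psi = np.linalg.eigh(H)
    rho = charge * psi[:, 0]**2 / (np.sum(psi[:, 0]**2) * h)  # ground-state density
    v_new = np.linalg.solve(lap, -rho)       # Poisson: d2v/dx2 = -rho
    if np.max(np.abs(v_new - v_h)) < 1e-8:
        break
    v_h = 0.5 * v_h + 0.5 * v_new            # damped (mixed) update for stability
print(it < 99)                               # the fixed-point iteration converged
```

The damped update is the simplest way to stabilize the Schrödinger/Poisson fixed point; production codes typically use Newton or Anderson-type acceleration instead.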
Understanding and Improving the Performance Consistency of Distributed Computing Systems
Yigitbasi, M.N.
2012-01-01
With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index...... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...
Is the island universe model consistent with observations?
Piao, Yun-Song
2005-01-01
We study the island universe model, in which the universe is initially in a cosmological constant sea; local quantum fluctuations violating the null energy condition then create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.
An Evaluation of Information Consistency in Grid Information Systems
Field, Laurence
2016-01-01
A Grid information system resolves queries that may need to consider all information sources (Grid services), which are widely distributed geographically, in order to enable efficient Grid functions that may utilise multiple cooperating services. Fundamentally this can be achieved by either moving the query to the data (query shipping) or moving the data to the query (data shipping). Existing Grid information system implementations have adopted one of the two approaches. This paper explores the two approaches in further detail by evaluating them to the best possible extent with respect to Grid information system benchmarking metrics. A Grid information system that follows the data shipping approach based on the replication of information that aims to improve the currency for highly-mutable information is presented. An implementation of this, based on an Enterprise Messaging System, is evaluated using the benchmarking method and the consequence of the results for the design of Grid information systems is discu...
OSIRIS: Efficient and consistent recovery of compartmentalized operating systems
Bhat, Koustubha; Vogt, Dirk; Kouwe, Erik Van Der; Gras, Ben; Sambuc, Lionel; Tanenbaum, Andrew S.; Bos, Herbert; Giuffrida, Cristiano
2016-01-01
Much research has gone into making operating systems more amenable to recovery and more resilient to crashes. Traditional solutions rely on partitioning the operating system (OS) to contain the effects of crashes within compartments and facilitate modular recovery. However, state dependencies among
Thermodynamically consistent description of criticality in models of correlated electrons
Czech Academy of Sciences Publication Activity Database
Janiš, Václav; Kauch, Anna; Pokorný, Vladislav
2017-01-01
Roč. 95, č. 4 (2017), s. 1-14, č. článku 045108. ISSN 2469-9950 R&D Projects: GA ČR GA15-14259S Institutional support: RVO:68378271 Keywords : conserving approximations * Anderson model * Hubbard model * parquet equations Subject RIV: BM - Solid Matter Physics ; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 3.836, year: 2016
Towards a self-consistent dynamical nuclear model
International Nuclear Information System (INIS)
Roca-Maza, X; Colò, G; Bortignon, P F; Niu, Y F
2017-01-01
Density functional theory (DFT) is a powerful and accurate tool, exploited in nuclear physics to investigate the ground-state and some of the collective properties of nuclei along the whole nuclear chart. Models based on DFT are not, however, suitable for the description of single-particle dynamics in nuclei. Following the field theoretical approach by A Bohr and B R Mottelson to describe nuclear interactions between single-particle and vibrational degrees of freedom, we have taken important steps towards the building of a microscopic dynamic nuclear model. In connection with this, one important issue that needs to be better understood is the renormalization of the effective interaction in the particle-vibration approach. One possible way to renormalize the interaction is by the so-called subtraction method. In this contribution, we will implement the subtraction method in our model for the first time and study its consequences. (paper)
A thermodynamically consistent model of shape-memory alloys
Czech Academy of Sciences Publication Activity Database
Benešová, Barbora
2011-01-01
Roč. 11, č. 1 (2011), s. 355-356 ISSN 1617-7061 R&D Projects: GA ČR GAP201/10/0357 Institutional research plan: CEZ:AV0Z20760514 Keywords : shape memory alloys * model based on relaxation * thermomechanic coupling Subject RIV: BA - General Mathematics http://onlinelibrary.wiley.com/doi/10.1002/pamm.201110169/abstract
On self-consistent N=1 supersymmetric composite models
International Nuclear Information System (INIS)
Pirogov, Yu.F.
1984-01-01
A class of fermion-boson N=1 supersymmetric composite models is considered. The models satisfy the anomaly matching condition, n-independence and the survival hypothesis. A unique admissible set of light states has been found under the additional requirements of two-particle metacolour force saturation, left-right discrete symmetry and observability of spectator states on a par with the composite ones, the former being necessary to compensate for axial anomalies. With respect to the unbroken chiral symmetry G^(MF) = SU(n)_L × SU(n)_R, the light set has, in left-chiral notation, the form [(n(n−1)/2, 1) + (1, n̄(n̄−1)/2)] + 2(n̄, n) + [(n(n+1)/2, 1) + (1, n̄(n̄+1)/2)], independent of the metacolour group G^(MC). The effective interaction theory for the light set, on mass scales smaller than that of compositeness, is the N=1 supersymmetric grand unified model with G^(MF) = SU(n)_L × SU(n)_R. Here n=6, 8 are phenomenologically acceptable. On low mass scales, the light set transforms exactly into four families of ordinary leptons and quarks. In accordance with the survival hypothesis, all exotic states are naturally heavy under the spontaneous breaking of G^(MF) to the low-energy standard-model symmetry.
The internal consistency of the North Sea carbonate system
Salt, S.; Thomas, H.; Bozec, Y.; Borges, A.V.; de Baar, H.J.W
2016-01-01
In 2002 (February) and 2005 (August), the full suite of carbonate system parameters (total alkalinity (A_{T}), dissolved inorganic carbon (DIC), pH, and partial pressure of CO_{2} (pCO_{2})) was measured on two re-occupations of the entire North Sea basin, with three
Toward a Self-Consistent Dynamical Model of the NSSL
Matilsky, Loren
2018-01-01
The advent of helioseismology has revealed in detail the internal differential rotation profile of the Sun. In particular, the presence of two boundary layers, the tachocline at the bottom of the convection zone (CZ) and the Near Surface Shear Layer (NSSL) at the top of the CZ, has remained a mystery. These two boundary layers may have significant consequences for the internal dynamo that drives the Sun's magnetic field, and so understanding their dynamics is an important step in solar physics and in the theory of solar-like stellar structure in general. In this talk, we analyze three numerical models of hydrodynamic convection in rotating spherical shells with varying degrees of stratification in order to understand the dynamical balance of the solar near-surface shear layer (NSSL). We find that with sufficient stratification, a boundary layer with some characteristics of the NSSL develops at high latitudes, and it is maintained purely by an inertial balance of torques in which viscosity is negligible. An inward radial flux of angular momentum from the Reynolds stress (as predicted by theory) is balanced by the poleward latitudinal flux of angular momentum due to the meridional circulation. We analyze the similarities of the near-surface shear in our models to that of the Sun, and find that the solar NSSL is most likely maintained by the inertial balance our simulations display at high latitudes, but with a modified upper boundary condition.
Consistency problems for Heath-Jarrow-Morton interest rate models
Filipović, Damir
2001-01-01
The book is written for a reader with knowledge in mathematical finance (in particular interest rate theory) and elementary stochastic analysis, such as provided by Revuz and Yor (Continuous Martingales and Brownian Motion, Springer 1991). It gives a short introduction both to interest rate theory and to stochastic equations in infinite dimension. The main topic is the Heath-Jarrow-Morton (HJM) methodology for the modelling of interest rates. Experts in SDE in infinite dimension with interest in applications will find here the rigorous derivation of the popular "Musiela equation" (referred to in the book as HJMM equation). The convenient interpretation of the classical HJM set-up (with all the no-arbitrage considerations) within the semigroup framework of Da Prato and Zabczyk (Stochastic Equations in Infinite Dimensions) is provided. One of the principal objectives of the author is the characterization of finite-dimensional invariant manifolds, an issue that turns out to be vital for applications. Finally, ge...
Consistent Prediction of Properties of Systems with Lipids
DEFF Research Database (Denmark)
Cunico, Larissa; Ceriani, Roberta; Sarup, Bent
Equilibria between vapour, liquid and/or solid phases, pure component properties and also the mixture-phase properties are necessary for synthesis, design and analysis of different unit operations found in the production of edible oils, fats and biodiesel. A systematic numerical analysis....... Lipids are found in almost all mixtures involving edible oils, fats and biodiesel. They are also being extracted for use in the pharma-industry. A database for pure components (lipids) present in these processes and mixtures properties has been developed and made available for different applications...... (model development, property verification, property prediction, etc.). The database has verified data for fatty acids, acylglycerols, fatty esters, fatty alcohols, vegetable oils, biodiesel and minor compounds as phospholipids, tocopherols, sterols, carotene and squalene, together with a user friendly...
Chaotic synchronization of vibrations of a coupled mechanical system consisting of a plate and beams
Directory of Open Access Journals (Sweden)
J. Awrejcewicz
In this paper a mathematical model of a mechanical system consisting of a plate and either one or two beams is derived. The obtained PDEs are reduced to ODEs, and then studied mainly using the fast Fourier and wavelet transforms. A few examples of chaotic synchronization are illustrated and discussed.
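As a toy stand-in for synchronization detection (the paper studies plate/beam PDEs reduced to ODEs), two diffusively coupled logistic maps illustrate how trajectory differences and FFT spectra coincide once the coupling is strong enough:

```python
import numpy as np

def coupled_maps(eps, steps=4000, r=3.9):
    """Two diffusively coupled chaotic logistic maps; eps is the coupling."""
    x, y = 0.3, 0.7
    xs, ys = [], []
    for _ in range(steps):
        fx, fy = r * x * (1 - x), r * y * (1 - y)
        x, y = (1 - eps) * fx + eps * fy, (1 - eps) * fy + eps * fx
        xs.append(x)
        ys.append(y)
    return np.array(xs[1000:]), np.array(ys[1000:])  # drop the transient

xs, ys = coupled_maps(eps=0.45)                  # strong coupling
xu, yu = coupled_maps(eps=0.0)                   # uncoupled
err_sync = np.max(np.abs(xs - ys))
err_free = np.max(np.abs(xu - yu))
spec_gap = np.max(np.abs(np.fft.rfft(xs) - np.fft.rfft(ys)))
print(err_sync < 1e-6 < err_free)                # synchronized vs. unsynchronized
```

When the trajectories lock, the difference signal collapses toward zero and the Fourier spectra of the two subsystems become indistinguishable, which is the frequency-domain signature of chaotic synchronization.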
On the internal consistency of holographic dark energy models
International Nuclear Information System (INIS)
Horvat, R
2008-01-01
Holographic dark energy (HDE) models, underpinned by an effective quantum field theory (QFT) with a manifest UV/IR connection, have become convincing candidates for providing an explanation of the dark energy in the universe. On the other hand, the maximum number of quantum states that a conventional QFT for a box of size L is capable of describing relates to those boxes which are on the brink of experiencing a sudden collapse to a black hole. Another restriction on the underlying QFT is that the UV cut-off, which cannot be chosen independently of the IR cut-off and therefore becomes a function of time in a cosmological setting, should stay the largest energy scale even in the standard cosmological epochs preceding the dark-energy-dominated one. We show that, irrespective of whether one deals with the saturated form of HDE or takes a certain degree of non-saturation in the past, the above restrictions cannot be met in a radiation-dominated universe, an epoch in the history of the universe which is expected to be perfectly describable within conventional QFT.
Numerical simulation of a thermodynamically consistent four-species tumor growth model.
Hawkins-Daarud, Andrea; van der Zee, Kristoffer G; Oden, J Tinsley
2012-01-01
In this paper, we develop a thermodynamically consistent four-species model of tumor growth on the basis of the continuum theory of mixtures. Unique to this model is the incorporation of the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations and captures surface effects through diffuse-interface models. A mixed finite element spatial discretization is developed and implemented to provide numerical results demonstrating the range of solutions this model can produce. A time-stepping algorithm is then presented for this system, which is shown to be first-order accurate and energy gradient stable. The results of an array of numerical experiments are presented, which demonstrate a wide range of solutions produced by various choices of model parameters.
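The time-stepping property claimed above (first-order accuracy with energy gradient stability) can be illustrated on a much simpler gradient flow. The sketch below applies a convex-splitting scheme to the scalar Ginzburg-Landau flow u' = -(u^3 - u); it is not the authors' four-species scheme, and all numerical values are illustrative only:

```python
# Illustrative convex-splitting time stepping for the gradient flow
# u' = -(u^3 - u) with energy E(u) = u^4/4 - u^2/2.  This is NOT the
# four-species tumor model -- only a minimal scalar example of a
# first-order, energy-stable ("energy gradient stable") scheme.

def energy(u):
    return 0.25 * u**4 - 0.5 * u**2

def step(u, dt):
    """One convex-splitting step: the convex part u^3 is implicit, the
    concave part -u explicit; solve dt*v^3 + v = u + dt*u by Newton."""
    rhs = u + dt * u
    v = u
    for _ in range(50):
        f = dt * v**3 + v - rhs
        fp = 3.0 * dt * v**2 + 1.0
        v -= f / fp
    return v

u, dt = 2.0, 0.1
energies = [energy(u)]
for _ in range(100):
    u = step(u, dt)
    energies.append(energy(u))

# The discrete energy never increases, and u relaxes to a minimizer.
assert all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
assert abs(u - 1.0) < 1e-5
```

The convex-concave splitting is what buys unconditional energy stability here; a fully explicit step with the same dt would not guarantee a decreasing energy.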
Kataoka, Hiroyuki; Yamada, Akihiro; Kamizono, Hiroki; Ando, Hideyuki; Tanaka, Takeshi
The progress of integrated-circuit technology in recent years has enabled a large performance increase in system LSI. Because acquiring system LSI knowledge, such as design, semiconductor processing, and device evaluation, takes a long time, it is hard for working engineers and students to learn system LSI technology. A basic end-to-end system covering the design, process and evaluation of a fundamental IC system was therefore studied.
DEFF Research Database (Denmark)
Toldbod, Thomas; Israelsen, Poul
2014-01-01
Companies rely on multiple Management Control Systems to obtain their short- and long-term objectives. When applying a multifaceted perspective on Management Control Systems, the concept of internal consistency has been found to be important in obtaining goal congruency in the company. However...... of MCSs when analyzing internal consistency in the MCS package, and how managers obtain internal consistency in the new MCS package when an MCS change occurs. This study focuses specifically on changes to administrative controls, which are not internally consistent with the current cybernetic controls. As top......, to date we know little about how managers maintain internal consistency when individual MCSs change and do not fit with the other MCSs. Based on a case study in a global Danish manufacturing company, this study finds that it is necessary to distinguish between the design characteristics of MCSs and use...
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
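The Bayesian Model Averaging step can be sketched in a few lines: the BMA point estimator is the posterior-probability-weighted average of the per-model estimates. All model names, log marginal likelihoods, and coefficient estimates below are hypothetical, for illustration only:

```python
# Minimal sketch of Bayesian Model Averaging (BMA).  The model set,
# estimates and marginal likelihoods are hypothetical placeholders.
import math

# Hypothetical log marginal likelihoods for three model specifications.
log_ml = {"M1": -104.2, "M2": -103.1, "M3": -106.0}
# Hypothetical per-model estimates of the autoregressive coefficient.
estimate = {"M1": 0.52, "M2": 0.48, "M3": 0.61}

# Posterior model probabilities under equal prior model probabilities,
# computed stably by subtracting the maximum log marginal likelihood.
m = max(log_ml.values())
weights = {k: math.exp(v - m) for k, v in log_ml.items()}
z = sum(weights.values())
post = {k: w / z for k, w in weights.items()}

# BMA point estimator: posterior-probability-weighted average.
bma = sum(post[k] * estimate[k] for k in post)
```

Under substantial model uncertainty this weighted estimator typically has lower RMSE than committing to any single specification.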
Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution
Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.
2017-10-01
Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
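The incidence-graph idea behind topological consistency can be sketched on a toy 2D cell complex; this is only an illustration of the concept, not DB4GeO's actual data model:

```python
# Toy incidence-graph check of topological consistency for a 2D cell
# complex (vertices, edges, faces).  A sketch of the idea only; it does
# not reproduce DB4GeO's real data structures.

# A square face bounded by four edges; edges reference vertex ids.
vertices = {0, 1, 2, 3}
edges = {"a": (0, 1), "b": (1, 2), "c": (2, 3), "d": (3, 0)}
faces = {"f": ["a", "b", "c", "d"]}

def consistent(vertices, edges, faces):
    # Every edge must be incident to two distinct, existing vertices.
    for v0, v1 in edges.values():
        if v0 == v1 or v0 not in vertices or v1 not in vertices:
            return False
    # Every face boundary must close into a cycle: each boundary
    # vertex appears in exactly two boundary edges.
    for boundary in faces.values():
        count = {}
        for e in boundary:
            for v in edges[e]:
                count[v] = count.get(v, 0) + 1
        if any(c != 2 for c in count.values()):
            return False
    return True

assert consistent(vertices, edges, faces)
# Removing one boundary edge breaks the cycle and is detected.
assert not consistent(vertices, edges, {"f": ["a", "b", "c"]})
```

The same incidence counting generalizes to 3D/4D complexes, which is what makes the incidence graph attractive as a consistency check for big geo-spatio-temporal data.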
A self-consistent model for thermodynamics of multicomponent solid solutions
International Nuclear Information System (INIS)
Svoboda, J.; Fischer, F.D.
2016-01-01
The self-consistent concept recently published in this journal (108, 27-30, 2015) is extended from a binary to a multicomponent system. This is possible by exploiting the trapping concept as a basis for including the interaction of atoms in terms of pairs (e.g. A-A, B-B, C-C...) and couples (e.g. A-B, B-C, ...) in a multicomponent system with A as solvent and B, C, ... as dilute solutes. The model results in a formulation of the Gibbs energy, which can be minimized. Examples show that couple and pair formation may influence the equilibrium Gibbs energy markedly.
International Nuclear Information System (INIS)
Ane, J.M.; Grandgirard, V.; Albajar, F.; Johner, J.
2001-01-01
A consistent and simple approach to deriving plasma scenarios for next-step tokamak design is presented. It is based on successive plasma equilibrium snapshots from plasma breakdown to the end of ramp-down. Temperature and density profiles for each equilibrium are derived from a 2D plasma model. The time interval between two successive equilibria is then computed from the toroidal-field magnetic energy balance, the resistive term of which depends on the n and T profiles. This approach provides a consistent analysis of plasma performance, flux consumption and the PF system, including average voltage waveforms across the PF coils. The plasma model and the Poynting theorem for the toroidal magnetic energy are presented. Applications to ITER-FEAT and to M2, a Q=5 machine designed at CEA, are shown. (author)
A new k-epsilon model consistent with Monin-Obukhov similarity theory
DEFF Research Database (Denmark)
van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.
2017-01-01
A new k-" model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-" model is compared with another k-" model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-" model is not consistent with ...
International Nuclear Information System (INIS)
Schmidt, J.R.; Roberts, S.T.; Loparo, J.J.; Tokmakoff, A.; Fayer, M.D.; Skinner, J.L.
2007-01-01
Vibrational spectroscopy can provide important information about structure and dynamics in liquids. In the case of liquid water, this is particularly true for isotopically dilute HOD/D₂O and HOD/H₂O systems. Infrared and Raman line shapes for these systems were measured some time ago. Very recently, ultrafast three-pulse vibrational echo experiments have been performed on these systems, which provide new, exciting, and important dynamical benchmarks for liquid water. There has been tremendous theoretical effort expended on the development of classical simulation models for liquid water. These models have been parameterized from experimental structural and thermodynamic measurements. The goal of this paper is to determine if representative simulation models are consistent with steady-state, and especially with these new ultrafast, experiments. Such a comparison provides information about the accuracy of the dynamics of these simulation models. We perform this comparison using theoretical methods developed in previous papers, and calculate the experimental observables directly, without making the Condon and cumulant approximations, and taking into account molecular rotation, vibrational relaxation, and finite excitation pulses. On the whole, the simulation models do remarkably well; perhaps the best overall agreement with experiment comes from the SPC/E model.
Directory of Open Access Journals (Sweden)
Caroline Ghyoot
2017-07-01
Mixotrophy, i.e., the ability to combine phototrophy and phagotrophy in one organism, is now recognized to be widespread among photic-zone protists and to potentially modify the structure and functioning of planktonic ecosystems. However, few biogeochemical/ecological models explicitly include this mode of nutrition, owing to the large diversity of observed mixotrophic types, the few data allowing the parameterization of physiological processes, and the need to make the addition of mixotrophy into existing ecosystem models as simple as possible. We here propose and discuss a flexible model that depicts the main observed behaviors of mixotrophy in microplankton. A first model version describes constitutive mixotrophy (the organism photosynthesizes by use of its own chloroplasts). This model version offers two possible configurations, allowing the description of constitutive mixotrophs (CMs) that favor either phototrophy or heterotrophy. A second version describes non-constitutive mixotrophy (the organism performs phototrophy by use of chloroplasts acquired from its prey). The model variants were described so as to be consistent with a plankton conceptualization in which the biomass is divided into separate components on the basis of their biochemical function (Shuter approach; Shuter, 1979). The two model variants of mixotrophy can easily be implemented in ecological models that adopt the Shuter approach, such as the MIRO model (Lancelot et al., 2005), and address the challenges associated with modeling mixotrophy.
A formally verified algorithm for interactive consistency under a hybrid fault model
Lincoln, Patrick; Rushby, John
1993-01-01
Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands a asymmetric, s symmetric, and b benign faults simultaneously, using m+1 rounds, provided n > 2a + 2s + b + m and m ≥ a. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
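The resilience condition quoted above is easy to encode and sanity-check; the helper below merely evaluates the bound, not the OM algorithm itself:

```python
# Resilience condition of the Lincoln-Rushby hybrid-fault algorithm:
# with a asymmetric, s symmetric, and b benign faults and m+1 rounds,
# agreement is guaranteed when n > 2a + 2s + b + m and m >= a.
def tolerates(n, a, s, b, m):
    return n > 2 * a + 2 * s + b + m and m >= a

# The classical Byzantine-only case (s = b = 0, m = a) recovers the
# familiar n > 3a requirement of the original OM algorithm:
assert tolerates(4, 1, 0, 0, 1)       # 4 nodes mask 1 Byzantine fault
assert not tolerates(3, 1, 0, 0, 1)   # 3 nodes cannot
# Benign faults are cheap: one extra node per benign fault suffices.
assert tolerates(5, 1, 0, 1, 1)
```

The point of the hybrid model is visible in the bound: benign faults cost one node each, symmetric faults two, and only asymmetric (Byzantine) faults demand both extra nodes and extra rounds.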
McClelland, James L
2013-11-01
The complementary learning systems theory of the roles of hippocampus and neocortex (McClelland, McNaughton, & O'Reilly, 1995) holds that the rapid integration of arbitrary new information into neocortical structures is avoided to prevent catastrophic interference with structured knowledge representations stored in synaptic connections among neocortical neurons. Recent studies (Tse et al., 2007, 2011) showed that neocortical circuits can rapidly acquire new associations that are consistent with prior knowledge. The findings challenge the complementary learning systems theory as previously presented. However, new simulations extending those reported in McClelland et al. (1995) show that new information that is consistent with knowledge previously acquired by a putatively cortexlike artificial neural network can be learned rapidly and without interfering with existing knowledge; it is when inconsistent new knowledge is acquired quickly that catastrophic interference ensues. Several important features of the findings of Tse et al. (2007, 2011) are captured in these simulations, indicating that the neural network model used in McClelland et al. has characteristics in common with neocortical learning mechanisms. An additional simulation generalizes beyond the network model previously used, showing how the rate of change of cortical connections can depend on prior knowledge in an arguably more biologically plausible network architecture. In sum, the findings of Tse et al. are fully consistent with the idea that hippocampus and neocortex are complementary learning systems. Taken together, these findings and the simulations reported here advance our knowledge by bringing out the role of consistency of new experience with existing knowledge and demonstrating that the rate of change of connections in real and artificial neural networks can be strongly prior-knowledge dependent.
Bosons system with finite repulsive interaction: self-consistent field method
International Nuclear Information System (INIS)
Renatino, M.M.B.
1983-01-01
Some static properties of a boson system at T = 0 K under the action of a repulsive potential are studied. For the repulsive potential, a model was adopted consisting of a region where it is constant (r < rc) and a decay as 1/r (r > rc). The self-consistent field approximation used takes short-range correlations into account through a local field correction, which leads to an effective field. The static structure factor S(q) and the effective potential ψ(q) are obtained through a self-consistent calculation. The pair-correlation function g(r) and the energy of the collective excitations E(q) are also obtained from the structure factor. The density of the system and the parameters of the repulsive potential, that is, its height and the size of the constant region, were used as variables for the problem. The results obtained for S(q), g(r) and E(q) for a fixed ratio r0/rc and a variable λ indicate the emergence of structure in the system, which becomes more noticeable as the potential becomes more repulsive. (author)
Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models
Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.
2011-09-01
We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus our work in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.
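The "random insertion plus greedy conflict management" strategy can be caricatured with 1D intervals standing in for cylinders; the scores, proposals and conflict test below are invented purely for illustration:

```python
# Toy sketch of stochastic exploration with greedy conflict management:
# randomly propose elements (1D intervals standing in for cylinders),
# insert a proposal when it raises the score, and greedily evict
# existing elements that conflict (overlap) with a better newcomer.
# All scores and proposal rules are invented for illustration.
import random

random.seed(7)

def conflict(x, y):                  # two intervals overlap
    return x[0] < y[1] and y[0] < x[1]

config = []                          # current reconstruction
for _ in range(200):
    a = random.uniform(0.0, 10.0)
    cand = (a, a + random.uniform(0.2, 1.0))
    score = cand[1] - cand[0]        # toy score: prefer longer elements
    rivals = [e for e in config if conflict(e, cand)]
    # Greedy conflict management: accept the candidate only if it
    # scores higher than all conflicting elements combined.
    if score > sum(e[1] - e[0] for e in rivals):
        config = [e for e in config if e not in rivals] + [cand]

# The maintained configuration is always conflict-free.
assert all(not conflict(x, y)
           for i, x in enumerate(config) for y in config[i + 1:])
```

The invariant (a conflict-free configuration whose score only ever improves) is what lets the real method explore a huge solution space while keeping the reconstructed model consistent at every step.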
Self-consistent theory of finite Fermi systems and radii of nuclei
International Nuclear Information System (INIS)
Saperstein, E. E.; Tolokonnikov, S. V.
2011-01-01
Present-day self-consistent approaches in nuclear theory were analyzed from the point of view of describing distributions of nuclear densities. The generalized method of the energy density functional due to Fayans and his coauthors (this is the most successful version of the self-consistent theory of finite Fermi systems) was the first among the approaches under comparison. The second was the most successful version of the Skyrme-Hartree-Fock method with the HFB-17 functional due to Goriely and his coauthors. Charge radii of spherical nuclei were analyzed in detail. Several isotopic chains of deformed nuclei were also considered. Charge-density distributions ρ_ch(r) were calculated for several spherical nuclei. They were compared with model-independent data extracted from an analysis of elastic electron scattering on nuclei.
Development of a 3D consistent 1D neutronics model for reactor core simulation
International Nuclear Information System (INIS)
Lee, Ki Bog; Joo, Han Gyu; Cho, Byung Oh; Zee, Sung Quun
2001-02-01
In this report, a 3D-consistent 1D model based on the nonlinear analytic nodal method is developed to reproduce 3D results. During the derivation, the current conservation factor (CCF) is introduced, which guarantees that the axial neutron currents obtained from the 1D equation equal the 3D reference values. Furthermore, in order to properly use 1D group constants, a new 1D group-constant representation scheme employing tables for fuel temperature, moderator density and boron concentration is developed and functionalized for the control-rod tip position. To test the 1D kinetics model with the CCF, several steady-state and transient calculations were performed and compared with 3D reference values. The errors in k-eff were reduced by about a factor of ten when using the CCF, without significant computational overhead, and the errors in the power distribution decreased to a fifth or a tenth of their former values in steady-state calculations. The 1D kinetics model with the CCF, together with the 1D group-constant functionalization employing tables as a function of control-rod tip position, provides more precise results in steady-state and transient calculations. It is thus expected that the 1D kinetics model derived in this report can be used in safety analysis, real-time reactor simulation coupled with a system analysis code, operator support systems, etc.
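The role of the CCF can be sketched schematically: it rescales the axial currents of the collapsed 1D model so that they match the 3D reference, plane by plane. The current values below are invented, and in the actual method the factor enters the 1D nodal equations rather than being applied as a post-hoc rescaling:

```python
# Schematic of the current conservation factor (CCF) idea: scale the
# axial net currents of a collapsed 1D model so they reproduce the 3D
# reference values at each axial plane.  The numbers are invented.
j_3d = [0.0, 1.8, 2.5, 1.1, 0.0]   # reference axial currents (3D calc)
j_1d = [0.0, 2.0, 2.3, 1.0, 0.0]   # raw currents of the 1D model

ccf = [j3 / j1 if j1 != 0.0 else 1.0 for j3, j1 in zip(j_3d, j_1d)]
corrected = [f * j1 for f, j1 in zip(ccf, j_1d)]

# With the CCF applied, the 1D axial currents equal the 3D references.
assert all(abs(c - j3) < 1e-12 for c, j3 in zip(corrected, j_3d))
```

Because the correction is current-based rather than flux-based, axial leakage in the 1D model is forced to be consistent with the 3D solution, which is what drives the reported k-eff and power-distribution improvements.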
Wood, David F.; Kohun, Frederick G.; Laverty, Joseph Packy
2010-01-01
This paper reports on a study of systems analysis textbooks in terms of topics covered and academic background of the authors. It addresses the consistency within IS curricula with respect to the content of a systems analysis and design course using the object-oriented approach. The research questions addressed were 1: Is there a consistency among…
Behzadi, Azad Esmailov
1999-10-01
The critical behavior of the fully frustrated XY model has remained controversial in spite of almost two decades of related research. In this study, we have developed a new method inspired by Netz and Berker's hard-spin mean-field theory. Our approach for XY models yields results consistent with Monte Carlo simulations as the ratio of antiferromagnetic to ferromagnetic interactions is varied. The method captures two phase transitions clearly separated in temperature for ratios of 0.5, 0.6, and 1.5, with these transitions moving closer together in temperature as the interaction ratio approaches 1.0, the fully frustrated case. From the system's chirality as a function of temperature in the critical region, we calculate the critical exponent β in agreement with an Ising transition for all of the interaction ratios studied, including 1.0. This result provides support for the view that there are two transitions, rather than one transition in a new universality class, occurring in the fully frustrated XY model. Finite size effects in this model can be essentially eliminated by rescaling the local magnetization, the quantity retained self-consistently in our computations. This rescaling scheme also shows excellent results when tested on the two-dimensional Ising model, and the method, as generalized, provides a framework for an analytical approach to complex systems. Monte Carlo simulations of the fully frustrated XY model in a magnetic field provide further evidence of two transitions. The magnetic field breaks the rotational symmetry of the model, but the two-fold chiral degeneracy of the ground state persists in the field. This lower degeneracy with the field present makes Monte Carlo simulations converge more rapidly. The critical exponent δ determined from the sublattice magnetizations as a function of field agrees with the value expected for a Kosterlitz-Thouless transition. Further, the zero-field specific heat obtained by extrapolation from simulations in a
Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül
2017-12-01
A summary of the main concepts of global ionospheric maps [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron-content models (VTEC-altimeter and difference of slant TEC based on independent global positioning system (GPS) data, dSTEC-GPS) is performed. It is based on 26 worldwide-distributed GPS receivers, mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between the dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM 'UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessments are found. And, as expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, mainly evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and, in general, for the scientific and technical community interested in simple and truly external ways of validating electron-content models of the ionosphere.
International Nuclear Information System (INIS)
Kita, Toshihiro
2005-01-01
A simple system consisting of a second-order lag element (a damped linear pendulum) and two first-order lag elements with piecewise-linear static feedback, derived from a power system model, is presented. It exhibits chaotic behavior for a wide range of parameter values. The analysis of the bifurcations and the chaotic behavior is presented, with a qualitative estimation of the parameter values for which the chaotic behavior is observed. Several characteristics, such as the scalability of the attractor and the globality of the attractor basin, are also discussed.
Self-consistent imbedding and the ellipsoidal model for porous rocks
International Nuclear Information System (INIS)
Korringa, J.; Brown, R.J.S.; Thompson, D.D.; Runge, R.J.
1979-01-01
Equations are obtained for the effective elastic moduli for a model of an isotropic, heterogeneous, porous medium. The mathematical model used for computation is abstract in that it is not simply a rigorous computation for a composite medium of some idealized geometry, although the computation contains individual steps which are just that. Both the solid part and the pore space are represented by ellipsoidal or spherical 'grains' or 'pores' of various sizes and shapes. The strain of each grain, caused by external forces applied to the medium, is calculated in a self-consistent imbedding (SCI) approximation, which replaces the true surroundings of any given grain or pore by an isotropic medium defined by the effective moduli to be computed. The ellipsoidal nature of the shapes allows us to use Eshelby's theoretical treatment of a single ellipsoidal inclusion in an infinite homogeneous medium. Results are compared with the literature, and discrepancies are found with all published accounts of this problem. Deviations from the work of Wu, of Walsh, and of O'Connell and Budiansky are attributed to a substitution made by these authors which, though an identity for the exact quantities involved, is only approximate in the SCI calculation. This reduces the validity of their equations to first-order effects only. Differences with the results of Kuster and Toksoez are attributed to the fact that their computation is not self-consistent in the sense used here; a result seems to be a stiffening of the medium, as if the pores are held apart. For spherical grains and pores, their calculated moduli are those given by the Hashin-Shtrikman upper bounds. Our calculation reproduces, in the case of spheres, an early result of Budiansky. An additional feature of our work is that the algebra is simpler than in earlier work. We also incorporate into the theory the possibility that fluid-filled pores are interconnected.
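For the spherical case mentioned above, the Hashin-Shtrikman upper bound on the effective bulk modulus has a closed form; the sketch below evaluates it for an illustrative quartz-like solid with empty pores (the numerical values are not from the paper):

```python
# Hashin-Shtrikman upper bound on the effective bulk modulus of a
# two-phase medium (phase 1 = stiff solid, phase 2 = pores), the bound
# that the abstract says non-self-consistent sphere results reproduce.
# Moduli in GPa; the inputs below are illustrative, not from the paper.
def k_hs_upper(k1, mu1, k2, f2):
    """K_HS+ = K1 + f2 / (1/(K2 - K1) + f1/(K1 + 4*mu1/3))."""
    f1 = 1.0 - f2
    return k1 + f2 / (1.0 / (k2 - k1) + f1 / (k1 + 4.0 * mu1 / 3.0))

k_solid, mu_solid = 37.0, 44.0      # quartz-like solid grain (assumed)
k_dry = k_hs_upper(k_solid, mu_solid, 0.0, 0.2)   # 20% empty pores

# The bound lies strictly between the pore and solid end members.
assert 0.0 < k_dry < k_solid
```

A truly self-consistent (SCI) calculation generally falls below this upper bound; coinciding with it is the "stiffening, as if the pores are held apart" that the abstract attributes to the non-self-consistent treatments.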
A CVAR scenario for a standard monetary model using theory-consistent expectations
DEFF Research Database (Denmark)
Juselius, Katarina
2017-01-01
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...
Self-consistent model of the Rayleigh-Taylor instability in ablatively accelerated laser plasma
International Nuclear Information System (INIS)
Bychkov, V.V.; Golberg, S.M.; Liberman, M.A.
1994-01-01
A self-consistent approach to the problem of the growth rate of the Rayleigh-Taylor instability in laser accelerated targets is developed. The analytical solution of the problem is obtained by solving the complete system of the hydrodynamical equations which include both thermal conductivity and energy release due to absorption of the laser light. The developed theory provides a rigorous justification for the supplementary boundary condition in the limiting case of the discontinuity model. An analysis of the suppression of the Rayleigh-Taylor instability by the ablation flow is done and it is found that there is a good agreement between the obtained solution and the approximate formula σ = 0.9√(gk) − 3u₁k, where g is the acceleration and u₁ is the ablation velocity. This paper discusses different regimes of the ablative stabilization and compares them with previous analytical and numerical works
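The approximate formula σ = 0.9√(gk) − 3u₁k implies a cutoff wavenumber k_c = g(0.9/3u₁)² above which ablation stabilizes the mode; the check below uses illustrative ICF-scale numbers, not values from the paper:

```python
# The abstract's approximate dispersion relation for the ablative
# Rayleigh-Taylor instability, sigma = 0.9*sqrt(g*k) - 3*u1*k, and the
# cutoff wavenumber above which ablation stabilizes the mode.
# g and u1 below are illustrative ICF-scale values only.
import math

def growth_rate(k, g, u1):
    return 0.9 * math.sqrt(g * k) - 3.0 * u1 * k

g, u1 = 1.0e14, 1.0e5    # acceleration [cm/s^2], ablation velocity [cm/s]

# Setting sigma = 0 gives the cutoff k_c = (0.9 / (3*u1))**2 * g.
k_c = (0.9 / (3.0 * u1)) ** 2 * g

assert abs(growth_rate(k_c, g, u1)) < 1e-6 * 0.9 * math.sqrt(g * k_c)
assert growth_rate(0.5 * k_c, g, u1) > 0.0   # longer waves still grow
assert growth_rate(2.0 * k_c, g, u1) < 0.0   # shorter waves are stable
```

The −3u₁k term is the ablative stabilization itself: it grows linearly in k while the classical √(gk) term grows only as a square root, so sufficiently short wavelengths are always suppressed.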
Development of a Kohn-Sham like potential in the Self-Consistent Atomic Deformation Model
Mehl, M. J.; Boyer, L. L.; Stokes, H. T.
1996-01-01
This is a brief description of how to derive the local "atomic" potentials from the Self-Consistent Atomic Deformation (SCAD) model density function. Particular attention is paid to the spherically averaged case.
Rumsey, Christopher L.
2009-01-01
In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
Solution of degenerate hypergeometric system of Horn consisting of three equations
Tasmambetov, Zhaksylyk N.; Zhakhina, Ryskul U.
2017-09-01
The possibilities of constructing normal-regular solutions of a system consisting of three second-order partial differential equations are studied by the Frobenius-Latysheva method. The method of determining the unknown coefficients is shown, and the relationship of the studied system with the system whose solution is the Laguerre polynomial of three variables is indicated. The generalization of the Frobenius-Latysheva method to the case of a system consisting of three equations makes it possible to clarify the relationship of such systems, whose solutions are special functions of three variables. These include the Whittaker and Bessel functions, the 205 special functions of three variables from the list of M. Srivastava and P.W. Karlsson, as well as orthogonal polynomials of three variables. All this contributes to the further development of the analytic theory of systems consisting of three second-order partial differential equations.
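The flavor of the normal-regular (Frobenius-style) construction can already be seen in one variable, where the power-series coefficients of Laguerre's equation terminate in a polynomial; the three-variable systems of the paper generalize this, and only the classical 1D case is sketched here:

```python
# One-variable analogue of the construction: Laguerre's equation
# x*y'' + (1 - x)*y' + n*y = 0 has a power-series (normal-regular)
# solution whose coefficients obey the Frobenius-style recurrence
# c_{k+1} = c_k * (k - n) / (k + 1)^2, terminating in the Laguerre
# polynomial L_n.  Exact rational arithmetic keeps the check clean.
from fractions import Fraction

def laguerre_coeffs(n):
    c = [Fraction(1)]
    for k in range(n):
        c.append(c[k] * (k - n) / (k + 1) ** 2)
    return c              # coefficients of 1, x, x^2, ..., x^n

# L_2(x) = 1 - 2x + x^2/2
assert laguerre_coeffs(2) == [1, -2, Fraction(1, 2)]
```

The termination of the recurrence at k = n (where the factor k − n vanishes) is exactly what singles out the polynomial solution among the normal-regular series.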
A paradigm shift toward a consistent modeling framework to assess climate impacts
Monier, E.; Paltsev, S.; Sokolov, A. P.; Fant, C.; Chen, H.; Gao, X.; Schlosser, C. A.; Scott, J. R.; Dutkiewicz, S.; Ejaz, Q.; Couzo, E. A.; Prinn, R. G.; Haigh, M.
2017-12-01
Estimates of physical and economic impacts of future climate change are subject to substantial challenges. To enrich the currently popular approaches of assessing climate impacts by evaluating a damage function or by multi-model comparisons based on the Representative Concentration Pathways (RCPs), we focus here on integrating impacts into a self-consistent coupled human and Earth system modeling framework that includes modules that represent multiple physical impacts. In a sample application we show that this framework is capable of investigating the physical impacts of climate change and socio-economic stressors. The projected climate impacts vary dramatically across the globe in a set of scenarios with global mean warming ranging between 2.4°C and 3.6°C above pre-industrial by 2100. Unabated emissions lead to substantial sea level rise, acidification that impacts the base of the oceanic food chain, air pollution that exceeds health standards by tenfold, water stress that impacts an additional 1 to 2 billion people globally and agricultural productivity that decreases substantially in many parts of the world. We compare the outcomes from these forward-looking scenarios against the common goal described by the target-driven scenario of 2°C, which results in much smaller impacts. It is challenging for large internationally coordinated exercises to respond quickly to new policy targets. We propose that a paradigm shift toward a self-consistent modeling framework to assess climate impacts is needed to produce information relevant to evolving global climate policy and mitigation strategies in a timely way.
CONSISTENCY OF THE PERFORMANCE MANAGEMENT SYSTEM AND ITS QUANTIFICATION USING THE Z-MESOT FRAMEWORK
Directory of Open Access Journals (Sweden)
Jan Zavadsky
2016-12-01
Full Text Available The main purpose of this paper is: (1) to present a theoretical approach for testing the consistency of a performance management system using the Z-MESOT framework, and (2) to present the results of an empirical analysis in selected manufacturing companies. The Z-MESOT framework is a managerial approach based on the definition of attributes for measuring and assessing the performance of a company. It is a quantitative approach that can establish the degree of consistency of a performance management system. The quantification is based on an arithmetical calculation in the Z-MESOT matrix. Consistency of the performance management system does not in itself assure final performance. Consistency is part of a systemic approach to management, even if we do not call it quality management. A consistent definition of the performance management system can help enterprises to be flexible and to respond quickly to any changes in the internal or external business environment. A consistent definition is represented by a set of 21 performance indicator attributes, including the requirement to measure and evaluate strategic and operational goals. In the paper, we also describe the relationships between selected requirements of the ISO 9001:2015 standard and the Z-MESOT framework.
The Devil in the Dark: A Fully Self-Consistent Seismic Model for Venus
Unterborn, C. T.; Schmerr, N. C.; Irving, J. C. E.
2017-12-01
The bulk composition and structure of Venus is unknown despite accounting for 40% of the mass of all the terrestrial planets in our Solar System. As we expand the scope of planetary science to include those planets around other stars, the lack of measurements of basic planetary properties such as moment of inertia, core-size and thermal profile for Venus hinders our ability to compare the potential uniqueness of the Earth and our Solar System to other planetary systems. Here we present fully self-consistent, whole-planet density and seismic velocity profiles calculated using the ExoPlex and BurnMan software packages for various potential Venusian compositions. Using these models, we explore the seismological implications of the different thermal and compositional initial conditions, taking into account phase transitions due to changes in pressure, temperature as well as composition. Using mass-radius constraints, we examine both the centre frequencies of normal mode oscillations and the waveforms and travel times of body waves. Seismic phases which interact with the core, phase transitions in the mantle, and shallower parts of Venus are considered. We also consider the detectability and transmission of these seismic waves from within the dense atmosphere of Venus. Our work provides coupled compositional-seismological reference models for the terrestrial planet in our Solar System of which we know the least. Furthermore, these results point to the potential wealth of fundamental scientific insights into Venus and Earth, as well as exoplanets, which could be gained by including a seismometer on future planetary exploration missions to Venus, the devil in the dark.
Self-consistent assessment of Englert-Schwinger model on atomic properties.
Lehtomäki, Jouko; Lopez-Acevedo, Olga
2017-12-21
Our manuscript investigates a self-consistent solution of the statistical atom model proposed by Berthold-Georg Englert and Julian Schwinger (the ES model) and benchmarks it against atomic Kohn-Sham and two orbital-free models of the Thomas-Fermi-Dirac (TFD)-λvW family. Results show that the ES model generally offers the same accuracy as the well-known TFD-(1/5)vW model; however, the ES model corrects the failure of the Pauli potential in the near-nucleus region. We also point to the inability to describe low-Z atoms as the foremost concern in improving the present model.
Self-consistent model of a solid for the description of lattice and magnetic properties
International Nuclear Information System (INIS)
Balcerzak, T.; Szałowski, K.; Jaščur, M.
2017-01-01
In this paper, a self-consistent theoretical description of the lattice and magnetic properties of a model system with magnetoelastic interaction is presented. The magnetic exchange integrals are assumed to depend on the distance between interacting spins, which couples the magnetic and lattice subsystems. The framework is based on summing the Gibbs free energies of the lattice and magnetic subsystems. On the basis of the minimization principle for the Gibbs energy, a set of equations of state for the system is derived. These equations of state combine the parameters describing the elastic properties (relative volume deformation) and the magnetic properties (magnetization changes). The formalism is extensively illustrated with numerical calculations performed for a system of ferromagnetically coupled spins S=1/2 localized at the sites of a simple cubic lattice. In particular, the significant influence of the magnetic subsystem on the elastic properties is demonstrated. It manifests itself in substantial modifications of such quantities as the relative volume deformation, the thermal expansion coefficient, and the isothermal compressibility, especially in the vicinity of the magnetic phase transition. On the other hand, the influence of the lattice subsystem on the magnetic one is also evident. It takes, for example, the form of a dependence of the critical (Curie) temperature and of the magnetization itself on the external pressure, which is thoroughly investigated.
Self-consistent model of a solid for the description of lattice and magnetic properties
Energy Technology Data Exchange (ETDEWEB)
Balcerzak, T., E-mail: t_balcerzak@uni.lodz.pl [Department of Solid State Physics, Faculty of Physics and Applied Informatics, University of Łódź, ulica Pomorska 149/153, 90-236 Łódź (Poland); Szałowski, K., E-mail: kszalowski@uni.lodz.pl [Department of Solid State Physics, Faculty of Physics and Applied Informatics, University of Łódź, ulica Pomorska 149/153, 90-236 Łódź (Poland); Jaščur, M. [Department of Theoretical Physics and Astrophysics, Faculty of Science, P. J. Šáfárik University, Park Angelinum 9, 041 54 Košice (Slovakia)
2017-03-15
In this paper, a self-consistent theoretical description of the lattice and magnetic properties of a model system with magnetoelastic interaction is presented. The magnetic exchange integrals are assumed to depend on the distance between interacting spins, which couples the magnetic and lattice subsystems. The framework is based on summing the Gibbs free energies of the lattice and magnetic subsystems. On the basis of the minimization principle for the Gibbs energy, a set of equations of state for the system is derived. These equations of state combine the parameters describing the elastic properties (relative volume deformation) and the magnetic properties (magnetization changes). The formalism is extensively illustrated with numerical calculations performed for a system of ferromagnetically coupled spins S=1/2 localized at the sites of a simple cubic lattice. In particular, the significant influence of the magnetic subsystem on the elastic properties is demonstrated. It manifests itself in substantial modifications of such quantities as the relative volume deformation, the thermal expansion coefficient, and the isothermal compressibility, especially in the vicinity of the magnetic phase transition. On the other hand, the influence of the lattice subsystem on the magnetic one is also evident. It takes, for example, the form of a dependence of the critical (Curie) temperature and of the magnetization itself on the external pressure, which is thoroughly investigated.
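The coupled equations of state described in this abstract can be illustrated with a toy mean-field sketch (not the authors' formalism): an Ising-like S=1/2 ferromagnet whose exchange weakens linearly with the volume deformation v, with the Gibbs energy per site taken as elastic energy plus pressure work plus the mean-field magnetic free energy. The coupling form J(v) = J0(1 - gamma*v) and all parameter values are illustrative assumptions.

```python
import numpy as np

def magnetization(j_eff, T, tol=1e-10, max_iter=10000):
    """Fixed-point solve of the mean-field equation m = tanh(j_eff * m / T)."""
    m = 0.9
    for _ in range(max_iter):
        m_new = np.tanh(j_eff * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def gibbs_per_site(v, T, p, J0=1.0, gamma=2.0, K=10.0):
    """Elastic energy + pressure work + mean-field magnetic free energy.
    The exchange J(v) = J0 * (1 - gamma * v) weakens as the lattice expands."""
    j_eff = J0 * (1.0 - gamma * v)
    m = magnetization(j_eff, T)
    f_mag = 0.5 * j_eff * m**2 - T * np.log(2.0 * np.cosh(j_eff * m / T))
    return 0.5 * K * v**2 + p * v + f_mag, m

def equilibrium(T, p):
    """Minimise the Gibbs energy over the deformation v on a grid, solving
    the magnetic self-consistency at every v (the coupled equations of state)."""
    vs = np.linspace(-0.2, 0.2, 2001)
    energies, mags = zip(*(gibbs_per_site(v, T, p) for v in vs))
    i = int(np.argmin(energies))
    return vs[i], mags[i]

v_eq, m_eq = equilibrium(T=0.5, p=0.0)   # ordered phase: spontaneous strain is negative
```

In this toy version, magnetic ordering contracts the lattice (v_eq < 0) because a smaller volume strengthens the exchange, and the spontaneous strain disappears above the Curie point, mirroring the mutual influence of the subsystems discussed in the abstract.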
Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.
2003-01-01
The amount of data available on the Internet is continuously increasing; consequently, there is a growing need for tools that help to analyse the data. Testing of consistency among data received from different sources is made difficult by the number of different languages and schemas being used. … In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling …; an important part of this technique is the attachment of OCL expressions to special boolean class attributes that we call consistency attributes. The resulting integration model can be used for automatic consistency testing of two instances of the legacy models by automatically instantiating the whole integration …
Assessment of the Degree of Consistency of the System of Fuzzy Rules
Directory of Open Access Journals (Sweden)
Pospelova Lyudmila Yakovlevna
2013-12-01
Full Text Available The article analyses recent achievements and publications and shows that difficulties in explaining the nature of fuzziness and equivocation arise in socio-economic models that use the traditional paradigm of classical rationalism (computational, agent-based and econometric models). The accumulated collective experience of developing optimal models confirms the promise of applying the fuzzy set approach to modelling society. The article justifies the necessity of studying the nature of inconsistency in fuzzy knowledge bases, both at the generalised ontology level and at the pragmatic functional level of logical inference. It offers a method for finding logical and conceptual contradictions in the form of a combination of abduction and modus ponens. It discusses the key issue of the proposed method: what properties the membership function of the secondary fuzzy set should have, where this set describes, in fuzzy inference models, a resulting state of the object of management that combines empirically incompatible properties with high probability. The degree of membership of the object of management in several incompatible classes with respect to the fuzzy output variable is the degree of fuzziness of the statement "The intersection of all results of the fuzzy inference of the set of rules applied at some input is an empty set". The article describes an algorithm for assessing the degree of consistency. It provides an example of the step-by-step detection of contradictions in statistical fuzzy knowledge bases at the pragmatic functional level of logical inference. The obtained testing results, in the form of sets of incompatible facts, inference chains, sets of non-intersecting intervals and computed degrees of inconsistency, allow experts to eliminate inadmissible contradictions in a timely manner and, at the same time, to improve the quality of recommendations and assessments of fuzzy expert systems.
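The "empty intersection" statement at the heart of this abstract can be quantified directly: take the pointwise minimum of the rule-output membership functions and measure how far the height of that intersection falls below full membership. A small sketch with invented triangular fuzzy sets (not the article's knowledge bases):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)   # common universe of discourse

def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def inconsistency_degree(memberships):
    """1 - height of the intersection of the rule-output fuzzy sets:
    0 means the outputs are fully compatible, 1 means the intersection
    is empty (an outright contradiction)."""
    intersection = np.minimum.reduce(memberships)
    return 1.0 - float(intersection.max())

compatible = inconsistency_degree([tri(2, 4, 6), tri(3, 5, 7)])      # overlapping outputs
contradiction = inconsistency_degree([tri(0, 1, 2), tri(8, 9, 10)])  # disjoint outputs
```

Intermediate values of the degree flag rules whose conclusions are only partially reconcilable, which is the kind of graded inconsistency the article's algorithm reports to experts.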
Validation study of the magnetically self-consistent inner magnetosphere model RAM-SCB
Yu, Yiqun; Jordanova, Vania; Zaharia, Sorin; Koller, Josef; Zhang, Jichun; Kistler, Lynn M.
2012-03-01
The validation of the magnetically self-consistent inner magnetospheric model RAM-SCB developed at Los Alamos National Laboratory is presented here. The model consists of two codes: a kinetic ring current-atmosphere interaction model (RAM) and a 3-D equilibrium magnetic field code (SCB). The validation is conducted by simulating two magnetic storm events and then comparing the model results against a variety of satellite in situ observations, including the magnetic field from Cluster and Polar spacecraft, ion differential flux from the Cluster/CODIF (Composition and Distribution Function) analyzer, and the ground-based SYM-H index. The model prediction of the magnetic field is in good agreement with observations, which indicates the model's capability of representing well the inner magnetospheric field configuration. This provides confidence for the RAM-SCB model to be utilized for field line and drift shell tracing, which are needed in radiation belt studies. While the SYM-H index, which reflects the total ring current energy content, is generally reasonably reproduced by the model using the Weimer electric field model, the modeled ion differential flux clearly depends on the electric field strength, local time, and magnetic activity level. A self-consistent electric field approach may be needed to improve the model performance in this regard.
Self-consistent treatment of quark-quark interaction in MIT bag model
Simonis, V
1997-01-01
Some features of the MIT bag model are discussed, with particular emphasis on the static, spherical cavity approximation to the model. A self-consistent procedure for obtaining wave functions and calculating gluon exchange effects is proposed. The equations derived are similar to state-dependent relativistic Hartree-Fock equations. (author)
Estimating long-term volatility parameters for market-consistent models
African Journals Online (AJOL)
Contemporary actuarial and accounting practices (APN 110 in the South African context) require the use of market-consistent models for the valuation of embedded investment derivatives. These models have to be calibrated with accurate and up-to-date market data. Arguably, the most important variable in the valuation of ...
A parameter study of self-consistent disk models around Herbig AeBe stars
Meijer, J.; Dominik, C.; de Koter, A.; Dullemond, C.P.; van Boekel, R.; Waters, L.B.F.M.
2008-01-01
We present a parameter study of self-consistent models of protoplanetary disks around Herbig AeBe stars. We use the code developed by Dullemond and Dominik, which solves the 2D radiative transfer problem including an iteration for the vertical hydrostatic structure of the disk. This grid of models
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling
Pera, H.; Kleijn, J.M.; Leermakers, F.A.M.
2014-01-01
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli kc and k̄ and
Self-consistent field modeling of adsorption from polymer/surfactant mixtures
Postmus, B.R.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2008-01-01
We report on the development of a self-consistent field model that describes the competitive adsorption of nonionic alkyl-(ethylene oxide) surfactants and nonionic polymer poly(ethylene oxide) (PEO) from aqueous solutions onto silica. The model explicitly describes the response to the pH and the
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
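The profile-based indexes described above can be sketched with a few simple global surrogates (illustrative only; the paper's final model combines several such indexes calibrated against crash frequency, and the station and speed values below are invented):

```python
import numpy as np

def speed_consistency_metrics(station_m, v85_kmh, posted_kmh):
    """Global surrogates for design consistency from a V85 operating-speed
    profile: dispersion of the profile, the sharpest local deceleration
    between successive stations (km/h per m), and the share of stations
    where the operating speed exceeds the posted speed."""
    v = np.asarray(v85_kmh, dtype=float)
    s = np.asarray(station_m, dtype=float)
    gradients = np.diff(v) / np.diff(s)
    max_decel = float(-gradients.min()) if gradients.size else 0.0
    return float(v.std()), max_decel, float(np.mean(v > posted_kmh))

# Hypothetical profile: a sharp speed drop at a tight curve around station 400 m
dispersion, max_decel, over_posted = speed_consistency_metrics(
    [0, 200, 400, 600, 800], [95, 97, 80, 92, 96], posted_kmh=90)
```

A large dispersion or a sharp local deceleration flags a segment whose geometry surprises drivers, which is exactly what a consistency-based surrogate safety measure is meant to capture.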
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling.
Pera, H; Kleijn, J M; Leermakers, F A M
2014-02-14
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli kc and k̄ and the preferred monolayer curvature J(0)(m), and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of kc and the area compression modulus kA are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J(0)(m) can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J(0)(m) change sign with relevant parameter changes. Although typically k̄ < 0, J(0)(m) ≫ 0, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes as these become vulnerable to pore formation or disintegration into lipid disks.
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling
International Nuclear Information System (INIS)
Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.
2014-01-01
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli kc and k̄ and the preferred monolayer curvature J0m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of kc and the area compression modulus kA are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J0m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J0m change sign with relevant parameter changes. Although typically k̄ < 0, J0m ≫ 0, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes as these become vulnerable to pore formation or disintegration into lipid disks.
Bowman, Kaye; McKenna, Suzy
2016-01-01
This occasional paper provides an overview of the development of Australia's national training system and is a key knowledge document of a wider research project "Consistency with flexibility in the Australian national training system." This research project investigates the various approaches undertaken by each of the jurisdictions to…
A consistent description of kinetics and hydrodynamics of quantum Bose-systems
Directory of Open Access Journals (Sweden)
P.A.Hlushak
2004-01-01
Full Text Available A consistent approach to the description of kinetics and hydrodynamics of many-Boson systems is proposed. Generalized transport equations for strongly and weakly nonequilibrium Bose systems are obtained. Here we use the method of the nonequilibrium statistical operator by D.N. Zubarev. New equations for the time distribution function of the quantum Bose system, with separate contributions from the kinetic and potential energies of particle interactions, are obtained. The generalized transport coefficients are determined, accounting for the consistent description of kinetic and hydrodynamic processes.
A non-parametric consistency test of the ΛCDM model with Planck CMB data
Energy Technology Data Exchange (ETDEWEB)
Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)
2017-09-01
Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion
Directory of Open Access Journals (Sweden)
Muhammad Imran
2017-11-01
Full Text Available Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
Imran, Muhammad; Kühbach, Markus; Roters, Franz; Bambach, Markus
2017-11-02
Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
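The SDC detection step can be sketched numerically: given the strain-hardening rate θ as a function of flow stress σ, locate the sign change of d²θ/dσ². The tanh-shaped synthetic curve below is an illustrative stand-in for measured data, not the alloy 800H model of the paper.

```python
import numpy as np

def sdc_onset(sigma, theta):
    """Locate the inflection of the strain-hardening rate theta(sigma), i.e.
    the stress at which d2(theta)/d(sigma)2 changes sign: the onset of DRX
    according to the Poliak-Jonas second derivative criterion."""
    d1 = np.gradient(theta, sigma)
    d2 = np.gradient(d1, sigma)
    sign_change = np.where(np.diff(np.sign(d2)) != 0)[0]
    return float(sigma[sign_change[0]]) if sign_change.size else None

# Synthetic Kocks-Mecking-like curve with an inflection placed at sigma = 120 MPa
sigma = np.linspace(60.0, 180.0, 1201)
theta = 2000.0 - 8.0 * sigma - 300.0 * np.tanh((sigma - 120.0) / 15.0)

onset = sdc_onset(sigma, theta)
```

On noisy experimental data the second derivative amplifies scatter, so in practice the θ(σ) curve is usually smoothed (e.g. by a polynomial fit) before applying this criterion.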
Postmus, B.R.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2008-01-01
We have constructed a model to predict the properties of non-ionic (alkyl-ethylene oxide) (C(n)E(m)) surfactants, both in aqueous solutions and near a silica surface, based upon the self-consistent field theory using the Scheutjens-Fleer discretisation scheme. The system has the pH and the ionic
A fast-simplified wheel-rail contact model consistent with perfect plastic materials
Sebès, Michel; Chevalier, Luc; Ayasse, Jean-Bernard; Chollet, Hugues
2012-09-01
A method is described which is an extension of rolling contact models with respect to plasticity. This new method, which is an extension of the STRIPES semi-Hertzian (SH) model, has been implemented in a multi-body-system (MBS) package and does not result in a longer execution time than the STRIPES SH model [J.B. Ayasse and H. Chollet, Determination of the wheel-rail contact patch in semi-Hertzian conditions, Veh. Syst. Dyn. 43(3) (2005), pp. 161-172]. High speed of computation is obtained by some hypotheses about the plastic law, the shape of stresses, the locus of the maximum stress and the slip. Plasticity does not change the vehicle behaviour but there is a need for an extension of rolling contact models with respect to plasticity as far as fatigue analysis of rail is concerned: rolling contact fatigue may be addressed via the finite element method (FEM) including material non-linearities, where loads are the contact stresses provided by the post-processing of MBS results [K. Dang Van, M.H. Maitournam, Z. Moumni, and F. Roger, A comprehensive approach for modeling fatigue and fracture of rails, Eng. Fract. Mech. 76 (2009), pp. 2626-2636]. In STRIPES, like in other MBS models, contact stresses may exceed the plastic yield criterion, leading to wrong results in the subsequent FEM analysis. With the proposed method, contact stresses are kept consistent with a perfect plastic law, avoiding these problems. The method is benchmarked versus non-linear FEM in Hertzian geometries. As a consequence of taking plasticity into account, contact patch area is bigger than the elastic one. In accordance with FEM results, a different ellipse aspect ratio than the one predicted by Hertz theory was also found and finally pressure does not exceed the threshold prescribed by the plastic law. The method also provides more exact results with non-Hertzian geometries. The new approach is finally compared with non-linear FEM in a tangent case with a unidirectional load and a complete
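One way to keep contact stresses "consistent with a perfect plastic law", in the spirit of the method but much simplified (1D strip, invented numbers, and a homothetic widening rule rather than the STRIPES construction): clip the elastic semi-elliptical pressure at the yield threshold and widen the patch until the clipped profile carries the same normal load, reproducing the larger-than-elastic contact area noted in the abstract.

```python
import numpy as np

def clipped_load(s, p0, a, p_y, n=20001):
    """Normal load per unit length of the elastic semi-elliptical pressure
    profile, widened by factor s and clipped at the plastic threshold p_y."""
    x = np.linspace(-s * a, s * a, n)
    p = np.minimum(p0 * np.sqrt(np.clip(1.0 - (x / (s * a)) ** 2, 0.0, None)), p_y)
    return float(np.sum((p[:-1] + p[1:]) * np.diff(x)) / 2.0)  # trapezoid rule

def widened_scale(p0, a, p_y, lo=1.0, hi=3.0, iters=60):
    """Bisection for the widening factor s at which the clipped profile
    carries the same load as the elastic solution (load conservation)."""
    target = np.pi * p0 * a / 2.0          # load of the elastic semi-ellipse
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if clipped_load(mid, p0, a, p_y) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p0, a, p_y = 1500.0, 6.0, 1000.0   # peak Hertz pressure (MPa), half-width (mm), yield cap
s = widened_scale(p0, a, p_y)       # plastic patch is s times wider than the elastic one
```

The resulting profile never exceeds the yield threshold while carrying the full load over an enlarged patch, which is the qualitative behaviour the abstract reports against the non-linear FEM benchmark.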
A proposed grading system for standardizing tumor consistency of intracranial meningiomas.
Zada, Gabriel; Yashar, Parham; Robison, Aaron; Winer, Jesse; Khalessi, Alexander; Mack, William J; Giannotta, Steven L
2013-12-01
Tumor consistency plays an important and underrecognized role in the surgeon's ability to resect meningiomas, especially with evolving trends toward minimally invasive and keyhole surgical approaches. Aside from descriptors such as "hard" or "soft," no objective criteria exist for grading, studying, and conveying the consistency of meningiomas. The authors designed a practical 5-point scale for intraoperative grading of meningiomas based on the surgeon's ability to internally debulk the tumor and on the subsequent resistance to folding of the tumor capsule. Tumor consistency grades and features are as follows: 1) extremely soft tumor, internal debulking with suction only; 2) soft tumor, internal debulking mostly with suction, and remaining fibrous strands resected with easily folded capsule; 3) average consistency, tumor cannot be freely suctioned and requires mechanical debulking, and the capsule then folds with relative ease; 4) firm tumor, high degree of mechanical debulking required, and capsule remains difficult to fold; and 5) extremely firm, calcified tumor, approaches density of bone, and capsule does not fold. Additional grading categories included tumor heterogeneity (with minimum and maximum consistency scores) and a 3-point vascularity score. This grading system was prospectively assessed in 50 consecutive patients undergoing craniotomy for meningioma resection by 2 surgeons in an independent fashion. Grading scores were subjected to a linear weighted kappa analysis for interuser reliability. Fifty patients (100 scores) were included in the analysis. The mean maximal tumor diameter was 4.3 cm. The distribution of overall tumor consistency scores was as follows: Grade 1, 4%; Grade 2, 9%; Grade 3, 43%; Grade 4, 44%; and Grade 5, 0%. Regions of Grade 5 consistency were reported only focally in 14% of heterogeneous tumors. Tumors were designated as homogeneous in 68% and heterogeneous in 32% of grades. The kappa analysis score for overall tumor consistency
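The interuser reliability analysis described above uses a linear weighted kappa; below is a self-contained implementation. The paper's rating data are not available, so the grade lists are invented for illustration.

```python
import numpy as np

def linear_weighted_kappa(rater1, rater2, k=5):
    """Cohen's kappa with linear weights for two raters' ordinal grades 1..k.
    Disagreements are penalised in proportion to |grade difference|."""
    r1 = np.asarray(rater1) - 1
    r2 = np.asarray(rater2) - 1
    observed = np.zeros((k, k))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))  # chance agreement
    penalty = np.abs(np.arange(k)[:, None] - np.arange(k)[None, :]) / (k - 1)
    return 1.0 - (penalty * observed).sum() / (penalty * expected).sum()

# Hypothetical consistency grades (1-5) from two surgeons for seven tumors
kappa = linear_weighted_kappa([3, 4, 2, 3, 4, 5, 3], [3, 3, 2, 4, 4, 4, 3])
```

Linear weighting is the natural choice for an ordinal scale like this one, since grading a tumor 4 when the other surgeon says 3 is a far smaller disagreement than grading it 1.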
Collaborative CAD Synchronization Based on a Symmetric and Consistent Modeling Procedure
Directory of Open Access Journals (Sweden)
Yiqi Wu
2017-04-01
Full Text Available One basic issue with collaborative computer-aided design (Co-CAD) is how to maintain valid and consistent modeling results across all design sites. Moreover, modeling history is important in parametric CAD modeling. Therefore, unlike a typical co-editing approach, this paper proposes a novel method for Co-CAD synchronization in which all Co-CAD sites maintain symmetric and consistent operating procedures. Consequently, consistency of both the modeling results and the modeling history can be achieved. In order to generate a valid, unique, and symmetric queue among collaborative sites, a set of correlated mechanisms is presented in this paper. Firstly, the causal relationship of operations is maintained. Secondly, the operation queue is reconstructed for partially concurrent operations, and the concurrent operations can be retrieved. Thirdly, a symmetric concurrent-operation control strategy is proposed to determine the order of operations and resolve possible conflicts. Compared with existing Co-CAD consistency methods, the proposed method is convenient and flexible in supporting collaborative design. An experiment performed on the collaborative modeling procedure demonstrates the correctness and applicability of this work.
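The symmetric-queue idea can be sketched minimally (this is only the deterministic tie-breaking core, using Lamport clocks and site ids as assumed metadata; the paper's full mechanism also reconstructs queues under partial concurrency and resolves semantic conflicts):

```python
from typing import List, Tuple

Op = Tuple[int, int, str]   # (Lamport clock, site id, operation name)

def symmetric_queue(local_log: List[Op]) -> List[Op]:
    """Reorder a site's log into the queue every site agrees on: causal order
    first (Lamport clock), with the site id as a deterministic tie-break for
    concurrent operations."""
    return sorted(local_log, key=lambda op: (op[0], op[1]))

# The same set of operations arrives in a different order at each site
site_a = [(1, 0, "sketch"), (2, 1, "extrude"), (2, 0, "fillet"), (3, 1, "mirror")]
site_b = [(2, 0, "fillet"), (1, 0, "sketch"), (3, 1, "mirror"), (2, 1, "extrude")]

queue_a = symmetric_queue(site_a)
queue_b = symmetric_queue(site_b)
```

Because every site sorts by the same total order, replaying the queue yields an identical modeling history, and hence an identical parametric result, at each site.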
Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications
Directory of Open Access Journals (Sweden)
Xin Zhang
2008-01-01
Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
International Nuclear Information System (INIS)
Procassini, R.J.; Birdsall, C.K.; Morse, E.C.
1990-01-01
A fully kinetic particle-in-cell (PIC) model is used to self-consistently determine the steady-state potential profile in a collisionless plasma that contacts a floating, absorbing boundary. To balance the flow of particles to the wall, a distributed source region is used to inject particles into the one-dimensional system. The effect of the particle source distribution function on the source region and collector sheath potential drops, and on the particle velocity distributions, is investigated. The ion source functions proposed by Emmert et al. [Phys. Fluids 23, 803 (1980)] and Bissell and Johnson [Phys. Fluids 30, 779 (1987)] (and various combinations of these) are used for the injection of both ions and electrons. The values of the potential drops obtained from the PIC simulations are compared to those from the theories of Emmert et al., Bissell and Johnson, and Scheuer and Emmert [Phys. Fluids 31, 3645 (1988)], all of which assume that the electron density is related to the plasma potential via the Boltzmann relation. The values of the source region and total potential drop are found to depend on the choice of the electron source function, as well as the ion source function. The question of an infinite electric field at the plasma-sheath interface, which arises in the analyses of Bissell and Johnson and Scheuer and Emmert, is also addressed.
A pedestal temperature model with self-consistent calculation of safety factor and magnetic shear
International Nuclear Information System (INIS)
Onjun, T; Siriburanon, T; Onjun, O
2008-01-01
A pedestal model based on theory-motivated models for the pedestal width and the pedestal pressure gradient is developed for the temperature at the top of the H-mode pedestal. The pedestal width model based on magnetic shear and flow shear stabilization is used in this study, where the pedestal pressure gradient is assumed to be limited by the first stability limit of the infinite-n ballooning mode instability. This pedestal model is implemented in the 1.5D BALDUR integrated predictive modeling code, where the safety factor and magnetic shear are solved self-consistently in both core and pedestal regions. With this self-consistent approach for calculating the safety factor and magnetic shear, the effect of the bootstrap current can be correctly included in the pedestal model. The pedestal model is used to provide the boundary conditions in the simulations and the Multi-mode core transport model is used to describe the core transport. This new integrated modeling procedure of the BALDUR code is used to predict the temperature and density profiles of 26 H-mode discharges. Simulations are carried out for 13 discharges in the Joint European Torus and 13 discharges in the DIII-D tokamak. The average root-mean-square deviation between experimental data and the predicted profiles of the temperature and the density, normalized by their central values, is found to be about 14%.
A self-consistent kinetic modeling of a 1-D, bounded, plasma in ...
Indian Academy of Sciences (India)
A self-consistent kinetic treatment is presented here, where the Boltzmann equation is solved for a particle ... This paper reports on the findings of a kinetic code that retains collisions and sources, models ... was used in the runs reported in this paper, the source of particles is modified from the explicit source ...
A new self-consistent model for thermodynamics of binary solutions
Czech Academy of Sciences Publication Activity Database
Svoboda, Jiří; Shan, Y. V.; Fischer, F. D.
2015-01-01
Vol. 108, NOV (2015), pp. 27-30. ISSN 1359-6462. R&D Projects: GA ČR(CZ) GA14-24252S. Institutional support: RVO:68081723. Keywords: Thermodynamics * Analytical methods * CALPHAD * Phase diagram * Self-consistent model. Subject RIV: BJ - Thermodynamics. Impact factor: 3.305, year: 2015
Toward self-consistent tectono-magmatic numerical model of rift-to-ridge transition
Gerya, Taras; Bercovici, David; Liao, Jie
2017-04-01
Natural data from modern and ancient lithospheric extension systems suggest a three-dimensional (3D) character of deformation and a complex relationship between magmatism and tectonics during the entire rift-to-ridge transition. Therefore, self-consistent high-resolution 3D magmatic-thermomechanical numerical approaches stand as a minimum complexity requirement for modeling and understanding of this transition. Here we present results from our new high-resolution 3D finite-difference marker-in-cell rift-to-ridge models, which account for magmatic accretion of the crust and use non-linear strain-weakened visco-plastic rheology of rocks that couples brittle/plastic failure and ductile damage caused by grain size reduction. Numerical experiments suggest that nucleation of rifting and ridge-transform patterns are decoupled in both space and time. At intermediate stages, the two patterns can coexist and interact, which triggers development of detachment faults, failed rift arms, hyper-extended margins and oblique proto-transforms. En echelon rift patterns typically develop in the brittle upper-middle crust whereas proto-ridge and proto-transform structures nucleate in the lithospheric mantle. These deep proto-structures propagate upward, inter-connect and rotate toward a mature orthogonal ridge-transform pattern on the timescale of millions of years during incipient thermal-magmatic accretion of the new oceanic-like lithosphere. Ductile damage of the extending lithospheric mantle caused by grain size reduction assisted by Zener pinning plays a critical role in the rift-to-ridge transition by stabilizing detachment faults and transform structures. Numerical results compare well with observations from incipient spreading regions and passive continental margins.
STUDY OF TRANSIENT AND STATIONARY OPERATION MODES OF SYNCHRONOUS SYSTEM CONSISTING IN TWO MACHINES
Directory of Open Access Journals (Sweden)
V. S. Safaryan
2017-01-01
Full Text Available The solution of the problem of reliable functioning of an electric power system (EPS) in steady-state and transient regimes, prevention of EPS transition into an asynchronous regime, and maintenance and restoration of stability of post-emergency processes is based on the formation and realization of mathematical models of EPS processes. During the functioning of an electric power system in an asynchronous regime, besides the main frequencies, the currents and voltages include harmonic components whose frequencies are multiples of the difference of the main frequencies. In the two-frequency asynchronous regime the electric power system is reduced to an equivalent two-machine system operating on a generalized load. The article presents mathematical models of the transient process of a two-machine system in natural form and in the d-q coordinate system. The mathematical model of the two-machine system is considered for the case of two excitation windings on the rotors. Varieties of mathematical models of EPS transient regimes (trivial, simple, complete) are also presented. The transient process of a synchronous two-machine system is described by the complete model. The quality of the transient processes of a synchronous machine depends on the number of rotor excitation windings. When there are two excitation windings on the rotor (a dual excitation system), the mathematical model of the electromagnetic transient processes of a synchronous machine is represented in complex form, i.e. in the d, q coordinate system, with the rotor current represented by a generalized vector. In asynchronous operation of a synchronous two-machine system with two excitation windings on the rotor, the current and voltage systems include only harmonics of two frequencies. The mathematical model of the synchronous steady-state process of a two-machine system is also provided, and steady-state regimes with different structures of initial information are considered.
Discretizing LTI Descriptor (Regular) Differential Input Systems with Consistent Initial Conditions
Directory of Open Access Journals (Sweden)
Athanasios D. Karageorgos
2010-01-01
Full Text Available A technique for efficiently discretizing the solution of a linear descriptor (regular) differential input system with consistent initial conditions and time-invariant coefficients (LTI) is introduced and fully discussed. Additionally, an upper bound for the error ‖x̄(kT)−x̄k‖ that derives from the procedure of discretization is also provided. Practically speaking, we are interested in such systems, since they are inherent in many physical, economical and engineering phenomena.
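The error bound above concerns the gap between the continuous state sampled at t = kT and its discrete approximation. A toy scalar system x' = a·x (not the paper's descriptor-system technique) illustrates how that gap depends on the discretization rule: an exponential update reproduces the sampled solution exactly, while forward Euler incurs an error that shrinks as the step T shrinks:

```python
import math

def propagate(a, x0, T, steps, method):
    """Advance x' = a*x from x0 over `steps` intervals of length T,
    either exactly (x_{k+1} = e^{aT} x_k, the sampled solution) or with
    forward Euler (x_{k+1} = (1 + aT) x_k). Illustrative only."""
    x = x0
    for _ in range(steps):
        x = x * (math.exp(a * T) if method == "exact" else 1 + a * T)
    return x
```

Halving T (while doubling the step count to cover the same horizon) roughly halves the Euler error, consistent with a first-order error bound in T.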
Consistency in Multi-Viewpoint Architectural Design of Enterprise Information Systems
Dijkman, R.M.; Quartel, Dick; van Sinderen, Marten J.
2006-01-01
Different stakeholders in the design of an enterprise information system have their own view on that design. To help produce a coherent design, this paper presents a framework that aids in specifying relations between such views. To help produce a consistent design, the framework also aids in
Hsu, David D.
Due to their high nanointerfacial area to volume ratio, the properties of "nanoconfined" polymer thin films, blends, and composites become highly altered compared to their bulk homopolymer analogues. Understanding the structure-property mechanisms underlying this effect is an active area of research. However, despite extensive work, a fundamental framework for predicting the local and system-averaged thermomechanical properties as a function of configuration and polymer species has yet to be established. Towards bridging this gap, here we present a novel, systematic coarse-graining (CG) method which is able to capture, quantitatively, the thermomechanical properties of real polymer systems in bulk and in nanoconfined geometries. This method, which we call thermomechanically consistent coarse-graining (TCCG), is a two-bead-per-monomer CG hybrid approach through which bonded interactions are optimized to match the atomistic structure via the Iterative Boltzmann Inversion method (IBI), and nonbonded interactions are tuned to macroscopic targets through parametric studies. We validate the TCCG method by systematically developing coarse-grain models for a group of five specialized methacrylate-based polymers including poly(methyl methacrylate) (PMMA). Good correlation with bulk all-atom (AA) simulations and experiments is found for the Flory-Fox scaling relationships of the glass transition temperature (Tg), self-diffusion coefficients of liquid monomers, and modulus of elasticity. We apply this TCCG method also to bulk polystyrene (PS) using a comparable CG bead-mapping strategy. The model demonstrates chain stiffness commensurate with experiments, and we utilize a density-correction term to improve the transferability of the elastic modulus over a 500 K range. Additionally, the PS and PMMA models capture the unexplained, characteristically dissimilar scaling of Tg with the thickness of free-standing films as seen in experiments. Using vibrational
Self-consistency in the phonon space of the particle-phonon coupling model
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2018-04-01
In the paper the nonlinear generalization of the time blocking approximation (TBA) is presented. The TBA is one of the versions of the extended random-phase approximation (RPA) developed within the Green-function method and the particle-phonon coupling model. In the generalized version of the TBA the self-consistency principle is extended onto the phonon space of the model. The numerical examples show that this nonlinear version of the TBA leads to the convergence of results with respect to enlarging the phonon space of the model.
ICFD modeling of final settlers - developing consistent and effective simulation model structures
DEFF Research Database (Denmark)
Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham
Summary of key findings: The concept of interpreted computational fluid dynamic (iCFD) modelling and the development methodology are presented (Fig. 1). The 1-D advection-dispersion model along with the statistically generated meta-model for pseudo-dispersion constitutes the newly developed iCFD model ... nine different model structures based on literature (1; 3; 2; 10; 9) and on more recent considerations (Fig. 2a). Validation tests were done using the CFD outputs from extreme scenarios. The most effective model structure (relatively low sum of squares of relative errors, SSRE, and computational time) obtained is that in which the XTC is set at the concentration of the layer just below the feed-layer. The feed-layer location is set to the highest location where X > Xin (solids concentration in SST influent). An effective discretization level (computational time/numerical error) is assessed ...
Self-consistent multidimensional electron kinetic model for inductively coupled plasma sources
Dai, Fa Foster
Inductively coupled plasma (ICP) sources have received increasing interest in microelectronics fabrication and the lighting industry. In 2-D configuration space (r, z) and 2-D velocity domain (vθ, vz), a self-consistent electron kinetic analytic model is developed for various ICP sources. The electromagnetic (EM) model is established based on modal analysis, while the kinetic analysis gives the perturbed Maxwellian distribution of electrons by solving the Boltzmann-Vlasov equation. The self-consistent algorithm combines the EM model and the kinetic analysis by updating their results consistently until the solution converges. The closed-form solutions in the analytical model provide rigorous and fast computation of the EM fields and the electron kinetic behavior. The kinetic analysis shows that the RF energy in an ICP source is extracted by a collisionless dissipation mechanism if the electron thermal velocity is close to the RF phase velocities. A criterion for collisionless damping is thus given based on the analytic solutions. To achieve uniformly distributed plasma for plasma processing, we propose a novel discharge structure with both planar and vertical coil excitations. The theoretical results demonstrate improved uniformity for the excited azimuthal E-field in the chamber. Non-monotonic spatial decay in electric field and space current distributions was recently observed in weakly collisional plasmas. The anomalous skin effect is found to be responsible for this phenomenon. The proposed model successfully models the non-monotonic spatial decay effect and achieves good agreement with the measurements for different applied RF powers. The proposed analytical model is compared with other theoretical models and different experimental measurements. The developed model is also applied to two kinds of ICP discharges used for electrodeless light sources. One structure uses a vertical internal coil antenna to excite plasmas and another has a metal shield to prevent the
Consistent biases in Antarctic sea ice concentration simulated by climate models
Roach, Lettie A.; Dean, Samuel M.; Renwick, James A.
2018-01-01
The simulation of Antarctic sea ice in global climate models often does not agree with observations. In this study, we examine the compactness of sea ice, as well as the regional distribution of sea ice concentration, in climate models from the latest Coupled Model Intercomparison Project (CMIP5) and in satellite observations. We find substantial differences in concentration values between different sets of satellite observations, particularly at high concentrations, requiring careful treatment when comparing to models. As a fraction of total sea ice extent, models simulate too much loose, low-concentration sea ice cover throughout the year, and too little compact, high-concentration cover in the summer. In spite of the differences in physics between models, these tendencies are broadly consistent across the population of 40 CMIP5 simulations, a result not previously highlighted. Separating models with and without an explicit lateral melt term, we find that inclusion of lateral melt may account for overestimation of low-concentration cover. Targeted model experiments with a coupled ocean-sea ice model show that choice of constant floe diameter in the lateral melt scheme can also impact representation of loose ice. This suggests that current sea ice thermodynamics contribute to the inadequate simulation of the low-concentration regime in many models.
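The compactness diagnostic discussed above can be sketched simply: sea ice extent sums the area of grid cells above a concentration threshold (15% is a common convention), ice area weights each such cell by its concentration, and their ratio measures how compact the pack is. The function and threshold convention below are illustrative simplifications, not the study's code:

```python
def ice_diagnostics(conc, cell_area, threshold=0.15):
    """Sea ice extent, concentration-weighted ice area, and compactness
    (area/extent) from per-cell concentrations (0..1) and cell areas."""
    cells = [(c, a) for c, a in zip(conc, cell_area) if c > threshold]
    extent = sum(a for _, a in cells)    # total area of ice-covered cells
    area = sum(c * a for c, a in cells)  # concentration-weighted ice area
    compactness = area / extent if extent else 0.0
    return extent, area, compactness
```

A pack dominated by loose, low-concentration ice yields compactness well below 1, the regime the CMIP5 models are reported to overpopulate.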
Bindoff, I; Stafford, A; Peterson, G; Kang, B H; Tenni, P
2012-08-01
Drug-related problems (DRPs) are of serious concern worldwide, particularly for the elderly who often take many medications simultaneously. Medication reviews have been demonstrated to improve medication usage, leading to reductions in DRPs and potential savings in healthcare costs. However, medication reviews are not always of a consistently high standard, and there is often room for improvement in the quality of their findings. Our aim was to produce computerized intelligent decision support software that can improve the consistency and quality of medication review reports, by helping to ensure that DRPs relevant to a patient are overlooked less frequently. A system that largely achieved this goal was previously published, but refinements have been made. This paper examines the results of both the earlier and newer systems. Two prototype multiple-classification ripple-down rules medication review systems were built, the second being a refinement of the first. Each of the systems was trained incrementally using a human medication review expert. The resultant knowledge bases were analysed and compared, showing factors such as accuracy, time taken to train, and potential errors avoided. The two systems performed well, achieving accuracies of approximately 80% and 90%, after being trained on only a small number of cases (126 and 244 cases, respectively). Through analysis of the available data, it was estimated that without the system intervening, the expert training the first prototype would have missed approximately 36% of potentially relevant DRPs, and the second 43%. However, the system appeared to prevent the majority of these potential expert errors by correctly identifying the DRPs for them, leaving only an estimated 8% error rate for the first expert and 4% for the second. These intelligent decision support systems have shown a clear potential to substantially improve the quality and consistency of medication reviews, which should in turn translate into
Directory of Open Access Journals (Sweden)
John (Jack) P. Riegel III
2016-04-01
Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker-Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a
A semi-nonparametric mixture model for selecting functionally consistent proteins.
Yu, Lianbo; Doerge, R.W.
2010-09-28
High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
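One common way to turn per-protein posterior probabilities of consistency into a selection while controlling the false discovery rate is to grow the reported set in order of decreasing posterior and stop when the estimated FDR (the mean posterior probability of inconsistency among selected proteins) would exceed the target. This is a generic sketch of that idea, not the paper's exact estimator:

```python
def select_by_fdr(posterior, alpha=0.05):
    """Select items by decreasing posterior probability of consistency,
    keeping the estimated FDR (mean of 1 - posterior over the selected
    set) at or below alpha. Returns selected indices."""
    order = sorted(range(len(posterior)), key=lambda i: -posterior[i])
    chosen, inconsistency_sum = [], 0.0
    for i in order:
        inconsistency_sum += 1.0 - posterior[i]
        if inconsistency_sum / (len(chosen) + 1) > alpha:
            break  # adding this item would push estimated FDR past alpha
        chosen.append(i)
    return chosen
```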
Consistent and Clear Reporting of Results from Diverse Modeling Techniques: The A3 Method
Directory of Open Access Journals (Sweden)
Scott Fortmann-Roe
2015-08-01
Full Text Available The measurement and reporting of model error is of basic importance when constructing models. Here, a general method and an R package, A3, are presented to support the assessment and communication of the quality of a model fit along with metrics of variable importance. The presented method is accurate, robust, and adaptable to a wide range of predictive modeling algorithms. The method is described along with case studies and a usage guide. It is shown how the method can be used to obtain more accurate models for prediction and how this may simultaneously lead to altered inferences and conclusions about the impact of potential drivers within a system.
Microencapsulation of model oil in wall matrices consisting of SPI and maltodextrins
Directory of Open Access Journals (Sweden)
Moshe Rosenberg
2016-01-01
Full Text Available Microencapsulation can provide a means to entrap, protect and deliver nutritional lipids and related compounds that are susceptible to deterioration. The encapsulation of high lipid loads represents a challenge. The research investigated the encapsulation by spray drying of a model oil, at a core load of 25-60%, in wall systems consisting of 2.5-10% SPI and 17.5-10% maltodextrin. In general, core-in-wall emulsions exhibited unimodal PSD and a mean particle diameter < 0.5 µm. Dry microcapsules ranged in diameter from about 5 to less than 50 µm and exhibited only a limited extent of surface indentation. Core domains, in the form of protein-coated droplets, were embedded throughout the wall matrices and no visible cracks connecting these domains with the environment could be detected. Core retention ranged from 72.2 to 95.9% and was significantly affected (p < 0.05) by a combined influence of wall composition and initial core load. Microencapsulation efficiency, MEE, ranged from 25.4 to 91.6% and from 12.4 to 91.4% after 5 and 30 min of extraction, respectively (p < 0.05). MEE was significantly influenced by wall composition, extraction time, initial core load and DE value of the maltodextrins. Results indicated that wall solutions containing as low as 2.5% SPI and 17.5% maltodextrin were very effective as microencapsulating agents for high oil loads. Results highlighted the functionality of SPI as a microencapsulating agent in food applications and indicated the importance of carefully designing the composition of core-in-wall emulsions.
International Nuclear Information System (INIS)
Lundberg, Jonas; Johansson, Björn JE
2015-01-01
It has been realized that resilience as a concept involves several contradictory definitions, both for instance resilience as agile adjustment and as robust resistance to situations. Our analysis of resilience concepts and models suggest that beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering, and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems. - Highlights: • The SyRes model resolves contradictions between previous resilience definitions. • SyRes is a core model for envisioning and evaluating resilience metrics and models. • SyRes describes six functions in a systemic model. • They are anticipation, monitoring, response, recovery, learning, self-monitoring. • The model describes dependencies between constraints, functions and strategies
Directory of Open Access Journals (Sweden)
Roy E Barnewall
2012-06-01
Full Text Available Repeated low-level exposures to Bacillus anthracis could occur before or after the remediation of an environmental release. This is especially true for persistent agents such as Bacillus anthracis spores, the causative agent of anthrax. Studies were conducted to examine the aerosol methods needed for consistent daily low aerosol concentrations to deliver a low dose (less than 10^6 colony forming units (CFU)) of B. anthracis spores, and included a pilot feasibility characterization study, an acute exposure study, and a multiple fifteen-day exposure study. This manuscript focuses on the state-of-the-science aerosol methodologies used to generate and aerosolize consistent daily low aerosol concentrations and resultant low inhalation doses. The pilot feasibility characterization study determined that the aerosol system was consistent and capable of producing very low aerosol concentrations. In the acute, single-day exposure experiment, targeted inhaled doses of 1 x 10^2, 1 x 10^3, 1 x 10^4, and 1 x 10^5 CFU were used. In the multiple daily exposure experiment, rabbits were exposed over multiple days to targeted inhaled doses of 1 x 10^2, 1 x 10^3, and 1 x 10^4 CFU. In all studies, targeted inhaled doses remained fairly consistent from rabbit to rabbit and day to day. The aerosol system produced aerosolized spores within the optimal mass median aerodynamic diameter particle size range to reach deep lung alveoli. Consistency of the inhaled dose was aided by monitoring and recording respiratory parameters during the exposure with real-time plethysmography. Overall, the presented results show that the animal aerosol system was stable and highly reproducible between different studies and multiple exposure days.
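Inhaled dose in such studies is typically estimated as aerosol concentration times respiratory minute volume (the quantity measured by plethysmography) times exposure duration, and the same relation can be inverted to choose an exposure time for a target dose. A sketch with hypothetical numbers, not the study's data:

```python
def inhaled_dose(conc_cfu_per_l, minute_volume_l, duration_min):
    """Inhaled dose (CFU) = aerosol concentration (CFU/L)
    x respiratory minute volume (L/min) x exposure time (min)."""
    return conc_cfu_per_l * minute_volume_l * duration_min

def exposure_minutes(target_dose_cfu, conc_cfu_per_l, minute_volume_l):
    """Invert the dose relation to pick an exposure duration."""
    return target_dose_cfu / (conc_cfu_per_l * minute_volume_l)
```

Real-time plethysmography lets the duration be adjusted per animal as the measured minute volume drifts, which is one way day-to-day dose consistency can be maintained.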
Badenes, C.; Hughes, J.P.; Bravo, E.; Langer, N.
2007-01-01
We explore the relationship between the models for progenitor systems of Type Ia supernovae and the properties of the supernova remnants that evolve after the explosion. Most models for Type Ia progenitors in the single-degenerate scenario predict substantial outflows during the presupernova
O. Fovet; L. Ruiz; M. Hrachowitz; M. Faucheux; C. Gascuel-Odoux
2015-01-01
While most hydrological models reproduce the general flow dynamics, they frequently fail to adequately mimic system-internal processes. In particular, the relationship between storage and discharge, which often follows annual hysteretic patterns in shallow hard-rock aquifers, is rarely considered in modelling studies. One main reason is that catchment storage is...
Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach.
Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco
2015-01-01
Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. With the purpose to detect recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected modules approach, and then it was used as a reference network to perform a gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information, and to assess its impact on gap-filling results. The analysis performed on the metamodel allowed us to conclude that: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) the presence of inconsistencies in a metabolic database can be propagated to the reconstructed models; 3) there are reactions not manifested as blocked which are active as a consequence of some classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, the consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information.
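A qualitative sketch of blocked-reaction detection: production-grade consistency checks (e.g. flux variability analysis) work on the full stoichiometric matrix, but the intuition can be shown by propagating metabolite producibility from seed metabolites until a fixed point; reactions that can never fire are reported as blocked. Reaction and metabolite names here are hypothetical:

```python
def blocked_reactions(reactions, seeds):
    """reactions: name -> (substrates, products); a reaction can fire
    once all of its substrates are producible. Returns the sorted names
    of reactions that can never fire (a simple qualitative criterion)."""
    producible = set(seeds)
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, (subs, prods) in reactions.items():
            if name not in fired and set(subs) <= producible:
                fired.add(name)
                producible |= set(prods)
                changed = True
    return sorted(set(reactions) - fired)
```

Joining models into a metamodel enlarges the producible set, which is why reactions blocked in an individual model can become unblocked against the reference network during gap-filling.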
Self-consistent Dark Matter simplified models with an s-channel scalar mediator
International Nuclear Information System (INIS)
Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.
2017-01-01
We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the second Higgs doublet. Compared with the one-doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.
Interstellar turbulence model : A self-consistent coupling of plasma and neutral fluids
International Nuclear Information System (INIS)
Shaikh, Dastgeer; Zank, Gary P.; Pogorelov, Nikolai
2006-01-01
We present results of a preliminary investigation of interstellar turbulence based on a self-consistent two-dimensional fluid simulation model. Our model describes a partially ionized magnetofluid interstellar medium (ISM) that couples a neutral hydrogen fluid to a plasma through charge exchange interactions and assumes that the ISM turbulent correlation scales are much bigger than the shock characteristic length-scales, but smaller than the charge exchange mean free path length-scales. The shocks have no influence on the ISM turbulent fluctuations. We find that nonlinear interactions in coupled plasma-neutral ISM turbulence are influenced substantially by charge exchange processes
Alfven-wave particle interaction in finite-dimensional self-consistent field model
International Nuclear Information System (INIS)
Padhye, N.; Horton, W.
1998-01-01
A low-dimensional Hamiltonian model is derived for the acceleration of ions in finite amplitude Alfven waves in a finite pressure plasma sheet. The reduced low-dimensional wave-particle Hamiltonian is useful for describing the reaction of the accelerated ions on the wave amplitudes and phases through the self-consistent fields within the envelope approximation. As an example, the authors show for a single Alfven wave in the central plasma sheet of the Earth's geotail, modeled by the linear pinch geometry called the Harris sheet, the time variation of the wave amplitude during the acceleration of fast protons
Self-consistent nonlinearly polarizable shell-model dynamics for ferroelectric materials
International Nuclear Information System (INIS)
Mkam Tchouobiap, S.E.; Kofane, T.C.; Ngabireng, C.M.
2002-11-01
We investigate the dynamical properties of the polarizable shell model with a symmetric double Morse-type electron-ion interaction in one ionic species. A variational calculation based on the Self-Consistent Einstein Model (SCEM) shows that a theoretical ferroelectric (FE) transition temperature can be derived which demonstrates the presence of a first-order phase transition for the potassium selenate (K2SeO4) crystal around Tc ≈ 91.5 K. Comparison of the model calculation with the experimental critical temperature yields satisfactory agreement. (author)
Self-consistent quasi-particle RPA for the description of superfluid Fermi systems
Rahbi, A; Chanfray, G; Schuck, P
2002-01-01
Self-Consistent Quasi-Particle RPA (SCQRPA) is applied for the first time to a multi-level pairing case. Various filling situations and values of the coupling constant are considered. Very encouraging results in comparison with the exact solution of the model are obtained. The nature of the low-lying mode in SCQRPA is identified. The strong reduction of the number fluctuation in SCQRPA with respect to BCS is pointed out. The transition from superfluidity to the normal fluid case is carefully investigated.
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
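The hindered-settling part of such a PDE can be discretized with a Godunov-type finite-volume scheme, one reliable way to handle a nonlinear, non-convex batch flux. The sketch below is a minimal illustration with an assumed Vesilind/Takács-style flux and made-up parameters, not the paper's method (which additionally treats compression, dispersion, the singular feed term, and discontinuous bulk flows).

```python
import numpy as np

# Assumed batch-settling flux f(C) = v0*C*(1 - C/Cmax)^2 (illustrative).
v0, Cmax = 2.0, 10.0

def f(C):
    return v0 * C * max(0.0, 1.0 - C / Cmax) ** 2

def godunov_flux(cl, cr, samples=32):
    # Godunov flux for a possibly non-convex flux: minimum over [cl, cr]
    # if cl <= cr, maximum over [cr, cl] otherwise (found by sampling).
    lo, hi = min(cl, cr), max(cl, cr)
    vals = [f(c) for c in np.linspace(lo, hi, samples)]
    return min(vals) if cl <= cr else max(vals)

def step(C, dz, dt):
    # zero flux through the top and bottom of a closed settling column
    F = [0.0] + [godunov_flux(C[i], C[i + 1]) for i in range(len(C) - 1)] + [0.0]
    return np.array([C[i] - dt / dz * (F[i + 1] - F[i]) for i in range(len(C))])

dz, dt = 0.05, 0.005           # CFL number dt*max|f'|/dz ~ 0.2 < 1
C = np.full(20, 3.0)           # uniform initial concentration, z increasing downward
mass0 = C.sum() * dz
for _ in range(200):
    C = step(C, dz, dt)
```

Because the numerical fluxes telescope and vanish at the boundaries, the scheme conserves mass exactly while sludge accumulates at the bottom, the qualitative behavior a consistent SST discretization must reproduce.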
Development of a self-consistent lightning NOx simulation in large-scale 3-D models
Luo, Chao; Wang, Yuhang; Koshak, William J.
2017-03-01
We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.
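As a concrete example of the single-variable flash-rate parameterizations being compared, the widely used Price and Rind (1992) scheme estimates flash frequency from cloud-top height alone; the CAPE-based predictor favored here is an analogous fit. The coefficients below are the published Price-Rind values, shown for illustration rather than as this study's tuned scheme.

```python
# Price & Rind (1992) flash-rate parameterization (flashes per minute
# per convective cell) as a function of cloud-top height H in km.
def flash_rate(cloud_top_km, continental=True):
    if continental:
        return 3.44e-5 * cloud_top_km ** 4.9
    return 6.2e-4 * cloud_top_km ** 1.73   # marine clouds

# taller convection -> disproportionately more lightning
rates = [flash_rate(h) for h in (8, 10, 12, 14)]
```

The steep height dependence (exponent 4.9 over land) is why small errors in simulated convection translate into large LNOx errors, motivating the evaluation against ground-based lightning networks described above.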
Feng, Lian-Li; Tian, Shou-Fu; Zhang, Tian-Tian; Zhou, Jun
2017-07-01
Under investigation in this paper is the variant Boussinesq system, which describes the propagation of long surface waves in two directions in a certain deep trough. With the help of the truncated Painlevé expansion, we construct its nonlocal symmetry, Bäcklund transformation, and Schwarzian form, respectively. The nonlocal symmetries can be localised to provide the corresponding nonlocal group, and finite symmetry transformations and similarity reductions are computed. Furthermore, we verify that the variant Boussinesq system is solvable via the consistent Riccati expansion (CRE). By considering the consistent tan-function expansion (CTE), which is a special form of the CRE, the interaction solutions between solitons and cnoidal periodic waves are explicitly studied.
An Ice Model That is Consistent with Composite Rheology in GIA Modelling
Huang, P.; Patrick, W.
2017-12-01
There are several popular approaches to constructing ice history models. One of them is mainly based on thermo-mechanical ice models with forcing or boundary conditions inferred from paleoclimate data. The second is mainly based on the observed response of the Earth to glacial loading and unloading, a process called Glacial Isostatic Adjustment or GIA. The third approach is a hybrid of the first two. In this presentation, we follow the second approach, which also uses geological data such as ice flow and terminal moraine data and simple ice dynamics for the ice sheet reconstruction (Peltier & Andrews 1976). The global ice model ICE-6G (Peltier et al. 2015) and all its predecessors (Tushingham & Peltier 1991, Peltier 1994, 1996, 2004, Lambeck et al. 2014) are constructed this way, with the assumption that mantle rheology is linear. However, high-temperature creep experiments on mantle rocks show that non-linear creep laws can also operate in the mantle. Since both linear (e.g. diffusion creep) and non-linear (e.g. dislocation creep) laws can operate simultaneously in the mantle, mantle rheology is likely composite, where the total creep is the sum of both linear and non-linear creep. Preliminary GIA studies found that composite rheology can fit regional RSL observations better than linear rheology (e.g. van der Wal et al. 2010). The aim of this paper is to construct ice models in Laurentia and Fennoscandia using this second approach, but with composite rheology, so that its predictions can fit GIA observations such as global RSL data, land uplift rate and g-dot simultaneously, in addition to geological data and simple ice dynamics. The g-dot or gravity-rate-of-change data is from the GRACE gravity mission but with the effects of hydrology removed. Our GIA model is based on the Coupled Laplace-Finite Element method as described in Wu (2004) and van der Wal et al. (2010). It is found that composite rheology generally supports a thicker
Ring retroreflector system consisting of cube-corner reflectors with special coating
International Nuclear Information System (INIS)
Burmistrov, V B; Sadovnikov, M A; Sokolov, A L; Shargorodskiy, V D
2013-01-01
The ring retroreflector system (RS) consisting of cube-corner reflectors (CCRs) with a special coating of reflecting surfaces, intended for uniaxially Earth-oriented navigation satellites, is considered. The error of distance measurement caused by both the laser pulse delay in the CCR and its spatial position (CCR configuration) is studied. It is shown that the ring RS, formed by CCRs with a double-spot radiation pattern, allows the distance measurement error to be essentially reduced.
International Nuclear Information System (INIS)
Schrader, Heinrich
2000-01-01
Calibration in terms of activity of the ionization-chamber secondary standard measuring systems at the PTB is described. The measurement results of a Centronic IG12/A20, a Vinten ISOCAL IV and a radionuclide calibrator chamber for nuclear medicine applications are discussed, their energy-dependent efficiency curves established and the consistency checked using recently evaluated radionuclide decay data. Criteria for evaluating and transferring calibration factors (or efficiencies) are given
Ring retroreflector system consisting of cube-corner reflectors with special coating
Burmistrov, V. B.; Sadovnikov, M. A.; Sokolov, A. L.; Shargorodskiy, V. D.
2013-09-01
The ring retroreflector system (RS) consisting of cube-corner reflectors (CCRs) with a special coating of reflecting surfaces, intended for uniaxially Earth-oriented navigation satellites, is considered. The error of distance measurement caused by both the laser pulse delay in the CCR and its spatial position (CCR configuration) is studied. It is shown that the ring RS, formed by CCRs with a double-spot radiation pattern, allows the distance measurement error to be essentially reduced.
Hossain, Mokarram; Steinmann, Paul
2013-06-01
Rubber-like materials can deform largely and nonlinearly upon loading, and they return to the initial configuration when the load is removed. Such rubber elasticity is achieved due to very flexible long-chain molecules and a three-dimensional network structure that is formed via cross-linking or entanglements between molecules. Over the years, to model the mechanical behavior of such randomly oriented microstructures, several phenomenological and micromechanically motivated network models for nearly incompressible hyperelastic polymeric materials have been proposed in the literature. To implement these models for polymeric materials (undoubtedly with widespread engineering applications) in the finite element framework for solving a boundary value problem, one requires two important ingredients, i.e., the stress tensor and the consistent fourth-order tangent operator, where the latter is the result of linearization of the former. In our previous work, 14 such material models were reviewed by deriving the accurate stress tensors and tangent operators from a group of phenomenological and micromechanical models at large deformations. The current contribution supplements some further important models that were not included in the previous work. For comparison of all selected models in reproducing the well-known Treloar data, the analytical expressions for the three homogeneous deformation modes, i.e., uniaxial tension, equibiaxial tension, and pure shear, have been derived and the performances of the models are analyzed.
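For the simplest model in such a comparison, the incompressible neo-Hookean solid, the nominal-stress expressions for the three homogeneous modes follow in closed form from the strain-energy function and the incompressibility constraint. The sketch below encodes these standard textbook formulas; the shear modulus value is arbitrary and is not a Treloar fit.

```python
# Incompressible neo-Hookean nominal (first Piola-Kirchhoff) stress for
# the three homogeneous deformation modes used to fit Treloar's data.
mu = 0.4   # illustrative shear modulus, MPa (not a fitted value)

def uniaxial(lam):
    return mu * (lam - lam ** -2)

def equibiaxial(lam):
    return mu * (lam - lam ** -5)

def pure_shear(lam):
    return mu * (lam - lam ** -3)
```

All three stresses vanish in the undeformed state (λ = 1), and for a given stretch the equibiaxial response is the stiffest, which is the qualitative ordering any candidate model must reproduce before its tangent operator is even needed.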
Possible world based consistency learning model for clustering and classifying uncertain data.
Liu, Han; Zhang, Xianchao; Zhang, Xiaotong
2018-06-01
Possible world has shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, which means that the effectiveness relies heavily on the post-processing method and the efficiency is also not very good. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and thereby improve the clustering and classification performance. Meanwhile, this model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be directly obtained without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real world uncertain datasets show that the proposed model outperforms the state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency.
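The rank constraint rests on a standard spectral-graph fact: the multiplicity of the zero eigenvalue of the graph Laplacian of an affinity matrix equals its number of connected components, so constraining that rank makes the cluster count readable without post-processing. The toy check below illustrates the fact itself, not the paper's optimization.

```python
import numpy as np

# Count connected components of an affinity matrix W via the
# multiplicity of the zero eigenvalue of its graph Laplacian.
def n_components(W, tol=1e-8):
    d = W.sum(axis=1)
    L = np.diag(d) - W                 # unnormalized graph Laplacian
    eig = np.linalg.eigvalsh(L)        # L is symmetric PSD
    return int(np.sum(eig < tol))

# two disconnected pairs -> two components (i.e., two clusters)
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = 1.0
W[2, 3] = W[3, 2] = 1.0

# adding one bridging edge merges them into a single component
W_bridged = W.copy()
W_bridged[1, 2] = W_bridged[2, 1] = 1.0
```

In the consistency learning model, the consensus affinity matrix is optimized subject to this spectral condition, which is why its connected components directly give the final clustering or class assignment.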
Directory of Open Access Journals (Sweden)
Jan Zavadsky
2014-07-01
Purpose: The performance management system (PMS) is a metasystem over all business processes at the strategic and operational levels. The effectiveness of the various management systems depends on many factors, one of which is the consistent definition of each system's elements. The main purpose of this study is to explore whether the performance management systems of the sample companies are consistent and how companies can create such a system. Consistency in this case is based on the homogeneous definition of attributes relating to the performance indicator as a basic element of the PMS. Methodology: At the beginning, we used an affinity diagram that helped us to clarify and group the various attributes of performance indicators. The main research results were achieved through an empirical study carried out on a sample of Slovak companies. The criterion for selection was the existence of management systems certified according to ISO 9001. Representativeness of the sample companies was confirmed by application of Pearson's chi-squared test (χ²-test) with regard to the above standards. Findings: Drawing on a review of the literature, we defined four groups of attributes relating to the performance indicator: formal attributes, attributes of target value, informational attributes, and attributes of evaluation. The whole set contains 21 attributes. The consistency of a PMS is based not on a maximum or minimum number of attributes, but on the same type of attributes for each performance indicator used in the PMS at both the operational and strategic levels. The main findings are: companies use various financial and non-financial indicators at the strategic or operational level; companies determine various attributes of performance indicators, but most of the performance indicators are determined differently; and we identified the attributes common to the whole sample of companies. Practical implications: The research results have an implication for
Directory of Open Access Journals (Sweden)
Elif Uğur
2017-01-01
Diet has a vital role in the prevention and treatment of cardiovascular diseases, the leading cause of death worldwide. While nutrition programs for cardiovascular health generally focus on lipids and carbohydrates, the effects of proteins are less well considered. This review therefore examines the effects of proteins, amino acids, and other amine-containing compounds on the cardiovascular system. Because animal- and plant-derived proteins differ in composition across foods such as dairy products, eggs, meat, chicken, fish, pulses, and grains, their effects on blood pressure and on the regulation of the lipid profile differ as well. Likewise, the amino acids that make up proteins affect the cardiovascular system differently. In particular, sulfur-containing amino acids, branched-chain amino acids, aromatic amino acids, arginine, ornithine, citrulline, glycine, and glutamine may affect the cardiovascular system through different metabolic pathways. In this context, one-carbon metabolism, hormone synthesis, stimulation of signaling pathways, and the effects of intermediate and final products of amino acid metabolism are considered. Beyond proteins and amino acids, other amine-containing compounds in the diet include trimethylamine N-oxide, heterocyclic aromatic amines, polycyclic aromatic hydrocarbons, and products of the Maillard reaction. These compounds generally increase the risk of cardiovascular disease by stimulating oxidative stress, inflammation, and the formation of atherosclerotic plaque.
Energy Technology Data Exchange (ETDEWEB)
Hamm, L.L.; Van Brunt, V.
1982-08-01
A comparison of implicit Runge-Kutta and orthogonal collocation methods is made for the numerical solution of the ordinary differential equation which describes the high-pressure vapor-liquid equilibria of a binary system. The systems of interest are limited to binary solubility systems where one of the components is supercritical and exists as a noncondensable gas in the pure state. Of the two methods - implicit Runge-Kutta and orthogonal collocation - this paper attempts to present some preliminary but not necessarily conclusive results that the implicit Runge-Kutta method is superior for the solution of the ordinary differential equation utilized in the thermodynamic consistency testing of binary solubility systems. Due to the extreme nonlinearity of thermodynamic properties in the region near the critical locus, an extended cubic spline fitting technique is devised for correlating the P-x data. The least-squares criterion is employed in smoothing the experimental data. Even though the derivation is presented specifically for the correlation of P-x data, the technique could easily be applied to any thermodynamic data by changing the endpoint requirements. The volumetric behavior of the systems must be given or predicted in order to perform thermodynamic consistency tests. A general procedure is developed for predicting the volumetric behavior required and some indication as to the expected limit of accuracy is given.
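The flavor of an implicit Runge-Kutta step can be shown with the simplest member of the family, the implicit midpoint rule (the one-stage Gauss method, order 2). The sketch below applies it to the linear test problem y' = -λy, where the stage equation can be solved in closed form; for the nonlinear coexistence ODE discussed above, the stage equations would instead require Newton or fixed-point iteration.

```python
import math

# Implicit midpoint rule: k = f(t + h/2, y + (h/2)*k), y_{n+1} = y_n + h*k.
# For y' = -lam*y the implicit stage equation is linear and solves exactly.
def implicit_midpoint_linear(lam, y0, h, n):
    y = y0
    for _ in range(n):
        k = -lam * y / (1.0 + lam * h / 2.0)   # stage solved in closed form
        y = y + h * k
    return y

lam = 1.0
exact = math.exp(-1.0)                         # y(1) for y0 = 1
err_h = abs(implicit_midpoint_linear(lam, 1.0, 0.10, 10) - exact)
err_h2 = abs(implicit_midpoint_linear(lam, 1.0, 0.05, 20) - exact)
```

Halving the step size reduces the error by roughly a factor of four, confirming second-order accuracy; higher-stage Gauss methods push this further, which is part of their appeal near the strongly nonlinear critical locus.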
Becerra, Marley; Frid, Henrik; Vázquez, Pedro A.
2017-12-01
This paper presents a self-consistent model of electrohydrodynamic (EHD) laminar plumes produced by electron injection from ultra-sharp needle tips in cyclohexane. Since the density of electrons injected into the liquid is well described by the Fowler-Nordheim field emission theory, the injection law is not assumed. Furthermore, the generation of electrons in cyclohexane and their conversion into negative ions is included in the analysis. Detailed steady-state characteristics of EHD plumes under weak injection and space-charge limited injection are studied. It is found that the plume characteristics far from both electrodes and under weak injection can be accurately described with an asymptotic simplified solution proposed by Vazquez et al. ["Dynamics of electrohydrodynamic laminar plumes: Scaling analysis and integral model," Phys. Fluids 12, 2809 (2000)] when the correct longitudinal electric field distribution and liquid velocity radial profile are used as input. However, this asymptotic solution deviates from the self-consistently calculated plume parameters under space-charge limited injection since it neglects the radial variations of the electric field produced by a high-density charged core. In addition, no significant differences in the model estimates of the plume are found when the simulations are obtained either with the finite element method or with a diffusion-free particle method. It is shown that the model also enables the calculation of the current-voltage characteristic of EHD laminar plumes produced by electron field emission, with good agreement with measured values reported in the literature.
Directory of Open Access Journals (Sweden)
Liyan Zhang
2017-01-01
The paper studies a multiresolution traffic flow simulation model of an urban expressway. Firstly, after comparison with a two-level hybrid model, a three-level multiresolution hybrid model was chosen. Then, the multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on set theory, which includes three submodels: a macromodel, a mesomodel, and a micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a “simple simulation scenario” of the North-South Elevated Expressway in Shanghai was established. The simulation results showed the following. (1) The volume-density relationships of the three submodels agree with detector data. (2) When traffic density is high, the macromodel has high precision, smaller error, and less dispersion of results; compared with the macromodel, the micromodel and mesomodel are less accurate, with larger errors. (3) The multiresolution model can simulate characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transitions. Finally, the results showed that the novel multiresolution model achieves higher simulation accuracy and is feasible and effective in a real traffic simulation scenario.
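The volume-density relationship checked in result (1) is, in the simplest macroscopic submodels, a concave fundamental diagram. The classic Greenshields form below illustrates the property being validated against detector data; the parameters are hypothetical, not the North-South Elevated Expressway calibration.

```python
# Greenshields fundamental diagram: q(k) = vf * k * (1 - k/kj),
# with flow q (veh/h), density k (veh/km), free-flow speed vf (km/h),
# and jam density kj (veh/km). Parameter values are illustrative.
vf, kj = 60.0, 120.0

def flow(k):
    return vf * k * (1.0 - k / kj)

# capacity occurs at half the jam density
q_max = flow(kj / 2.0)
```

A macro submodel reproduces only this aggregate curve, while meso and micro submodels resolve platoons and individual vehicles; keeping all three consistent with the same fundamental diagram is exactly what the multiresolution integration strategy has to guarantee at submodel boundaries.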
Consistent phase-change modeling for CO2-based heat mining operation
DEFF Research Database (Denmark)
Singh, Ashok Kumar; Veje, Christian
2017-01-01
The accuracy of mathematical modeling of phase-change phenomena is limited if a simple, less accurate equation of state completes the governing partial differential equation. However, fluid properties (such as density, dynamic viscosity and compressibility) and saturation state are calculated using a highly accurate, complex equation of state. This leads to unstable and inaccurate simulation as the equation of state and governing partial differential equations are mutually inconsistent. In this study, the volume-translated Peng–Robinson equation of state was used with emphasis to model the liquid–gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state were based on the volume-translated Peng–Robinson equation of state and the results verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process.
International Nuclear Information System (INIS)
Baczmanski, A.; Braham, C.
2004-01-01
A new method for determining the parameters characterising elastoplastic deformation of two-phase material is proposed. The method is based on the results of neutron diffraction and mechanical experiments, which are analysed using the self-consistent rate-independent model of elastoplastic deformation. The neutron diffraction method has been applied to determine the lattice strains and diffraction peak broadening in two-phase austeno-ferritic steel during uniaxial tensile test. The elastoplastic model was used to predict evolution of internal stresses and critical resolved shear stresses. Calculations based on this model were successfully compared with experimental results and the parameters characterising elastoplastic deformation were determined for both phases of duplex steel
Directory of Open Access Journals (Sweden)
Ivana Dragović
2015-01-01
Fuzzy inference systems (FIS) enable automated assessment and reasoning in a logically consistent manner akin to the way in which humans reason. However, since no conventional fuzzy set theory is in the Boolean frame, it is proposed that Boolean consistent fuzzy logic should be used in the evaluation of rules. The main distinction of this approach is that it requires the execution of a set of structural transformations before the actual values can be introduced, which can, in certain cases, lead to different results. While a Boolean consistent FIS could be used for establishing the diagnostic criteria for any given disease, in this paper it is applied to determining the likelihood of peritonitis, the leading complication of peritoneal dialysis (PD). Given that patients can be located far away from healthcare institutions (as peritoneal dialysis is a form of home dialysis), the proposed Boolean consistent FIS would enable patients to easily estimate the likelihood that they have peritonitis when medical experts are not close at hand (a high likelihood would suggest that prompt treatment is indicated).
Design of micro distribution systems consisting of long channels with arbitrary cross sections
International Nuclear Information System (INIS)
Misdanitis, S; Valougeorgis, D
2012-01-01
Gas flows through long micro-channels of various cross sections have been extensively investigated over the years both numerically and experimentally. In various technological applications including microfluidics, these micro-channels are combined together in order to form a micro-channel network. Computational algorithms for solving gas pipe networks in the hydrodynamic regime are well developed. However, corresponding tools for solving networks consisting of micro-channels under any degree of gas rarefaction is very limited. Recently a kinetic algorithm has been developed to simulate gas distribution systems consisting of long circular channels under any vacuum conditions. In the present work this algorithm is generalized and extended into micro-channels of arbitrary cross-section etched by KOH in silicon (triangular and trapezoidal channels with acute angle of 54.74°). Since a kinetic approach is implemented, the analysis is valid and the results are accurate in the whole range of the Knudsen number, while the involved computational effort is very small. This is achieved by successfully integrating the kinetic results for the corresponding single channels into the general solver for designing the gas pipe network. To demonstrate the feasibility of the approach two typical systems consisting of long rectangular and trapezoidal micro-channels are solved.
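The network step of such an algorithm can be sketched as a Kirchhoff-type linear solve once each long channel has been reduced to a conductance: the molar flow through channel (i, j) is G_ij (P_i - P_j), and mass balance at each internal node closes the system. The conductances and topology below are arbitrary placeholders; in the kinetic approach they would come from the precomputed single-channel solutions valid over the whole Knudsen range.

```python
import numpy as np

# Hypothetical 4-node network: node 0 (inlet) and node 3 (outlet) have
# fixed pressures; nodes 1 and 2 are internal. G[i, j] is the channel
# conductance between nodes i and j (0 means no channel).
G = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 0.8, 2.0],
              [0.5, 0.8, 0.0, 1.5],
              [0.0, 2.0, 1.5, 0.0]])
P_fixed = {0: 2.0, 3: 1.0}
internal = [1, 2]

# Mass balance sum_j G_ij * (P_i - P_j) = 0 at each internal node i.
A = np.zeros((len(internal), len(internal)))
b = np.zeros(len(internal))
for r, i in enumerate(internal):
    A[r, r] = G[i].sum()
    for j in range(G.shape[0]):
        if j in P_fixed:
            b[r] += G[i, j] * P_fixed[j]
        elif j != i:
            A[r, internal.index(j)] -= G[i, j]
P1, P2 = np.linalg.solve(A, b)

# net molar inflow at the inlet must equal net outflow at the outlet
inflow = G[0, 1] * (2.0 - P1) + G[0, 2] * (2.0 - P2)
outflow = G[3, 1] * (P1 - 1.0) + G[3, 2] * (P2 - 1.0)
```

The same assembly works for any channel cross section: only the conductance values change, which is why integrating kinetic single-channel results into the network solver keeps the overall computational effort small.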
Armour, K.
2017-12-01
Global energy budget observations have been widely used to constrain the effective, or instantaneous climate sensitivity (ICS), producing median estimates around 2°C (Otto et al. 2013; Lewis & Curry 2015). A key question is whether the comprehensive climate models used to project future warming are consistent with these energy budget estimates of ICS. Yet, performing such comparisons has proven challenging. Within models, values of ICS robustly vary over time, as surface temperature patterns evolve with transient warming, and are generally smaller than the values of equilibrium climate sensitivity (ECS). Naively comparing values of ECS in CMIP5 models (median of about 3.4°C) to observation-based values of ICS has led to the suggestion that models are overly sensitive. This apparent discrepancy can partially be resolved by (i) comparing observation-based values of ICS to model values of ICS relevant for historical warming (Armour 2017; Proistosescu & Huybers 2017); (ii) taking into account the "efficacies" of non-CO2 radiative forcing agents (Marvel et al. 2015); and (iii) accounting for the sparseness of historical temperature observations and differences in sea-surface temperature and near-surface air temperature over the oceans (Richardson et al. 2016). Another potential source of discrepancy is a mismatch between observed and simulated surface temperature patterns over recent decades, due to either natural variability or model deficiencies in simulating historical warming patterns. The nature of the mismatch is such that simulated patterns can lead to more positive radiative feedbacks (higher ICS) relative to those engendered by observed patterns. The magnitude of this effect has not yet been addressed. Here we outline an approach to perform fully commensurate comparisons of climate models with global energy budget observations that take all of the above effects into account. We find that when apples-to-apples comparisons are made, values of ICS in models are
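The energy-budget estimate referenced here reduces to a one-line formula: ICS = F_2x ΔT / (ΔF − ΔN), with ΔT the observed warming, ΔF the change in radiative forcing, ΔN the change in top-of-atmosphere imbalance, and F_2x the forcing from a CO2 doubling. The numbers below are round illustrative values of the Otto et al. (2013) type, not this study's.

```python
# Energy-budget (instantaneous) climate sensitivity estimate:
# ICS = F2x * dT / (dF - dN); fluxes in W m^-2, temperature change in K.
def ics(d_temp, d_forcing, d_imbalance, f2x=3.7):
    return f2x * d_temp / (d_forcing - d_imbalance)

# illustrative round values of the kind used in such estimates
estimate = ics(d_temp=0.75, d_forcing=1.95, d_imbalance=0.65)
```

With these inputs the estimate lands near 2°C, matching the median cited above; the apples-to-apples comparisons discussed in the abstract amount to evaluating this same quantity within models over the historical period rather than comparing it to model ECS.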
Directory of Open Access Journals (Sweden)
Jenny Roth
2018-04-01
The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.
Roth, Jenny; Steffens, Melanie C; Vignoles, Vivian L
2018-01-01
The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.
Self-consistent nonlinear transmission line model of standing wave effects in a capacitive discharge
International Nuclear Information System (INIS)
Chabert, P.; Raimbault, J.L.; Rax, J.M.; Lieberman, M.A.
2004-01-01
It has been shown previously [Lieberman et al., Plasma Sources Sci. Technol. 11, 283 (2002)], using a non-self-consistent model based on solutions of Maxwell's equations, that several electromagnetic effects may compromise capacitive discharge uniformity. Among these, the standing wave effect dominates at low and moderate electron densities when the driving frequency is significantly greater than the usual 13.56 MHz. In the present work, two different global discharge models have been coupled to a transmission line model and used to obtain the self-consistent characteristics of the standing wave effect. An analytical solution for the wavelength λ was derived for the lossless case and compared to the numerical results. For typical plasma etching conditions (pressure 10-100 mTorr), a good approximation of the wavelength is λ/λ₀ ≅ 40 V₀^(1/10) l^(-1/2) f^(-2/5), where λ₀ is the wavelength in vacuum, V₀ is the rf voltage magnitude in volts at the discharge center, l is the electrode spacing in meters, and f is the driving frequency in hertz.
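The quoted wavelength scaling can be evaluated directly; in the sketch below the operating point (200 V, 3 cm gap, 60 MHz) is an assumed example, not a case from the paper.

```python
def standing_wave_ratio(V0, l, f):
    """Approximate plasma-to-vacuum wavelength ratio for the standing
    wave effect in a capacitive discharge (lossless approximation):
    lambda/lambda0 ~= 40 * V0**(1/10) * l**(-1/2) * f**(-2/5),
    with V0 the rf voltage magnitude (V) at the discharge center,
    l the electrode spacing (m), and f the driving frequency (Hz)."""
    return 40.0 * V0**0.1 * l**-0.5 * f**-0.4

# Assumed example operating point: 200 V, 3 cm gap, 60 MHz drive.
r = standing_wave_ratio(200.0, 0.03, 60e6)  # ratio well below 1
```

The ratio falls well below unity at high frequency, i.e. the wave in the plasma is much shorter than in vacuum, which is the root of the uniformity problem.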
Choi, Sung W; Gerencser, Akos A; Ng, Ryan; Flynn, James M; Melov, Simon; Danielson, Steven R; Gibson, Bradford W; Nicholls, David G; Bredesen, Dale E; Brand, Martin D
2012-11-21
Depressed cortical energy supply and impaired synaptic function are predominant associations of Alzheimer's disease (AD). To test the hypothesis that presynaptic bioenergetic deficits are associated with the progression of AD pathogenesis, we compared bioenergetic variables of cortical and hippocampal presynaptic nerve terminals (synaptosomes) from commonly used mouse models with AD-like phenotypes (J20 age 6 months, Tg2576 age 16 months, and APP/PS age 9 and 14 months) to age-matched controls. No consistent bioenergetic deficiencies were detected in synaptosomes from the three models; only APP/PS cortical synaptosomes from 14-month-old mice showed an increase in respiration associated with proton leak. J20 mice were chosen for a highly stringent investigation of mitochondrial function and content. There were no significant differences in the quality of the synaptosomal preparations or the mitochondrial volume fraction. Furthermore, respiratory variables, calcium handling, and membrane potentials of synaptosomes from symptomatic J20 mice under calcium-imposed stress were not consistently impaired. The recovery of marker proteins during synaptosome preparation was the same, ruling out the possibility that the lack of functional bioenergetic defects in synaptosomes from J20 mice was due to the selective loss of damaged synaptosomes during sample preparation. Our results support the conclusion that the intrinsic bioenergetic capacities of presynaptic nerve terminals are maintained in these symptomatic AD mouse models.
A time-consistent model for the monetary value of man-sievert
International Nuclear Information System (INIS)
Na, S.H.; Kim, Sun G.
2008-01-01
Performing a cost-benefit analysis to establish optimum levels of radiation protection under the ALARA principle, we introduce a discrete stepwise model to evaluate the monetary value of the man-sievert for Korea. The model formula, which is unique and country-specific, is composed of GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and the average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values optimized with respect to various ranges of individual dose, which is more realistic and applicable to the radiation protection area. Employing the constant (real) term of GDP, we show the real values of the man-sievert by year, which remain consistent in time-series comparison even under price-level fluctuation. GDP deflators of an economy have to be applied to measure one's own consistent value of radiation protection by year. In addition, we recommend that the concept of purchasing power parity be adopted if international comparison of alpha values in real terms is needed. Finally, we explain how this stepwise model can be generalized simply to other countries without normalizing any country-specific factors. (author)
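The abstract names the model's ingredients (GDP, nominal risk coefficient, aversion factor, life expectancy) but not its formula; the sketch below shows one hypothetical way such ingredients and a discrete stepwise aversion factor could combine. All functional forms and numbers here are assumptions, not the paper's country-specific model.

```python
def man_sievert_value(gdp_per_capita, risk_per_sievert, years_lost_per_case, aversion):
    """Hypothetical combination of the named ingredients: monetary value
    of one man-sievert as (detriment probability per Sv) x (life-years
    lost per health effect) x (annual GDP per capita) x (aversion
    factor). Illustration only; not the paper's actual formula."""
    return risk_per_sievert * years_lost_per_case * gdp_per_capita * aversion

def aversion_factor(individual_dose_sv, steps=((0.001, 1.0), (0.01, 2.0), (0.1, 5.0))):
    """Discrete stepwise aversion: return the factor for the highest
    dose threshold reached (thresholds and factors are invented)."""
    factor = 1.0
    for threshold, f in steps:
        if individual_dose_sv >= threshold:
            factor = f
    return factor

# Example: a 50 mSv individual dose falls in the middle aversion band.
value = man_sievert_value(30000.0, 0.057, 15.0, aversion_factor(0.05))
```

The stepwise `aversion_factor` mirrors the abstract's point that different ranges of individual dose warrant different optimized alpha values.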
On optimization of an experimental system consisting of beam guidance and nuclear detectors
International Nuclear Information System (INIS)
Lehr, H.; Hinderer, G.; Maier, K.H.
1978-02-01
This report deals with the optimization of the resolution in nuclear physics experiments with a beam of accelerated particles. The complete system, consisting of the beam handling, the nuclear reaction, and the particle detection, is described with a linear matrix formalism. This allows one to give analytic expressions for the linewidth of any physically interesting quantity, such as Q-values or the scattering angle in the centre-of-mass system, as a function of beam-line, nuclear-reaction, and spectrometer parameters. From these, general prescriptions for optimizing the resolution by matching the beam handling and the detector system are derived. Explicitly treated are the measurements of Q-values and CM scattering angle with an energy-sensitive detector, a time-of-flight spectrometer, and a magnetic spectrometer. (orig.)
RSMASS system model development
International Nuclear Information System (INIS)
Marshall, A.C.; Gallup, D.R.
1998-01-01
A radioisotope space power system model, RISMASS, is also under development. RISMASS will optimize and predict system masses for radioisotope power sources coupled with close-spaced thermionic diodes. Although RSMASS-D models have been developed for a broad variety of space nuclear power and propulsion systems, only a few concepts will be included in the releasable RSMASS-T computer code. A follow-on effort is recommended to incorporate all previous models as well as solar power system models into one general code. The proposed Space Power and propulsion system MASS (SPMASS) code would provide a consistent analysis tool for comparing a very broad range of alternative power and propulsion systems for any required power level and operating conditions. As with RSMASS-T, the SPMASS model should be a certified, fully documented computer code available for general use. The proposed computer program would provide space mission planners with the capability to quickly and cost-effectively explore power system options for any space mission. The code should be applicable for power requirements from as low as a few milliwatts (solar and isotopic system options) to many megawatts for reactor power and propulsion systems.
Quest for consistent modelling of statistical decay of the compound nucleus
Banerjee, Tathagata; Nath, S.; Pal, Santanu
2018-01-01
A statistical model description of heavy-ion-induced fusion-fission reactions is presented, in which shell effects, collective enhancement of level density, the tilting-away effect of compound nuclear spin, and dissipation are included. It is shown that the inclusion of all these effects provides a consistent picture of fission, where fission hindrance is required to explain the experimental values of both pre-scission neutron multiplicities and evaporation residue cross-sections, in contrast to some earlier works in which fission hindrance was required for pre-scission neutrons but fission enhancement for evaporation residue cross-sections.
A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model
DEFF Research Database (Denmark)
Spann, Robert; Roca, Christophe; Kold, David
2017-01-01
Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is highly required. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available, and an initial parameter estimation of the complete set …
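The estimation workflow described (initial knowledge-based guess, then fitting the kinetic model to data) can be sketched with a stand-in logistic growth law; the paper's actual first-principles model and parameter values are not reproduced here.

```python
import math

def logistic_growth(t, mu_max, X0, X_max):
    """Stand-in kinetics (logistic biomass growth); the paper's
    first-principles fermentation model is richer than this."""
    return X_max / (1.0 + (X_max / X0 - 1.0) * math.exp(-mu_max * t))

def sse(params, data):
    """Sum of squared errors between model and measurements."""
    mu_max, X0, X_max = params
    return sum((logistic_growth(t, mu_max, X0, X_max) - x) ** 2 for t, x in data)

# Synthetic "measurements" from known parameters, then re-estimate
# mu_max by a coarse scan around an initial knowledge-based guess.
true = (0.8, 0.1, 5.0)
data = [(t, logistic_growth(t, *true)) for t in range(0, 12)]
candidates = [0.5 + 0.05 * i for i in range(13)]  # scan 0.5 .. 1.1 1/h
best = min(candidates, key=lambda mu: sse((mu, 0.1, 5.0), data))
```

In practice the coarse scan would be replaced by a nonlinear least-squares solver, but the structure (guess, simulate, compare, refine) is the same.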
Tree, A C; Khoo, V S; van As, N J; Partridge, M
2014-04-01
The α/β ratio for prostate cancer is thought to be low and less than for the rectum, which is usually the dose-limiting organ. Hypofractionated radiotherapy should therefore improve the therapeutic ratio, increasing cure rates with less toxicity. A number of models for predicting biochemical relapse-free survival have been developed from large series of patients treated with conventional and moderately hypofractionated radiotherapy. The purpose of this study was to test these models when significant numbers of patients treated with profoundly hypofractionated radiotherapy were included. A systematic review of the literature with regard to hypofractionated radiotherapy for prostate cancer was conducted, focussing on data recently presented on prostate stereotactic body radiotherapy. For the work described here, we have taken published biochemical control rates for a range of moderately and profoundly fractionated schedules and plotted these together with a range of radiobiological models, which are described. The data reviewed show consistency between the various radiobiological model predictions and the currently observed data. Current radiobiological models provide accurate predictions of biochemical relapse-free survival, even when profoundly hypofractionated patients are included in the analysis. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
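The radiobiological models referred to are typically built on the linear-quadratic model; below is a minimal sketch of the biologically effective dose (BED) comparison that motivates hypofractionation. The schedules and the α/β value are common illustrative choices, not figures taken from the review.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose from the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta)), doses in Gy."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# With a low prostate alpha/beta (~1.5 Gy, as often assumed), a
# profoundly hypofractionated 5 x 7.25 Gy stereotactic schedule gives
# a higher tumour BED than a conventional 39 x 2 Gy schedule:
bed_sbrt = bed(5, 7.25, 1.5)
bed_conv = bed(39, 2.0, 1.5)
```

Because normal rectal tissue has a higher α/β, the same hypofractionated schedule raises tumour BED more than rectal BED, which is the therapeutic-ratio argument summarized above.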
Comparison of squashing and self-consistent input-output models of quantum feedback
Peřinová, V.; Lukš, A.; Křepelka, J.
2018-03-01
The paper (Yanagisawa and Hope, 2010) opens with two ways of analysing a measurement-based quantum feedback. The scheme of the feedback includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by the quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of the two models related to the same scheme. The first one admits that, in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators. As a consequence, in the feedback loop, squashing of the light occurs. In the second one, the description arrives at the feedback loop via unitary transformations. However, the unitary transformation which describes the modulator changes even the annihilation operator of the mode which passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second one the "self-consistent model". Although the predictions of the two models differ only a little and both ways of analysis have their advantages, they also have their drawbacks, and further investigation is possible.
Non-local thermodynamic equilibrium self-consistent average-atom model for plasma physics
International Nuclear Information System (INIS)
Faussurier, G.; Blancard, Ch.; Berthier, E.
2000-01-01
A time-dependent collisional-radiative average-atom model is presented to study statistical properties of highly charged ion plasmas in off-equilibrium conditions. Atomic structure is described either with a screened-hydrogenic model including l-splitting, or by calculating one-electron states in a self-consistent average-atom potential. Collisional and radiative excitation/deexcitation and ionization/recombination rates, as well as autoionization and dielectronic recombination rates, are formulated within the average-configuration framework. A good agreement with experiment is found for the charge-state distribution of a gold plasma at an electron density and temperature of 6 × 10^20 cm^-3 and 2200 eV, respectively. (author)
A self-consistent model for polycrystal deformation. Description and implementation
Energy Technology Data Exchange (ETDEWEB)
Clausen, B.; Lorentzen, T.
1997-04-01
This report is a manual for the ANSI C implementation of an incremental elastic-plastic rate-insensitive self-consistent polycrystal deformation model based on (Hutchinson 1970). The model is furthermore described in the Ph.D. thesis by Clausen (Clausen 1997). The structure of the main program, sc_model.c, and its subroutines are described with flow charts. Likewise, the pre-processor, sc_ini.c, is described with a flow chart. Default values of all the input parameters are given in the pre-processor, but the user is able to select from other pre-defined values or enter new values. A sample calculation is made, the results are presented as plots, and examples of the output files are shown. (au) 4 tabs., 28 ills., 17 refs.
Baczmański, A.; Gaj, A.; Le Joncour, L.; Wroński, S.; François, M.; Panicaud, B.; Braham, C.; Paradowska, A. M.
2012-08-01
The time-of-flight neutron diffraction technique and the elastoplastic self-consistent model were used to study the behaviour of single and multi-phase materials. Critical resolved shear stresses and hardening parameters in austenitic and austenitic-ferritic steels were found by analysing the evolution of the lattice strains measured during tensile tests. Special attention was paid to the changes of the grain stresses occurring due to transition from elastic to plastic deformation. Using a new method of data analysis, the variation of the stress localisation tensor as a function of macrostress was measured. The experimental results were successfully compared with model predictions for both phases of the duplex steel and also for the austenitic sample.
Self-Consistent Generation of Primordial Continental Crust in Global Mantle Convection Models
Jain, C.; Rozel, A.; Tackley, P. J.
2017-12-01
We present the generation of primordial continental crust (TTG rocks) using self-consistent and evolutionary thermochemical mantle convection models (Tackley, PEPI 2008). Numerical modelling commonly shows that mantle convection and continents have strong feedbacks on each other. However, in most studies continents are inserted a priori, while basaltic (oceanic) crust is generated self-consistently only in some models (Lourenco et al., EPSL 2016). Formation of primordial continental crust happened by fractional melting and crystallisation in episodes of relatively rapid growth from the late Archean to the late Proterozoic eras (3-1 Ga) (Hawkesworth & Kemp, Nature 2006), and it has also been linked to the onset of plate tectonics around 3 Ga. It takes several stages of differentiation to generate Tonalite-Trondhjemite-Granodiorite (TTG) rocks, or proto-continents. First, basaltic magma is extracted from the pyrolitic mantle, which is both erupted at the surface and intruded at the base of the crust. Second, it goes through eclogitic transformation and then partially melts to form TTGs (Rudnick, Nature 1995; Herzberg & Rudnick, Lithos 2012). TTGs account for the majority of the Archean continental crust. Based on the melting conditions proposed by Moyen (Lithos 2011), the feasibility of generating TTG rocks in numerical simulations has already been demonstrated by Rozel et al. (Nature, 2017). Here, we have developed the code further by parameterising TTG formation. We vary the ratio of intrusive (plutonic) and extrusive (volcanic) magmatism (Crisp, Volcanol. Geotherm. 1984) to study the relative volumes of three petrological TTG compositions as reported from field data (Moyen, Lithos 2011). Furthermore, we systematically vary parameters such as the friction coefficient, initial core temperature and composition-dependent viscosity to investigate the global tectonic regime of the early Earth. Continental crust can also be destroyed by subduction or delamination. We will investigate
Modeling and estimating system availability
International Nuclear Information System (INIS)
Gaver, D.P.; Chu, B.B.
1976-11-01
Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult
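The analytic availability models described can be illustrated with the simplest cases: a single repairable unit, and a low-order redundant (parallel) system of independent units. This is a sketch of the standard formulas; the report's models and jackknife inference procedure are more elaborate.

```python
def availability(mtbf, mttr):
    """Steady-state availability of a single repairable unit:
    uptime fraction = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def parallel_availability(unit_availabilities):
    """1-out-of-n parallel redundancy with independent units:
    the system is down only when every unit is down."""
    unavail = 1.0
    for a_i in unit_availabilities:
        unavail *= (1.0 - a_i)
    return 1.0 - unavail

a = availability(mtbf=990.0, mttr=10.0)   # single unit: 0.99
sys_a = parallel_availability([a, a])     # redundant pair: 0.9999
```

Redundancy turns a 1% single-unit downtime into a 0.01% system downtime, which is why low-order redundant systems make good worked examples.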
Hoteit, Ibrahim
2010-03-02
An eddy-permitting adjoint-based assimilation system has been implemented to estimate the state of the tropical Pacific Ocean. The system uses the Massachusetts Institute of Technology's general circulation model and its adjoint. The adjoint method is used to adjust the model to observations by controlling the initial temperature and salinity; temperature, salinity, and horizontal velocities at the open boundaries; and surface fluxes of momentum, heat, and freshwater. The model is constrained with most of the available data sets in the tropical Pacific, including Tropical Atmosphere and Ocean, ARGO, expendable bathythermograph, and satellite SST and sea surface height data, and climatologies. Results of hindcast experiments in 2000 suggest that the iterated adjoint-based descent is able to significantly improve the model consistency with the multivariate data sets, providing a dynamically consistent realization of the tropical Pacific circulation that generally matches the observations to within specified errors. The estimated model state is evaluated both by comparisons with observations and by checking the controls, the momentum balances, and the representation of small-scale features that were not well sampled by the observations used in the assimilation. As part of these checks, the estimated controls are smoothed and applied in independent model runs to check that small changes in the controls do not greatly change the model hindcast. This is a simple ensemble-based uncertainty analysis. In addition, the original and smoothed controls are applied to a version of the model with doubled horizontal resolution resulting in a broadly similar “downscaled” hindcast, showing that the adjustments are not tuned to a single configuration (meaning resolution, topography, and parameter settings). The time-evolving model state and the adjusted controls should be useful for analysis or to supply the forcing, initial, and boundary conditions for runs of other models.
Krimi, Abdelkader; Rezoug, Mehdi; Khelladi, Sofiane; Nogueira, Xesús; Deligant, Michael; Ramírez, Luis
2018-04-01
In this work, a consistent Smoothed Particle Hydrodynamics (SPH) model for the simulation of interfacial multiphase fluid flows is proposed. A modification of the Continuum Surface Stress formulation (CSS) [1] that enhances stability near the fluid interface is developed in the framework of the SPH method. A non-conservative, first-order consistent operator is used to compute the divergence of the surface stress tensor. This formulation retains all the advantages of the one proposed by Adami et al. [2] and, in addition, can be applied to simulations with more than two fluid phases. Moreover, the generalized wall boundary conditions [3] are modified so as to be well adapted to multiphase flows with different densities and viscosities, which allows the application of the technique to wall-bounded multiphase flows. We also present a particle redistribution strategy, as an extension of the damping technique presented in [3], to smooth the initial transient phase of gravitational multiphase flow simulations. Several computational tests are investigated to show the accuracy, convergence and applicability of the proposed SPH interfacial multiphase model.
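SPH quantities such as density are built from kernel-weighted sums over neighbouring particles; the sketch below uses the standard 1D cubic-spline kernel, chosen for illustration and not necessarily the kernel or formulation of the paper.

```python
def cubic_spline_W(r, h):
    """Standard 1D cubic-spline SPH kernel with support radius 2h
    (normalization factor 2/(3h) in one dimension)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

# Density by kernel summation over equally spaced unit-mass particles;
# away from boundaries the estimate recovers the exact value m/dx = 10.
dx, h = 0.1, 0.13
xs = [i * dx for i in range(100)]
rho_mid = sum(1.0 * cubic_spline_W(xs[50] - xj, h) for xj in xs)
```

Near a free surface or wall the same sum is truncated and loses accuracy, which is exactly why consistency-restoring operators and generalized wall boundary conditions of the kind described above are needed.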
Self-consistent modeling of plasma response to impurity spreading from intense localized source
International Nuclear Information System (INIS)
Koltunov, Mikhail
2012-07-01
Non-hydrogen impurities unavoidably exist in hot plasmas of present fusion devices. They enter it intrinsically, due to plasma interaction with the wall of vacuum vessel, as well as are seeded for various purposes deliberately. Normally, the spots where injected particles enter the plasma are much smaller than its total surface. Under such conditions one has to expect a significant modification of local plasma parameters through various physical mechanisms, which, in turn, affect the impurity spreading. Self-consistent modeling of interaction between impurity and plasma is, therefore, not possible with linear approaches. A model based on the fluid description of electrons, main and impurity ions, and taking into account the plasma quasi-neutrality, Coulomb collisions of background and impurity charged particles, radiation losses, particle transport to bounding surfaces, is elaborated in this work. To describe the impurity spreading and the plasma response self-consistently, fluid equations for the particle, momentum and energy balances of various plasma components are solved by reducing them to ordinary differential equations for the time evolution of several parameters characterizing the solution in principal details: the magnitudes of plasma density and plasma temperatures in the regions of impurity localization and the spatial scales of these regions. The results of calculations for plasma conditions typical in tokamak experiments with impurity injection are presented. A new mechanism for the condensation phenomenon and formation of cold dense plasma structures is proposed.
Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models
Vignal, Philippe
2016-02-11
Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly, as done in sharp-interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems encountered by their predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free-energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation, a model put forward to describe microstructure evolution. The algorithm developed conserves mass, guarantees energy stability, and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
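The free-energy monotonicity that these algorithms guarantee can be illustrated on a zero-dimensional gradient flow with a double-well potential (a toy sketch, not the thesis's phase-field crystal scheme): implicit time steps drive the free energy monotonically downhill.

```python
def free_energy(u):
    """Double-well free energy F(u) = (u^2 - 1)^2 / 4."""
    return (u * u - 1.0) ** 2 / 4.0

def grad_F(u):
    """dF/du = u^3 - u."""
    return u ** 3 - u

def backward_euler_step(u, dt, iters=50):
    """Solve the implicit step u_new = u - dt * grad_F(u_new) by
    fixed-point iteration (a contraction for this dt). For gradient
    flows such implicit steps keep the free energy non-increasing at
    moderate step sizes."""
    v = u
    for _ in range(iters):
        v = u - dt * grad_F(v)
    return v

u, dt = 0.3, 0.1
energies = [free_energy(u)]
for _ in range(20):
    u = backward_euler_step(u, dt)
    energies.append(free_energy(u))
# u relaxes toward the well at u = 1 while F(u) decreases every step.
```

Real phase-field schemes apply the same idea to discretized PDEs, where naive explicit or ill-designed implicit steps can make the discrete energy grow and produce the incorrect solutions mentioned above.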
International Nuclear Information System (INIS)
Gustafsson, Jon Petter; Daessman, Ellinor; Baeckstroem, Mattias
2009-01-01
Uranium(VI), which is often elevated in granitoidic groundwaters, is known to adsorb strongly to Fe (hydr)oxides under certain conditions. This process can be used in water treatment to remove U(VI). To develop a consistent geochemical model for U(VI) adsorption to ferrihydrite, batch experiments were performed and previous data sets reviewed to optimize a set of surface complexation constants using the 3-plane CD-MUSIC model. To consider the effect of dissolved organic matter (DOM) on U(VI) speciation, new parameters for the Stockholm Humic Model (SHM) were optimized using previously published data. The model, which was constrained from available extended X-ray absorption fine structure (EXAFS) spectroscopy evidence, fitted the data well when the surface sites were divided into low- and high-affinity binding sites. Application of the model concept to other published data sets revealed differences in the reactivity of different ferrihydrites towards U(VI). Use of the optimized SHM parameters for U(VI)-DOM complexation showed that this process is important for U(VI) speciation at low pH. However, in neutral to alkaline waters with substantial carbonate present, Ca-U-CO3 complexes predominate. The calibrated geochemical model was used to simulate U(VI) adsorption to ferrihydrite for a hypothetical groundwater in the presence of several competitive ions. The results showed that U(VI) adsorption was strong between pH 5 and 8. Even near the calcite saturation limit, where U(VI) adsorption was weakest according to the model, the adsorption percentage was predicted to be >80%. Hence U(VI) adsorption to ferrihydrite-containing sorbents may be used as a method to bring down U(VI) concentrations to acceptable levels in groundwater.
A feasibility study on FP transmutation for Self-Consistent Nuclear Energy System (SCNES)
International Nuclear Information System (INIS)
Fujita, Reiko; Kawashima, Masatoshi; Ueda, Hiroaki; Takagi, Ryuzo; Matsuura, Haruaki; Fujii-e, Yoichi
1997-01-01
A fast reactor core/fuel cycle concept is discussed for the future 'Self-Consistent Nuclear Energy System (SCNES)' concept. The present study mainly discusses the long-lived fission product (LLFP) burning capability and recycle scheme in the framework of a metallic fuel fast reactor cycle, aiming at the goals of fuel breeding capability and confinement of TRU and radioactive FPs within the system. In the present paper, the burning capability for Cs-135 and Zr-93 is mainly discussed from neutronic and chemical viewpoints, assuming a metallic fuel cycle system. Recent experimental results indicate that Cs can be separated within the pyroprocess for the metal fuel recycle system, as previously designed for a candidate fuel cycle system. Combining neutron spectrum shift for target sub-assemblies and isotope separation using tunable lasers, the LLFP burning capability is enhanced. This result indicates that the major LLFPs can be treated in additional recycle schemes to avoid LLFP accumulation along with energy production. In total, the proposed fuel cycle is a candidate for realizing the SCNES concept. (author)
Directory of Open Access Journals (Sweden)
Hans-Jörg Rheinberger
2011-06-01
It is generally accepted that the development of the modern sciences is rooted in experiment. Yet for a long time, experimentation occupied a prominent role neither in philosophy nor in the history of science. With the 'practical turn' in studying the sciences and their history, this has begun to change. This paper is concerned with systems and cultures of experimentation and the consistencies that are generated within such systems and cultures. The first part of the paper exposes the forms of historical and structural coherence that characterize the experimental exploration of epistemic objects. In the second part, a particular experimental culture in the life sciences is briefly described as an example. A survey is given of what it means, and what it takes, to analyze biological functions in the test tube.
Consistent modelling of wind turbine noise propagation from source to receiver.
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick
2017-11-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
Consistency and discrepancy in the atmospheric response to Arctic sea-ice loss across climate models
Screen, James A.; Deser, Clara; Smith, Doug M.; Zhang, Xiangdong; Blackport, Russell; Kushner, Paul J.; Oudar, Thomas; McCusker, Kelly E.; Sun, Lantao
2018-02-01
The decline of Arctic sea ice is an integral part of anthropogenic climate change. Sea-ice loss is already having a significant impact on Arctic communities and ecosystems. Its role as a cause of climate changes outside of the Arctic has also attracted much scientific interest. Evidence is mounting that Arctic sea-ice loss can affect weather and climate throughout the Northern Hemisphere. The remote impacts of Arctic sea-ice loss can only be properly represented using models that simulate interactions among the ocean, sea ice, land and atmosphere. A synthesis of six such experiments with different models shows consistent hemispheric-wide atmospheric warming, strongest in the mid-to-high-latitude lower troposphere; an intensification of the wintertime Aleutian Low and, in most cases, the Siberian High; a weakening of the Icelandic Low; and a reduction in strength and southward shift of the mid-latitude westerly winds in winter. The atmospheric circulation response seems to be sensitive to the magnitude and geographic pattern of sea-ice loss and, in some cases, to the background climate state. However, it is unclear whether current-generation climate models respond too weakly to sea-ice change. We advocate for coordinated experiments that use different models and observational constraints to quantify the climate response to Arctic sea-ice loss.
Model for ICRF fast wave current drive in self-consistent MHD equilibria
International Nuclear Information System (INIS)
Bonoli, P.T.; Englade, R.C.; Porkolab, M.; Fenstermacher, M.E.
1993-01-01
Recently, a model for fast wave current drive in the ion cyclotron radio frequency (ICRF) range was incorporated into the current drive and MHD equilibrium code ACCOME. The ACCOME model combines a free boundary solution of the Grad-Shafranov equation with the calculation of driven currents due to neutral beam injection, lower hybrid (LH) waves, bootstrap effects, and ICRF fast waves. The equilibrium and current drive packages iterate between each other to obtain an MHD equilibrium which is consistent with the profiles of driven current density. The ICRF current drive package combines a toroidal full-wave code (FISIC) with a parameterization of the current drive efficiency obtained from an adjoint solution of the Fokker-Planck equation. The electron absorption calculation in the full-wave code properly accounts for the combined effects of electron Landau damping (ELD) and transit time magnetic pumping (TTMP), assuming a Maxwellian (or bi-Maxwellian) electron distribution function. Furthermore, the current drive efficiency includes the effects of particle trapping, momentum-conserving corrections to the background Fokker-Planck collision operator, and toroidally induced variations in the parallel wavenumbers of the injected ICRF waves. This model has been used to carry out detailed studies of advanced physics scenarios in the proposed Tokamak Physics Experiment (TPX). Results are shown, for example, which demonstrate the possibility of achieving stable equilibria at high beta and high bootstrap current fraction in TPX. Model results are also shown for the proposed ITER device.
A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints
Directory of Open Access Journals (Sweden)
L. Kantha
2016-01-01
The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed redshift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein’s equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10⁻⁵² m⁻²). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest redshift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ⁻², where τ is the normalized cosmic time) and G increases (G ~ τⁿ) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future and not directly on G.
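As context for the variable-parameter proposal, the standard Friedmann equation from which such models start can be written with explicitly time-dependent Λ and G (a generic flat-universe form, not the paper's exact system):

```latex
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G(\tau)}{3}\,\rho + \frac{\Lambda(\tau)\,c^{2}}{3},
\qquad \Lambda(\tau) \sim \tau^{-2}, \quad G(\tau) \sim \tau^{\,n},
```

where a is the scale factor, ρ the total matter density, and τ the normalized cosmic time.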
Thermodynamically consistent modeling of elementary electrochemistry in lithium-ion batteries
International Nuclear Information System (INIS)
Colclasure, Andrew M.; Kee, Robert J.
2010-01-01
This paper is particularly concerned with the elementary reactions and transport processes that are responsible for Li-ion battery performance. The model generally follows the widely practiced approach developed by Newman and co-workers (e.g., Doyle et al., J. Electrochem. Soc. 140 (1993) 1526). However, there are significant departures, especially in modeling electrochemical charge transfer. The present approach introduces systems of microscopically reversible reactions, including both heterogeneous thermal reactions and electrochemical charge-transfer reactions. All reaction rates are evaluated in elementary form, providing a powerful alternative to a Butler-Volmer formalism for the charge-transfer reactions. Particular attention is given to the influence of non-ideal thermodynamics on the evaluation of reversible potentials and charge-transfer rates. The theory and modeling approach establishes a framework for extending chemistry models to incorporate detailed reaction mechanisms that represent multiple competitive reaction pathways.
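For comparison with the elementary-kinetics approach described above, the Butler-Volmer formalism it replaces can be sketched in a few lines; the exchange current density and transfer coefficients below are illustrative values, not parameters from the paper.

```python
import numpy as np

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)

def butler_volmer(eta, i0=1.0, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Butler-Volmer current density (A/m^2) as a function of overpotential eta (V)."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T))
                 - np.exp(-alpha_c * F * eta / (R * T)))
```

With symmetric transfer coefficients the current is antisymmetric in the overpotential; the elementary-reaction formulation relaxes this fixed functional form.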
Self-consistent theory of finite Fermi systems and Skyrme–Hartree–Fock method
Energy Technology Data Exchange (ETDEWEB)
Saperstein, E. E., E-mail: saper@mbslab.kiae.ru; Tolokonnikov, S. V. [National Research Center Kurchatov Institute (Russian Federation)
2016-11-15
Recent results obtained on the basis of the self-consistent theory of finite Fermi systems by employing the energy density functional proposed by Fayans and his coauthors are surveyed. These results are compared with the predictions of Skyrme–Hartree–Fock theory involving several popular versions of the Skyrme energy density functional. Spherical nuclei are predominantly considered. The charge radii of even and odd nuclei and features of low-lying 2⁺ excitations in semimagic nuclei are discussed briefly. The single-particle energies of magic nuclei are examined in more detail with allowance for corrections to mean-field theory that are induced by particle coupling to low-lying collective surface excitations (phonons). The importance of taking into account, in this problem, nonpole (tadpole) diagrams, which are usually disregarded, is emphasized. The spectroscopic factors of magic and semimagic nuclei are also considered. In this problem, only the surface term stemming from the energy dependence induced in the mass operator by the exchange of surface phonons is usually taken into account. The volume contribution associated with the energy dependence initially present in the mass operator within the self-consistent theory of finite Fermi systems because of the exchange of high-lying particle–hole excitations is also included in the spectroscopic factor. The results of the first studies that employed the Fayans energy density functional for deformed nuclei are also presented.
Berg, Matthew; Hartley, Brian; Richters, Oliver
2015-01-01
By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
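As a minimal illustration of what "stock-flow consistent" means, the textbook SIM model of Godley and Lavoie (a single government-money economy, far simpler than the multi-sector model described above) can be simulated in a few lines; parameter values are illustrative.

```python
def simulate_sim(G=20.0, theta=0.2, alpha1=0.6, alpha2=0.4, periods=200):
    """Godley-Lavoie SIM model: government spending G, tax rate theta,
    consumption out of disposable income (alpha1) and out of money (alpha2).
    Returns the national-income series Y; the stationary state is Y* = G/theta."""
    H = 0.0          # household money stock, the only financial stock
    Y_series = []
    for _ in range(periods):
        # solve the period's simultaneous equations:
        # Y = G + C,  C = alpha1*(1-theta)*Y + alpha2*H
        Y = (G + alpha2 * H) / (1.0 - alpha1 * (1.0 - theta))
        YD = Y - theta * Y               # disposable income
        C = alpha1 * YD + alpha2 * H     # consumption
        H += YD - C                      # stock-flow consistency: saving = new money
        Y_series.append(Y)
    return Y_series

# the series converges to the stationary income Y* = G/theta = 100
```

Every flow accumulates into a stock (household saving becomes money), which is the defining accounting discipline the paper builds on at much larger scale.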
Is the thermal-spike model consistent with experimentally determined electron temperature?
International Nuclear Information System (INIS)
Ajryan, Eh.A.; Fedorov, A.V.; Kostenko, B.F.
2000-01-01
Carbon K-Auger electron spectra from amorphous carbon foils induced by fast heavy ions are theoretically investigated. The high-energy tail of the Auger structure showing a clear projectile charge dependence is analyzed within the thermal-spike model framework as well as in the frame of another model taking into account some kinetic features of the process. A poor comparison results between theoretically and experimentally determined temperatures are suggested to be due to an improper account of double electron excitations or due to shake-up processes which leave the system in a more energetic initial state than a statically screened core hole
Yang, Yuyi; Wei, Buqing; Zhao, Yuhua; Wang, Jun
2013-02-01
Azo dyes are toxic and carcinogenic and are often present in industrial effluents. In this research, azoreductase and glucose 1-dehydrogenase were coupled for both continuous generation of the cofactor NADH and azo dye removal. The results show that 85% of the maximum relative activity of azoreductase in the integrated enzyme system was obtained under the following conditions: 1 U azoreductase : 10 U glucose 1-dehydrogenase, 250 mM glucose, 1.0 mM NAD⁺ and 150 μM methyl red. Sensitivity analysis of the factors affecting dye removal in the enzyme system, examined with an artificial neural network model, gave a relative importance of 22% for the enzyme ratio between azoreductase and glucose 1-dehydrogenase, 27% for dye concentration, 23% for NAD⁺ concentration and 22% for glucose concentration, indicating that none of the variables could be ignored. Batch results show that the enzyme system has application potential for dye removal.
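A minimal kinetic sketch of such cofactor cycling can be written as two coupled Michaelis-Menten-type rates integrated with an Euler loop; the rate laws, Km values and maximum rates below are hypothetical round numbers, not the measured kinetics of the paper.

```python
def simulate_coupled(dye0=150e-6, nadh0=0.0, nad0=1e-3, glucose0=0.25,
                     v_azo=2e-6, v_gdh=2e-5, km=1e-4, dt=0.1, steps=20000):
    """Euler sketch of cofactor cycling (all concentrations in M, rates in M/s):
    glucose 1-dehydrogenase (GDH) reduces NAD+ to NADH using glucose;
    azoreductase oxidizes NADH while reducing the azo dye. Returns residual dye."""
    dye, nadh, nad, glc = dye0, nadh0, nad0, glucose0
    for _ in range(steps):
        r_gdh = v_gdh * (glc / (glc + km)) * (nad / (nad + km))    # NADH regeneration
        r_azo = v_azo * (nadh / (nadh + km)) * (dye / (dye + km))  # dye reduction
        nadh += dt * (r_gdh - r_azo)
        nad += dt * (r_azo - r_gdh)   # NAD+ + NADH pool is conserved
        glc -= dt * r_gdh
        dye -= dt * r_azo
        dye, glc = max(dye, 0.0), max(glc, 0.0)
    return dye
```

With these placeholder constants the dye is essentially exhausted within the simulated 2000 s, illustrating why the regenerating enzyme can be kept in excess.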
Self-consistent field modeling of adsorption from polymer/surfactant mixtures.
Postmus, Bart R; Leermakers, Frans A M; Cohen Stuart, Martien A
2008-06-01
We report on the development of a self-consistent field model that describes the competitive adsorption of nonionic alkyl-(ethylene oxide) surfactants and nonionic polymer poly(ethylene oxide) (PEO) from aqueous solutions onto silica. The model explicitly describes the response to the pH and the ionic strength. On an inorganic oxide surface such as silica, the dissociation of the surface depends on the pH. However, salt ions can screen charges on the surface, and hence, the number of dissociated groups also depends on the ionic strength. Furthermore, the solvent quality for the EO groups is a function of the ionic strength. Using our model, we can compute bulk parameters such as the average size of the polymer coil and the surfactant CMC. We can make predictions on the adsorption behavior of either polymers or surfactants, and we have made adsorption isotherms, i.e., calculated the relationship between the surface excess and its corresponding bulk concentration. When we add both polymer and surfactant to our mixture, we can find a surfactant concentration (or, more precisely, a surfactant chemical potential) below which only the polymer will adsorb and above which only the surfactant will adsorb. The corresponding surfactant concentration is called the CSAC. In a first-order approximation, the surfactant chemical potential has the CMC as its upper bound. We can find conditions for which the CMC lies below the CSAC. We used the model to understand the experimental data from one of our previous articles. We managed to explain most, but unfortunately not all, of the experimental trends. At the end of the article we discuss the possibilities for improving the model.
Directory of Open Access Journals (Sweden)
Damian M Cummings
2010-05-01
Since the identification of the gene responsible for HD (Huntington's disease), many genetic mouse models have been generated. Each employs a unique approach for delivery of the mutated gene and has a different CAG repeat length and background strain. The resultant diversity in the genetic context and phenotypes of these models has led to extensive debate regarding the relevance of each model to the human disorder. Here, we compare and contrast the striatal synaptic phenotypes of two models of HD, namely the YAC128 mouse, which carries the full-length huntingtin gene on a yeast artificial chromosome, and the CAG140 KI (knock-in) mouse, which carries a human/mouse chimaeric gene that is expressed in the context of the mouse genome, with our previously published data obtained from the R6/2 mouse, which is transgenic for exon 1 mutant huntingtin. We show that striatal MSNs (medium-sized spiny neurons) in YAC128 and CAG140 KI mice have similar electrophysiological phenotypes to that of the R6/2 mouse. These include a progressive increase in membrane input resistance, a reduction in membrane capacitance, a lower frequency of spontaneous excitatory postsynaptic currents and a greater frequency of spontaneous inhibitory postsynaptic currents in a subpopulation of striatal neurons. Thus, despite differences in the context of the inserted gene between these three models of HD, the primary electrophysiological changes observed in striatal MSNs are consistent. The outcomes suggest that the changes are due to the expression of mutant huntingtin and such alterations can be extended to the human condition.
Self-Consistent Atmosphere Models of the Most Extreme Hot Jupiters
Lothringer, Joshua; Barman, Travis
2018-01-01
We present a detailed look at self-consistent PHOENIX atmosphere models of the most highly irradiated hot Jupiters known to exist. These hot Jupiters typically have equilibrium temperatures approaching and sometimes exceeding 3000 K, orbiting A, F, and early-G type stars on orbits less than 0.03 AU (10x closer than Mercury is to the Sun). The most extreme example, KELT-9b, is the hottest known hot Jupiter with a measured dayside temperature of 4600 K. Many of the planets we model have recently attracted attention with high profile discoveries, including temperature inversions in WASP-33b and WASP-121b, changing phase curve offsets possibly caused by magnetohydrodynamic effects in HAT-P-7b, and TiO in WASP-19b. Our modeling provides a look at the a priori expectations for these planets and helps us understand these recent discoveries. We show that, in the hottest cases, all molecules are dissociated down to relatively high pressures. These planets may have detectable temperature inversions, more akin to thermospheres than stratospheres in that an optical absorber like TiO or VO is not needed. Instead, the inversions are created by a lack of cooling in the IR combined with heating from atoms and ions at UV and blue optical wavelengths. We also reevaluate some of the assumptions that have been made in retrieval analyses of these planets.
Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech
Directory of Open Access Journals (Sweden)
Petráš Rudolf
2014-06-01
Height-diameter models define the general relationship between the tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), Silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and three-parameter Korf functions. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, and then the final growth model has only one or two parameters to be estimated. These “free” parameters are then expressed over the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5- to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. As many as 9 regression models were derived for each species. Although the goodness of fit of all models showed that they were generally well suited for the data, the best results were obtained for silver fir. The coefficient of determination ranged from 0.946 to 0.948, RMSE (m) was in the interval 1.94-1.97, and the bias (m) was -0.031 to 0.063. Parameter estimates for spruce were slightly less precise, and those obtained for beech were the least precise. The coefficient of determination for beech was 0.854-0.860, RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf’s formula produced slightly better estimations than Michailoff’s, and it proved immaterial which parameter was fixed and which were estimated.
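The two growth functions named above have standard forms: Michailoff h = 1.3 + a·exp(-b/d) and Korf h = 1.3 + a·exp(-b·d^-c). A minimal sketch of fitting the Michailoff curve to synthetic height-diameter data (true parameters, noise level and diameter range are illustrative, not the Slovak plot data):

```python
import numpy as np
from scipy.optimize import curve_fit

def michailoff(d, a, b):
    """Two-parameter Michailoff height-diameter curve: h = 1.3 + a*exp(-b/d)."""
    return 1.3 + a * np.exp(-b / d)

def korf(d, a, b, c):
    """Three-parameter Korf curve, shown for the functional form: h = 1.3 + a*exp(-b*d**-c)."""
    return 1.3 + a * np.exp(-b * d**(-c))

# synthetic "measurements" generated from known parameters plus noise
rng = np.random.default_rng(0)
d = np.linspace(8.0, 60.0, 120)                             # diameters (cm)
h = michailoff(d, 32.0, 9.0) + rng.normal(0, 0.3, d.size)   # heights (m)

(a_hat, b_hat), _ = curve_fit(michailoff, d, h, p0=(30.0, 10.0))
```

The constant 1.3 m is breast height, so the curve passes through h = 1.3 m at d = 0 by construction.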
Thermal states of neutron stars with a consistent model of interior
Fortin, M.; Taranto, G.; Burgio, G. F.; Haensel, P.; Schulze, H.-J.; Zdunik, J. L.
2018-04-01
We model the thermal states of both isolated neutron stars and accreting neutron stars in X-ray transients in quiescence and confront them with observations. We use an equation of state calculated using realistic two-body and three-body nucleon interactions, and superfluid nucleon gaps obtained using the same microscopic approach in the BCS approximation. Consistency with low-luminosity accreting neutron stars is obtained, as the direct Urca process is operating in neutron stars with mass larger than 1.1 M⊙ for the employed equation of state. In addition, proton superfluidity and sufficiently weak neutron superfluidity, obtained using a scaling factor for the gaps, are necessary to explain the cooling of middle-aged neutron stars and to obtain a realistic distribution of neutron star masses.
Buchanan, John J; Dean, Noah
2014-02-01
The experiment undertaken was designed to elucidate the impact of model skill level on observational learning processes. The task was bimanual circle tracing with a 90° relative phase lead of one hand over the other hand. Observer groups watched videos of either an instruction model, a discovery model, or a skilled model. The instruction and skilled model always performed the task with the same movement strategy, the right-arm traced clockwise and the left-arm counterclockwise around circle templates with the right-arm leading. The discovery model used several movement strategies (tracing-direction/hand-lead) during practice. Observation of the instruction and skilled model provided a significant benefit compared to the discovery model when performing the 90° relative phase pattern in a post-observation test. The observers of the discovery model had significant room for improvement and benefited from post-observation practice of the 90° pattern. The benefit of a model is found in the consistency with which that model uses the same movement strategy, and not within the skill level of the model. It is the consistency in strategy modeled that allows observers to develop an abstract perceptual representation of the task that can be implemented into a coordinated action. Theoretically, the results show that movement strategy information (relative motion direction, hand lead) and relative phase information can be detected through visual perception processes and be successfully mapped to outgoing motor commands within an observational learning context.
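The 90° relative phase between the two hands can be quantified directly from trajectories; a sketch using the Hilbert transform on synthetic signals (sampling rate, frequency and duration are illustrative, not the experiment's recordings):

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                              # sample rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
f = 1.2                                 # tracing frequency (Hz)
right = np.cos(2 * np.pi * f * t)               # leading hand
left = np.cos(2 * np.pi * f * t - np.pi / 2)    # lagging by 90 degrees

# instantaneous relative phase from the analytic signals
rel = np.rad2deg(np.angle(hilbert(right) * np.conj(hilbert(left))))
mid = rel[len(rel) // 4: 3 * len(rel) // 4]     # discard Hilbert edge effects
mean_rel = mid.mean()                            # close to 90 degrees
```

Multiplying one analytic signal by the conjugate of the other avoids phase-unwrapping artifacts when the individual phases wrap at ±180°.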
Energy Technology Data Exchange (ETDEWEB)
BRANNON,REBECCA M.
2000-11-01
A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths, as will be discussed in a companion paper.
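The yield function in this model is user-specifiable; as a concrete, standard example of a pressure-, shear-, and porosity-dependent yield function (the Gurson-Tvergaard form, a textbook porous-plasticity surface, not the specific function used in this work):

```python
import numpy as np

def gurson_tvergaard(sigma_eq, sigma_m, phi, sigma_y, q1=1.5, q2=1.0):
    """Gurson-Tvergaard yield function for a porous solid.
    sigma_eq: von Mises effective (shear) stress, sigma_m: mean stress (pressure),
    phi: void volume fraction, sigma_y: matrix yield stress.
    Returns < 0 in the elastic domain, 0 on the yield surface."""
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * phi * np.cosh(1.5 * q2 * sigma_m / sigma_y)
            - 1.0 - (q1 * phi) ** 2)
```

At zero porosity the cosh term vanishes and the surface reduces to von Mises; with phi > 0, increasing mean stress pushes the state toward yield, which captures the shear/pressure coupling of pore collapse mentioned in the abstract.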
Charge transfer from first principles: self-consistent GW applied to donor-acceptor systems
Atalla, Viktor; Caruso, Fabio; Rubio, Angel; Scheffler, Matthias; Rinke, Patrick
2015-03-01
Charge transfer in donor-acceptor systems (DAS) is determined by the relative alignment between the frontier orbitals of the donor and the acceptor. Semi-local approximations to density functional theory (DFT) may give a qualitatively wrong level alignment in DAS, leading to unphysical fractional electron transfer in weakly bound donor-acceptor pairs. GW calculations based on first-order perturbation theory (G0W0) correct the level alignment but leave the electron density unaffected. We demonstrate that self-consistent GW (scGW) provides an ideal framework for the description of charge transfer in DAS. Moreover, scGW seamlessly accounts for many-body correlations and van der Waals interactions. As in G0W0, the scGW level alignment is in agreement with experimental reference data. In scGW, however, the electron density is also treated at the GW level and is therefore consistent with the level alignment between donor and acceptor, leading to a qualitatively correct description of charge-transfer properties.
Self-consistent second-order Green’s function perturbation theory for periodic systems
International Nuclear Information System (INIS)
Rusakov, Alexander A.; Zgid, Dominika
2016-01-01
Despite recent advances, systematic quantitative treatment of the electron correlation problem in extended systems remains a formidable task. Systematically improvable Green’s function methods capable of quantitatively describing weak and at least qualitatively strong correlations appear as promising candidates for computational treatment of periodic systems. We present a periodic implementation of the temperature-dependent self-consistent second-order Green’s function (GF2) method, where the self-energy is evaluated in the basis of atomic orbitals. Evaluating the real-space self-energy in atomic orbitals and solving the Dyson equation in k-space are the key components of a computationally feasible algorithm. We apply this technique to the one-dimensional hydrogen lattice — a prototypical crystalline system with a realistic Hamiltonian. By analyzing the behavior of the spectral functions, natural occupations, and self-energies, we claim that GF2 is able to recover metallic, band insulating, and at least qualitatively Mott regimes. We observe that the iterative nature of GF2 is essential to the emergence of the metallic and Mott phases.
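The key computational step named above, solving the Dyson equation at each frequency and k-point, reduces to a matrix inversion; a minimal numpy sketch with a toy two-orbital Hamiltonian and a static self-energy (both hypothetical, unrelated to the hydrogen-lattice calculation):

```python
import numpy as np

def dyson(omega, H, Sigma, eta=1e-6):
    """Solve the Dyson equation G(omega) = [(omega + i*eta) I - H - Sigma]^-1."""
    n = H.shape[0]
    A = (omega + 1j * eta) * np.eye(n) - H - Sigma
    return np.linalg.inv(A)

# toy 2x2 Hamiltonian and a frequency-independent self-energy
H = np.array([[0.0, -1.0], [-1.0, 0.0]])
Sigma = np.array([[0.1 + 0j, 0.0], [0.0, 0.1 + 0j]])
omega = 0.5
G0 = dyson(omega, H, np.zeros((2, 2)))   # non-interacting Green's function
G = dyson(omega, H, Sigma)               # interacting Green's function
# the two satisfy the Dyson identity G = G0 + G0 @ Sigma @ G
```

In the self-consistent GF2 loop, Sigma would itself be rebuilt from G at each iteration until G stops changing.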
Neutron excess generation by fusion neutron source for self-consistency of nuclear energy system
International Nuclear Information System (INIS)
Saito, Masaki; Artisyuk, V.; Chmelev, A.
1999-01-01
Present-day fission energy technology faces the problem of transmutation of dangerous radionuclides, which requires neutron excess generation. A nuclear energy system based on fission reactors needs fuel breeding and therefore suffers from a lack of neutron excess with which to apply a large-scale transmutation option including elimination of fission products. A fusion neutron source (FNS) was proposed to improve the neutron balance in the nuclear energy system. The energy associated with the performance of the FNS should be small enough to keep it in the position of a neutron excess generator, thus leaving the role of dominant energy producer to the fission reactors. The present paper deals with the development of a general methodology to estimate the effect of neutron excess generation by the FNS on the performance of the nuclear energy system as a whole. Multiplication of fusion neutrons in both non-fissionable and fissionable multipliers was considered. Based on the present methodology, it was concluded that neutron self-consistency with respect to fuel breeding and transmutation of fission products can be attained with a small fraction of the energy associated with innovative fusion facilities. (author)
Consistent Probabilistic Description of the Neutral Kaon System: Novel Observable Effects
Bernabeu, J.; Villanueva-Perez, P.
2013-01-01
The neutral Kaon system has both CP violation in the mass matrix and a non-vanishing lifetime difference in the width matrix. This leads to an effective Hamiltonian which is not a normal operator, with incompatible (non-commuting) masses and widths. In the Weisskopf-Wigner Approach (WWA), by diagonalizing the entire Hamiltonian, the unphysical non-orthogonal "stationary" states $K_{L,S}$ are obtained. These states have complex eigenvalues whose real (imaginary) part does not coincide with the eigenvalues of the mass (width) matrix. In this work we describe the system as an open Lindblad-type quantum mechanical system due to Kaon decays. This approach, in terms of density matrices for initial and final states, provides a consistent probabilistic description, avoiding the standard problems because the width matrix becomes a composite operator not included in the Hamiltonian. We consider the dominant-decay channel to two pions, so that one of the Kaon states with definite lifetime becomes stable. This new approach ...
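The Lindblad-type evolution referred to above has the general form dρ/dt = -i[H,ρ] + LρL† - ½{L†L,ρ}; a sketch for a generic decaying two-level system (not the Kaon system itself; the Hamiltonian and decay rate are illustrative) shows the probabilistic consistency at stake, namely that the trace of the density matrix is preserved:

```python
import numpy as np

def lindblad_step(rho, H, L, dt):
    """One Euler step of d(rho)/dt = -i[H,rho] + L rho L+ - 1/2 {L+L, rho}."""
    comm = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * (comm + diss)

gamma = 0.5
H = np.diag([0.0, 1.0]).astype(complex)                    # two-level Hamiltonian
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], complex)   # decay |1> -> |0>
rho = np.array([[0, 0], [0, 1]], complex)                  # start in the excited state

dt = 1e-3
for _ in range(2000):                                      # evolve to t = 2
    rho = lindblad_step(rho, H, L, dt)
```

The excited-state population decays as exp(-gamma*t) while Tr(rho) stays equal to 1, which is exactly the probability conservation a non-normal effective Hamiltonian alone cannot guarantee.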
Self-consistent study of space-charge-dominated beams in a misaligned transport system
International Nuclear Information System (INIS)
Sing Babu, P.; Goswami, A.; Pandit, V.S.
2013-01-01
A self-consistent particle-in-cell (PIC) simulation method is developed to investigate the dynamics of space-charge-dominated beams through a misaligned solenoid-based transport system. The evolution of the beam centroid, beam envelope, and emittance is studied as a function of the misalignment parameters for various types of beam distributions. Simulations performed for proton beams of up to 40 mA indicate that the centroid oscillations induced by the displacement and rotational misalignments of the solenoids do not depend on the beam distribution. It is shown that the beam envelope around the centroid is independent of the centroid motion for small centroid oscillations. In addition, we have estimated the beam loss during transport caused by the misalignment for various beam distributions.
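The centroid dynamics described above can be caricatured with a one-line model: in a hard-edge focusing channel of strength k, a solenoid displaced by d drives the centroid according to x'' = -k²(x - d), so the centroid oscillates about the solenoid axis with an amplitude set by the misalignment. A sketch (all parameters illustrative, not from the PIC simulations):

```python
# Simplified centroid model in a misaligned focusing channel:
# x'' = -k**2 * (x - d), integrated with the Euler-Cromer scheme.
k, d = 2.0, 1e-3      # focusing strength (1/m), 1 mm transverse misalignment
ds = 1e-4             # integration step along the beamline (m)
x, xp = 0.0, 0.0      # centroid enters on the reference axis
xs = []
for _ in range(int(5.0 / ds)):   # 5 m of transport
    xp += -k**2 * (x - d) * ds
    x += xp * ds
    xs.append(x)
```

Starting on axis, the centroid swings between 0 and 2d; the independence of this oscillation from the beam distribution is the point made by the full PIC study.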
Study of impurity effects on CFETR steady-state scenario by self-consistent integrated modeling
Shi, Nan; Chan, Vincent S.; Jian, Xiang; Li, Guoqiang; Chen, Jiale; Gao, Xiang; Shi, Shengyu; Kong, Defeng; Liu, Xiaoju; Mao, Shifeng; Xu, Guoliang
2017-12-01
Impurity effects on the fusion performance of the China Fusion Engineering Test Reactor (CFETR) due to extrinsic seeding are investigated. An integrated 1.5D modeling workflow evolves the plasma equilibrium and all transport channels to steady state. The One Modeling Framework for Integrated Tasks (OMFIT) is used to couple the transport solver, MHD equilibrium solver, and source and sink calculations. A self-consistent impurity profile constructed using a steady-state background plasma, which satisfies quasi-neutrality and true steady state, is presented for the first time. Studies are performed based on an optimized fully non-inductive scenario with varying concentrations of argon (Ar) seeding. It is found that fusion performance improves before dropping off with increasing Z_eff, while the confinement remains at a high level. Further analysis of transport for these plasmas shows that low-k ion temperature gradient modes dominate the turbulence. The decrease in linear growth rate and resultant fluxes of all channels with increasing Z_eff can be traced to the impurity profile change by transport. The improvement in confinement levels off at higher Z_eff. Over the regime of study there is a competition between the suppressed transport and increasing radiation that leads to a peak in the fusion performance at Z_eff ~ 2.78 for CFETR. Extrinsic impurity seeding to control the divertor heat load will need to be optimized around this value for best fusion performance.
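The effective charge Z_eff used above is defined, with the electron density fixed by quasi-neutrality, as Z_eff = Σᵢ nᵢZᵢ² / Σᵢ nᵢZᵢ; a small sketch with illustrative densities (not CFETR scenario values):

```python
def z_eff(species):
    """Effective plasma charge. species: list of (density, charge) pairs.
    Quasi-neutrality gives the electron density n_e = sum(n_i * Z_i)."""
    n_e = sum(n * z for n, z in species)
    return sum(n * z**2 for n, z in species) / n_e

# deuterium plasma (Z = 1) with a trace of fully stripped argon (Z = 18)
plasma = [(1.0e20, 1), (1.0e17, 18)]
# a 0.1% argon fraction already raises Z_eff noticeably above 1
```

Because Z_eff weights each species by Z², even small seeded-impurity fractions shift it substantially, which is why the optimum around Z_eff ~ 2.78 corresponds to a modest argon concentration.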
Deconvolution of experimental data of aggregates using self-consistent polycrystal models
International Nuclear Information System (INIS)
Tome, C.N.; Christodoulou, N.; Holt, R.; Woo, C.H.; Lebensohn, R.A.; Turner, P.A.
1994-01-01
We present in this work an overview of self-consistent polycrystal models, together with a comprehensive body of work where those models are used to characterize the response of zirconium alloy aggregates under several deformation regimes. In particular, we address here: evolution of internal stresses associated with heat treatments (thermo-elastic regime) and small deformations (elasto-plastic regime); dimensional changes induced by creep and growth during neutron irradiation (visco-elastic regime); texture development associated with forming operations (visco-plastic regime). In each case we emphasize the effect of texture and internal stresses in the observed response of the aggregate, and from the comparison of the predictions with experimental evidence we determine the single crystal properties from the macroscopic response of the polycrystal. The latter approach is particularly useful in the case of zirconium alloys, a material for which it is not possible to grow single crystals and thus directly measure their single crystal properties. Specifically, we infer information concerning: the stress-free lattice parameters and thermal coefficients of the hexagonal crystals; the irradiation creep compliances and growth coefficients; the crystallographic deformation modes and their associated critical stresses. (author)
Validity and Internal Consistency of the New Knee Society Knee Scoring System.
Culliton, Sharon E; Bryant, Dianne M; MacDonald, Steven J; Hibbert, Kathryn M; Chesworth, Bert M
2018-01-01
In 2012, a new Knee Society Knee Scoring System (KSS) was developed and validated to address the needs for a scoring system that better encompasses the expectations, satisfaction, and physical involvement of a younger, more active population of patients undergoing TKA. Revalidating this tool in a separate population by individuals other than the developers of the scoring system seems important, because such replication would tend to confirm the generalizability of this tool. The purposes of this study were (1) to validate the KSS using a separate sample of patients undergoing primary TKA; and (2) to evaluate the internal consistency of the KSS. Intervention and control groups from a randomized controlled trial with no between-group differences were pooled. Preoperative and postoperative (6 weeks and 1 year) data were used. Patients with osteoarthritis undergoing primary TKA completed the patient-reported component of the KSS, Knee Injury and Osteoarthritis Outcome Score (KOOS), SF-12, two independent questions about expectations of surgery, and the Patient Acceptable Symptom State (PASS) single-question outcome. This study included 345 patients with 221 (64%) women, an average (SD) age of 64 (8.6) years, a mean (SD) body mass index of 32.9 (7.5) kg/m², and 225 (68%) having their first primary TKA. Loss to followup in the control group was 18% and loss to followup in the intervention group was 13%. We quantified cross-sectional (preoperative scores) and longitudinal validity (pre- to postoperative change scores) by evaluating associations between the KSS and KOOS subscales using Spearman's correlation coefficient. Preoperative known-group validity of the KSS symptoms and functional activity score was evaluated with a one-way analysis of variance across three levels of physical health status using the SF-12 Physical Component Score. Known-group validity of the KSS expectation score was evaluated with an unpaired t-test by comparing means across known expectation groups.
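Internal consistency of a multi-item score such as the KSS is conventionally quantified with Cronbach's alpha; a minimal implementation (the data layout is assumed, not taken from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Alpha approaches 1 when items move together across respondents and falls toward 0 when they are unrelated; values around 0.7-0.9 are the range usually reported as acceptable for patient-reported outcome subscales.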
Multi-component Self-Consistent Nuclear Energy System: On proliferation resistance aspect
International Nuclear Information System (INIS)
Shmelev, A.; Saito, M.; Artisyuk, V.
2000-01-01
The Self-Consistent Nuclear Energy System (SCNES), which simultaneously meets four requirements (energy production, fuel production, burning of radionuclides, and safety), is targeted at harmonizing nuclear energy technology with the human environment. The main bulk of SCNES studies focuses on the potential of the fast reactor (FR) to generate the neutron excess needed to maintain a suitable neutron balance. Proliferation resistance was implicitly anticipated in a fuel cycle with co-processing of Pu, minor actinides (MA), and some relatively short-lived fission products (FP). In contrast to such a mono-component system, the present paper advocates the advantage of incorporating accelerator- and fusion-driven neutron sources, which could drastically improve the characteristics of nuclear waste incineration. Importantly, they could help in creating advanced Np- and Pa-containing fuels with double protection against uncontrolled proliferation. The first level of protection concerns the possibility of approaching a long-life core (LLC) in fission reactors. Extending the core lifetime to the reactor lifetime is beneficial from the proliferation-resistance viewpoint, since an LLC would not necessarily require fuel management at the energy-producing site, with the potential advantage of being moved to a vendor site for spent-fuel refabrication. The second level is provided by the presence of substantial amounts of 238Pu and 232U in these fuels, which makes the fissile nuclides in them isotopically protected. All this reveals an important advantage of a multi-component SCNES, which could attract developing countries without an elaborated technological infrastructure. (author)
Hazard-consistent ground motions generated with a stochastic fault-rupture model
Energy Technology Data Exchange (ETDEWEB)
Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-12-15
Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s² peak acceleration, which corresponds to a 10⁻⁴ to 10⁻⁵ annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground-motion acceleration generated for 1000–1100 cm/s² range from 1.5 to 3.0, where the deviation is evaluated with peak ground-motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to
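The accept/reject structure of the Monte Carlo procedure above can be sketched in a few lines. Everything below is a toy stand-in: the power-law/lognormal `simulate_pga` form, its constants, and the stress-drop range are assumptions replacing the full stochastic fault-rupture simulation:

```python
import random

def simulate_pga(stress_drop):
    """Toy stand-in for a stochastic fault-rupture run: peak ground
    acceleration (cm/s^2) grows with the sampled stress drop, with
    lognormal scatter. Functional form and constants are assumed."""
    return 300.0 * stress_drop ** 0.8 * random.lognormvariate(0.0, 0.3)

def generate_hazard_consistent(n_motions, lo=700.0, hi=1100.0, seed=1):
    """Repeat Monte Carlo trials with stochastic source parameters,
    keeping only motions whose peak acceleration lands in the target
    band (the hazard-consistent selection step)."""
    random.seed(seed)
    accepted, trials = [], 0
    while len(accepted) < n_motions:
        trials += 1
        stress_drop = random.uniform(0.5, 3.0)  # stochastic source parameter
        pga = simulate_pga(stress_drop)
        if lo <= pga <= hi:
            accepted.append((stress_drop, pga))
    return accepted, trials

motions, trials = generate_hazard_consistent(20)
print(len(motions), trials)
```

As in the paper's 110,000-trials-for-200-motions figure, the number of trials greatly exceeds the number of accepted motions because only a small fraction of sampled sources reach the target band.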
Dental students consistency in applying the ICDAS system within paediatric dentistry.
Foley, J I
2012-12-01
To examine dental students' consistency in utilising the International Caries Detection and Assessment System (ICDAS) one and three months after training. A prospective study. All clinical dental students (Year Two: BDS2; Year Three: BDS3; Year Four: BDS4), as part of their education in Paediatric Dentistry at Aberdeen Dental School (n = 56), received baseline training by two "gold-standard" examiners and were advised to complete the 90-minute ICDAS e-learning program. Study One: One month later, the occlusal surfaces of 40 extracted primary and permanent molar teeth were examined and assigned both a caries (0-6 scale) and a restorative code (0-9 scale). Study Two: The same teeth were examined three months later. Kappa statistics were used to determine inter- and intra-examiner reliability at baseline and after three months. In total, 31 students (BDS2: n = 9; BDS3: n = 8; BDS4: n = 14) completed both examinations. The inter-examiner reliability kappa scores for restoration codes for Study One and Study Two were: BDS2: 0.47 and 0.38; BDS3: 0.61 and 0.52; and BDS4: 0.56 and 0.52. The caries scores for the two studies were: BDS2: 0.31 and 0.20; BDS3: 0.45 and 0.32; and BDS4: 0.35 and 0.34. The intra-examiner reliability ranges for restoration codes were: BDS2: 0.20 to 0.55; BDS3: 0.34 to 0.72; and BDS4: 0.28 to 0.80. The intra-examiner reliability ranges for caries codes were: BDS2: 0.35 to 0.62; BDS3: 0.22 to 0.53; and BDS4: 0.22 to 0.65. The consistency of ICDAS codes varied between students and also between year groups. In general, consistency was greater for restoration codes.
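The kappa statistic used above corrects observed rater agreement for chance agreement. A minimal sketch of Cohen's kappa; the ten ICDAS codes below are hypothetical, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected) / (1 - expected), where
    'expected' is the chance agreement implied by each rater's marginal
    category frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical ICDAS caries codes (0-6 scale) for 10 teeth, scored twice
exam1 = [0, 2, 3, 1, 0, 5, 2, 4, 1, 0]
exam2 = [0, 2, 2, 1, 0, 5, 3, 4, 1, 1]
print(round(cohens_kappa(exam1, exam2), 2))
```

Values in the 0.2-0.6 range, as reported for the students, conventionally indicate fair-to-moderate agreement.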
Self-consistent evolution models for slow CMEs up to 1 AU
Poedts, S.; Pomoell, J.; Zuccarello, F. P.
2016-02-01
Our 2.5D (axisymmetric) self-consistent numerical magnetohydrodynamic (MHD) models for the onset of CMEs under solar-minimum conditions, and for their interaction with coronal streamers and subsequent evolution up to 1 AU, are presented and discussed. The CMEs are initiated by magnetic flux emergence/cancellation and/or by shearing the magnetic footpoints of a magnetic arcade that is positioned above or below the equatorial plane and embedded in a larger helmet streamer. The overlying streamer magnetic field then deflects the CMEs towards the equator, and the deflection path depends on the driving velocity. The core of the CME, created during the onset process, contains a magnetic flux rope, and the synthetic white-light images often show the typical three-part CME structure. The resulting CMEs propagate only slightly faster than the background solar wind, but this small excess speed is high enough to create a fast MHD shock wave from a distance of 0.25 AU onwards. At 1 AU, the plasma shows the typical characteristics of a magnetic cloud, and the simulated data are in good agreement with ACE observations.
International Nuclear Information System (INIS)
Zaghloul, Mofreh R.
2003-01-01
Flibe (2LiF-BeF2) is a molten salt that has been chosen as the coolant and breeding material in many design studies of the inertial confinement fusion (ICF) chamber. Flibe plasmas are to be generated in the ICF chamber over a wide range of temperatures and densities. These plasmas are more complex than the plasma of any single chemical species. Nevertheless, the composition and thermodynamic properties of the resulting flibe plasmas are needed for gas-dynamics calculations and the determination of other design parameters in the ICF chamber. In this paper, a simple consistent model for determining the detailed plasma composition and thermodynamic functions of high-temperature, fully dissociated and partially ionized flibe gas is presented and used to calculate different thermodynamic properties of interest to fusion applications. The computed properties include the average ionization state, kinetic pressure, internal energy, specific heats, and adiabatic exponent, as well as the sound speed. The presented results are computed under the assumptions of local thermodynamic equilibrium (LTE) and electro-neutrality. A criterion for the validity of the LTE assumption is presented and applied to the computed results. Other attempts in the literature are assessed, with their implied inaccuracies pointed out and discussed.
MAGY: Time-dependent, multifrequency, self-consistent code for modeling electron beam devices
International Nuclear Information System (INIS)
Botton, M.; Antonsen, T.M.; Levush, B.
1997-01-01
A new MAGY code is being developed for three-dimensional modeling of electron beam devices. The code includes a time-dependent, multifrequency description of the electromagnetic fields and a self-consistent analysis of the electrons. The equations of motion are solved with the electromagnetic fields as driving forces, and the resulting trajectories are used as current sources for the fields. The calculations of the electromagnetic fields are based on the waveguide modal representation, which allows the solution of a relatively small number of coupled one-dimensional partial differential equations for the amplitudes of the modes, instead of the full solution of Maxwell's equations. Moreover, the basic time scale for updating the electromagnetic fields is the cavity fill time and not the high frequency of the fields. In MAGY, the coupling among the various modes is determined by the waveguide non-uniformity, the finite conductivity of the walls, and the sources due to the electron beam. The equations of motion of the electrons are solved assuming that all the electrons traverse the cavity in less than the cavity fill time. Therefore, at each time step, a set of trajectories is calculated with the high-frequency and other external fields as the driving forces. The code includes a variety of diagnostics for both electromagnetic fields and particle trajectories. It is simple to operate and requires modest computing resources, and is thus expected to serve as a design tool. copyright 1997 American Institute of Physics
Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A; Huang, Yonggang; Zhang, Yihui
2016-05-01
Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skins. The underlying relations between the J-shaped stress-strain curves and their microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite-deformation constitutive relation of the building block (i.e., the horseshoe microstructure) with analyses of equilibrium and deformation compatibility in the periodic lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiment. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relatively high stretchability, given the same porosity and arc angle of the horseshoe microstructure. Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for
Self-consistent spectral function for non-degenerate Coulomb systems and analytic scaling behaviour
International Nuclear Information System (INIS)
Fortmann, Carsten
2008-01-01
Novel results for the self-consistent single-particle spectral function and self-energy are presented for non-degenerate one-component Coulomb systems at various densities and temperatures. The GW(0) method for the dynamical self-energy is used to include many-particle correlations beyond the quasi-particle approximation. The self-energy is analysed over a broad range of densities and temperatures (n = 10¹⁷-10²⁷ cm⁻³, T = 10²-10⁴ eV/k_B). The spectral function shows a systematic behaviour, which is determined by collective plasma modes at small wavenumbers and converges towards a quasi-particle resonance at higher wavenumbers. In the low-density limit, the numerical results comply with an analytic scaling law that is presented for the first time. It predicts a power-law behaviour of the imaginary part of the self-energy, Im Σ ∼ −n^(1/4). This resolves a long-standing problem of the quasi-particle approximation, which yields a finite self-energy at vanishing density.
DEFF Research Database (Denmark)
Cachorro, Irene Albacete; Daraban, Iulia Maria; Lainé, Guillaume
2013-01-01
In this paper a system consisting of an SOFC system for cogeneration of heat and power and a vapour-absorption heat pump for cooling and freezing is assessed and its performance evaluated. The food industry, where demand includes four forms of energy simultaneously, is a relevant application for such a system.... The heat pump is a heat-driven system running on heat recovered by a heat exchanger from the exhaust gases of the SOFC. The working-fluid pair is NH3-H2O, driven through two evaporators operating at two different pressures. Thus, the heat pump operates at three pressure levels... with natural gas. The natural gas is first converted to a mixture of H2 and CO, which feeds the anode after a preheating step. The cathode is supplied with preheated air and gives, as output, electrical energy. The anode output is the exhaust gas, which represents the thermal energy reservoir for heating...
Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J
2008-02-01
Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial
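A constant wear factor applied per loading block, as in the damage model above, is the depth form of Archard's wear law. A minimal fixed-step surface-evolution sketch follows; the wear factor, contact pressure, and sliding distance are illustrative numbers (not from the paper), and a full model would also recompute contact pressure from the evolving geometry and add creep:

```python
def archard_wear_depth(k, pressure, sliding_distance):
    """Archard's law in depth form: h = k * p * s, with k a wear factor
    expressed as depth per unit pressure and sliding distance."""
    return k * pressure * sliding_distance

def simulate_damage(n_updates, cycles_per_update, k, pressure, slide_per_cycle):
    """Fixed-step surface evolution: accumulate wear depth over blocks
    of gait cycles with the geometry held fixed inside each block."""
    depth = 0.0
    for _ in range(n_updates):
        depth += archard_wear_depth(k, pressure,
                                    cycles_per_update * slide_per_cycle)
    return depth

# Hypothetical inputs: 5 million cycles split into 10 update blocks,
# 10 MPa contact pressure, 20 mm sliding per cycle, illustrative k
total_depth = simulate_damage(10, 500_000, 1e-10, 10.0, 20.0)
print(round(total_depth, 3))
```

With a fixed step the block size trades accuracy for cost, which is why the paper finds a variable step necessary only in the initial cycles, where creep changes the geometry fastest.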
A self-consistent model for the electronic structure of the U-center in alkali halides
International Nuclear Information System (INIS)
Koiller, B.; Brandi, H.S.
1978-01-01
A simple one-orbital-per-site model Hamiltonian for the U center in alkali halides with rock-salt structure, where correlation effects are introduced via an Anderson-type Hamiltonian, is presented. The cluster-Bethe-lattice method is used to determine the local density of states, yielding both localized and extended states. A one-electron approximation is assumed and the problem is solved self-consistently in the Hartree-Fock scheme. The optical excitation energy is in fair agreement with experiment. The present approach is compared with other models previously used to describe this center, and the results indicate that it adequately incorporates the relevant features of the system, indicating the possibility of its application to other physical situations
Consistency analysis for the performance of planar detector systems used in advanced radiotherapy
Directory of Open Access Journals (Sweden)
Kanan Jassal
2015-03-01
Purpose: To evaluate the performance, linked to consistency, of a-Si EPID and ion-chamber array detectors for dose verification in advanced radiotherapy. Methods: Planar measurements were made for 250 patients using an ion-chamber array and an a-Si EPID. For pre-treatment verification, the plans were generated on the phantom for re-calculation of doses. The γ-evaluation method with the criteria dose difference (DD) ≤ 3% and distance-to-agreement (DTA) ≤ 3 mm was used for the comparison of measurements. Also, the central-axis (CAX) doses were measured using a 0.125 cc ion chamber and were compared with the central chamber of the array and the central-pixel correlated dose value from the EPID image. Two types of statistical approaches were applied for the analysis. Conventional statistics used analysis of variance (ANOVA) and unpaired t-tests to evaluate the performance of the detectors, and statistical process control (SPC) was utilized to study the statistical variation of the measured data. Control charts (CC) based on the average, standard deviation, and exponentially weighted moving averages (EWMA) were prepared. The capability index (Cpm) was determined as an indicator of the performance consistency of the two systems. Results: Array and EPID measurements had average gamma pass rates of 99.9% ± 0.15% and 98.9% ± 1.06%, respectively. For the point doses, the 0.125 cc chamber results were within 2.1% ± 0.5% of the central chamber of the array. Similarly, CAX doses from the EPID and chamber matched within 1.5% ± 0.3%. The control charts showed that both detectors were performing optimally and all data points were within ± 5%. EWMA charts revealed that both detectors had a slow drift about the process mean, but it was found to be well within ± 3%. Further, higher Cpm values for the EPID demonstrate its higher efficiency for radiotherapy techniques. Conclusion: The performance of both detectors was seen to be of high quality irrespective of the
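The DD ≤ 3% / DTA ≤ 3 mm criteria above combine into the gamma index, where a point passes if its minimum combined dose/distance metric is ≤ 1. A minimal 1-D sketch; the dose profiles and grid spacing are hypothetical, and clinical gamma evaluation is done on 2-D planes with interpolation:

```python
def gamma_index(ref, meas, spacing, dd=0.03, dta=3.0):
    """1-D global gamma: for each reference point, minimize over measured
    points sqrt((dose diff / (dd * max dose))^2 + (distance / dta)^2).
    spacing and dta in mm; dd is a fraction of the maximum reference dose."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dist = abs(i - j) * spacing
            dose_term = (dm - dr) / (dd * d_max)
            best = min(best, (dose_term ** 2 + (dist / dta) ** 2) ** 0.5)
        gammas.append(best)
    return gammas

# Hypothetical 1-D dose profiles (arbitrary units) on a 1 mm grid
reference = [10, 50, 100, 98, 100, 50, 10]
measured = [10, 52, 99, 100, 98, 51, 10]
passed = [g <= 1.0 for g in gamma_index(reference, measured, spacing=1.0)]
print(sum(passed) / len(passed))  # fraction of points passing
```

The pass rates reported above (99.9% and 98.9%) are exactly this fraction computed over all planar measurement points.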
2012-06-13
generating, sizing, quantifying, and sampling aerosols of inert materials also hold true for bioaerosols, i.e., for aerosolizing materials of... characterization, traditional bioaerosol generation and collection techniques can be employed to achieve consistent and reproducible low-dose exposures... generate and aerosolize consistent daily low aerosol concentrations and resultant low inhalation doses to rabbits. The pilot feasibility characterization
DEFF Research Database (Denmark)
Sogachev, Andrey; Kelly, Mark C.; Leclerc, Monique Y.
2012-01-01
A self-consistent two-equation closure treating buoyancy and plant drag effects has been developed, through consideration of the behaviour of the supplementary equation for the length-scale-determining variable in homogeneous turbulent flow. Being consistent with the canonical flow regimes of gri...
Bjerklie, D. M.
2014-12-01
As part of a U. S. Geological Survey effort to (1) estimate river discharge in ungaged basins, (2) understand runoff quantity and timing for watersheds between gaging stations, and (3) estimate potential future streamflow, a national-scale precipitation-runoff model is in development. The effort uses the USGS Precipitation Runoff Modeling System (PRMS) model. The model development strategy includes methods to assign hydrologic routing coefficients a priori from national-scale GIS databases. Once developed, the model can serve as an initial baseline for more detailed and locally/regionally calibrated models designed for specific projects and purposes. One of the key hydrologic routing coefficients is the groundwater coefficient (gw_coef). This study estimates the gw_coef from continental US GIS data, including geology, drainage density, aquifer type, vegetation type, and baseflow index information. The gw_coef is applied in regional PRMS models and is estimated using two methods. The first method uses a statistical model to predict the gw_coef from weighted average values of surficial geologic materials, dominant aquifer type, baseflow index, vegetation type, and the drainage density. The second method computes the gw_coef directly from the physical conditions in the watershed, including the percentage of geologic material and the drainage density. The two methods are compared against the gw_coef derived from streamflow records, and tested for selected rivers in different regions of the country. To address the often weak correlation between geology and baseflow, the existence of groundwater sinks, and the complexities of groundwater flow paths, the spatial characteristics of the gw_coef prediction error were evaluated, and a correction factor developed from the spatial error distribution. This provides a consistent and improved method to estimate the gw_coef for regional PRMS models that is derived from available GIS data and physical information for watersheds.
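A groundwater routing coefficient of this kind typically acts as the outflow constant of a linear reservoir, so that baseflow recedes geometrically between recharge events. A minimal sketch with hypothetical storage values and units (this is the generic linear-reservoir idea, not the PRMS implementation itself):

```python
def baseflow_recession(storage0, gw_coef, n_days):
    """Linear groundwater reservoir: each day the outflow is
    q = gw_coef * storage, and storage is depleted by q.
    Units are illustrative (mm of water over the watershed)."""
    storage, flows = storage0, []
    for _ in range(n_days):
        q = gw_coef * storage
        storage -= q
        flows.append(q)
    return flows

# Hypothetical recession: 100 mm of storage, gw_coef = 0.05 per day
flows = baseflow_recession(100.0, 0.05, 30)
# Outflow decays geometrically: q_t = gw_coef * S0 * (1 - gw_coef)**t
print(round(flows[0], 2), round(flows[-1], 2))
```

This geometric decay is why the coefficient can be back-calculated from the recession limbs of streamflow records, the benchmark the two GIS-based methods are compared against.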
Self-consistent tight-binding model of B and N doping in graphene
DEFF Research Database (Denmark)
Pedersen, Thomas Garm; Pedersen, Jesper Goor
2013-01-01
Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived, taking broken electron-hole symmetry into account, and validated by comparison to numerical diagonalization.... The impurity potential depends sensitively on the impurity occupancy, leading to a self-consistency requirement. We solve this problem using the impurity Green's function and determine the self-consistent local density of states at the impurity site and, thereby, identify acceptor and donor energy resonances.
System Convergence in Transport Modelling
DEFF Research Database (Denmark)
Rich, Jeppe; Nielsen, Otto Anker; Cantarella, Guilio E.
2010-01-01
A fundamental premise of most applied transport models is the existence and uniqueness of an equilibrium solution that balances demand x(t) and supply t(x). The demand consists of the people that travel in the transport system on the defined network, whereas the supply consists of the resulting level-of-service attributes (e.g., travel time and cost) offered to travellers. An important source of complexity is congestion, which causes increasing demand to affect travel time in a non-linear way. Transport models most often involve separate models for traffic assignment and demand modelling...
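The demand-supply balance described above is a fixed-point problem, commonly solved with the method of successive averages (MSA). In the sketch below, the BPR-style congestion function, the demand form, and all constants are illustrative assumptions, not a particular transport model:

```python
def travel_time(flow, free_flow=10.0, capacity=100.0):
    """Supply side t(x): congestion raises travel time non-linearly
    with flow (BPR-style form, assumed)."""
    return free_flow * (1.0 + 0.15 * (flow / capacity) ** 4)

def trips(time, scale=120.0, elasticity=-0.5):
    """Demand side x(t): trips fall as travel time rises (assumed form)."""
    return scale * (time / 10.0) ** elasticity

def equilibrium(x0=50.0, n_iter=2000):
    """Method of successive averages: blend the demand response into the
    current flow with step 1/k so the demand-supply loop converges
    instead of oscillating."""
    x = x0
    for k in range(1, n_iter + 1):
        x += (trips(travel_time(x)) - x) / k
    return x

x_star = equilibrium()
print(round(x_star, 1))
```

At the converged flow, the demand evaluated at the resulting travel time reproduces the flow itself, which is the equilibrium condition the abstract refers to.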
Zhang, Bo
2010-01-01
This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…
Compositional Modelling of Stochastic Hybrid Systems
Strubbe, S.N.
2005-01-01
In this thesis we present a modelling framework for compositional modelling of stochastic hybrid systems. Hybrid systems consist of a combination of continuous and discrete dynamics. The state space of a hybrid system is hybrid in the sense that it consists of a continuous component and a discrete
DEFF Research Database (Denmark)
Peña, N.A.; Anton, A.; Fantke, Peter
2016-01-01
Quantifying chemical emissions to the environment over the life cycle of a product or service in the life cycle inventory (LCI) phase is typically based on generic assumptions. For LCI applied to agricultural systems, the estimation of pesticide emissions is often based on standard..., and it will influence the outcomes of the impact profile. The pesticide emission model PestLCI 2.0 is the most advanced inventory model currently available for LCA, intended to provide an estimate of the fractions of organic pesticides emitted to the environment. We use this model as a starting point for quantifying emission... estimate metal-specific pesticide emission fractions, addressing the issue of inorganic pesticides for inventory analysis in LCA of agricultural systems....
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
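The selection logic described above pairs an information criterion with a consistency filter: among candidate structures that reproduce the desired hydrological behaviour, pick the one with the best complexity-adjusted fit. A minimal sketch; the candidate names, parameter counts, residuals, and the consistency flag are all hypothetical:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for least-squares fits:
    AIC = n * ln(rss / n) + 2k, with k the number of parameters."""
    return n * math.log(rss / n) + 2 * k

def select_model(candidates, n):
    """Lowest AIC among candidates passing a (toy) hydrologic-consistency
    test, e.g. preserving the long-term runoff ratio."""
    feasible = [c for c in candidates if c["consistent"]]
    return min(feasible, key=lambda c: aic(c["rss"], n, c["k"]))

# Hypothetical candidate structures fitted to one year of daily data
candidates = [
    {"name": "2-store", "k": 4, "rss": 130.0, "consistent": True},
    {"name": "3-store", "k": 7, "rss": 120.0, "consistent": True},
    {"name": "4-store", "k": 10, "rss": 115.0, "consistent": False},
]
best = select_model(candidates, n=365)
print(best["name"])
```

Note how the most flexible structure is excluded despite its lowest residual: consistency constrains the search before the likelihood-based criterion ranks what remains.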
Mathevet, Thibault; Kumar, Rohini; Gupta, Hoshin; Vaze, Jai; Andréassian, Vazken
2015-04-01
This poster introduces the aims of the Large Sample Hydrology working group (LSH-WG) of the new IAHS Panta Rhei decade (2013-2022). The aim of the LSH-WG is to promote large sample hydrology, as discussed by Gupta et al. (2014) and to invite the community to collaborate on building and sharing a comprehensive and representative world-wide sample of watershed datasets. By doing so, LSH will allow the community to work towards 'hydrological consistency' (Martinez and Gupta, 2011) as a basis for hydrologic model development and evaluation, thereby increasing robustness of the model evaluation process. Classical model evaluation metrics based on 'robust statistics' are needed, but clearly not sufficient: multi-criteria assessments based on multiple hydrological signatures can help to better characterize hydrological functioning. Further, large-sample data sets can greatly facilitate: (i) improved understanding through rigorous testing and comparison of competing model hypothesis and structures, (ii) improved robustness of generalizations through statistical analyses that minimize the influence of outliers and case-specific studies, (iii) classification, regionalization and model transfer across a broad diversity of hydrometeorological contexts, and (iv) estimation of predictive uncertainties at a location and across locations (Mathevet et al., 2006; Andréassian et al., 2009; Gupta et al., 2014) References Andréassian, V., Perrin, C., Berthet, L., Le Moine, N., Lerat, J., Loumagne, C., Oudin, L., Mathevet, T., Ramos, M. H., and Valéry, A.: Crash tests for a standardized evaluation of hydrological models, Hydrology and Earth System Sciences, 1757-1764, 2009. Gupta, H. V., Perrin, C., Blöschl, G., Montanari, A., Kumar, R., Clark, M., and Andréassian, V.: Large-sample hydrology: a need to balance depth with breadth, Hydrol. Earth Syst. Sci., 18, 463-477, doi:10.5194/hess-18-463-2014, 2014. Martinez, G. F., and H. V.Gupta (2011), Hydrologic consistency as a basis for
Studying the Consistency between and within the Student Mental Models for Atomic Structure
Zarkadis, Nikolaos; Papageorgiou, George; Stamovlasis, Dimitrios
2017-01-01
Science education research has revealed a number of student mental models for atomic structure, among which, the one based on Bohr's model seems to be the most dominant. The aim of the current study is to investigate the coherence of these models when students apply them for the explanation of a variety of situations. For this purpose, a set of…
Self-consistent field modeling of linear non-ionic micelles
Jodar-Reyes, A.B.; Leermakers, F.A.M.
2006-01-01
A self-consistent field theory is used to predict structural, mechanical, and thermodynamical properties of linear micelles of selected nonionic surfactants of the type CnEm. Upon increase in surfactant concentration the sudden micelle shape transition from spherical to cylindrical (second critical
Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.
2009-01-01
This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…
Plasma Processes: A self-consistent kinetic modeling of a 1-D ...
Indian Academy of Sciences (India)
A self-consistent kinetic treatment is presented here, where the Boltzmann equation is solved for a particle conserving Krook collision operator. The resulting equations have been implemented numerically. The treatment solves for the entire quasineutral column, making no assumptions about mfp/, where mfp is the ...
Self-consistency condition and high-density virial theorem in relativistic many-particle systems
International Nuclear Information System (INIS)
Kalman, G.; Canuto, V.; Datta, B.
1976-01-01
In order for the thermodynamic and kinetic definitions of the chemical potential and the pressure to lead to identical results a nontrivial self-consistency criterion has to be satisfied. This, in turn, leads to a virial-like theorem in the high-density limit
Directory of Open Access Journals (Sweden)
Ying Jiang
2017-02-01
This paper presents a theoretical formalism for describing systems of semiflexible polymers, which can have density variations due to finite compressibility and exhibit an isotropic-nematic transition. The molecular architecture of the semiflexible polymers is described by a continuum wormlike-chain model. The non-bonded interactions are described through a functional of two collective variables, the local density and the local segmental orientation tensor. In particular, the functional depends quadratically on local density variations and includes a Maier–Saupe-type term to deal with the orientational ordering. The specified density-dependence stems from a free energy expansion, where the free energy of an isotropic and homogeneous homopolymer melt at some fixed density serves as a reference state. Using this framework, a self-consistent field theory is developed, which produces a Helmholtz free energy that can be used for the calculation of the thermodynamics of the system. The thermodynamic properties are analysed as functions of the compressibility of the model, for values of the compressibility realizable in mesoscopic simulations with soft interactions and in actual polymeric materials.
Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport
Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.
2018-01-01
Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code [V. Grandgirard et al., Comp. Phys. Commun. 207, 35 (2016)]. A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime likely relevant for tungsten, the standard expression of the neoclassical impurity flux is shown t...
The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture
Parna Kazemian
2014-01-01
The identification of the climate of a particular place and the analysis of the climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general exam...
Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport
Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.
2018-03-01
Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code (Grandgirard et al 2016 Comput. Phys. Commun. 207 35). A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime that is probably relevant for tungsten, the standard expression for the neoclassical impurity flux is shown to be recovered from gyrokinetics with the employed collision operator. Purely neoclassical simulations of deuterium plasma with trace impurities of helium, carbon and tungsten lead to impurity diffusion coefficients, inward pinch velocities due to density peaking, and thermo-diffusion terms which quantitatively agree with neoclassical predictions and NEO simulations (Belli et al 2012 Plasma Phys. Control. Fusion 54 015015). The thermal screening factor appears to be less than predicted analytically in the Pfirsch-Schlüter regime, which can be detrimental to fusion performance. Finally, self-consistent nonlinear simulations have revealed that the tungsten impurity flux is not the sum of turbulent and neoclassical fluxes computed separately, as is usually assumed. The synergy partly results from the turbulence-driven in-out poloidal asymmetry of tungsten density. This result suggests the need for self-consistent simulations of impurity transport, i.e. including both turbulence and neoclassical physics, in view of quantitative predictions for ITER.
José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido
2016-04-01
Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. Therefore, increasing the current understanding of such exceptional situations is required, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge shall eventually lead us to produce more reliable projections about the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterization of events with return periods exceeding a few decades. This study proposes a new approach that allows studying storms based on a synthetic, but physically consistent database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. However, downscaling the full CESM simulation at such high resolution is infeasible nowadays. Hence, a number of case studies are selected first. This selection is carried out by examining the precipitation averaged in an area encompassing Switzerland in the CESM. Using a hydrological criterion, precipitation is accumulated in several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this data set can be used as input in hydrological models. Thus, quantile mapping is used to remove such biases. For this task
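The bias-correction step this abstract mentions (quantile mapping of simulated precipitation onto the observed distribution) can be sketched with a minimal empirical implementation. This is an illustration of the general technique only, not the authors' code; the gamma-distributed toy data and the 20% bias are hypothetical.

```python
import numpy as np

def quantile_map(sim, obs, values):
    """Empirical quantile mapping: transform `values` drawn from the
    simulated distribution so their quantiles match the observed one."""
    sim_sorted = np.sort(sim)
    obs_sorted = np.sort(obs)
    # Quantile of each value under the simulated distribution
    p = np.searchsorted(sim_sorted, values, side="right") / len(sim_sorted)
    p = np.clip(p, 0.0, 1.0)
    # Map those quantiles onto the observed distribution
    return np.quantile(obs_sorted, p)

# Toy example: simulated precipitation biased 20% high
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=10000)        # "observed" daily precipitation
sim = 1.2 * rng.gamma(2.0, 5.0, size=10000)  # biased model output
corrected = quantile_map(sim, obs, sim)
```

After correction, the distribution of `corrected` closely follows the observed one, which is the property the downscaled events need before they can drive hydrological models.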
Physically-consistent wall boundary conditions for the k-ω turbulence model
DEFF Research Database (Denmark)
Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl
2010-01-01
A model solving Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used as model validation, with k measured directly from all three components...
CONSISTENT USE OF THE KALMAN FILTER IN CHEMICAL TRANSPORT MODELS (CTMS) FOR DEDUCING EMISSIONS
Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...
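The inverse-modeling idea summarized here (deducing emissions from observed concentrations with a Kalman filter) can be sketched with a scalar toy system. The sensitivity H, the noise levels, and the random-walk emission model below are illustrative assumptions, not the report's actual CTM setup.

```python
import numpy as np

# Hypothetical scalar setup: a true emission rate drives observed
# concentrations through a linear CTM sensitivity H.
rng = np.random.default_rng(1)
E_true = 50.0   # true emission rate (arbitrary units)
H = 0.8         # CTM sensitivity: d(concentration)/d(emission)
R = 4.0         # observation-error variance
Q = 0.01        # process-noise variance (random-walk emission)

E_est, P = 10.0, 100.0  # initial emission guess and its variance
for _ in range(200):
    obs = H * E_true + rng.normal(0.0, np.sqrt(R))  # noisy concentration
    P = P + Q                      # predict: random-walk variance growth
    K = P * H / (H * H * P + R)    # Kalman gain
    E_est = E_est + K * (obs - H * E_est)  # correct emission estimate
    P = (1.0 - K * H) * P          # update estimate variance
```

With each assimilated concentration the estimate converges toward the true emission rate, which is the "deducing emissions" mechanism the abstract describes.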
Consistency Analysis and Data Consultation of Gas System of Gas-Electricity Network of Latvia
Zemite, L.; Kutjuns, A.; Bode, I.; Kunickis, M.; Zeltins, N.
2018-02-01
In the present research, the main critical points of gas transmission and storage system of Latvia have been determined to ensure secure and reliable gas supply among the Baltic States to fulfil the core objectives of the EU energy policies. Technical data of critical points of the gas transmission and storage system of Latvia have been collected and analysed with the SWOT method and solutions have been provided to increase the reliability of the regional natural gas system.
System consisting of fluorides and chlorides of sodium, strontium and barium
International Nuclear Information System (INIS)
Bukhalova, G.A.; Yagub'yan, E.S.; Keropyan, V.V.; Mirsoyanova, N.N.
1980-01-01
A simplicial partitioning of the composition prism of the quaternary reciprocal Na, Sr, Ba || F, Cl system is performed. By visual-polythermal and thermographic methods the liquidus surface of four tetrahedral sections is studied, and a low-melting region of the system is revealed. The lowest-melting quaternary point is at 578 deg C. A dendritic scheme of the system's crystallization is constructed. Thermodynamic calculations are performed for reactions occurring at the central points of nonvariant straight lines
Stochastic Modelling of Energy Systems
DEFF Research Database (Denmark)
Andersen, Klaus Kaae
2001-01-01
In this thesis dynamic models of typical components in Danish heating systems are considered. Emphasis is made on describing and evaluating mathematical methods for identification of such models, and on presentation of component models for practical applications. The thesis consists of seven research papers (case studies) together with a summary report. Each case study takes its starting point in typical heating system components, and both the applied mathematical modelling methods and the application aspects are considered. The summary report gives an introduction to the scope... of component models, such as e.g. heat exchanger and valve models, adequate for system simulations. Furthermore, the thesis demonstrates and discusses the advantages and disadvantages of using statistical methods in conjunction with physical knowledge in establishing adequate component models of heating...
Bigagli, Lorenzo; Papeschi, Fabrizio; Nativi, Stefano; Bastin, Lucy; Masó, Joan
2013-04-01
GeoViQua (QUAlity aware VIsualisation for the Global Earth Observation System of Systems) is an FP7 project aiming at complementing the Global Earth Observation System of Systems (GEOSS) with rigorous data quality specifications and quality-aware capabilities, in order to improve reliability in scientific studies and policy decision-making. GeoViQua main scientific and technical objective is to enhance the GEOSS Common Infrastructure (GCI) providing the user community with innovative quality-aware search and visualization tools, which will be integrated in the GEOPortal, as well as made available to other end-user interfaces. To this end, GeoViQua will promote the extension of the current standard metadata for geographic information with accurate and expressive quality indicators. The project will also contribute to the definition of a quality label, the GEOLabel, reflecting scientific relevance, quality, acceptance and societal needs. The concept of Quality Information is very broad. When talking about the quality of a product, this is not limited to geophysical quality but also includes concepts like mission quality (e.g. data coverage with respect to planning). In general, it provides an indication of the overall fitness for use of a specific type of product. Employing and extending several ISO standards such as 19115, 19157 and 19139, a common set of data quality indicators has been selected to be used within the project. The resulting work, in the form of a data model, is expressed in XML Schema Language and encoded in XML. Quality information can be stated both by data producers and by data users, actually resulting in two conceptually distinct data models, the Producer Quality model and the User Quality model (or User Feedback model). A very important issue concerns the association between the quality reports and the affected products that are target of the report. This association is usually achieved by means of a Product Identifier (PID), but actually just
Consistency tests of cosmogonic theories from models of Uranus and Neptune
Podolak, M.; Reynolds, R. T.
1984-01-01
The planetary ratios of ice to rock (I/R) abundances expected in Uranus and Neptune are derived on the basis of several cosmogonic theories. For both Uranus and Neptune, the value of I/R lies between about 1.0 and 3.6. This value is difficult to reconcile with a scenario in which N and C are accreted primarily in the form of N2 and CO. It is consistent with some versions of both giant protoplanet theories and equilibrium accretion theories.
Derivation of a Self-Consistent Auroral Oval Model Using the Auroral Boundary Index
National Research Council Canada - National Science Library
Anderson, Keith
2004-01-01
... current HF communications capabilities. The auroral morphology is a good indicator of the level at which space weather and its near-Earth consequences are occurring, and thus it is important to develop an auroral prediction model...
National Aeronautics and Space Administration — During hypersonic entry into a planetary atmosphere, a spacecraft transitions from free-molecular flow conditions to fully continuum conditions. When modeling and...
DEFF Research Database (Denmark)
Keck, Rolf-Erik; Veldkamp, Dick; Wedel-Heinen, Jens Jakob
This thesis describes the further development and validation of the dynamic meandering wake model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling... as a standalone flow-solver for the velocity and turbulence distribution, and power production in a wind farm. The performance of the standalone implementation is validated against field data, higher-order computational fluid dynamics models, as well as the most common engineering wake models in the wind industry... evolution 4. atmospheric stability effects on wake deficit evolution and meandering. The conducted research is to a large extent based on detailed wake investigations and reference data generated through computational fluid dynamics simulations, where the wind turbine rotor has been represented...
DEFF Research Database (Denmark)
Kock, Anders Bredahl
2016-01-01
as if only these had been included in the model from the outset. In particular, this implies that it is able to discriminate between stationary and nonstationary autoregressions and it thereby constitutes an addition to the set of unit root tests. Next, and important in practice, we show that choosing...... to perform conservative model selection it has power even against shrinking alternatives of this form and compare it to the plain Lasso....
Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.
2009-01-01
Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.
Directory of Open Access Journals (Sweden)
R. P. Gerber
2013-03-01
Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44 as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.
Consistency Between Convection Allowing Model Output and Passive Microwave Satellite Observations
Bytheway, J. L.; Kummerow, C. D.
2018-01-01
Observations from the Global Precipitation Measurement (GPM) core satellite were used along with precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model to assess and interpret differences between observed and modeled storms. Using a feature-based approach, precipitating objects were identified in both the National Centers for Environmental Prediction Stage IV multisensor precipitation product and HRRR forecast at lead times of 1, 2, and 3 h at valid times corresponding to GPM overpasses. Precipitating objects were selected for further study if (a) the observed feature occurred entirely within the swath of the GPM Microwave Imager (GMI) and (b) the HRRR model predicted it at all three forecast lead times. Output from the HRRR model was used to simulate microwave brightness temperatures (Tbs), which were compared to those observed by the GMI. Simulated Tbs were found to have biases at both the warm and cold ends of the distribution, corresponding to the stratiform/anvil and convective areas of the storms, respectively. Several experiments altered both the simulation microphysics and hydrometeor classification in order to evaluate potential shortcomings in the model's representation of precipitating clouds. In general, inconsistencies between observed and simulated brightness temperatures were most improved when transferring snow water content to supercooled liquid hydrometeor classes.
Baldwin, Daniel G.; Coakley, James A., Jr.
1991-01-01
The anisotropy of the radiance field estimated from bidirectional models derived from Nimbus 7 ERB scanner data is compared with the anisotropy observed with the Earth Radiation Budget Experiment (ERBE) scanner aboard the Earth Radiation Budget Satellite (ERBS). The results of averaging over groups of 40 ERBE scanner scan lines for a period of a month revealed significant differences between the modeled and the observed anisotropy for given scene types and Sun-Earth-satellite viewing geometries. By comparing the radiative fluxes derived using the observed anisotropy with those derived assuming isotropic reflection, it is concluded that a reasonable estimate for the maximum error due to the use of incorrect bidirectional models is a bias of about 4 percent for a typical 2.5 deg latitude-longitude monthly mean, and an rms error of 15 percent.
A. Mairesse; H. Goosse; P. Mathiot; H. Wanner; S. Dubinkina
2013-01-01
The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of
International Nuclear Information System (INIS)
Colonna, G.; Pietanza, L.D.; D’Ammando, G.
2012-01-01
Graphical abstract: Self-consistent coupling between radiation, state-to-state kinetics, electron kinetics and fluid dynamics. Highlights: A CR model of a shock wave in hydrogen plasma is presented; all equations are coupled self-consistently; non-equilibrium electron and level distributions are obtained; the results show non-local effects and non-equilibrium radiation. Abstract: A collisional-radiative model for the hydrogen atom, coupled self-consistently with the Boltzmann equation for free electrons, has been applied to model a shock tube. The kinetic model has been completed by considering atom–atom collisions and the vibrational kinetics of the ground state of hydrogen molecules. The atomic level kinetics has also been coupled with a radiative transport equation to determine the effective absorption and emission coefficients and non-local energy transfer.
On the Consistency of Gamma-Ray Burst Spectral Indices with the Synchrotron Shock Model
Preece, R. D.; Briggs, M. S.; Giblin, T. W.; Mallozzi, R. S.; Pendleton, G. N.; Paciesas, W. S.; Band, D. L.
2002-01-01
The current scenario for gamma-ray bursts (GRBs) involves internal shocks for the prompt GRB emission phase and external shocks for the afterglow phase. Assuming optically thin synchrotron emission from isotropically distributed energetic shocked electrons, GRB spectra observed with a low-energy power-law spectral index greater than -2/3 (for photon number spectra proportional to E^alpha) indicate a problem with this model. For spectra that do not violate this condition, additional tests of the shock model can be made by comparing the low- and high-energy spectral indices, on the basis of the model's assertion that synchrotron emission from a single power-law distribution of electrons is responsible for both the low-energy and the high-energy power-law portions of the spectra. We find in most cases that the inferred relationship between the two spectral indices of observed GRB spectra is inconsistent with the constraints from the simple optically thin synchrotron shock emission model. In this sense, the prompt burst phase is different from the afterglow phase, and this difference may be related to anisotropic distributions of particles or to their continual acceleration in shocks during the prompt phase.
Loncke, Justine; Mayer, Axel; Eichelsheim, Veroni I.; Branje, Susan J. T.; Meeus, W.H.J.; Koot, Hans M.; Buysse, Ann; Loeys, Tom
Support is key to healthy family functioning. Using the family social relations model (SRM), it has already been shown that variability in perceived support is mostly attributed to individual perceiver effects. Little is known, however, as to whether those effects are stable or occasion-specific.
Self-consistent semi-analytic models of the first stars
Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.
2018-01-01
We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter halos from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual halos) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.
Energy Technology Data Exchange (ETDEWEB)
Novello, M [Centro Brasileiro de Pesquisas Fisicas, Rua Dr Xavier Sigaud 150, Urca 22290-180 Rio de Janeiro, RJ (Brazil); Barcelos-Neto, J [Instituto de Fisica, Universidade Federal do Rio de Janeiro, RJ (Brazil); Salim, J M [Centro Brasileiro de Pesquisas Fisicas, Rua Dr Xavier Sigaud 150, Urca 22290-180 Rio de Janeiro, RJ (Brazil)
2002-06-07
We use a model where the cosmological term can be related to the chiral gauge anomaly of a possible quantum scenario of the initial evolution of the universe. We show that this term is compatible with the Friedmann behaviour of the present universe.
Self-consistent semi-analytic models of the first stars
Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.
2018-04-01
We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.
A self-consistent model for the Galactic cosmic ray, antiproton and positron spectra
CERN. Geneva
2015-01-01
In this talk I will present the escape model of Galactic cosmic rays. This model explains the measured cosmic ray spectra of individual groups of nuclei from TeV to EeV energies. It predicts an early transition to extragalactic cosmic rays, in agreement with recent Auger data. The escape model also explains the soft neutrino spectrum 1/E^2.5 found by IceCube in concordance with Fermi gamma-ray data. I will show that within the same model one can explain the excess of positrons and antiprotons above 20 GeV found by PAMELA and AMS-02, the discrepancy in the slopes of the spectra of cosmic ray protons and heavier nuclei in the TeV-PeV energy range and the plateau in cosmic ray dipole anisotropy in the 2-50 TeV energy range by adding the effects of a 2 million year old nearby supernova.
Application of a Mass-Consistent Wind Model to Chinook Windstorms
1988-06-01
Meteor., 6, 837-344. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1980: A practical method for estimating wind characteristics at... Project 8349, Menlo Park, CA, 94025. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1982: A diagnostic model for estimating winds
Consistent stress-strain ductile fracture model as applied to two grades of beryllium
International Nuclear Information System (INIS)
Priddy, T.G.; Benzley, S.E.; Ford, L.M.
1980-01-01
Published yield and ultimate biaxial stress and strain data for two grades of beryllium are correlated with a more complete method of characterizing macroscopic strain at fracture initiation in ductile materials. Results are compared with those obtained from an exponential, mean stress dependent, model. Simple statistical methods are employed to illustrate the degree of correlation for each method with the experimental data
How to consistently make your product, technology or system more environmentally-sustainable?
DEFF Research Database (Denmark)
Laurent, Alexis; Cosme, Nuno Miguel Dias; Molin, Christine
...-hand with low environmental impacts, low-carbon emissions, low environmental footprints or more sustainability as a whole. To enable a scientifically-sound and consistent documentation of such sustainable development, quantitative assessments of all environmental impacts are needed. Life cycle assessment (LCA...) is recognized as the most holistic tool to address that need. LCA has two main strengths: (1) the ability to quantify all relevant environmental impacts – not just climate change, but also metal depletion, water use, toxicity exerted by pollutants on ecosystems and human health, etc.; and (2) making... -impact materials, identifying environmental hotspots (parts of the life cycle with largest environmental impacts), making prospective simulations through scenario analyses, comparing and selecting the most environmentally-friendly product/technology alternatives, reporting on the environmental performances...
Kou, Jisheng
2017-12-09
A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, which has recently emerged as an attractive alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy used in the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and from it a transport equation for the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of multiple fluids. A relation between the pressure gradient and the chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and treat the coupling between molar densities and velocity through careful physical observations combined with mathematical rigor. We prove that the proposed numerical scheme preserves the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
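The convex-concave splitting idea this abstract relies on for discrete energy dissipation can be illustrated on a scalar gradient flow with a double-well energy. This is a minimal sketch of the energy-stability mechanism only, not the paper's scheme for the full multi-component Helmholtz free energy functional; all quantities below are illustrative.

```python
def F(phi):
    """Double-well free energy density F = (phi^2 - 1)^2 / 4."""
    return 0.25 * (phi**2 - 1.0)**2

def step(phi, tau):
    """One convex-concave split step for d(phi)/dt = -F'(phi).
    The convex part (phi^4/4) is treated implicitly, the concave part
    (-phi^2/2) explicitly:  (y - phi)/tau = -(y**3 - phi).
    The scalar cubic in y is solved by Newton's method."""
    y = phi
    for _ in range(50):
        g = y**3 + y / tau - phi / tau - phi
        dg = 3.0 * y**2 + 1.0 / tau
        y -= g / dg
    return y

phi, tau = 2.0, 0.5          # start far from the wells, large time step
energies = [F(phi)]
for _ in range(40):
    phi = step(phi, tau)
    energies.append(F(phi))
# The split makes the discrete energy decrease monotonically
# even for this large time step.
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:]))
```

The same splitting logic (implicit convex part, explicit concave part) is what yields the unconditional discrete energy dissipation claimed for the full scheme.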
Energy Technology Data Exchange (ETDEWEB)
Baldwin, D.G. (Univ. of Colorado, Boulder (USA)); Coakley, J.A. (Oregon State Univ., Corvallis (USA))
1991-03-20
The Earth Radiation Budget Experiment (ERBE) uses bidirectional models to estimate radiative fluxes from observed radiances. The anisotropy of the radiance field derived from these models is compared with that observed with the ERBE scanner on the Earth Radiation Budget Satellite (ERBS). The bidirectional models used by ERBE were derived from NIMBUS 7 Earth radiation budget (ERB) scanner observations. Because of probable differences in the radiometric calibrations of the ERB and ERBE scanners and because of differences in their field of view sizes, the authors expect to find systematic differences of a few percent between the NIMBUS 7 ERB-derived radiation field anisotropy and the ERBS scanner-observed anisotropy. The differences expected are small compared with the variability of the anisotropy which arises from the variability in cloud cover allowed to occur within the individual scene types. By averaging over groups of 40 ERBE scanner scan lines (equivalent to an average over approximately 2,000 km) for a period of a month, they detect significant differences between the modeled and observed anisotropy for particular scene types and Sun-Earth-satellite viewing geometries. For a typical 2.5{degree} latitude-longitude region these differences give rise to a bias in the radiative flux that is at least 0.3% for the monthly mean and an rms error that is at least 4% for instantaneous observations. By comparing the fluxes derived using the observed anisotropy with those derived assuming isotropic reflection, they conclude that a reasonable estimate for the maximum error due to the use of incorrect bidirectional models is a bias of approximately 4% for a typical 2.5{degree} latitude-longitude, monthly mean and an rms error of 15%.
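The role of the bidirectional models can be sketched in a few lines. The function below is an illustrative assumption about the general shape of the ERBE inversion (flux proportional to pi times radiance over the scene-dependent anisotropy factor), not the operational algorithm:

```python
import math

def flux_from_radiance(radiance, anisotropy_factor):
    """Estimate a radiative flux from a single observed radiance.
    The scene- and geometry-dependent anisotropy factor R comes from
    a bidirectional model; R = 1 corresponds to isotropic reflection."""
    return math.pi * radiance / anisotropy_factor

# An error in the bidirectional model propagates directly into the flux:
# using R = 1.04 instead of a true R = 1.00 biases the flux by ~4%,
# comparable to the maximum bias quoted in the abstract.
true_flux = flux_from_radiance(100.0, 1.00)
biased_flux = flux_from_radiance(100.0, 1.04)
relative_bias = (biased_flux - true_flux) / true_flux
```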
Self-Consistent 3D Modeling of Electron Cloud Dynamics and Beam Response
International Nuclear Information System (INIS)
Furman, Miguel; Furman, M.A.; Celata, C.M.; Kireeff-Covo, M.; Sonnad, K.G.; Vay, J.-L.; Venturini, M.; Cohen, R.; Friedman, A.; Grote, D.; Molvik, A.; Stoltz, P.
2007-01-01
We present recent advances in the modeling of beam electron-cloud dynamics, including surface effects such as secondary electron emission, gas desorption, etc., and volumetric effects such as ionization of residual gas and charge-exchange reactions. Simulations for the HCX facility with the code WARP/POSINST will be described and their validity demonstrated by benchmarks against measurements. The code models a wide range of physical processes and uses a number of novel techniques, including a large-timestep electron mover that smoothly interpolates between direct orbit calculation and guiding-center drift equations, and a new computational technique, based on a Lorentz transformation to a moving frame, that allows the cost of a fully 3D simulation to be reduced to that of a quasi-static approximation.
A self-consistent LTE model of a microwave-driven, high-pressure sulfur lamp
Energy Technology Data Exchange (ETDEWEB)
Johnston, C.W.; Mullen, J.J.A.M. van der [Department of Applied Physics, Eindhoven University of Technology (Netherlands)]. E-mails: C.W.Johnston@tue.nl; J.J.A.M.v.d.Mullen@tue.nl; Heijden, H.W.P. van der; Janssen, G.M.; Dijk, J. van [Department of Applied Physics, Eindhoven University of Technology (Netherlands)
2002-02-21
A one-dimensional LTE model of a microwave-driven sulfur lamp is presented to aid our understanding of the discharge. The energy balance of the lamp is determined by Ohmic input on one hand and transport due to conductive heat transfer and molecular radiation on the other. We discuss the origin of operational trends in the spectrum, present the model and discuss how the material properties of the plasma are determined. Not only are temperature profiles and electric field strengths simulated but also the spectrum of the lamp from 300 to 900 nm under various conditions of input power and lamp filling pressure. We show that simulated spectra demonstrate observed trends and that radiated power increases linearly with input power as is also found from experiment. (author)
Self-consistent modeling of entangled network strands and dangling ends
DEFF Research Database (Denmark)
Jensen, Mette Krog; Schieber, Jay D.; Khaliullin, Renat N.
2009-01-01
We seek knowledge about the effect of dangling ends and soluble structures of stoichiometrically imbalanced networks. To interpret our recent experimental results we seek a molecular model that can predict LVE data. The discrete slip-link model (DSM) has proven to be a robust......, we call this an ideal entangled network (IEN). We simulate monodisperse polypropylene oxide with an average number of entanglements of ~3.8. Such lightly entangled networks show a G0 that is about 24% lower than GN0. This decrease is a result of monomer fluctuations between entanglements...... of dangling ends and soluble structures. Energy dissipation is increased by adding a fraction of dangling ends, wDE, to the ensemble. We find that when wDE=0.6, G0 is about 75% lower than GN0; this suggests that the fraction of network strands, wNS=1-wDE, largely influences the plateau value at low...
Teaching Consistency of UML Specifications
Sikkel, Nicolaas; Daneva, Maia
2010-01-01
Consider the situation that you have a data model, a functional model and a process model of a system, perhaps made by different analysts at different times. Are these models consistent with each other? A relevant question in practice – and therefore we think it should also be addressed in our
Flood damage: a model for consistent, complete and multipurpose scenarios
Directory of Open Access Journals (Sweden)
S. Menoni
2016-12-01
implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
A Mind/Brain/Matter Model Consistent with Quantum Physics and UFO phenomena
1979-01-01
realities of a second type (E.P. Wigner, "Two Kinds of Reality," The Monist, Vol. 48, No. 2, April 1964). Note that the model being advanced by the...biological organism, including egos of "dead" biosystems. Note also that the wave-packet reduction (collapse of the wave function) is not a relativistically...new fourth law of logic, which is briefly described and summarized. A new photon interaction model of quantized observable change is also presented
A consistent model for leptogenesis, dark matter and the IceCube signal
Energy Technology Data Exchange (ETDEWEB)
Fiorentin, M. Re [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Niro, V. [Departamento de Física Teórica, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Instituto de Física Teórica UAM/CSIC,Calle Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Fornengo, N. [Dipartimento di Fisica, Università di Torino,via P. Giuria, 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Torino,via P. Giuria, 1, 10125 Torino (Italy)
2016-11-04
We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra-energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino N{sub 1}, thus fixing its mass and lifetime, while the production of N{sub 1} in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional SU(2){sub R} interactions. The constraints imposed by IceCube and the dark matter abundance nonetheless allow the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the B−L asymmetry dominantly produced by the next-to-lightest neutrino N{sub 2}. Further consequences and predictions of the model are that the N{sub 1} production implies a specific power-law relation between the reheating temperature of the Universe and the vacuum expectation value of the SU(2){sub R} triplet, and that leptogenesis imposes a lower bound of 7×10{sup 9} GeV on the reheating temperature of the Universe. Additionally, the model requires a vanishing absolute neutrino mass scale m{sub 1}≃0.
Jha, Sanjeev Kumar
2013-01-01
A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential to not only bridge issues of spatial resolution in regional and global climate model simulations but also in feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables with available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.
Consistent negative response of US crops to high temperatures in observations and crop models
Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja
2017-04-01
High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.
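The headline response quoted above (each day above 30°C removes up to 6% of rainfed maize and soybean yield) can be turned into a back-of-the-envelope calculation. The multiplicative damage form and all numbers below are illustrative assumptions, not the ensemble's actual crop-model response functions:

```python
def heat_yield_loss(daily_tmax, loss_per_hot_day=0.06, threshold=30.0):
    """Illustrative multiplicative damage sketch: each day with Tmax
    above 30 C removes up to 6% of the remaining rainfed yield.
    This is an assumed simplification of the observed response."""
    relative_yield = 1.0
    for t in daily_tmax:
        if t > threshold:
            relative_yield *= (1.0 - loss_per_hot_day)
    return relative_yield

# Three hot days in a five-day window leave 0.94**3 of the yield.
season = [28.0, 31.0, 33.0, 29.0, 32.0]
remaining = heat_yield_loss(season)
```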
Drummond, Benjamin; Mayne, N. J.; Manners, James; Carter, Aarynn L.; Boutle, Ian A.; Baraffe, Isabelle; Hebrard, Eric; Tremblin, Pascal; Sing, David K.; Amundsen, David S.; Acreman, Dave
2018-01-01
We present a study of the effect of wind-driven advection on the chemical composition of hot Jupiter atmospheres using a fully-consistent 3D hydrodynamics, chemistry and radiative transfer code, the Met Office Unified Model (UM). Chemical modelling of exoplanet atmospheres has primarily been restricted to 1D models that cannot account for 3D dynamical processes. In this work we couple a chemical relaxation scheme to the UM to account for the chemical interconversion of methane and carbon mono...
Integrable (2 + 1)-Dimensional Spin Models with Self-Consistent Potentials
Directory of Open Access Journals (Sweden)
Ratbay Myrzakulov
2015-08-01
Full Text Available Integrable spin systems possess interesting geometrical and gauge invariance properties and have important applications in applied magnetism and nanophysics. They are also intimately connected to the nonlinear Schrödinger family of equations. In this paper, we identify three different integrable spin systems in (2 + 1) dimensions by introducing the interaction of the spin field with more than one scalar potential, or vector potential, or both. We also obtain the associated Lax pairs. We discuss various interesting reductions in (2 + 1) and (1 + 1) dimensions. We also deduce the equivalent nonlinear Schrödinger family of equations, including the (2 + 1)-dimensional version of the nonlinear Schrödinger–Hirota–Maxwell–Bloch equations, along with their Lax pairs.
Khan, A.; Belluzzi, L.; Landi Degl'Innocenti, E.; Fineschi, S.; Romoli, M.
2011-05-01
Context. The presence and importance of the coronal magnetic field is illustrated by a wide range of phenomena, such as the abnormally high temperatures of the coronal plasma, the existence of a slow and fast solar wind, and the triggering of explosive events such as flares and CMEs. Aims: We investigate the possibility of using the Hanle effect to diagnose the coronal magnetic field by analysing its influence on the linear polarisation, i.e. the rotation of the plane of polarisation and depolarisation. Methods: We analyse the polarisation characteristics of the first three lines of the hydrogen Lyman series using an axisymmetric, self-consistent, minimum-corona MHD model with relatively low values of the magnetic field (a few gauss). Results: We find that the Hanle effect in the above-mentioned lines indeed seems to be a valuable tool for analysing the coronal magnetic field. However, great care must be taken when analysing the spectropolarimetry of the Lα line, given that a non-radial solar wind and active regions on the solar disk can mimic the effects of the magnetic field and, in some cases, even mask them. Similar drawbacks are not found for the Lβ and Lγ lines because they are more sensitive to the magnetic field. We also briefly consider the instrumental requirements needed to perform polarimetric observations for diagnosing the coronal magnetic fields. Conclusions: The combined analysis of the three aforementioned lines could provide an important step towards better constraining the value of solar coronal magnetic fields.
Hydronic distribution system computer model
Energy Technology Data Exchange (ETDEWEB)
Andrews, J.W.; Strasser, J.J.
1994-10-01
A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.
Cosmological evolution and Solar System consistency of massive scalar-tensor gravity
de Pirey Saint Alby, Thibaut Arnoulx; Yunes, Nicolás
2017-09-01
The scalar-tensor theory of Damour and Esposito-Farèse recently gained some renewed interest because of its ability to suppress modifications to general relativity in the weak field, while introducing large corrections in the strong field of compact objects through a process called scalarization. A large sector of this theory that allows for scalarization, however, has been shown to be in conflict with Solar System observations when accounting for the cosmological evolution of the scalar field. We here study an extension of this theory by endowing the scalar field with a mass to determine whether this allows the theory to pass Solar System constraints upon cosmological evolution for a larger sector of coupling parameter space. We show that the cosmological scalar field goes first through a quiescent phase, similar to the behavior of a massless field, but then it enters an oscillatory phase, with an amplitude (and frequency) that decays (and grows) exponentially. We further show that after the field enters the oscillatory phase, its effective energy density and pressure are approximately those of dust, as expected from previous cosmological studies. Due to these oscillations, we show that the scalar field cannot be treated as static today on astrophysical scales, and so we use time-dependent perturbation theory to compute the scalar-field-induced modifications to Solar System observables. We find that these modifications are suppressed when the mass of the scalar field and the coupling parameter of the theory are in a wide range, allowing the theory to pass Solar System constraints, while in principle possibly still allowing for scalarization.
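The dust-like averaged behavior of the oscillating massive field follows from a textbook virial argument for a quadratic potential (a standard cosmology result, not specific to this theory). The energy density and pressure of a homogeneous scalar field are

```latex
\rho_\phi = \tfrac{1}{2}\dot{\phi}^2 + \tfrac{1}{2}m^2\phi^2, \qquad
p_\phi = \tfrac{1}{2}\dot{\phi}^2 - \tfrac{1}{2}m^2\phi^2 .
```

For rapid oscillations, \(\phi \propto a^{-3/2}\cos(mt)\), the time averages of the kinetic and potential terms are equal, so \(\langle p_\phi \rangle \approx 0\) while \(\rho_\phi \propto a^{-3}\): the field redshifts like pressureless dust, consistent with the effective energy density and pressure described above.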
Self-consistent one-dimensional modelling of x-ray laser plasmas
International Nuclear Information System (INIS)
Wan, A.S.; Walling, R.S.; Scott, H.A.; Mayle, R.W.; Osterheld, A.L.
1992-01-01
This paper presents the simulation of a planar, one-dimensional expanding Ge x-ray laser plasma using a new code which combines hydrodynamics, laser absorption, and detailed level population calculations within the same simulation. Previously, these simulations were performed in separate steps. We will present the effect of line transfer on gains and excited level populations and compare the line transfer result with simulations using escape probabilities. We will also discuss the impact of different atomic models on the accuracy of our simulation
Stretched-exponential decay functions from a self-consistent model of dielectric relaxation
International Nuclear Information System (INIS)
Milovanov, A.V.; Rasmussen, J.J.; Rypdal, K.
2008-01-01
There are many materials whose dielectric properties are described by a stretched exponential, the so-called Kohlrausch-Williams-Watts (KWW) relaxation function. Its physical origin and statistical-mechanical foundation have been a matter of debate in the literature. In this Letter we suggest a model of dielectric relaxation which naturally leads to a stretched-exponential decay function. Some essential characteristics of the underlying charge conduction mechanisms are considered. A kinetic description of the relaxation and charge transport processes is proposed in terms of equations with time-fractional derivatives.
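The KWW function itself is a one-liner; the sketch below (parameter values are illustrative, not taken from the Letter) shows how a stretching exponent beta < 1 deforms the decay relative to a simple exponential:

```python
import math

def kww(t, tau=1.0, beta=0.5):
    """Kohlrausch-Williams-Watts stretched exponential,
    phi(t) = exp(-(t/tau)**beta); beta = 1 recovers simple
    Debye (exponential) relaxation."""
    return math.exp(-((t / tau) ** beta))

# For beta < 1 the decay is faster than exponential at short times
# but much slower at long times -- the hallmark of KWW relaxation.
short = (kww(0.01, beta=0.5), kww(0.01, beta=1.0))
long_ = (kww(100.0, beta=0.5), kww(100.0, beta=1.0))
```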
Directory of Open Access Journals (Sweden)
Laura Louise Scott
2017-12-01
Full Text Available Although cyanobacterial β-N-methylamino-l-alanine (BMAA has been implicated in the development of Alzheimer’s Disease (AD, Parkinson’s Disease (PD and Amyotrophic Lateral Sclerosis (ALS, no BMAA animal model has reproduced all the neuropathology typically associated with these neurodegenerative diseases. We present here a neonatal BMAA model that causes β-amyloid deposition, neurofibrillary tangles of hyper-phosphorylated tau, TDP-43 inclusions, Lewy bodies, microbleeds and microgliosis as well as severe neuronal loss in the hippocampus, striatum, substantia nigra pars compacta, and ventral horn of the spinal cord in rats following a single BMAA exposure. We also report here that BMAA exposure on particularly PND3, but also PND4 and 5, the critical period of neurogenesis in the rodent brain, is substantially more toxic than exposure to BMAA on G14, PND6, 7 and 10 which suggests that BMAA could potentially interfere with neonatal neurogenesis in rats. The observed selective toxicity of BMAA during neurogenesis and, in particular, the observed pattern of neuronal loss observed in BMAA-exposed rats suggest that BMAA elicits its effect by altering dopamine and/or serotonin signaling in rats.
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
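A minimal sketch of the revised Monod rate described above, assuming a Jin-Bethke-style thermodynamic factor F_T = 1 - exp(dG_net / (chi*R*T)) multiplying the usual kinetic term; all parameter names and values here are illustrative assumptions, not taken from the abstract:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def revised_monod(conc_s, k_max=1e-5, K_s=1e-4,
                  dG_redox=-30e3, m_atp=0.5, dG_atp=45e3,
                  chi=2, T=298.15):
    """Monod rate law revised with a thermodynamic factor F_T
    (after the Jin-Bethke form); parameter values are illustrative."""
    kinetic = conc_s / (K_s + conc_s)          # classic Monod term
    dG_net = dG_redox + m_atp * dG_atp         # catabolism + ATP synthesis cost
    F_T = max(0.0, 1.0 - math.exp(dG_net / (chi * R * T)))
    return k_max * kinetic * F_T

# Far from equilibrium the rate approaches plain Monod kinetics;
# as dG_net approaches zero the thermodynamic factor shuts the rate off,
# which is how the approach constrains substrate (e.g. acetate) uptake.
```

The rate returned by such a function would then set the substrate uptake bound handed to the FBA problem, in place of a measured uptake rate.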
Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers
DEFF Research Database (Denmark)
Cartar, William; Mørk, Jesper; Hughes, Stephen
2017-01-01
We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two...... on both the passive cavity and active lasers, where the latter show a general increase in the pump threshold for cavity lengths greater than N = 7, and a reduction in the nominal cavity mode volume for increasing amounts of disorder....
Ohmacht, Martin
2014-09-09
In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
3D self-consistent modeling of a matrix source of negative hydrogen ions.
Tarnev, Kh; Demerdjiev, A; Shivarova, A; Lishev, St
2016-02-01
The paper is in the scope of studies on the rf driving of a matrix source of negative hydrogen ions: a matrix of small-radius discharges with planar-coil inductive driving and single-aperture extraction from each discharge. The results from a three-dimensional model, in which the plasma description is coupled to the electrodynamics, confirm the earlier conclusion that single-coil driving of the whole matrix by a zigzag coil with an omega-shaped conductor on the bottom of each discharge tube ensures efficient rf power deposition to the plasma. The latter is due to similarities with the rf driving of a single discharge by a single planar coil, shown by the obtained induced current and spatial distribution of the plasma parameters. Distinctions associated with the coil configuration as a single coil for the whole matrix are also discussed.
Directory of Open Access Journals (Sweden)
Meric Ataman
2017-07-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to different criteria and detail, which can compromise transferability of the findings and also the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently reduced" models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extended to genome-scale models.
Directory of Open Access Journals (Sweden)
Shuichiro Yazawa
2014-06-01
The role of surface-protective additives becomes vital when operating conditions become severe and moving components operate in a boundary lubrication regime. After the protective film is slowly removed by rubbing, it can regenerate through the tribochemical reaction of the additives at the contact. However, regeneration is limited once the additives are totally consumed. On the other hand, there are many hard coatings that protect the steel surface from wear. These can enable the functioning of tribological systems even in adverse lubrication conditions. However, hard coatings usually raise the friction coefficient because of their high interfacial shear strength. Amongst hard coatings, diamond-like carbon (DLC) is widely used because of its relatively low friction and superior wear resistance. In practice, conventional lubricants that are essentially formulated for steel/steel surfaces are still used to lubricate machine component surfaces provided with protective coatings such as DLCs, despite the fact that the surface properties of coatings are quite different from those of steel. It is therefore important that the design of additive molecules and their interaction with coatings be re-considered. The main aim of this paper is to discuss DLC and additive combinations that enable tribofilm formation and effective lubrication of tribological systems.
DEFF Research Database (Denmark)
Staunstrup, Jørgen
1998-01-01
This paper proposes that interface consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications are possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
National Research Council Canada - National Science Library
Feiler, Peter
2007-01-01
.... The Society of Automotive Engineers (SAE) Architecture Analysis & Design Language (AADL) is an industry-standard, architecture-modeling notation specifically designed to support a component-based approach to modeling embedded systems...
Modelling Railway Interlocking Systems
DEFF Research Database (Denmark)
Lindegaard, Morten Peter; Viuf, P.; Haxthausen, Anne Elisabeth
2000-01-01
In this report we present a model of interlocking systems, and describe how the model may be validated by simulation. Station topologies are modelled by graphs in which the nodes denote track segments, and the edges denote connectivity for train traffic. Points and signals are modelled by annotatio...
Tile drainage phosphorus loss with long-term consistent cropping systems and fertilization.
Zhang, T Q; Tan, C S; Zheng, Z M; Drury, C F
2015-03-01
Phosphorus (P) loss in tile drainage water may vary with agricultural practices, and the impacts are often hard to detect with short-term studies. We evaluated the effects of long-term (≥43 yr) cropping systems (continuous corn [CC], corn-oats-alfalfa-alfalfa rotation [CR], and continuous grass [CS]) and fertilization (fertilization [F] vs. no-fertilization [NF]) on P loss in tile drainage water from a clay loam soil over a 4-yr period. Compared with NF, long-term fertilization increased concentrations and losses of dissolved reactive P (DRP), dissolved unreactive P (DURP), and total P (TP) in tile drainage water, with the increments following the order: CS > CR > CC. Dissolved P (dissolved reactive P [DRP] and dissolved unreactive P [DURP]) was the dominant P form in drainage outflow, accounting for 72% of TP loss under F-CS, whereas particulate P (PP) was the major form of TP loss under F-CC (72%), F-CR (62%), NF-CS (66%), NF-CC (74%), and NF-CR (72%). Dissolved unreactive P played nearly equal roles as DRP in P losses in tile drainage water. Stepwise regression analysis showed that the concentration of P (DRP, DURP, and PP) in tile drainage flow, rather than event flow volume, was the most important factor contributing to P loss in tile drainage water, although event flow volume was more important in PP loss than in dissolved P loss. Continuous grass significantly increased P loss by increasing P concentration and flow volume of tile drainage water, especially under the fertilization treatment. Long-term grasslands may become a significant P source in tile-drained systems when they receive regular P addition. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Complementarity of DM searches in a consistent simplified model: the case of Z{sup ′}
Energy Technology Data Exchange (ETDEWEB)
Jacques, Thomas [SISSA and INFN,via Bonomea 265, 34136 Trieste (Italy); Katz, Andrey [Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Morgante, Enrico; Racco, Davide [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Rameez, Mohamed [Département de Physique Nucléaire et Corpusculaire,Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Riotto, Antonio [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland)
2016-10-14
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, and direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z{sup ′} mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while the heavy DM annihilation is completely dominated by the Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.
Complementarity of DM Searches in a Consistent Simplified Model: the Case of Z'
Jacques, Thomas; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio
2016-01-01
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, and direct detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy $Z'$ mediates the interactions between the SM and the DM. We find that in most cases IceCube provides the strongest bounds on this scenario, while the LHC constraints are only meaningful for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun are either $b \\bar b$ or $t \\bar t$, while the heavy DM annihilation is completely dominated by the $Zh$ channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast Ice...
Complementarity of DM searches in a consistent simplified model: the case of Z′
International Nuclear Information System (INIS)
Jacques, Thomas; Katz, Andrey; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio
2016-01-01
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, and direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z′ mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while the heavy DM annihilation is completely dominated by the Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.
A Self-consistent Model of a Ray Through the Orion Complex
Abel, N. P.; Ferland, G. J.
2003-12-01
The Orion Complex is the best-studied region of active star formation, with observational data available over the entire electromagnetic spectrum. These extensive observations give us a good idea of the physical structure of Orion: a thin (˜0.1 parsec) blister H II region on the face of the molecular cloud OMC-1. A PDR, where the transition from atoms and ions to molecules occurs, forms an interface between the two. Most of the physical processes are driven by starlight from the Trapezium cluster, with the star Ori C being the strongest source of radiation. Observations made towards lines of sight near Ori C reveal numerous H II region and molecular line intensities. Photoionization calculations have played an important role in determining the physical properties of the regions where these lines originate, but thus far they have treated the H II region and the PDR as separate problems. In reality these regions are energized by the same source of radiation, with the gas hydrodynamics providing the physical link between them. Here we present a unified physical model of a single ray through the Orion Complex. We choose a region 60'' west of Ori C, where extensive observations exist. These include lines that originate within the H II region, the background PDR, and regions deep inside OMC-1 itself. Improved treatments of the grain, molecular hydrogen, and CO physics have all been developed as part of the continuing evolution of the plasma code Cloudy, so that we can now simultaneously predict the full spectrum with few free parameters. This provides a holistic approach that will be validated in this well-studied environment and then extended to distant starburst galaxies. Acknowledgements: We thank the NSF and NASA for support.
Final Scientific/Technical Report "Arc Tube Coating System for Color Consistency"
Energy Technology Data Exchange (ETDEWEB)
Buelow, Roger [Energy Focus, Inc., Solon, OH (United States); Jenson, Chris [Energy Focus, Inc., Solon, OH (United States); Kazenski, Keith [Energy Focus, Inc., Solon, OH (United States)
2013-03-21
DOE has enabled the use of coating materials, applied with low-cost methods, on light sources to positively affect the output of those sources. The coating and light source combinations have shown increased lumen output of LED fixtures (1.5%-2.0%), LED arrays (1.4%), and an LED-powered remote phosphor system, the Philips L-Prize lamp (0.9%). We have also demonstrated lifetime enhancements (from 3000 hrs to 8000 hrs) and a shift to higher CRI (51 to 65) in metal halide high intensity discharge lamps with metal oxide coatings. The coatings on LEDs and LED products are significant as the market is moving increasingly towards LED technology. Enhancements in LED performance are demonstrated in this work through the use of available materials and low-cost application processes. EFOI used low-refractive-index fluoropolymers and low-cost dipping processes to apply the material to surfaces related to light transmission of LEDs and LED products. Materials included Teflon AF, an amorphous fluorinated polymer, and fluorinated acrylic monomers. The DOE SSL Roadmap sets goals for LED performance moving into the future. EFOI's coating technology is a means to shift the performance curve for LEDs; it is not limited to one type of LED, but is relevant across LED technologies. The metal halide work included the use of sol-gel solutions resulting in silicon dioxide and titanium dioxide coatings on the quartz substrates of the metal halide arc tubes. These coatings were applied using low-cost dipping processes.
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Gianluca Nardone
Full Text Available To foster the development of biomass for solid fuel, it is essential to build a strategy at the local level in which farms and industrial plants coexist. To this end, effective vertical coordination between the stakeholders must be implemented, with a contract that prevents opportunistic behavior and guarantees the industrial investment a constant supply over time. Starting from a project for a biomass power plant in the south of Italy, this study examines the payments to be fixed in such a contract so as to retain the loyalty of the farmers. The farmers have greater flexibility, since they can choose the most convenient crop; their loyalty can therefore be secured by tying the contractual payments to the price of the main crop that competes with the energy crop. The results of the study suggest fixing a purchase price for the raw material linked to the price of durum wheat, which is the most widespread crop in the territory and the one most exposed to a volatile market. Using data from District 12 of the Foggia province Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it was possible to group approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we ran different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it was calculated that farmers may find it convenient to supply the energy crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation was identified between the price of durum wheat and the price that makes it convenient for farmers to grow sorghum. When the
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Rosaria Viscecchia
2011-02-01
Full Text Available To foster the development of biomass for solid fuel, it is essential to build a strategy at the local level in which farms and industrial plants coexist. To this end, effective vertical coordination between the stakeholders must be implemented, with a contract that prevents opportunistic behavior and guarantees the industrial investment a constant supply over time. Starting from a project for a biomass power plant in the south of Italy, this study examines the payments to be fixed in such a contract so as to retain the loyalty of the farmers. The farmers have greater flexibility, since they can choose the most convenient crop; their loyalty can therefore be secured by tying the contractual payments to the price of the main crop that competes with the energy crop. The results of the study suggest fixing a purchase price for the raw material linked to the price of durum wheat, which is the most widespread crop in the territory and the one most exposed to a volatile market. Using data from District 12 of the Foggia province Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it was possible to group approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we ran different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it was calculated that farmers may find it convenient to supply the energy crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation was identified between the price of durum wheat and the price that makes it convenient for farmers to grow sorghum. When the
Directory of Open Access Journals (Sweden)
J. Callies
2012-01-01
Full Text Available A simple model of the thermohaline circulation (THC is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.
This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.
Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
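The quoted scaling laws can be summarized as ψ ∝ κ^(2/3) Δρ^(1/3) for the overturning strength ψ, vertical diffusivity κ, and imposed meridional surface density difference Δρ. A minimal numerical sketch of this dependence (the prefactor and parameter values below are arbitrary placeholders, not values from the model):

```python
# Illustrative check of the quoted power laws for the overturning strength:
# psi ~ kappa**(2/3) * delta_rho**(1/3). Only the exponents come from the
# abstract; the prefactor C and the input values are placeholders.

def overturning_strength(kappa, delta_rho, C=1.0):
    """Overturning strength under the quoted power laws (C is a free constant)."""
    return C * kappa ** (2.0 / 3.0) * delta_rho ** (1.0 / 3.0)

base = overturning_strength(1e-4, 1.0)
# Doubling the vertical diffusivity multiplies psi by 2**(2/3):
print(overturning_strength(2e-4, 1.0) / base)  # ~1.587
# Doubling the surface density difference multiplies psi by 2**(1/3):
print(overturning_strength(1e-4, 2.0) / base)  # ~1.260
```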
Béghin, Christian
2015-02-01
This model is worked out in the frame of physical mechanisms proposed in previous studies accounting for the generation and observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe in Titan's atmosphere on 14 January 2005. While Titan is staying inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere, from the ionopause down to the Moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be allowed by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. A first cause of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with the measurements of the Huygens wave-field strength. Pending future in-situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.
Consistent analysis of peripheral reaction channels and fusion for the 16,18O+58Ni systems
International Nuclear Information System (INIS)
Alves, J.J.S.; Gomes, P.R.S.; Lubian, J.; Chamon, L.C.; Pereira, D.; Anjos, R.M.; Rossi, E.S.; Silva, C.P.; Alvarez, M.A.G.; Nobre, G.P.A.; Gasques, L.R.
2005-01-01
We have measured elastic scattering and peripheral reaction channel cross sections for the 16,18O+58Ni systems at E_Lab = 46 MeV. The data were analyzed through extensive coupled-channel calculations. The consistency of the present analysis with a previous one at sub-barrier energies was investigated. Experimental fusion cross sections for these systems are also compared with the corresponding predictions of the coupled-channel calculations.
Sean P. Healey; Paul L. Patterson; Sassan S. Saatchi; Michael A. Lefsky; Andrew J. Lister; Elizabeth A. Freeman
2012-01-01
Lidar height data collected by the Geoscience Laser Altimeter System (GLAS) from 2002 to 2008 have the potential to form the basis of a globally consistent sample-based inventory of forest biomass. GLAS lidar return data were collected globally in spatially discrete full waveform "shots," which have been shown to be strongly correlated with aboveground forest...
Internally consistent thermodynamic data for aqueous species in the system Na-K-Al-Si-O-H-Cl
Miron, George D.; Wagner, Thomas; Kulik, Dmitrii A.; Heinrich, Christoph A.
2016-08-01
A large amount of critically evaluated experimental data on mineral solubility, covering the entire Na-K-Al-Si-O-H-Cl system over wide ranges in temperature and pressure, was used to simultaneously refine the standard state Gibbs energies of aqueous ions and complexes in the framework of the revised Helgeson-Kirkham-Flowers equation of state. The thermodynamic properties of the solubility-controlling minerals were adopted from the internally consistent dataset of Holland and Powell (2002; Thermocalc dataset ds55). The global optimization of Gibbs energies of aqueous species, performed with the GEMSFITS code (Miron et al., 2015), was set up in such a way that the association equilibria for ion pairs and complexes, independently derived from conductance and potentiometric data, are always maintained. This was achieved by introducing reaction constraints into the parameter optimization that adjust Gibbs energies of complexes by their respective Gibbs energy effects of reaction, whenever the Gibbs energies of reactant species (ions) are changed. The optimized thermodynamic dataset is reported with confidence intervals for all parameters evaluated by Monte Carlo trial calculations. The new thermodynamic dataset is shown to reproduce all available fluid-mineral phase equilibria and mineral solubility data with good accuracy and precision over wide ranges in temperature (25-800 °C), pressure (1 bar to 5 kbar) and composition (salt concentrations up to 5 molal). The global data optimization process adopted in this study can be readily repeated any time when extensions to new chemical elements and species are needed, when new experimental data become available, or when a different aqueous activity model or equation of state should be used. This work serves as a proof of concept that our optimization strategy is feasible and successful in generating a thermodynamic dataset reproducing all fluid-mineral and aqueous speciation equilibria in the Na-K-Al-Si-O-H-Cl system within
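The reaction constraint described above can be sketched in a few lines: whenever the optimizer perturbs an ion's standard Gibbs energy, each complex's Gibbs energy is recomputed from its association reaction so that the reaction Gibbs energy (and hence the association constant) is preserved. All species names and numerical values below are illustrative placeholders, not values from the dataset:

```python
# Sketch of the reaction constraint described above (species and numbers are
# hypothetical): the Gibbs energy of a complex is always derived from its
# constituent ions plus a fixed association reaction Gibbs energy, so log K
# of the association equilibrium is preserved when ion values are refit.

# association reactions: complex -> {ion: stoichiometric coefficient}
reactions = {"NaCl(aq)": {"Na+": 1, "Cl-": 1}}

g_ion = {"Na+": -261.9, "Cl-": -131.2}   # kJ/mol, placeholder values
dG_reaction = {"NaCl(aq)": -3.0}          # fixed association Gibbs energy

def complex_gibbs(name):
    """G of a complex = sum of ion G's plus the (fixed) reaction Gibbs energy."""
    return sum(n * g_ion[ion] for ion, n in reactions[name].items()) + dG_reaction[name]

g_before = complex_gibbs("NaCl(aq)")
g_ion["Na+"] += 1.5                       # optimizer perturbs an ion's G
g_after = complex_gibbs("NaCl(aq)")
# The complex shifts by exactly the ion shift (~1.5), so dG_reaction is preserved.
print(g_after - g_before)
```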
Using a Theory-Consistent CVAR Scenario to Test an Exchange Rate Model Based on Imperfect Knowledge
Directory of Open Access Journals (Sweden)
Katarina Juselius
2017-07-01
Full Text Available A theory-consistent CVAR scenario describes a set of testable regularities one should expect to see in the data if the basic assumptions of the theoretical model are empirically valid. Using this method, the paper demonstrates that all basic assumptions about the shock structure and steady-state behavior of an imperfect-knowledge-based model for exchange rate determination can be formulated as testable hypotheses on common stochastic trends and cointegration. The model obtains remarkable support for almost every testable hypothesis and is able to adequately account for the long persistent swings in the real exchange rate.
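The notion of a common stochastic trend underlying cointegration, which the CVAR scenario turns into testable hypotheses, can be illustrated with a toy simulation (purely illustrative; not the paper's data or model): two I(1) series sharing one random-walk trend wander without bound, while their cointegrating combination stays stationary.

```python
import numpy as np

# Toy illustration of a "common stochastic trend": two nonstationary series
# driven by one shared random walk are cointegrated, so the combination
# y1 - y2 has bounded variance while the levels drift.
rng = np.random.default_rng(0)
T = 5000
trend = np.cumsum(rng.normal(size=T))   # common stochastic trend
y1 = trend + rng.normal(size=T)         # e.g. nominal exchange rate
y2 = trend + rng.normal(size=T)         # e.g. relative prices

print(np.var(y1 - y2))  # small (~2): the trend cancels in the combination
print(np.var(y1))       # large: the level inherits the random-walk variance
```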
Directory of Open Access Journals (Sweden)
A. S. Candy
2018-01-01
Full Text Available The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to carefully describe, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.
Candy, Adam S.; Pietrzak, Julie D.
2018-01-01
The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to carefully describe, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.
Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.
Apart from the general issue of modeling the channel, the PHY, and the MAC of wireless networks, there are specific modeling assumptions to consider for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them: IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan area networks), and IEEE 802.15 (as an example of body area networks). Each section on these three systems also concludes with a discussion of the model implementations available today.
Lin, M. C.; Verboncoeur, J.
2016-10-01
The maximum electron current transmitted through a planar diode gap is limited by the space charge of electrons dwelling across the gap region, the so-called space-charge-limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL emission can be dramatically raised, enhancing electron current transmission. In this work, we have developed a relativistic self-consistent model for studying the enhancement of maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in the non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a benchmark for verification of simulation codes, as well as a basis for extension to higher dimensions.
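For orientation, the classical non-relativistic, electron-only baseline is the Child-Langmuir law, J = (4ε0/9)·√(2e/m)·V^(3/2)/d². The sketch below evaluates only this textbook baseline; the paper's relativistic, ion-neutralized generalization is not reproduced here.

```python
from math import sqrt

# Classical (non-relativistic, electron-only) Child-Langmuir space-charge-
# limited current density for a planar gap. This is the textbook baseline
# that the counter-streaming ion flow described above enhances.
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir(voltage, gap):
    """SCL current density (A/m^2) for gap voltage (V) and gap spacing (m)."""
    return (4.0 * EPS0 / 9.0) * sqrt(2.0 * E_CHARGE / M_E) * voltage**1.5 / gap**2

# J scales as V**1.5: quadrupling the voltage multiplies J by 8.
print(child_langmuir(4e3, 1e-3) / child_langmuir(1e3, 1e-3))  # ~8.0
print(child_langmuir(1e3, 1e-3))  # A/m^2 for a 1 kV, 1 mm gap
```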
Directory of Open Access Journals (Sweden)
Jeong Ran Park
2007-12-01
Full Text Available We sought to develop itemized evaluation criteria and a clinical rater qualification system through rater training for inter-rater consistency among experienced clinical dental hygienists and dental hygiene clinical educators. A total of 15 clinical dental hygienists with 1-year careers participated as clinical examination candidates, while 5 dental hygienists with educations and clinical careers of 3 years or longer participated as clinical raters. They all took the clinical examination as examinees. The results were compared, and the consistency of competence was measured. The comparison of clinical competence between candidates and clinical raters showed that the candidate group's mean clinical competence ranged from 2.96 to 3.55 on a 5-point scale across a total of 3 instruments (Probe, Explorer, Curet), while the clinical rater group's mean clinical competence ranged from 4.05 to 4.29. Inter-rater consistency was higher after rater education in the following 4 items: Probe, Explorer, Curet, and insertion on the distal surface. The mean score distribution of clinical raters ranged from 75% to 100%, which was more uniform in the competence to detect an artificial calculus than that of the candidates (25% to 100%). These results indicate the need to operate a clinical rater qualification system for comprehensive dental hygiene clinicians. Furthermore, in order to implement the clinical rater qualification system, it will be necessary to continue studies on educational content, time, frequency, and educator level.
Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.
2012-01-01
The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-est...
Liu, Zhihao; Wei, Pingmin; Huang, Minghao; Liu, Yuan bao; Li, Lucy; Gong, Xiao; Chen, Juan; Li, Xiaoning
2014-01-01
Due to the increased incidence of premarital sex and the lack of reproductive health services, college students are at high risk of HIV/AIDS infection in China. This study was designed to examine the predictors of consistent condom use among college students based on the Information-Motivation-Behavioral Skills (IMB) model and to describe the relationships between the model constructs. A cross-sectional study was conducted to assess HIV/AIDS-related information, motivation, behavioral skills, and preventive behavior among college students in five colleges and universities in Nanjing, China. An anonymous questionnaire survey was conducted for data collection, and a structural equation model (SEM) was used to assess the IMB model. A total of 3183 participants completed this study. The average age was 19.90 years (SD = 1.43, range 16 to 25). 342 (10.7%) of the participants reported having had premarital sex, among whom 30.7% reported consistent condom use, 13.7% had experience of abortion (including participants whose sex partner had the same experience), and 32.7% had experience of multiple sex partners. The final IMB model provided acceptable fit to the data (CFI = 0.992, RMSEA = 0.028). Preventive behavior was significantly predicted by behavioral skills (β = 0.754) and motivation (β = 0.363) among college students in China. The main influencing factor of preventive behavior among college students is behavioral skills. Both information and motivation could affect preventive behavior through behavioral skills. Further research could develop preventive interventions based on the IMB model to promote consistent condom use among college students in China.
Velikina, Julia V; Samsonov, Alexey A
2015-11-01
To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Andrade, Maria Celia Ramos; Ludwig, Gerson Otto [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Plasma]. E-mail: mcr@plasma.inpe.br
2004-07-01
Different bootstrap current formulations are implemented in a self-consistent equilibrium calculation obtained from a direct variational technique in fixed boundary tokamak plasmas. The total plasma current profile is supposed to have contributions of the diamagnetic, Pfirsch-Schlueter, and the neoclassical Ohmic and bootstrap currents. The Ohmic component is calculated in terms of the neoclassical conductivity, compared here among different expressions, and the loop voltage determined consistently in order to give the prescribed value of the total plasma current. A comparison among several bootstrap current models for different viscosity coefficient calculations and distinct forms for the Coulomb collision operator is performed for a variety of plasma parameters of the small aspect ratio tokamak ETE (Experimento Tokamak Esferico) at the Associated Plasma Laboratory of INPE, in Brazil. We have performed this comparison for the ETE tokamak so that the differences among all the models reported here, mainly regarding plasma collisionality, can be better illustrated. The dependence of the bootstrap current ratio upon some plasma parameters in the frame of the self-consistent calculation is also analysed. We emphasize in this paper what we call the Hirshman-Sigmar/Shaing model, valid for all collisionality regimes and aspect ratios, and a fitted formulation proposed by Sauter, which has the same range of validity but is faster to compute than the previous one. The advantages or possible limitations of all these different formulations for the bootstrap current estimate are analysed throughout this work. (author)
International Nuclear Information System (INIS)
Palermo, L.; Silva, X.A. da
1981-01-01
The magnetic properties of a model consisting of an electron gas, interacting by exchange with van Vleck ions under the action of a crystal field, are studied in the narrow band limit. Iso-T_c curves (in the plane of the interaction parameters), ionic and electronic magnetizations and susceptibilities versus temperature, as well as the magnetic specific heat, are obtained for several values of the exchange and crystal field parameters. (Author)
Matthäus, Franziska; Pahle, Jürgen
2017-01-01
This contributed volume comprises research articles and reviews on topics connected to the mathematical modeling of cellular systems. These contributions cover signaling pathways, stochastic effects, cell motility and mechanics, pattern formation processes, as well as multi-scale approaches. All authors attended the workshop on "Modeling Cellular Systems" which took place in Heidelberg in October 2014. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.
Garrett, T. J.
2014-12-01
Studies of the response of global climate to anthropogenic activities rely upon scenarios for future human activity to provide a range of possible trajectories for greenhouse gas emissions over the coming century. Sophisticated integrated models are used to explore not only what will happen, but what should happen in order to optimize societal well-being. Hundreds of equations might be used to account for the interplay between human decisions, technological change, and macroeconomic principles. In contrast, the model equations used to describe geophysical phenomena look very different because they are a) purely deterministic and b) consistent with basic thermodynamic laws. This inconsistency between macroeconomics and physics suggests a rather unhappy marriage. During the Anthropocene the evolution of humanity and our environment will become increasingly intertwined. Representing such a coupling suggests a need for a common theoretical basis. To this end, the approach described here is to treat civilization like any other physical process, that is as an open, non-equilibrium thermodynamic system that dissipates energy and diffuses matter in order to sustain existing circulations and to further its material growth. Theoretical arguments and over 40 years of measurements show that a very general representation of global economic wealth (not GDP) has been tied to rates of global primary energy consumption through a constant 7.1 ± 0.1 mW per year-2005 US dollar. This link between physics and economics leads to very simple expressions for how fast civilization and its rate of energy consumption grow. These are expressible as a function of rates of energy and material resource discovery and depletion, and of the magnitude of externally imposed decay. The equations are validated through hindcasts that show, for example, that economic conditions in the 1950s can be invoked to make remarkably accurate forecasts of present rates of global GDP growth and primary energy consumption.
Directory of Open Access Journals (Sweden)
Zhihao Liu
Full Text Available BACKGROUND: Due to the increased incidence of premarital sex and the lack of reproductive health services, college students are at high risk of HIV/AIDS infection in China. This study was designed to examine the predictors of consistent condom use among college students based on the Information-Motivation-Behavioral Skills (IMB) model and to describe the relationships between the model constructs. METHODS: A cross-sectional study was conducted to assess HIV/AIDS-related information, motivation, behavioral skills and preventive behavior among college students in five colleges and universities in Nanjing, China. An anonymous questionnaire was used for data collection, and structural equation modeling (SEM) was used to assess the IMB model. RESULTS: A total of 3183 participants completed the study. The average age was 19.90 years (SD = 1.43, range 16 to 25). Of these, 342 (10.7%) reported having had premarital sex, among whom 30.7% reported consistent condom use, 13.7% had experience of abortion (including participants whose sex partner had the same experience), and 32.7% had had multiple sex partners. The final IMB model provided an acceptable fit to the data (CFI = 0.992, RMSEA = 0.028). Preventive behavior was significantly predicted by behavioral skills (β = 0.754, P<0.001). Information (β = 0.138, P<0.001) and motivation (β = 0.363, P<0.001) affected preventive behavior indirectly, mediated through behavioral skills. CONCLUSIONS: The results of the study demonstrate the utility of the IMB model for explaining consistent condom use among college students in China. The main influencing factor on preventive behavior among college students is behavioral skills. Both information and motivation could affect preventive behavior through behavioral skills. Further research could develop preventive interventions based on the IMB model to promote consistent condom use.
Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.
2007-01-01
We present the new version of the ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows simultaneous, self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting for the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference model for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows reevaluating the thermalization efficiency of the absorbed solar ultraviolet energy and the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.
Modelling of wastewater systems
DEFF Research Database (Denmark)
Bechmann, Henrik
In this thesis, models are developed of the pollution fluxes in the inlet to 2 Danish wastewater treatment plants (WWTPs), as well as of the suspended solids (SS) concentrations in the aeration tanks of an alternating WWTP and in the effluent from the aeration tanks. The latter model is furthermore used to analyze and quantify the effect of the Aeration Tank Settling (ATS) operating mode, which is used during rain events, and to propose a control algorithm for the phase lengths during ATS operation. The pollution-flux models describe the COD (Chemical Oxygen Demand) flux and SS flux in the inlet to the WWTP; COD is measured by means of a UV absorption sensor while SS is measured by a turbidity sensor. These models include a description of the deposited amounts of COD and SS, respectively, in the sewer system, and can thus be used to quantify them. The models are mainly formulated as state space models in continuous time.
Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub
Renwick, Katherine M.; Curtis, Caroline; Kleinhesselink, Andrew R.; Schlaepfer, Daniel R.; Bradley, Bethany A.; Aldridge, Cameron L.; Poulter, Benjamin; Adler, Peter B.
2018-01-01
A number of modeling approaches have been developed to predict the impacts of climate change on species distributions, performance, and abundance. The stronger the agreement from models that represent different processes and are based on distinct and independent sources of information, the greater the confidence we can have in their predictions. Evaluating the level of confidence is particularly important when predictions are used to guide conservation or restoration decisions. We used a multi-model approach to predict climate change impacts on big sagebrush (Artemisia tridentata), the dominant plant species on roughly 43 million hectares in the western United States and a key resource for many endemic wildlife species. To evaluate the climate sensitivity of A. tridentata, we developed four predictive models, two based on empirically derived spatial and temporal relationships, and two that applied mechanistic approaches to simulate sagebrush recruitment and growth. This approach enabled us to produce an aggregate index of climate change vulnerability and uncertainty based on the level of agreement between models. Despite large differences in model structure, predictions of sagebrush response to climate change were largely consistent. Performance, as measured by change in cover, growth, or recruitment, was predicted to decrease at the warmest sites, but increase throughout the cooler portions of sagebrush's range. A sensitivity analysis indicated that sagebrush performance responds more strongly to changes in temperature than precipitation. Most of the uncertainty in model predictions reflected variation among the ecological models, raising questions about the reliability of forecasts based on a single modeling approach. Our results highlight the value of a multi-model approach in forecasting climate change impacts and uncertainties and should help land managers to maximize the value of conservation investments.
Kou, Jisheng
2016-11-25
A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, a recent alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy used in the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then derive a transport equation for the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of two fluids. A relation between the pressure gradient and the chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and handle the coupling between molar densities and velocity through careful physical observations treated with mathematical rigor. We prove that the proposed numerical scheme preserves the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
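The convex-concave splitting mentioned above can be illustrated on a toy double-well free energy, treating the convex part implicitly and the concave part explicitly so that the discrete energy decays unconditionally. Everything below is an illustrative stand-in for the paper's far more complicated Peng-Robinson Helmholtz free energy density.

```python
import numpy as np

# Toy free energy f(c) = c^4/4 - c^2/2, split as a convex part c^4/4 (treated
# implicitly) plus a concave part -c^2/2 (treated explicitly).  This is the
# standard convex-concave splitting idea, not the paper's model.

def energy(c):
    return 0.25 * c**4 - 0.5 * c**2

def step(c, dt):
    """One semi-implicit gradient-flow step: (c+ - c)/dt = -(c+^3 - c)."""
    x = c
    for _ in range(50):                       # Newton solve for c+
        g = x + dt * x**3 - c - dt * c
        x -= g / (1.0 + 3.0 * dt * x**2)
    return x

c, dt = 2.0, 0.5                              # start far from the well at c = 1
energies = [energy(c)]
for _ in range(40):
    c = step(c, dt)
    energies.append(energy(c))

# The discrete energy is non-increasing at every step, even with a large time
# step, and c relaxes toward the minimizer c = 1.
```

The same splitting structure is what yields the discrete energy-dissipation proof in the abstract; the nonlinear solve per step is the price paid for unconditional stability.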
Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.
2012-01-01
The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-esteem assessments, whereas state factors accounted for about 16% of the variance in repeated assessments of latent self-esteem. The stability of individual differences in self-esteem increased with age consistent with the cumulative continuity principle of personality development. PMID:23180899
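The trait-state structure described above can be illustrated by simulation: repeated scores are generated as a stable trait plus an autoregressive trait plus occasion-specific state noise, and the state share of variance is then recovered from lagged covariances alone. The variances and the autoregressive coefficient below are illustrative, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trait-state structure: score = stable trait + autoregressive trait
# + occasion-specific state.  Variances are illustrative and chosen so each
# wave has unit total variance.
n_people, n_waves = 200_000, 10
phi = 0.7                                   # autoregressive coefficient
var_stable, var_ar, var_state = 0.5, 0.3, 0.2

stable = rng.normal(0.0, np.sqrt(var_stable), size=n_people)
ar = rng.normal(0.0, np.sqrt(var_ar), size=n_people)     # start in stationarity
scores = np.empty((n_people, n_waves))
for t in range(n_waves):
    if t > 0:
        innov = np.sqrt(var_ar * (1 - phi**2))           # keeps ar stationary
        ar = phi * ar + rng.normal(0.0, innov, size=n_people)
    scores[:, t] = stable + ar + rng.normal(0.0, np.sqrt(var_state), size=n_people)

# Lag-k covariance is var_stable + var_ar * phi**k; the state part enters only
# the lag-0 variance.  That identifies all three components from the data.
c0 = scores.var(axis=0).mean()
def lag_cov(k):
    return np.mean([np.cov(scores[:, t], scores[:, t + k])[0, 1]
                    for t in range(n_waves - k)])
c1, c2, c3 = lag_cov(1), lag_cov(2), lag_cov(3)

phi_hat = (c3 - c2) / (c2 - c1)
var_ar_hat = (c1 - c2) / (phi_hat - phi_hat**2)
var_stable_hat = c1 - var_ar_hat * phi_hat
state_share = (c0 - var_stable_hat - var_ar_hat) / c0    # state share of variance
```

This mirrors, in miniature, how a latent trait-state model attributes long-run stability to the trait factors and occasion-specific fluctuation (the roughly 16% in the abstract) to the state factor.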
Systems modeling for laser IFE
Meier, W. R.; Raffray, A. R.; Sviatoslavsky, I. N.
2006-06-01
A systems model of a laser-driven IFE power plant is being developed to assist in design trade-offs and optimization. The focus to date has been on modeling the fusion chamber, blanket and power conversion system. A self-consistent model has been developed to determine key chamber and thermal cycle parameters (e.g., chamber radius, structure and coolant temperatures, cycle efficiency, etc.) as a function of the target yield and pulse repetition rate. Temperature constraints on the tungsten armor, ferritic steel wall, and structure/coolant interface are included in evaluating the potential design space. Results are presented for a lithium cooled first wall coupled with a Brayton power cycle. LLNL work performed under the auspices of the US Department of Energy by the University of California LLNL under Contract W-7405-Eng-48.
Directory of Open Access Journals (Sweden)
Li-Cheng Hsieh
2012-08-01
Full Text Available The aim of this research is to use LabVIEW to help bowlers understand their joint movements, the forces acting on their joints, and the consistency of their knee movements while competing in ten-pin bowling. Kinetic and kinematic data relating to the lower limbs were derived from bowlers' joint angles, and the joint forces were calculated from the Euler angles using the inverse dynamics method with Newton-Euler equations. An artificial-neural-network (ANN)-based data-driven model for predicting knee forces from the Euler angles was developed. This approach allows for the collection of data in bowling alleys without the use of force plates. Correlation coefficients were computed after ANN training, and all values exceeded 0.9. This result implies a strong correlation between the joint angles and forces. Furthermore, the predicted 3D forces (obtained from ANN simulations) and the measured forces (obtained from force plates via the inverse dynamics method) are strongly correlated. The agreement between the predicted and measured forces was evaluated by the coefficient of determination (R²), which reflects the bowler's consistency and steadiness of the bowling motion at the knee. The R² value was beneficial in assessing the consistency of the bowling motion; an R² value close to 1 implies a more consistent sliding motion. This research enables the prediction of the forces on the knee during ten-pin bowling by ANN simulations using the measured knee angles. Consequently, coaches and bowlers can use the developed ANN model and the analysis module to improve bowling motion.
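The R² consistency score used above is simple to sketch. The "measured" and "predicted" force series below are synthetic stand-ins, not bowling data; they only illustrate how R² separates a consistent motion from an erratic one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: "measured" knee forces (as if from force plates) and
# "predicted" forces (as if from a trained model).  Magnitudes are illustrative.
measured = np.sin(np.linspace(0, 2 * np.pi, 200)) * 300 + 700   # Newtons
predicted_good = measured + rng.normal(0, 15, size=200)          # small model error
predicted_poor = measured + rng.normal(0, 300, size=200)         # large model error

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# A consistent motion yields R^2 close to 1; an erratic one yields a much lower
# (possibly negative) R^2.
r2_good = r_squared(measured, predicted_good)
r2_poor = r_squared(measured, predicted_poor)
```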
Bonnet-Lebrun, Anne-Sophie
2017-03-17
Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions.
Zhou, Yuzhi; Wang, Han; Liu, Yu; Gao, Xingyu; Song, Haifeng
2018-03-01
The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory calculations. However, a question remains regarding its applicability to inhomogeneous systems. We develop a modified Kerker preconditioning scheme which captures the long-range screening behavior of inhomogeneous systems and thus improves SCF convergence. Its effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators, and metal-insulator contacts. For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on this a posteriori indicator, we demonstrate two schemes of self-adaptive configuration for the SCF iteration.
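The textbook Kerker preconditioner that this work modifies can be sketched in one dimension: each Fourier component of the density residual is scaled by G²/(G² + q₀²), which damps the long-wavelength components responsible for charge sloshing. The grid, q₀, and the residual below are illustrative; the paper's modified dielectric model for inhomogeneous systems is not reproduced here.

```python
import numpy as np

# Plain Kerker preconditioning of a density residual on a 1-D periodic grid.
# In reciprocal space each Fourier component of the residual is scaled by
# G^2 / (G^2 + q0^2), damping long-wavelength (small-G) components.
n, L, q0 = 256, 20.0, 0.8                     # illustrative grid and screening
x = np.linspace(0, L, n, endpoint=False)
G = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

# Residual with a long-wavelength ("sloshing") part and a short-wavelength part
residual = 1.0 * np.sin(2 * np.pi * x / L) + 0.3 * np.sin(40 * np.pi * x / L)

kerker = G**2 / (G**2 + q0**2)                # equals 0 at G = 0
precond = np.fft.ifft(kerker * np.fft.fft(residual)).real

# The smallest nonzero G is 2*pi/L ~ 0.31, so the slow mode is damped by
# ~0.31^2/(0.31^2 + 0.8^2) ~ 0.13, while the fast mode (G ~ 6.3) passes
# through almost unchanged.
```

In a real SCF loop the preconditioned residual would then be mixed into the density, e.g. rho_new = rho + alpha * precond for some mixing parameter alpha.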
Boccara, Nino
2010-01-01
Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: . -recent research results and bibliographic references -extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field -new and improved worked-out examples to aid a student’s comprehension of the content -exercises to challenge the reader and complement the material Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).
International Nuclear Information System (INIS)
Schreckenberg, M
2004-01-01
This book by Nino Boccara presents a compilation of model systems commonly termed as 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view and one would have expected more about the famous Domany-Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an updated edition in, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success! (book review)
International Nuclear Information System (INIS)
Malmberg, T.
1993-09-01
The objective of this study is to derive and investigate thermodynamic restrictions for a particular class of internal variable models. Their evolution equations consist of two contributions: the usual irreversible part, depending only on the present state, and a reversible but path dependent part, linear in the rates of the external variables (evolution equations of ''mixed type''). In the first instance the thermodynamic analysis is based on the classical Clausius-Duhem entropy inequality and the Coleman-Noll argument. The analysis is restricted to infinitesimal strains and rotations. The results are specialized and transferred to a general class of elastic-viscoplastic material models. Subsequently, they are applied to several viscoplastic models of ''mixed type'', proposed or discussed in the literature (Robinson et al., Krempl et al., Freed et al.), and it is shown that some of these models are thermodynamically inconsistent. The study is closed with the evaluation of the extended Clausius-Duhem entropy inequality (concept of Mueller) where the entropy flux is governed by an assumed constitutive equation in its own right; also the constraining balance equations are explicitly accounted for by the method of Lagrange multipliers (Liu's approach). This analysis is done for a viscoplastic material model with evolution equations of the ''mixed type''. It is shown that this approach is much more involved than the evaluation of the classical Clausius-Duhem entropy inequality with the Coleman-Noll argument. (orig.)
International Nuclear Information System (INIS)
Pitkaenen, P.; Loefman, J.; Korkealaakso, J.; Koskinen, L.; Ruotsalainen, P.; Hautojaervi, A.; Aeikaes, T.
1999-01-01
In the assessment of the suitability and safety of a geological repository for radioactive waste, the understanding of the fluid flow at a site is essential. In order to build confidence in the assessment of the hydrogeological performance of a site under various conditions, integration of hydrological and hydrogeochemical methods and studies provides the primary means for investigating the evolution that has taken place in the past, and for predicting future conditions at the potential disposal site. A systematic geochemical sampling campaign was started at the beginning of the 1990s in the Finnish site investigation programme. This enabled the integration and evaluation of site-scale hydrogeochemical and groundwater flow models. Hydrogeochemical information has been used to screen relevant external processes and variables for the definition of the initial and boundary conditions in hydrological simulations. The results obtained from interpreting and modelling hydrogeochemical evolution have been employed in testing the hydrogeochemical consistency of conceptual flow models. Integration and testing of flow models with hydrogeochemical information are considered to significantly improve the hydrogeological understanding of a site and increase confidence in conceptual hydrogeological models. (author)
Cohen, Bruce; Umansky, Maxim; Joseph, Ilon
2015-11-01
Progress is reported on including self-consistent zonal flows in simulations of drift-resistive ballooning turbulence using the BOUT++ framework. Previous published work addressed the simulation of L-mode edge turbulence in realistic single-null tokamak geometry using the BOUT three-dimensional fluid code that solves Braginskii-based fluid equations. The effects of imposed sheared E×B poloidal rotation were included, with a static radial electric field fitted to experimental data. In new work our goal is to include the self-consistent effects on the radial electric field driven by the microturbulence, which contributes to the sheared E×B poloidal rotation (zonal flow generation). We describe a model for including self-consistent zonal flows and an algorithm for maintaining underlying plasma profiles to enable the simulation of steady-state turbulence. We examine the role of Braginskii viscous forces in providing the necessary dissipation when including axisymmetric perturbations. We also report on some of the numerical difficulties associated with including the axisymmetric component of the fluctuating fields. This work was performed under the auspices of the U.S. Department of Energy under contract DE-AC52-07NA27344 at the Lawrence Livermore National Laboratory (LLNL-ABS-674950).
Energy Technology Data Exchange (ETDEWEB)
Ojima, D. [ed.
1992-12-31
The 1990 Global Change Institute (GCI) on Earth System Modeling is the third of a series organized by the Office for Interdisciplinary Earth Studies to look in depth at particular issues critical to developing a better understanding of the earth system. The 1990 GCI on Earth System Modeling was organized around three themes: defining critical gaps in the knowledge of the earth system, developing simplified working models, and validating comprehensive system models. This book is divided into three sections that reflect these themes. Each section begins with a set of background papers offering a brief tutorial on the subject, followed by working group reports developed during the institute. These reports summarize the joint ideas and recommendations of the participants and bring to bear the interdisciplinary perspective that imbued the institute. Since the conclusion of the 1990 Global Change Institute, research programs, nationally and internationally, have moved forward to implement a number of the recommendations made at the institute, and many of the participants have maintained collegial interactions to develop research projects addressing the needs identified during the two weeks in Snowmass.
International Nuclear Information System (INIS)
Kawashima, Masatoshi; Arie, Kazuo; Araki, Yoshio; Sato, Mitsuyoshi; Mori, Kenji; Nakayama, Yoshiyuki; Nakazono, Ryuichi; Kuroda, Yuji; Ishiguma, Kazuo; Fujii-e, Yoichi
2008-01-01
A sustainable nuclear energy system was developed based on the concept of the Self-Consistent Nuclear Energy System (SCNES). Our study shows that a trans-uranium (TRU) metallic-fuel fast reactor cycle coupled with recycling of five long-lived fission products (LLFPs) as well as actinides is the most promising system for sustainable nuclear utilization. Efficient utilization of uranium-238 through the SCNES concept opens the door to prolonging the lifetime of nuclear energy systems to several tens of thousands of years. Recent evolution of the concept revealed the compatibility of fuel sustainability, minor actinide (MA) minimization and non-proliferation aspects for the peaceful use of nuclear energy systems. For TRU compositions stabilized under fast neutron spectra, plutonium isotope fractions remain within the reactor-grade classification, with a high fraction of the Pu-240 isotope. Recent evolution of the SCNES concept has revealed that TRU recycling can cope with enhanced non-proliferation efforts in peaceful use through the 'no-blanket and multi-zoning core' concept. Therefore, the realization of SCNES is most important. In addition, along the path to these goals, a three-step approach is proposed to solve concurrent problems raised in LWR systems. We discuss possible roles and contributions to near-future demand accompanying the worldwide expansion of LWR capacity by applying the 1st-generation SCNES. MA fractions in TRU are more than 10% in LWR discharged fuels, and even higher, up to 20%, in fuels from long interim storage. TRU recycling in the 1st-generation SCNES system can reduce the MA fractions down to 4-5% in a few decades. This capability significantly relieves MA pressures downstream of LWR systems. Current efforts to enhance energy-generation capabilities of LWR systems are effective against the global warming crisis. In parallel to those movements, early realization of the SCNES concept can be the most viable decision.
Parametric Modeling for Fluid Systems
Pizarro, Yaritzmar Rosario; Martinez, Jonathan
2013-01-01
Fluid Systems involves different projects that require parametric modeling, which is a model that maintains consistent relationships between elements as it is manipulated. One of these projects is the Neo Liquid Propellant Testbed, which is part of Rocket U. As part of Rocket U (Rocket University), engineers at NASA's Kennedy Space Center in Florida have the opportunity to develop critical flight skills as they design, build and launch high-powered rockets. To build the Neo testbed, hardware from the Space Shuttle Program was repurposed. Modeling for Neo included fittings, valves, frames and tubing, among others. These models help in the review process, making sure regulations are being followed. Another fluid systems project that required modeling is Plant Habitat's TCUI test project. Plant Habitat is a plan to develop a large growth chamber to learn the effects of long-duration microgravity exposure to plants in space. Work for this project included the design and modeling of a duct vent for a flow test. Parametric modeling for these projects was done using Creo Parametric 2.0.
Charnay, B.; Bézard, B.; Baudino, J.-L.; Bonnefoy, M.; Boccaletti, A.; Galicher, R.
2018-02-01
We developed a simple, physical, and self-consistent cloud model for brown dwarfs and young giant exoplanets. We compared different parametrizations for the cloud particle size, fixing either the particle radii or the mixing efficiency (parameter f_sed), or estimating particle radii from simple microphysics. The cloud scheme with simple microphysics appears to be the best parametrization, successfully reproducing the observed photometry and spectra of brown dwarfs and young giant exoplanets. In particular, it reproduces the L–T transition, due to the condensation of silicate and iron clouds below the visible/near-IR photosphere. It also reproduces the reddening observed for low-gravity objects, due to an increase of cloud optical depth at low gravity. In addition, we found that the cloud greenhouse effect shifts chemical equilibrium, increasing the abundances of species stable at high temperature. This effect should significantly contribute to the strong variation of methane abundance at the L–T transition and to the methane depletion observed on young exoplanets. Finally, we predict the existence of a continuum of brown dwarfs and exoplanets for absolute J magnitude = 15–18 and J-K color = 0–3, due to the evolution of the L–T transition with gravity. This self-consistent model therefore provides a general framework to understand the effects of clouds and appears well-suited for atmospheric retrievals.
Sarofim, M. C.; Martinich, J.; Waldhoff, S.; DeAngelo, B. J.; McFarland, J.; Jantarasami, L.; Shouse, K.; Crimmins, A.; Li, J.
2014-12-01
The Climate Change Impacts and Risk Analysis (CIRA) project establishes a new multi-model framework to systematically assess the physical impacts, economic damages, and risks from climate change. The primary goal of this framework is to estimate the degree to which climate change impacts and damages in the United States are avoided or reduced in the 21st century under multiple greenhouse gas (GHG) emissions mitigation scenarios. The first phase of the CIRA project is a modeling exercise that included two integrated assessment models and 15 sectoral models encompassing five broad impacts sectors: water resources, electric power, infrastructure, human health, and ecosystems. Three consistent socioeconomic and climate scenarios are used to analyze the benefits of global GHG mitigation targets: a reference scenario and two policy scenarios with total radiative forcing targets in 2100 of 4.5 W/m² and 3.7 W/m². In this exercise, the implications of key uncertainties are explored, including climate sensitivity, climate model, natural variability, and model structures and parameters. This presentation describes the motivations and goals of the CIRA project; the design and academic contribution of the first CIRA modeling exercise; and briefly summarizes several papers published in a special issue of Climatic Change. The results across impact sectors show that GHG mitigation provides benefits to the United States that increase over time, the effects of climate change can be strongly influenced by near-term policy choices, adaptation can reduce net damages, and impacts exhibit spatial and temporal patterns that may inform mitigation and adaptation policy discussions.
Energy Technology Data Exchange (ETDEWEB)
Yin, Q; Jacobsen, B; Moynier, F; Hutcheon, I D
2007-05-02
New high-precision ⁵³Mn–⁵³Cr data obtained for chondrules extracted from a primitive ordinary chondrite, Chainpur (LL3.4), define an initial ⁵³Mn/⁵⁵Mn ratio of (5.1 ± 1.6) × 10⁻⁶. As a result of this downward revision from an earlier higher value of (9.4 ± 1.7) × 10⁻⁶ for the same meteorite (Nyquist et al. 2001), together with an assessment of recent literature, we show that a chronology consistent with other chronometers, such as the ²⁶Al–²⁶Mg and ²⁰⁷Pb–²⁰⁶Pb systems, emerges for the early Solar System.
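The initial ⁵³Mn/⁵⁵Mn ratio quoted above is the slope of an internal isochron: ⁵³Cr/⁵²Cr plotted against ⁵⁵Mn/⁵²Cr across chondrule fractions. A minimal sketch of that regression step, using synthetic data (not the Chainpur measurements; the helper name, data values, and uncertainties are all invented for illustration):

```python
import numpy as np

def isochron_fit(mn55_cr52, cr53_cr52, sigma):
    """Weighted least-squares isochron: the slope of 53Cr/52Cr versus
    55Mn/52Cr gives the initial 53Mn/55Mn ratio at isotopic closure.
    Hypothetical data-reduction helper, not the authors' pipeline."""
    w = 1.0 / np.asarray(sigma) ** 2
    x, y = np.asarray(mn55_cr52), np.asarray(cr53_cr52)
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    slope = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
    return slope, ym - slope * xm

# Synthetic chondrule data generated to be consistent with the reported
# ratio of 5.1e-6 (arbitrary intercept and noise level)
rng = np.random.default_rng(0)
x = np.array([0.5, 1.2, 2.8, 5.5, 9.0]) * 1e4          # 55Mn/52Cr
y = 0.1 + 5.1e-6 * x + rng.normal(0, 2e-3, x.size)      # 53Cr/52Cr
slope, intercept = isochron_fit(x, y, np.full(x.size, 2e-3))
```

The fitted slope recovers the input ratio to within the propagated noise.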
Time-dependent restricted-active-space self-consistent-field theory for bosonic many-body systems
International Nuclear Information System (INIS)
Lévêque, Camille; Madsen, Lars Bojer
2017-01-01
We develop an ab initio time-dependent wavefunction based theory for the description of a many-body system of cold interacting bosons. Like the multi-configurational time-dependent Hartree method for bosons (MCTDHB), the theory is based on a configurational interaction Ansatz for the many-body wavefunction with time-dependent self-consistent-field orbitals. The theory generalizes the MCTDHB method by incorporating restrictions on the active space of the orbital excitations. The restrictions are specified based on the physical situation at hand. The equations of motion of this time-dependent restricted-active-space self-consistent-field (TD-RASSCF) theory are derived. The similarity between the formal development of the theory for bosons and fermions is discussed. The restrictions on the active space allow the theory to be evaluated under conditions where other wavefunction based methods cannot be, owing to exponential scaling in the numerical effort, and allow a clear identification of the excitations that are important for an accurate description, significantly beyond the mean-field approach. For ground state calculations we find it to be important to allow a few particles the freedom to move in many orbitals, an insight facilitated by the flexibility of the restricted-active-space Ansatz. Moreover, we find that a high accuracy can be obtained by including only even excitations in the many-body self-consistent-field wavefunction. Time-dependent simulations of harmonically trapped bosons subject to a quenching of their noncontact interaction show failure of the mean-field Gross-Pitaevskii approach within a fraction of a harmonic oscillation period. The TD-RASSCF theory remains accurate at much reduced computational cost compared to the MCTDHB method. Exploring the effect of changes of the restricted active space allows us to identify that even self-consistent-field excitations are mainly responsible for the accuracy of the method. (paper)
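The payoff of restricting the active space can be illustrated by simply counting bosonic number states. The toy enumeration below (not the TD-RASSCF working equations; particle and orbital numbers are arbitrary) shows how limiting the number of bosons promoted out of the lowest orbital, or keeping only even excitation numbers, shrinks the configuration space:

```python
def boson_configs(n_bosons, n_orbitals):
    """Yield all occupation-number vectors (n0, ..., nM-1) summing to n_bosons."""
    if n_orbitals == 1:
        yield (n_bosons,)
        return
    for n0 in range(n_bosons, -1, -1):
        for rest in boson_configs(n_bosons - n0, n_orbitals - 1):
            yield (n0,) + rest

def ras_space(n_bosons, n_orbitals, max_excited, even_only=False):
    """Keep configurations with at most `max_excited` bosons promoted out of
    the lowest orbital, optionally only even numbers of excited bosons."""
    keep = []
    for cfg in boson_configs(n_bosons, n_orbitals):
        excited = n_bosons - cfg[0]
        if excited <= max_excited and (not even_only or excited % 2 == 0):
            keep.append(cfg)
    return keep

full = list(boson_configs(10, 4))        # full MCTDHB-like space: 286 states
ras = ras_space(10, 4, max_excited=2)    # at most 2 bosons out of orbital 0
even = ras_space(10, 4, max_excited=10, even_only=True)  # even excitations only
```

For 10 bosons in 4 orbitals the full space holds 286 configurations, while the two restricted spaces hold 10 and 161.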
International Nuclear Information System (INIS)
Lerche, I.; Low, B.C.
1977-01-01
A theoretical model of quiescent prominences in the form of an infinite vertical sheet is presented. Self-consistent solutions are obtained by integrating simultaneously the set of nonlinear equations of magnetostatic equilibrium and thermal balance. The basic features of the models are: (1) The prominence matter is confined to a sheet and supported against gravity by a bowed magnetic field. (2) The thermal flux is channelled along magnetic field lines. (3) The thermal flux is everywhere balanced by Low's (1975) hypothetical heat sink, which is proportional to the local density. (4) A constant component of the magnetic field along the length of the prominence shields the cool plasma from the hot surroundings. It is assumed that the prominence plasma emits more radiation than it absorbs from the radiation fields of the photosphere, chromosphere and corona, and the above hypothetical heat sink is interpreted to represent the amount of radiative loss that must be balanced by a nonradiative energy input. Using a central density and temperature of 10¹¹ particles cm⁻³ and 5000 K respectively, a magnetic field strength between 2 and 10 gauss and a thermal conductivity that varies linearly with temperature, the physical properties implied by the model are discussed. The analytic treatment can also be carried out for a class of more complex thermal conductivities. These models provide a useful starting point for investigating the combined requirements of magnetostatic equilibrium and thermal balance in the quiescent prominence. (Auth.)
Mohammed, Asadig; Murugan, Jeff; Nastase, Horatiu
2012-11-02
We present an embedding of the three-dimensional relativistic Landau-Ginzburg model for condensed matter systems in an N = 6, U(N) × U(N) Chern-Simons-matter theory [the Aharony-Bergman-Jafferis-Maldacena model] by consistently truncating the latter to an Abelian effective field theory encoding the collective dynamics of O(N) of the O(N²) modes. In fact, depending on the vacuum expectation value of one of the Aharony-Bergman-Jafferis-Maldacena scalars, a mass deformation parameter μ and the Chern-Simons level number k, our Abelianization prescription allows us to interpolate between the Abelian Higgs model with its usual multivortex solutions and a ϕ⁴ theory. We sketch a simple condensed matter model that reproduces all the salient features of the Abelianization. In this context, the Abelianization can be interpreted as giving a dimensional reduction from four dimensions.
Self-Consistent Numerical Modeling of E-Cloud Driven Instability of a Bunch Train in the CERN SPS
International Nuclear Information System (INIS)
Vay, J.-L.; Furman, M.A.; Secondo, R.; Venturini, M.; Fox, J.D.; Rivetta, C.H.
2010-01-01
The simulation package WARP-POSINST was recently upgraded for handling multiple bunches and modeling concurrently the electron cloud buildup and its effect on the beam, allowing for direct self-consistent simulation of bunch trains generating, and interacting with, electron clouds. We have used the WARP-POSINST package on massively parallel supercomputers to study the growth rate and frequency patterns in space-time of the electron cloud driven transverse instability for a proton bunch train in the CERN SPS accelerator. Results suggest that a positive feedback mechanism exists between the electron buildup and the e-cloud driven transverse instability, leading to a net increase in predicted electron density. Comparisons to selected experimental data are also given. Electron clouds have been shown to trigger fast growing instabilities on proton beams circulating in the SPS and other accelerators. So far, simulations of electron cloud buildup and their effects on beam dynamics have been performed separately. This is a consequence of the large computational cost of the combined calculation due to large space and time scale disparities between the two processes. We have presented the latest improvements of the simulation package WARP-POSINST for the simulation of self-consistent ecloud effects, including mesh refinement, and generation of electrons from gas ionization and impact at the pipe walls. We also presented simulations of two consecutive bunches interacting with electron clouds in the SPS, which included generation of secondary electrons. The distribution of electrons in front of the first beam was initialized from a dump taken from a preceding buildup calculation using the POSINST code. In this paper, we present an extension of this work where one full batch of 72 bunches is simulated in the SPS, including the entire buildup calculation and the self-consistent interaction between the bunches and the electrons. Comparisons to experimental data are also given.
International Nuclear Information System (INIS)
Rose, Harvey A.; Russell, David A.
2001-01-01
A Vlasov equation based model is used to determine various regimes of electron plasma wave response to a source appropriate to stimulated scatter in a laser hot spot. It incorporates trapped particle effects such as the standard nonlinear frequency shift, extended beyond the weak regime, and a reduction of damping à la Zakharov and Karpman [V. E. Zakharov and V. I. Karpman, JETP 16, 351 (1963)]. The results are consistent with those of Holloway and Dorning [J. P. Holloway and J. J. Dorning, Phys. Rev. A 44, 3856 (1991)] for small amplitude Bernstein-Greene-Kruskal modes. This leads to the prediction that as long as kλ_D ≥ 0.53 for a background Maxwellian distribution function, e.g., a 5 keV plasma with n_e/n_c ≤ 0.075, anomalously large backward stimulated Raman scatter can be excluded. A similar analysis leads to density limits on stimulated Brillouin scatter.
Dai, Junyi; Kerestes, Rebecca; Upton, Daniel J; Busemeyer, Jerome R; Stout, Julie C
2015-01-01
The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning (EVL) model and the prospect valence learning (PVL) model, have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
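The winning model combines three rules: a prospect-type utility that evaluates gains and losses separately, decay-reinforcement updating of deck expectancies, and a trial-independent softmax choice rule. A schematic simulation of that combination (not the fitted model; the parameter values and the deck payoff scheme are invented for illustration):

```python
import numpy as np

def prospect_utility(gain, loss, alpha=0.5, beta=0.5, lam=2.0):
    # Prospect-type evaluation: gains and losses treated separately,
    # losses weighted by an aversion parameter lam (illustrative values)
    return gain ** alpha - lam * abs(loss) ** beta

def igt_like(deck, rng):
    # Invented payoff scheme loosely shaped like the four IGT decks
    gain = [100, 100, 50, 50][deck]
    p_loss = [0.5, 0.1, 0.5, 0.1][deck]
    loss = [250, 1250, 50, 250][deck] if rng.random() < p_loss else 0
    return gain, loss

def simulate_pvl2(payoffs, decay=0.8, theta=0.1, n_trials=100, seed=1):
    """Decay-reinforcement updating plus a trial-independent softmax choice
    rule: every expectancy decays each trial, only the chosen deck is
    reinforced, and the choice sensitivity theta is constant over trials."""
    rng = np.random.default_rng(seed)
    ev = np.zeros(4)                       # expectancies for decks A-D
    choices = []
    for _ in range(n_trials):
        z = theta * ev
        z -= z.max()                       # numerically stable softmax
        p = np.exp(z) / np.exp(z).sum()
        c = rng.choice(4, p=p)
        gain, loss = payoffs(c, rng)
        ev *= decay                        # all expectancies decay...
        ev[c] += prospect_utility(gain, loss)  # ...chosen deck is reinforced
        choices.append(c)
    return np.array(choices), ev

choices, ev = simulate_pvl2(igt_like)
```

Model comparison would then fit such simulators to each participant's choice sequence and score them against the statistical baseline.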
Directory of Open Access Journals (Sweden)
Junyi eDai
2015-03-01
Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
International Nuclear Information System (INIS)
Zhang, Bo; Ye, Xianggui; Edwards, Brian J.
2013-01-01
A combination of self-consistent field theory and density functional theory was used to examine the stable, 3-dimensional equilibrium morphologies formed by diblock copolymers with a tethered nanoparticle attached either between the two blocks or at the end of one of the blocks. Both neutral and interacting particles were examined, with and without favorable/unfavorable energetic potentials between the particles and the block segments. The phase diagrams of the various systems were constructed, allowing the identification of three types of ordered mesophases composed of lamellae, hexagonally packed cylinders, and spheroids. In particular, we examined the conditions under which the mesophases could be generated wherein the tethered particles were primarily located within the interface between the two blocks of the copolymer. Key factors influencing these properties were determined to be the particle position along the diblock chain, the interaction potentials of the blocks and particles, the block copolymer composition, and molecular weight of the copolymer.
Information Systems Efficiency Model
Directory of Open Access Journals (Sweden)
Milos Koch
2017-07-01
Full Text Available This contribution discusses the basic concept of creating a new model for the efficiency and effectiveness assessment of company information systems. The present trends in this field are taken into account, and the attributes of measuring the optimal solutions for a company’s ICT (the implementation, functionality, service, innovations, safety, relationships, costs, etc.) are retained. The proposal of a new model of assessment comes from our experience with formerly implemented and employed methods, methods which we have modified in time and adapted to companies’ needs but also to the necessities of our research that has been done through the ZEFIS portal. The most noteworthy of them is the HOS method that we have discussed in a number of forums. Its main feature is the fact that it respects the complexity of an information system in correlation with the balanced state of its individual parts.
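The balance idea behind the HOS method can be sketched in a few lines. This is a simplified reading, not the published scoring rules: the overall system state is taken as the weakest assessed area, and the system counts as balanced when no area outruns that level by more than one point; the area names and the one-point threshold are assumptions.

```python
def hos_state(area_scores):
    """Simplified HOS-style assessment sketch (hypothetical scoring):
    the overall state of an information system equals its weakest area
    on a 1-5 ordinal scale, and the system is 'balanced' if no area
    exceeds that level by more than one point."""
    overall = min(area_scores.values())
    balanced = all(v - overall <= 1 for v in area_scores.values())
    return overall, balanced

# Illustrative scores for the eight commonly assessed areas
areas = {"hardware": 4, "software": 3, "orgware": 3, "peopleware": 4,
         "dataware": 3, "customers": 4, "suppliers": 3, "management": 3}
overall, balanced = hos_state(areas)
```

With these scores the overall state is 3 and the system is balanced, reflecting the premise that an information system is only as good as its weakest part.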
Energy Technology Data Exchange (ETDEWEB)
Chen, Zhaoquan, E-mail: zqchen@aust.edu.cn [Faculty of Physics, St. Petersburg State University, St. Petersburg 198504 (Russian Federation); College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Yin, Zhixiang, E-mail: zxyin66@163.com; Chen, Minggong; Hong, Lingli; Hu, Yelin; Huang, Yourui [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Xia, Guangqing; Liu, Minghai [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Kudryavtsev, A. A. [Faculty of Physics, St. Petersburg State University, St. Petersburg 198504 (Russian Federation)
2014-10-21
In the present study, a pulsed low-power microwave-driven atmospheric-pressure argon plasma jet based on a coaxial transmission line resonator is introduced. The plasma jet plume is at room air temperature and can even be touched directly by the human body without any thermal harm. To study the ionization process of the proposed plasma jet, a self-consistent hybrid fluid model is constructed in which Maxwell's equations are solved numerically by the finite-difference time-domain method and a fluid model is used to study the characteristics of argon plasma evolution. With a Gauss-type input power function, the spatio-temporal distributions of the electron density, the electron temperature, the electric field, and the absorbed power density have been simulated, respectively. The simulation results suggest that the peak values of the electron temperature and the electric field are synchronous with the input pulsed microwave power, whereas the maxima of the electron density and the absorbed power density lag behind the microwave power excitation. In addition, the pulsed plasma jet excited by the local enhanced electric field of surface plasmon polaritons should be the discharge mechanism of the proposed plasma jet.
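The electromagnetic half of such a hybrid model rests on an FDTD update of Maxwell's equations. Below is a minimal 1D vacuum Yee scheme with a Gaussian drive, in normalized units; the argon fluid equations, the resonator geometry, and all grid/source parameters here are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def fdtd_1d(nz=200, nt=300, source_k=50):
    """Minimal 1D Yee FDTD update in vacuum (normalized units, Courant
    number 1) driven by an additive Gaussian pulse -- a sketch of the
    Maxwell half of the hybrid model only."""
    ez = np.zeros(nz)
    hy = np.zeros(nz)
    for n in range(nt):
        hy[:-1] += ez[1:] - ez[:-1]                         # H from curl E
        ez[1:] += hy[1:] - hy[:-1]                          # E from curl H
        ez[source_k] += np.exp(-((n - 40.0) / 12.0) ** 2)   # Gaussian drive
    return ez

field = fdtd_1d()
```

In the full model this field solver would be coupled each step to fluid equations for the electron density and temperature, which feed back through the plasma current.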
International Nuclear Information System (INIS)
Saleh, Ahmed A.; Pereloma, Elena V.; Clausen, Bjørn; Brown, Donald W.; Tomé, Carlos N.; Gazder, Azdiar A.
2014-01-01
The evolution of lattice strains in a fully recrystallised Fe–24Mn–3Al–2Si–1Ni–0.06C TWinning Induced Plasticity (TWIP) steel subjected to uniaxial tensile loading up to a true strain of ∼35% was investigated via in-situ neutron diffraction. Typical of fcc elastic and plastic anisotropy, the {111} and {200} grain families record the lowest and highest lattice strains, respectively. Using modelling cases with and without latent hardening, the recently extended Elasto-Plastic Self-Consistent model successfully predicted the macroscopic stress–strain response, the evolution of lattice strains and the development of crystallographic texture. Compared to the isotropic hardening case, latent hardening did not have a significant effect on lattice strains and returned a relatively faster development of a stronger 〈111〉 and a weaker 〈100〉 double fibre parallel to the tensile axis. Close correspondence between the experimental lattice strains and those predicted using particular orientations embedded within a random aggregate was obtained. The result suggests that the exact orientations of the surrounding aggregate have a weak influence on the lattice strain evolution.
Energy Technology Data Exchange (ETDEWEB)
Powell, Brian [Clemson Univ., SC (United States); Kaplan, Daniel I [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Arai, Yuji [Univ. of Illinois, Urbana-Champaign, IL (United States); Becker, Udo [Univ. of Michigan, Ann Arbor, MI (United States); Ewing, Rod [Stanford Univ., CA (United States)
2016-12-29
This university-led SBR project is a collaboration led by Dr. Brian Powell (Clemson University) with co-principal investigators Dan Kaplan (Savannah River National Laboratory), Yuji Arai (presently at the University of Illinois), Udo Becker (U of Michigan) and Rod Ewing (presently at Stanford University). Hypothesis: The underlying hypothesis of this work is that strong interactions of plutonium with mineral surfaces are due to formation of inner sphere complexes with a limited number of high-energy surface sites, which results in sorption hysteresis where Pu(IV) is the predominant sorbed oxidation state. The energetic favorability of the Pu(IV) surface complex is strongly influenced by positive sorption entropies, which are mechanistically driven by displacement of solvating water molecules from the actinide and mineral surface during sorption. Objectives: The overarching objective of this work is to examine Pu(IV) and Pu(V) sorption to pure metal (oxyhydr)oxide minerals and sediments using variable temperature batch sorption, X-ray absorption spectroscopy, electron microscopy, and quantum-mechanical and empirical-potential calculations. The data will be compiled into a self-consistent surface complexation model. The novelty of this effort lies largely in the manner in which the information from these measurements and calculations will be combined into a model that will be used to evaluate the thermodynamics of plutonium sorption reactions as well as predict sorption of plutonium to sediments from DOE sites using a component additivity approach.
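Variable-temperature batch sorption data enter such a thermodynamic model through a van't Hoff analysis: fitting ln K against 1/T yields the sorption enthalpy and the (hypothesized positive) sorption entropy. A sketch with synthetic numbers, not measured Pu distribution coefficients:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def vant_hoff_fit(T, lnK):
    """van't Hoff relation ln K = -dH/(R*T) + dS/R: a linear fit of
    ln K versus 1/T returns (dH, dS). Illustrative analysis step only."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.asarray(lnK), 1)
    return -slope * R, intercept * R

# Synthetic equilibrium constants built from assumed dH and a positive dS,
# mimicking entropy-driven sorption (values are invented, not measured)
T = np.array([288.0, 298.0, 308.0, 318.0])
dH_true, dS_true = 20e3, 150.0
lnK = -dH_true / (R * T) + dS_true / R
dH, dS = vant_hoff_fit(T, lnK)
```

A positive fitted dS, as hypothesized for water displacement during inner-sphere complexation, makes sorption increasingly favorable at higher temperature.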
Modeling Control Situations in Power System Operations
DEFF Research Database (Denmark)
Saleem, Arshad; Lind, Morten; Singh, Sri Niwas
2010-01-01
Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack of explicit principles for model construction. This paper presents a work on using explicit means-ends model based reasoning about complex control situations which results in maintaining consistent perspectives and selecting appropriate control action for goal driven agents. An example of power system ...
Dziedzic, Adam; Mulawka, Jan
2014-11-01
NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when to use them and how to use them in an appropriate way. In this submission descriptions of selected NoSQL databases are presented. Each of the databases is analysed with primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared in the area of data references. The relational databases offer foreign keys, whereas NoSQL databases provide us with limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, the proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
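The reference problem the paper points at can be shown in a few lines: a document store keeps raw IDs that can silently dangle after a delete, with no foreign-key constraint to catch it. The toy in-memory data below stands in for any particular NoSQL engine:

```python
# Hypothetical mini-example: documents reference users by plain ID.
users = {1: {"name": "Ann"}, 2: {"name": "Bo"}}
posts = [
    {"id": 10, "author": 1},
    {"id": 11, "author": 3},   # dangling: user 3 no longer exists
]

def dangling_refs(docs, key, target):
    """Return documents whose reference has no matching target record --
    the consistency check a relational FOREIGN KEY would enforce for us."""
    return [d for d in docs if d[key] not in target]

bad = dangling_refs(posts, "author", users)
```

In a relational database the insert of post 11 would be rejected outright; in a document store the application must run such checks itself.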
Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch
2016-11-14
Self Consistent Field (SCF) theory serves as an efficient tool for studying mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory, which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of the TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate for the missing correlations. Increasing σ reduces correlations and SCF reproduces well the free energy in MC simulations.
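The TI step can be sketched on a system whose free energy is known in closed form: a classical 1D harmonic oscillator whose spring constant is switched by a coupling parameter λ. The exact canonical average stands in for MC sampling, and no Goldstone-mode field is needed in this toy case; the setup is an illustration of the integration scheme, not the paper's LC protocol.

```python
import numpy as np

def delta_F_TI(k0, k1, n_lambda=101):
    """Thermodynamic integration dF = int_0^1 <dU/dlam> dlam for a classical
    1D harmonic oscillator with k(lam) = k0 + lam*(k1 - k0) at beta = 1.
    The exact canonical average <x^2> = 1/k(lam) replaces MC sampling."""
    lam = np.linspace(0.0, 1.0, n_lambda)
    k = k0 + lam * (k1 - k0)
    dUdl = 0.5 * (k1 - k0) / k          # <dU/dlam> = (k1-k0)/2 * <x^2>
    # trapezoidal quadrature over the switching path
    return float(np.sum((dUdl[1:] + dUdl[:-1]) * np.diff(lam)) / 2.0)

dF = delta_F_TI(1.0, 4.0)   # exact result is 0.5*ln(k1/k0) = 0.5*ln(4)
```

In the MC setting each quadrature node would require a separate equilibrium simulation to estimate the bracketed average.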
International Nuclear Information System (INIS)
Martemyanova, Julia A; Ivanov, Victor A; Paul, Wolfgang
2014-01-01
We study conformational properties of a single multiblock copolymer chain consisting of flexible and semiflexible blocks. Monomer units of different blocks are equivalent in the sense of the volume interaction potential, but the intramolecular bending potential between successive bonds along the chain is different. We consider a single flexible-semiflexible regular multiblock copolymer chain with equal content of flexible and semiflexible units and vary the length of the blocks and the stiffness parameter. We perform flat histogram type Monte Carlo simulations based on the Wang-Landau approach and employ the bond fluctuation lattice model. We present here our data on different non-trivial globular morphologies which we have obtained in our model for different values of the block length and the stiffness parameter. We demonstrate that the collapse can occur in one or in two stages depending on the values of both these parameters and discuss the role of the inhomogeneity of intraglobular distributions of monomer units of both flexible and semiflexible blocks. For short block length and/or large stiffness the collapse occurs in two stages, because it goes through intermediate (meta-)stable structures, like a dumbbell shaped conformation. In such conformations the semiflexible blocks form a cylinder-like core, and the flexible blocks form two domains at both ends of such a cylinder. For long block length and/or small stiffness the collapse occurs in one stage, and in typical conformations the flexible blocks form a spherical core of a globule while the semiflexible blocks are located on the surface and wrap around this core.
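The flat-histogram Wang-Landau idea used above can be shown on a toy model where the density of states is known exactly: the "energy" is the number of up units among N independent two-state units, so g(E) is the binomial coefficient. This is a sketch of the sampling scheme only, not the bond fluctuation copolymer model; all parameters are illustrative.

```python
import math, random

def wang_landau(n_units=10, f_final=1e-6, flat=0.8, seed=2):
    """Wang-Landau estimate of ln g(E) for a toy model where E counts the
    'up' units among n_units, so the exact g(E) is binom(n_units, E)."""
    random.seed(seed)
    state = [0] * n_units
    E = 0
    ln_g = [0.0] * (n_units + 1)
    hist = [0] * (n_units + 1)
    ln_f = 1.0                                   # modification factor
    while ln_f > f_final:
        for _ in range(10000):
            i = random.randrange(n_units)
            E_new = E + (1 - 2 * state[i])       # effect of flipping unit i
            if random.random() < math.exp(ln_g[E] - ln_g[E_new]):
                state[i] ^= 1
                E = E_new
            ln_g[E] += ln_f                      # penalize visited level
            hist[E] += 1
        if min(hist) > flat * (sum(hist) / len(hist)):   # histogram flat?
            hist = [0] * (n_units + 1)
            ln_f /= 2.0                          # refine and continue
    return ln_g

ln_g = wang_landau()
```

Because only differences of ln g are meaningful, accuracy is checked against ratios such as g(5)/g(0) = C(10,5) = 252.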
The KBC Void: Consistency with Supernovae Type Ia and the Kinematic SZ Effect in a ΛLTB Model
Hoscheit, Benjamin L.; Barger, Amy J.
2018-02-01
There is substantial and growing observational evidence from the normalized luminosity density in the near-infrared that the local universe is underdense on scales of several hundred megaparsecs. We test whether our parameterization of the observational data of such a “void” is compatible with the latest supernovae type Ia data and with constraints from line-of-sight peculiar-velocity motions of galaxy clusters with respect to the cosmic microwave background rest-frame, known as the linear kinematic Sunyaev–Zel’dovich (kSZ) effect. Our study is based on the large local void (LLV) radial profile observed by Keenan, Barger, and Cowie (KBC) and a theoretical void description based on the Lemaître–Tolman–Bondi model with a nonzero cosmological constant (ΛLTB). We find consistency with the measured luminosity distance–redshift relation on radial scales relevant to the KBC LLV through a comparison with 217 low-redshift supernovae type Ia over the redshift range 0.0233 < z < 0.07. We also find that current constraints on the linear kSZ effect, such as those from the Atacama Cosmology Telescope, are fully compatible with the existence of the KBC LLV.
Pham, Hung Q; Bernales, Varinia; Gagliardi, Laura
2018-03-13
Density matrix embedding theory (DMET) [Phys. Rev. Lett. 2012, 109, 186404] has been demonstrated as an efficient wave-function-based embedding method to treat extended systems. Despite its success in many quantum lattice models, the extension of DMET to real chemical systems has been tested only on selected cases. Herein, we introduce the use of the complete active space self-consistent field (CASSCF) method as a correlated impurity solver for DMET, leading to a method called CAS-DMET. We test its performance in describing the dissociation of H–H single bonds in a H₁₀ ring model system and an N═N double bond in azomethane (CH₃-N═N-CH₃) and pentyldiazene (CH₃(CH₂)₄-N═NH). We find that the performance of CAS-DMET is comparable to CASSCF with different active space choices when single-embedding DMET corresponding to only one embedding problem for the system is used. When multiple embedding problems are used for the system, the CAS-DMET is in good agreement with CASSCF for the geometries around the equilibrium, but not in equal agreement at bond dissociation.
Rondón, Carmen; Campo, Paloma; Zambonino, Maria Angeles; Blanca-Lopez, Natalia; Torres, Maria J; Melendez, Lidia; Herrera, Rocio; Guéant-Rodriguez, Rosa-Maria; Guéant, Jean-Louis; Canto, Gabriela; Blanca, Miguel
2014-04-01
Local allergic rhinitis (LAR) is a common disease that affects 25.7% of the rhinitis population and more than 47% of patients previously diagnosed with nonallergic rhinitis. Whether LAR is the first step in the natural history of allergic rhinitis (AR) with systemic atopy or a consistent entity is unknown. The aim was to evaluate the natural history of a population with LAR of recent onset and the development of AR and asthma. A prospective 10-year follow-up study with initial cohorts of 194 patients with LAR of recent onset and 130 healthy controls is being undertaken. A clinical-demographic questionnaire, spirometry, skin prick test, and specific IgE to aeroallergens were done yearly. Nasal allergen provocation tests with Dermatophagoides pteronyssinus, Alternaria alternata, Olea europea, and a mix of grass pollen were performed at baseline and after 5 years. At disease onset, most of the patients with LAR had moderate-to-severe persistent-perennial rhinitis; conjunctivitis and asthma were the main comorbidities (51.1% and 18.8%, respectively), and D pteronyssinus was the most relevant aeroallergen (51.1%). After 5 years of follow-up, a worsening of rhinitis was detected in 26.2%, with an increase in symptom persistence and severity, and new associations with conjunctivitis and asthma. Atopy was detected by skin prick test and/or serum specific-IgE in patients with LAR (6.81%) and in controls (4.5%). This study shows a similar rate of development of systemic atopy in LAR and controls, which suggests that LAR is an entity well differentiated from AR. To determine the natural course of LAR more precisely, this study is in progress to complete 10 years of follow-up. Copyright © 2013 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
Vlad, Marcel O; Popa, Vlad T; Ross, John
2011-02-03
We examine the problem of consistency between the kinetic and thermodynamic descriptions of reaction networks. We focus on reaction networks with linearly dependent (but generally kinetically independent) reactions for which only some of the stoichiometric vectors attached to the different reactions are linearly independent. We show that for elementary reactions without constraints preventing the system from approaching equilibrium there are general scaling relations for nonequilibrium rates, one for each linearly dependent reaction. These scaling relations express the ratios of the forward and backward rates of the linearly dependent reactions in terms of products of the ratios of the forward and backward rates of the linearly independent reactions raised to different scaling powers; the scaling powers are elements of the transformation matrix, which relates the linearly dependent stoichiometric vectors to the linearly independent stoichiometric vectors. These relations are valid for any network of elementary reactions without constraints, linear or nonlinear kinetics, far from equilibrium or close to equilibrium. We show that similar scaling relations for the reaction routes exist for networks of nonelementary reactions described by the Horiuti-Temkin theory of reaction routes where the linear dependence of the mechanistic (elementary) reactions is transferred to the overall (route) reactions. However, in this case, the scaling conditions are valid only at the steady state. General relationships between reaction rates of the two levels of description are presented. These relationships are illustrated for a specific complex reaction: radical chlorination of ethylene.
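The scaling relation is easy to verify numerically on the smallest linearly dependent network, the triangle A↔B, B↔C, A↔C, where the third stoichiometric vector is the sum of the first two and both scaling powers (the entries of the transformation matrix row) equal 1. The mass-action rate constants and concentrations below are invented for illustration:

```python
import numpy as np

# Triangle network: R1 = A<->B, R2 = B<->C, R3 = A<->C, so s3 = s1 + s2.
# The scaling relation then predicts r3+/r3- = (r1+/r1-)^1 * (r2+/r2-)^1
# once the rate constants obey the Wegscheider condition K3 = K1*K2.
kf = {1: 2.0, 2: 5.0}
kb = {1: 1.0, 2: 2.5, 3: 0.5}
kf[3] = kb[3] * (kf[1] / kb[1]) * (kf[2] / kb[2])   # enforce K3 = K1*K2

A, B, C = 3.0, 0.7, 1.9      # arbitrary nonequilibrium concentrations
ratio1 = (kf[1] * A) / (kb[1] * B)
ratio2 = (kf[2] * B) / (kb[2] * C)
ratio3 = (kf[3] * A) / (kb[3] * C)
check = np.isclose(ratio3, ratio1 * ratio2)   # -> True
```

The identity holds for any concentrations, matching the claim that the relation is valid arbitrarily far from equilibrium; the intermediate concentration B cancels between the two independent ratios.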
Time-dependent restricted-active-space self-consistent-field theory for bosonic many-body systems
Leveque, Camille; Madsen, Lars Bojer
2017-04-01
We have developed an ab initio time-dependent wavefunction based theory for the description of many-body systems of bosons. The theory is based on a configurational interaction Ansatz for the many-body wavefunction with time-dependent self-consistent-field orbitals. The active space of the orbital excitations is subject to restrictions to be specified based on the physical situation at hand. The restrictions on the active space allow the theory to be evaluated under conditions where other wavefunction based methods, due to exponential scaling in the numerical efforts, cannot. The restrictions also allow us to clearly identify the excitations that are important for an accurate description, significantly beyond the mean-field approach. We first apply this theory to compute the ground-state energy of tens of trapped bosons, and second to simulate the dynamics following an instantaneous quenching of a non-contact interaction. The method provides accurate results and its computational cost is largely reduced compared with other wavefunction based many-body methods thanks to the restriction of the active orbital space. The important excitations are clearly identified and the method provides a new way to gain insight into correlation effects. This work was supported by the ERC-StG (Project No. 277767-TDMET) and the VKR center of excellence, QUSCOPE.
International Nuclear Information System (INIS)
Gordeev, A.V.
1996-01-01
The electron inertia effects in the one-dimensional model of the applied-B ion diode for the relativistic diode potential eU/(m_e c²) ≥ 1 were investigated, where the magnetic Debye length r_B is of the order of the collisionless electron skin depth c/ω_pe. For this, an analytical relation between the magnetic field and the electric potential was developed, owing to which the second-order eigenvalue problem can be reduced to a system of algebraic equations. Instabilities inside the vacuum gap and in the near-anode emitting plasma are considered. In the near-anode Hall plasma, an instability with two ion species was obtained; this can contribute to the ion angle divergence. (author). 10 refs
A NEW ALGORITHM FOR SELF-CONSISTENT THREE-DIMENSIONAL MODELING OF COLLISIONS IN DUSTY DEBRIS DISKS
International Nuclear Information System (INIS)
Stark, Christopher C.; Kuchner, Marc J.
2009-01-01
We present a new 'collisional grooming' algorithm that enables us to model images of debris disks where the collision time is less than the Poynting-Robertson (PR) time for the dominant grain size. Our algorithm uses the output of a collisionless disk simulation to iteratively solve the mass flux equation for the density distribution of a collisional disk containing planets in three dimensions. The algorithm can be run on a single processor in ∼1 hr. Our preliminary models of disks with resonant ring structures caused by terrestrial mass planets show that the collision rate for background particles in a ring structure is enhanced by a factor of a few compared to the rest of the disk, and that dust grains in or near resonance have even higher collision rates. We show how collisions can alter the morphology of a resonant ring structure by reducing the sharpness of a resonant ring's inner edge and by smearing out azimuthal structure. We implement a simple prescription for particle fragmentation and show how PR drag and fragmentation sort particles by size, producing smaller dust grains at smaller circumstellar distances. This mechanism could cause a disk to look different at different wavelengths, and may explain the warm component of dust interior to Fomalhaut's outer dust ring seen in the resolved 24 μm Spitzer image of this system.
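The iterative structure of such an algorithm — seed with a collisionless density, reweight each parcel by its collisional survival probability, and repeat until self-consistent — can be caricatured in zero dimensions. The sketch below is not the authors' 3-D code; the rate coefficient and the fixed-point reduction are illustrative assumptions.

```python
import math

def groom(seed_density, k_coll, transit_time, n_iter=60):
    """Zero-dimensional caricature of a 'collisional grooming' iteration.

    The groomed density is the collisionless (seed) density reweighted by a
    survival probability exp(-t/t_coll), where the collision time depends on
    the groomed density itself (t_coll ~ 1/(k_coll * rho)). Iterating drives
    the density to the self-consistent fixed point."""
    rho = seed_density
    for _ in range(n_iter):
        rho = seed_density * math.exp(-k_coll * rho * transit_time)
    return rho

rho = groom(1.0, 0.5, 1.0)  # converges to the self-consistent density
```

The fixed-point map is a contraction here (|f'| < 1 at the solution), so a few dozen sweeps suffice, mirroring the abstract's claim that the full 3-D algorithm runs in about an hour on one processor.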
Grey Box Modelling of Hydrological Systems
DEFF Research Database (Denmark)
Thordarson, Fannar Ørn
The main topic of the thesis is grey box modelling of hydrological systems, as well as the formulation and assessment of their embedded uncertainties. A grey box model is a combination of a white box model, a physically-based model that is traditionally formulated using deterministic ordinary differential...... the lack of fit in the state space formulation, and further support decisions for a model expansion. By using stochastic differential equations to formulate the dynamics of the hydrological system, either the complexity of the model can be increased by including the necessary hydrological processes...... in the model, or a formulation of the process noise can be considered so that it meets the physical limits of the hydrological system and gives an adequate description of the embedded uncertainty in the model structure. The thesis consists of two parts: a summary report and a part which contains six scientific papers...
Di Remigio, Roberto; Beerepoot, Maarten T P; Cornaton, Yann; Ringholm, Magnus; Steindal, Arnfinn Hykkerud; Ruud, Kenneth; Frediani, Luca
2016-12-21
The study of high-order absorption properties of molecules is a field of growing importance. Quantum-chemical studies can help design chromophores with desirable characteristics. Given that most experiments are performed in solution, it is important to devise a cost-effective strategy to include solvation effects in quantum-chemical studies of these properties. We here present an open-ended formulation of self-consistent field (SCF) response theory for a molecular solute coupled to a polarizable continuum model (PCM) description of the solvent. Our formulation relies on the open-ended, density matrix-based quasienergy formulation of SCF response theory of Thorvaldsen, et al., [J. Chem. Phys., 2008, 129, 214108] and the variational formulation of the PCM, as presented by Lipparini et al., [J. Chem. Phys., 2010, 133, 014106]. Within the PCM approach to solvation, the mutual solute-solvent polarization is represented by means of an apparent surface charge (ASC) spread over the molecular cavity defining the solute-solvent boundary. In the variational formulation, the ASC is an independent, variational degree of freedom. This allows us to formulate response theory for molecular solutes in the fixed-cavity approximation up to arbitrary order and with arbitrary perturbation operators. For electric dipole perturbations, pole and residue analyses of the response functions naturally lead to the identification of excitation energies and transition moments. We document the implementation of this approach in the Dalton program package using a recently developed open-ended response code and the PCMSolver libraries and present results for one-, two-, three-, four- and five-photon absorption processes of three small molecules in solution.
Directory of Open Access Journals (Sweden)
Yuanbin Yu
2016-01-01
This paper presents a new method for battery degradation estimation using a power-energy (PE) function in a battery/ultracapacitor hybrid energy storage system (HESS), and the integrated optimization, which concerns both parameter matching and control of the HESS, has been carried out as well. A semiactive topology of the HESS, with an electric double-layer capacitor (EDLC) coupled directly to the DC-link, is adopted for a hybrid electric city bus (HECB). To present the quantitative relationship between system parameters and battery service life, data from a 37-minute driving cycle were first collected and decomposed into discharging/charging fragments, and then an optimal control strategy, intended to make maximal use of the available EDLC energy, is presented to split the power between the battery and the EDLC. Furthermore, based on a battery degradation model, the conversion of the power demand by the PE function and PE matrix is applied to evaluate the relationship between the energy available in the HESS and the service life of the battery pack. Finally, the approach, which decouples parameter matching and optimal control of the HESS, is summed up into a process for battery degradation and service-life estimation.
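The "use the available EDLC energy first" control idea can be sketched as a greedy rule-based power split. The function below is a hypothetical illustration: the power limits, energy capacity, and simple saturation logic are assumptions for demonstration, not the paper's optimized strategy.

```python
def split_power(p_demand, e_edlc, e_edlc_max, p_edlc_max, dt=1.0):
    """Rule-based split of demanded power (W) between EDLC and battery.

    Greedy sketch of the idea: drain the EDLC first under traction demand,
    and absorb regenerative (negative) power into the EDLC first, within its
    power limit and stored-energy bounds. Returns (p_edlc, p_battery,
    new EDLC energy in J) over one time step of dt seconds."""
    if p_demand >= 0:  # traction: EDLC discharges first
        p_edlc = min(p_demand, p_edlc_max, e_edlc / dt)
    else:              # braking: EDLC recharges first
        p_edlc = max(p_demand, -p_edlc_max, -(e_edlc_max - e_edlc) / dt)
    p_batt = p_demand - p_edlc
    return p_edlc, p_batt, e_edlc - p_edlc * dt

# One discharge step: 50 kW demand, EDLC holds 30 kJ and can deliver 40 kW.
p_e, p_b, e_new = split_power(50e3, 30e3, 100e3, 40e3)
```

Here the EDLC is energy-limited (30 kJ over 1 s), so it supplies 30 kW and the battery covers the remaining 20 kW; the battery-side fragments produced this way are what a degradation model would then consume.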
Models of cuspy triaxial stellar systems. IV: Rotating systems
Carpintero, D. D.; Muzzio, J. C.
2016-01-01
We built two self-consistent models of triaxial, cuspy, rotating stellar systems adding rotation to non-rotating models presented in previous papers of this series. The final angular velocity of the material is not constant and varies with the distance to the center and with the height over the equator of the systems, but the figure rotation is very uniform in both cases. Even though the addition of rotation to the models modifies their original semiaxes ratios, the final rotating models are ...
DEFF Research Database (Denmark)
Churchill, Nathan William; Madsen, Kristoffer Hougaard; Mørup, Morten
2016-01-01
flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension of the GMM framework simultaneously estimates spatial clustering and the most consistent group functional connectivity structure. It also explicitly models network variability, based on voxel- and subject-specific network scaling profiles. We compared the FSIM to standard GMM in a predictive cross-validation framework and examined the importance of different model parameters, using both simulated and experimental resting-state data. The reliability of parcellations is not significantly altered by flexibility of the FSIM, whereas voxel- and subject-specific network scaling profiles significantly improve...
Modelling and parameter estimation of dynamic systems
Raol, JR; Singh, J
2004-01-01
Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-square minimization of error between the model response and actual system response. However, with the proliferation of high speed digital computers, elegant and innovative techniques like filter error method, H-infinity and Artificial Neural Networks are finding more and mor
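The least-squares idea described here can be shown on a minimal example: discretize a first-order system, stack the regressors, and solve for the parameters that minimize the error between model and system response. All parameter values below are illustrative.

```python
import numpy as np

# Simulate a first-order system x' = -a*x + b*u with a known step input u,
# then recover (a, b) by least squares on the discretized dynamics:
#   (x[k+1] - x[k]) / dt = [-x[k], u[k]] @ [a, b]
a_true, b_true, dt = 0.8, 2.0, 0.01
n = 500
u = np.ones(n)
x = np.zeros(n + 1)
for k in range(n):
    x[k + 1] = x[k] + dt * (-a_true * x[k] + b_true * u[k])

dxdt = (x[1:] - x[:-1]) / dt                 # finite-difference "observations"
A = np.column_stack([-x[:-1], u])            # regressor matrix
a_est, b_est = np.linalg.lstsq(A, dxdt, rcond=None)[0]
```

With noise-free data the estimates recover the true parameters exactly; with measurement noise, this plain least-squares step is precisely where the filter-error and neural-network techniques mentioned above improve on it.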
Discrete modelling of drapery systems
Thoeni, Klaus; Giacomini, Anna
2016-04-01
Drapery systems are an efficient and cost-effective measure in preventing and controlling rockfall hazards on rock slopes. The simplest form consists of a row of ground anchors along the top of the slope connected to a horizontal support cable from which a wire mesh is suspended down the face of the slope. Such systems are generally referred to as simple or unsecured draperies (Badger and Duffy 2012). Variations such as secured draperies, where a pattern of ground anchors is incorporated within the field of the mesh, and hybrid systems, where the upper part of an unsecured drapery is elevated to intercept rockfalls originating upslope of the installation, are becoming more and more popular. This work presents a discrete element framework for simulation of unsecured drapery systems and their variations. The numerical model is based on the classical discrete element method (DEM) and implemented in the open-source framework YADE (Šmilauer et al., 2010). The model takes all relevant interactions between block, drapery and slope into account (Thoeni et al., 2014) and was calibrated and validated based on full-scale experiments (Giacomini et al., 2012). The block is modelled as a rigid clump made of spherical particles, which allows any shape to be approximated. The drapery is represented by a set of spherical particles with remote interactions. The behaviour of the remote interactions is governed by the constitutive behaviour of the wire and generally corresponds to a piecewise linear stress-strain relation (Thoeni et al., 2013). The same concept is used to model wire ropes. The rock slope is represented by rigid triangular elements to which material properties (e.g., normal coefficient of restitution, friction angle) are assigned. The capabilities of the developed model to simulate drapery systems and estimate the residual hazard involved with such systems are shown. References Badger, T.C., Duffy, J.D. (2012) Drapery systems. In: Turner, A.K., Schuster R
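A piecewise linear stress-strain law for the remote wire interactions, as mentioned above, can be sketched as follows. The breakpoints are invented placeholders, not the calibrated values of the cited experiments, and the wire is assumed to carry no compressive load.

```python
def wire_force(strain, curve=((0.0, 0.0), (0.02, 500e6), (0.05, 540e6))):
    """Piecewise-linear tensile stress (Pa) versus strain for a remote wire
    interaction; zero in compression, since a wire cannot push. The curve is
    a sequence of (strain, stress) breakpoints (illustrative values only)."""
    if strain <= 0.0:
        return 0.0
    pts = list(curve)
    for (e0, s0), (e1, s1) in zip(pts, pts[1:]):
        if strain <= e1:  # linear interpolation within this segment
            return s0 + (s1 - s0) * (strain - e0) / (e1 - e0)
    return pts[-1][1]  # perfectly plastic beyond the last breakpoint
```

In a DEM time step, each remote wire interaction would evaluate such a curve from the current inter-particle strain to obtain the restoring force; rupture could be added by returning zero beyond a failure strain.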
Wu, Xiaojie; Li, Xiantao
2015-01-01
Results from molecular dynamics simulations often need to be further processed to understand the physics on a larger scale. This paper considers the definitions of momentum and energy fluxes obtained from a control-volume approach. To assess the validity of these defined quantities, two consistency criteria are proposed. As examples, the embedded atom potential and the Tersoff potential are considered. The consistency is verified using analytical and numerical methods.
Bravo, S.; Ocania, G.
1991-04-01
energetization of the wind, one of the possibilities allowed for by the observational uncertainties shows very good agreement with an MHD self-consistent modelling with the only additional term of the Lorentz force in the momentum equation. Key words: SUN-CORONA
Reservoir Model Information System: REMIS
Lee, Sang Yun; Lee, Kwang-Wu; Rhee, Taehyun; Neumann, Ulrich
2009-01-01
We describe a novel data visualization framework named Reservoir Model Information System (REMIS) for the display of complex and multi-dimensional data sets in oil reservoirs. It is aimed at facilitating visual exploration and analysis of data sets as well as user collaboration in an easier way. Our framework consists of two main modules: the data access point module and the data visualization module. For the data access point module, the Phrase-Driven Grammar System (PDGS) is adopted for helping users facilitate the visualization of data. It integrates data source applications and external visualization tools and allows users to formulate data query and visualization descriptions by selecting graphical icons in a menu or on a map with step-by-step visual guidance. For the data visualization module, we implemented our first prototype of an interactive volume viewer named REMVR to classify and to visualize geo-spatial specific data sets. By combining PDGS and REMVR, REMIS assists users better in describing visualizations and exploring data so that they can easily find desired data and explore interesting or meaningful relationships including trends and exceptions in oil reservoir model data.
International Nuclear Information System (INIS)
Guest, Geoffrey; Bright, Ryan M.; Cherubini, Francesco; Strømman, Anders H.
2013-01-01
Temporary and permanent carbon storage from biogenic sources is seen as a way to mitigate climate change. The aim of this work is to illustrate the need to harmonize the quantification of such mitigation across all possible storage pools in the bio- and anthroposphere. We investigate nine alternative storage cases and a wide array of bio-resource pools: from annual crops, short rotation woody crops, medium rotation temperate forests, and long rotation boreal forests. For each feedstock type and biogenic carbon storage pool, we quantify the carbon cycle climate impact due to the skewed time distribution between emission and sequestration fluxes in the bio- and anthroposphere. Additional consideration of the climate impact from albedo changes in forests is also illustrated for the boreal forest case. When characterizing climate impact with global warming potentials (GWP), we find a large variance in results which is attributed to different combinations of biomass storage and feedstock systems. The storage of biogenic carbon in any storage pool does not always confer climate benefits: even when biogenic carbon is stored long-term in durable product pools, the climate outcome may still be undesirable when the carbon is sourced from slow-growing biomass feedstock. For example, when biogenic carbon from Norway Spruce from Norway is stored in furniture with a mean life time of 43 years, a climate change impact of 0.08 kg CO2 eq per kg CO2 stored (100 year time horizon (TH)) would result. It was also found that when biogenic carbon is stored in a pool with negligible leakage to the atmosphere, the resulting GWP factor is not necessarily −1 kg CO2 eq per kg CO2 stored. As an example, when biogenic CO2 from Norway Spruce biomass is stored in geological reservoirs with no leakage, we estimate a GWP of −0.56 kg CO2 eq per kg CO2 stored (100 year TH) when albedo effects are also included. The large variance in GWPs across the range of resource and carbon storage
Consistent Pricing of VIX and Equity Derivatives with the 4/2 Stochastic Volatility Plus Jumps Model
Lin, Wei; Li, Shenghong; Luo, Xingguo; Chern, Shane
2015-01-01
In this paper, we develop a 4/2 stochastic volatility plus jumps model, namely, a new stochastic volatility model including the Heston model and 3/2 model as special cases. Our model is highly tractable by applying the Lie symmetries theory for PDEs, which means that the pricing procedure can be performed efficiently. In fact, we obtain a closed-form solution for the joint Fourier-Laplace transform so that equity and realized-variance derivatives can be priced. We also employ our model to con...
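The "4/2" structure — instantaneous variance aV + b/V with V a square-root (CIR) process, recovering Heston for (a, b) = (1, 0) and the 3/2 model for (a, b) = (0, b) — can be illustrated with a plain Euler-Maruyama simulation. All parameter values below are illustrative, and the jump component is omitted for brevity.

```python
import math
import random

def simulate_4over2(v0=0.04, kappa=1.5, theta=0.04, sigma=0.3,
                    a=0.7, b=0.012, dt=1 / 252, n=252, seed=1):
    """Euler path of the 4/2 instantaneous variance a*V + b/V, where V
    follows a CIR process dV = kappa*(theta - V) dt + sigma*sqrt(V) dW.
    Reflection (abs) keeps the discretized V positive."""
    rng = random.Random(seed)
    v, path = v0, []
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        v = abs(v + kappa * (theta - v) * dt + sigma * math.sqrt(v) * dw)
        path.append(a * v + b / v)  # 4/2 instantaneous variance
    return path

iv = simulate_4over2()  # one year of daily instantaneous variance
```

The 1/V term lifts the variance when V collapses toward zero, a feature the pure Heston model lacks; the closed-form pricing claimed in the abstract rests on the tractability of exactly this combined process.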
Candy, A.S.; Pietrzak, J.D.
2018-01-01
The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale,
Directory of Open Access Journals (Sweden)
Martin Gorges
The neuropathological process underlying amyotrophic lateral sclerosis (ALS) can be traced as a four-stage progression scheme of sequential corticofugal axonal spread. The examination of eye movement control yields deep insights into brain network pathology and provides the opportunity to detect both disturbances of the brainstem oculomotor circuitry and executive deficits of oculomotor function associated with higher brain networks. The aim was to study systematically the oculomotor characteristics in ALS and their underlying network pathology in order to determine whether eye movement deterioration can be categorized within a staging system of oculomotor decline that corresponds to the neuropathological model. Sixty-eight ALS patients and 31 controls underwent video-oculographic, clinical and neuropsychological assessments. Oculomotor examinations revealed increased antisaccade and delayed-saccade errors, gaze palsy and a cerebellar type of smooth pursuit disturbance. The oculomotor disturbances occurred in a sequential manner: in Stage 1, only executive control of eye movements was affected; Stage 2 indicates disturbed executive control plus 'genuine' oculomotor dysfunctions such as gaze palsy. We found high correlations (p<0.001) between the oculomotor stages and both the clinical presentation, as assessed by the ALS Functional Rating Scale (ALSFRS) score, and cognitive scores from the Edinburgh Cognitive and Behavioral ALS Screen (ECAS). Dysfunction of eye movement control in ALS can be characterized by a two-stage sequential pattern comprising executive deficits in Stage 1 and additionally impaired infratentorial oculomotor control pathways in Stage 2. This pattern parallels the neuropathological staging of ALS and may serve as a technical marker of the neuropathological spreading.
Spatial Models and Networks of Living Systems
DEFF Research Database (Denmark)
Juul, Jeppe Søgaard
with interactions defined by network topology. In this thesis I first describe three different biological models of ageing and cancer, in which spatial structure is important for the system dynamics. I then turn to describe characteristics of ecosystems consisting of three cyclically interacting species... When studying the dynamics of living systems, insight can often be gained by developing a mathematical model that can predict future behaviour of the system or help classify system characteristics. However, in living cells, organisms, and especially groups of interacting individuals, a large number of different factors influence the time development of the system. This often makes it challenging to construct a mathematical model from which to draw conclusions. One traditional way of capturing the dynamics in a mathematical model is to formulate a set of coupled differential equations for the essential...
Lee, Candice Y; Sauer, Jude S; Gorea, Heather R; Martellaro, Angelo J; Knight, Peter A
2014-01-01
This study compared the strength, consistency, and speed of prosthetic attachment sutures secured with automated fasteners with those of manual knots using an ex vivo porcine mitral valve annuloplasty model. A novel miniature pressure transducer system was developed to quantify pressures between sutured prosthetic rings and underlying cardiac tissue. Sixteen mitral annuloplasty rings were sewn into ex vivo pig hearts. Eight rings were secured with the COR-KNOT device; and eight rings, with hand-tied knots using a knot pusher. A cardiac surgeon and a surgery resident each completed four manually tied rings and four COR-KNOT rings via a thoracotomy trainer. The total time to knot and cut each ring's sutures was recorded. Suture attachment pressures were measured within (intrasuture) and between (extrasuture) each suture loop using a 0.5 × 2.0-mm microtransducer probe system. The suture holding pressures for the COR-KNOT fasteners were significantly greater than for the manually tied knots (median, 1008.9 vs 415.8 mm Hg, P COR-KNOT fasteners than for the hand-tied knots (SD, 401.6 vs 499.3 mm Hg, P = 0.04). Significant time savings occurred with the use of the COR-KNOT compared with manual tying (12.4 vs 71.1 seconds per knot, P = 0.001). The novel microtransducer technology provided an innovative means of evaluating cardiac prosthetic anchoring sutures. In this model, mitral annuloplasty ring sutures secured with the COR-KNOT device were stronger, more consistent, and faster than with manually tied knots.
Serfon, Cedric; The ATLAS collaboration
2016-01-01
One of the biggest challenges in a large-scale data management system is to ensure consistency between the global file catalog and what is physically on all storage elements. To tackle this issue, the Rucio software used by the ATLAS Distributed Data Management system has been extended to automatically handle lost or unregistered files (aka Dark Data). This system automatically detects these inconsistencies and takes actions such as recovery or deletion of unneeded files in a central manner. In this talk, we present this system, explain its internals and give some results.
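At its core, such a consistency check is a set reconciliation between the catalog and a storage dump. The toy sketch below (not the Rucio implementation) shows the basic classification into lost and dark files.

```python
def reconcile(catalog, storage):
    """Compare the file catalog against a storage-element listing.

    Returns (lost, dark): files registered in the catalog but missing on
    storage (candidates for recovery or declaration as lost), and files
    present on storage but unregistered ("dark data", candidates for
    central deletion)."""
    catalog, storage = set(catalog), set(storage)
    return sorted(catalog - storage), sorted(storage - catalog)

lost, dark = reconcile({"a.root", "b.root"}, {"b.root", "tmp.001"})
```

In production the two listings would come from periodic catalog and storage dumps, and each discrepancy would trigger an automated action rather than a manual cleanup.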
2015-01-05
fully-automatic method to detect cracks from pavement images that can be used for pavement road maintenance. The developed method consists of three ... steps: 1) a geodesic shadow-removal algorithm to remove the pavement shadows while preserving the cracks; 2) building a crack probability map to enhance ... cracks. CrackTree was evaluated on real pavement images and achieves better performance than existing methods.
National Research Council Canada - National Science Library
Mironenko, M
1992-01-01
.... The model applies the Gibbs energy minimization method for phase equilibria computation combined with the UNIFAC routine and thermodynamic database for calculating activity coefficients of organic...
Energy Technology Data Exchange (ETDEWEB)
Weimer-Jehle, Wolfgang; Wassermann, Sandra; Kosow, Hannah [Internationales Zentrum fuer Kultur- und Technikforschung an der Univ. Stuttgart (Germany). ZIRN Interdisziplinaerer Forschungsschwerpunkt Risiko und Nachhaltige Technikentwicklung
2011-04-15
Model-based environmental scenarios normally require multiple framework assumptions regarding future social, political and economic developments (external developments). In most cases these framework assumptions are highly uncertain. Furthermore, different external developments are not isolated from each other and their interdependences can be described by qualitative judgments only. If the internal consistency of framework assumptions is not methodologically addressed, environmental models risk to be based on inconsistent combinations of framework assumptions which do not reflect existing relations between the respective factors in an appropriate way. This report aims at demonstrating how consistent context scenarios can be developed with the help of the cross-impact balance analysis (CIB). This method allows not only for the internal consistency of framework assumptions of a single model but also for the overall consistency of framework assumptions of modeling instruments, supporting the integrated interpretation of the results of different models. In order to demonstrate the method, in a first step, ten common framework assumptions were chosen and their possible future developments until 2030 were described. In a second step, a qualitative impact network was developed based on expert elicitation. The impact network provided the basis for a qualitative but systematic analysis of the internal consistency of combinations of framework assumptions. This analysis was carried out with the CIB-method and resulted in a set of consistent context scenarios. These scenarios can be used as an informative background for defining framework assumptions for environmental models at the UBA. (orig.)
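The CIB consistency rule — every chosen variant must have a maximal impact balance among its descriptor's variants, given the other choices — can be sketched on a toy two-descriptor example. The descriptors, variants, and qualitative judgments below are invented for illustration.

```python
import itertools

# Toy CIB setup: two descriptors with two variants each, and cross-impact
# judgments impact[(d1, v1)][(d2, v2)] on the usual -2..+2 qualitative scale.
impact = {
    ("economy", "growth"):     {("policy", "strict"): -1, ("policy", "lax"): +2},
    ("economy", "stagnation"): {("policy", "strict"): +2, ("policy", "lax"): -1},
    ("policy", "strict"):      {("economy", "growth"): -2, ("economy", "stagnation"): +1},
    ("policy", "lax"):         {("economy", "growth"): +2, ("economy", "stagnation"): -1},
}
variants = {"economy": ["growth", "stagnation"], "policy": ["strict", "lax"]}

def is_consistent(scenario):
    """A scenario is CIB-consistent if, for every descriptor, the chosen
    variant's impact balance (sum of impacts received from the other chosen
    variants) is maximal among that descriptor's variants."""
    for d, chosen in scenario.items():
        def balance(v):
            return sum(impact[(d2, v2)].get((d, v), 0)
                       for d2, v2 in scenario.items() if d2 != d)
        if balance(chosen) < max(balance(v) for v in variants[d]):
            return False
    return True

consistent = [dict(zip(variants, combo))
              for combo in itertools.product(*variants.values())
              if is_consistent(dict(zip(variants, combo)))]
```

Of the four possible combinations, only the mutually reinforcing ones survive (growth with lax policy, stagnation with strict policy); the full analysis in the report does the same filtering over ten framework assumptions.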
Li, Jiahui; Yu, Qiqing
2016-01-01
Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is established under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.
Sun, W. L.; Wang, J.; Soukhovitskii, E. Sh.; Capote, R.; Quesada, J. M.
2017-09-01
A fully Lane-consistent dispersive spherical optical potential is proposed to describe the nucleon scattering interaction with the doubly magic nucleus 208Pb up to 200 MeV. The experimental neutron total cross sections, elastically scattered nucleon angular distributions and (p,n) data were used to search for the potential parameters. Good agreement between the experiments and the calculations with this potential is observed. Meanwhile, the application of the determined optical potential, with the same parameters, to the neighbouring near-magic Pb-Bi isotopes is also examined to show the predictive power of this potential.
International Nuclear Information System (INIS)
Galán, J; Verleysen, P; Lebensohn, R A
2014-01-01
A new algorithm for the solution of the deformation of a polycrystalline material using a self-consistent scheme, and its integration as part of the finite element software Abaqus/Standard are presented. The method is based on the original VPSC formulation by Lebensohn and Tomé and its integration with Abaqus/Standard by Segurado et al. The new algorithm has been implemented as a set of Fortran 90 modules, to be used either from a standalone program or from Abaqus subroutines. The new implementation yields the same results as VPSC7, but with a significantly better performance, especially when used in multicore computers. (paper)
International Nuclear Information System (INIS)
Huicochea, Armando; Rivera, Wilfrido; Gutierrez-Urueta, Geydy; Bruno, Joan Carles; Coronas, Alberto
2011-01-01
Combining heating and power systems represents an option to improve the efficiency of energy usage and to reduce thermal pollution of the environment. Microturbines generate electrical power and usable residual heat, which can be partially used to activate a thermally driven chiller. The purpose of this paper is to analyze theoretically the thermodynamic performance of a trigeneration system formed by a microturbine and a double-effect water/LiBr absorption chiller. The heat data supplied to the generator of the double-effect air conditioning system were acquired from experimental data of a 28 kWe microturbine, obtained at the CREVER facilities. A thermodynamic simulator was developed at the Centro de Investigacion en Energia of the Universidad Nacional Autonoma de Mexico using the MATLAB programming language. Mass and energy balances of the main components of the cooling system were obtained with water-lithium bromide solution as the working fluid. The trigeneration system was evaluated at different operating conditions: ambient temperatures, generation temperatures and microturbine fuel mass flow rates. The results demonstrate that this system represents an attractive technological alternative for using the energy of the microturbine exhaust gases for electric power generation, cooling and heating produced simultaneously. - Highlights: → The thermodynamic performance of a trigeneration system is analyzed theoretically. → A microturbine and a double-effect H2O-LiBr absorption chiller integrate the system. → The heat data supplied to the generator were obtained from experimental data. → The trigeneration system was evaluated at different operating conditions. → Results show that this system is an attractive option to use exhaust energy for electricity, cooling and heating generation.
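The first-law bookkeeping behind such a trigeneration analysis can be sketched in a few lines. The electrical efficiency, heat-recovery fraction, and chiller COP below are illustrative assumptions, not the CREVER measurements.

```python
def trigeneration_summary(fuel_power, eta_el, heat_recovery_frac, cop_chiller):
    """First-law bookkeeping for a microturbine plus absorption chiller.

    fuel_power: fuel energy input rate (kW); eta_el: electrical efficiency;
    heat_recovery_frac: fraction of the non-electric balance recovered from
    the exhaust; cop_chiller: COP of the absorption chiller. Returns
    (electric power, recovered heat, cooling capacity), all in kW."""
    p_el = eta_el * fuel_power
    q_exhaust = (1 - eta_el) * fuel_power * heat_recovery_frac
    q_cooling = cop_chiller * q_exhaust
    return p_el, q_exhaust, q_cooling

# Illustrative operating point: 100 kW of fuel, 26% electrical efficiency,
# 60% exhaust heat recovery, double-effect chiller COP of 1.2.
p, qh, qc = trigeneration_summary(100.0, 0.26, 0.6, 1.2)
```

Varying the ambient and generation temperatures in the full simulator amounts to making eta_el, the recovery fraction, and the COP functions of the operating condition instead of constants.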
Modelling of reverberation enhancement systems
ROUCH , Jeremy; Schmich , Isabelle; Galland , Marie-Annick
2012-01-01
International audience; Electroacoustic enhancement systems are increasingly specified by acoustic consultants to address the requests for a multi-purpose use of performance halls. However, there is still a lack of simple models to predict the effect induced by these systems on the acoustic field. Two models are introduced to establish the impulse responses of a room equipped with a reverberation enhancement system. These models are based on passive impulse responses according to the modified...
MODELLING OF MATERIAL FLOW SYSTEMS
PÉTER TELEK
2012-01-01
Material flow systems are generally very complex processes. During the design, building and operation of complex systems many different problems arise. If these complex processes can be described by a simple model, the tasks become clearer, more adaptable and easier to solve. As material flow systems are very different, using models is a very important aid to create uniform methods and solutions. This paper shows the details of the application possibilities of modelling in the ma...
Dynamic Modeling of ALS Systems
Jones, Harry
2002-01-01
The purpose of dynamic modeling and simulation of Advanced Life Support (ALS) systems is to help design them. Static steady state systems analysis provides basic information and is necessary to guide dynamic modeling, but static analysis is not sufficient to design and compare systems. ALS systems must respond to external input variations and internal off-nominal behavior. Buffer sizing, resupply scheduling, failure response, and control system design are aspects of dynamic system design. We develop two dynamic mass flow models and use them in simulations to evaluate systems issues, optimize designs, and make system design trades. One model is of nitrogen leakage in the space station, the other is of a waste processor failure in a regenerative life support system. Most systems analyses are concerned with optimizing the cost/benefit of a system at its nominal steady-state operating point. ALS analysis must go beyond the static steady state to include dynamic system design. All life support systems exhibit behavior that varies over time. ALS systems must respond to equipment operating cycles, repair schedules, and occasional off-nominal behavior or malfunctions. Biological components, such as bioreactors, composters, and food plant growth chambers, usually have operating cycles or other complex time behavior. Buffer sizes, material stocks, and resupply rates determine dynamic system behavior and directly affect system mass and cost. Dynamic simulation is needed to avoid the extremes of costly over-design of buffers and material reserves or system failure due to insufficient buffers and lack of stored material.
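Buffer sizing, named above as a core dynamic design task, reduces to running the mass balance forward in time and finding the worst-case drawdown. Below is a minimal sketch with an assumed weekly resupply schedule; the numbers are illustrative, not ALS design values.

```python
def min_buffer(supply_schedule, demand_rate, dt=1.0):
    """Size a store for a resupply-driven system: integrate the mass balance
    and report the smallest initial stock that never lets the store go
    negative. supply_schedule gives the amount delivered at each step."""
    stock, worst = 0.0, 0.0
    for delivered in supply_schedule:
        stock += delivered - demand_rate * dt
        worst = min(worst, stock)
    return -worst

# Daily demand of 1.0 kg with 7 kg delivered at the end of each week:
need = min_buffer([7.0 if d % 7 == 6 else 0.0 for d in range(28)], 1.0)
```

The result (six days of demand) is the static answer; the dynamic analyses described above would additionally perturb the schedule with delayed resupply or equipment failures and re-run the same balance to check margins.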
Modeling soft interface dominated systems
Lamorgese, A.; Mauri, R.; Sagis, L.M.C.
2017-01-01
The two main continuum frameworks used for modeling the dynamics of soft multiphase systems are the Gibbs dividing surface model, and the diffuse interface model. In the former the interface is modeled as a two dimensional surface, and excess properties such as a surface density, or surface energy
Validation of systems biology models
Hasdemir, D.
2015-01-01
The paradigm shift from qualitative to quantitative analysis of biological systems brought a substantial number of modeling approaches to the stage of molecular biology research. These include but certainly are not limited to nonlinear kinetic models, static network models and models obtained by the
From Numeric Models to Granular System Modeling
Directory of Open Access Journals (Sweden)
Witold Pedrycz
2015-03-01
To make this study self-contained, we briefly recall the key concepts of granular computing and demonstrate how this conceptual framework and its algorithmic fundamentals give rise to granular models. We discuss several representative formal setups used in describing and processing information granules including fuzzy sets, rough sets, and interval calculus. Key architectures of models dwell upon relationships among information granules. We demonstrate how information granularity and its optimization can be regarded as an important design asset to be exploited in system modeling and giving rise to granular models. With this regard, an important category of rule-based models along with their granular enrichments is studied in detail.
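One of the formal setups listed above, interval calculus, can be sketched with a toy interval type; the class and the choice of operators are illustrative, not taken from the paper:

```python
# Toy interval arithmetic: information granules as intervals. Only + and *
# are shown; the multiplication rule takes the min/max of endpoint products.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

a, b = Interval(1.0, 2.0), Interval(-1.0, 3.0)
print(a + b)   # Interval(lo=0.0, hi=5.0)
print(a * b)   # Interval(lo=-2.0, hi=6.0)
```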
Coastal Modeling System Advanced Topics
2012-06-18
22 June 2012 (Day 5): debugging and problem solving, model calibration, post-processing (Coastal and Hydraulics Laboratory). Course goal: the setup process is fast and without wasted time or effort. What is the CMS? An integrated wave, current, and morphology change model in the Surface-water Modeling System (SMS). Why CMS? Operational at 10
Safeguards system effectiveness modeling
International Nuclear Information System (INIS)
Bennett, H.A.; Boozer, D.D.; Chapman, L.D.; Daniel, S.L.; Engi, D.; Hulme, B.L.; Varnado, G.B.
1976-01-01
A general methodology for the comparative evaluation of physical protection system effectiveness at nuclear facilities is presently under development. The approach is applicable to problems of sabotage or theft at fuel cycle facilities. The overall methodology and the primary analytic techniques used to assess system effectiveness are briefly outlined
International Nuclear Information System (INIS)
Suwanna, S.; Onjun, T.; Wongpan, P.; Parail, V.; Poolyarat, N.; Picha, R.
2009-01-01
A formation of a steep pressure-gradient region near the plasma edge, called the pedestal, is a main reason for the improved performance of H-mode plasma. In this work, new pedestal temperature models are developed based on different theory-based width concepts: the flow shear stabilization width concept, the magnetic and flow shear stabilization width concept, and the diamagnetic stabilization width concept. In the BALDUR code, each pedestal width model is combined with a ballooning-mode pressure-gradient model to predict the pedestal temperature, which is a boundary condition needed to predict plasma profiles. In the JETTO code, anomalous transport is suppressed within the pedestal region, which results in the formation of a steep pressure-gradient region. The pedestal width is predicted using these theoretically based width concepts. The plasma profiles in the pedestal region are limited by ELM crashes, which can be triggered either by ballooning modes or by peeling modes, depending on which instability is destabilized first. It is found in the BALDUR simulations that the simulated pedestal temperature profiles agree well with experimental data in the region close to the pedestal, but show larger deviation in the core region. In a preliminary investigation, these models agree reasonably well with experiments, yielding overall RMS deviations of less than 20%. Furthermore, the model based on flow shear stabilization matches data from both DIII-D and JET very well, while the model based on magnetic and flow shear stabilization over-predicts results from JET and under-predicts those from DIII-D. Other statistical analyses, such as calculation of offset values and of ratios of predicted pedestal (and core) temperatures to those from experiments, are performed. (author)
Heald, C.R.; Stolnik, S.; Matteis, De C.; Garnett, M.C.; Illum, L.; Davis, S.S.; Leermakers, F.A.M.
2003-01-01
Self-consistent field (SCF) modelling studies can be used to predict the properties of poly(lactic acid):poly(ethyleneoxide) (PLA:PEG) nanoparticles using the theory developed by Scheutjens and Fleer. Good agreement in the results between experimental and modelled data has been observed previously
Directory of Open Access Journals (Sweden)
van Dijk Arie PJ
2008-08-01
Background: The method used to delineate the boundary of the right ventricle (RV), relative to the trabeculations and papillary muscles, in cardiovascular magnetic resonance (CMR) ventricular volume analysis may matter more when these structures are hypertrophied than in individuals with normal cardiovascular anatomy. This study aimed to compare two methods of cavity delineation in patients with a systemic RV. Methods: Twenty-nine patients (mean age 34.7 ± 12.4 years) with a systemic RV (12 with congenitally corrected transposition of the great arteries (ccTGA) and 17 with atrially switched TGA) underwent CMR. We compared measurements of systemic RV volumes and function using two analysis protocols. The RV trabeculations and papillary muscles were either included in the calculated blood volume, with the boundary drawn immediately within the apparently compacted myocardial layer, or they were manually outlined and excluded. RV stroke volume (SV) calculated using each method was compared with the corresponding left ventricular (LV) SV. Additionally, we compared the differences in analysis time, and in intra- and inter-observer variability, between the two methods. A paired-samples t-test was used to test for differences in volumes, function and analysis time between the two methods. Differences in intra- and inter-observer reproducibility were tested using an extension of the Bland-Altman method. Results: The inclusion of trabeculations and papillary muscles in the ventricular volume resulted in higher values for systemic RV end-diastolic volume (mean difference 28.7 ± 10.6 ml, p ... Conclusion: The choice of method for systemic RV cavity delineation significantly affected volume measurements, given the CMR acquisition and analysis systems used. We recommend delineation outside the trabeculations for routine clinical measurements of systemic RV volumes, as this approach took less time and gave more reproducible measurements.
Dijkstra, J.J.; Meeussen, J.C.L.; Sloot, van der H.A.; Comans, R.N.J.
2008-01-01
To improve the long-term environmental risk assessment of waste applications, a predictive "multi-surface" modelling approach has been developed to simultaneously predict the leaching and reactive transport of a broad range of major and trace elements (i.e., pH, Na, Al, Fe, Ca, SO4, Mg, Si, PO4,
Energy Technology Data Exchange (ETDEWEB)
Jemai, M
2004-07-01
In the present thesis we have applied the self-consistent random phase approximation (SCRPA) to the Hubbard model with a small number of sites (a chain of 2, 4, 6, ... sites). Earlier, SCRPA had produced very good results in other models, such as the Richardson pairing model. It was therefore interesting to see what kind of results the method is able to produce for a more complex model like the Hubbard model. To our great satisfaction, the case of two sites with two electrons (half-filling) is solved exactly by the SCRPA. This may seem a little trivial, but the fact is that other respectable approximations, like 'GW' or the approach with the Gutzwiller wave function, yield results still far from exact. With this promising starting point, the case of 6 sites at half-filling was considered next. For that case, evidently, SCRPA no longer gives exact results. However, they are still excellent for a wide range of values of the coupling constant U, covering for instance the phase-transition region towards a state with nonzero magnetisation. We consider this a good success of the theory. Nonetheless, the case of 4 sites (a plaquette), as indeed all cases with 4n sites at half-filling, turned out to have a problem because of degeneracies at the Hartree-Fock level. A generalisation of the present method, including quadruples of fermion operators in addition to the pairs (called second RPA), is proposed to also include the plaquette case exactly in our approach. This is therefore a very interesting perspective of the present work. (author)
Towards Modelling of Hybrid Systems
DEFF Research Database (Denmark)
Wisniewski, Rafal
2006-01-01
The article is an attempt to use methods of category theory and topology for the analysis of hybrid systems. We use the notion of a directed topological space; it is a topological space together with a set of privileged paths. Dynamical systems are examples of directed topological spaces. A hybrid system consists of a number of dynamical systems that are glued together according to information encoded in the discrete part of the system. We develop a definition of a hybrid system as a functor from the category generated by a transition system to the category of directed topological spaces. Its directed homotopy colimit (geometric realization) is a single directed topological space. The behavior of hybrid systems can then be understood in terms of the behavior of dynamical systems through the directed homotopy colimit.
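The informal picture of a hybrid system, continuous dynamics glued together by discrete transitions, can be sketched with a classic two-mode thermostat automaton. This is an illustration of the underlying notion only, not the paper's categorical construction, and all constants are invented:

```python
# Toy hybrid system: two continuous modes ("heat", "cool") glued by guard
# conditions that trigger discrete mode switches. Numbers are illustrative.

def simulate(t_end, dt=0.01):
    temp, mode, t = 18.0, "heat", 0.0
    trace = []
    while t < t_end:
        # continuous flow of the current mode (Euler step)
        rate = 2.0 if mode == "heat" else -1.5
        temp += rate * dt
        # guards encode the discrete part of the system
        if mode == "heat" and temp >= 22.0:
            mode = "cool"
        elif mode == "cool" and temp <= 18.0:
            mode = "heat"
        trace.append(temp)
        t += dt
    return trace

trace = simulate(10.0)
print(min(trace), max(trace))  # stays within the hysteresis band [18, 22]
```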
The report, a review of the literature on heat flow through powders, was motivated by the use of fine powder systems to produce high thermal resistivities (thermal resistance per unit thickness). The term "superinsulations" has been used to describe this type of material, which ha...
2011-02-01
human volunteers with sporozoites. 6 A sporozoite challenge model has been available for P. falciparum for several decades and has led to...the reproducibility of the infection. In those studies, sporozoites inoculated by < 5 mosquitoes led to an irregular infection in malaria-naive...particularly to Juana Vergara and Johanna Parra, for the volunteers' recruitment and health assistance. We also thank Luz Amparo Martínez and all the
Choi, Sung W.; Gerencser, Akos A.; Ng, Ryan; Flynn, James M.; Melov, Simon; Danielson, Steven R.; Gibson, Bradford W.; Nicholls, David G.; Bredesen, Dale E.; Brand, Martin D.
2012-01-01
Depressed cortical energy supply and impaired synaptic function are predominant associations of Alzheimer’s disease (AD). To test the hypothesis that presynaptic bioenergetic deficits are associated with the progression of AD pathogenesis, we compared bioenergetic variables of cortical and hippocampal presynaptic nerve terminals (synaptosomes) from commonly used mouse models with AD-like phenotypes (J20 age 6 months, Tg2576 age 16 months and APP/PS age 9 and 14 months) to ag...
Kajino, Mizuo; Easter, Richard C.; Ghan, Steven J.
2013-09-01
A triple-moment sectional (TMS) aerosol dynamics model, the Modal Bin Hybrid Model (MBHM), has been developed. In addition to number and mass (volume), surface area is predicted (and preserved), which is important for aerosol processes and properties such as gas-to-particle mass transfer, heterogeneous reaction, and light extinction cross section. The performance of MBHM was evaluated against double-moment sectional (DMS) models with coarse (BIN4) to very fine (BIN256) size resolutions for simulating the evolution of particles under simultaneously occurring nucleation, condensation, and coagulation processes (the BINx resolution uses x sections to cover the 1 nm to 1 µm size range). Because MBHM gives a physically consistent form of the intrasectional distributions, errors and biases of MBHM at BIN4-8 resolution were almost equivalent to those of DMS at BIN16-32 resolution for various important variables such as the moments Mk (k: 0, 2, 3), dMk/dt, and the number and volume of particles larger than a certain diameter. Another important feature of MBHM is that only a single bin is adequate to simulate full aerosol dynamics for particles whose size distribution can be approximated by a single lognormal mode. This flexibility is useful for process-oriented (multicategory and/or mixing-state) modeling: primary aerosols whose size parameters would not differ substantially in time and space can be expressed by a single or a small number of modes, whereas secondary aerosols whose size changes drastically from 1 to several hundred nanometers can be expressed by a number of modes. Added dimensions can be applied to MBHM to represent mixing state or photochemical age for aerosol mixing-state studies.
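The moments M_k referred to above can be sketched for a binned (sectional) number distribution; the bin diameters and number concentrations below are hypothetical, and real sectional models integrate over intrasectional distributions rather than using a single representative diameter per bin:

```python
# Moments of a sectional size distribution: M_k = sum_i n_i * d_i**k.
# M0 is total number, pi*M2 total surface area, (pi/6)*M3 total volume
# (spherical particles assumed). Bin values are hypothetical.
import math

def moment(k, diameters, numbers):
    """k-th diameter moment of a binned number distribution."""
    return sum(n * d**k for d, n in zip(diameters, numbers))

d = [1e-8, 1e-7, 1e-6]   # bin-representative diameters, m
n = [1e10, 1e9, 1e7]     # number concentrations, #/m^3

M0 = moment(0, d, n)                    # total number concentration
surface = math.pi * moment(2, d, n)     # total surface area concentration
volume = math.pi / 6 * moment(3, d, n)  # total volume concentration
print(M0, surface, volume)
```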
Wu, Mengxue; Li, Chen; Yao, Wu
2017-01-11
In cement-based pastes, the relationship between the complex phase assemblage and mechanical properties is usually described by the "gel/space ratio" descriptor. The gel/space ratio is defined as the volume ratio of the gel to the available space in the composite system, and it has been widely studied in the cement unary system. This work determines the gel/space ratio in the cement-silica fume-fly ash ternary system (C-SF-FA system) by measuring the reaction degrees of the cement, SF, and FA. The effects that the supplementary cementitious material (SCM) replacements exert on the evolution of the gel/space ratio are discussed both theoretically and practically. The relationship between the gel/space ratio and compressive strength is then explored, and the relationship disparities for different mix proportions are analyzed in detail. The results demonstrate that the SCM replacements promote the gel/space ratio evolution only when the SCM reaction degree is higher than a certain value, which is calculated and defined as the critical reaction degree (CRD). The effects of the SCM replacements can be predicted based on the CRD, and the theoretical predictions agree with the test results quite well. At low gel/space ratios, disparities in the relationship between the gel/space ratio and the compressive strength are caused by porosity, which has also been studied in cement unary systems. The ratio of cement-produced gel to SCM-produced gel (the G_C to G_SCM ratio) is introduced for use in analyzing high gel/space ratios, in which it plays a major role in creating relationship disparities.
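For the unary system mentioned above, the gel/space ratio is commonly written in Powers' classic form. A sketch, where the 0.68/0.32 constants are Powers' conventional values for plain cement paste and the strength prefactor and exponent are hypothetical, not results from this study:

```python
# Powers' gel/space ratio for a plain (unary) cement paste and a power-law
# strength relation. sigma0 and n are hypothetical illustration values.

def gel_space_ratio(alpha, w_c):
    """X = gel volume / available space; alpha = cement reaction degree (0..1),
    w_c = water/cement ratio by mass."""
    return 0.68 * alpha / (0.32 * alpha + w_c)

def strength(alpha, w_c, sigma0=120.0, n=3.0):
    """Compressive strength (MPa) as sigma0 * X**n; sigma0, n hypothetical."""
    return sigma0 * gel_space_ratio(alpha, w_c) ** n

X = gel_space_ratio(0.8, 0.4)
print(round(X, 3))                   # 0.829
print(round(strength(0.8, 0.4), 1))  # strength rises steeply with X
```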
Models of complex attitude systems
DEFF Research Database (Denmark)
Sørensen, Bjarne Taulo
Existing research on public attitudes towards agricultural production systems is largely descriptive, abstracting from the processes through which members of the general public generate their evaluations of such systems. The present paper adopts a systems perspective on such evaluations, understanding them as embedded into a wider attitude system that consists of attitudes towards objects of different abstraction levels, ranging from personal value orientations over general socio-political attitudes to evaluations of specific characteristics of agricultural production systems. It is assumed that evaluative affect propagates through the system in such a way that the system becomes evaluatively consistent and operates as a schema for the generation of evaluative judgments. In the empirical part of the paper, the causal structure of an attitude system from which people derive their evaluations of pork...
Yasuda, Michiko; Furuyashiki, Takashi; Nakamura, Toshiyuki; Kakutani, Ryo; Takata, Hiroki; Ashida, Hitoshi
2013-09-01
Previously, we developed enzymatically synthesized glycogen (ESG) from starch, and showed its immunomodulatory and dietary fiber-like activities. In this study, we investigated the metabolism of ESG and its immunomodulatory activity using differentiated Caco-2 cells as a model of the intestinal barrier. In a co-culture system consisting of differentiated Caco-2 cells and RAW264.7 macrophages, mRNA expression of IL-6, IL-8, IL-1β and BAFF cytokines was up-regulated in Caco-2 cells and IL-8 production in basolateral medium was induced after 24 h apical treatment with 5 mg ml(-1) of ESG. The mRNA level of iNOS was also up-regulated in RAW264.7 macrophages. After characterization of the binding of anti-glycogen monoclonal antibodies (IV58B6 and ESG1A9) to ESG and its digested metabolite resistant glycogen (RG), an enzyme-linked immunosorbent assay (ELISA) system was developed to quantify ESG and RG. Using this system, we investigated the metabolism of ESG in differentiated Caco-2 cells. When ESG (7000 kDa, 5 mg ml(-1)) was added to the apical side of Caco-2 monolayers, ESG disappeared and RG (about 3000 kDa, 3.5 mg ml(-1)) appeared in the apical solution during a 24 h incubation. Neither ESG nor RG was detected in the basolateral solution. In addition, both ESG and RG were bound to TLR2 in Caco-2 cells. In conclusion, we suggest that ESG is metabolized to a RG-like structure in the intestine, and this metabolite activates the immune system via stimulation of the intestinal epithelium, although neither ESG nor its metabolite could permeate the intestinal cells under our experimental conditions. These results provide evidence for the beneficial function of ESG as a food ingredient.
Dmitrieva, Olga; Michalakidis, Georgios; Mason, Aaron; Jones, Simon; Chan, Tom; de Lusignan, Simon
2012-01-01
A new distributed model of health care management is being introduced in England. Family practitioners have new responsibilities for the management of health care budgets and commissioning of services. There are national datasets available about health care providers and the geographical areas they serve. These data could be better used to assist family practitioners turned health service commissioners. Unfortunately, these data are not in a form that is readily usable by these fledgling family commissioning groups. We therefore web-enabled all the national hospital dermatology treatment data in England, combining it with locality data to provide a smart commissioning tool for local communities. We used open-source software, including the Ruby on Rails Web framework and MySQL. The system has a Web front-end, which uses hypertext markup language, cascading style sheets (HTML/CSS) and JavaScript to deliver and present data provided by the database. A combination of advanced caching and schema structures allows for faster data retrieval on every execution. The system provides an intuitive environment for data analysis and processing across a large health system dataset. Web-enablement has enabled data about inpatients, day cases and outpatients to be readily grouped, viewed, and linked to other data. The combination of web-enablement, consistent data collection from all providers, readily available locality data, and a registration-based primary system enables the creation of data which can be used to commission dermatology services in small areas. Standardized datasets collected across large health enterprises, when web-enabled, can readily benchmark local services and inform commissioning decisions.
International Nuclear Information System (INIS)
Hartje, Udo A.J.
2008-01-01
Internationally, mainstream physics seeks the solution of the basic problems of physics at ever higher energies, in impressive facilities that continually outbid one another in technological expenditure. If the ''atomos'', the ''unit of physics'', is to be sought in this manner, then this is a wrong path. The sought-after Higgs particles are certainly not a simple thing, but a most complex object which would contain an enormous number of effect quanta in its structure. Since Planck, Poincare, Einstein, Bohr, Heisenberg, Schroedinger, De Broglie and other well-known physicists, we have known that this ''atomos'' has only a tiny quantity of energy, which singly is not measurable. The search with gigantic machines is all the more pointless because such processes pump even more energy into it. The elementary contains only fractions of the energy present in the smallest known particles or the weakest beams. This work follows another approach to grasping nature in a Final Theory (Grand Unification), by a deductive route. It starts from a most general analysis and synthesis of scientific and everyday-language concepts, shored up by the principle of the general physical field. The dynamic processes of the field are vividly illustrated by graphic means in space-time coordinate systems. From this arises an everywhere consistent view, from the simplest existences and structures up to the most complicated, for all fields of physics and philosophy, which until now had remained stubbornly closed to cognition. An important result is the solution of the puzzle of the ''Dualism of Wave and Particle''. Matter structures do not consist of a priori existing 'little verdicts' which swing secondarily, but of beams, which remain radiation-like in the interior of the particles and rotate there within themselves. This creates locality without changing the radiation itself into 'electrons' which rotate on paths. The Classical Physics and the
International Nuclear Information System (INIS)
Hartje, Udo A.J.
2007-01-01
Internationally, mainstream physics seeks the solution of the basic problems of physics at ever higher energies, in impressive facilities that continually outbid one another in technological expenditure. If the ''atomos'', the ''unit of physics'', is to be sought in this manner, then this is a wrong path. The sought-after Higgs particles are certainly not a simple thing, but a most complex object which would contain an enormous number of effect quanta in its structure. Since Planck, Poincare, Einstein, Bohr, Heisenberg, Schroedinger, De Broglie and other well-known physicists, we have known that this ''atomos'' has only a tiny quantity of energy, which singly is not measurable. The search with gigantic machines is all the more pointless because such processes pump even more energy into it. The elementary contains only fractions of the energy present in the smallest known particles or the weakest beams. This work follows another approach to grasping nature in a Final Theory (Grand Unification), by a deductive route. It starts from a most general analysis and synthesis of scientific and everyday-language concepts, shored up by the principle of the general physical field. The dynamic processes of the field are vividly illustrated by graphic means in space-time coordinate systems. From this arises an everywhere consistent view, from the simplest existences and structures up to the most complicated, for all fields of physics and philosophy, which until now had remained stubbornly closed to cognition. An important result is the solution of the puzzle of the ''Dualism of Wave and Particle''. Matter structures do not consist of a priori existing 'little verdicts' which swing secondarily, but of beams, which remain radiation-like in the interior of the particles and rotate there within themselves. This creates locality without changing the radiation itself into 'electrons' which rotate on paths. The Classical Physics and the
Stochastic Modelling Of The Repairable System
Directory of Open Access Journals (Sweden)
Andrzejczak Karol
2015-11-01
All reliability models consisting of random time factors form stochastic processes. In this paper we recall the definitions of the most common point processes used for modelling repairable systems. In particular, the paper presents stochastic processes as examples of reliability systems for the support of maintenance-related decisions. We consider the simplest one-unit system with negligible repair or replacement time, i.e., the unit operates and is repaired or replaced at failure, the time required for repair or replacement being negligible. When the repair or replacement is completed, the unit becomes as good as new and resumes operation. The stochastic modelling of recoverable systems constitutes an excellent method of supporting maintenance-related decision-making processes and enables their more rational use.
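The one-unit, as-good-as-new system described above is a renewal process; with exponential lifetimes it reduces to a Poisson process. A minimal simulation sketch, with a hypothetical failure rate:

```python
# Renewal-process sketch of the one-unit repairable system: repair is
# instantaneous and restores the unit to as-good-as-new, so inter-failure
# times are i.i.d. Exponential lifetimes and the rate are illustrative.
import random

def failures_by(horizon, rate, rng):
    """Count renewals (failures) in [0, horizon] for exponential lifetimes."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return n
        n += 1

rng = random.Random(42)
runs = [failures_by(1000.0, 0.01, rng) for _ in range(2000)]
mean_failures = sum(runs) / len(runs)
# Elementary renewal theorem: E[N(t)]/t -> rate, so the mean is about 10 here.
print(round(mean_failures, 1))
```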
DEFF Research Database (Denmark)
Yang, Laurence; Tan, Justin; O'Brien, Edward J.
2015-01-01
Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood...... based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma...... across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify...
Shoda, Munehito; Yokoyama, Takaaki; Suzuki, Takeru K.
2018-02-01
We propose a novel one-dimensional model that includes both shock and turbulence heating and quantify how these processes contribute to heating the corona and driving the solar wind. Compressible MHD simulations allow us to automatically consider shock formation and dissipation, while turbulent dissipation is modeled via a one-point closure based on Alfvén wave turbulence. Numerical simulations were conducted with different photospheric perpendicular correlation lengths λ_0, which is a critical parameter of Alfvén wave turbulence, and different root-mean-square photospheric transverse-wave amplitudes δv_0. For the various λ_0, we obtain a low-temperature chromosphere, a high-temperature corona, and a supersonic solar wind. Our analysis shows that turbulence heating is always dominant when λ_0 ≲ 1 Mm. This result does not mean that we can ignore the compressibility, because the analysis indicates that the compressible waves and their associated density fluctuations enhance the Alfvén wave reflection and therefore the turbulence heating. The density fluctuation and the cross-helicity are strongly affected by λ_0, while the coronal temperature and mass-loss rate depend weakly on λ_0.
Wójcicki, Tomasz; Nowicki, Michał
2016-01-01
The article presents a selected area of research and development concerning the methods of material analysis based on the automatic image recognition of the investigated metallographic sections. The objectives of the analyses of the materials for gas nitriding technology are described. The methods of the preparation of nitrided layers, the steps of the process and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using the methods of digital images processing in the analysis of the materials, as well as their essential task groups: improving the quality of the images, segmentation, morphological transformations and image recognition. The developed analysis model of the nitrided layers formation, covering image processing and analysis techniques, as well as selected methods of artificial intelligence are presented. The model is divided into stages, which are formalized in order to better reproduce their actions. The validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities of its practical use, are listed. PMID:28773389
Knight, Kevin S.
2015-03-01
The thermoelastic properties of the thermoelectric chalcogenide galena, lead sulfide (PbS), have been determined in the temperature interval 10-350 K from high resolution neutron powder diffraction data, and literature values of the isobaric heat capacity. Within this temperature range, galena can be described by a simple phenomenological model in which the cation and anion vibrate independently of one another in a Debye-like manner, with vibrational Debye temperatures of 120(1) K for the lead, and 324(2) K for the sulfur. Simultaneous fitting of the unit cell volume and the isochoric heat capacity to a two-term Debye internal energy function gives characteristic temperatures of 110(2), and 326(5) K in excellent agreement with the measured vibrational Debye temperatures derived from fitting the atomic displacement parameters. The thermodynamic Grüneisen constant derived from the isochoric heat capacity is found to monotonically increase with decreasing temperature, from 2.5 at 300 K, to 3.25 at 25 K, in agreement with the deductions of earlier work. The full phonon density of states calculated from the two-term Debye model shows fair agreement with that derived from density functional theory.
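The two-term Debye model described above can be sketched directly: the isochoric heat capacity is a sum of one Debye term per atom, here using the fitted characteristic temperatures of about 110 K (Pb) and 326 K (S). The quadrature scheme and resolution are implementation choices, not from the paper:

```python
# Two-term Debye heat capacity for galena (PbS): one Debye oscillator set
# per atom in the formula unit. Each term tends to 3R at high temperature.
import math

R = 8.314462618  # gas constant, J/(mol K)

def debye_cv(T, theta, npts=2000):
    """Debye heat capacity for one atom: 9R(T/theta)^3 * integral of
    x^4 e^x/(e^x-1)^2 from 0 to theta/T (midpoint quadrature)."""
    if T <= 0:
        return 0.0
    xmax = theta / T
    h = xmax / npts
    total = 0.0
    for i in range(1, npts + 1):
        x = (i - 0.5) * h
        ex = math.exp(x)
        total += x**4 * ex / (ex - 1.0)**2 * h
    return 9.0 * R * (T / theta)**3 * total

def cv_pbs(T, theta_pb=110.0, theta_s=326.0):
    """Two-term Debye model of galena's isochoric heat capacity."""
    return debye_cv(T, theta_pb) + debye_cv(T, theta_s)

print(cv_pbs(300.0))  # near (just below) the Dulong-Petit limit 6R ≈ 49.9 J/(mol K)
```

In the paper's fit, the same two characteristic temperatures are constrained simultaneously by the unit-cell volume and the isochoric heat capacity; the sketch above shows only the heat-capacity term.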
Wójcicki, Tomasz; Nowicki, Michał
2016-04-01
The article presents a selected area of research and development concerning the methods of material analysis based on the automatic image recognition of the investigated metallographic sections. The objectives of the analyses of the materials for gas nitriding technology are described. The methods of the preparation of nitrided layers, the steps of the process and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using the methods of digital images processing in the analysis of the materials, as well as their essential task groups: improving the quality of the images, segmentation, morphological transformations and image recognition. The developed analysis model of the nitrided layers formation, covering image processing and analysis techniques, as well as selected methods of artificial intelligence are presented. The model is divided into stages, which are formalized in order to better reproduce their actions. The validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities of its practical use, are listed.
Cai, Yong; Ye, Xiuxia; Shi, Rong; Xu, Gang; Shen, Lixiao; Ren, Jia; Huang, Hong
2013-06-04
High prevalence of risky sexual behaviors and lack of information, skills and preventive support mean that, adolescents face high risks of HIV/AIDS. This study applied the information-motivation-behavioral skills (IMB) model to examine the predictors of consistent condom use among senior high school students from three coastal cities in China and clarify the relationships between the model constructs. A cross-sectional study was conducted to assess HIV/AIDS related information, motivation, behavioral skills and preventive behaviors among senior high school students in three coastal cities in China. Structural equation modelling (SEM) was used to assess the IMB model. Of the 12313 participants, 4.5% (95% CI: 4.2-5.0) reported having had premarital sex and among them 25.0% (95% CI: 21.2-29.1) reported having used a condom in their sexual debut. Only about one-ninth of participants reported consistent condom use. The final IMB model provided acceptable fit to the data (CFI = 0.981, RMSEA = 0.014). Consistent condom use was significantly predicted by motivation (β = 0.175, P students in China. The IMB model could predict consistent condom use and suggests that future interventions should focus on improving motivation and behavioral skills.
Neradilek, Moni B; Polissar, Nayak L; Einstein, Daniel R; Glenny, Robb W; Minard, Kevin R; Carson, James P; Jiao, Xiangmin; Jacob, Richard E; Cox, Timothy C; Postlethwait, Edward M; Corley, Richard A
2012-06-01
We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of this error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. Copyright © 2012 Wiley Periodicals, Inc.
Mechanical Systems, Classical Models
Teodorescu, Petre P
2007-01-01
All phenomena in nature are characterized by motion; this is an essential property of matter, having infinitely many aspects. Motion can be mechanical, physical, chemical or biological, leading to various sciences of nature, mechanics being one of them. Mechanics deals with the objective laws of mechanical motion of bodies, the simplest form of motion. In the study of a science of nature, mathematics plays an important role. Mechanics is the first science of nature that was expressed in terms of mathematics, by considering various mathematical models associated with phenomena of the surrounding nature. Thus, its development was influenced by the use of a strong mathematical tool; on the other hand, we must observe that mechanics also influenced the introduction and development of many mathematical notions. In this respect, the guideline of the present book is precisely the mathematical model of mechanics. A special accent is put on the solving methodology as well as on the mathematical tools used; vectors, ...
International Nuclear Information System (INIS)
Halbach, K.
1978-01-01
A description is given of some unfinished work that may have a bearing on the problem of producing a small beam spot on a target for heavy-ion fusion. One of the important results obtained so far is an existence proof showing that it is possible, at least in principle, to design systems, containing only quadrupoles and/or solenoids, with vanishing first and second derivatives of the spot size with respect to momentum, both at the target and at the exit of the last lens.
International Nuclear Information System (INIS)
Astvatsaturov, R.G.; Arkhipov, V.V.; Vasil'ev, S.E.
1988-01-01
Hadron suppression is investigated using a 2 GeV/c momentum π⁻ beam containing 6% electrons, by means of a detection system including a lead-glass active counter, lead-glass Cherenkov γ-spectrometers, and a 1 m long scintillation counter viewed by two photomultipliers at its ends. Event selection by means of the scintillation counter has allowed the hadron contribution to be reduced by one order of magnitude at ≥ 90% electron detection efficiency. The accuracy of determining the electromagnetic shower axis coordinate from the time difference of light-signal propagation in the scintillation converter is practically independent of converter thickness within the 2-6 radiation length range and equals 2.1 cm.
Directory of Open Access Journals (Sweden)
Georgia Doxani
2015-10-01
The Sentinel missions have been designed to support the operational services of the Copernicus program, ensuring long-term availability of data for a wide range of spectral, spatial and temporal resolutions. In particular, Sentinel-2 (S-2) data, with improved high spatial resolution and higher revisit frequency (five days with the pair of satellites in operation), will play a fundamental role in recording land cover types and monitoring land cover changes at regular intervals. Nevertheless, cloud coverage usually hinders time-series availability and consequently continuous land surface monitoring. In an attempt to alleviate this limitation, the synergistic use of instruments with different features is investigated, aiming at the future synergy of the S-2 MultiSpectral Instrument (MSI) and the Sentinel-3 (S-3) Ocean and Land Colour Instrument (OLCI). To that end, an unmixing model is proposed with the intention of integrating the benefits of the two Sentinel missions, when both are in orbit, into one composite image. The main goal is to fill the data gaps in the S-2 record, based on the more frequent information of the S-3 time series. The proposed fusion model has been applied to MODIS (MOD09GA L2G) and SPOT4 (Take 5) data, and the experimental results have demonstrated that the approach has high potential. However, the different acquisition characteristics of the sensors, i.e. illumination and viewing geometry, should be taken into consideration, and bidirectional-effects correction has to be performed in order to reduce noise in the reflectance time series.
Economic model of pipeline transportation systems
Energy Technology Data Exchange (ETDEWEB)
Banks, W. F.
1977-07-29
The objective of the work reported here was to develop a model which could be used to assess the economic effects of energy-conservative technological innovations upon the pipeline industry. The model is a dynamic simulator which accepts inputs of two classes: the physical description (design parameters, fluid properties, and financial structures) of the system to be studied, and the postulated market (throughput and price) projection. The model consists of time-independent submodels: the fluidics model which simulates the physical behavior of the system, and the financial model which operates upon the output of the fluidics model to calculate the economics outputs. Any of a number of existing fluidics models can be used in addition to that developed as a part of this study. The financial model, known as the Systems, Science and Software (S³) Financial Projection Model, contains user options whereby pipeline-peculiar characteristics can be removed and/or modified, so that the model can be applied to virtually any kind of business enterprise. The several dozen outputs are of two classes: the energetics and the economics. The energetics outputs of primary interest are the energy intensity, also called unit energy consumption, and the total energy consumed. The primary economics outputs are the long-run average cost, profit, cash flow, and return on investment.
International Nuclear Information System (INIS)
Hartje, U.A.J.
2005-01-01
This paper presents theses for a universal 'Spiral-Field-Theory' capable of resolving, in parallel, problems from areas far removed from one another. The starting point is the stalled discussion of principles concerning the relationship between classical physics and quantum physics. The aim is to clarify questions that have remained open. In 1925 Max Planck formulated it as follows: 'Physical research cannot rest so long as mechanics and electrodynamics, on the one hand, have not been welded together with the theory of stationary and radiating heat, on the other, into a single unified theory'. The Spiral-Field-Model develops a supporting structure from the general field, into which the established knowledge from experiments and well-proven theories can be classified. The most important feature of this new final theory is the detailed generation of all natural phenomena exclusively from radiation, in the literal sense of the word. Ultimately, the two great disciplines of physics, which have drifted apart, become bonded together into a superordinate theoretical structure of the natural sciences. (orig.)
Directory of Open Access Journals (Sweden)
Pedro Vasconcellos Eisenlohr
2014-06-01
Rigorous and well-defined criteria for the classification of vegetation constitute a prerequisite for effective biodiversity conservation strategies. In 2009, a new classification system was proposed for vegetation types in extra-Andean tropical and subtropical South America. The new system expanded upon the criteria established in the existing Brazilian Institute of Geography and Statistics classification system. Here, we attempted to determine whether the tree species composition of the formations within the Atlantic Forest Biome of Brazil is consistent with this new classification system. We compiled floristic surveys of 394 sites in southeastern Brazil (between 15° and 25°S, and between the Atlantic coast and 55°W). To assess the floristic consistency of the vegetation types, we performed non-metric multidimensional scaling (NMDS) ordination analysis, followed by multifactorial ANOVA. The vegetation types, especially in terms of their thermal regimes, elevational belts and top-tier vegetation categories, were consistently discriminated on the first NMDS axis, and all assessed attributes showed at least one significant difference on the second axis. As expected on the basis of the theoretical background, we found that tree species composition in the areas of Atlantic Forest studied was highly consistent with the new system of classification. Our findings not only help solidify the position of this new classification system but also contribute to expanding the knowledge of the patterns and underlying driving forces of the distribution of vegetation in the region.
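The NMDS ordination step of such an analysis can be sketched as follows. The Bray-Curtis dissimilarity, the random site-by-species abundance matrix and all parameter choices are illustrative assumptions, not the authors' protocol:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
abund = rng.poisson(3.0, size=(10, 25)).astype(float)   # 10 sites x 25 species

# pairwise floristic dissimilarity between sites (Bray-Curtis is common in ecology)
D = squareform(pdist(abund, metric="braycurtis"))

# non-metric MDS on the precomputed dissimilarity matrix, two ordination axes
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           n_init=10, random_state=0)
scores = nmds.fit_transform(D)   # site scores on NMDS axes 1 and 2
```

The axis scores can then be fed to an ANOVA against the a-priori vegetation classes, as in the study.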
Energy Technology Data Exchange (ETDEWEB)
Ebert, Joerg
2007-08-31
In this work, the short-term and long-term stability of nanoscale metallic multilayers at elevated temperatures is studied. The reasons and mechanisms for the breakdown of the GMR effect have been analyzed by different physical methods. The multilayered samples investigated in this work exhibit a GMR effect of GMR(alloy) = 20.7%, which is significantly smaller than that of the standard system with pure Cu interlayers (GMR(Cu) = 25.2%). For protection against oxidation during use, a passivation coating consisting of SiO₂ and Si₃N₄ has been deposited by means of plasma CVD. Typical parameters for this process are times of t(short-term) = 1 h in the temperature range of 200 °C
Model Reduction of Hybrid Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza
for model reduction of switched systems is based on the switching generalized gramians. The reduced order switched system is guaranteed to be stable for all switching signals in this method. This framework uses stability conditions which are based on switching quadratic Lyapunov functions which are less...... conservative than the stability conditions based on common quadratic Lyapunov functions. The stability conditions used for this method are very useful in model reduction and design problems because they have slack variables in the conditions. Similar conditions for a class of switched nonlinear......High-technological solutions of today are characterized by complex dynamical models. Many of these models have an inherent hybrid/switching structure. Hybrid/switched systems are powerful models for distributed embedded systems design where discrete controls are applied to continuous processes...
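The gramian-based reduction idea can be sketched for a single stable (non-switched) LTI system; the square-root balanced-truncation routine below is a standard textbook construction under those assumptions, not the thesis's switched-gramian method:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C) to order r."""
    # Gramians: A P + P A^T + B B^T = 0  and  A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(P, lower=True)           # P = Lc Lc^T
    Lo = cholesky(Q, lower=True)           # Q = Lo Lo^T
    U, hsv, Vt = svd(Lo.T @ Lc)            # hsv: Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T  = Lc @ Vt[:r].T @ S                 # right state transformation
    Ti = S @ U[:, :r].T @ Lo.T             # left transformation, Ti @ T = I_r
    return Ti @ A @ T, Ti @ B, C @ T, hsv
```

States with small Hankel singular values contribute little to the input-output map, and the classical bound ‖G − G_r‖∞ ≤ 2·Σ(discarded hsv) quantifies the error.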
Tesio, Luigi; Simone, Anna; Grzeda, Mariuzs T; Ponzio, Michela; Dati, Gabriele; Zaratin, Paola; Perucca, Laura; Battaglia, Mario A
2015-01-01
The funding policy of research projects often relies on scores assigned by a panel of experts (referees). The non-linear nature of raw scores and the severity and inconsistency of individual raters may generate unfair numeric project rankings. Rasch measurement (in its many-facets version, MFRM) provides a valid alternative to scoring. MFRM was applied to the scores achieved by 75 research projects on multiple sclerosis submitted in response to a previous annual call by FISM-Italian Foundation for Multiple Sclerosis. This allowed us to simulate, a posteriori, the impact of MFRM on the funding scenario. The applications were each scored by 2 to 4 independent referees (total = 131) on a 10-item, 0-3 rating scale called FISM-ProQual-P. The rotation plan assured "connection" of all pairs of projects through at least 1 shared referee. The questionnaire satisfactorily fulfilled the stringent criteria of Rasch measurement for psychometric quality (unidimensionality, reliability and data-model fit). Arbitrarily, 2 acceptability thresholds were set, at a raw score of 21/30 and at the equivalent Rasch measure of 61.5/100, respectively. When the cut-off was switched from score to measure, 8 of the 18 acceptable projects had to be rejected, while 15 rejected projects became eligible for funding. Some referees, of various severity, were grossly inconsistent (z-std fit indexes less than -1.9 or greater than 1.9). The FISM-ProQual-P questionnaire seems a valid and reliable scale. MFRM may help the decision-making process for allocating funds to MS research projects, and also in other fields. In repeated assessment exercises it can help in selecting reliable referees, whose severity can be steadily calibrated, thus obviating the need to connect them with other referees assessing the same projects.
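The many-facets Rasch model underlying such an analysis is commonly written as follows (standard MFRM notation, assumed rather than taken from the abstract):

```latex
\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k
```

where \(P_{nijk}\) is the probability that project \(n\) receives category \(k\) (rather than \(k-1\)) on item \(i\) from referee \(j\); \(B_n\) is the project quality measure, \(D_i\) the item difficulty, \(C_j\) the referee severity, and \(F_k\) the calibration of step \(k\) of the 0-3 rating scale. Because referee severity \(C_j\) enters as an explicit facet, project measures \(B_n\) are adjusted for who happened to rate each application.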
Distribution system modeling and analysis
Kersting, William H
2001-01-01
For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems; often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...
Directory of Open Access Journals (Sweden)
Susanne H Landis
Extreme climate events such as heat waves are expected to increase in frequency under global change. As one indirect effect, they can alter the magnitude and direction of species interactions, for example those between hosts and parasites. We simulated a summer heat wave to investigate how a changing environment affects the interaction between the broad-nosed pipefish (Syngnathus typhle) as a host and its digenean trematode parasite (Cryptocotyle lingua). In a fully reciprocal laboratory infection experiment, pipefish from three different coastal locations were exposed to sympatric and allopatric trematode cercariae. In order to examine whether an extreme climatic event disrupts patterns of locally adapted host-parasite combinations, we measured the parasite's transmission success as well as the host's adaptive and innate immune defence under control and heat-wave conditions. Independent of temperature, sympatric cercariae were always more successful than allopatric ones, indicating that parasites are locally adapted to their hosts. Hosts suffered from heat stress, as suggested by fewer cells of the adaptive immune system (lymphocytes) compared to the same groups kept at 18°C. However, the proportion of innate immune cells (monocytes) was higher in the 18°C water. Contrary to our expectations, no interaction between host immune defence, parasite infectivity and temperature stress was found, nor did the pattern of local adaptation change due to increased water temperature. Thus, in this host-parasite interaction, the sympatric parasite keeps ahead of the coevolutionary dynamics across sites, even under increasing temperatures as expected under marine global warming.
Modelling and Analysing Socio-Technical Systems
DEFF Research Database (Denmark)
Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming
2015-01-01
Modern organisations are complex, socio-technical systems consisting of a mixture of physical infrastructure, human actors, policies and processes. An increasing number of attacks on these organisations exploit vulnerabilities on all different levels, for example combining a malware attack...... with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling...... and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...
Lin, F.; Hilairet, N.; Raterron, P.; Addad, A.; Immoor, J.; Marquardt, H.; Tomé, C. N.; Miyagi, L.; Merkel, S.
2017-11-01
Anisotropy has a crucial effect on the mechanical response of polycrystalline materials. Polycrystal anisotropy is a consequence of single-crystal anisotropy and of texture (crystallographic preferred orientation) development, which can result from plastic deformation by dislocation glide. The plastic behavior of polycrystals differs under varying hydrostatic pressure conditions, and understanding the effect of hydrostatic pressure on plasticity is of general interest. Moreover, in the case of geological materials, it is useful for understanding material behavior in the deep earth and for the interpretation of seismic data. Periclase is a good material to test because of its simple and stable crystal structure (B1), and it is of interest to geosciences, as (Mg,Fe)O is the second most abundant phase in Earth's lower mantle. In this study, a polycrystalline sintered sample of periclase is deformed at ~5.4 GPa and ambient temperature, to a total strain of 37% at average strain rates of 2.26 × 10⁻⁵/s and 4.30 × 10⁻⁵/s. Lattice strains and textures in the polycrystalline sample are recorded using in-situ synchrotron x-ray diffraction and are modeled with Elasto-Viscoplastic Self-Consistent (EVPSC) methods. Parameters such as the critical resolved shear stress (CRSS) for the various slip systems, strain hardening, initial grain shape, and the strength of the grain-neighborhood interaction are tested in order to optimize the simulation. At the beginning of deformation, a transient maximum occurs in lattice strains; then lattice strains relax to a "steady-state" value, which, we believe, corresponds to the true flow strength of periclase. The "steady-state" CRSS of the {110}⟨11̄0⟩ slip system is 1.2 GPa, while modeling the transient maximum requires a CRSS of 2.2 GPa. Interpretation of the overall experimental data via modeling indicates dominant {110}⟨11̄0⟩ slip with initial strain
DEFF Research Database (Denmark)
Thomsen, Christa; Nielsen, Anne Ellerup
2006-01-01
This chapter first outlines theory and literature on CSR and Stakeholder Relations focusing on the different perspectives and the contextual and dynamic character of the CSR concept. CSR reporting challenges are discussed and a model of analysis is proposed. Next, our paper presents the results...... in the reporting material. By implementing consistent discourse strategies that interact according to a well-defined pattern or order, it is possible to communicate a strong social commitment on the one hand, and to take into consideration the expectations of the shareholders and the other stakeholders...
Martinez-Alvarado, Oscar; Gray, Suzanne; Methven, John
2016-04-01
Diabatic processes in the atmosphere can be characterised by the changes they produce on potential temperature (θ) and potential vorticity (PV) following an air parcel. Diabatic tracers of θ and PV track the changes undergone by those two variables due to the action of diabatic processes in a Lagrangian frame by splitting θ and PV into components that are materially conserved and components that are diabatically generated. Since diabatic tracers are subject to advection by the three-dimensional wind field, they are useful tools for the investigation of the interaction of diabatic processes with the atmospheric flow and the impact of diabatic processes on the evolution of the atmosphere. In this contribution, we present a novel integral interpretation of diabatic tracers over suitably defined control volumes, which depend on the weather system under consideration. Using two contrasting extratropical cyclones as examples, it is shown that θ tracers can be used to assess and systematically compare the cross-isentropic mass transport around each cyclone, which is related to the amount and distribution of heat produced during each cyclone's development. PV tracers are related to circulation and area-average isentropic vorticity through the application of Stokes' theorem. Using the impermeability theorem for PV, which states there can be no PV flux across isentropic surfaces, it is also shown that cross-isentropic motion within the control volumes does not directly influence circulation. Instead, the influence of diabatic processes on the circulation crucially depends on the balance between the fluxes along isentropic surfaces of the materially-conserved and diabatically-generated PV components across the lateral boundaries of the control volumes. Finally, the application of the integral interpretation of diabatic tracers for the assessment of model consistency across different model resolutions is discussed.
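A minimal notational sketch of the tracer decomposition and the circulation relation described above (symbols are assumed, not taken from the abstract):

```latex
\theta = \tilde{\theta} + \sum_i \Delta\theta_i, \qquad
\frac{D\tilde{\theta}}{Dt} = 0, \qquad
\frac{D(\Delta\theta_i)}{Dt} = \dot{\theta}_i ,
```

and likewise \(q = \tilde{q} + \sum_i \Delta q_i\) for PV, where each \(\Delta\theta_i\) (or \(\Delta q_i\)) accumulates the tendency \(\dot{\theta}_i\) of one diabatic process along the trajectory. On an isentropic surface, Stokes' theorem links the circulation around a control area \(A\) to the area-integrated PV,

```latex
C = \oint_{\partial A} \mathbf{u} \cdot d\mathbf{l} = \iint_A \sigma q \, dA ,
```

where \(\sigma\) is the isentropic-layer density; by the impermeability theorem, \(C\) can change only through fluxes of the conserved and diabatically generated PV components across the lateral boundary \(\partial A\), not through cross-isentropic motion.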
Intrusion detection: systems and models
Sherif, J. S.; Dearmond, T. G.
2002-01-01
This paper puts forward a review of the state of the art and the applicability of intrusion detection systems and models. The paper also presents a classification of the literature pertaining to intrusion detection.
International Nuclear Information System (INIS)
Beltracchi, Leo
1999-01-01
The design and development of a digital computer-based safety system for a nuclear power plant is a complex process. The process of design and product development must result in a final product free of critical errors; operational safety of nuclear power plants must not be compromised. This paper focuses on the development of a safety system model to assist designers, developers, and regulators in establishing and evaluating requirements for a digital computer-based safety system. The model addresses hardware, software, and human elements for use in the requirements definition process. The purpose of the safety system model is to assist and serve as a guide to humans in the cognitive reasoning process of establishing requirements. The goals in the use of the model are to: (1) enhance the completeness of the requirements and (2) reduce the number of errors associated with the requirements definition phase of a project
Mathematical modeling of aeroelastic systems
Velmisov, Petr A.; Ankilov, Andrey V.; Semenova, Elizaveta P.
2017-12-01
In the paper, the stability of elastic elements of a class of structures interacting with a gas or liquid flow is investigated. The definition of stability of an elastic body corresponds to Lyapunov's concept of stability of dynamical systems. As examples, mathematical models of flow channels (models of vibration devices) in subsonic flow and mathematical models of a protective surface in supersonic flow are considered. The models are described by coupled systems of partial differential equations. An analytic investigation of stability is carried out by constructing Lyapunov-type functionals; a numerical investigation is carried out with the Galerkin method. Various models of the gas-liquid medium (compressible, incompressible) and various models of the deformable body (linearly elastic and nonlinearly elastic) are considered.
Directory of Open Access Journals (Sweden)
Jürgen Geiser
2011-01-01
processes. In this paper we present a new model taking into account a self-consistent electrostatic particle-in-cell treatment of a low-density argon plasma. The collision model, based on Monte Carlo simulations, is discussed for DC sputtering in lower pressure regimes. In order to simulate transport phenomena within sputtering processes realistically, spatial and temporal knowledge of the plasma density and electrostatic field configuration is needed. Due to the relatively low plasma densities, continuum fluid equations are not applicable. We propose instead a particle-in-cell (PIC) method, which allows the study of plasma behavior by computing the trajectories of finite-size particles under the action of an external and self-consistent electric field defined on a grid of points.
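A minimal 1D electrostatic PIC cycle (charge deposition, FFT Poisson solve, field gather, leapfrog push) can be sketched as below; the periodic domain, normalised units (ε₀ = 1, unit particle charge) and cloud-in-cell weighting are illustrative assumptions, not the paper's sputtering model:

```python
import numpy as np

def pic_step(x, v, q_over_m, grid_n, length, dt):
    """One explicit PIC cycle on a periodic 1D domain."""
    dx = length / grid_n
    # --- cloud-in-cell charge deposition onto the grid ---
    xi = x / dx
    left = np.floor(xi).astype(int) % grid_n
    frac = xi - np.floor(xi)
    rho = np.zeros(grid_n)
    np.add.at(rho, left, 1.0 - frac)
    np.add.at(rho, (left + 1) % grid_n, frac)
    rho = rho / dx
    rho -= rho.mean()                        # neutralising background
    # --- Poisson solve in Fourier space: d^2 phi/dx^2 = -rho (eps0 = 1) ---
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nz = k != 0
    phi_k[nz] = rho_k[nz] / k[nz] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -dphi/dx on the grid
    # --- gather: linear interpolation of E to particle positions ---
    E_p = E[left] * (1.0 - frac) + E[(left + 1) % grid_n] * frac
    # --- leapfrog push ---
    v = v + q_over_m * E_p * dt
    x = (x + v * dt) % length
    return x, v, E
```

The self-consistency the abstract refers to lies in this loop: the particles generate the field that in turn advances the particles; an external DC field would simply be added to `E_p`.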
ABSTRACT MODELS FOR SYSTEM VIRTUALIZATION
Directory of Open Access Journals (Sweden)
M. G. Koveshnikov
2015-05-01
The paper addresses the securing of system objects (system files and user, system or application configuration files) against unauthorized access, including denial-of-service attacks. We suggest a method and develop abstract system virtualization models, which are used to research attack scenarios for different virtualization modes. An estimation of the effectiveness of the system-tools virtualization technology is given. The suggested technology is based on redirecting access requests to system objects shared among access subjects. Whole and partial system virtualization modes have been modeled. The difference between them is the following: in the whole virtualization mode, copies of all accessed system objects, including the corresponding application objects, are created and subjects' requests are redirected to them; in the partial virtualization mode, copies are created only for part of the system, for example only the system objects of applications. The effectiveness of the alternative solutions is evaluated against different attack scenarios. We consider a proprietary, approved technical solution which implements the system virtualization method for the Microsoft Windows OS family. The administrative simplicity and capabilities of correspondingly designed system-object security tools are illustrated by this example. The practical significance of the suggested security method has been confirmed.
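The request-redirection idea can be sketched as a toy in-memory model; the class, the `app:` naming convention for application objects and the read/write interface are all hypothetical illustrations, not the Windows implementation described in the paper:

```python
class VirtualizedStore:
    """Toy model: access requests to shared system objects are redirected
    to per-subject copies, so the shared originals stay intact."""

    def __init__(self, system_objects, virtualize_all=True):
        self.master = system_objects          # shared system objects
        self.copies = {}                      # (subject, name) -> private copy
        self.virtualize_all = virtualize_all  # whole vs partial virtualization

    def read(self, subject, name):
        # a subject sees its own copy if one exists, otherwise the original
        return self.copies.get((subject, name), self.master[name])

    def write(self, subject, name, value):
        if self.virtualize_all or name.startswith("app:"):
            self.copies[(subject, name)] = value   # redirected; original intact
        else:
            self.master[name] = value              # not virtualized in partial mode
```

In whole mode every write is redirected, so a compromised subject can only corrupt its own view; in partial mode only objects matching the (assumed) `app:` convention are protected.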
Aerodynamic and Mechanical System Modelling
DEFF Research Database (Denmark)
Jørgensen, Martin Felix
This thesis deals with mechanical multibody-systems applied to the drivetrain of a 500 kW wind turbine. Particular focus has been on gearbox modelling of wind turbines. The main part of the present project involved programming multibody systems to investigate the connection between forces, moments...
Shaffer, W O; Spratt, K F; Weinstein, J; Lehmann, T R; Goel, V
1990-08-01
An experimental model of the L4-L5 lumbar motion segment was developed that allowed precise manipulation of sagittal translation, rotation of L5 relative to L4, tilt of L4 on L5, and control of roentgenogram quality (image clarity) by placing a water bath between the tube and the vertebral body. A series of experiments were designed to systematically assess the consistency and accuracy of sagittal translation measurements from roentgenograms of varying quality, using different measurement protocols and various rater combinations on models with varying degrees of concomitant motions (rotations and tilts). Study 1 assessed the effects of roentgenogram quality, raters, and seven measurement methods on the consistency and accuracy of evaluating translations in the sagittal plane. Results indicated very high reliabilities across roentgenogram quality, raters, and measurement. As expected, high-quality roentgenograms were more accurately evaluated than lower-quality roentgenograms. However, closer inspection of the consequences of errors in measured translations indicated surprisingly high false-positive and false-negative rates, with significant differences observed between measurement methods. Study 2 assessed the effects of concomitant motions and measurement methods on the consistency and accuracy of evaluations. Within-rater consistency and accuracy indices were remarkably high and similar across measurement methods and degrees of concomitant motions. However, important differences in the false-positive and false-negative rates were again observed. Method 2, described by Morgan and King, demonstrated the overall best performance and the least interference due to concomitant motions. Study 3 assessed the effects of raters and measurement methods on the consistency of measuring translation in clinical roentgenograms, where concomitant motion factors may be present, but not explicitly considered. Results indicated substantially lower within- and between
Spatial Models and Networks of Living Systems
DEFF Research Database (Denmark)
Juul, Jeppe Søgaard
. Such systems are known to be stabilized by spatial structure. Finally, I analyse data from a large mobile phone network and show that people who are topologically close in the network have similar communication patterns. This main part of the thesis is based on six different articles, which I have co...... with interactions defined by network topology. In this thesis I first describe three different biological models of ageing and cancer, in which spatial structure is important for the system dynamics. I then turn to describe characteristics of ecosystems consisting of three cyclically interacting species...
Preface: the hydra model system.
Galliot, Brigitte
2012-01-01
The freshwater Hydra polyp emerged as a model system in 1741 when Abraham Trembley not only discovered its amazing regenerative potential, but also demonstrated that experimental manipulations pave the way to research in biology. Since then, Hydra has flourished as a potent and fruitful model system for answering questions in cell and developmental biology, such as the setting up of an organizer to regenerate a complex missing structure, the establishment and maintenance of polarity in a multicellular organism, the development of mathematical models to explain the robust developmental rules observed in this animal, the maintenance of stemness and multipotency in a highly dynamic environment, and the plasticity of differentiated cells, to name but a few. However, the Hydra model system is not restricted to cell and developmental biology; during the past 270 years it has also been heavily used to investigate the relationships between Hydra and its environment, opening new horizons in neurophysiology, innate immunity, ecosystems, ecotoxicology and symbiosis...
Iordache, Octavian
2011-01-01
This book is devoted to modeling of multi-level complex systems, a challenging domain for engineers, researchers and entrepreneurs, confronted with the transition from learning and adaptability to evolvability and autonomy for technologies, devices and problem solving methods. Chapter 1 introduces the multi-scale and multi-level systems and highlights their presence in different domains of science and technology. Methodologies as, random systems, non-Archimedean analysis, category theory and specific techniques as model categorification and integrative closure, are presented in chapter 2. Chapters 3 and 4 describe polystochastic models, PSM, and their developments. Categorical formulation of integrative closure offers the general PSM framework which serves as a flexible guideline for a large variety of multi-level modeling problems. Focusing on chemical engineering, pharmaceutical and environmental case studies, the chapters 5 to 8 analyze mixing, turbulent dispersion and entropy production for multi-scale sy...
Energy Technology Data Exchange (ETDEWEB)
Gepraegs, R.; Schmitz, G.; Peters, D. [Institut fuer Atmosphaerenphysik, Kuehlungsborn (Germany)
1997-12-31
A 2D version of the ECHAM T21 climate model has been developed. The new model includes an efficient spectral transport scheme with implicit diffusion. Furthermore, the photodissociation and chemistry of the NCAR 2D model have been incorporated. A self-consistent parametrization scheme is used for the eddy heat and momentum fluxes in the troposphere. It is based on the heat-flux parametrization of Branscome and a mixing-length formulation for quasi-geostrophic vorticity. Above 150 hPa the mixing coefficient K_yy is prescribed. Some of the model results are discussed, concerning especially the impact of aircraft NO_x emissions on the model chemistry. (author) 6 refs.
Modeling and Control of Underwater Robotic Systems
Energy Technology Data Exchange (ETDEWEB)
Schjoelberg, I.
1996-12-31
This doctoral thesis describes modeling and control of underwater vehicle-manipulator systems. The thesis also presents a model and a control scheme for a system consisting of a surface vessel connected to an underwater robotic system by means of a slender marine structure. The equations of motion of the underwater vehicle and manipulator are described and the system kinematics and properties presented. Feedback linearization technique is applied to the system and evaluated through a simulation study. Passivity-based controllers for vehicle and manipulator control are presented. Stability of the closed loop system is proved and simulation results are given. The equation of motion for lateral motion of a cable/riser system connected to a surface vessel at the top end and to a thruster at the bottom end is described and stability analysis and simulations are presented. The equations of motion in 3 degrees of freedom of the cable/riser, surface vessel and robotic system are given. Stability analysis of the total system with PD-controllers is presented. 47 refs., 32 figs., 7 tabs.
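The abstract states that feedback linearization was applied to the vehicle-manipulator system and evaluated in simulation. As an illustrative sketch only (not the thesis's actual controller), the computed-torque idea for a hypothetical 1-DOF plant m·q̈ + d·q̇ = τ can be written as:

```python
def feedback_linearization_1dof(m, d, q, dq, q_des, dq_des, ddq_des,
                                kp=25.0, kd=10.0):
    """Computed-torque control for a 1-DOF model m*ddq + d*dq = tau.

    The plant dynamics are cancelled and linear error dynamics
    e'' + kd*e' + kp*e = 0 are imposed. Gains kp, kd and the 1-DOF
    plant are illustrative assumptions, not taken from the thesis.
    """
    e, de = q_des - q, dq_des - dq
    v = ddq_des + kd * de + kp * e   # new linear control input
    return m * v + d * dq            # inverse-dynamics cancellation
```

With kp and kd positive, the tracking error decays exponentially regardless of m and d, which is the property the simulation study would evaluate.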
Energy Technology Data Exchange (ETDEWEB)
Waldhoff, Stephanie T.; Martinich, Jeremy; Sarofim, Marcus; DeAngelo, B. J.; McFarland, Jim; Jantarasami, Lesley; Shouse, Kate C.; Crimmins, Allison; Ohrel, Sara; Li, Jia
2015-07-01
The Climate Change Impacts and Risk Analysis (CIRA) modeling exercise is a unique contribution to the scientific literature on climate change impacts, economic damages, and risk analysis that brings together multiple national-scale models of impacts and damages in an integrated and consistent fashion to estimate climate change impacts, damages, and the benefits of greenhouse gas (GHG) mitigation actions in the United States. The CIRA project uses three consistent socioeconomic, emissions, and climate scenarios across all models to estimate the benefits of GHG mitigation policies: a Business As Usual (BAU) scenario and two policy scenarios with radiative forcing (RF) stabilization targets of 4.5 W/m2 and 3.7 W/m2 in 2100. CIRA was also designed to specifically examine the sensitivity of results to uncertainties around climate sensitivity and differences in model structure. The goals of the CIRA project are to 1) build a multi-model framework to produce estimates of multiple risks and impacts in the U.S., 2) determine to what degree risks and damages across sectors may be lowered from a BAU to policy scenarios, 3) evaluate key sources of uncertainty along the causal chain, and 4) provide information for multiple audiences and clearly communicate the risks and damages of climate change and the potential benefits of mitigation. This paper describes the motivations, goals, and design of the CIRA modeling exercise and introduces the subsequent papers in this special issue.
Energy Technology Data Exchange (ETDEWEB)
Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-07-07
In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the August and October 2014 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.
Energy Technology Data Exchange (ETDEWEB)
Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Riley, W. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-09-03
In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the January, March, and April 2015 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.
International Nuclear Information System (INIS)
Christianson, O; Winslow, J; Samei, E
2014-01-01
Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models, and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image
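The SSDE calculation the abstract describes scales CTDIvol by a size-dependent conversion factor. A minimal sketch, assuming the exponential fit published in AAPM Report 204 for the 32 cm reference phantom (coefficients approximate; the paper's actual processing pipeline is not shown in the abstract):

```python
import math

def ssde(ctdi_vol_mgy: float, effective_diameter_cm: float) -> float:
    """Size-specific dose estimate (mGy) from CTDIvol and patient size.

    Conversion factor follows the AAPM Report 204 exponential fit for
    the 32 cm phantom; the coefficients are assumed here for
    illustration and should be checked against the report.
    """
    f = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
    return ctdi_vol_mgy * f
```

Because the factor decays with effective diameter, the same CTDIvol maps to a lower SSDE for larger patients, which is why patient size must be measured from the localizers before dose can be compared across scanners.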
International Nuclear Information System (INIS)
Fredrickson, E.D.; McGuire, K.M.; Goldston, R.J.
1987-01-01
Electron heat transport on TFTR and other tokamaks is several orders of magnitude larger than neoclassical calculations predict. Despite considerable effort, there is still no clear theoretical understanding of this anomalous transport. The electron temperature profile, T_e(r), has shown a marked consistency on many machines for a wide range of plasma parameters and heating profiles. This could be an important clue to the process responsible for this enhanced thermal transport. In the first section of the paper the result is presented that TFTR electron temperature profile shapes are even more constrained than previous models of profile consistency suggested. The profile shapes, T_e(r)/T_e(a/2), are found to be invariant (for r > 0.4a) for a wide range of parameters, including q(a). In the second section, an experiment is discussed which uses a fast current ramp to transiently decouple the current density profile, J(r), and the T_e(r) profile. From this experiment, it has been determined that the J(r) profile can be strongly modified with no measurable effect on the electron temperature profile shape. Thus, while the electron temperature profile is apparently constrained, the current profile is not. (author). Letter-to-the-editor. 25 refs, 9 figs
Executive Information Systems' Multidimensional Models
Directory of Open Access Journals (Sweden)
2007-01-01
Executive Information Systems are designed to improve the quality of strategic-level management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence tools. But in order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the organization's business model. This paper presents some multidimensional models that can be used in EIS development and proposes a new model that is suitable for strategic business requirements.
Systems modelling and the development of coherent cell biological knowledge
Verhoeff, R.; Waarlo, A.J.; Boersma, K.T.
2008-01-01
This article reports on educational design research concerning a learning and teaching strategy for cell biology in upper-secondary education introducing systems modelling as a key competence. The strategy consists of four modelling phases in which students subsequently develop models of free-living
Numerical Modeling of Microelectrochemical Systems
DEFF Research Database (Denmark)
Adesokan, Bolaji James
for the reactants in the bulk electrolyte that are traveling waves. The first paper presents the mathematical model which describes an electrochemical system and simulates an electroanalytical technique called cyclic voltammetry. The model is governed by a system of advection–diffusion equations with a nonlinear...... reaction term at the boundary. We investigate the effect of flow rates, scan rates, and concentration on the cyclic voltammetry. We establish that high flow rates lead to reduced hysteresis in the cyclic voltammetry curves and that increasing scan rates lead to more pronounced current peaks. The final part...... of the paper shows that the response current in a cyclic voltammetry increases proportionally to the electrolyte concentration. In the second paper we present an experiment of an electrochemical system in a microfluidic system and compare the results to the numerical solutions. We investigate how the position...
System Code Models and Capabilities
International Nuclear Information System (INIS)
Bestion, D.
2008-01-01
System thermalhydraulic codes such as RELAP, TRACE, CATHARE or ATHLET are now commonly used for reactor transient simulations. The whole methodology of code development is described, including the derivation of the system of equations, the analysis of experimental data to obtain closure relations, and the validation process. The characteristics of the models are briefly presented, starting with the basic assumptions, the system of equations and the derivation of closure relationships. Extensive work was devoted during the last three decades to the improvement and validation of these models, which resulted in some homogenisation of the different codes although they were separately developed. The so-called two-fluid model is the common basis of these codes, and it is shown how it can describe both thermal and mechanical nonequilibrium. A review of some important physical models illustrates the main capabilities and limitations of system codes. Attention is drawn to the role of flow regime maps, to the various methods for developing closure laws, and to the role of interfacial area and turbulence in interfacial and wall transfers. More details are given for interfacial friction laws and their relation to drift-flux models. Prediction of choked flow and CCFL is also addressed. Based on some limitations of the present generation of codes, perspectives for the future are drawn.
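The relation between interfacial friction laws and drift-flux models that the abstract mentions can be illustrated by the basic drift-flux void-fraction formula. A hedged sketch, with illustrative values of the distribution parameter C0 and drift velocity Vgj (real system codes correlate both with flow regime and geometry):

```python
def void_fraction(jg, jl, c0=1.2, vgj=0.35):
    """Drift-flux void fraction: alpha = jg / (C0*(jg + jl) + Vgj).

    jg, jl  : gas and liquid superficial velocities (m/s)
    c0, vgj : distribution parameter and drift velocity (m/s);
              the defaults are illustrative, not code-specific values.
    """
    j = jg + jl  # total volumetric flux
    return jg / (c0 * j + vgj)
```

The same relation can be inverted to express the interfacial friction needed for a two-fluid model to reproduce a given drift-flux correlation, which is the connection the abstract alludes to.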
Experimental Modeling of Dynamic Systems
DEFF Research Database (Denmark)
Knudsen, Morten Haack
2006-01-01
An engineering course, Simulation and Experimental Modeling, has been developed that is based on a method for direct estimation of physical parameters in dynamic systems. Compared with classical system identification, the method appears to be easier to understand, apply, and combine with physical...... insight. It is based on a sensitivity approach that is useful for choice of model structure, for experiment design, and for accuracy verification. The method is implemented in the Matlab toolkit Senstools. The method and the presentation have been developed with generally preferred learning styles in mind...
長岡, 雅美; 赤松, 喜久; Masami, Nagaoka; Yoshihisa, Akamatsu
2008-01-01
The purpose of this study is to clarify the concept of guidance and support in community sports and to specify the direction of organization and support for achieving a lifelong sports society. The authors stress that achieving a lifelong sports society requires cooperation with other groups and the construction of a consistent support system. This study also explores the conditions of community sports club management through an analysis of the Japan Juni...
Compositional Modeling of Biological Systems
Zámborszky, Judit
2010-01-01
Molecular interactions are wired in a fascinating way resulting in complex behavior of biological systems. Theoretical modeling provides us a useful framework for understanding the dynamics and the function of such networks. The complexity of the biological systems calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool to attack complexity is compositionality, already successfully used in the process algebra field ...
Energy Technology Data Exchange (ETDEWEB)
Gupta, Mukesh [URS Professional Solutions LLC, Aiken, SC (United States); Niemi, Belinda [Washington River Protection Solutions, LLC, Richland, WA (United States); Paik, Ingle [Washington River Protection Solutions, LLC, Richland, WA (United States)
2015-09-02
In 2012, One System Nuclear Safety performed a comparison of the safety bases for the Tank Farms Operations Contractor (TOC) and Hanford Tank Waste Treatment and Immobilization Plant (WTP) (RPP-RPT-53222 / 24590-WTP-RPT-MGT-12-018, “One System Report of Comparative Evaluation of Safety Bases for Hanford Waste Treatment and Immobilization Plant Project and Tank Operations Contract”), and identified 25 recommendations that required further evaluation for consensus disposition. This report documents ten NSSC approved consistent methodologies and guides and the results of the additional evaluation process using a new set of evaluation criteria developed for the evaluation of the new methodologies.
Executable UML Modeling For Automotive Embedded Systems
International Nuclear Information System (INIS)
Gerard, Sebastien
2000-01-01
Engineers increasingly face the hard problem of sophisticated real-time systems while time to market keeps shrinking. Object-oriented modeling supported by the UML standard brings effective solutions to such problems. However, the means of specifying the real-time aspects of an application are not yet fully satisfactory. Indeed, existing industrial proposals answer the concurrency specification problem well, but they remain limited with regard to specifying the quantitative real-time properties of an application. This work aims to construct a complete and consistent UML methodology based on a profile dedicated to modeling and prototyping automotive embedded systems. This profile contains all the extensions needed to express the quantitative real-time properties of an application easily. Moreover, thanks to the formalization of UML protocol state machines, real-time concepts have been well integrated into the object-oriented paradigm. The main result of this deep integration is that a user is now able to model real-time systems through the classical object-oriented view, i.e. without needing any specific expertise in the real-time area. To answer an industrial requirement, namely systems prototyping (a key point for the car industry), the ACCORD/UML approach also allows executable models of an application to be built. For that purpose, the method supplies a set of rules for removing ambiguous points in the UML semantics and completing semantic variation points, so as to obtain a complete and coherent global model of an application that is executable. The UML extensions and the formalization of their use carried out throughout this thesis also supply a complete and unambiguous modeling framework for developing automotive electronic systems. This is also a base particularly well suited to tackling other facets of systems development, such as automatic and optimized code generation, validation, simulation and testing. (author) [fr
Model checking embedded system designs
Brinksma, Hendrik; Mader, Angelika H.
2002-01-01
Model checking has established itself as a successful tool-supported technique for the verification and debugging of various hardware and software systems [16]. Not only in academia, but also in industry, this technique is increasingly being regarded as a promising and practical proposition,
GENERIC model for multiphase systems
Sagis, L.M.C.
2010-01-01
GENERIC is a nonequilibrium thermodynamic formalism in which the dynamic behavior of a system is described by a single compact equation involving two types of brackets: a Poisson bracket and a dissipative bracket. This formalism has proved to be a very powerful instrument to model the dynamic
Energy Technology Data Exchange (ETDEWEB)
Zecevic, Milovan [Department of Mechanical Engineering, University of New Hampshire, Durham, NH 03824 (United States); Knezevic, Marko, E-mail: marko.knezevic@unh.edu [Department of Mechanical Engineering, University of New Hampshire, Durham, NH 03824 (United States); Beyerlein, Irene J. [Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Tomé, Carlos N. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2015-06-25
In this work, we develop a polycrystal mean-field constitutive model based on an elastic–plastic self-consistent (EPSC) framework. In this model, we incorporate recently developed subgrain models for dislocation density evolution with thermally activated slip, twin activation via statistical stress fluctuations, reoriented twin domains within the grain and associated stress relaxation, twin boundary hardening, and de-twinning. The model is applied to a systematic set of strain path change tests on pure beryllium (Be). Under the applied deformation conditions, Be deforms by multiple slip modes and deformation twinning and thereby provides a challenging test for model validation. With a single set of material parameters, determined using the flow-stress vs. strain responses during monotonic testing, the model predicts well the evolution of texture, lattice strains, and twinning. With further analysis, we demonstrate the significant influence of internal residual stresses on (1) the flow stress drop when reloading from one path to another, (2) deformation twin activation, (3) de-twinning during a reversal strain path change, and (4) the formation of additional twin variants during a cross-loading sequence. The model presented here can, in principle, be applied to other metals, deforming by multiple slip and twinning modes under a wide range of temperature, strain rate, and strain path conditions.
An extensible analysable system model
DEFF Research Database (Denmark)
Probst, Christian W.; Hansen, Rene Rydhof
2008-01-01
Analysing real-world systems for vulnerabilities with respect to security and safety threats is a difficult undertaking, not least due to a lack of availability of formalisations for those systems. While both formalisations and analyses can be found for artificial systems such as software......, this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...... are based on (quite successful) ad-hoc techniques. We believe they can be significantly improved beyond the state-of-the-art by pairing them with static analyses techniques. In this paper we present an approach to both formalising those real-world systems, as well as providing an underlying semantics, which...
Smith, J. A.; Froyd, K. D.; Toon, O. B.
2012-12-01
We construct tables of reaction enthalpies and entropies for the association reactions involving sulfuric acid vapor, water vapor, and the bisulfate ion. These tables are created from experimental measurements and quantum chemical calculations for molecular clusters and a classical thermodynamic model for larger clusters. These initial tables are not thermodynamically consistent. For example, the Gibbs free energy of associating a cluster consisting of one acid molecule and two water molecules depends on the order in which the cluster was assembled: add two waters and then the acid or add an acid and a water and then the second water. We adjust the values within the tables using the method of Lagrange multipliers to minimize the adjustments and produce self-consistent Gibbs free energy surfaces for the neutral clusters and the charged clusters. With the self-consistent Gibbs free energy surfaces, we calculate size distributions of neutral and charged clusters for a variety of atmospheric conditions. Depending on the conditions, nucleation can be dominated by growth along the neutral channel or growth along the ion channel followed by ion-ion recombination.
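The Lagrange-multiplier adjustment described here can be illustrated on a toy case with a single path-independence constraint: when two assembly paths must give the same total Gibbs free energy, the minimum-norm correction simply spreads the residual evenly over all steps. This is a sketch of the idea only, not the paper's full multi-constraint scheme:

```python
def make_consistent(path_a, path_b):
    """Minimally adjust stepwise free energies of two assembly paths
    so both paths give the same total (least-squares with one
    Lagrange multiplier).

    For the single linear constraint sum(path_a) == sum(path_b), the
    minimum-norm correction shifts every step by the same amount
    residual / n, with opposite signs on the two paths.
    """
    n = len(path_a) + len(path_b)
    residual = sum(path_a) - sum(path_b)
    shift = residual / n
    return ([g - shift for g in path_a],
            [g + shift for g in path_b])
```

With many clusters the same least-squares problem has one constraint per closed assembly cycle, but each constraint is still linear in the tabulated free energies, so the full adjustment remains a linearly constrained quadratic minimization.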
Model based management of a reservoir system
Energy Technology Data Exchange (ETDEWEB)
Scharaw, B.; Westerhoff, T. [Fraunhofer IITB, Ilmenau (Germany). Anwendungszentrum Systemtechnik; Puta, H.; Wernstedt, J. [Technische Univ. Ilmenau (Germany)
2000-07-01
The main goals of reservoir management systems are to prevent flood damage, to capture raw water and to keep all quality parameters within their limits, in addition to controlling the water flows. In consideration of these goals, a system model of the complete reservoir system Ohra-Schmalwasser-Tambach-Dietharz was developed. This model has been used to develop optimized strategies for minimizing raw water production cost, for maximizing electrical energy production and for covering flood situations as well. Therefore a proper forecast of the inflow to the reservoir from the catchment areas (especially flooding rivers) and of the biological processes in the reservoir is important. The forecast model for the inflow to the reservoir is based on the catchment-area model of Lorent and Gevers. It uses area precipitation, water supply from the snow cover, evapotranspiration and soil wetness data to calculate the flow in rivers. The other aim of the project is to ensure the raw water quality using quality models as well. Then a quality-driven raw water supply will be possible. (orig.)
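At its core, such a reservoir system model is a water balance driven by the forecast inflows. A deliberately simplified sketch (single reservoir, constant release, no quality model; the actual Ohra system model is far richer):

```python
def simulate_reservoir(storage, inflows, release, capacity):
    """Step a reservoir water balance over a series of inflow values.

    storage, capacity : volumes (e.g. 10^3 m^3); release is the
    constant controlled outflow per step. Water above capacity spills
    (the flood-relief case); storage cannot go negative.
    Returns (storage trajectory, total spill). Illustrative only.
    """
    trajectory, spill = [], 0.0
    for q_in in inflows:
        storage = max(storage + q_in - release, 0.0)
        if storage > capacity:          # flood relief: spillway opens
            spill += storage - capacity
            storage = capacity
        trajectory.append(storage)
    return trajectory, spill
```

An optimized management strategy of the kind described would replace the constant release with a decision variable chosen per step to trade off production cost, energy output and spill risk against the forecast inflow.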
System of systems dependability – Theoretical models and applications examples
International Nuclear Information System (INIS)
Bukowski, L.
2016-01-01
The aim of this article is to generalise the concept of "dependability" in a way that can be applied to all types of systems, especially the system of systems (SoS), operating under both normal and abnormal work conditions. In order to quantitatively assess dependability we applied a service-continuity-oriented approach. This approach is based on the methodology of service engineering and is closely related to the idea of the resilient enterprise as well as to the concept of disruption-tolerant operation. On this basis a framework for the evaluation of SoS dependability has been developed in both a static and a dynamic approach. The static model is created as a fuzzy-logic-oriented advisory expert system and can be particularly useful at the design stage of SoS. The dynamic model is based on the risk-oriented approach, and can be useful both at the design stage and for the management of SoS. The integrated model of dependability can also form the basis for a new definition of dependability engineering, namely as a superior discipline to reliability engineering, safety engineering, security engineering, resilience engineering and risk engineering. - Highlights: • A framework for evaluation of system of systems dependability is presented. • The model is based on the service continuity concept and consists of two parts. • The static part can be created as a fuzzy logic-oriented advisory expert system. • The dynamic, risk oriented part, is related to the concept of throughput chain. • A new definition of dependability engineering is proposed.
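A fuzzy-logic advisory system of the kind described for the static model can be sketched with triangular membership functions and Zadeh's max for OR. The rule base, variable names and 0..1 scales below are hypothetical, chosen only to show the mechanics:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dependability_risk(service_continuity, recovery_speed):
    """Toy advisory rule (hypothetical): IF continuity is low OR
    recovery is slow THEN dependability risk is high.

    Inputs are on a 0..1 scale; returns the rule's firing strength,
    using max for fuzzy OR (Zadeh logic).
    """
    low_continuity = tri(service_continuity, -0.1, 0.0, 0.5)
    slow_recovery = tri(recovery_speed, -0.1, 0.0, 0.5)
    return max(low_continuity, slow_recovery)
```

A full advisory expert system would aggregate many such rules and defuzzify the result into a recommendation; this fragment only shows how one rule fires.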
Cotangent Models for Integrable Systems
Kiesenhofer, Anna; Miranda, Eva
2017-03-01
We associate cotangent models to a neighbourhood of a Liouville torus in symplectic and Poisson manifolds focusing on b-Poisson/ b-symplectic manifolds. The semilocal equivalence with such models uses the corresponding action-angle theorems in these settings: the theorem of Liouville-Mineur-Arnold for symplectic manifolds and an action-angle theorem for regular Liouville tori in Poisson manifolds (Laurent-Gengoux et al., Int Math Res Notices IMRN 8: 1839-1869, 2011). Our models comprise regular Liouville tori of Poisson manifolds but also consider the Liouville tori on the singular locus of a b-Poisson manifold. For this latter class of Poisson structures we define a twisted cotangent model. The equivalence with this twisted cotangent model is given by an action-angle theorem recently proved by the authors and Scott (Math. Pures Appl. (9) 105(1):66-85, 2016). This viewpoint of cotangent models provides a new machinery to construct examples of integrable systems, which are especially valuable in the b-symplectic case where not many sources of examples are known. At the end of the paper we introduce non-degenerate singularities as lifted cotangent models on b-symplectic manifolds and discuss some generalizations of these models to general Poisson manifolds.