Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models with human interaction restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so verification of the consistency of these diagrams is needed in order to identify requirement errors at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams derived from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Chip Multithreaded Consistency Model
Institute of Scientific and Technical Information of China (English)
Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang
2008-01-01
Multithreading is the developing trend of high-performance processors, and the memory consistency model is essential to the correctness, performance and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restrictions that chip multithreaded consistency imposes on memory event ordering are presented and formalized. Using the critical-cycle concept developed by Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the correct-execution criterion of the sequential consistency model. The chip multithreaded consistency model provides a way of achieving higher performance than the sequential consistency model while ensuring software compatibility: the execution result on a multithreaded processor is the same as on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed. The Godson-2 SMT processor supports the chip multithreaded consistency model correctly through an exception scheme based on the sequential memory-access queue of each thread.
Consistency in Distributed Systems
Kemme, Bettina; Ramalingam, Ganesan; Schiper, André; Shapiro, Marc; Vaswani, Kapil
2013-01-01
International audience; In distributed systems, there exists a fundamental trade-off between data consistency, availability, and the ability to tolerate failures. This trade-off has significant implications on the design of the entire distributed computing infrastructure such as storage systems, compilers and runtimes, application development frameworks and programming languages. Unfortunately, it also has significant, and poorly understood, implications for the designers and developers of en...
Self-consistent models of quasi-relaxed rotating stellar systems
Varri, A L
2012-01-01
Two new families of self-consistent axisymmetric truncated equilibrium models for the description of quasi-relaxed rotating stellar systems are presented. The first extends the spherical King models to the case of solid-body rotation. The second is characterized by differential rotation, designed to be rigid in the central regions and to vanish in the outer parts, where the energy truncation becomes effective. The models are constructed by solving the nonlinear Poisson equation for the self-consistent mean-field potential. For rigidly rotating configurations, the solutions are obtained by an asymptotic expansion on the rotation strength parameter. The differentially rotating models are constructed by means of an iterative approach based on a Legendre series expansion of the density and the potential. The two classes of models exhibit complementary properties. The rigidly rotating configurations are flattened toward the equatorial plane, with deviations from spherical symmetry that increase with the distance f...
Fox-Rabinovitz, Michael S.; Lindzen, Richard S.
1993-01-01
Simple numerical experiments are performed in order to determine the effects of inconsistent combinations of horizontal and vertical resolution in both atmospheric models and observing systems. In both cases, we find that inconsistent spatial resolution is associated with enhanced noise generation. A rather fine horizontal resolution in a satellite-data observing system seems to be excessive when combined with the usually available, relatively coarse vertical resolution. Using horizontal filters of different strengths, adjusted so as to render the effective horizontal resolution more consistent with the vertical resolution of the observing system, may improve the analysis accuracy. When the vertical resolution of a satellite-data observing system is increased with better vertically resolved data, the results differ in that little or no horizontal filtering is needed to make the spatial resolution of the system consistent. The obtained experimental estimates of consistent vertical and effective horizontal resolution are in general agreement with consistent resolution estimates previously derived theoretically by the authors.
Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong
2016-05-01
The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on the PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual and room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a calculating method for PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than the PSF consistency by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much slighter than that of the auxiliary lens group. A large form parameter of the CPM will introduce large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.
The self-consistent field model for Fermi systems with account of three-body interactions
Directory of Open Access Journals (Sweden)
Yu.M. Poluektov
2015-12-01
Full Text Available On the basis of a microscopic model of self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is built and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution into the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of contribution from three-body forces. The effective mass and pressure are numerically calculated for the potential of "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, the interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, and the attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered and the three-body repulsive interaction is shown to extend the region of stability of the system with the interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.
Self-consistent triaxial models
Sanders, Jason L
2015-01-01
We present self-consistent triaxial stellar systems that have analytic distribution functions (DFs) expressed in terms of the actions. These provide triaxial density profiles with cores or cusps at the centre. They are the first self-consistent triaxial models with analytic DFs suitable for modelling giant ellipticals and dark haloes. Specifically, we study triaxial models that reproduce the Hernquist profile from Williams & Evans (2015), as well as flattened isochrones of the form proposed by Binney (2014). We explore the kinematics and orbital structure of these models in some detail. The models typically become more radially anisotropic on moving outwards, have velocity ellipsoids aligned in Cartesian coordinates in the centre and aligned in spherical polar coordinates in the outer parts. In projection, the ellipticity of the isophotes and the position angle of the major axis of our models generally changes with radius. So, a natural application is to elliptical galaxies that exhibit isophote twisting....
Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems
Garg, Vikram V
2014-09-27
Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint inconsistent and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.
Maintaining consistency in distributed systems
Birman, Kenneth P.
1991-01-01
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operation are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
Connolly, Mark; He, Xing; Gonzalez, Nestor; Vespa, Paul; DiStefano, Joe; Hu, Xiao
2014-03-01
Due to the inaccessibility of the cranial vault, it is difficult to study cerebral blood flow dynamics directly. A mathematical model can be useful to study these dynamics. The model presented here is a novel combination of a one-dimensional fluid flow model representing the major vessels of the circle of Willis (CoW), with six individually parameterized auto-regulatory models of the distal vascular beds. This model has the unique ability to simulate high temporal resolution flow and velocity waveforms, amenable to pulse-waveform analysis, as well as sophisticated phenomena such as auto-regulation. Previous work with human patients has shown that vasodilation induced by CO2 inhalation causes 12 consistent pulse-waveform changes as measured by the morphological clustering and analysis of intracranial pressure algorithm. To validate this model, we simulated vasodilation and successfully reproduced 9 out of the 12 pulse-waveform changes. A subsequent sensitivity analysis found that these 12 pulse-waveform changes were most affected by the parameters associated with the shape of the smooth muscle tension response and vessel elasticity, providing insight into the physiological mechanisms responsible for observed changes in the pulse-waveform shape.
Application of Computer Model to Estimate the Consistency of Air Conditioning Systems Engineering
Directory of Open Access Journals (Sweden)
Amal El-Berry
2013-04-01
Full Text Available Reliability engineering is used to predict the performance and to optimize the design and maintenance of air conditioning systems. A number of failures are associated with such systems: failure to turn on, loss of cooling capacity, reduced output temperatures, loss of cool air supply, and complete loss of air flow, mainly due to problems with one or more components of an air conditioner or air conditioning system. To maintain the system, forecasting of failure rates is very important. The focus of this paper is the reliability of air conditioning systems. The most commonly applied statistical distributions in reliability settings are the standard (two-parameter) Weibull and Gamma distributions. Once the distribution parameters have been estimated, reliability estimations and predictions are used for evaluation. To ensure good operating conditions in a building, the reliability of the air conditioning system that supplies conditioned air to the several company departments is checked. This air conditioning system is divided into two parts, namely the main chilled-water system and the ten air handling systems that serve the ten departments. In a chilled-water system the air conditioner cools water down to 40-45°F (4-7°C). The chilled water is distributed throughout the building in a piping system and connected to air-conditioning cooling units wherever needed. Data analysis was performed with the support of computer-aided reliability software; applying the Weibull and Gamma distributions indicates that the reliability of the systems equals 86.012% and 77.7%, respectively. A comparison between these two important families of distribution functions, namely the Weibull and Gamma families, is studied. It is found that the Weibull method performed well for decision making.
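The two-parameter Weibull reliability function used in the study above, R(t) = exp(-(t/η)^β), is straightforward to sketch. The shape and scale values below are hypothetical placeholders for illustration, not the parameters fitted in the paper:

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability that a unit survives beyond time t under a
    two-parameter Weibull model with shape beta and scale eta."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical parameters for illustration only (not from the paper).
beta, eta = 1.8, 12000.0   # shape, scale (operating hours)
for hours in (1000, 5000, 10000):
    print(f"R({hours} h) = {weibull_reliability(hours, beta, eta):.3f}")
```

Reliability is 1 at t = 0 and decreases monotonically with operating time; a shape parameter above 1 corresponds to wear-out failures with an increasing hazard rate.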
Yuan, Yao-Ming; Jiang, Rui; Hu, Mao-Bin; Wu, Qing-Song; Wang, Ruili
2009-06-01
In this paper, we have investigated traffic flow characteristics in a traffic system consisting of a mixture of adaptive cruise control (ACC) vehicles and manually controlled (manual) vehicles, using a hybrid modelling approach. In the hybrid approach, (i) the manual vehicles are described by a cellular automaton (CA) model, which can reproduce different traffic states (i.e., free flow, synchronised flow, and jam) as well as probabilistic traffic breakdown phenomena; (ii) the ACC vehicles are simulated by a car-following model, which removes artificial velocity fluctuations due to intrinsic randomisation in the CA model. We have studied the probability of traffic breakdown from free flow to congested flow and the probability of the phase transition from synchronised flow to jam in the mixed traffic system. The results are compared with those obtained when both the ACC vehicles and the manual vehicles are simulated by CA models, and the qualitative and quantitative differences are indicated.
A Framework of Memory Consistency Models
Institute of Scientific and Technical Information of China (English)
Wei-Wu Hu; Weisong Shi; et al.
1998-01-01
Previous descriptions of memory consistency models in shared-memory multiprocessor systems are mainly expressed as constraints on the memory access event ordering and hence are hardware-centric. This paper presents a framework of memory consistency models which describes the memory consistency model on the behavior level. Based on the understanding that the behavior of an execution is determined by the execution order of conflicting accesses, a memory consistency model is defined as an interprocessor synchronization mechanism which orders the execution of operations from different processors. The synchronization order of an execution under a given consistency model is also defined. The synchronization order, together with the program order, determines the behavior of an execution. This paper also presents criteria for correct programs and correct implementations of consistency models. Regarding an implementation of a consistency model as certain memory event ordering constraints, this paper provides a method to prove the correctness of consistency model implementations, and the correctness of the lock-based cache coherence protocol is proved with this method.
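The ordering constraints discussed above can be made concrete with a classic litmus test. The sketch below is my own illustration (not the paper's framework): it enumerates every sequentially consistent interleaving of the store-buffering test, where thread 1 runs `x=1; r1=y` and thread 2 runs `y=1; r2=x`, and shows that the outcome r1=r2=0 is forbidden under sequential consistency because each load follows its own thread's store in program order:

```python
from itertools import permutations

# Each op is (thread, kind, var): a store of 1, or a load into a register.
T1 = [("T1", "store", "x"), ("T1", "load", "y")]   # x = 1; r1 = y
T2 = [("T2", "store", "y"), ("T2", "load", "x")]   # y = 1; r2 = x

def sc_outcomes():
    """All (r1, r2) results reachable by interleaving T1 and T2
    while preserving each thread's program order (sequential consistency)."""
    outcomes = set()
    # Choose which 2 of the 4 schedule slots belong to T1 (in order);
    # program order then fixes the placement of every operation.
    for slots in permutations(range(4), 2):
        if slots[0] > slots[1]:
            continue
        schedule = [None] * 4
        schedule[slots[0]], schedule[slots[1]] = T1
        rest = [i for i in range(4) if schedule[i] is None]
        schedule[rest[0]], schedule[rest[1]] = T2
        mem, regs = {"x": 0, "y": 0}, {}
        for thread, kind, var in schedule:
            if kind == "store":
                mem[var] = 1
            else:
                regs[thread] = mem[var]
        outcomes.add((regs["T1"], regs["T2"]))
    return outcomes

print(sorted(sc_outcomes()))  # (0, 0) never appears under SC
```

A weaker model with store buffers (e.g. x86-TSO) does permit the (0, 0) outcome, which is exactly the kind of distinction a behavior-level framework must capture.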
Entropy-based consistent model driven architecture
Niepostyn, Stanisław Jerzy
2016-09-01
A description of software architecture is a plan of the IT system construction; therefore any architecture gaps affect the overall success of the entire project. Most definitions describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent, and software architecture completeness is also often described in an ambiguous way. As a result, most methods of building IT systems contain many gaps and ambiguities, which are obstacles to automating software construction. In this article the consistency and completeness of a software architecture are defined mathematically, based on the calculation of the entropy of the architecture description. Following this approach, we also propose a method for automatic verification of the consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of ongoing modelling work for the start of IT system construction, and even assess objectively whether the designed software architecture of the IT system can be implemented at all. The overall benefit of this approach is that it facilitates the preparation of a complete and consistent software architecture more effectively, and enables assessment and monitoring of the ongoing modelling status. We demonstrate this with a few industry examples of IT system designs.
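The abstract does not specify how the FBS entropy metric is computed, so the following is only a generic Shannon-entropy sketch under my own assumptions: model elements are tagged as Functionality, Behaviour, or Structure, and a balanced distribution of views yields higher entropy than one dominated by a single view. The tagging and data are hypothetical:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of a label distribution; a generic
    stand-in for an entropy measure over an architecture description."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical tagging of model elements as F/B/S views (not the paper's data).
balanced = ["F", "B", "S"] * 4          # all three views equally covered
skewed   = ["F"] * 10 + ["B", "S"]      # behaviour/structure barely modelled

print(f"balanced coverage: {shannon_entropy(balanced):.3f} bits")
print(f"skewed coverage:   {shannon_entropy(skewed):.3f} bits")
```

Under this illustrative reading, a low entropy over the three view types would signal that some views remain under-modelled, i.e. the architecture description is incomplete.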
Consistent ranking of volatility models
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2006-01-01
We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can result in an inferior model being chosen as "best" with a probability that converges to one as the sample size increases. We document the practical relevance of this problem in an empirical application and by simulation experiments. Our results provide an additional argument for using the realized variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable.
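The proxy problem above can be illustrated with a minimal simulation that is not the authors' setup: under absolute-error loss (a loss function for which proxy-based rankings are not robust), a downward-biased variance forecast can appear better than the perfect forecast once the noisy squared return replaces the latent conditional variance. All data here are synthetic:

```python
import random

random.seed(7)
n = 5000

# Simulate a latent conditional variance and returns r_t = sigma_t * z_t.
true_var, returns = [], []
for _ in range(n):
    s2 = random.uniform(0.5, 2.0)              # latent conditional variance
    true_var.append(s2)
    returns.append((s2 ** 0.5) * random.gauss(0.0, 1.0))

forecast_a = true_var[:]                        # the perfect forecast
forecast_b = [0.6 * s2 for s2 in true_var]      # a downward-biased competitor

def mae(forecasts, targets):
    """Mean absolute error of forecasts against an evaluation target."""
    return sum(abs(f - t) for f, t in zip(forecasts, targets)) / len(targets)

# Evaluation against the (normally unobservable) true conditional variance:
print("vs true variance :", mae(forecast_a, true_var), mae(forecast_b, true_var))
# Evaluation against the squared-return proxy r_t^2, which is noisy and
# pulls absolute-error rankings toward the proxy's median, not its mean:
proxy = [r * r for r in returns]
print("vs squared return:", mae(forecast_a, proxy), mae(forecast_b, proxy))
```

The perfect forecast wins by construction against the true variance, yet the biased forecast tends to score better against the squared-return proxy, which is precisely the ranking inversion the paper warns about.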
Zhao, Jianshi; Cai, Ximing; Wang, Zhongjing
2013-07-15
Water allocation can be undertaken through administered systems (AS), market-based systems (MS), or a combination of the two. The debate on the performance of the two systems has lasted for decades but still calls for attention in both research and practice. This paper compares water users' behavior under AS and MS through a consistent agent-based modeling framework for water allocation analysis that incorporates variables particular to both MS (e.g., water trade and trading prices) and AS (water use violations and penalties/subsidies). Analogous to the economic theory of water markets under MS, the theory of rational violation justifies the exchange of entitled water under AS through the use of cross-subsidies. Under water stress conditions, a unique water allocation equilibrium can be achieved by following a simple bargaining rule that does not depend upon initial market prices under MS, or initial economic incentives under AS. The modeling analysis shows that the behavior of water users (agents) depends on transaction, or administrative, costs, as well as their autonomy. Reducing transaction costs under MS or administrative costs under AS will mitigate the effect that equity constraints (originating with primary water allocation) have on the system's total net economic benefits. Moreover, hydrologic uncertainty is shown to increase market prices under MS and penalties/subsidies under AS and, in most cases, also increases transaction, or administrative, costs.
Energy Technology Data Exchange (ETDEWEB)
Johnson, B. C.; Melosh, H. J. [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Lisse, C. M. [JHU-APL, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Chen, C. H. [STScI, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Wyatt, M. C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Thebault, P. [LESIA, Observatoire de Paris, F-92195 Meudon Principal Cedex (France); Henning, W. G. [NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Gaidos, E. [Department of Geology and Geophysics, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Elkins-Tanton, L. T. [Department of Terrestrial Magnetism, Carnegie Institution for Science, Washington, DC 20015 (United States); Bridges, J. C. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); Morlok, A., E-mail: johns477@purdue.edu [Department of Physical Sciences, Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom)
2012-12-10
Spectral modeling of the large infrared excess in the Spitzer IRS spectra of HD 172555 suggests that there is more than 10^19 kg of submicron dust in the system. Using physical arguments and constraints from observations, we rule out the possibility of the infrared excess being created by a magma ocean planet or a circumplanetary disk or torus. We show that the infrared excess is consistent with a circumstellar debris disk or torus, located at ~6 AU, that was created by a planetary scale hypervelocity impact. We find that radiation pressure should remove submicron dust from the debris disk in less than one year. However, the system's mid-infrared photometric flux, dominated by submicron grains, has been stable within 4% over the last 27 years, from the Infrared Astronomical Satellite (1983) to WISE (2010). Our new spectral modeling work and calculations of the radiation pressure on fine dust in HD 172555 provide a self-consistent explanation for this apparent contradiction. We also explore the unconfirmed claim that ~10^47 molecules of SiO vapor are needed to explain an emission feature at ~8 μm in the Spitzer IRS spectrum of HD 172555. We find that unless there are ~10^48 atoms or 0.05 M⊕ of atomic Si and O vapor in the system, SiO vapor should be destroyed by photo-dissociation in less than 0.2 years. We argue that a second plausible explanation for the ~8 μm feature can be emission from solid SiO, which naturally occurs in submicron silicate "smokes" created by quickly condensing vaporized silicate.
Gritsenko, O. V.; Rubio, A.; Balbás, L. C.; Alonso, J. A.
1993-03-01
The model Coulomb pair-correlation functions proposed several years ago by Gritsenko, Bagaturyants, Kazansky, and Zhidomirov are incorporated into the self-consistent local-density approximation (LDA) scheme for electronic systems. Different correlation functions satisfying well-established local boundary conditions and integral conditions have been tested by performing LDA calculations for closed-shell atoms. Those correlation functions contain a single parameter which can be optimized by fitting the atomic correlation energies to empirical data. In this way, a single (universal) value of the parameter is found to give a very good fit for all the atoms studied. The results provide a substantial improvement of calculated correlation energies as compared to the usual LDA functionals and the scheme should be useful for molecular and cluster calculations.
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
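A core example of the thermodynamic constraints mentioned above is the Wegscheider (detailed balance) condition: around any closed reaction cycle at equilibrium, the product of forward rate constants must equal the product of reverse rate constants. The checker below is a minimal sketch with hypothetical rate values, not the EGF/ERK model or the TCMC software:

```python
import math

def wegscheider_gap(k_forward, k_reverse):
    """Log-ratio of forward to reverse rate-constant products around a
    reaction cycle; zero means the cycle satisfies detailed balance."""
    log_f = sum(math.log(k) for k in k_forward)
    log_r = sum(math.log(k) for k in k_reverse)
    return log_f - log_r

# Hypothetical 3-reaction cycle A <-> B <-> C <-> A (illustrative values).
# Forward product 2*5*0.1 = 1.0 equals reverse product 4*0.5*0.5 = 1.0.
consistent   = wegscheider_gap([2.0, 5.0, 0.1], [4.0, 0.5, 0.5])
# Perturbing one reverse constant breaks the cycle condition.
inconsistent = wegscheider_gap([2.0, 5.0, 0.1], [4.0, 0.5, 5.0])

print(f"consistent cycle gap:   {consistent:.6f}")
print(f"inconsistent cycle gap: {inconsistent:.6f}")
```

A calibration method like TCMC must drive such gaps to zero (as equality constraints in the optimization) so that the fitted parameters remain physically realizable.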
Consistent Design of Dependable Control Systems
DEFF Research Database (Denmark)
Blanke, M.
1996-01-01
Design of fault handling in control systems is discussed, and a method for consistent design is presented.
Zervas, P. L.; Sarimveis, H.; Palyvos, J. A.; Markatos, N. C. G.
Hybrid renewable energy systems are expected to become competitive with conventional power generation systems in the near future, and thus optimization of their operation is of particular interest. In this work, a hybrid power generation system is studied, consisting of the following main components: a photovoltaic (PV) array, an electrolyser, metal hydride tanks, and proton exchange membrane fuel cells (PEMFC). The key advantage of the hybrid system over stand-alone photovoltaic systems is that it can efficiently store solar energy by transforming it into hydrogen, the fuel supplied to the fuel cell. However, decision making regarding the operation of this system is a rather complicated task. A complete framework, based on a rolling time horizon philosophy, is proposed for managing such systems.
Energy Technology Data Exchange (ETDEWEB)
Barik, N.; Jena, S.N.
1982-11-01
We show here that the relativistic consistency of an effective power-law potential V(r) = A r^ν + V_0 (with A, ν > 0), used successfully to describe the heavy-meson spectra, in generating Dirac bound states of QQ-bar and Qq-bar systems implies, and at the same time is implied by, an equally mixed vector-scalar Lorentz structure, which was observed phenomenologically in the fine-hyperfine splittings of meson spectra.
Consistent thermodynamic properties of lipids systems
DEFF Research Database (Denmark)
Cunico, Larissa; Ceriani, Roberta; Sarup, Bent
Physical and thermodynamic properties of pure components and their mixtures are the basic requirement for process design, simulation, and optimization. In the case of lipids, our previous works [1-3] have indicated a lack of experimental data for pure components and also for their mixtures … different pressures, with azeotrope behavior observed. Available thermodynamic consistency tests for TPx data were applied before performing parameter regressions for the Wilson, NRTL, UNIQUAC and original UNIFAC models. The relevance of enlarging the experimental databank of lipids systems data in order to improve…
Modeling and Testing Legacy Data Consistency Requirements
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard
2003-01-01
An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers…
Sticky continuous processes have consistent price systems
DEFF Research Database (Denmark)
Bender, Christian; Pakkanen, Mikko; Sayit, Hasanjan
Under proportional transaction costs, a price process is said to have a consistent price system, if there is a semimartingale with an equivalent martingale measure that evolves within the bid-ask spread. We show that a continuous, multi-asset price process has a consistent price system, under arb...
Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen
2016-07-01
The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of the ensemble-based statistical consistency testing is to use a qualitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our
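The grid-point screening the abstract describes (standard scores against an ensemble, then the fraction of points above a threshold) can be sketched in a few lines. The data, grid size, and threshold below are synthetic and illustrative; this is not the CESM-POP tooling itself.

```python
import numpy as np

def consistency_fail_fraction(ensemble, new_run, threshold=3.0):
    """Fraction of grid points whose standard score (z-score of the new
    run against the ensemble mean/std) exceeds `threshold`."""
    mean = ensemble.mean(axis=0)
    std = ensemble.std(axis=0, ddof=1)
    std = np.where(std == 0, np.finfo(float).eps, std)  # guard flat grid points
    z = np.abs(new_run - mean) / std
    return float(np.mean(z > threshold))

rng = np.random.default_rng(0)
# 30-member synthetic "ensemble" on a 40x60 grid
ensemble = 15.0 + 0.5 * rng.standard_normal((30, 40, 60))
consistent = ensemble.mean(axis=0) + 0.1 * rng.standard_normal((40, 60))
biased = ensemble.mean(axis=0) + 3.0  # systematic shift of ~6 ensemble std

print(consistency_fail_fraction(ensemble, consistent))  # near 0.0
print(consistency_fail_fraction(ensemble, biased))      # near 1.0
```

As the abstract notes, the verdict depends on ensemble size and composition: too few members understate the natural variability and flag admissible runs as inconsistent.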
CONSISTENT AGGREGATION IN FOOD DEMAND SYSTEMS
Levedahl, J. William; Reed, Albert J.; Clark, J. Stephen
2002-01-01
Two aggregation schemes for food demand systems are tested for consistency with the Generalized Composite Commodity Theorem (GCCT). One scheme is based on the standard CES classification of food expenditures. The second scheme is based on the Food Guide Pyramid. Evidence is found that both schemes are consistent with the GCCT.
Consistent quadrupole-octupole collective model
Dobrowolski, A.; Mazurek, K.; Góźdź, A.
2016-11-01
Within this work we present a consistent approach to quadrupole-octupole collective vibrations coupled with the rotational motion. A realistic collective Hamiltonian with variable mass-parameter tensor and potential obtained through the macroscopic-microscopic Strutinsky-like method with particle-number-projected BCS (Bardeen-Cooper-Schrieffer) approach in full vibrational and rotational, nine-dimensional collective space is diagonalized in the basis of projected harmonic oscillator eigensolutions. This orthogonal basis of zero-, one-, two-, and three-phonon oscillator-like functions in the vibrational part, coupled with the corresponding Wigner function, is, in addition, symmetrized with respect to the so-called symmetrization group, appropriate to the collective space of the model. In the present model it is the D4 group acting in the body-fixed frame. This symmetrization procedure is applied in order to provide the uniqueness of the Hamiltonian eigensolutions with respect to the laboratory coordinate system. The symmetrization is obtained using the projection onto the irreducible representation technique. The model generates the quadrupole ground-state spectrum as well as the lowest negative-parity spectrum in the 156Gd nucleus. The interband and intraband B(E1) and B(E2) reduced transition probabilities are also calculated within those bands and compared with the recent experimental results for this nucleus. Such a collective approach is helpful in searching for the fingerprints of the possible high-rank symmetries (e.g., octahedral and tetrahedral) in nuclear collective bands.
A self-consistent Maltsev pulse model
Buneman, O.
1985-04-01
A self-consistent model for an electron pulse propagating through a plasma is presented. In this model, the charge imbalance between plasma ions, plasma electrons and pulse electrons creates the travelling potential well in which the pulse electrons are trapped.
Self-Consistent Asset Pricing Models
Malevergne, Y
2006-01-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alpha's and beta's of the factor model are unobservable. Self-consistency leads to renormalized beta's with zero effective alpha's, which are observable with standard OLS regressions. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value α_i at the origin between an asset i's return and the proxy's return. Self-consistency also introduces "orthogonality" and "normality" conditions linking the beta's, alpha's (as well as the residuals) and the weights of the proxy por...
On the existence of consistent price systems
DEFF Research Database (Denmark)
Bayraktar, Erhan; Pakkanen, Mikko S.; Sayit, Hasanjan
2014-01-01
We formulate a sufficient condition for the existence of a consistent price system (CPS), which is weaker than the conditional full support condition (CFS). We use the new condition to show the existence of CPSs for certain processes that fail to have the CFS property. In particular this condition...
Self-consistent model of fermions
Yershov, V N
2002-01-01
We discuss a composite model of fermions based on three-flavoured preons. We show that the opposite character of the Coulomb and strong interactions between these preons leads to the formation of complex structures reproducing three generations of quarks and leptons with all their quantum numbers and masses. The model is self-consistent (it doesn't use input parameters). Nevertheless, the masses of the generated structures match the experimental values.
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...... velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave...... height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...
Developing consistent pronunciation models for phonemic variants
CSIR Research Space (South Africa)
Davel, M
2006-09-01
Full Text Available from a lexicon containing variants. In this paper we address both these issues by creating ‘pseudo-phonemes’ associated with sets of ‘generation restriction rules’ to model those pronunciations that are consistently realised as two or more...
Are there consistent models giving observable NSI ?
Martinez, Enrique Fernandez
2013-01-01
While the existing direct bounds on neutrino NSI are rather weak, order 10⁻¹ for propagation and 10⁻² for production and detection, the close connection between these interactions and new NSI affecting the better-constrained charged lepton sector through gauge invariance makes these bounds hard to saturate in realistic models. Indeed, Standard Model extensions leading to neutrino NSI typically imply constraints at the 10⁻³ level. The question of whether consistent models leading to observable neutrino NSI exist naturally arises, and was discussed in a dedicated session at NUFACT 11. Here we summarize that discussion.
Banerjee, S.; Hassenklover, E.; Kleijn, J.M.; Cohen Stuart, M.A.; Leermakers, F.A.M.
2013-01-01
This paper presents experimental and modeling results on water–CO2 interfacial tension (IFT) together with wettability studies of water on both hydrophilic and hydrophobic surfaces immersed in CO2. CO2–water interfacial tension (IFT) measurements showed that the IFT decreased with increasing pressur
Consistence beats causality in recommender systems
Zhu, Xuzhen; Hu, Zheng; Zhang, Ping; Zhou, Tao
2015-01-01
The explosive growth of information challenges people's capability to find items fitting their own interests. Recommender systems provide an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. Recommendation algorithms usually embody the causality from what has been collected to what should be recommended. In this article, we argue that in many cases a user's interests are stable, and thus the previous and future preferences are highly consistent. The temporal order of collections then does not necessarily imply a causality relationship. We further propose a consistence-based algorithm that outperforms state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
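The abstract's point is that when preferences are stable, the temporal order of a user's collections can be ignored. As an illustration only (the paper's exact scoring function is not reproduced here), the sketch below scores uncollected items by their cosine co-occurrence similarity with everything the user has collected, treating past and future collections symmetrically; the matrix `R` and user indices are made up.

```python
import numpy as np

def recommend(R, user, k=2):
    """Score uncollected items by cosine co-occurrence similarity with the
    items the user has already collected, ignoring collection order."""
    co = R.T @ R                                   # item-item co-occurrence
    norms = np.sqrt(np.diag(co))
    sim = co / np.outer(norms, norms).clip(min=1e-12)
    np.fill_diagonal(sim, 0.0)
    scores = sim @ R[user]                         # similarity to collected set
    scores[R[user] > 0] = -np.inf                  # never re-recommend
    return np.argsort(-scores)[:k]

# Made-up binary user-item matrix: 4 users x 5 items
R = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1]])
print(recommend(R, user=0))  # items most consistent with user 0's collection
```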
Consistent Partial Least Squares Path Modeling
Dijkstra, Theo K.; Henseler, Jörg
2015-01-01
This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the
Consistency and Reconciliation Model In Regional Development Planning
Directory of Open Access Journals (Sweden)
Dina Suryawati
2016-10-01
Full Text Available The aim of this study was to identify the problems and determine the conceptual model of regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models that were successfully constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that is well-integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in the region involves a technocratic system, that is, both top-down and bottom-up participation. Both must be balanced, must not overlap, and must not dominate each other. Keywords: regional, development, planning, consistency, reconciliation
Consistent estimators in random censorship semiparametric models
Institute of Scientific and Technical Information of China (English)
王启华
1996-01-01
For the fixed-design regression model, when the responses Y_i are randomly censored on the right, estimators of the unknown parameter and the regression function g are defined from the censored observations in two cases: where the censoring distribution is known and where it is unknown, respectively. Moreover, sufficient conditions under which these estimators are strongly consistent and pth (p>2) mean consistent are also established.
Pressure-Balance Consistency in Magnetospheric Modelling
Institute of Scientific and Technical Information of China (English)
肖永登; 陈出新
2003-01-01
There have been many magnetic field models for geophysical and astrophysical bodies. These theoretical or empirical models represent reality very well in some cases, but in other cases they may be far from reality. We argue that these models will become more reasonable if they are modified by some coordinate transformations. In order to demonstrate the transformation, we use this method to resolve the "pressure-balance inconsistency" problem that occurs when plasma transports from the outer plasma sheet of the Earth into the inner plasma sheet.
Information, Consistent Estimation and Dynamic System Identification.
1976-11-01
the thesis. The rest of Chapter 4 is believed to be of theoretical interest and also of practical value, which is demonstrated in sections 6.1...in the mean of the identification procedures at a certain rate. The condition in (6.3) also involves the system’s coefficients and thus the selected
Monari, Antonio; Rivail, Jean-Louis; Assfeld, Xavier
2013-02-19
Molecular mechanics methods can efficiently compute the macroscopic properties of a large molecular system but cannot represent the electronic changes that occur during a chemical reaction or an electronic transition. Quantum mechanical methods can accurately simulate these processes, but they require considerably greater computational resources. Because electronic changes typically occur in a limited part of the system, such as the solute in a molecular solution or the substrate within the active site of enzymatic reactions, researchers can limit the quantum computation to this part of the system. Researchers take into account the influence of the surroundings by embedding this quantum computation into a calculation of the whole system described at the molecular mechanical level, a strategy known as the mixed quantum mechanics/molecular mechanics (QM/MM) approach. The accuracy of this embedding varies according to the types of interactions included, whether they are purely mechanical or classically electrostatic. This embedding can also introduce the induced polarization of the surroundings. The difficulty in QM/MM calculations comes from the splitting of the system into two parts, which requires severing the chemical bonds that link the quantum mechanical subsystem to the classical subsystem. Typically, researchers replace the quantoclassical atoms, those at the boundary between the subsystems, with a monovalent link atom. For example, researchers might add a hydrogen atom when a C-C bond is cut. This Account describes another approach, the Local Self Consistent Field (LSCF), which was developed in our laboratory. LSCF links the quantum mechanical portion of the molecule to the classical portion using a strictly localized bond orbital extracted from a small model molecule for each bond. In this scenario, the quantoclassical atom has an apparent nuclear charge of +1. To achieve correct bond lengths and force constants, we must take into account the inner shell of
Nishiyama, Katsura; Watanabe, Yasuhiro; Yoshida, Norio; Hirata, Fumio
2013-09-01
The Stokes shift magnitudes for coumarin 153 (C153) in 13 organic solvents with various polarities have been determined by means of steady-state spectroscopy and reference interaction-site model-self-consistent-field (RISM-SCF) theory. RISM-SCF calculations have reproduced experimental results fairly well, including individual solvent characteristics. It is empirically known that in some solvents, larger Stokes shift magnitudes are detected than anticipated on the basis of the solvent relative permittivity, ε_r. In practice, 1,4-dioxane (ε_r = 2.21) provides almost identical Stokes shift magnitudes to those of tetrahydrofuran (THF, ε_r = 7.58), for C153 and other typical organic solutes. In this work, RISM-SCF theory has been used to estimate the energetics of C153-solvent systems involved in the absorption and fluorescence processes. The Stokes shift magnitudes estimated by RISM-SCF theory are ∼5 kJ mol⁻¹ (400 cm⁻¹) less than those determined by spectroscopy; however, the results obtained are still adequate for dipole moment comparisons, in a qualitative sense. We have also calculated the solute-solvent site-site radial distributions by this theory. It is shown that solvation structures with respect to the C-O-C framework, which is common to dioxane and THF, in the near vicinity (∼0.4 nm) of specific solute sites can largely account for their similar Stokes shift magnitudes. In previous works, such solute-solvent short-range interactions have been explained in terms of the higher-order multipole moments of the solvents. Our present study shows that along with the short-range interactions that contribute most significantly to the energetics, long-range electrostatic interactions are also important. Such long-range interactions are effective up to 2 nm from the solute site, as in the case of a typical polar solvent, acetonitrile.
CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
Liu Jixue; Chen Xiru
2005-01-01
Consistency of the LS estimate of the simple linear EV model is studied. It is shown that, under some common assumptions of the model, weak and strong consistency of the estimate are equivalent, but this is not so for quadratic-mean consistency.
On multidimensional consistent systems of asymmetric quad-equations
Boll, Raphael
2012-01-01
Multidimensional consistency becomes more and more important in the theory of discrete integrable systems. Recently, we gave a classification of all 3D consistent 6-tuples of equations with the tetrahedron property, where several novel asymmetric systems have been found. In the present paper we discuss higher-dimensional consistency for the 3D consistent systems arising from this classification. In addition, we will give a classification of certain 4D consistent systems of quad-equations. The results of this paper allow for a proof of Bianchi permutability, among other applications.
Applications of self-consistent field theory in polymer systems
Institute of Scientific and Technical Information of China (English)
YANG; Yuliang; QIU; Feng; TANG; Ping; ZHANG; Hongdong
2006-01-01
The self-consistent field theory (SCFT) based upon a coarse-grained model is especially suitable for investigating the thermodynamic equilibrium morphology and the phase diagram of inhomogeneous polymer systems subjected to phase separation. The advantage of this model is that details of the chain, such as the architecture of the chain and the sequence of blocks, can be considered. We present here an overview of the SCFT approach and its applications in polymeric systems. In particular, we wish to focus on our group's achievements in applications of SCFT in fields such as: simulation of microphase separation morphologies of multiblock copolymers with a complex molecular architecture, interactions between brush-coated sheets in a polymer matrix, mixtures of flexible polymers and small molecular liquid crystals at the interface, shapes of polymer-chain-anchored fluid vesicles, self-assembled morphologies of block copolymers in dilute solution, and so on. Finally, further developments as well as prospective applications of SCFT are discussed.
An Extended Model Driven Framework for End-to-End Consistent Model Transformation
Directory of Open Access Journals (Sweden)
Mr. G. Ramesh
2016-08-01
Full Text Available Model Driven Development (MDD) results in quick transformation from models to corresponding systems. Forward engineering features of modelling tools can help in generating source code from models. To build a robust system it is important to have consistency checking in the design models and the same between the design model and the transformed implementation. Our framework, named Extensible Real Time Software Design Inconsistency Checker (XRTSDIC) and proposed in our previous papers, supports consistency checking in design models. This paper focuses on automatic model transformation. An algorithm and transformation rules for model transformation from UML class diagrams to ERD and SQL are proposed. Model transformation bestows many advantages, such as reducing the cost of development, improving quality, enhancing productivity and increasing customer satisfaction. The proposed framework has been enhanced to ensure that transformed implementations conform to their model counterparts, besides checking end-to-end consistency.
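A transformation rule of the kind described (UML class to SQL DDL) can be illustrated with a toy rule. The class description format and the UML-to-SQL type mapping below are hypothetical examples, not XRTSDIC's actual rules.

```python
def class_to_sql(name, attrs, pk="id"):
    """Toy transformation rule: one UML class -> one SQL table. The UML type
    to SQL type mapping below is a hypothetical example, not XRTSDIC's rules."""
    type_map = {"String": "VARCHAR(255)", "Integer": "INT", "Date": "DATE"}
    cols = [f"{pk} INT PRIMARY KEY"] + [f"{a} {type_map[t]}" for a, t in attrs]
    return f"CREATE TABLE {name} (\n  " + ",\n  ".join(cols) + "\n);"

ddl = class_to_sql("Customer", [("name", "String"), ("since", "Date")])
print(ddl)
```

End-to-end consistency checking in the paper's sense would then compare such generated artifacts back against the source model, rather than trusting the transformation blindly.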
Self consistent tight binding model for dissociable water
Lin, You; Wynveen, Aaron; Halley, J. W.; Curtiss, L. A.; Redfern, P. C.
2012-05-01
We report results of development of a self consistent tight binding model for water. The model explicitly describes the electrons of the liquid self consistently, allows dissociation of the water and permits fast direct dynamics molecular dynamics calculations of the fluid properties. It is parameterized by fitting to first principles calculations on water monomers, dimers, and trimers. We report calculated radial distribution functions of the bulk liquid, a phase diagram and structure of solvated protons within the model as well as ac conductivity of a system of 96 water molecules of which one is dissociated. Structural properties and the phase diagram are in good agreement with experiment and first principles calculations. The estimated DC conductivity of a computational sample containing a dissociated water molecule was an order of magnitude larger than that reported from experiment though the calculated ratio of proton to hydroxyl contributions to the conductivity is very close to the experimental value. The conductivity results suggest a Grotthuss-like mechanism for the proton component of the conductivity.
Logical consistency and sum-constrained linear models
van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.
2006-01-01
A topic that has received quite some attention in the seventies and eighties is logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a
Thermodynamically consistent Bayesian analysis of closed biochemical reaction systems
Directory of Open Access Journals (Sweden)
Goutsias John
2010-11-01
Full Text Available Abstract Background Estimating the rate constants of a biochemical reaction system with known stoichiometry from noisy time series measurements of molecular concentrations is an important step for building predictive models of cellular function. Inference techniques currently available in the literature may produce rate constant values that defy necessary constraints imposed by the fundamental laws of thermodynamics. As a result, these techniques may lead to biochemical reaction systems whose concentration dynamics could not possibly occur in nature. Therefore, development of a thermodynamically consistent approach for estimating the rate constants of a biochemical reaction system is highly desirable. Results We introduce a Bayesian analysis approach for computing thermodynamically consistent estimates of the rate constants of a closed biochemical reaction system with known stoichiometry given experimental data. Our method employs an appropriately designed prior probability density function that effectively integrates fundamental biophysical and thermodynamic knowledge into the inference problem. Moreover, it takes into account experimental strategies for collecting informative observations of molecular concentrations through perturbations. The proposed method employs a maximization-expectation-maximization algorithm that provides thermodynamically feasible estimates of the rate constant values and computes appropriate measures of estimation accuracy. We demonstrate various aspects of the proposed method on synthetic data obtained by simulating a subset of a well-known model of the EGF/ERK signaling pathway, and examine its robustness under conditions that violate key assumptions. Software, coded in MATLAB®, which implements all Bayesian analysis techniques discussed in this paper, is available free of charge at http://www.cis.jhu.edu/~goutsias/CSS%20lab/software.html. Conclusions Our approach provides an attractive statistical methodology for
Consistency of the tachyon warm inflationary universe models
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiao-Min; Zhu, Jian-Yang, E-mail: zhangxm@mail.bnu.edu.cn, E-mail: zhujy@bnu.edu.cn [Department of Physics, Beijing Normal University, Beijing 100875 (China)
2014-02-01
This study concerns the consistency of the tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied, when the dissipative coefficient Γ = Γ_0 and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of the tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ_0) is usually not a suitable assumption for a warm inflationary model.
The Self-Consistency Model of Subjective Confidence
Koriat, Asher
2012-01-01
How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen…
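The self-consistency idea, confidence as within-sample agreement among retrieved representations, can be illustrated with a minimal simulation. The sample size n = 7 and the Bernoulli model of representations are assumptions for illustration, not Koriat's experimental design.

```python
import random

def confidence_from_sample(p_a, n=7, rng=random):
    """Draw n internal 'representations', each favouring answer A with
    probability p_a; answer with the sample majority and report confidence
    as the within-sample agreement with that choice (self-consistency)."""
    a = sum(rng.random() < p_a for _ in range(n))
    choice = "A" if 2 * a >= n else "B"
    confidence = max(a, n - a) / n
    return choice, confidence

rng = random.Random(1)
# An item whose representations strongly favour one answer...
strong = [confidence_from_sample(0.9, rng=rng)[1] for _ in range(2000)]
# ...versus an ambiguous item with evenly split representations.
ambiguous = [confidence_from_sample(0.5, rng=rng)[1] for _ in range(2000)]
print(sum(strong) / 2000, sum(ambiguous) / 2000)  # high vs. middling confidence
```

Consistently sampled representations yield high confidence, while conflicted ones pull confidence toward chance, which is the mechanism the model uses to explain confidence-accuracy correlations.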
Dynamic Consistency between Value and Coordination Models - Research Issues.
Bodenstaff, L.; Wombacher, Andreas; Reichert, M.U.; Meersman, R.; Tari, Z.; Herrero, P.
Inter-organizational business cooperations can be described from different viewpoints, each fulfilling a specific purpose. Since all viewpoints describe the same system, they must not contradict each other, i.e., they must be consistent. Consistency can be checked based on common semantic concepts of the
Aggregated wind power plant models consisting of IEC wind turbine models
DEFF Research Database (Denmark)
Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela
2015-01-01
turbines, parameters and models to represent each individual wind turbine in detail makes it necessary to develop aggregated wind power plant models considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable...
Model Checking Data Consistency for Cache Coherence Protocols
Institute of Scientific and Technical Information of China (English)
Hong Pan; Hui-Min Lin; Yi Lv
2006-01-01
A method for automatic verification of cache coherence protocols is presented, in which cache coherence protocols are modeled as concurrent value-passing processes, and control and data consistency requirements are described as formulas in first-order μ-calculus. A model checker is employed to check if the protocol under investigation satisfies the required properties. Using this method a data consistency error has been revealed in a well-known cache coherence protocol. The error has been corrected, and the revised protocol has been shown free from data consistency errors for any data domain size, by appealing to the data independence technique.
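The paper's verification uses value-passing processes and first-order μ-calculus; as a much smaller stand-in, the sketch below does explicit-state reachability over a toy two-cache protocol and checks the data-consistency invariant "at most one cache holds the line Modified". The protocol rules are invented for illustration and are far simpler than any real coherence protocol.

```python
from collections import deque

# Each cache line state: I(nvalid), S(hared), M(odified). Toy rules invented
# for illustration: a read moves the reader to S and downgrades any M to S;
# a write moves the writer to M and invalidates every other cache.
def step(state):
    for i in range(len(state)):
        yield tuple("S" if j == i else ("S" if s == "M" else s)
                    for j, s in enumerate(state))            # cache i reads
        yield tuple("M" if j == i else "I"
                    for j in range(len(state)))              # cache i writes

def check_invariant(init=("I", "I")):
    """Explicit-state reachability: the data-consistency invariant is
    'at most one cache in M'. Returns (holds, counterexample)."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if sum(c == "M" for c in s) > 1:
            return False, s          # invariant violated in a reachable state
        for ns in step(s):
            if ns not in seen:
                seen.add(ns)
                queue.append(ns)
    return True, None

print(check_invariant())
```

A real checker would also track data values, which is where the data independence argument in the abstract comes in: proving the property for a small data domain suffices for any domain size.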
Self-consistent theory for systems with mesoscopic fluctuations
Ciach, A.; Góźdź, W. T.
2016-10-01
We have developed a theory for inhomogeneous systems that allows for the incorporation of the effects of mesoscopic fluctuations. A hierarchy of equations relating the correlation and direct correlation functions for the local excess φ(r) of the volume fraction of particles ζ has been obtained, and an approximation leading to a closed set of equations for the two-point functions has been introduced for the disordered inhomogeneous phase. We have numerically solved the self-consistent equations for one-dimensional (1D) and three-dimensional (3D) models with short-range attraction and long-range repulsion. Predictions for all of the qualitative properties of the 1D model agree with the exact results, but only semi-quantitative agreement is obtained in the simplest version of the theory. The effects of fluctuations in the two 3D models considered are significantly different, despite the very similar properties of these models in the mean-field approximation. In both cases we obtain the sequence of large-small-large compressibility for increasing ζ. The very small compressibility is accompanied by the oscillatory decay of correlations with correlation lengths that are orders of magnitude larger than the size of particles. In one of the two models considered, the small compressibility becomes very small and the large compressibility becomes very large with decreasing temperature, and eventually van der Waals loops appear. Further studies are necessary in order to determine the nature of the strongly inhomogeneous phase present for intermediate volume fractions in 3D.
Standard Model Vacuum Stability and Weyl Consistency Conditions
DEFF Research Database (Denmark)
Antipin, Oleg; Gillioz, Marc; Krog, Jens;
2013-01-01
At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them...... order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions....
Quantum monadology: a consistent world model for consciousness and physics.
Nakagomi, Teruaki
2003-04-01
The NL world model presented in the previous paper is embodied by use of relativistic quantum mechanics, which reveals the significance of the reduction of quantum states and the relativity principle, and locates consciousness and the concept of flowing time consistently in physics. This model provides a consistent framework to solve apparent incompatibilities between consciousness (as our interior experience) and matter (as described by quantum mechanics and relativity theory). Does matter have an inside? What is the flowing time now? Does physics allow the indeterminism by volition? The problem of quantum measurement is also resolved in this model.
Model-Consistent Sparse Estimation through the Bootstrap
Bach, Francis
2009-01-01
We consider the least-square linear regression problem with regularization by the ℓ¹-norm, a problem usually referred to as the Lasso. In this paper, we first present a detailed asymptotic analysis of model consistency of the Lasso in low-dimensional settings. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection. For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection procedure, referred to as the Bolasso, is extended to high-dimensional settings by a provably consistent two-step procedure.
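The Bolasso procedure summarized above (run the Lasso on bootstrap resamples, then intersect the selected supports) can be sketched directly. The Lasso solver here is a plain hand-rolled coordinate descent, the data are synthetic, and λ and the number of resamples are illustrative choices.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Plain coordinate-descent Lasso:
    minimise (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.astype(float).copy()          # residual for b = 0
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ (r + X[:, j] * b[j]) / n
            bj = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += X[:, j] * (b[j] - bj)  # keep residual in sync
            b[j] = bj
    return b

def bolasso_support(X, y, lam, n_boot=32, seed=0):
    """Bolasso: intersect Lasso supports across bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n = len(y)
    support = None
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)     # bootstrap resample with replacement
        b = lasso_cd(X[idx], y[idx], lam)
        s = set(np.nonzero(np.abs(b) > 1e-8)[0])
        support = s if support is None else support & s
    return sorted(support)

rng = np.random.default_rng(42)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0])
y = X @ beta + 0.5 * rng.standard_normal(n)
print(bolasso_support(X, y, lam=0.1))
```

The relevant variables appear in every bootstrap support with high probability, while each irrelevant variable is dropped in at least one resample, so the intersection recovers the true support, which is the paper's argument in miniature.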
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Casana, Rodolfo; Moreira, Roemir P M
2011-01-01
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the analysis of the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of gauge and scalar fields, being affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor $\\kappa_{\\mu\\nu}$...
Multiscale Parameter Regionalization for consistent global water resources modelling
Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.
2017-04-01
Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale-independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale-independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation was carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer-function parameters across scales, and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that is consistent across scales, and also allows validation of discharge for smaller catchments, even with calibration at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other
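The core MPR mechanic, calibrating the parameters of a transfer function rather than grid-cell values, so that one parameter set serves every resolution, can be sketched in a few lines. The transfer function, the soil attribute, and the grid sizes below are invented for illustration; MPR's actual regionalization functions and upscaling operators are model-specific:

```python
import numpy as np

def transfer_function(sand, theta):
    # Hypothetical pedo-transfer rule mapping a soil attribute to a hydraulic
    # parameter. In MPR the calibrated quantities are theta, not grid values.
    a, b = theta
    return a * sand ** b

def upscale(field, factor):
    """Block-average a 2-D field to a coarser grid (one possible upscaling
    operator; MPR supports several)."""
    n0, n1 = field.shape[0] // factor, field.shape[1] // factor
    return field[:n0 * factor, :n1 * factor].reshape(
        n0, factor, n1, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
sand_fine = rng.uniform(0.1, 0.9, size=(64, 64))  # stand-in fine-scale soil map
theta = (2.0, 0.5)                                # calibrated at the coarse scale

# The same theta is applied at both resolutions:
param_fine = transfer_function(sand_fine, theta)
param_coarse = transfer_function(upscale(sand_fine, 8), theta)

# Scale consistency: aggregated fine-scale parameters track the coarse ones.
gap = np.abs(upscale(param_fine, 8) - param_coarse).mean()
```

Because only `theta` is calibrated, transferring it from a 50 km calibration to a 1 km application is a no-op: the fine grid simply evaluates the same transfer function on its own predictor fields.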
Emergent Dynamics of a Thermodynamically Consistent Particle Model
Ha, Seung-Yeal; Ruggeri, Tommaso
2017-03-01
We present a thermodynamically consistent particle (TCP) model motivated by the theory of multi-temperature mixtures of fluids in the case of spatially homogeneous processes. The proposed model incorporates the Cucker-Smale (C-S) type flocking model as its isothermal approximation. However, it is more complex than the C-S model, because the mutual interactions are not only "mechanical" but are also affected by the "temperature effect", as individual particles may exhibit distinct internal energies. We develop a framework for asymptotic weak and strong flocking in the context of the proposed model.
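The C-S interaction that the TCP model reduces to in its isothermal limit can be sketched directly. This is the standard Cucker-Smale model only (no temperature dynamics); the communication weight and all parameter values are illustrative:

```python
import numpy as np

def cucker_smale_step(x, v, dt=0.05, K=1.0, beta=0.5):
    """One Euler step of the Cucker-Smale flocking model:
    dv_i/dt = (1/N) sum_j psi(|x_i - x_j|) (v_j - v_i)."""
    N = len(x)
    dx = x[:, None, :] - x[None, :, :]
    dist2 = (dx ** 2).sum(axis=-1)
    psi = K / (1.0 + dist2) ** beta                       # communication weight
    dv = (psi[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / N
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(2)
x = rng.normal(size=(20, 2))          # positions of 20 particles in the plane
v = rng.normal(size=(20, 2))          # initial velocities
spread0 = v.std(axis=0).sum()
for _ in range(400):
    x, v = cucker_smale_step(x, v)
spread1 = v.std(axis=0).sum()         # flocking: velocity spread shrinks
```

With `beta = 0.5` the weight decays slowly enough that velocities align unconditionally; the velocity dispersion contracts toward zero while the mean velocity is conserved.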
Viscoelastic models with consistent hypoelasticity for fluids undergoing finite deformations
Altmeyer, Guillaume; Rouhaud, Emmanuelle; Panicaud, Benoit; Roos, Arjen; Kerner, Richard; Wang, Mingchuan
2015-08-01
Constitutive models of viscoelastic fluids are written with rate-form equations when considering finite deformations. When extending the approach used to model these effects from an infinitesimal-deformation to a finite-transformation framework, one has to ensure that the tensors and their rates are indifferent with respect to a change of observer and to superposition with rigid body motions. Frame-indifference problems can be solved with the use of an objective stress transport, but the choice of such an operator is not obvious, and the use of certain transports usually leads to a physically inconsistent formulation of hypoelasticity. The aim of this paper is to present a consistent formulation of hypoelasticity and to combine it with a viscosity model to construct a consistent viscoelastic model. In particular, the hypoelastic model is reversible.
Consistency in experiments on multistable driven delay systems
Oliver, Neus; Larger, Laurent; Fischer, Ingo
2016-10-01
We investigate the consistency properties in the responses of a nonlinear delay optoelectronic intensity oscillator subject to different drives, in particular, harmonic and self-generated waveforms. This system, an implementation of the Ikeda oscillator, is operating in a closed-loop configuration, exhibiting its autonomous dynamics while the drive signals are additionally introduced. Applying the same drive multiple times, we compare the dynamical responses of the optoelectronic oscillator and quantify the degree of consistency among them via their correlation. Our results show that consistency is not restricted to conditions close to the first Hopf bifurcation but can be found in a broad range of dynamical regimes, even in the presence of multistability. Finally, we discuss the dependence of consistency on the nature of the drive signal.
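Consistency in this sense is typically quantified as the correlation between responses to repeated presentations of the same drive. A minimal numpy sketch of that measure follows; the "oscillator" here is a toy nonlinearity plus internal noise, not the Ikeda dynamics of the paper:

```python
import numpy as np

def consistency(responses):
    """Mean pairwise Pearson correlation between repeated responses to the
    same drive -- high values indicate a consistent (drive-locked) response."""
    R = np.corrcoef(responses)              # rows = repetitions
    iu = np.triu_indices(len(responses), k=1)
    return R[iu].mean()

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 2000)
drive = np.sin(2 * np.pi * t) + 0.5 * np.sin(3 * np.pi * t)   # harmonic drive

# Hypothetical responses: a deterministic nonlinear transform of the drive
# plus independent internal noise on each repetition.
responses = np.stack([np.tanh(2 * drive) + 0.1 * rng.normal(size=t.size)
                      for _ in range(5)])
c = consistency(responses)
```

Raising the internal-noise amplitude (or making the deterministic part depend on hidden initial conditions) drives `c` down, which is exactly how a consistency/inconsistency transition would show up in this metric.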
Bolasso: model consistent Lasso estimation through the bootstrap
Bach, Francis
2008-01-01
We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning rep...
Detection and quantification of flow consistency in business process models
DEFF Research Database (Denmark)
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel
2017-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and, second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics...
A consistent transported PDF model for treating differential molecular diffusion
Wang, Haifeng; Zhang, Pei
2016-11-01
Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.
The consistency service of the ATLAS Distributed Data Management system
Serfon, C; The ATLAS collaboration
2011-01-01
With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools, DQ2 site services, or by site administrators that report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.
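The bookkeeping such a service performs, comparing catalog checksums against what is actually on storage, can be sketched as follows. This is illustrative only, not DQ2 code: the checksum algorithm and the catalog layout are assumptions for the sketch.

```python
import hashlib
import pathlib
import tempfile

def checksum(path):
    """Checksum of a file's contents (MD5 here; production systems may use
    Adler-32 or similar)."""
    return hashlib.md5(pathlib.Path(path).read_bytes()).hexdigest()

def find_corrupted(catalog):
    """Compare stored checksums against the files on disk and return the
    paths whose current checksum no longer matches the catalog entry."""
    return [p for p, expected in catalog.items() if checksum(p) != expected]

# Demo: register two files, then silently corrupt one of them.
with tempfile.TemporaryDirectory() as d:
    good = pathlib.Path(d, "good.dat")
    bad = pathlib.Path(d, "bad.dat")
    good.write_bytes(b"event data")
    bad.write_bytes(b"event data")
    catalog = {str(good): checksum(good), str(bad): checksum(bad)}
    bad.write_bytes(b"corrupted!")          # simulate silent corruption
    corrupted = find_corrupted(catalog)
```

A real service would then trigger re-replication from a healthy copy or, if no replica survives, flag the loss as irrecoverable and notify the affected users.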
The Consistency Service of the ATLAS Distributed Data Management system
Serfon, C; The ATLAS collaboration
2010-01-01
With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools, DQ2 site services, or by site administrators that report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.
Simplified Models for Dark Matter Face their Consistent Completions
Energy Technology Data Exchange (ETDEWEB)
Goncalves, Dorival [Pittsburgh U.; Machado, Pedro N. [Madrid, IFT; No, Jose Miguel [Sussex U.
2016-11-14
Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.
Simplified Models for Dark Matter Face their Consistent Completions
Goncalves, Dorival; No, Jose Miguel
2016-01-01
Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.
The internal consistency of the North Sea carbonate system
Salt, Lesley A.; Thomas, Helmuth; Bozec, Yann; Borges, Alberto V.; de Baar, Hein J. W.
2016-05-01
In 2002 (February) and 2005 (August), the full suite of carbonate system parameters (total alkalinity (AT), dissolved inorganic carbon (DIC), pH, and partial pressure of CO2 (pCO2)) was measured on two re-occupations of the entire North Sea basin, with three parameters (AT, DIC, pCO2) measured on four additional re-occupations, covering all four seasons and allowing an assessment of the internal consistency of the carbonate system. For most of the year, there is a similar level of internal consistency, with AT being calculated to within ±6 μmol kg-1 using DIC and pH, DIC to ±6 μmol kg-1 using AT and pH, pH to ±0.008 using AT and pCO2, and pCO2 to ±8 μatm using DIC and pH, with the dissociation constants of Millero et al. (2006). In spring, however, we observe a significant decline in the ability to accurately calculate the carbonate system. Lower consistency is observed with an increasing fraction of Baltic Sea water, caused by the high contribution of organic alkalinity in this water mass, which is not accounted for in the carbonate system calculations. Attempts to improve the internal consistency by accounting for the unconventional salinity-borate relationships in freshwater and the Baltic Sea, and through application of the new North Atlantic salinity-boron relationship (Lee et al., 2010), resulted in no significant difference in the internal consistency.
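Internal consistency here means that any two measured parameters, together with the dissociation constants, determine the others. The stripped-down sketch below keeps only the carbonate species (borate, water, and the organic alkalinity that degrades the Baltic results are all omitted), and the K1, K2 values are round illustrative numbers, not the Millero et al. (2006) constants used in the study:

```python
# Illustrative carbonic acid dissociation constants (mol/kg); real work would
# use the Millero et al. (2006) constants at the in-situ T and S.
K1, K2 = 1.0e-6, 7.0e-10

def alk_from_dic_ph(dic, ph):
    """Carbonate alkalinity AT ~= [HCO3-] + 2[CO3--] from DIC and pH."""
    h = 10.0 ** (-ph)
    denom = 1.0 + K1 / h + K1 * K2 / h ** 2   # DIC = CO2* + HCO3- + CO3--
    co2 = dic / denom
    hco3 = co2 * K1 / h
    co3 = hco3 * K2 / h
    return hco3 + 2.0 * co3

def dic_from_alk_ph(alk, ph):
    """Invert the same speciation: AT is linear in DIC at fixed pH."""
    h = 10.0 ** (-ph)
    denom = 1.0 + K1 / h + K1 * K2 / h ** 2
    factor = (K1 / h + 2.0 * K1 * K2 / h ** 2) / denom
    return alk / factor

dic = 2100e-6                      # mol/kg, a typical seawater value
ph = 8.1
alk = alk_from_dic_ph(dic, ph)
dic_back = dic_from_alk_ph(alk, ph)   # internal-consistency round trip
```

In this reduced system the round trip is exact by construction; in the real measurements, neglected contributions such as organic alkalinity show up precisely as the residual between calculated and observed parameters.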
Towards consistent nuclear models and comprehensive nuclear data evaluations
Energy Technology Data Exchange (ETDEWEB)
Bouland, O [Los Alamos National Laboratory; Hale, G M [Los Alamos National Laboratory; Lynn, J E [Los Alamos National Laboratory; Talou, P [Los Alamos National Laboratory; Bernard, D [FRANCE; Litaize, O [FRANCE; Noguere, G [FRANCE; De Saint Jean, C [FRANCE; Serot, O [FRANCE
2010-01-01
The aim of this paper is to highlight the consistency achieved nowadays in nuclear data and uncertainty assessments in terms of compound nucleus reaction theory, from the neutron separation energy up to the continuum. By establishing continuity between the theories used in the resolved resonance (R-matrix theory), unresolved resonance (average R-matrix theory) and continuum (optical model) ranges through a generalization of the so-called SPRT method, consistent average parameters are extracted from observed measurements, and the associated covariances are calculated over the whole energy range. This paper recalls, in particular, recent advances in fission cross section calculations and suggests some hints for future developments.
Enhanced data consistency of a portable gait measurement system
Lin, Hsien-I.; Chiang, Y. P.
2013-11-01
A gait measurement system is a useful tool for rehabilitation applications. Such a system is typically used to conduct gait experiments in large workplaces such as laboratories, where the gait measurement equipment can be permanently installed. However, a gait measurement system should be portable if it is to be used in clinics or community centers for elderly people. In a portable gait measurement system the workspace is limited, and landmarks on a subject may not be visible to the cameras during experiments. We therefore propose a virtual-marker function that recovers the positions of unseen landmarks in order to maintain data consistency. This work develops a portable clinical gait measurement system consisting of lightweight motion capture devices, force plates, and a walkway assembled from plywood boards. We evaluated the portable clinical gait system with 11 normal subjects over three consecutive days in a limited experimental space. Results of gait analysis, based on the verification of within-day and between-day coefficients of multiple correlation, show that the proposed portable gait system is reliable.
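A common way to implement such a virtual-marker function is to fit the rigid-body transform between a calibration frame and the current frame from the visible markers, then map the occluded landmark through it. The sketch below uses the Kabsch (SVD) algorithm; the marker layout and motion are synthetic, and real systems must additionally handle measurement noise and marker dropout:

```python
import numpy as np

def rigid_fit(P, Q):
    """Kabsch algorithm: rotation R and translation t mapping points P to Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of the clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    return R, cq - R @ cp

def virtual_marker(ref_visible, ref_hidden, cur_visible):
    """Recover an occluded landmark from the visible markers via the
    rigid-body transform of a calibration (reference) frame."""
    R, t = rigid_fit(ref_visible, cur_visible)
    return R @ ref_hidden + t

# Demo: apply a known rigid motion to a 4-marker cluster.
ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
hidden_ref = np.array([0.5, 0.5, 0.5])        # landmark occluded in 'cur'
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
cur = ref @ R_true.T + t_true                 # current visible markers
est = virtual_marker(ref, hidden_ref, cur)
true_hidden = R_true @ hidden_ref + t_true
```

This assumes the landmark is rigidly attached to the visible marker cluster, which is the usual premise of virtual-marker reconstruction on a body segment.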
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically ... options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically ... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying.
Consistency Across Standards or Standards in a New Business Model
Russo, Dane M.
2010-01-01
Presentation topics include: standards in a changing business model; how the new National Space Policy is driving change; a new paradigm for human spaceflight; consistency across standards; the purpose of standards; the danger of over-prescriptive standards; the balance needed between prescriptive and general standards; enabling versus inhibiting; characteristics of success-oriented standards; and conclusions. Additional slides cover NASA Procedural Requirements 8705.2B, which identifies human rating standards and requirements; draft health and medical standards for human rating; what has been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of governmental and non-governmental human factors standards.
A detailed self-consistent vertical Milky Way disc model
Directory of Open Access Journals (Sweden)
Gao S.
2012-02-01
We present a self-consistent vertical disc model of the thin and thick disc in the solar vicinity. The model is optimized to fit the local kinematics of main sequence stars by varying the star formation history and the dynamical heating function. The star formation history and the dynamical heating function are not uniquely determined by the local kinematics alone. For four different pairs of input functions we calculate star count predictions at high galactic latitude as a function of colour. The comparison with North Galactic Pole data from SDSS/SEGUE leads to significant constraints on the local star formation history.
Radio data and synchrotron emission in consistent cosmic ray models
Bringmann, Torsten; Lineros, Roberto A
2011-01-01
We consider the propagation of electrons in phenomenological two-zone diffusion models compatible with cosmic-ray nuclear data and compute the diffuse synchrotron emission resulting from their interaction with galactic magnetic fields. We find models in agreement not only with cosmic ray data but also with radio surveys at essentially all frequencies. Requiring such a globally consistent description strongly disfavors both a very large (L>15 kpc) and small (L<1 kpc) effective size of the diffusive halo. This has profound implications for, e.g., indirect dark matter searches.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
2013-01-01
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...
Self consistent modeling of accretion columns in accretion powered pulsars
Falkner, Sebastian; Schwarm, Fritz-Walter; Wolff, Michael Thomas; Becker, Peter A.; Wilms, Joern
2016-04-01
We combine three physical models to self-consistently derive the observed flux and pulse profiles of neutron star accretion columns. From the thermal and bulk Comptonization model of Becker & Wolff (2006) we obtain seed photon continua produced in the dense inner regions of the accretion column. In a thin outer layer these seed continua are imprinted with cyclotron resonant scattering features calculated using Monte Carlo simulations. The observed phase- and energy-dependent flux corresponding to these emission profiles is then calculated, taking relativistic light bending into account. We present simulated pulse profiles and the predicted variation of the observable X-ray spectrum with pulse phase.
A consistent collinear triad approximation for operational wave models
Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.
2016-08-01
In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.
Warped 5D Standard Model Consistent with EWPT
Cabrer, Joan A; Quiros, Mariano
2011-01-01
For a 5D Standard Model propagating in an AdS background with an IR-localized Higgs, compatibility of bulk KK gauge modes with EWPT yields a phenomenologically unappealing KK spectrum (m > 12.5 TeV) and leads to a "little hierarchy problem". For a bulk Higgs, the solution to the hierarchy problem reduces the previous bound only by a factor of sqrt(3). As a way out, models with an enhanced bulk gauge symmetry SU(2)_R x U(1)_(B-L) were proposed. In this note we describe a much simpler (5D Standard) Model in which the introduction of an enlarged gauge symmetry is no longer required. It is based on a warped gravitational background which departs from AdS at the IR brane, together with a bulk propagating Higgs. The model is consistent with EWPT for a range of KK masses within the LHC reach.
Consistent regularization and renormalization in models with inhomogeneous phases
Adhikari, Prabal
2016-01-01
In many models in condensed matter physics and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different consistent ways of regularizing and renormalizing quantum fluctuations, focusing on a symmetric energy cutoff scheme and dimensional regularization. We apply these techniques calculating the vacuum energy in the NJL model in 1+1 dimensions in the large-$N_c$ limit and the 3+1 dimensional quark-meson model in the mean-field approximation both for a one-dimensional chiral-density wave.
Consistent regularization and renormalization in models with inhomogeneous phases
Adhikari, Prabal; Andersen, Jens O.
2017-02-01
In many models in condensed matter and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different ways of consistently regularizing and renormalizing quantum fluctuations, focusing on momentum cutoff, symmetric energy cutoff, and dimensional regularization. We apply these techniques calculating the vacuum energy in the Nambu-Jona-Lasinio model in 1 +1 dimensions in the large-Nc limit and in the 3 +1 dimensional quark-meson model in the mean-field approximation both for a one-dimensional chiral-density wave.
Self-consistent triaxial de Zeeuw-Carollo Models
Thakur, Parijat; Das, Mousumi; Chakraborty, D K; Ann, H B
2007-01-01
We use the usual method of Schwarzschild to construct self-consistent solutions for the triaxial de Zeeuw & Carollo (1996) models with central density cusps. ZC96 models are triaxial generalisations of the spherical $\\gamma$-models of Dehnen, whose densities vary as $r^{-\\gamma}$ near the center and $r^{-4}$ at large radii and hence possess a central density core for $\\gamma=0$ and cusps for $\\gamma > 0$. We consider four triaxial models from ZC96, two prolate triaxial: $(p, q) = (0.65, 0.60)$ with $\\gamma = 1.0$ and 1.5, and two oblate triaxial: $(p, q) = (0.95, 0.60)$ with $\\gamma = 1.0$ and 1.5. We compute 4500 orbits in each model for time periods of $10^{5} T_{D}$. We find that a large fraction of the orbits in each model are stochastic, as indicated by their nonzero Liapunov exponents. The stochastic orbits in each model can sustain regular shapes for $\\sim 10^{3} T_{D}$ or longer, which suggests that they diffuse slowly through their allowed phase-space. Except for the oblate triaxial models with $\\gamma ...
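Nonzero Liapunov (Lyapunov) exponents are the stochasticity diagnostic used above. For galactic orbits one integrates the variational equations alongside the orbit; the idea is easiest to see on a 1-D map, where the exponent is the time-averaged log of the local stretching rate. For the logistic map at r = 4 the exact value is ln 2, which makes a convenient self-check:

```python
import math

def logistic_lyapunov(r=4.0, n=100_000, x0=0.1):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the average of log|f'(x)| = log|r (1 - 2x)| along the orbit."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # local stretching rate
        x = r * x * (1.0 - x)
    return acc / n

lam = logistic_lyapunov()
```

A positive `lam` flags chaotic (stochastic) motion, exactly as the nonzero exponents of the ZC96 orbits do; orbits with exponents consistent with zero are classified as regular.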
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
Directory of Open Access Journals (Sweden)
J. C. Hargreaves
2011-03-01
We investigate the consistency of various ensembles of model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes, with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day; however, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
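A standard reliability test of this kind is a rank-histogram check in which observational uncertainty is added to the ensemble members before ranking, so that data error enters the evaluation, the point the authors stress. The synthetic sketch below is illustrative (all numbers invented): for a reliable ensemble the observation and the perturbed members are statistically exchangeable, so the rank histogram comes out flat.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sites, n_members, obs_err = 2000, 10, 0.3

# A 'reliable' toy setup: the true state and the ensemble members are drawn
# from the same (here climatological) distribution; observations add noise.
truth = rng.normal(0.0, 1.0, n_sites)
members = rng.normal(0.0, 1.0, (n_sites, n_members))
obs = truth + rng.normal(0.0, obs_err, n_sites)

# Perturb the members with the observational error before ranking, so that
# data uncertainty is accounted for in the consistency test.
perturbed = members + rng.normal(0.0, obs_err, members.shape)
ranks = (perturbed < obs[:, None]).sum(axis=1)   # rank of the obs at each site
hist = np.bincount(ranks, minlength=n_members + 1) / n_sites
```

Skipping the perturbation step makes the observations slightly wider-tailed than the members, over-populating the extreme ranks, which is the kind of false inconsistency that ignoring observational uncertainty produces.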
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
Directory of Open Access Journals (Sweden)
J. C. Hargreaves
2011-08-01
We investigate the consistency of various ensembles of climate model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes, with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day. However, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Energy Technology Data Exchange (ETDEWEB)
Casana, Rodolfo; Ferreira, Manoel M.; Moreira, Roemir P.M. [Universidade Federal do Maranhao (UFMA), Departamento de Fisica, Sao Luis, MA (Brazil)
2012-07-15
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the analysis of the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, both affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor κ_μν. The propagator of the gauge field is explicitly evaluated and expressed in terms of linearly independent symmetric tensors, presenting only one physical mode. The same holds for the scalar propagator. A consistency analysis is performed based on the poles of the propagators. The isotropic parity-even sector is stable, causal and unitary for 0 ≤ κ_00 < 1. On the other hand, the anisotropic sector is stable and unitary but in general noncausal. Finally, it is shown that this planar model interacting with a λφ⁴-Higgs field supports compact-like vortex configurations. (orig.)
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Casana, Rodolfo; Ferreira, Manoel M.; Moreira, Roemir P. M.
2012-07-01
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the analysis of the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, both affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor κ_μν. The propagator of the gauge field is explicitly evaluated and expressed in terms of linearly independent symmetric tensors, presenting only one physical mode. The same holds for the scalar propagator. A consistency analysis is performed based on the poles of the propagators. The isotropic parity-even sector is stable, causal and unitary for 0 ≤ κ_00 < 1. On the other hand, the anisotropic sector is stable and unitary but in general noncausal. Finally, it is shown that this planar model interacting with a λ|φ|⁴-Higgs field supports compact-like vortex configurations.
Self-Consistent Modeling of Reionization in Cosmological Hydrodynamical Simulations
Oñorbe, Jose; Lukić, Zarija
2016-01-01
The ultraviolet background (UVB) emitted by quasars and galaxies governs the ionization and thermal state of the intergalactic medium (IGM), regulates the formation of high-redshift galaxies, and is thus a key quantity for modeling cosmic reionization. The vast majority of cosmological hydrodynamical simulations implement the UVB via a set of spatially uniform photoionization and photoheating rates derived from UVB synthesis models. We show that simulations using canonical UVB rates reionize, and perhaps more importantly, spuriously heat the IGM much earlier (z ~ 15) than they should. This problem arises because at z > 6, where observational constraints are non-existent, the UVB amplitude is far too high. We introduce a new methodology to remedy this issue, and generate self-consistent photoionization and photoheating rates to model any chosen reionization history. Following this approach, we run a suite of hydrodynamical simulations of different reionization scenarios, and explore the impact of the timing of ...
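The fix described, suppressing the UVB amplitude at redshifts where it is unconstrained so that ionization and heating happen at the chosen time, can be caricatured as a rescaling of tabulated rates. The cutoff shape, redshift grid and rate value below are illustrative assumptions, not the paper's calibrated tables.

```python
import numpy as np

def rescaled_uvb(z, gamma_canonical, z_reion, dz=0.5):
    """Suppress canonical photoionization rates above a chosen reionization
    redshift so the simulated IGM does not ionize (or heat) too early.
    The Gaussian cutoff shape is an illustrative assumption."""
    suppression = np.where(z <= z_reion, 1.0,
                           np.exp(-((z - z_reion) / dz) ** 2))
    return gamma_canonical * suppression

z = np.linspace(2.0, 15.0, 27)          # redshift grid, step 0.5
gamma = 1e-12 * np.ones_like(z)         # toy canonical HI rate [1/s]
gamma_new = rescaled_uvb(z, gamma, z_reion=7.5)
```

Below the chosen reionization redshift the canonical rates are untouched; above it they fall off sharply, so the IGM stays neutral and cold until the prescribed epoch.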
Consistent Static Models of Local Thermospheric Composition Profiles
Picone, J M; Drob, D P
2016-01-01
The authors investigate the ideal, nondriven multifluid equations of motion to identify consistent (i.e., truly stationary), mechanically static models for composition profiles within the thermosphere. These physically faithful functions are necessary to define the parametric core of future empirical atmospheric models and climatologies. Based on the strength of interspecies coupling, the thermosphere has three altitude regions: (1) the lower thermosphere, in which interspecies coupling is strong; (2) the upper thermosphere (herein z ≳ 200 km), in which the species flows are approximately uncoupled; and (3) a transition region in between, where the effective species particle mass and the effective species vertical flow interpolate between the solutions for the upper and lower thermosphere. We place this view in the context of current terminology within the community, i.e., a fully mixed (lower) region and an upper region in diffusive equilibrium (DE). The latter condition, DE, currently used in empirical composition models, does not represent a truly static composition profile ...
Thermodynamically consistent model of brittle oil shales under overpressure
Izvekov, Oleg
2016-04-01
The concept of dual porosity is a common way to simulate oil shale production. In the frame of this concept the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during pressure reduction in production. In this work a new thermodynamically consistent model which generalizes the dual-porosity model is proposed. Its particularities are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to simulate the development of secondary fractures in the low-permeability matrix. Slip along natural fractures is simulated in the frame of plasticity theory with the Drucker-Prager criterion.
A minimal model of self-consistent partial synchrony
Clusella, Pau; Politi, Antonio; Rosenblum, Michael
2016-09-01
We show that self-consistent partial synchrony in globally coupled oscillatory ensembles is a general phenomenon. We analyze in detail appearance and stability properties of this state in possibly the simplest setup of a biharmonic Kuramoto-Daido phase model as well as demonstrate the effect in limit-cycle relaxational Rayleigh oscillators. Such a regime extends the notion of splay state from a uniform distribution of phases to an oscillating one. Suitable collective observables such as the Kuramoto order parameter allow detecting the presence of an inhomogeneous distribution. The characteristic and most peculiar property of self-consistent partial synchrony is the difference between the frequency of single units and that of the macroscopic field.
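A minimal numerical sketch of this setup, identical phase oscillators with biharmonic mean-field coupling written via the two Kuramoto-Daido order parameters Z1 and Z2, can look as follows. All parameter values are illustrative and need not place the system in the partially synchronous regime studied in the paper.

```python
import numpy as np

def simulate_biharmonic(N=500, T=50.0, dt=0.01,
                        eps1=1.0, eps2=1.0, beta1=1.5, beta2=0.0, seed=0):
    """Euler integration of identical oscillators (co-rotating frame) with
        dtheta_i/dt = Im[ eps1*Z1*exp(i(beta1 - theta_i))
                        + eps2*Z2*exp(i(beta2 - 2*theta_i)) ],
    where Z1 = <exp(i*theta)> and Z2 = <exp(2i*theta)> are the first two
    Kuramoto-Daido order parameters. Returns the final phases and the
    |Z1| (Kuramoto order parameter) time series."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2 * np.pi, N)
    r1 = np.empty(int(T / dt))
    for n in range(r1.size):
        z1 = np.exp(1j * theta).mean()
        z2 = np.exp(2j * theta).mean()
        dtheta = (eps1 * np.imag(z1 * np.exp(1j * (beta1 - theta)))
                  + eps2 * np.imag(z2 * np.exp(1j * (beta2 - 2 * theta))))
        theta = (theta + dt * dtheta) % (2 * np.pi)
        r1[n] = abs(z1)
    return theta, r1

theta, r1 = simulate_biharmonic()
```

Monitoring |Z1| over time is exactly the kind of collective observable the abstract mentions: |Z1| near 0 indicates a splay-like uniform phase distribution, |Z1| near 1 full synchrony, and an intermediate oscillating value an inhomogeneous distribution.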
Short Polymer Modeling using Self-Consistent Integral Equation Method
Kim, Yeongyoon; Park, So Jung; Kim, Jaeup
2014-03-01
Self-consistent field theory (SCFT) is an excellent mean field theoretical tool for predicting the morphologies of polymer based materials. In the standard SCFT, the polymer is modeled as a Gaussian chain, which is suitable for a polymer of high molecular weight but not necessarily for a polymer of low molecular weight. In order to overcome this limitation, Matsen and coworkers have recently developed an SCFT of discrete polymer chains in which one polymer is modeled as a finite number of beads joined by freely jointed bonds of fixed length. In their model, the diffusion equation of the canonical SCFT is replaced by an iterative integral equation, and the full spectral method is used for the production of the phase diagram of short block copolymers. In this study, for the finite length chain problem, we apply the pseudospectral method, which is the most efficient numerical scheme for solving the iterative integral equation. We use this new numerical method to investigate two different types of polymer bonds: the spring-beads model and the freely-jointed chain model. By comparing these results with those of the Gaussian chain model, the influence of chain length and bond type on the morphologies of diblock copolymer melts is examined. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (no. 2012R1A1A2043633).
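For a freely jointed chain, the iterative integral equation replaces the diffusion equation: each bead update convolves the partial partition function with the bond transition function, which pseudospectrally becomes a multiplication by sin(kb)/(kb) in Fourier space. A 1D sketch under that assumption follows; the field, box size and bead number are made up for illustration and this is not the authors' production code.

```python
import numpy as np

def propagate_fjc(w, n_beads, L, b):
    """Partial partition function q for a freely jointed chain of fixed bond
    length b in an external field w(x), on a periodic 1D grid of length L.
    Each bead update is q <- exp(-w) * (bond transition convolved with q),
    done pseudospectrally with FFTs."""
    M = len(w)
    k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)
    g = np.sinc(k * b / np.pi)       # sin(kb)/(kb); np.sinc(x)=sin(pi x)/(pi x)
    q = np.exp(-w)                   # first bead
    for _ in range(n_beads - 1):
        q = np.exp(-w) * np.fft.ifft(g * np.fft.fft(q)).real
    return q

L, M, b = 10.0, 256, 0.3
x = np.linspace(0.0, L, M, endpoint=False)
w = 0.5 * np.cos(2 * np.pi * x / L)  # illustrative periodic field
q = propagate_fjc(w, n_beads=20, L=L, b=b)
Q = q.mean()                         # single-chain partition function (up to norm)
```

In a zero field the propagator must stay identically one, which is a convenient sanity check on the Fourier-space bond factor.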
Deterministic Consistency: A Programming Model for Shared Memory Parallelism
Aviram, Amittai; Ford, Bryan
2009-01-01
The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...
Mean-field theory and self-consistent dynamo modeling
Energy Technology Data Exchange (ETDEWEB)
Yoshizawa, Akira; Yokoi, Nobumitsu [Tokyo Univ. (Japan). Inst. of Industrial Science; Itoh, Sanae-I [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)
2001-12-01
Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)
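In Yoshizawa-type mean-field closures, the combination of residual-helicity and cross-helicity effects enters through the turbulent electromotive force; schematically (the coefficients and signs here are a generic illustration, not the paper's exact expressions):

```latex
\boldsymbol{E}_{T} \equiv \langle \boldsymbol{u}' \times \boldsymbol{b}' \rangle
  \simeq \alpha\,\boldsymbol{B} \;-\; \beta\,\nabla\times\boldsymbol{B}
  \;+\; \gamma\,\boldsymbol{\Omega},
\qquad \boldsymbol{\Omega} = \nabla\times\boldsymbol{U},
```

where α is tied to the turbulent residual helicity, β to the turbulent energy (an effective diffusivity), and γ to the turbulent cross helicity ⟨u'·b'⟩. The γΩ term is what couples mean flow to mean magnetic field, which is the mechanism invoked above both for field generation and for flow generation by magnetic fields.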
A Consistent Design Methodology for Wireless Embedded Systems
Directory of Open Access Journals (Sweden)
Sauzon G
2005-01-01
The complexity demand of modern communication systems, particularly in the wireless domain, grows at an astounding rate, a rate so high that the available complexity and, even worse, the design productivity required to convert algorithms into silicon are left far behind. This effect is commonly referred to as the design productivity crisis, or simply the design gap. Since the design gap is predicted to widen every year, it is of utmost importance to look closer at the design flow of such communication systems in order to find improvements. While various ideas for speeding up designs have been proposed, very few have found their path into existing EDA products. This paper presents requirements for such tools and shows how an open design environment offers a solution to integrate existing EDA tools, allowing for a consistent design flow and considerably speeding up design times.
A self-consistent spin-diffusion model for micromagnetics
Abert, Claas
2016-12-17
We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure in dependence of the tilting angle of the magnetization in the different layers is investigated. Both examples show good agreement with reference simulations and experiments respectively.
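Schematically, the self-consistent system couples the magnetization m (Landau-Lifshitz-Gilbert with a spin torque term), the spin accumulation s (diffusion with relaxation), and the electric potential through the charge current. A generic sketch of the first two equations, with placeholder coefficients rather than those of the paper, reads:

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma\, \mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
  + \alpha\, \mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t}
  + c_{J}\, \mathbf{m}\times\mathbf{s},
\qquad
\frac{\partial \mathbf{s}}{\partial t}
  = -\nabla\cdot\overline{\overline{J}}_{s}
  - \frac{\mathbf{s}}{\tau_{\mathrm{sf}}}
  - c_{x}\, \mathbf{s}\times\mathbf{m},
```

where τ_sf is the spin-flip relaxation time and the spin current tensor contains both a diffusive part and a magnetization-dependent drift part proportional to the charge current. Solving for the electric potential together with m, rather than prescribing the current, is what makes the treatment self-consistent and lets resistance changes follow from the magnetization state.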
Moreno Chaparro, Nicolas
2015-06-30
We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria, such as the characteristic size of the chain, compressibility, density, and temperature, are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., a ≥20-fold reduction relative to the original system). We show that our methodology has potential applications modeling systems of high molecular weight molecules at large scales, such as diblock copolymers and DNA.
Consistent Prediction of Properties of Systems with Lipids
DEFF Research Database (Denmark)
Cunico, Larissa; Ceriani, Roberta; Sarup, Bent
Equilibria between vapour, liquid and/or solid phases, pure component properties and also mixture-phase properties are necessary for the synthesis, design and analysis of different unit operations found in the production of edible oils, fats and biodiesel. A systematic numerical analysis is employed to determine the needs for phase equilibria and related properties in processes such as Deodorization, Dry Fractionation, Solvent Extraction and Biodiesel Production. Another important use for the data and analysis is in property model development for correct and consistent property prediction. Lipids are found in almost all mixtures involving edible oils, fats and biodiesel. They are also being extracted for use in the pharma industry. A database of the pure components (lipids) present in these processes and of mixture properties has been developed and made available for different applications...
Improving risk assessment by defining consistent and reliable system scenarios
Directory of Open Access Journals (Sweden)
B. Mazzorana
2009-02-01
During the entire procedure of risk assessment for hydrologic hazards, the selection of consistent and reliable scenarios, constructed in a strictly systematic way, is fundamental for the quality and reproducibility of the results. However, subjective assumptions on relevant impact variables, such as sediment transport intensity on the system loading side and weak point response mechanisms, repeatedly cause biases in the results, and consequently affect transparency and required quality standards. Furthermore, the system response of mitigation measures to extreme event loadings represents another key variable in hazard assessment, as well as in integral risk management including intervention planning. Formative Scenario Analysis, as a supplement to conventional risk assessment methods, is a technique to construct well-defined sets of assumptions to gain insight into a specific case and the potential system behaviour. By two case studies, carried out (1) to analyse sediment transport dynamics in a torrent section equipped with control measures, and (2) to identify hazards induced by woody debris transport at hydraulic weak points, the applicability of the Formative Scenario Analysis technique is presented. It is argued that during scenario planning in general, and with respect to integral risk management in particular, Formative Scenario Analysis allows for the development of reliable and reproducible scenarios in order to design more specifically an application framework for the sustainable assessment of natural hazards impact. The overall aim is to optimise the hazard mapping and zoning procedure by methodologically integrating quantitative and qualitative knowledge.
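Formative Scenario Analysis can be mechanized: impact variables take discrete levels, every pair of levels gets a consistency rating, and only scenarios free of inconsistent pairs are retained and ranked. The variables, levels and ratings below are invented placeholders echoing the two case studies, not the actual ratings used in them.

```python
import itertools

# Illustrative impact variables and levels (not from the case studies)
levels = {
    "sediment_transport":  ["low", "high"],
    "woody_debris":        ["none", "present"],
    "weak_point_response": ["holds", "fails"],
}

# Pairwise consistency ratings: 0 = inconsistent, 1 = neutral, 2 = consistent.
# Unlisted pairs default to neutral (1).
consistency = {
    (("sediment_transport", "high"), ("weak_point_response", "fails")): 2,
    (("sediment_transport", "low"),  ("weak_point_response", "fails")): 0,
    (("woody_debris", "present"),    ("weak_point_response", "fails")): 2,
}

def rating(a, b):
    return consistency.get((a, b), consistency.get((b, a), 1))

def scenario_score(scenario):
    """Sum of pairwise ratings; scenarios containing any 0-rated
    (inconsistent) pair are discarded (score None)."""
    pairs = list(itertools.combinations(scenario, 2))
    if any(rating(a, b) == 0 for a, b in pairs):
        return None
    return sum(rating(a, b) for a, b in pairs)

names = list(levels)
scenarios = [tuple(zip(names, combo))
             for combo in itertools.product(*levels.values())]
scored = {s: scenario_score(s) for s in scenarios}
consistent = [s for s, v in scored.items() if v is not None]
```

Ranking the surviving scenarios by score yields the small, reproducible set of well-defined system states that the method feeds into hazard mapping, instead of ad hoc assumptions.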
Consistent Probabilistic Description of the Neutral Kaon System
Bernabeu, J; Villanueva-Perez, P
2013-01-01
The neutral Kaon system has both CP violation in the mass matrix and a non-vanishing lifetime difference in the width matrix. This leads to an effective Hamiltonian which is not a normal operator, with incompatible (non-commuting) masses and widths. In the Weisskopf-Wigner Approach (WWA), by diagonalizing the entire Hamiltonian, the unphysical non-orthogonal "stationary" states $K_{L,S}$ are obtained. These states have complex eigenvalues whose real (imaginary) part does not coincide with the eigenvalues of the mass (width) matrix. In this work we describe the system as an open Lindblad-type quantum mechanical system due to Kaon decays. This approach, in terms of density matrices for initial and final states, provides a consistent probabilistic description, avoiding the standard problems because the width matrix becomes a composite operator not included in the Hamiltonian. We consider the dominant-decay channel to two pions, so that one of the Kaon states with definite lifetime becomes stable. This new approa...
Classical and Quantum Consistency of the DGP Model
Nicolis, A; Nicolis, Alberto; Rattazzi, Riccardo
2004-01-01
We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode π consistently describes physics in a wide range of regimes, both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance the spherical source in asymptotically flat space. However we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space π becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed the tower of higher dimensional operators which is expected by a generic UV completion of the model limits predictivity at even larger length scales. We outline ...
Consistent constraints on the Standard Model Effective Field Theory
Berthier, Laure
2015-01-01
We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low energy precision data. We fit one hundred observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level or beyond, unless the cut-off scale is assumed to be large, Λ ≳ 3 TeV. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S, T analysis is modified by the theory errors we include, as an illustrative example.
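The effect of a theory error metric can be seen in a one-parameter caricature: adding a theory uncertainty in quadrature to the experimental one directly widens the bound on a Wilson coefficient. The numbers below are toy values, not the actual fit inputs or results.

```python
import numpy as np

def bound_1sigma(obs, exp_err, th_err, sensitivity):
    """1-sigma bound on a single Wilson coefficient C from one observable
    with O/O_SM = 1 + sensitivity*C, the theory error added in quadrature:
        chi2(C) = ((obs - 1 - sensitivity*C) / total_err)**2."""
    total = np.hypot(exp_err, th_err)
    c_hat = (obs - 1.0) / sensitivity
    return c_hat, total / abs(sensitivity)

# toy: a measurement equal to the SM prediction at 1% precision,
# with a 2% sensitivity slope to the coefficient
c0, err0 = bound_1sigma(1.0, 0.01, 0.00, 0.02)   # no theory error
c1, err1 = bound_1sigma(1.0, 0.01, 0.01, 0.02)   # with 1% theory error
```

With an equal-sized theory error the bound relaxes by a factor of sqrt(2), which is the qualitative effect reported: bounds on some leading parameters loosen once theory errors are incorporated consistently rather than assuming a large cut-off scale.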
Creation of Consistent Burn Wounds: A Rat Model
Directory of Open Access Journals (Sweden)
Elijah Zhengyang Cai
2014-07-01
Background Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods Ten male Sprague-Dawley rats were anesthetized and the dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested under its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results Average wound size was 0.9957 cm2 (standard deviation [SD], 0.1845; n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined and uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD, 0.424), 2.35 mm (SD, 0.071), and 2.60 mm (SD, 0.283) for durations of 5, 10, and 20 seconds respectively. Burn duration of 5 seconds resulted in partial-thickness damage. Burn durations of 10 seconds and 20 seconds resulted in full-thickness damage, involving subjacent skeletal muscle. Conclusions This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model.
Gas Clumping in Self-Consistent Reionisation Models
Finlator, K; Özel, F; Davé, R
2012-01-01
We use a suite of cosmological hydrodynamic simulations including a self-consistent treatment of inhomogeneous reionisation to study the impact of galactic outflows and photoionisation heating on the volume-averaged recombination rate of the intergalactic medium (IGM). By incorporating an evolving ionising escape fraction and a treatment for self-shielding within Lyman limit systems, we have run the first simulations of "photon-starved" reionisation scenarios that simultaneously reproduce observations of the abundance of galaxies, the optical depth to electron scattering of cosmic microwave background photons τ, and the effective optical depth to Lyman-α absorption at z=5. We confirm that an ionising background reduces the clumping factor C by more than 50% by smoothing moderately-overdense (Δ = 1-100) regions. Meanwhile, outflows increase clumping only modestly. The clumping factor of ionised gas is much lower than the overall baryonic clumping factor because the most overdense gas is self-shield...
A self-consistent dynamo model for fully convective stars
Yadav, Rakesh Kumar; Christensen, Ulrich; Morin, Julien; Gastine, Thomas; Reiners, Ansgar; Poppenhaeger, Katja; Wolk, Scott J.
2016-01-01
The tachocline region inside the Sun, where the rigidly rotating radiative core meets the differentially rotating convection zone, is thought to be crucial for generating the Sun's magnetic field. Low-mass fully convective stars do not possess a tachocline and were originally expected to generate only weak small-scale magnetic fields. Observations, however, have painted a different picture of magnetism in rapidly-rotating fully convective stars: (1) Zeeman broadening measurements revealed average surface field of several kiloGauss (kG), which is similar to the typical field strength found in sunspots. (2) Zeeman-Doppler-Imaging (ZDI) technique discovered large-scale magnetic fields with a morphology often similar to the Earth's dipole-dominated field. (3) Comparison of Zeeman broadening and ZDI results showed that more than 80% of the magnetic flux resides at small scales. So far, theoretical and computer simulation efforts have not been able to reproduce these features simultaneously. Here we present a self-consistent global model of magnetic field generation in low-mass fully convective stars. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The ZDI technique applied to synthetic spectropolarimetric data based on our model recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass fully convective stars.
Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network
Dhaya, R.; Sadasivam, V.; Kanthavel, R.
2012-12-01
Wireless communication is the best way to convey information from source to destination with flexibility and mobility, and Bluetooth is a wireless technology suitable for short distances. A wireless sensor network (WSN), on the other hand, consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet wireless technique in sensor nodes creates limitations in network depth and placement. The introduction of the Scatternet solves these network restrictions, but with a lack of reliability in data transmission. When the depth of the network increases, routing becomes more difficult. No authors have so far focused on the reliability factors of Scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the Scatternet sensor network has lower packet loss than the existing system, even in a congestive environment, making it suitable for surveillance applications.
Pluralistic and stochastic gene regulation: examples, models and consistent theory.
Salas, Elisa N; Shu, Jiang; Cserhati, Matyas F; Weeks, Donald P; Ladunga, Istvan
2016-06-01
We present a theory of pluralistic and stochastic gene regulation. To bridge the gap between empirical studies and mathematical models, we integrate pre-existing observations with our meta-analyses of the ENCODE ChIP-Seq experiments. Earlier evidence includes fluctuations in levels, location, activity, and binding of transcription factors, variable DNA motifs, and bursts in gene expression. Stochastic regulation is also indicated by the frequently subdued effects of knockout mutants of regulators, their evolutionary losses/gains and massive rewiring of regulatory sites. We report widespread pluralistic regulation in ≈800 000 tightly co-expressed pairs of diverse human genes. Typically, half of ≈50 observed regulators bind to both genes reproducibly, twice as many as in independently expressed gene pairs. We also examine the largest set of co-expressed genes, which code for cytoplasmic ribosomal proteins. Numerous regulatory complexes are significantly enriched in ribosomal genes compared to highly expressed non-ribosomal genes. We could not find any DNA-associated, strict-sense master regulator. Despite major fluctuations in transcription factor binding, our machine learning model accurately predicted transcript levels using the binding sites of 20+ regulators. Our pluralistic and stochastic theory is consistent with partially random binding patterns, redundancy, stochastic regulator binding, burst-like expression, degeneracy of binding motifs and massive regulatory rewiring during evolution.
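The claim that transcript levels remain predictable from the binding sites of roughly 20 regulators despite stochastic binding can be illustrated with a toy regression on synthetic data. The real work used ENCODE ChIP-Seq features and a trained machine learning model; everything below is simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n_genes, n_regulators = 400, 20

# Binary binding matrix: regulator bound (1) or not (0) near each gene
X = (rng.random((n_genes, n_regulators)) < 0.4).astype(float)

# Synthetic "true" regulator effects plus stochastic (burst-like) noise
w_true = rng.normal(0.0, 1.0, n_regulators)
y = X @ w_true + rng.normal(0.0, 0.5, n_genes)   # toy log transcript level

# Ordinary least squares: transcript level from the binding of 20 regulators
design = np.c_[X, np.ones(n_genes)]              # add an intercept column
w_hat, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ w_hat
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Even with substantial per-gene noise, the aggregate signal carried by ~20 binary binding features recovers most of the variance, which is the qualitative point: pluralistic regulation is noisy at the level of individual events yet predictive in aggregate.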
A seismologically consistent compositional model of Earth's core.
Badro, James; Côté, Alexander S; Brodholt, John P
2014-05-27
Earth's core is less dense than iron, and therefore it must contain "light elements," such as S, Si, O, or C. We use ab initio molecular dynamics to calculate the density and bulk sound velocity in liquid metal alloys at the pressure and temperature conditions of Earth's outer core. We compare the velocity and density for any composition in the (Fe-Ni, C, O, Si, S) system to radial seismological models and find a range of compositional models that fit the seismological data. We find no oxygen-free composition that fits the seismological data, and therefore our results indicate that oxygen is always required in the outer core. An oxygen-rich core is a strong indication of high-pressure and high-temperature conditions of core differentiation in a deep magma ocean with an FeO concentration (oxygen fugacity) higher than that of the present-day mantle.
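The inverse problem has this shape: given the per-element effects on outer-core density and bulk sound velocity, find the compositions that match the seismological profile. The coefficients and tolerances below are invented for illustration (the paper computes the actual values with ab initio molecular dynamics), so this sketch demonstrates only the fitting logic, not the paper's oxygen conclusion.

```python
import numpy as np

# Hypothetical per-wt% effects of each light element on outer-core density
# and bulk sound velocity, relative to pure liquid Fe-Ni (illustrative only)
d_rho = {"O": -0.010, "Si": -0.007, "S": -0.006}  # fractional density change / wt%
d_v   = {"O": +0.004, "Si": +0.003, "S": +0.002}  # fractional velocity change / wt%

# Toy "seismological" targets: density deficit and velocity excess vs pure Fe
target_drho, target_dv = -0.08, +0.030

elems = list(d_rho)
A = np.array([[d_rho[e] for e in elems],
              [d_v[e]   for e in elems]])

# Scan O-Si-S compositions (0..12 wt% each) and keep those matching both
# targets within tolerance; the surviving set is the "range of compositional
# models that fit the seismological data"
fits = []
for x in np.ndindex(13, 13, 13):
    c = np.array(x, dtype=float)
    drho, dv = A @ c
    if abs(drho - target_drho) < 0.004 and abs(dv - target_dv) < 0.002:
        fits.append(dict(zip(elems, c)))
```

In the actual study the forward model (density and velocity of each alloy) comes from first-principles molecular dynamics at core conditions rather than a linear mixing table, but the accept/reject logic against radial seismological models is the same.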
Flood damage: a model for consistent, complete and multipurpose scenarios
Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia
2016-12-01
Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the base to address the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. This way post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
Consistency of modified MLE in EV model with replicated observations
Institute of Scientific and Technical Information of China (English)
ZHANG; Sanguo
2001-01-01
Institute of Scientific and Technical Information of China (English)
秦啸
2002-01-01
The advanced multimedia and high-speed networks make distributed interactive systems more promising and practical. These are distributed systems which allow many clients in different locations to concurrently explore and interact with each other. The systems can be built either on a local area network (LAN) or on a wide area network (WAN) such as the Internet. Operations issued at one site are immediately executed at the local site for a good response time, and are propagated to other sites. One of the challenging issues raised in these systems is consistency maintenance. This issue has been studied for discrete interactive media in many works. However, consistency maintenance schemes for discrete interactive media are not suitable for the continuous media domain. This paper illustrates a consistency problem in continuous interactive media with a simple example. The absolute consistency model, a strong requirement, is suitable for a LAN but results in poor responsiveness over a WAN. To make the model more practical for WANs, a new consistency model, named the delayed consistency model (DCM), is proposed. In this model, if an operation on an object x is issued at site i, every site is required to execute the operation at a specified time. The essential idea behind the proposed model is that other sites are required to update the state a certain amount of time later than site i does; thus, other sites will finally view the same state of x as that of site i. The DCM model is flexible, since it is unnecessary for all sites to have the identical delayed time. In case the system is based on a real-time network, another advantage of the model is that it provides the real-time network scheduler with important timing parameters.
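The DCM idea, execute locally at issue time t and at every other site at exactly t + D, can be sketched as an event-driven simulation. The times, sites and operation names below are invented; the point the sketch makes is that every site replays a given source's operations in issue order (so all sites converge to the source's view of each object), even though sites may interleave operations from different sources differently.

```python
import heapq

def simulate_dcm(ops, n_sites, delay):
    """Delayed Consistency Model sketch: an operation issued at (t, site)
    is applied locally at time t and at every other site at time t + delay.
    ops: list of (issue_time, site, op_id). Returns per-site execution logs,
    ordered by (execution_time, issue_time, op_id)."""
    logs = [[] for _ in range(n_sites)]
    events = []
    for t, site, op in ops:
        for s in range(n_sites):
            exec_t = t if s == site else t + delay
            heapq.heappush(events, (exec_t, t, op, s))
    while events:
        _, _, op, s = heapq.heappop(events)
        logs[s].append(op)
    return logs

# site 0 issues "a" then "c" on the same object; site 1 issues "b"
ops = [(0.0, 0, "a"), (0.3, 1, "b"), (0.5, 0, "c")]
logs = simulate_dcm(ops, n_sites=3, delay=2.0)
```

Because the delay is constant, remote sites apply "a" before "c" just as site 0 did, so the final state of the object matches site 0's view at every site; the interleaving with site 1's "b" differs, which is the responsiveness/consistency trade-off the model accepts.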
Guinot, Vincent
2017-09-01
The Integral Porosity and Dual Integral Porosity two-dimensional shallow water models have been proposed recently as efficient upscaled models for urban floods. Very little is known so far about their consistency and wave propagation properties. Simple numerical experiments show that both models are unusually sensitive to the computational grid. In the present paper, a two-dimensional consistency and characteristic analysis is carried out for these two models. The following results are obtained: (i) the models are almost insensitive to grid design when the porosity is isotropic, (ii) anisotropic porosity fields induce an artificial polarization of the mass/momentum fluxes along preferential directions when triangular meshes are used and (iii) extra first-order derivatives appear in the governing equations when regular, quadrangular cells are used. The hyperbolic system is thus mesh-dependent, and with it the wave propagation properties of the model solutions. Criteria are derived to make the solution less mesh-dependent, but it is not certain that these criteria can be satisfied at all computational points when real-world situations are dealt with.
A proposal for a consistent parametrization of earth models
Forbriger, Thomas; Friederich, Wolfgang
2005-08-01
The current way to parametrize earth models in terms of real-valued seismic velocities and quality factors is incomplete as it does not specify how complex-valued viscoelastic moduli or complex velocities should be computed from them. Various ways to do this can be found in the literature. Depending on the context they may specify (1) the real part of the viscoelastic modulus, (2) the absolute value of the viscoelastic modulus, (3) the real part of complex velocity or (4) the phase velocity of a propagating plane wave. We propose here to exclusively use the first alternative because it is the only one which allows both a flexible choice of elastic parameters and a mathematically rigorous evaluation of the complex-valued viscoelastic moduli. The other definitions only permit an evaluation of viscoelastic moduli if the tabulated quality factors are directly associated with the listed velocities. Ignoring the subtle differences between the three definitions leads to variations in viscoelastic moduli which are second order in 1/Q where Q is a quality factor. This may be the reason why the topic has never been discussed in the literature. In case of shallow seismic media, however, where quality factors may assume values of less than 10, the subtle differences become noticeable in synthetic seismograms. It is then essential to use the same definition in all algorithms to make results comparable. Matters become worse for anisotropic media, which are commonly specified in terms of real elastic moduli and quality factors for effective isotropic moduli. In that case, the complex-valued viscoelastic moduli cannot be determined uniquely. However, interpreting the tabulated constants as the real parts of the complex-valued viscoelastic moduli at least allows a consistent definition, which respects the relative magnitude of the anelastic and anisotropic parts compared to the elastic parts. It should be noted that all these considerations apply to complex-valued viscoelastic
Self-consistent modelling of resonant tunnelling structures
DEFF Research Database (Denmark)
Fiig, T.; Jauho, A.P.
1992-01-01
We report a comprehensive study of the effects of self-consistency on the I-V characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated for the applied voltages and carrier densities at the emitter-barrier interface. We include the two-dimensional accumulation-layer charge and the quantum-well charge in our self-consistent scheme. We discuss the evaluation of the current contribution originating from the two-dimensional accumulation-layer charges...
Understanding and Improving the Performance Consistency of Distributed Computing Systems
Yigitbasi, M.N.
2012-01-01
With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput sens
An Evaluation of Information Consistency in Grid Information Systems
Field, Laurence
2016-01-01
A Grid information system resolves queries that may need to consider all information sources (Grid services), which are widely distributed geographically, in order to enable efficient Grid functions that may utilise multiple cooperating services. Fundamentally this can be achieved by either moving the query to the data (query shipping) or moving the data to the query (data shipping). Existing Grid information system implementations have adopted one of the two approaches. This paper explores the two approaches in further detail by evaluating them to the best possible extent with respect to Grid information system benchmarking metrics. A Grid information system that follows the data shipping approach based on the replication of information that aims to improve the currency for highly-mutable information is presented. An implementation of this, based on an Enterprise Messaging System, is evaluated using the benchmarking method and the consequence of the results for the design of Grid information systems is discu...
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables consistent pricing of volatility derivatives and options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...
Consistency of System Identification by Global Total Least Squares
C. Heij (Christiaan); W. Scherrer
1996-01-01
Global total least squares (GTLS) is a method for the identification of linear systems where no distinction between input and output variables is required. This method has been developed within the deterministic behavioural approach to systems. In this paper we analyse statistical proper
Is the island universe model consistent with observations?
Piao, Yun-Song
2005-01-01
We study the island universe model, in which the universe is initially in a cosmological-constant sea; local quantum fluctuations violating the null energy condition then create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.
Consistent Prediction of Properties of Systems with Lipids
DEFF Research Database (Denmark)
Cunico, Larissa; Ceriani, Roberta; Sarup, Bent
(model development, property verification, property prediction, etc.). The database has verified data for fatty acids, acylglycerols, fatty esters, fatty alcohols, vegetable oils, biodiesel and minor compounds such as phospholipids, tocopherols, sterols, carotene and squalene, together with a user-friendly...
BUILD: A Tool for Maintaining Consistency in Modular Systems.
1985-11-01
set of definitions, BUILD can be extended to work with new programming environments and to perform new tasks. Keywords: High level languages; BUILD Computer program; C programming language; Systems engineering. (Author)
Consistent Evolution of Software Artifacts and Non-Functional Models
2014-11-14
Ruscio D., Pierantonio A., Arcelli D., Eramo R., Trubiani C., Tucci M. Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica ... Models (SRMs), and (ii) antipattern solutions as Target Role Models (TRMs). Hence, SRM-TRM pairs represent new instruments in the hands of developers to ... helps to identify the antipatterns that more heavily contribute to the violation of performance requirements [10], and (ii) another one aimed at ...
The consistency service of the ATLAS distributed data management system
Energy Technology Data Exchange (ETDEWEB)
Serfon, Cedric; Calfayan, Philippe; Duckeck, Guenter; Ebke, Johannes; Elmsheuser, Johannes; Legger, Federica; Mitterer, Christoph; Schaile, Dorothee; Walker, Rodney [LMU, Munich (Germany)
2011-07-01
With the continuously increasing volume of data (more than 50 PB) produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss (for instance due to hardware failure) is increasing. With the current size of the disks, a pool crash that cannot be recovered typically represents O(10000) files. It is therefore important to have an automated service to recover such file losses: this is the role of the Consistency Service. This service is used by various ATLAS tools (analysis tools, production tools, DQ2 Site Services, ...) and by site administrators that report corrupted or lost files. It automatically recovers lost files or corrects the reported errors and informs the users in case of irrecoverable file loss.
Towards a self-consistent dynamical nuclear model
Roca-Maza, X.; Niu, Y. F.; Colò, G.; Bortignon, P. F.
2017-04-01
Density functional theory (DFT) is a powerful and accurate tool, exploited in nuclear physics to investigate the ground state and some of the collective properties of nuclei along the whole nuclear chart. Models based on DFT are not, however, suitable for the description of single-particle dynamics in nuclei. Following the field-theoretical approach by A. Bohr and B. R. Mottelson to describe nuclear interactions between single-particle and vibrational degrees of freedom, we have taken important steps towards the building of a microscopic dynamic nuclear model. In connection with this, one important issue that needs to be better understood is the renormalization of the effective interaction in the particle-vibration approach. One possible way to renormalize the interaction is the so-called subtraction method. In this contribution, we implement the subtraction method in our model for the first time and study its consequences.
Consistency of global total least squares in stochastic system identification
C. Heij (Christiaan); W. Scherrer
1995-01-01
Global total least squares has been introduced as a method for the identification of deterministic system behaviours. We analyse this method within a stochastic framework, where the observed data are generated by a stationary stochastic process. Conditions are formulated so that the meth
The internal consistency of the North Sea carbonate system
Salt, S.; Thomas, H.; Bozec, Y.; Borges, A.V.; de Baar, H.J.W
2016-01-01
In 2002 (February) and 2005 (August), the full suite of carbonate system parameters (total alkalinity (A_{T}), dissolved inorganic carbon (DIC), pH, and partial pressure of CO_{2} (pCO_{2})) was measured on two re-occupations of the entire North Sea basin, with three paramete
Modelling plasticity of unsaturated soils in a thermodynamically consistent framework
Coussy, O
2010-01-01
Constitutive equations of unsaturated soils are often derived in a thermodynamically consistent framework through the use of a unique 'effective' interstitial pressure. The latter is naturally chosen as the space-averaged interstitial pressure. However, experimental observations have revealed that two stress state variables are needed to describe the stress-strain-strength behaviour of unsaturated soils. The thermodynamic analysis presented here shows that the most general approach to the behaviour of unsaturated soils actually requires three stress state variables: the suction, which is required to describe the retention properties of the soil, and two effective stresses, which are required to describe the soil deformation at constant water saturation. It is then shown that a simple assumption related to internal deformation leads to the need for a unique effective stress to formulate the stress-strain constitutive equation describing the soil deformation. An elastoplastic framework is then presented ...
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre M.; Parks, Michael L.
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of the Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrarily complex geometries.
Consistency Problem with Tracer Advection in the Atmospheric Model GAMIL
Institute of Scientific and Technical Information of China (English)
ZHANG Kai; WAN Hui; WANG Bin; ZHANG Meigen
2008-01-01
The radon transport test, which is a widely used test case for atmospheric transport models, is carried out to evaluate the tracer advection schemes in the Grid-Point Atmospheric Model of IAP-LASG (GAMIL). Two of the three available schemes in the model are found to be associated with significant biases in the polar regions and in the upper part of the atmosphere, which implies potentially large errors in the simulation of ozone-like tracers. Theoretical analyses show that inconsistency exists between the advection schemes and the discrete continuity equation in the dynamical core of GAMIL and consequently leads to spurious sources and sinks in the tracer transport equation. The impact of this type of inconsistency is demonstrated by idealized tests and identified as the cause of the aforementioned biases. Other potential effects of this inconsistency are also discussed. Results of this study provide some hints for choosing suitable advection schemes in the GAMIL model. At least for the polar-region-concentrated atmospheric components and the closely correlated chemical species, the Flux-Form Semi-Lagrangian advection scheme produces more reasonable simulations of the large-scale transport processes without significantly increasing the computational expense.
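The inconsistency mechanism this abstract describes can be illustrated with a toy one-dimensional upwind scheme (an illustrative sketch, not GAMIL's actual discretization; grid size, velocity and density profiles are invented for the demo): if a uniform mixing ratio is advected with the same mass fluxes that update the discrete continuity equation, it stays exactly uniform; any mismatch between the two discretizations shows up as a spurious source or sink.

```python
import numpy as np

n, dx, dt = 50, 1.0, 0.1
i = np.arange(n)
u = 0.5 + 0.3 * np.sin(2 * np.pi * i / n)    # divergent interface velocities
rho = 1.0 + 0.2 * np.cos(2 * np.pi * i / n)  # "air" density
q = np.ones(n)                               # uniform tracer mixing ratio

for _ in range(200):
    # first-order upwind mass flux at interface i+1/2 (periodic domain)
    F = u * np.where(u > 0, rho, np.roll(rho, -1))
    # tracer flux built from the SAME mass flux times the upwind mixing ratio
    Fq = F * np.where(u > 0, q, np.roll(q, -1))
    rho_new = rho - dt / dx * (F - np.roll(F, 1))         # discrete continuity
    rhoq_new = rho * q - dt / dx * (Fq - np.roll(Fq, 1))  # consistent tracer mass
    q, rho = rhoq_new / rho_new, rho_new

# the consistent scheme preserves a uniform mixing ratio to machine precision
assert np.allclose(q, 1.0)
```

Replacing `Fq` with a flux built from a different interpolation of `q`, or from separately recomputed mass fluxes, breaks this identity: the uniform field then develops artificial extrema, which is the spurious source/sink effect identified in the GAMIL study.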
Self-consistent Models of Strong Interaction with Chiral Symmetry
Nambu, Y.; Pascual, P.
1963-04-01
Some simple models of (renormalizable) meson-nucleon interaction are examined in which the nucleon mass is entirely due to interaction and the chiral (γ_5) symmetry is "broken" to become a hidden symmetry. It is found that such a scheme is possible provided that a vector meson is introduced as an elementary field. (auth)
A consistent multi-user framework for assessing system performance
Reed, C M
2010-01-01
Agreeing suitability for purpose and procurement decisions depend on assessment of real or simulated performances of sonar systems against user requirements for particular scenarios. There may be multiple pertinent aspects of performance (e.g. detection, track estimation, identification/classification and cost) and multiple users (e.g. within picture compilation, threat assessment, resource allocation and intercept control tasks), each with different requirements. Further, the estimates of performances and the user requirements are likely to be uncertain. In such circumstances, how can we reliably assess and compare the effectiveness of candidate systems? This paper presents a general yet simple mathematical framework that achieves all of this. First, the general requirements of a satisfactory framework are outlined. Then, starting from a definition of a measure of effectiveness (MOE) based on set theory, the formulae for assessing performance in various applications are obtained. These include combined MOEs,...
Reflection symmetries of Isolated Self-consistent Stellar Systems
An, J.; Evans, N. W.; Sanders, J. L.
2016-01-01
Isolated, steady-state galaxies correspond to equilibrium solutions of the Poisson--Vlasov system. We show that (i) all galaxies with a distribution function depending on energy alone must be spherically symmetric and (ii) all axisymmetric galaxies with a distribution function depending on energy and the angular momentum component parallel to the symmetry axis must also be reflection-symmetric about the plane $z=0$. The former result is Lichtenstein's Theorem, derived here by a method exploiting symmetries of solutions of elliptic partial differential equations, while the latter result is new. These results are subsumed into the Symmetry Theorem, which specifies how the symmetries of the distribution function in configuration or velocity space can control the planes of reflection symmetries of the ensuing stellar system.
A more consistent intraluminal rhesus monkey model of ischemic stroke
Institute of Scientific and Technical Information of China (English)
Bo Zhao; Fauzia Akbary; Shengli Li; Jing Lu; Feng Ling; Xunming Ji; Guowei Shang; Jian Chen; Xiaokun Geng; Xin Ye; Guoxun Xu; Ju Wang; Jiasheng Zheng; Hongjun Li
2014-01-01
Endovascular surgery is advantageous in experimentally induced ischemic stroke because it causes fewer cranial traumatic lesions than invasive surgery and can closely mimic the pathophysiology in stroke patients. However, the outcomes are highly variable, which limits the accuracy of evaluations of ischemic stroke studies. In this study, eight healthy adult rhesus monkeys were randomized into two groups with four monkeys in each group: middle cerebral artery occlusion at origin segment (M1) and middle cerebral artery occlusion at M2 segment. The blood flow in the middle cerebral artery was blocked completely for 2 hours using the endovascular microcoil placement technique (1 mm × 10 cm) (undetachable), to establish a model of cerebral ischemia. The microcoil was withdrawn and the middle cerebral artery blood flow was restored. A reversible middle cerebral artery occlusion model was identified by hematoxylin-eosin staining, digital subtraction angiography, magnetic resonance angiography, magnetic resonance imaging, and neurological evaluation. The results showed that the middle cerebral artery occlusion model was successfully established in eight adult healthy rhesus monkeys, and ischemic lesions were apparent in the brain tissue of rhesus monkeys at 24 hours after occlusion. The rhesus monkeys had symptoms of neurological deficits. Compared with the M1 occlusion group, the M2 occlusion group had lower infarction volume and higher neurological scores. These experimental findings indicate that reversible middle cerebral artery occlusion can be produced with the endovascular microcoil technique in rhesus monkeys. The M2 occluded model had less infarction and less neurological impairment, which offers the potential for application in the field of brain injury research.
Consistency problems for Heath-Jarrow-Morton interest rate models
Filipović, Damir
2001-01-01
The book is written for a reader with knowledge in mathematical finance (in particular interest rate theory) and elementary stochastic analysis, such as provided by Revuz and Yor (Continuous Martingales and Brownian Motion, Springer 1991). It gives a short introduction both to interest rate theory and to stochastic equations in infinite dimension. The main topic is the Heath-Jarrow-Morton (HJM) methodology for the modelling of interest rates. Experts in SDE in infinite dimension with interest in applications will find here the rigorous derivation of the popular "Musiela equation" (referred to in the book as HJMM equation). The convenient interpretation of the classical HJM set-up (with all the no-arbitrage considerations) within the semigroup framework of Da Prato and Zabczyk (Stochastic Equations in Infinite Dimensions) is provided. One of the principal objectives of the author is the characterization of finite-dimensional invariant manifolds, an issue that turns out to be vital for applications. Finally, ge...
Automated Verification of Memory Consistencies of DSM System on Unified Framework
Directory of Open Access Journals (Sweden)
Pankaj Kumar; Durgesh Kumar
2012-12-01
The consistency model of a DSM system specifies the ordering constraints on concurrent memory accesses by multiple processors, and hence has a fundamental impact on a DSM system's programming convenience and implementation efficiency. We have proposed a structural model for the automated verification of the memory consistencies of a DSM system. DSM allows processes to assume a globally shared virtual memory even though they execute on nodes that do not physically share memory. The DSM software provides the abstraction of a globally shared memory, in which each processor can access any data item without the programmer having to worry about where the data is or how to obtain its value. In contrast, in the native programming model on networks of workstations, message passing, the programmer must decide when a processor needs to communicate, with whom to communicate, and what data to send. On a DSM system the programmer can focus on algorithmic development rather than on managing partitioned data sets and communicating values. The programming interfaces to DSM systems may differ in a variety of respects. The memory model refers to how updates to distributed shared memory are reflected to the processes in the system. The most intuitive model of distributed shared memory is that a read should always return the last value written; unfortunately, the notion of the last value written is not well defined in a distributed system.
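As a concrete illustration of what such a verification has to decide, a history is sequentially consistent if some interleaving that respects each processor's program order makes every read return the most recently written value. The brute-force checker below is a sketch of that definition, not the structural model proposed in the paper; memory locations are assumed to start at 0.

```python
def sequentially_consistent(histories):
    """histories: one operation list per processor; an operation is
    ('w', var, value) or ('r', var, value). Exhaustively searches for a
    legal interleaving (exponential -- for small demo histories only)."""
    def search(pos, mem):
        if all(p == len(h) for p, h in zip(pos, histories)):
            return True  # every operation placed in the interleaving
        for i, h in enumerate(histories):
            if pos[i] == len(h):
                continue
            kind, var, val = h[pos[i]]
            if kind == 'r' and mem.get(var, 0) != val:
                continue  # this read cannot execute next
            nxt = pos[:i] + (pos[i] + 1,) + pos[i + 1:]
            if search(nxt, {**mem, var: val} if kind == 'w' else mem):
                return True
        return False
    return search((0,) * len(histories), {})

# Dekker-style litmus test: both reads returning 0 has no legal interleaving.
p1 = [('w', 'x', 1), ('r', 'y', 0)]
p2 = [('w', 'y', 1), ('r', 'x', 0)]
assert not sequentially_consistent([p1, p2])
assert sequentially_consistent([p1, [('w', 'y', 1), ('r', 'x', 1)]])
```

The search space grows exponentially with history length, which is why practical verification frameworks such as the one proposed here rely on structural models rather than enumeration.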
DEFF Research Database (Denmark)
Toldbod, Thomas; Israelsen, Poul
2014-01-01
Companies rely on multiple Management Control Systems (MCSs) to obtain their short- and long-term objectives. When applying a multifaceted perspective on Management Control Systems, the concept of internal consistency has been found to be important in obtaining goal congruency in the company. However, to date we know little about how managers maintain internal consistency when individual MCSs change and do not fit with the other MCSs. Based on a case study in a global Danish manufacturing company, this study finds that it is necessary to distinguish between the design characteristics and the use of MCSs when analyzing internal consistency in the MCS package, and how managers obtain internal consistency in the new MCS package when an MCS change occurs. This study focuses specifically on changes to administrative controls which are not internally consistent with the current cybernetic controls. As top...
A thermodynamically consistent phase-field model for two-phase flows with thermocapillary effects
Guo, Zhenlin
2014-01-01
In this paper, we develop a phase-field model for a binary incompressible fluid with thermocapillary effects, which allows different properties (densities, viscosities and heat conductivities) for each component while maintaining thermodynamic consistency. The governing equations of the model, including the Navier-Stokes equations, Cahn-Hilliard equations and energy balance equation, are derived together within a thermodynamic framework based on entropy generation, which guarantees the thermodynamic consistency. A sharp-interface limit analysis is carried out to show that the interfacial conditions of the classical sharp-interface models can be recovered from our phase-field model. Moreover, some numerical examples, including thermocapillary migration of a bubble and thermocapillary convection in a two-layer fluid system, are computed using a continuous finite element method. The results are compared to existing analytical solutions and theoretical predictions as validations for our mod...
A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems
Directory of Open Access Journals (Sweden)
R. Dimitri
2014-07-01
Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamic sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated, and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.
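For reference, the single-mode backbone that bilinear mixed-mode CZMs couple across opening modes looks as follows. This is a generic textbook-style triangular law, not one of the specific coupled models assessed in the paper, and the parameter names are illustrative.

```python
def bilinear_traction(delta, delta0, deltaf, sigma_max):
    """Triangular (bilinear) cohesive law: linear elastic rise to the peak
    traction sigma_max at separation delta0, then linear softening to zero
    traction at the final separation deltaf (complete decohesion)."""
    if delta <= 0.0 or delta >= deltaf:
        return 0.0
    if delta <= delta0:
        return sigma_max * delta / delta0
    return sigma_max * (deltaf - delta) / (deltaf - delta0)

# The work of separation is the area under the curve: G_c = sigma_max * deltaf / 2.
d0, df, smax = 0.01, 0.1, 5.0
n = 100000
area = sum(bilinear_traction((k + 0.5) * df / n, d0, df, smax) * df / n
           for k in range(n))
assert abs(area - 0.5 * smax * df) < 1e-4
```

In a coupled mixed-mode model, the normal and tangential versions of such a law interact, and the consistency question studied in the paper is precisely whether the dissipated energy along an arbitrary mixed-mode separation path stays non-negative and behaves as the second law of thermodynamics requires.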
Energy Technology Data Exchange (ETDEWEB)
Uslar, Mathias; Beenken, Petra; Beer, Sebastian [OFFIS, Oldenburg (Germany)
2009-07-01
The ongoing integration of distributed energy resources into the existing power grid has led to both growing communication costs and an increased need for interoperability between the involved actors. In this context, standardized and ontology-based data models help to reduce integration costs in heterogeneous system landscapes. Using ontology-based security profiles, such models can be extended with metadata containing information about security measures for energy-related data in need of protection. With this approach, we achieve both a unified data model and a unified security level. (orig.)
Consistency maintenance for constraint in role-based access control model
Institute of Scientific and Technical Information of China (English)
Han, Weili; Chen, Gang; Yin, Jianwei; Dong, Jinxiang
2002-01-01
Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in the RBAC model. Based on research on constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper briefly introduces the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.
Energy Technology Data Exchange (ETDEWEB)
Myrzakulov, R.; Mamyrbekova, G.K.; Nugmanova, G.N.; Yesmakhanova, K.R. [Eurasian International Center for Theoretical Physics and Department of General and Theoretical Physics, Eurasian National University, Astana 010008 (Kazakhstan); Lakshmanan, M., E-mail: lakshman@cnld.bdu.ac.in [Centre for Nonlinear Dynamics, School of Physics, Bharathidasan University, Tiruchirapalli 620 024 (India)
2014-06-13
Motion of curves and surfaces in R^3 leads to nonlinear evolution equations which are often integrable. They are also intimately connected to the dynamics of spin chains in the continuum limit and to integrable soliton systems through geometric and gauge-symmetric connections/equivalence. Here we point out that a more general situation, in which the curves evolve in the presence of additional self-consistent vector potentials, can lead to interesting generalized spin systems with self-consistent potentials or soliton equations with self-consistent potentials. We obtain the general form of the evolution equations of the underlying curves and report specific examples of generalized spin chains and soliton equations. These include the principal chiral model and various Myrzakulov spin equations in (1+1) dimensions and their geometrically equivalent generalized nonlinear Schrödinger (NLS) family of equations, including Hirota–Maxwell–Bloch equations, all in the presence of self-consistent potential fields. The associated gauge-equivalent Lax pairs are also presented to confirm their integrability. Highlights: • Geometry of continuum spin chain with self-consistent potentials explored. • Mapping on moving space curves in R^3 in the presence of potential fields carried out. • Equivalent generalized nonlinear Schrödinger (NLS) family of equations identified. • Integrability of identified nonlinear systems proved by deducing appropriate Lax pairs.
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
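The BIC-based selection discussed above can be sketched in a few lines. This is a hedged illustration, not the authors' procedure: the data-generating process, lag orders and sample size are invented, and the fixed-effects transformation is omitted; it only shows how BIC's penalty trades fit against model size when comparing autoregressive specifications.

```python
import math, random

random.seed(1)

# Simulate one AR(1) series (stand-in for a single panel unit): y_t = 0.5 y_{t-1} + e_t
T_len = 400
y = [0.0]
for _ in range(T_len):
    y.append(0.5 * y[-1] + random.gauss(0.0, 1.0))

def bic_ar1(y):
    """OLS fit of AR(1) without intercept; BIC = n log(sigma^2) + p log(n)."""
    n = len(y) - 1
    b = (sum(y[t - 1] * y[t] for t in range(1, len(y)))
         / sum(y[t - 1] ** 2 for t in range(1, len(y))))
    s2 = sum((y[t] - b * y[t - 1]) ** 2 for t in range(1, len(y))) / n
    return n * math.log(s2) + 1 * math.log(n)

def bic_ar2(y):
    """OLS fit of AR(2), solving the 2x2 normal equations by Cramer's rule."""
    n = len(y) - 2
    s11 = sum(y[t - 1] ** 2 for t in range(2, len(y)))
    s22 = sum(y[t - 2] ** 2 for t in range(2, len(y)))
    s12 = sum(y[t - 1] * y[t - 2] for t in range(2, len(y)))
    r1 = sum(y[t - 1] * y[t] for t in range(2, len(y)))
    r2 = sum(y[t - 2] * y[t] for t in range(2, len(y)))
    det = s11 * s22 - s12 ** 2
    b1 = (r1 * s22 - r2 * s12) / det
    b2 = (s11 * r2 - s12 * r1) / det
    s2 = sum((y[t] - b1 * y[t - 1] - b2 * y[t - 2]) ** 2
             for t in range(2, len(y))) / n
    return n * math.log(s2) + 2 * math.log(n)

# The stronger penalty on the AR(2) fit should usually favor the true AR(1).
print(bic_ar1(y), bic_ar2(y))
```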
Rudzinski, Joseph F; Bereau, Tristan
2016-01-01
Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically ...
Directory of Open Access Journals (Sweden)
J. G. Fyke
2013-04-01
A new technique for generating preindustrial (1850) ice sheet initial conditions for coupled ice-sheet/climate models is developed and demonstrated over the Greenland Ice Sheet using the Community Earth System Model (CESM). Paleoclimate end-member simulations and ice core data are used to derive continuous surface mass balance fields which are used to force a long transient ice sheet model simulation. The procedure accounts for the evolution of climate through the last glacial period and converges to a simulated preindustrial (1850) ice sheet that is geometrically and thermodynamically consistent with the simulated preindustrial CESM state, yet contains a transient memory of past climate that compares well to observations and independent model studies. This allows future coupled ice-sheet/climate projections of climate change that include ice sheets to integrate the effect of past climate conditions on the state of the Greenland Ice Sheet, while maintaining system-wide continuity between past and future climate simulations.
Self-consistent chaotic transport in a high-dimensional mean-field Hamiltonian map model
Martínez-del-Río, D; Olvera, A; Calleja, R
2016-01-01
Self-consistent chaotic transport is studied in a Hamiltonian mean-field model. The model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of $N$ coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of th...
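The map structure described above can be sketched directly. The following is a minimal illustration, not the authors' model: the coupling constant, ensemble size and initial conditions are arbitrary assumptions; it only shows how the amplitude and phase of the perturbation become dynamical variables through a mean field computed from the particles themselves.

```python
import cmath, math, random

random.seed(2)

N, kappa, steps = 200, 0.6, 100   # ensemble size and coupling: assumed values

# Each degree of freedom is a standard-like twist map (angle x, momentum p).
x = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
p = [random.uniform(-0.5, 0.5) for _ in range(N)]

amps = []
for _ in range(steps):
    # Self-consistency: the kick amplitude and phase come from the mean
    # field of the particles, not from a fixed standard-map parameter.
    M = sum(cmath.exp(1j * xi) for xi in x) / N
    a, phi = abs(M), cmath.phase(M)
    for i in range(N):
        p[i] += kappa * a * math.sin(x[i] + phi)
        x[i] = (x[i] + p[i]) % (2.0 * math.pi)
    amps.append(a)

print(amps[-1])   # mean-field amplitude after the run (always in [0, 1])
```

Trapping around the elliptic fixed point would show up as the mean-field amplitude settling to a nonzero quasi-periodic signal rather than decaying.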
2014-09-30
There is a substantial difference in inertial sea ice motion, which can be used as a proxy for ice-ocean Ekman transport. The peaks at -2 cycles/day represent the inclusion of realistic transient ice-ocean Ekman transport in the model (reproduced from Roberts et al.). Related work: A. DuVivier, M. Hughes, B. Nijssen, J. Cassano and M. Brunke (2014), Simulating transient ice-ocean Ekman transport in the Regional Arctic System
RELIABILITY ASSESSMENT OF ENTROPY METHOD FOR SYSTEM CONSISTED OF IDENTICAL EXPONENTIAL UNITS
Institute of Scientific and Technical Information of China (English)
Sun Youchao; Shi Jun
2004-01-01
The reliability assessment of the unit-system at two adjacent levels is the most important part of multi-level reliability synthesis of complex systems. Introducing information theory into system reliability assessment, and using the additive property of information quantity and the principle of equivalence of information quantity, an entropy method of data-information conversion is presented for systems consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived from the principle of information-quantity equivalence. General models for entropy-method synthesis assessment of approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. Applications of the entropy method are discussed by way of practical examples. Compared with traditional methods, the entropy method is found to be valid and practicable, and the assessment results are very satisfactory.
Quantal self-consistent cranking model for monopole excitations in even-even light nuclei
Gulshani, P
2014-01-01
In this article, we derive a quantal self-consistent time-reversal-invariant cranking model for isoscalar monopole excitation coupled to intrinsic motion in even-even light nuclei. The model uses a wavefunction that is a product of monopole and intrinsic wavefunctions and a constrained variational method to derive, from a many-particle Schrödinger equation, a pair of coupled self-consistent cranking-type Schrödinger equations for the monopole and intrinsic systems. The monopole and intrinsic wavefunctions are coupled to each other by the two cranking equations and their associated parameters and by two constraints imposed on the intrinsic system. For an isotropic Nilsson shell model and an effective residual two-body interaction, the two coupled cranking equations are solved in the Tamm-Dancoff approximation. The strength of the interaction is determined from a Hartree-Fock self-consistency argument. The excitation energy of the first excited state is determined and found to agree closely with those observed ...
A thermodynamically consistent model of the post-translational Kai circadian clock
Lubensky, David K.; ten Wolde, Pieter Rein
2017-01-01
The principal pacemaker of the circadian clock of the cyanobacterium S. elongatus is a protein phosphorylation cycle consisting of three proteins, KaiA, KaiB and KaiC. KaiC forms a homohexamer, with each monomer consisting of two domains, CI and CII. Both domains can bind and hydrolyze ATP, but only the CII domain can be phosphorylated, at two residues, in a well-defined sequence. While this system has been studied extensively, how the clock is driven thermodynamically has remained elusive. Inspired by recent experimental observations and building on ideas from previous mathematical models, we present a new, thermodynamically consistent, statistical-mechanical model of the clock. At its heart are two main ideas: i) ATP hydrolysis in the CI domain provides the thermodynamic driving force for the clock, switching KaiC between an active conformational state in which its phosphorylation level tends to rise and an inactive one in which it tends to fall; ii) phosphorylation of the CII domain provides the timer for the hydrolysis in the CI domain. The model also naturally explains how KaiA, by acting as a nucleotide exchange factor, can stimulate phosphorylation of KaiC, and how the differential affinity of KaiA for the different KaiC phosphoforms generates the characteristic temporal order of KaiC phosphorylation. As the phosphorylation level in the CII domain rises, the release of ADP from CI slows down, making the inactive conformational state of KaiC more stable. In the inactive state, KaiC binds KaiB, which not only stabilizes this state further, but also leads to the sequestration of KaiA, and hence to KaiC dephosphorylation. Using a dedicated kinetic Monte Carlo algorithm, which makes it possible to efficiently simulate this system consisting of more than a billion reactions, we show that the model can describe a wealth of experimental data. PMID:28296888
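The "dedicated kinetic Monte Carlo algorithm" mentioned above belongs to the Gillespie family. As a hedged, self-contained sketch (the two-state switch and its rates are invented for illustration and are far simpler than the Kai system), the core of such a sampler is an exponential waiting time drawn from the current propensity:

```python
import random

random.seed(4)

def gillespie_switch(k_on, k_off, t_end):
    """Kinetic Monte Carlo (Gillespie) run for a two-state switch
    inactive <-> active; returns the fraction of time in each state."""
    t, state = 0.0, 0            # 0 = inactive, 1 = active
    occupancy = [0.0, 0.0]       # time accumulated in each state
    while t < t_end:
        rate = k_on if state == 0 else k_off   # propensity of leaving state
        dt = random.expovariate(rate)          # exponential waiting time
        occupancy[state] += min(dt, t_end - t) # clip the final interval
        t += dt
        state = 1 - state                      # fire the single reaction
    return [x / t_end for x in occupancy]

occ = gillespie_switch(k_on=2.0, k_off=1.0, t_end=2000.0)
print(occ)   # should approach [k_off, k_on] / (k_on + k_off) = [1/3, 2/3]
```

The real model replaces the single reaction channel with more than a billion, which is why a dedicated, efficient implementation is required.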
A new k-epsilon model consistent with Monin-Obukhov similarity theory
DEFF Research Database (Denmark)
van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.
2016-01-01
A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
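For reference, the MOST-consistent neutral inlet profiles that such k-ε models aim to preserve can be checked numerically. The sketch below uses standard model constants and assumed surface parameters (u* = 0.5 m/s, z0 = 0.1 m, both hypothetical); it verifies that with νt = Cμ k²/ε the modeled shear stress equals u*² at every height, which is the consistency property at stake.

```python
import math

KAPPA, C_MU = 0.4, 0.09       # von Karman constant, standard k-epsilon constant
u_star, z0 = 0.5, 0.1         # friction velocity (m/s), roughness length (m): assumed

def most_neutral(z):
    """Neutral-stratification MOST profiles used as k-epsilon inlet conditions."""
    u = u_star / KAPPA * math.log(z / z0)      # log-law wind speed
    k = u_star ** 2 / math.sqrt(C_MU)          # turbulent kinetic energy
    eps = u_star ** 3 / (KAPPA * z)            # dissipation rate
    return u, k, eps

# Consistency check: with nu_t = C_mu k^2 / eps, the modeled shear stress
# nu_t * du/dz must recover u_star^2 independent of height.
for z in (1.0, 10.0, 50.0):
    u, k, eps = most_neutral(z)
    nu_t = C_MU * k ** 2 / eps
    dudz = u_star / (KAPPA * z)
    print(round(nu_t * dudz, 6))   # → 0.25 (= u_star**2) at each height
```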
Astakhov, Vadim
2009-01-01
Interest in the simulation of large-scale metabolic networks, species development, and the genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as the transition of system functionality due to modifications in the system architecture, system environment, and system components. The dynamic core model is developed; the term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze the migration of specific functions in biosystems that undergo structural transitions induced by the environment; the term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poietic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into the processes of degeneration and recovery which take place in large-scale networks. We suggest that therapies able to effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. We also mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution: any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment.
Luzzati, Vittorio; Tardieu, Annette; Gulik-Krzywicki, Tadeusz
1981-01-01
The observed intensities of the reflections from the body-centered cubic phase of lipid systems are shown to be incompatible with a recently reported model consisting of straight, indefinitely long rods.
A simplified stock-flow consistent post-Keynesian growth model
dos Santos, Claudio H.; Zezza, Gennaro
2005-01-01
Despite being arguably the most rigorous form of structuralist/post-Keynesian macroeconomics, stock-flow consistent models are quite often complex and difficult to deal with. This paper presents a model that, despite retaining the methodological advantages of the stock-flow consistent method, is intuitive enough to be taught at an undergraduate level. Moreover, the model can eas...
McClelland, James L
2013-11-01
The complementary learning systems theory of the roles of hippocampus and neocortex (McClelland, McNaughton, & O'Reilly, 1995) holds that the rapid integration of arbitrary new information into neocortical structures is avoided to prevent catastrophic interference with structured knowledge representations stored in synaptic connections among neocortical neurons. Recent studies (Tse et al., 2007, 2011) showed that neocortical circuits can rapidly acquire new associations that are consistent with prior knowledge. The findings challenge the complementary learning systems theory as previously presented. However, new simulations extending those reported in McClelland et al. (1995) show that new information that is consistent with knowledge previously acquired by a putatively cortexlike artificial neural network can be learned rapidly and without interfering with existing knowledge; it is when inconsistent new knowledge is acquired quickly that catastrophic interference ensues. Several important features of the findings of Tse et al. (2007, 2011) are captured in these simulations, indicating that the neural network model used in McClelland et al. has characteristics in common with neocortical learning mechanisms. An additional simulation generalizes beyond the network model previously used, showing how the rate of change of cortical connections can depend on prior knowledge in an arguably more biologically plausible network architecture. In sum, the findings of Tse et al. are fully consistent with the idea that hippocampus and neocortex are complementary learning systems. Taken together, these findings and the simulations reported here advance our knowledge by bringing out the role of consistency of new experience with existing knowledge and demonstrating that the rate of change of connections in real and artificial neural networks can be strongly prior-knowledge dependent.
Directory of Open Access Journals (Sweden)
G.Shanmugarathinam
2013-01-01
Caching is one of the important techniques in mobile computing. In caching, frequently accessed data is stored at mobile clients to avoid network traffic and improve performance in mobile computing. In a mobile computing environment, as the number of mobile users increases and they request updates from the server, the server is often busy and clients have to wait a long time. Cache consistency maintenance is difficult for both the client and the server. This paper proposes a technique using a queuing system consisting of one or more servers that provide services to arriving mobile hosts using agent-based technology. The service mechanism of the queuing system is specified by the number of servers, each server having its own queue; agent-based technology maintains cache consistency between the client and the server. This model saves wireless bandwidth, reduces network traffic and reduces the workload on the server. The simulation results were compared with the previous technique, and the proposed model shows significantly better performance than the earlier approach.
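The multi-server queuing mechanism described above can be sketched with a short event-driven simulation. This is an illustrative M/M/c model only: the agent logic and cache-invalidation messages are omitted, and the arrival and service rates are hypothetical.

```python
import heapq, random

random.seed(3)

def simulate_mmc(lam, mu, c, n_customers):
    """Event-driven sketch of an M/M/c queue: Poisson arrivals at rate lam,
    c servers each with exponential service at rate mu. Returns the mean
    time a customer (mobile host) waits before service starts."""
    free_at = [0.0] * c          # times at which each server next becomes free
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        t += random.expovariate(lam)       # next arrival
        start = max(t, free_at[0])         # earliest-free server takes the job
        total_wait += start - t
        heapq.heapreplace(free_at, start + random.expovariate(mu))
    return total_wait / n_customers

# Hypothetical load: 4 requests/s arriving at 6 servers handling 1 req/s each.
w = simulate_mmc(lam=4.0, mu=1.0, c=6, n_customers=20000)
print(w)
```

Varying c shows the trade-off the paper exploits: adding queues/servers sharply reduces client waiting once total service capacity exceeds the arrival rate.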
Pair Fluctuations in Ultra-small Fermi Systems within Self-Consistent RPA at Finite Temperature
Storozhenko, A; Dukelsky, J; Röpke, G; Vdovin, A I
2003-01-01
A self-consistent version of the Thermal Random Phase Approximation (TSCRPA) is developed within the Matsubara Green's Function (GF) formalism. The TSCRPA is applied to the many-level pairing model. The normal phase of the system is considered. The TSCRPA results are compared with the exact ones calculated for the Grand Canonical Ensemble. Advantages of the TSCRPA over the Thermal Mean Field Approximation (TMFA) and the standard Thermal Random Phase Approximation (TRPA) are demonstrated. Results for correlation functions, excitation energies, single-particle level densities, etc., as a function of temperature are presented.
Self-consistent Spectral Functions in the $O(N)$ Model from the FRG
Strodthoff, Nils
2016-01-01
We present the first self-consistent direct calculation of a spectral function in the framework of the Functional Renormalization Group. The study is carried out in the relativistic $O(N)$ model, where the full momentum dependence of the propagators in the complex plane as well as momentum-dependent vertices are considered. The analysis is supplemented by a comparative study of the Euclidean momentum dependence and of the complex momentum dependence on the level of spectral functions. This work lays the groundwork for the computation of full spectral functions in more complex systems.
Rudzinski, Joseph F.; Kremer, Kurt; Bereau, Tristan
2016-02-01
Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically improves the time scale separation of the slowest processes. Additionally, constraining the forward and backward rates between metastable states leads to slight improvement of their relative stabilities and, thus, refined equilibrium properties of the resulting model. Finally, we find that difficulties in simultaneously describing both the simulated data and the provided constraints can help identify specific limitations of the underlying simulation approach.
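The reweighting idea above (adjusting microscopic transitions to satisfy coarse kinetic constraints) can be illustrated on a toy Markov state model. The transition matrix, the chosen transition and the scaling factor below are invented for illustration; the authors' actual procedure operates on simulated trajectory data within the full Markov state modeling framework.

```python
# Toy 3-state Markov state model (row-stochastic transition matrix, invented)
T = [[0.90, 0.08, 0.02],
     [0.10, 0.85, 0.05],
     [0.02, 0.08, 0.90]]

def stationary(T, iters=500):
    """Approximate the stationary distribution of T by power iteration."""
    pi = [1.0 / len(T)] * len(T)
    for _ in range(iters):
        pi = [sum(pi[i] * T[i][j] for i in range(len(T)))
              for j in range(len(T))]
    return pi

def reweight(T, i, j, factor):
    """Scale the i->j transition probability by `factor` (standing in for a
    coarse kinetic constraint), renormalizing row i to stay stochastic."""
    W = [row[:] for row in T]
    W[i][j] *= factor
    s = sum(W[i])
    W[i] = [x / s for x in W[i]]
    return W

W = reweight(T, 0, 2, 3.0)     # e.g. speed up a too-slow 0 -> 2 transition
print(stationary(T))
print(stationary(W))           # refined equilibrium implied by the constraint
```

The last two lines mirror the abstract's observation: constraining rates between metastable states also shifts their relative stabilities, i.e. the equilibrium properties of the model.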
Fishkind, Donniell E; Tang, Minh; Vogelstein, Joshua T; Priebe, Carey E
2012-01-01
A stochastic block model consists of a random partition of n vertices into blocks 1,2,...,K for which, conditioned on the partition, every pair of vertices has probability of adjacency entirely determined by the block membership of the two vertices. (The model parameters are K, the distribution of the random partition, and a communication probability matrix M in [0,1]^(K x K) listing the adjacency probabilities associated with all pairs of blocks.) Suppose a realization of the n x n vertex adjacency matrix is observed, but the underlying partition of the vertices into blocks is not observed; the main inferential task is to correctly partition the vertices into the blocks with only a negligible number of vertices misassigned. For this inferential task, Rohe et al. (2011) prove the consistency of spectral partitioning applied to the normalized Laplacian, and Sussman et al. (2011) extend this to prove consistency of spectral partitioning directly on the adjacency matrix; both procedures assume that K and rankM a...
Microwave air plasmas in capillaries at low pressure I. Self-consistent modeling
Coche, P.; Guerra, V.; Alves, L. L.
2016-06-01
This work presents the self-consistent modeling of micro-plasmas generated in dry air using microwaves (2.45 GHz excitation frequency) within capillaries. The model couples the system of rate balance equations for the most relevant neutral and charged species of the plasma to the homogeneous electron Boltzmann equation. The maintenance electric field is self-consistently calculated adopting a transport theory for low to intermediate pressures, taking into account the presence of O- ions in addition to several positive ions, the dominant species being O2+, NO+ and O+. The low-pressure, small-radius conditions considered yield very intense reduced electric fields (~600-1500 Td), coherent with species losses controlled by transport and wall recombination, and kinetic mechanisms strongly dependent on electron-impact collisions. The charged-particle transport losses are strongly influenced by the presence of the negative ion, despite its low density (~10% of the electron density). For electron densities in the range (1-4) x 10^12 cm^-3, the system exhibits high dissociation degrees for O2 (~20-70%, depending on the working conditions, in contrast with the ~0.1% dissociation obtained for N2), a high concentration of O2(a) (~10^14 cm^-3) and NO(X) (5 x 10^14 cm^-3), and low ozone production (<10^-3%).
Khajepor, Sorush; Chen, Baixin
2016-01-01
A method is developed to analytically and consistently implement cubic equations of state into the recently proposed multipseudopotential interaction (MPI) scheme in the class of two-phase lattice Boltzmann (LB) models [S. Khajepor, J. Wen, and B. Chen, Phys. Rev. E 91, 023301 (2015)]10.1103/PhysRevE.91.023301. An MPI forcing term is applied to reduce the constraints on the mathematical shape of the thermodynamically consistent pseudopotentials; this allows the parameters of the MPI forces to be determined analytically without the need of curve fitting or trial and error methods. Attraction and repulsion parts of equations of state (EOSs), representing underlying molecular interactions, are modeled by individual pseudopotentials. Four EOSs, van der Waals, Carnahan-Starling, Peng-Robinson, and Soave-Redlich-Kwong, are investigated and the results show that the developed MPI-LB system can satisfactorily recover the thermodynamic states of interest. The phase interface is predicted analytically and controlled via EOS parameters independently and its effect on the vapor-liquid equilibrium system is studied. The scheme is highly stable to very high density ratios and the accuracy of the results can be enhanced by increasing the interface resolution. The MPI drop is evaluated with regard to surface tension, spurious velocities, isotropy, dynamic behavior, and the stability dependence on the relaxation time.
Tides, Rotation Or Anisotropy? Self-consistent Nonspherical Models For Globular Clusters
Varri, Anna L.; Bertin, G.
2011-01-01
Spherical models of quasi-relaxed stellar systems provide a successful zeroth-order description of globular clusters. Yet the great progress made in recent years in the acquisition of detailed information on the structure of these stellar systems calls for a renewed effort on the side of modeling. In particular, more general analytical models would allow us to address the long-standing issue of the physical origin of the deviations from spherical symmetry of globular clusters, which can now be properly measured. In fact, it remains to be established which of external tides, internal rotation, and pressure anisotropy causes the observed flattening. In this paper we focus on the first two physical ingredients. We start by briefly describing a recently studied family of triaxial models that incorporate in a self-consistent way the tidal effects of the host galaxy, as a collisionless analogue of the Roche problem (Varri & Bertin, ApJ, 2009). We then present two new families of axisymmetric models in which the deviations from spherical symmetry are induced by the presence of internal rotation. The first is an extension of the well-known family of King models to the case of axisymmetric equilibria flattened by solid-body rotation. The second family is characterized by differential rotation, designed to be rigid in the center and to vanish in the outer parts, where the imposed truncation in phase space becomes effective. For possible application to globular clusters, models of interest should be those, in both families, characterized by low values of the rotation strength parameter and quasi-spherical shape. For general interest in stellar dynamics, we show that, for high values of that parameter, the differentially rotating models may exhibit unexpected morphologies, even with a toroidal core.
DEFF Research Database (Denmark)
Kock, Anders Bredahl
2015-01-01
the tuning parameter by Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form c/T if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned...
Macro-particle FEL model with self-consistent spontaneous radiation
Litvinenko, Vladimir N
2015-01-01
Spontaneous radiation plays an important role in SASE FELs and in storage ring FELs operating in giant-pulse mode. It defines the correlation function of the FEL radiation as well as many of its spectral features. Simulations of these systems using randomly distributed macro-particles with charge much higher than that of a single electron create the problem of anomalously strong spontaneous radiation, limiting the capabilities of many FEL codes. In this paper we present a self-consistent macro-particle model which provides statistically exact simulation of multi-mode, multi-harmonic and multi-frequency short-wavelength 3-D FELs, including high-power and saturation effects. The use of macro-particle clones allows both spontaneous and induced radiation to be treated in the same fashion. Simulations using this model do not require a seed and provide the complete temporal and spatial structure of the FEL optical field.
Towards a consistent model of the Galaxy; 2, Derivation of the model
Méra, D; Schäffer, R
1998-01-01
We use the calculations derived in a previous paper (Méra, Chabrier and Schaeffer, 1997), based on observational constraints arising from star counts, microlensing experiments and kinematic properties, to determine the amount of dark matter under the form of stellar and sub-stellar objects in the different parts of the Galaxy. This yields the derivation of different mass-models for the Galaxy. In the light of all the afore-mentioned constraints, we discuss two models that correspond to different conclusions about the nature and the location of the Galactic dark matter. In the first model there is a small amount of dark matter in the disk, and a large fraction of the dark matter in the halo is still undetected and likely to be non-baryonic. The second, less conventional model is consistent with entirely, or at least predominantly baryonic dark matter, under the form of brown dwarfs in the disk and white dwarfs in the dark halo. We derive observational predictions for these two models which should be verifiabl...
Scale-consistent two-way coupling of land-surface and atmospheric models
Schomburg, A.; Venema, V.; Ament, F.; Simmer, C.
2009-04-01
Processes at the land surface and in the atmosphere act on different spatial scales. While in the atmosphere small-scale heterogeneity is smoothed out quickly by turbulent mixing, this is not the case at the land surface, where small-scale variability of orography, land cover, soil texture, soil moisture etc. varies only slowly in time. For the modelling of the fluxes between the land surface and the atmosphere it is consequently more scale-consistent to model the surface processes at a higher spatial resolution than the atmospheric processes. The mosaic approach is one way to deal with this problem. Using this technique, the Soil Vegetation Atmosphere Transfer (SVAT) scheme is solved at a higher resolution than the atmosphere, which is possible since a SVAT module generally demands considerably less computation time than the atmospheric part. The upscaling of the turbulent fluxes of sensible and latent heat at the interface to the atmosphere is realized by averaging; due to the nonlinearities involved, this is a more sensible approach than averaging the soil properties and computing the fluxes in a second step. The atmospheric quantities are usually assumed to be homogeneous for all soil sub-pixels pertaining to one coarse atmospheric grid box. In this work, the aim is to develop a downscaling approach in which the atmospheric quantities at the lowest model layer are disaggregated before they enter the SVAT module at the higher mosaic resolution. The overall aim is a better simulation of the heat fluxes, which play an important role for the energy and moisture budgets at the surface. The disaggregation rules for the atmospheric variables will depend on high-resolution surface properties and the current atmospheric conditions. To reduce biases due to nonlinearities we will add small-scale variability according to such rules, as well as noise for the variability we cannot explain. The model used in this work is the COSMO-model, the weather forecast model (and regional
Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code
Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.
2017-02-01
Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport and multimode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, and a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than their original designs, as fractions of their original designs, i.e. 0.49 (China), 0.66 (Europe), and 0.58 (India). Furthermore, in relation to sensitivity, it is found that increasing values of the auxiliary heating power and the electron line average density from their design values yield an enhancement of fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.
Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül
2017-05-01
A summary of the main concepts of global ionospheric maps [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the International GNSS Service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter and difference of slant TEC based on independent Global Positioning System data, dSTEC-GPS) is performed. It is based on 26 GPS receivers distributed worldwide, mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM 'UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessments are found. And, as expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, mainly evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new contributing analysis centres and, in general, for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
DYNAMICAL CONSISTENCE IN 3-DIMENSIONAL TYPE-K COMPETITIVE LOTKA-VOLTERRA SYSTEM
Institute of Scientific and Technical Information of China (English)
Anonymous
2012-01-01
A 3-dimensional type-K competitive Lotka-Volterra system is considered in this paper. Two discretization schemes are applied to the system with a positive interior fixed point, and two corresponding discrete systems are obtained. By analyzing the local dynamics of the discrete systems near the interior fixed point, it is shown that they are not dynamically consistent with the continuous counterpart system.
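The notion of dynamical consistency can be illustrated with a short numerical sketch: a continuous system linearized at an interior fixed point is locally stable when all eigenvalues of its Jacobian A have negative real parts, while a forward-Euler discretization (one common scheme of the kind compared here; the matrix below is hypothetical, not taken from the paper) is stable only when all eigenvalues of I + hA lie inside the unit circle, so a large step size can destroy the agreement.

```python
import numpy as np

# Hypothetical Jacobian of a competitive system at its interior fixed point
# (values are illustrative only, not the paper's system).
A = np.array([[-1.0,  0.5,  0.2],
              [ 0.3, -1.2,  0.4],
              [ 0.1,  0.2, -0.8]])

# Continuous-time local stability: Re(lambda) < 0 for all eigenvalues of A.
continuous_stable = bool(np.all(np.linalg.eigvals(A).real < 0))

def euler_stable(h):
    """Local stability of the forward-Euler map x_{n+1} = x_n + h*A*(x_n - x*)."""
    return bool(np.all(np.abs(np.linalg.eigvals(np.eye(3) + h * A)) < 1))

print(continuous_stable)   # the continuous system is locally stable
print(euler_stable(0.1))   # small step: the discrete dynamics agree
print(euler_stable(3.0))   # large step: stability is lost (inconsistency)
```

Running the sketch shows both consistency for a small step and its loss for a large one, which is the qualitative phenomenon the paper analyzes for its two schemes.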
SELF-CONSISTENT FIELD MODEL OF BRUSHES FORMED BY ROOT-TETHERED DENDRONS
Directory of Open Access Journals (Sweden)
E. B. Zhulina
2015-05-01
We present an analytical self-consistent field (scf) theory that describes planar brushes formed by regularly branched root-tethered dendrons of the second and third generations. The developed approach enables calculation of the scf molecular potential acting on monomers of the tethered chains. In the linear elasticity regime for stretched polymers, the molecular potential has a parabolic shape with a parameter k depending on the architectural parameters of the tethered macromolecules: the polymerization degrees of spacers, branching functionalities, and number of generations. For dendrons of the second generation, we formulate a general equation for the parameter k and analyze how variations in the architectural parameters of these dendrons affect the molecular potential. For dendrons of the third generation, an analytical expression for the parameter k is available only for symmetric macromolecules with equal lengths of all spacers and equal branching functionalities in all generations. We analyze how the thickness of a dendron brush in a good solvent is affected by variations in the chain architecture. Results of the developed scf theory are compared with predictions of the boxlike scaling model. We demonstrate that in the limit of high branching functionalities, the results of both approaches become consistent if the value of the exponent b in the boxlike model is set to unity. In conclusion, we briefly discuss the systems to which the developed scf theory is applicable. These are: planar and concave spherical and cylindrical brushes under various solvent conditions (including solvent-free melted brushes) and brush-like layers of ionic (polyelectrolyte) dendrons.
Motte, Fabrice; Bugler-Lamb, Samuel L.; Falcoz, Quentin
2015-07-01
The attraction of solar energy is greatly enhanced by the possibility of using it during times of reduced or non-existent solar flux, such as weather-induced intermittences or the darkness of the night. Optimizing thermal storage for use in solar energy plants is therefore crucial for the success of this sustainable energy source. Here we present a study of a structured bed filler dedicated to thermocline-type thermal storage, believed to offer financial and thermal benefits over other systems currently in use, such as packed-bed thermocline tanks. Several criteria, such as thermocline thickness and thermocline centering, are defined to facilitate the assessment of tank efficiency, complementing the standard concepts of power output. A numerical model is developed that reduces the modeling of such a tank to two dimensions. The structure within the tank is designed to be built using simple bricks harboring rectangular channels through which the solar heat transfer and storage fluid flows. The model is scrutinized and tested for physical robustness, and the results are presented in this paper. The consistency of the model is achieved within particular ranges of each physical variable.
A Symplectic Multi-Particle Tracking Model for Self-Consistent Space-Charge Simulation
Qiang, Ji
2016-01-01
Symplectic tracking is important in accelerator beam dynamics simulation. So far, to the best of our knowledge, there is no self-consistent symplectic space-charge tracking model available in the accelerator community. In this paper, we present a two-dimensional and a three-dimensional symplectic multi-particle spectral model for space-charge tracking simulation. This model includes both the effect from external fields and the effect of self-consistent space-charge fields using a split-operator method. Such a model preserves the phase space structure and shows much less numerical emittance growth than the particle-in-cell model in the illustrative examples.
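The split-operator idea can be sketched in one dimension (a toy external force only; the paper's self-consistent 2D/3D spectral space-charge solver is well beyond this snippet): each sub-map is exactly symplectic, so the composed step preserves phase-space structure, which shows up as a bounded energy error over long tracking instead of secular drift.

```python
import numpy as np

def drift(x, p, h):
    """Force-free streaming: an exactly symplectic map."""
    return x + h * p, p

def kick(x, p, h, force):
    """Momentum update from a force field: also exactly symplectic."""
    return x, p + h * force(x)

def step(x, p, h, force):
    """Second-order symmetric splitting (kick-drift-kick, i.e. leapfrog)."""
    x, p = kick(x, p, h / 2, force)
    x, p = drift(x, p, h)
    x, p = kick(x, p, h / 2, force)
    return x, p

force = lambda x: -x          # hypothetical linear focusing force
x, p, h = 1.0, 0.0, 0.1
E0 = 0.5 * (x**2 + p**2)      # initial oscillator energy
for _ in range(10000):
    x, p = step(x, p, h, force)
E1 = 0.5 * (x**2 + p**2)
print(abs(E1 - E0))           # bounded energy error, no secular drift
```

A non-symplectic integrator (e.g. forward Euler) would instead show the energy, and hence the emittance in a beam context, growing steadily with the number of steps.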
A CVAR scenario for a standard monetary model using theory-consistent expectations
DEFF Research Database (Denmark)
Juselius, Katarina
2017-01-01
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...
CONSISTENCY OF THE PERFORMANCE MANAGEMENT SYSTEM AND ITS QUANTIFICATION USING THE Z-MESOT FRAMEWORK
Directory of Open Access Journals (Sweden)
Jan Zavadsky
2016-12-01
The main purposes of this paper are (1) to present a theoretical approach for testing the consistency of a performance management system using the Z-MESOT framework and (2) to present the results of an empirical analysis in selected manufacturing companies. The Z-MESOT framework is a managerial approach based on the definition of attributes for measuring and assessing the performance of a company. It is a quantitative approach that can establish the degree of consistency of a performance management system, the quantification coming from an arithmetical calculation in the Z-MESOT matrix. The consistency of the performance management system does not by itself assure final performance. Consistency is part of a systemic approach to management even if we do not call it quality management. A consistent definition of the performance management system can help enterprises to be flexible and to respond quickly to any changes in the internal or external business environment. A consistent definition is represented by a set of 21 performance indicator attributes, including the requirement for measuring and evaluating strategic and operational goals. In the paper, we also describe the relationships between selected requirements of the ISO 9001:2015 standard and the Z-MESOT framework.
Development of a Kohn-Sham like potential in the Self-Consistent Atomic Deformation Model
Mehl, M. J.; Boyer, L. L.; Stokes, H. T.
1996-01-01
This is a brief description of how to derive the local "atomic" potentials from the Self-Consistent Atomic Deformation (SCAD) model density function. Particular attention is paid to the spherically averaged case.
Bayesian nonparametric estimation and consistency of mixed multinomial logit choice models
De Blasi, Pierpaolo; Lau, John W; 10.3150/09-BEJ233
2011-01-01
This paper develops nonparametric estimation for discrete choice models based on the mixed multinomial logit (MMNL) model. It has been shown that MMNL models encompass all discrete choice models derived under the assumption of random utility maximization, subject to the identification of an unknown distribution $G$. Noting the mixture-model description of the MMNL, we employ a Bayesian nonparametric approach, using nonparametric priors on the unknown mixing distribution $G$, to estimate choice probabilities. We provide important theoretical support for the use of the proposed methodology by investigating consistency of the posterior distribution for a general nonparametric prior on the mixing distribution. Consistency is defined according to an $L_1$-type distance on the space of choice probabilities and is achieved by extending to a regression model framework a recent approach to strong consistency based on the summability of square roots of prior probabilities. Moving to estimation, slightly different te...
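The mixture structure of the MMNL is easy to state in code: choice probabilities are multinomial-logit probabilities averaged over the mixing distribution G. The sketch below uses a normal G and made-up attribute vectors purely for illustration; the paper's contribution is to place a nonparametric prior on G rather than fix it parametrically as done here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical attribute vectors for 3 alternatives (2 attributes each).
X = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.0, 1.5]])

def mmnl_probs(X, n_draws=20000):
    """Monte Carlo MMNL choice probabilities: average logit probs over G."""
    # Illustrative mixing distribution G: normal taste coefficients.
    beta = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(n_draws, 2))
    u = X @ beta.T                   # utilities, shape (n_alts, n_draws)
    p = np.exp(u - u.max(axis=0))    # stable softmax per draw
    p /= p.sum(axis=0)
    return p.mean(axis=1)            # average over draws from G

probs = mmnl_probs(X)
print(probs)                         # sums to 1 across alternatives
```

Replacing the normal draws with draws from a posterior over G would turn this into the Bayesian nonparametric estimator the paper studies.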
Thermodynamically consistent mesoscopic fluid particle models for a van der Waals fluid
Serrano, Mar; Español, Pep
2000-01-01
The GENERIC structure allows for a unified treatment of different discrete models of hydrodynamics. We first propose a finite volume Lagrangian discretization of the continuum equations of hydrodynamics through the Voronoi tessellation. We then show that a slight modification of these discrete equations has the GENERIC structure. The GENERIC structure ensures thermodynamic consistency and allows for the introduction of correct thermal noise. In this way, we obtain a consistent discrete model ...
Directory of Open Access Journals (Sweden)
Nawal Sad Houari
2016-12-01
Capitalization and reuse of expert knowledge are very important for the survival of an enterprise. This paper presents a collaborative approach that utilizes domain ontology and agents. Thanks to our knowledge formalizing process, we give the domain expert an opportunity to store different forms of knowledge retrieved from experiences, design rules, business rules, decision processes, etc. The ontology is built to support business rules management. The global architecture is mainly composed of agents such as the Expert agent, Evaluator agent, Translator agent, Security agent and Supervisor agent. The Evaluator agent is at the heart of our functional architecture; its role is to detect the problems that may arise in the consistency management module and to provide a solution to these problems in order to validate the accuracy of business rules. In addition, a Security agent is defined to handle security aspects both in rules modeling and in the multi-agent system. The proposed approach differs from others in the number of rule inconsistencies that are detected and treated, such as contradiction, redundancy, invalid rules, domain violation and rules that are never applicable; in the collaboration initiated among business experts; and in the guarantee of security of the business rules and of all the agents that constitute our system. The developed collaborative system is applied in an industrial case study.
Self-consistent model of a solid for the description of lattice and magnetic properties
Balcerzak, T.; Szałowski, K.; Jaščur, M.
2017-03-01
In the paper a self-consistent theoretical description of the lattice and magnetic properties of a model system with magnetoelastic interaction is presented. The dependence of the magnetic exchange integrals on the distance between interacting spins is assumed, which couples the magnetic and lattice subsystems. The framework is based on summation of the Gibbs free energies of the lattice and magnetic subsystems. On the basis of the minimization principle for the Gibbs energy, a set of equations of state for the system is derived. These equations of state combine the parameters describing the elastic properties (relative volume deformation) and the magnetic properties (magnetization changes). The formalism is extensively illustrated with numerical calculations performed for a system of ferromagnetically coupled spins S=1/2 localized at the sites of a simple cubic lattice. In particular, the significant influence of the magnetic subsystem on the elastic properties is demonstrated. It manifests itself in a significant modification of such quantities as the relative volume deformation, the thermal expansion coefficient or the isothermal compressibility, particularly in the vicinity of the magnetic phase transition. On the other hand, the influence of the lattice subsystem on the magnetic one is also evident. It takes, for example, the form of a dependence of the critical (Curie) temperature and of the magnetization itself on the external pressure, which is thoroughly investigated.
Assessment of the Degree of Consistency of the System of Fuzzy Rules
Directory of Open Access Journals (Sweden)
Pospelova Lyudmila Yakovlevna
2013-12-01
The article analyses recent achievements and publications and shows that difficulties in explaining the nature of fuzziness and equivocation arise in socio-economic models that use the traditional paradigm of classical rationalism (computational, agent and econometric models). The accumulated collective experience in the development of optimal models confirms the promise of applying the fuzzy set approach to modelling society. The article justifies the necessity of studying the nature of inconsistency in fuzzy knowledge bases, both at the generalised ontology level and at the pragmatic functional level of logical inference. It offers a method of searching for logical and conceptual contradictions in the form of a combination of abduction and modus ponens. It discusses the key issue of the proposed method: what properties the membership function of the secondary fuzzy set should have, where this set describes, in fuzzy inference models, a resulting state of the object of management that combines empirically incompatible properties with high probability. The degree of membership of the object of management in several incompatible classes with respect to the fuzzy output variable is the degree of fuzziness of the statement "The intersection of all results of the fuzzy inference of the set of rules applied at some input is an empty set". The article describes an algorithm for assessing the degree of consistency. It provides an example of the step-by-step detection of contradictions in statistical fuzzy knowledge bases at the pragmatic functional level of logical inference. The obtained testing results, in the form of sets of incompatible facts, inference chains, sets of non-intersecting intervals and computed degrees of inconsistency, allow experts to eliminate inadmissible contradictions in time and, at the same time, to increase the quality of recommendations and the assessment of fuzzy expert systems.
Michaels, Patrick J; Christy, John R; Herman, Chad S; Liljegren, Lucia M; Annan, James D
2013-01-01
Assessing the consistency between short-term global temperature trends in observations and climate model projections is a challenging problem. While climate models capture many processes governing short-term climate fluctuations, they are not expected to simulate the specific timing of these somewhat random phenomena - the occurrence of which may impact the realized trend. Therefore, to assess model performance, we develop distributions of projected temperature trends from a collection of climate models running the IPCC A1B emissions scenario. We evaluate where observed trends of length 5 to 15 years fall within the distribution of model trends of the same length. We find that current trends lie near the lower limits of the model distributions, with cumulative probability-of-occurrence values typically between 5 percent and 20 percent, and probabilities below 5 percent not uncommon. Our results indicate cause for concern regarding the consistency between climate model projections and observed climate behavior...
Steps towards a consistent Climate Forecast System Reanalysis wave hindcast (1979-2016)
Stopa, Justin E.; Ardhuin, Fabrice; Huchet, Marion; Accensi, Mickael
2017-04-01
Surface gravity waves are being increasingly recognized as playing an important role within the climate system. Wave hindcasts and reanalysis products covering long time series (>30 years) have been instrumental in understanding and describing the wave climate over the past several decades and have allowed a better understanding of extreme waves and inter-annual variability. Wave hindcasts have the advantage of covering the oceans at higher space-time resolution than is possible with conventional observations from satellites and buoys. Wave reanalysis systems like ECMWF's ERA-Interim directly include a wave model coupled to the ocean and atmosphere; otherwise, reanalysis wind fields are used to drive a wave model to reproduce the wave field over long time series. The ERA-Interim dataset is consistent in time, but cannot adequately resolve extreme waves. On the other hand, the NCEP Climate Forecast System Reanalysis (CFSR) wind field better resolves extreme wind speeds, but suffers from discontinuous features in time due to the varying quantity and quality of the remote sensing data incorporated into the product. Therefore, a consistent hindcast that resolves extreme waves still eludes us, limiting our understanding of the wave climate. In this study, we systematically correct the CFSR wind field to reproduce a wave field that is homogeneous in time. To verify the homogeneity of our hindcast, we compute error metrics on a monthly basis using observations from a merged altimeter wave database which has been calibrated and quality-controlled over 1985-2016. Before 1985, only a few wave observations exist, limited to a select number of wave buoys, mostly in the Northern Hemisphere. We therefore supplement our wave observations with seismic data, which respond to nonlinear wave interactions created by opposing waves with nearly equal wavenumbers. Within the CFSR wave hindcast, we find both spatial and temporal discontinuities in the error metrics. The Southern Hemisphere often
Method used to test the imaging consistency of binocular camera's left-right optical system
Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui
2016-09-01
For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing overall imaging consistency. Conventional optical-system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and imaging consistency is evaluated through the standard deviation σ of the imaging grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ spread of the imaging grayscale difference D(x, y) between the left and right optical systems does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
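The acceptance rule can be sketched numerically. The frames below are synthetic flat-field images (seeded random data standing in for real captures from the two optical paths); the metric is the standard deviation σ of the grayscale difference D(x, y), with the 3σ spread compared against the 5% bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic flat-field frames from the left and right optical systems
# (hypothetical mean signal level 100 with unit sensor noise).
left  = 100 + rng.normal(0.0, 1.0, size=(480, 640))
right = 100 + rng.normal(0.0, 1.0, size=(480, 640))

D = left - right                      # imaging grayscale difference D(x, y)
sigma = D.std()                       # consistency statistic
relative_3sigma = 3 * sigma / left.mean()

print(f"3-sigma spread = {relative_3sigma:.2%}")  # pass if below 5%
```

With real captures, systematic differences between the two optical paths (vignetting, transmission mismatch) would widen D(x, y) and push the 3σ spread above the bound, flagging an inconsistent pair.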
Institute of Scientific and Technical Information of China (English)
Yee LEUNG; WU Kefa; DONG Tianxin
2001-01-01
In this paper, a multivariate linear functional relationship model in which the covariance matrix of the observational errors is not restricted is considered. The parameter estimation of this model is discussed. The estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.
Institute of Scientific and Technical Information of China (English)
Nan-nan ZHAO; Ji-guang WAN; Jun WANG; Chang-sheng XIE
2016-01-01
Distributed key-value storage systems are among the most important types of distributed storage systems currently deployed in data centers. Nowadays, enterprise data centers are facing growing pressure to reduce their power consumption. In this paper, we propose GreenCHT, a reliable power management scheme for consistent-hashing-based distributed key-value storage systems. It consists of a multi-tier replication scheme, a reliable distributed log store, and a predictive power mode scheduler (PMS). Instead of randomly placing the replicas of each object on a number of nodes in the consistent-hash ring, we arrange the replicas of objects on non-overlapping tiers of nodes in the ring. This allows the system to enter various power modes by powering down subsets of servers without violating data availability. The predictive PMS predicts workloads and adapts to load fluctuation. It cooperates with the multi-tier replication strategy to provide power proportionality for the system. To ensure that the reliability of the system is maintained when replicas are powered down, writes addressed to standby replicas are redirected to active servers, which ensures the failure tolerance of the system. GreenCHT is implemented on top of Sheepdog, a distributed key-value storage system that uses consistent hashing as its underlying distributed hash table. By replaying 12 typical real workload traces collected from Microsoft, the evaluation shows that GreenCHT provides significant power savings while maintaining the desired performance: we observe that GreenCHT can reduce power consumption by 35%-61%.
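The tiered-placement idea can be sketched as follows (node names, tier count and the hash function are illustrative, not GreenCHT's actual layout): replica i of every object lives on tier i, so powering down an entire tier removes at most one replica of any object.

```python
import hashlib

# Non-overlapping tiers of nodes on the ring; tier 0 stays powered on,
# higher tiers may be powered down in low-power modes.
tiers = [
    ["node-a0", "node-a1", "node-a2"],
    ["node-b0", "node-b1", "node-b2"],
    ["node-c0", "node-c1", "node-c2"],
]

def place_replicas(key):
    """Place replica i of `key` on tier i (stand-in for ring placement)."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return [tier[h % len(tier)] for tier in tiers]

print(place_replicas("object-42"))
# Powering down tiers 1 and 2 still leaves the tier-0 replica available,
# so data availability survives the low-power mode.
```

Random placement, by contrast, gives no such guarantee: powering down any fixed subset of nodes may remove every replica of some object.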
Bowman, Kaye; McKenna, Suzy
2016-01-01
This occasional paper provides an overview of the development of Australia's national training system and is a key knowledge document of a wider research project "Consistency with flexibility in the Australian national training system." This research project investigates the various approaches undertaken by each of the jurisdictions to…
Silvis, Maurits H
2015-01-01
Assuming a general constitutive relation for the turbulent stresses in terms of the local large-scale velocity gradient, we constructed a class of subgrid-scale models for large-eddy simulation that are consistent with important physical and mathematical properties. In particular, they preserve symmetries of the Navier-Stokes equations and exhibit the proper near-wall scaling. They furthermore show desirable dissipation behavior and are capable of describing nondissipative effects. We provided examples of such physically-consistent models and showed that existing subgrid-scale models do not all satisfy the desired properties.
A consistent description of kinetics and hydrodynamics of quantum Bose-systems
Directory of Open Access Journals (Sweden)
P.A.Hlushak
2004-01-01
A consistent approach to the description of the kinetics and hydrodynamics of many-boson systems is proposed. The generalized transport equations for strongly and weakly nonequilibrium Bose systems are obtained using D.N. Zubarev's method of the nonequilibrium statistical operator. New equations for the time distribution function of the quantum Bose system, with separate contributions from the kinetic and potential energies of particle interactions, are obtained. The generalized transport coefficients are determined, accounting for a consistent description of kinetic and hydrodynamic processes.
On the Lagrangian structure of 3D consistent systems of asymmetric quad-equations
Boll, Raphael
2011-01-01
Recently, the first-named author gave a classification of 3D consistent 6-tuples of quad-equations with the tetrahedron property; several novel asymmetric 6-tuples have been found. Due to 3D consistency, these 6-tuples can be extended to discrete integrable systems on Z^m. We establish Lagrangian structures and flip-invariance of the action functional for the class of discrete integrable systems involving equations for which some of the biquadratics are non-degenerate and some are degenerate. This class covers, among others, some of the above mentioned novel systems.
Quantum thermal transport through anharmonic systems: A self-consistent approach
He, Dahai; Thingna, Juzar; Wang, Jian-Sheng; Li, Baowen
2016-10-01
We propose a feasible and effective approach to study quantum thermal transport through anharmonic systems. The main idea is to obtain an effective harmonic Hamiltonian for the anharmonic system by applying the self-consistent phonon theory. By using the effective harmonic Hamiltonian, we study thermal transport within the framework of the nonequilibrium Green's function method using the celebrated Caroli formula. We corroborate our quantum self-consistent approach by using the quantum master equation that can deal with anharmonicity exactly, but is limited to the weak system-bath coupling regime. Finally, in order to demonstrate its strength, we apply the quantum self-consistent approach to study thermal rectification in a weakly coupled two-segment anharmonic system.
Mokshin, A. V.
2015-04-01
The concept of time correlation functions is a very convenient theoretical tool in describing relaxation processes in multiparticle systems because, on one hand, correlation functions are directly related to experimentally measured quantities (for example, intensities in spectroscopic studies and kinetic coefficients via the Kubo-Green relation) and, on the other hand, the concept is also applicable beyond the equilibrium case. We show that the formalism of memory functions and the method of recurrence relations allow formulating a self-consistent approach for describing relaxation processes in classical multiparticle systems without needing a priori approximations of time correlation functions by model dependences and with the satisfaction of sum rules and other physical conditions guaranteed. We also demonstrate that the approach can be used to treat the simplest relaxation scenarios and to develop microscopic theories of transport phenomena in liquids, the propagation of density fluctuations in equilibrium simple liquids, and structure relaxation in supercooled liquids. This approach generalizes the mode-coupling approximation in the Götze-Leutheusser realization and the Yulmetyev-Shurygin correlation approximations.
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the ongoing effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, which serves as a surrogate measure of the safety level of a two-lane rural road segment. The consistency model presented in this paper is based on continuous operating speed profiles. The models used for their construction were obtained with an innovative GPS-based data collection method that records continuous operating speed profiles from individual drivers. This allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a closer approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also indexes that consider local speed decelerations and speeds over posted speeds. In developing the consistency model, the crash frequency of each study site was considered, which makes it possible to estimate the number of crashes on a road segment from its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment.
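The kinds of indicators such a model combines can be sketched from a speed profile. The numbers below are hypothetical, and the real model is calibrated against crash data, which this snippet does not attempt: it only computes a global dispersion term and a local deceleration term.

```python
import numpy as np

# Hypothetical 85th-percentile operating speed profile (km/h) sampled
# along a road segment; not data from the study's 33 Spanish segments.
v85 = np.array([92, 91, 88, 80, 72, 75, 85, 90, 89, 87], dtype=float)

global_dispersion = v85.std()            # global operating-speed dispersion
drops = np.minimum(np.diff(v85), 0.0)    # negative steps = decelerations
max_local_decel = float(-drops.min())    # sharpest local deceleration (km/h)

print(global_dispersion, max_local_decel)
```

A segment with large dispersion or sharp local decelerations (here an 8 km/h drop between successive stations) would score as less consistent, and hence as a candidate for redesign.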
A Self-consistent and Spatially Dependent Model of the Multiband Emission of Pulsar Wind Nebulae
Lu, Fang-Wu; Gao, Quan-Gui; Zhang, Li
2017-01-01
A self-consistent and spatially dependent model is presented to investigate the multiband emission of pulsar wind nebulae (PWNe). In this model, a spherically symmetric system is assumed and the dynamical evolution of the PWN is included. The processes of convection, diffusion, adiabatic loss, radiative loss, and photon-photon pair production are taken into account in the electron evolution equation, and the processes of synchrotron radiation, inverse Compton scattering, synchrotron self-absorption, and pair production are included in the photon evolution equation. The two coupled equations are solved simultaneously. The model is applied to explain the observed results of the PWN in MSH 15-52. Our results show that the spectral energy distributions (SEDs) of both electrons and photons are functions of distance. The observed photon SED of MSH 15-52 can be well reproduced in this model. With the parameters obtained by fitting the observed SED, the spatial variations of photon index and surface brightness observed in the X-ray band can also be well reproduced. Moreover, it is derived that the present-day diffusion coefficient of MSH 15-52 at the termination shock is κ_0 = 6.6 × 10^24 cm^2 s^-1, with a spatial average of κ̄ = 1.4 × 10^25 cm^2 s^-1, and that the present-day magnetic field at the termination shock is B_0 = 26.6 μG, with a spatially averaged value of B̄ = 14.9 μG. The spatial changes of the spectral index and surface brightness at different bands are predicted.
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed, taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and in engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis; among these, the wavelet is a particularly attractive alternative. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. This perspective further identifies a very basic but general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between subgrid parameterization and downscaling problems is also pointed out.
Cognitive Consistency Analysis in Adaptive Bio-Metric Authentication System Design
Directory of Open Access Journals (Sweden)
Gahangir Hossain
2015-07-01
Full Text Available Cognitive consistency analysis aims to continuously monitor one's perception equilibrium towards the successful accomplishment of a cognitive task. In contrast to cognitive flexibility analysis, cognitive consistency analysis identifies the monotone advancement of perception towards a successful interaction process (e.g., biometric authentication) and is useful in generating decision support to assist a user in need. This study considers fingertip dynamics (e.g., keystroke, tapping, clicking, etc.) to gain insight into instantaneous cognitive states and their effects on monotonic advancement towards a successful authentication process. Keystroke dynamics and tapping dynamics are analyzed based on response-time data. Finally, cognitive consistency and confusion (inconsistency) are computed with the Maximal Information Coefficient (MIC) and the Maximal Asymmetry Score (MAS), respectively. Our preliminary study indicates that a balance between cognitive consistency and flexibility is needed in a successful authentication process. Moreover, an adaptive and cognitive interaction system requires in-depth analysis of the user's cognitive consistency to provide robust and useful assistance.
The fundamental solution for a consistent complex model of the shallow shell equations
Matthew P. Coleman
1999-01-01
The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for the particular cases of the shallow spherical and circular cylindrical shells, and the results of the latter are seen to be in agreement with results appearing elsewhere in the literature.
Kukush, A.; Markovsky, I.; Van Huffel, S.
2002-01-01
Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.
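The ordinary total least-squares (TLS) baseline that this abstract compares against can be computed directly from an SVD of the stacked data matrix. The sketch below is illustrative only: it shows the baseline estimator, not the corrected, consistent estimator of the paper, and the data, seed, and noise level are all assumptions.

```python
import numpy as np

def tls(A, b):
    """Ordinary total least-squares solution of A x ≈ b via SVD
    (the baseline estimator in the errors-in-variables setting)."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]               # right singular vector for the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2))
x_true = np.array([1.5, -2.0])
b = A @ x_true
# perturb BOTH A and b, as in a measurement-error model
x_hat = tls(A + 0.01 * rng.normal(size=A.shape),
            b + 0.01 * rng.normal(size=b.shape))
```

With noise in both the regressors and the response, ordinary least squares is biased, while TLS treats the two error sources symmetrically; the paper's corrected-contrast-function estimator refines this idea for the rank-deficient fundamental-matrix problem.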
Postmus, B.R.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2008-01-01
We have constructed a model to predict the properties of non-ionic (alkyl-ethylene oxide) (C(n)E(m)) surfactants, both in aqueous solutions and near a silica surface, based upon the self-consistent field theory using the Scheutjens-Fleer discretisation scheme. The system has the pH and the ionic
Progress on Developing Consistent Earth System Data Records for the Global Terrestrial Water Cycle
Wood, E. F.; Lettenmaier, D. P.; Houser, P.; Pinker, R. T.; Kummerow, C. D.; Pan, M.; Gao, H.; Sahoo, A. K.
2009-12-01
Consistent, long-term Earth System Data Records (ESDRs) for the terrestrial water cycle are needed to provide a basis for estimating the mean state and variability of the land surface water cycle for the major global river basins and the global terrestrial hydrosphere. For consistency, the ESDRs for each component must be developed within a framework that assures such consistency. In this project, which started one year ago, five institutions are collaborating to jointly develop the terrestrial water cycle ESDRs, with the goal of producing ESDRs at a spatial resolution of 0.5 degrees (latitude-longitude) for the period 1950 to near-present. The strategy for the ESDRs is to (i) retrieve surface radiation and water cycle variables through state-of-the-art remote sensing algorithms applied to the satellite records that extend as far back as possible, which in most cases is the early 1980s; (ii) estimate water cycle components through off-line land surface model integrations that will extend back to 1950; and (iii) merge the remote sensing estimates with the land surface estimates using advanced data assimilation techniques. Over the last year the project has completed the Algorithm Theoretical Basis Documents (ATBDs), which provide documentation for all algorithms that will generate the data products. The production of the ESDRs has also started for (1) surface meteorology (precipitation, air temperature, humidity and wind), (2) surface downward radiation (solar and longwave) and (3) derived and/or assimilated fluxes and storages such as surface soil moisture storage, total basin water storage, snow water equivalent, storage in large lakes, reservoirs, and wetlands, evapotranspiration, and surface runoff. Where our products overlap other MEaSUREs ESDR products (e.g. snow extent), we plan to work with those projects to assure overall consistency. On the modeling part, a global surface meteorology data set that covers 1900-2006 has been established by merging satellite, in
Self-consistent core-pedestal transport simulations with neural network accelerated models
Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.
2017-08-01
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
An internally consistent inverse model to calculate ridge-axis hydrothermal fluxes
Coogan, L. A.; Dosso, S.
2010-12-01
Fluid and chemical fluxes from high-temperature, on-axis, hydrothermal systems at mid-ocean ridges have been estimated in a number of ways. These generally use simple mass balances based on either vent fluid compositions or the compositions of altered sheeted dikes. Here we combine these approaches in an internally consistent model. Seawater is assumed to enter the crust and react with the sheeted dike complex at high temperatures. Major element fluxes for both the rock and fluid are calculated from balanced stoichiometric reactions. These reactions include end-member components of the minerals plagioclase, pyroxene, amphibole, chlorite and epidote along with pure anhydrite, quartz, pyrite, pyrrhotite, titanite, magnetite, ilmenite and ulvospinel and the fluid species H2O, Mg2+, Ca2+, Fe2+, Na+, Si4+, H2S, H+ and H2. Trace element abundances (Li, B, K, Rb, Cs, Sr, Ba, U, Tl, Mn, Cu, Zn, Co, Ni, Pb and Os) and isotopic ratios (Li, B, O, Sr, Tl, Os) are calculated from simple mass balance of a fluid-rock reaction. A fraction of the Cu, Zn, Pb, Co, Ni, Os and Mn in the fluid after fluid-rock reaction is allowed to precipitate during discharge before the fluid reaches the seafloor. S-isotopes are tied to mineralogical reactions involving S-bearing phases. The free parameters in the model are the amounts of each mineralogical reaction that occurs, the amounts of the metals precipitated during discharge, and the water-to-rock ratio. These model parameters, and their uncertainties, are constrained by: (i) mineral abundances and mineral major element compositions in altered dikes from ODP Hole 504B and the Pito and Hess Deep tectonic windows (EPR crust); (ii) changes in dike bulk-rock trace element and isotopic compositions from these locations relative to fresh MORB glass compositions; and (iii) published vent fluid compositions from basalt-hosted high-temperature ridge axis hydrothermal systems. Using a numerical inversion algorithm, the probability density of different
The Spectrum of the Baryon Masses in a Self-consistent SU(3) Quantum Skyrme Model
Jurciukonis, Darius; Regelskis, Vidas
2012-01-01
The semiclassical SU(3) Skyrme model is traditionally considered as describing a rigid quantum rotator with the profile function being fixed by the classical solution of the corresponding SU(2) Skyrme model. In contrast, we go beyond the classical profile function by quantizing the SU(3) Skyrme model canonically. The quantization of the model is performed in terms of the collective coordinate formalism and leads to the establishment of purely quantum corrections of the model. These new corrections are of fundamental importance. They are crucial in obtaining stable quantum solitons of the quantum SU(3) Skyrme model, thus making the model self-consistent and not dependent on the classical solution of the SU(2) case. We show that such a treatment of the model leads to a family of stable quantum solitons that describe the baryon octet and decuplet and reproduce the experimental values of their masses.
A consistent modelling methodology for secondary settling tanks in wastewater treatment.
Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar
2011-03-01
The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant, and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This becomes particularly relevant as the state-of-the-art practice is moving towards plant-wide modelling, where all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor, yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. The time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models.
Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications
Directory of Open Access Journals (Sweden)
Xin Zhang
2008-01-01
Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs) using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point from which to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model
Directory of Open Access Journals (Sweden)
Lidong Zhang
2014-01-01
Full Text Available We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest in a risk-free asset and a risky asset. Short-selling and borrowing money are allowed. Due to the lack of the iterated-expectation property, the Bellman Optimization Principle does not hold. Thus we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that these different strategies have different advantages: the former maximizes the value function at the initial time t=0, while the latter is time-consistent over the whole time horizon. Finally, numerical analysis is presented for our results.
Nonparametric test of consistency between cosmological models and multiband CMB measurements
Aghamousa, Amir
2015-01-01
We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology, considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit $\Lambda$CDM model at $95\% (\sim 2\sigma)$ confidence distance from the center of the nonparametri...
Hirai, Kenta; Mita, Akira
2016-04-01
Because of social factors such as repeated large earthquakes and cheating in design and construction, structural health monitoring (SHM) systems are attracting strong attention and are now in a practical phase. An SHM system consisting of a small number of sensors has been introduced to six tall buildings in the Shinjuku area. There are two major issues in SHM systems consisting of a small number of sensors. First, the optimal number of sensors and their locations are not well defined; in practice, sensor placement is determined based on rough prediction and experience. Second, there are uncertainties in the estimation results of the SHM systems. Thus, the purpose of this research is to provide useful information for increasing the reliability of SHM systems and to improve estimation results based on uncertainty analysis of the SHM systems. The important damage index used here is the inter-story drift angle. The uncertainties considered here are the number of sensors, earthquake-motion characteristics, noise in the data, error between the numerical model and the real building, and nonlinearity of parameters. I then analyzed the influence of each factor on estimation accuracy. The analysis conducted here will help in deciding a sensor system design that considers the balance of cost and accuracy. Because of the constraint on the number of sensors, estimation results from the SHM system tend to be smaller than the true values. To overcome this problem, a compensation algorithm was discussed and presented. The usefulness of this compensation method was demonstrated for 40-story S and RC building models with nonlinear response.
A simplified benchmark Stock-Flow Consistent (SFC) post-Keynesian growth model
Cláudio H. dos Santos; Zezza, Gennaro
2007-01-01
Despite being arguably one of the most active areas of research in heterodox macroeconomics, the study of the dynamic properties of stock-flow consistent (SFC) growth models of financially sophisticated economies is still in its early stages. This paper attempts to offer a contribution to this line of research by presenting a simplified Post-Keynesian SFC growth model with well-defined dynamic properties, and using it to shed light on the merits and limitations of the current heterodox SFC li...
A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models
Holte, Sarah E.
2016-01-01
Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation of parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method since it is a modification of least squares methods known to be biased. Co...
Discretizing LTI Descriptor (Regular) Differential Input Systems with Consistent Initial Conditions
Directory of Open Access Journals (Sweden)
Athanasios D. Karageorgos
2010-01-01
Full Text Available A technique for efficiently discretizing the solution of a Linear descriptor (regular) differential input system with consistent initial conditions and Time-Invariant coefficients (LTI) is introduced and fully discussed. Additionally, an upper bound for the error ‖x̄(kT) − x̄_k‖ that derives from the discretization procedure is also provided. Practically speaking, we are interested in such systems, since they are inherent in many physical, economic and engineering phenomena.
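For the ordinary special case (identity descriptor matrix), the standard zero-order-hold discretization of an LTI system ẋ = Ax + Bu can be computed from one augmented matrix exponential. This is only a simplified stand-in for the descriptor-system technique of the paper; the truncated-Taylor exponential and the scalar example are illustrative assumptions.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for small, well-scaled M)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def discretize_zoh(A, B, T):
    """Zero-order-hold discretization: x_{k+1} = Ad x_k + Bd u_k.
    Uses exp([[A, B], [0, 0]] * T), whose top blocks are Ad and Bd."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm_taylor(M * T)
    return E[:n, :n], E[:n, n:]

# Scalar check, x' = -x + u with T = 0.1: Ad = e^{-0.1}, Bd = 1 - e^{-0.1}
Ad, Bd = discretize_zoh(np.array([[-1.0]]), np.array([[1.0]]), 0.1)
print(np.isclose(Ad[0, 0], np.exp(-0.1)))      # True
print(np.isclose(Bd[0, 0], 1 - np.exp(-0.1)))  # True
```

The augmented-matrix trick avoids inverting A when forming Bd, which matters precisely when A is singular, as in the descriptor setting the paper addresses.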
The convergence of the modified Gauss-Seidel methods for consistent linear systems
Li, Wen
2003-05-01
In this paper we present a convergence analysis for the modified Gauss-Seidel methods given in Gunawardena et al. (Linear Algebra Appl. 154-156 (1991) 125) and Kohno et al. (Linear Algebra Appl. 267 (1997) 113) for consistent linear systems. We prove that the modified Gauss-Seidel method converges for some values of the parameters in the preconditioned matrix.
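The mechanics of the methods analyzed above can be sketched by applying classical Gauss-Seidel both to A x = b directly and to the system preconditioned by I + S, where S carries the negated first superdiagonal of A (a Gunawardena-type preconditioner). The specific 3×3 matrix is an assumed toy example, not one from the cited papers, and no claim is made here about which variant converges faster.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-12, max_iter=1000):
    """Classical Gauss-Seidel iteration for A x = b."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i], old entries x_old[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

def modified_gauss_seidel(A, b):
    """Gauss-Seidel applied to the preconditioned system (I+S)A x = (I+S)b,
    with S[i, i+1] = -A[i, i+1] (Gunawardena-type preconditioner)."""
    n = len(b)
    S = np.zeros_like(A)
    for i in range(n - 1):
        S[i, i + 1] = -A[i, i + 1]
    P = np.eye(n) + S
    return gauss_seidel(P @ A, P @ b)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([6.0, 12.0, 14.0])   # exact solution is [1, 2, 3]
x_direct = gauss_seidel(A, b)
x_precond = modified_gauss_seidel(A, b)
```

Both iterations converge here; the convergence results of the papers establish when the parameterized preconditioned matrix retains this property for consistent linear systems.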
Directory of Open Access Journals (Sweden)
David Paul Eric Herzog
2014-01-01
Full Text Available Bone tissue is a highly vascularized and dynamic system with a complex construction. In order to develop a construct for implant purposes in bone tissue engineering, a proper understanding of the complex dependencies between different cells and cell types would provide further insight into the highly regulated processes during bone repair, namely, angiogenesis and osteogenesis, and might result in sufficiently equipped constructs to be beneficial to patients and thereby accomplish their task. This study is based on an in vitro coculture model consisting of outgrowth endothelial cells (OECs) and primary osteoblasts (pOBs) and is currently being used in different studies of bone repair processes with special regard to angiogenesis and osteogenesis. Coculture systems of OECs and pOBs positively influence the angiogenic potential of endothelial cells by inducing the formation of angiogenic structures in long-term cultures. Although many studies have focused on cell communication, there are still numerous aspects which remain poorly understood. Therefore, the aim of this study is to investigate certain growth factors and cell communication molecules that are important during bone repair processes. Selected growth factors like VEGF, angiopoietins, BMPs, and IGFs were investigated during angiogenesis and osteogenesis, and their expression in the cultures was observed and compared after one and four weeks of cultivation. In addition, to gain a better understanding of the origin of different growth factors, both direct and indirect coculture strategies were employed. Another important focus of this study was to investigate the role of “gap junctions,” small protein pores which connect adjacent cells. With these bridges cells are able to exchange signal molecules, growth factors, and other important mediators. It could be shown that connexins, the gap junction proteins, were located around cell nuclei, where they await their transport to the cell membrane. In
Comment on Self-Consistent Model of Black Hole Formation and Evaporation
Ho, Pei-Ming
2015-01-01
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
Consistent phase-change modeling for CO2-based heat mining operation
DEFF Research Database (Denmark)
Singh, Ashok Kumar; Veje, Christian
2017-01-01
–gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state were based on the volume translated Peng–Robinson equation of state and results verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper...
Comment on self-consistent model of black hole formation and evaporation
Energy Technology Data Exchange (ETDEWEB)
Ho, Pei-Ming [Department of Physics and Center for Theoretical Sciences, Center for Advanced Study in Theoretical Sciences,National Taiwan University, Taipei 106, Taiwan, R.O.C. (China)
2015-08-18
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
Spatial coincidence modelling, automated database updating and data consistency in vector GIS.
Kufoniyi, O.
1995-01-01
This thesis presents formal approaches for automated database updating and consistency control in vector- structured spatial databases. To serve as a framework, a conceptual data model is formalized for the representation of geo-data from multiple map layers in which a map layer denotes a set of ter
Song, Y.; Wright, D.
1998-01-01
A formulation of the pressure gradient force for use in models with topography-following coordinates is proposed and diagnostically analyzed by Song. We investigate numerical consistency with respect to global energy conservation, depth-integrated momentum changes, and the representation of the bottom pressure torque.
Subjective Confidence in Perceptual Judgments: A Test of the Self-Consistency Model
Koriat, Asher
2011-01-01
Two questions about subjective confidence in perceptual judgments are examined: the bases for these judgments and the reasons for their accuracy. Confidence in perceptual judgments has been claimed to rest on qualitatively different processes than confidence in memory tasks. However, predictions from a self-consistency model (SCM), which had been…
STRONG CONSISTENCY OF M ESTIMATOR IN LINEAR MODEL FOR NEGATIVELY ASSOCIATED SAMPLES
Institute of Scientific and Technical Information of China (English)
Qunying WU
2006-01-01
This paper discusses the strong consistency of M estimator of regression parameter in linear model for negatively associated samples. As a result, the author extends Theorem 1 and Theorem 2 of Shanchao YANG (2002) to the NA errors without necessarily imposing any extra condition.
Self-consistent modeling of radio-frequency plasma generation in stellarators
Moiseenko, V. E.; Stadnik, Yu. S.; Lysoivan, A. I.; Korovin, V. B.
2013-11-01
A self-consistent model of radio-frequency (RF) plasma generation in stellarators in the ion cyclotron frequency range is described. The model includes equations for the particle and energy balance and boundary conditions for Maxwell's equations. The equation of charged particle balance takes into account the influx of particles due to ionization and their loss via diffusion and convection. The equation of electron energy balance takes into account the RF heating power source, as well as energy losses due to the excitation and electron-impact ionization of gas atoms, energy exchange via Coulomb collisions, and plasma heat conduction. The deposited RF power is calculated by solving the boundary problem for Maxwell's equations. When describing the dissipation of the energy of the RF field, collisional absorption and Landau damping are taken into account. At each time step, Maxwell's equations are solved for the current profiles of the plasma density and plasma temperature. The calculations are performed for a cylindrical plasma. The plasma is assumed to be axisymmetric and homogeneous along the plasma column. The system of balance equations is solved using the Crank–Nicolson scheme. Maxwell's equations are solved in a one-dimensional approximation by using the Fourier transformation along the azimuthal and longitudinal coordinates. Results of simulations of RF plasma generation in the Uragan-2M stellarator by using a frame antenna operating at frequencies lower than the ion cyclotron frequency are presented. The calculations show that the slow wave generated by the antenna is efficiently absorbed at the periphery of the plasma column, due to which only a small fraction of the input power reaches the confinement region. As a result, the temperature on the axis of the plasma column remains low, whereas at the periphery it is substantially higher. This leads to strong absorption of the RF field at the periphery via the Landau mechanism.
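The Crank–Nicolson time stepping used for the balance equations can be illustrated on the simplest stand-in, the 1-D heat equation u_t = α u_xx with fixed ends. This generic sketch is not the paper's plasma model; the grid, time step, and initial profile are illustrative assumptions.

```python
import numpy as np

def crank_nicolson_heat(u0, alpha, dx, dt, steps):
    """Crank-Nicolson stepping for u_t = alpha * u_xx with Dirichlet boundaries.
    Averages the explicit and implicit spatial operators: (I - L) u_new = (I + L) u_old."""
    n = len(u0)
    r = alpha * dt / (2 * dx ** 2)
    L = np.zeros((n, n))
    for i in range(1, n - 1):          # interior rows only; boundary rows stay fixed
        L[i, i - 1] = L[i, i + 1] = r
        L[i, i] = -2 * r
    A = np.eye(n) - L                  # implicit side
    B = np.eye(n) + L                  # explicit side
    u = u0.astype(float).copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

# Decaying sine mode: exact solution is exp(-pi^2 t) * sin(pi x)
x = np.linspace(0.0, 1.0, 51)
u = crank_nicolson_heat(np.sin(np.pi * x), alpha=1.0, dx=0.02, dt=0.001, steps=100)
```

The scheme is second-order accurate in time and unconditionally stable, which is why it is a common choice for stiff balance-equation systems like the one in the abstract.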
Toward self-consistent tectono-magmatic numerical model of rift-to-ridge transition
Gerya, Taras; Bercovici, David; Liao, Jie
2017-04-01
Natural data from modern and ancient lithospheric extension systems suggest a three-dimensional (3D) character of deformation and a complex relationship between magmatism and tectonics during the entire rift-to-ridge transition. Therefore, self-consistent high-resolution 3D magmatic-thermomechanical numerical approaches stand as a minimum complexity requirement for modeling and understanding this transition. Here we present results from our new high-resolution 3D finite-difference marker-in-cell rift-to-ridge models, which account for magmatic accretion of the crust and use a non-linear strain-weakened visco-plastic rheology of rocks that couples brittle/plastic failure and ductile damage caused by grain size reduction. Numerical experiments suggest that the nucleation of rifting and ridge-transform patterns is decoupled in both space and time. At intermediate stages, the two patterns can coexist and interact, which triggers the development of detachment faults, failed rift arms, hyper-extended margins and oblique proto-transforms. En echelon rift patterns typically develop in the brittle upper-middle crust, whereas proto-ridge and proto-transform structures nucleate in the lithospheric mantle. These deep proto-structures propagate upward, inter-connect and rotate toward a mature orthogonal ridge-transform pattern on the timescale of millions of years during incipient thermal-magmatic accretion of the new oceanic-like lithosphere. Ductile damage of the extending lithospheric mantle caused by grain size reduction assisted by Zener pinning plays a critical role in the rift-to-ridge transition by stabilizing detachment faults and transform structures. Numerical results compare well with observations from incipient spreading regions and passive continental margins.
Directory of Open Access Journals (Sweden)
Damiano Monelli
2010-11-01
Full Text Available We present here two self-consistent implementations of a short-term earthquake probability (STEP) model that produces daily seismicity forecasts for the area of the Italian national seismic network. Both implementations combine a time-varying and a time-invariant contribution, for which we assume that the instrumental Italian earthquake catalog provides the best information. For the time-invariant contribution, the catalog is declustered using the clustering technique of the STEP model; the smoothed seismicity model is generated from the declustered catalog. The time-varying contribution is what distinguishes the two implementations: (1) for one implementation (STEP-LG), the original model parameterization and estimation is used; (2) for the other (STEP-NG), the mean abundance method is used to estimate aftershock productivity. In the STEP-NG implementation, earthquakes with magnitude up to ML = 6.2 are expected to be less productive compared to the STEP-LG implementation, whereas larger earthquakes are expected to be more productive. We have retrospectively tested the performance of these two implementations and applied likelihood tests to evaluate their consistency with observed earthquakes. Both implementations were consistent with the observed earthquake data in space; STEP-NG performed better than STEP-LG in terms of forecast rates. More generally, we found that testing earthquake forecasts issued at regular intervals does not test the full power of clustering models, and future experiments should allow for more frequent forecasts starting at the times of triggering events.
Altmeyer, Guillaume; Panicaud, Benoit; Rouhaud, Emmanuelle; Wang, Mingchuan; Roos, Arjen; Kerner, Richard
2016-11-01
When constructing viscoelastic models, rate-form relations appear naturally to relate the strain and stress tensors. One has to ensure that these tensors and their rates are indifferent with respect to a change of observer and to superposition with rigid body motions. Objective transports are commonly accepted to ensure this invariance. However, the large number of transport operators developed often makes the choice difficult for the user and may lead to physically inconsistent formulations of hypoelasticity. In this paper, a methodology based on the use of the Lie derivative is proposed to model consistent hypoelasticity as an equivalent incremental formulation of hyperelasticity. Both models are shown to be reversible and completely equivalent. An extension to viscoelasticity is then proposed from this consistent model by associating consistent hypoelastic models with viscous behavior. As an illustration, Mooney-Rivlin nonlinear elasticity is coupled with Newton viscosity, and a Maxwell-like material is investigated. Numerical solutions are then presented to illustrate a viscoelastic material subjected to finite deformations over a large range of strain rates.
Lu, Wei; Song, Joo Hyun; Christensen, Gary E.; Parikh, Parag J.; Bradley, Jeffrey D.; Low, Daniel A.
2006-03-01
Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations providing more accurate correspondence between two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivering.
Zhang, Zhen; Guo, Chonghui
2016-08-01
Due to the uncertainty of the decision environment and the lack of knowledge, decision-makers may use uncertain linguistic preference relations to express their preferences over alternatives and criteria. For group decision-making problems with preference relations, it is important to consider the individual consistency and the group consensus before aggregating the preference information. In this paper, consistency and consensus models for group decision-making with uncertain 2-tuple linguistic preference relations (U2TLPRs) are investigated. First of all, a formula which can construct a consistent U2TLPR from the original preference relation is presented. Based on the consistent preference relation, the individual consistency index for a U2TLPR is defined. An iterative algorithm is then developed to improve the individual consistency of a U2TLPR. To help decision-makers reach consensus in group decision-making under uncertain linguistic environment, the individual consensus and group consensus indices for group decision-making with U2TLPRs are defined. Based on the two indices, an algorithm for consensus reaching in group decision-making with U2TLPRs is also developed. Finally, two examples are provided to illustrate the effectiveness of the proposed algorithms.
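The 2-tuple linguistic formulas are not given in the abstract; as a simplified stand-in, the sketch below computes an individual consistency index for a plain additive preference relation, where additive consistency requires p_ik = p_ij + p_jk − 0.5 for all triples. The matrix and the index normalization are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

# Hypothetical illustration of an individual consistency index for a
# crisp additive preference relation P.  Additive consistency requires
# p_ik = p_ij + p_jk - 0.5 for all i, j, k; the index below is 1 minus
# the mean absolute deviation from that condition.  The uncertain
# 2-tuple linguistic case in the paper operates analogously on
# linguistic information; this numeric version is a simplified stand-in.

P = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])

n = P.shape[0]
dev = 0.0
for i in range(n):
    for j in range(n):
        for k in range(n):
            dev += abs(P[i, k] - (P[i, j] + P[j, k] - 0.5))
dev /= n ** 3

consistency_index = 1.0 - dev   # 1.0 means perfectly consistent
print(f"consistency index: {consistency_index:.3f}")
```

An iterative improvement algorithm of the kind described in the abstract would nudge the entries of P toward the constructed consistent relation until this index exceeds a threshold.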
The fundamental solution for a consistent complex model of the shallow shell equations
Directory of Open Access Journals (Sweden)
Matthew P. Coleman
1999-09-01
The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for the particular cases of the shallow spherical and circular cylindrical shells, and the results of the latter are seen to be in agreement with results appearing elsewhere in the literature.
Tests and applications of self-consistent cranking in the interacting boson model
Kuyucak, Serdar; Sugita, Michiaki
1999-01-01
The self-consistent cranking method is tested by comparing the cranking calculations in the interacting boson model with the exact results obtained from the SU(3) and O(6) dynamical symmetries and from numerical diagonalization. The method is used to study the spin dependence of shape variables in the $sd$ and $sdg$ boson models. When realistic sets of parameters are used, both models lead to similar results: axial shape is retained with increasing cranking frequency while fluctuations in the shape variable $\gamma$ are slightly reduced.
The consistent Riccati expansion and new interaction solution for a Boussinesq-type coupled system
Ruan, Shao-Qing; Yu, Wei-Feng; Yu, Jun; Yu, Guo-Xiang
2015-06-01
Starting from the Davey-Stewartson equation, a Boussinesq-type coupled equation system is obtained by using a variable separation approach. For the Boussinesq-type coupled equation system, its consistent Riccati expansion (CRE) solvability is studied with the help of a Riccati equation. It is significant that the soliton-cnoidal wave interaction solution, expressed explicitly by Jacobi elliptic functions and the third type of incomplete elliptic integral, of the system is also given. Project supported by the National Natural Science Foundation of China (Grant No. 11275129).
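For orientation, a consistent Riccati expansion generically takes the following form; the coefficients $u_j$ and the truncation order $N$ depend on the specific system, and this generic form is standard for CRE methods rather than quoted from the paper:

```latex
u = \sum_{j=0}^{N} u_j(x,t)\, R^{j}(w), \qquad
R_w = a_0 + a_1 R + a_2 R^{2}
```

CRE solvability means that substituting the truncated expansion into the system yields equations that are consistently solvable for the $u_j$ and $w$; soliton-cnoidal wave interaction solutions then arise from elliptic-function solutions of the Riccati equation.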
A New Hierarchy of Phylogenetic Models Consistent with Heterogeneous Substitution Rates.
Woodhams, Michael D; Fernández-Sánchez, Jesús; Sumner, Jeremy G
2015-07-01
When the process underlying DNA substitutions varies across evolutionary history, some standard Markov models underlying phylogenetic methods are mathematically inconsistent. The most prominent example is the general time-reversible model (GTR) together with some, but not all, of its submodels. To rectify this deficiency, nonhomogeneous Lie Markov models have been identified as the class of models that are consistent in the face of a changing process of DNA substitutions regardless of taxon sampling. Some well-known models in popular use are within this class, but are either overly simplistic (e.g., the Kimura two-parameter model) or overly complex (the general Markov model). On a diverse set of biological data sets, we test a hierarchy of Lie Markov models spanning the full range of parameter richness. Compared against the benchmark of the ever-popular GTR model, we find that as a whole the Lie Markov models perform well, with the best performing models having 8-10 parameters and the ability to recognize the distinction between purines and pyrimidines. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
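The Kimura two-parameter model named in the abstract is a convenient concrete member of the Lie Markov class. The sketch below builds its rate matrix (rate alpha for transitions, beta for transversions; the parameter values and time are arbitrary illustration choices) and computes substitution probabilities via the matrix exponential.

```python
import numpy as np

# Sketch of the Kimura two-parameter (K2P) model mentioned in the
# abstract: substitution rate alpha for transitions (A<->G, C<->T) and
# beta for transversions.  Parameter values and time t are arbitrary.
alpha, beta, t = 0.4, 0.1, 0.5
# base order: A, G, C, T (purines first, then pyrimidines)
Q = np.array([[0.0,   alpha, beta,  beta],
              [alpha, 0.0,   beta,  beta],
              [beta,  beta,  0.0,   alpha],
              [beta,  beta,  alpha, 0.0]])
Q -= np.diag(Q.sum(axis=1))          # rows of a rate matrix sum to zero

# P(t) = exp(Q t); Q is symmetric here, so an eigendecomposition suffices
w, V = np.linalg.eigh(Q)
P = V @ np.diag(np.exp(w * t)) @ V.T
print(np.round(P, 3))
```

The Lie Markov property discussed in the paper concerns closure of such matrix families under multiplication, which is what keeps averaged, time-inhomogeneous processes inside the model class.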
Institute of Scientific and Technical Information of China (English)
[No author listed]
2009-01-01
Semiparametric reproductive dispersion nonlinear model (SRDNM) is an extension of nonlinear reproductive dispersion models and semiparametric nonlinear regression models, and includes the semiparametric nonlinear model and the semiparametric generalized linear model as special cases. Based on the local kernel estimate of the nonparametric component, profile-kernel and backfitting estimators of the parameters of interest are proposed in SRDNM, and a theoretical comparison of both estimators is also investigated in this paper. Under some regularity conditions, strong consistency and asymptotic normality of the two estimators are proved. It is shown that the backfitting method produces a larger asymptotic variance than the profile-kernel method. A simulation study and a real example are used to illustrate the proposed methodologies.
Detecting consistent patterns of directional adaptation using differential selection codon models.
Parto, Sahar; Lartillot, Nicolas
2017-06-23
Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
Hess, Julian; Wang, Yongqi
2016-11-01
A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure described by a pressure diffusion equation and the hypoplastic material behavior obeying a transport equation are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, used to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane treated by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within Project Number WA 2610/3-1.
A control-oriented self-consistent model of an inductively-coupled plasma
Keville, Bernard; Turner, Miles
2009-10-01
An essential first step in the design of real-time control algorithms for plasma processes is to determine dynamical relationships between actuator quantities, such as gas flow rate set points, and plasma states, such as electron density. An ideal first-principles-based, control-oriented model should exhibit the simplicity and computational requirements of an empirical model and, in addition, despite sacrificing first-principles detail, capture enough of the essential physics and chemistry of the process to provide reasonably accurate qualitative predictions. This presentation describes a control-oriented model of a cylindrical low-pressure planar inductive discharge with a stove-top antenna. The model consists of an equivalent circuit coupled to a global model of the plasma chemistry, producing a self-consistent zero-dimensional model of the discharge. The non-local plasma conductivity and the fields in the plasma are determined from the wave equation and the two-term solution of the Boltzmann equation. Expressions for the antenna impedance and the parameters of the transformer equivalent circuit in terms of the isotropic electron distribution and the geometry of the chamber are presented.
Consistent increase in Indian monsoon rainfall and its variability across CMIP-5 models
Directory of Open Access Journals (Sweden)
A. Menon
2013-01-01
The possibility of an impact of global warming on the Indian monsoon is of critical importance for the large population of this region. Future projections within the Coupled Model Intercomparison Project Phase 3 (CMIP-3) showed a wide range of trends with varying magnitude and sign across models. Here the Indian summer monsoon rainfall is evaluated in 20 CMIP-5 models for the period 1850 to 2100. In the new generation of climate models a consistent increase in seasonal mean rainfall during the summer monsoon periods arises. All models simulate stronger seasonal mean rainfall in the future compared to the historic period under the strongest warming scenario RCP-8.5. The increase in seasonal mean rainfall is largest for the RCP-8.5 scenario compared to the other RCPs. The interannual variability of the Indian monsoon rainfall also shows a consistent positive trend under unabated global warming. Since both the long-term increase in monsoon rainfall and the increase in interannual variability in the future are robust across a wide range of models, some confidence can be attributed to these projected trends.
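The two diagnostics reported, a long-term trend in seasonal mean rainfall and a trend in its interannual variability, are simple to compute. The sketch below runs both on a synthetic rainfall series (the series, its trend, and the noise model are invented; CMIP-5 output would be used in practice).

```python
import numpy as np

# Illustrative computation of the two reported diagnostics on invented
# data: the long-term trend of seasonal-mean rainfall and the change in
# its interannual variability.  Numbers are hypothetical placeholders.
rng = np.random.default_rng(0)
years = np.arange(1850, 2101)
mean_rain = 7.0 + 0.004 * (years - 1850)                 # mm/day, rising
noise_sd = 0.5 + 0.001 * (years - 1850)                  # widening spread
rain = mean_rain + rng.normal(0.0, noise_sd)

# long-term trend of the seasonal mean, in mm/day per century
trend = np.polyfit(years, rain, 1)[0] * 100

# interannual variability in two 30-year windows
std_early, std_late = rain[:30].std(), rain[-30:].std()
print(f"trend: {trend:.2f} mm/day per century")
print(f"interannual std, first vs last 30 yr: {std_early:.2f} vs {std_late:.2f}")
```

In the multi-model setting of the paper, the robustness claim corresponds to these diagnostics having the same sign across the 20 models.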
Directory of Open Access Journals (Sweden)
Roy E Barnewall
2012-06-01
Repeated low-level exposures to Bacillus anthracis could occur before or after the remediation of an environmental release. This is especially true for persistent agents such as Bacillus anthracis spores, the causative agent of anthrax. Studies were conducted to examine aerosol methods needed for consistent daily low aerosol concentrations to deliver a low dose (less than 10⁶ colony forming units (CFU)) of B. anthracis spores and included a pilot feasibility characterization study, an acute exposure study, and a multiple fifteen-day exposure study. This manuscript focuses on the state-of-the-science aerosol methodologies used to generate and aerosolize consistent daily low aerosol concentrations and resultant low inhalation doses. The pilot feasibility characterization study determined that the aerosol system was consistent and capable of producing very low aerosol concentrations. In the acute, single-day exposure experiment, targeted inhaled doses of 1 × 10², 1 × 10³, 1 × 10⁴, and 1 × 10⁵ CFU were used. In the multiple daily exposure experiment, rabbits were exposed on multiple days to targeted inhaled doses of 1 × 10², 1 × 10³, and 1 × 10⁴ CFU. In all studies, targeted inhaled doses remained fairly consistent from rabbit to rabbit and day to day. The aerosol system produced aerosolized spores within the optimal mass median aerodynamic diameter particle size range to reach deep lung alveoli. Consistency of the inhaled dose was aided by monitoring and recording respiratory parameters during the exposure with real-time plethysmography. Overall, the presented results show that the animal aerosol system was stable and highly reproducible between different studies and multiple exposure days.
Schmidtke, Daniel; Gemmer, Jochen
2016-01-01
Closed quantum systems obey the Schrödinger equation, whereas nonequilibrium behavior of many systems is routinely described in terms of classical, Markovian stochastic processes. Evidently, there are fundamental differences between those two types of behavior. We discuss the conditions under which the unitary dynamics may be mapped onto pertinent classical stochastic processes. This is first principally addressed based on the notions of "consistency" and "Markovianity." Numerical data are presented that show that the above conditions are to good approximation fulfilled for Heisenberg-type spin models comprising 12-20 spins. The accuracy to which these conditions are met increases with system size.
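A classical analogue of the "Markovianity" condition discussed here is the Chapman-Kolmogorov relation P(2) = P(1)·P(1), which coarse-grained dynamics generally violate. The sketch below demonstrates this with an invented 3-state Markov chain lumped into 2 observed states; the paper applies the analogous test to propagators extracted from unitary spin dynamics.

```python
import numpy as np

# Chapman-Kolmogorov probe of (non-)Markovianity: coarse-graining a
# 3-state Markov chain into 2 lumped states generally breaks
# P(2) = P(1) @ P(1).  Chain and lumping are toy choices.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
lump = [[0, 1], [2]]                       # states {0,1} -> A, {2} -> B

# stationary distribution: left eigenvector of T for eigenvalue 1
w, V = np.linalg.eig(T.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

def lumped(Tn):
    # observed transition matrix of the coarse-grained process
    P = np.zeros((2, 2))
    for a, ia in enumerate(lump):
        for b, ib in enumerate(lump):
            P[a, b] = sum(pi[i] * Tn[i, j]
                          for i in ia for j in ib) / pi[ia].sum()
    return P

P1 = lumped(T)                             # one-step propagator
P2 = lumped(T @ T)                         # observed two-step propagator
ck_violation = np.abs(P2 - P1 @ P1).max()
print(f"Chapman-Kolmogorov violation: {ck_violation:.4f}")
```

The numerical finding in the abstract corresponds to this violation shrinking as the spin system grows, so the reduced dynamics become effectively Markovian.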
Non-Perturbative Self-Consistent Model in SU(N) Gauge Field Theory
Directory of Open Access Journals (Sweden)
Koshelkin A.V.
2012-06-01
A non-perturbative quasi-classical model in a gauge theory with the Yang-Mills (YM) field is developed. The self-consistent solutions of the Dirac equation in the SU(N) gauge field, taken in the eikonal approximation, and of the Yang-Mills equations containing the external fermion current are obtained. It is shown that the developed model has self-consistent solutions of the Dirac and Yang-Mills equations at N ≥ 3. The solutions exist provided that the fermion and gauge fields exist simultaneously, so that the fermion current completely compensates the current generated by the gauge field through its self-interaction.
Directory of Open Access Journals (Sweden)
Yen Na Yum
2014-04-01
Phonological access is an important component in theories and models of word reading. However, phonological regularity and consistency effects are not clearly separable in alphabetic writing systems. We investigated these effects in Chinese, where the two variables are operationally distinct. In this orthographic system, regularity is defined as the congruence between the pronunciation of a complex character (or phonogram) and that of its phonetic radical, while phonological consistency indexes the proportion of orthographic neighbors that share the same pronunciation as the phonogram. In the current investigation, regularity and consistency were contrasted in an event-related potential (ERP) study using a lexical decision task and a delayed naming task with native Chinese readers. ERP results showed that effects of regularity occurred early after stimulus onset and were long-lasting. Regular characters elicited larger N170, smaller P200, and larger N400 compared to irregular characters. In contrast, significant effects of consistency were only seen at the P200, and consistent characters showed a greater P200 than inconsistent characters. Thus, both the time course and the direction of the effects indicated that regularity and consistency operated under different mechanisms and were distinct constructs. Additionally, both of these phonological effects were only found in the delayed naming task and absent in lexical decision, suggesting that phonological access was non-obligatory for lexical decision. The study demonstrated cross-language variability in how phonological information was accessed from print and how task demands could influence this process.
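The consistency measure defined in the abstract, the proportion of a phonogram's orthographic neighbors sharing its pronunciation, is easy to operationalize. The sketch below uses a tiny invented "dictionary" of characters sharing the phonetic radical 青 and ignores tone when comparing pronunciations (a simplifying assumption; real norms come from large character databases).

```python
# Toy illustration of the consistency measure from the abstract: the
# proportion of a phonogram's orthographic neighbors (characters with
# the same phonetic radical) sharing its pronunciation.  The dictionary
# is a tiny invented sample; tone is ignored here by assumption.
neighbors = {
    # phonetic radical -> list of (character, pinyin-with-tone) pairs
    "青": [("清", "qing1"), ("晴", "qing2"), ("請", "qing3"), ("猜", "cai1")],
}

def consistency(target_char, target_pron, radical):
    group = neighbors[radical]
    same = sum(1 for ch, pr in group
               if ch != target_char and pr[:-1] == target_pron[:-1])
    others = len(group) - 1
    return same / others if others else 0.0

# 清 (qing1): 2 of its 3 neighbors share the segmental pronunciation
print(f"consistency of 清: {consistency('清', 'qing1', '青'):.3f}")
```

Regularity, by contrast, would be a binary check of the phonogram's pronunciation against that of the radical itself, which is why the two variables can be crossed in Chinese.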
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new eddy-viscosity model based on the vortex stretching magnitude is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
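The vortex stretching magnitude mentioned at the end is computable directly from the local velocity gradient. The sketch below evaluates it for a sample (trace-free) gradient tensor and applies one dimensionally consistent eddy-viscosity scaling; the model constant, filter width, and the precise closure form are assumptions, not the authors' exact definition.

```python
import numpy as np

# Sketch of a velocity-gradient-based eddy-viscosity evaluation.  Only
# the vortex-stretching magnitude |S w| is taken from the abstract; the
# scaling below is one dimensionally consistent choice, not necessarily
# the authors' closure.  G, C and delta are hypothetical values.
G = np.array([[0.1,  0.4,  0.0],     # sample velocity-gradient tensor,
              [-0.2, 0.0,  0.3],     # trace(G) = 0 for an
              [0.1, -0.1, -0.1]])    # incompressible flow

S = 0.5 * (G + G.T)                          # strain-rate tensor
W = 0.5 * (G - G.T)                          # rotation-rate tensor
w = np.array([W[2, 1] - W[1, 2],             # vorticity vector from the
              W[0, 2] - W[2, 0],             # antisymmetric part of G
              W[1, 0] - W[0, 1]])

stretch = np.linalg.norm(S @ w)              # vortex-stretching magnitude
C, delta = 0.5, 0.05                         # assumed constant, filter width
nu_e = (C * delta) ** 2 * stretch / np.linalg.norm(w)
print(f"|S w| = {stretch:.4f}, nu_e = {nu_e:.2e}")
```

Constraints like near-wall scaling are then checked by asking how such an expression behaves as the gradient tensor approaches its wall-limiting form.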
2012-01-01
We develop a first-principles computational method for investigating the dielectric screening in extended systems using the self-consistent Sternheimer equation and localized non-orthogonal basis sets. Our approach does not require the explicit calculation of unoccupied electronic states, only uses two-center integrals, and has a theoretical scaling of order O(N^3). We demonstrate this method by comparing our calculations for silicon, germanium, diamond, and LiCl with reference plane-wave calculations.
Institute of Scientific and Technical Information of China (English)
John Jack P. RIEGEL III; David DAVISON
2016-01-01
Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D=10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a baseline with a full
O. Fovet; L. Ruiz; M. Hrachowitz; M. Faucheux; C. Gascuel-Odoux
2015-01-01
While most hydrological models reproduce the general flow dynamics, they frequently fail to adequately mimic system-internal processes. In particular, the relationship between storage and discharge, which often follows annual hysteretic patterns in shallow hard-rock aquifers, is rarely considered in modelling studies. One main reason is that catchment storage is...
Institute of Scientific and Technical Information of China (English)
YIN; Changming; ZHAO; Lincheng; WEI; Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors $Z_i$ and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i'$, a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates for the regression parameter vector are asymptotically normal and strongly consistent.
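The estimates whose consistency is proved here can be computed by solving the quasi-score equation with Fisher scoring. The sketch below does this for a hypothetical Poisson-type GLM with log link on simulated data (the design, sample size, and true coefficients are invented for illustration).

```python
import numpy as np

# Sketch of maximum quasi-likelihood estimation for a Poisson-type GLM
# with log link, solved by Fisher scoring.  Data are simulated; the
# paper's results concern the asymptotic behavior of such estimates.
rng = np.random.default_rng(1)
n, beta_true = 2000, np.array([0.5, -0.3])
Z = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
y = rng.poisson(np.exp(Z @ beta_true))

beta = np.zeros(2)
for _ in range(25):                       # Fisher scoring iterations
    mu = np.exp(Z @ beta)
    score = Z.T @ (y - mu)                # quasi-score equation
    info = Z.T @ (mu[:, None] * Z)        # Fisher information
    beta = beta + np.linalg.solve(info, score)

print("estimate:", np.round(beta, 3), " truth:", beta_true)
```

Strong consistency in the theorem corresponds to the printed estimate converging to the truth almost surely as n grows, under the stated eigenvalue condition on $\sum_{i=1}^{n} Z_i Z_i'$.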
ICFD modeling of final settlers - developing consistent and effective simulation model structures
DEFF Research Database (Denmark)
Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham
analysis exercises is kept to a minimum (4). Consequently, detailed information related to, for instance, design boundaries may be ignored, and their effects may only be accounted for through calibration of model parameters used as catchalls, and by arbitrary amendments of structural uncertainty […] of (6). Further details are shown in (5). Results and discussions: Factor screening. Factor screening is carried out by imposing statistically designed moderate (under-loaded) and extreme (under-, critical and over-loaded) operational boundary conditions on the 2-D CFD SST model (8). Results obtained […]
Directory of Open Access Journals (Sweden)
Jan Zavadsky
2014-07-01
Purpose: The performance management system (PMS) is a metasystem over all business processes at the strategic and operational level. The effectiveness of the various management systems depends on many factors, one of which is the consistent definition of each system's elements. The main purpose of this study is to explore whether the performance management systems of the sample companies are consistent and how companies can create such a system. Consistency in this case is based on a homogeneous definition of the attributes relating to the performance indicator as a basic element of the PMS. Methodology: At the beginning, we used an affinity diagram that helped us to clarify and group the various attributes of performance indicators. The main research results were achieved through an empirical study carried out on a sample of Slovak companies. The selection criterion was the existence of management systems certified according to ISO 9001. Representativeness of the sample companies with respect to the above standards was confirmed by application of Pearson's chi-squared test (χ²-test). Findings: Drawing on a review of the literature, we defined four groups of attributes relating to the performance indicator: formal attributes, attributes of target value, informational attributes and attributes of evaluation. The whole set contains 21 attributes. The consistency of a PMS is based not on a maximum or minimum number of attributes, but on the same type of attributes for each performance indicator used in the PMS at both the operational and strategic level. The main findings are: companies use various financial and non-financial indicators at the strategic or operational level; companies determine various attributes of performance indicators, but most of the performance indicators are defined differently; and we identified the attributes common to the whole sample of companies. Practical implications: The research results have got an implication for
Directory of Open Access Journals (Sweden)
Elif Uğur
2017-01-01
During the prevention and treatment of cardiovascular diseases, the first cause of death in the world, diet has a vital role. While nutrition programs for cardiovascular health generally focus on lipids and carbohydrates, the effects of proteins are less well studied. This review therefore examines the effects of proteins, amino acids, and other amine-containing compounds on the cardiovascular system. Because animal- and plant-derived proteins have different compositions in different foods such as dairy products, eggs, meat, chicken, fish, pulses and grains, their effects on blood pressure and the regulation of the lipid profile differ. Likewise, the amino acids that make up proteins have different effects on the cardiovascular system. In particular, sulfur-containing amino acids, branched-chain amino acids, aromatic amino acids, arginine, ornithine, citrulline, glycine, and glutamine may affect the cardiovascular system through different metabolic pathways. In this context, one-carbon metabolism, hormone synthesis, stimulation of signaling pathways, and the effects of intermediate and final products formed as a result of amino acid metabolism are examined. Besides proteins and amino acids, other amine-containing compounds in the diet include trimethylamine N-oxide, heterocyclic aromatic amines, polycyclic aromatic hydrocarbons, and products of the Maillard reaction. These amine-containing compounds generally increase the risk of cardiovascular disease by stimulating oxidative stress, inflammation, and the formation of atherosclerotic plaque.
Self-consistent Dark Matter simplified models with an s-channel scalar mediator
Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.
2017-03-01
We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge-invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two-Higgs-doublet-model (2HDM) extensions of this scenario, where the singlet mixes with the second Higgs doublet. Compared with the one-doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings yet keep potentially dangerous flavour-violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.
Self-Consistent Ring Current/Electromagnetic Ion Cyclotron Waves Modeling
Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.
2006-01-01
The self-consistent treatment of RC ion dynamics and EMIC waves, which are thought to exert important influences on the ion dynamical evolution, is an important missing element in our understanding of the storm- and recovery-time ring current evolution. For example, the EMIC waves cause RC decay on a time scale of about one hour or less during the main phase of storms. The oblique EMIC waves damp due to Landau resonance with the thermal plasmaspheric electrons, and subsequent transport of the dissipating wave energy into the ionosphere below causes an ionospheric temperature enhancement. Under certain conditions, relativistic electrons, with energies ≥1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves developed by Khazanov et al. [2002, 2003] to include heavy ions and EMIC wave propagation effects in the global dynamics of the self-consistent RC-EMIC wave coupling. Results of our newly developed model will be presented at the meeting, focusing mainly on the dynamics of EMIC waves and on a comparison with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.
Synchronization in nodes of complex networks consisting of complex chaotic systems
Directory of Open Access Journals (Sweden)
Qiang Wei
2014-07-01
Full Text Available A new synchronization method is investigated for the nodes of complex networks consisting of complex chaotic systems. When the complex networks achieve synchronization, different components of the complex state variable synchronize up to different scaling complex functions via a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations show the effectiveness of the proposed method.
Predicting giant magnetoresistance using a self-consistent micromagnetic diffusion model
Abert, Claas; Bruckner, Florian; Vogler, Christoph; Praetorius, Dirk; Suess, Dieter
2015-01-01
We propose a self-consistent micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. The model and its finite-element implementation are validated by current-driven motion of a magnetic vortex structure. Potential calculations for a magnetic multilayer structure with perpendicular current flow confirm experimental findings of a non-sinusoidal dependence of the resistivity on the tilting angle of the magnetization in the different layers. While the sinusoidal dependence is observed in certain material-parameter limits, a realistic choice of these parameters leads to a notably narrower distribution.
Self-consistent tight-binding atomic-relaxation model of titanium dioxide
Energy Technology Data Exchange (ETDEWEB)
Schelling, P.K.; Yu, N.; Halley, J.W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States)
1998-07-01
We report a self-consistent tight-binding atomic-relaxation model for titanium dioxide. We fit the parameters of the model to first-principles electronic structure calculations of the band structure and energy as a function of lattice parameters in bulk rutile. We report the method and results for the surface structures and energies of relaxed (110), (100), and (001) surfaces of rutile TiO2, as well as work functions for these surfaces. Good agreement with first-principles calculations and experiments, where available, is found for these surfaces. We find significant charge transfer (increased covalency) at the surfaces. © 1998 The American Physical Society
A Self-Consistent Model for Thermal Oxidation of Silicon at Low Oxide Thickness
Directory of Open Access Journals (Sweden)
Gerald Gerlach
2016-01-01
Full Text Available Thermal oxidation of silicon belongs to the most decisive steps in microelectronic fabrication because it allows creating electrically insulating areas which enclose electrically conductive devices and device areas, respectively. Deal and Grove developed the first model (DG-model) for the thermal oxidation of silicon, describing the oxide thickness versus oxidation time relationship with very good agreement for oxide thicknesses of more than 23 nm. Their approach, termed the general relationship, is the basis of many similar investigations. However, measurement results show that the DG-model does not apply to very thin oxides in the range of a few nm. Additionally, it is inherently not self-consistent. The aim of this paper is to develop a self-consistent model that is based on the continuity equation instead of Fick's law, as the DG-model is. As literature data show, the relationship between silicon oxide thickness and oxidation time is governed, down to oxide thicknesses of just a few nm, by a power-of-time law. Given the time-independent surface concentration of oxidants at the oxide surface, Fickian diffusion appears to be negligible for oxidant migration. The oxidant flux is instead carried by non-Fickian flux processes that depend on sites able to lodge dopants (oxidants), the so-called DOCC-sites, as well as on the dopant jump rate.
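The power-of-time law x = a·t^n that this abstract contrasts with the DG-model can be illustrated with a simple least-squares fit; the thickness/time values below are hypothetical numbers chosen only to show the procedure, not measurement data from the paper.

```python
import numpy as np

# Hypothetical oxide-thickness measurements (nm) versus oxidation time (min),
# used only to illustrate fitting the power-of-time law x = a * t**n.
t = np.array([1.0, 2.0, 5.0, 10.0, 30.0, 60.0])
x = np.array([1.1, 1.5, 2.2, 2.9, 4.8, 6.4])

# Linearise, log x = log a + n log t, then solve by ordinary least squares.
n, log_a = np.polyfit(np.log(t), np.log(x), 1)
a = np.exp(log_a)
print(f"fitted exponent n = {n:.2f}, prefactor a = {a:.2f} nm")
```

A sub-linear exponent (n well below 1) is the signature of the growth regime discussed above.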
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
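A minimal numerical sketch of the hyperbolic core of such a model (hindered settling only, no compression, dispersion, or feed terms, closed batch column) might use a Godunov-type finite-volume update; the Vesilind-type flux and all parameter values below are illustrative assumptions, not the constitutive relations or the numerical method of the paper.

```python
import numpy as np

# 1-D batch-settling sketch: du/dt + d f(u)/dz = 0 with a hindered-settling
# (Vesilind-type) flux. Parameter values are illustrative, not calibrated.
v0, rh = 7.0, 0.5                       # settling-velocity scale, decay rate
f = lambda u: v0 * u * np.exp(-rh * u)  # non-monotone batch flux function

def godunov_flux(ul, ur, samples=64):
    """Godunov numerical flux for a scalar conservation law (by sampling)."""
    s = np.linspace(min(ul, ur), max(ul, ur), samples)
    return f(s).min() if ul <= ur else f(s).max()

nz = 100
z, dz = np.linspace(0.0, 1.0, nz), 1.0 / nz
u = np.where(z < 0.8, 3.0, 0.0)         # initial suspension occupying z < 0.8
dt = 0.4 * dz / v0                      # CFL-limited explicit time step
for _ in range(int(0.1 / dt)):
    F = np.array([godunov_flux(u[i], u[i + 1]) for i in range(nz - 1)])
    F = np.concatenate(([0.0], F, [0.0]))  # closed top and bottom (batch column)
    u = u - dt / dz * (F[1:] - F[:-1])
print("mass conserved:", np.isclose(u.sum() * dz, 2.4))
```

The zero boundary fluxes make the update exactly conservative, which is one of the reliability properties the paper emphasises.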
Towards a self-consistent halo model for the nonlinear large-scale structure
Schmidt, Fabian
2015-01-01
The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: $(i)$ they do not enforce the stress-energy conservation of matter; $(ii)$ they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model ("EHM") that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed, and results of perturbation theory and the effective field theory can in principle be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written he...
Amruth, B. R.; Patwardhan, Ajay
2006-01-01
Cosmological inflation models modified to include recent cosmological observations have been an active area of research since the WMAP 3 results, which have given us high-precision information about the composition of dark matter, normal matter and dark energy and about the anisotropy at the 300,000-year horizon. We work with the inflation models of Guth and Linde and modify them by introducing a doublet scalar field to give normal matter particles and their supersymmetric partners, which result in the normal and dark matter of our universe. We include the cosmological constant term, as the vacuum expectation value of the stress-energy tensor, as the dark energy. We calibrate the parameters of our model using recent observations of density fluctuations. We develop a model which consistently fits the recent observations.
SALT Spectropolarimetry and Self-Consistent SED and Polarization Modeling of Blazars
Böttcher, Markus; van Soelen, Brian; Britto, Richard; Buckley, David; Marais, Johannes; Schutte, Hester
2017-09-01
We report on recent results from a target-of-opportunity program to obtain spectropolarimetry observations with the Southern African Large Telescope (SALT) on flaring gamma-ray blazars. SALT spectropolarimetry and contemporaneous multi-wavelength spectral energy distribution (SED) data are being modelled self-consistently with a leptonic single-zone model. Such modeling provides an accurate estimate of the degree of order of the magnetic field in the emission region and the thermal contributions (from the host galaxy and the accretion disk) to the SED, thus putting strong constraints on the physical parameters of the gamma-ray emitting region. For the specific case of the $\\gamma$-ray blazar 4C+01.02, we demonstrate that the combined SED and spectropolarimetry modeling constrains the mass of the central black hole in this blazar to $M_{\\rm BH} \\sim 10^9 \\, M_{\\odot}$.
Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers
Cartar, William; Mørk, Jesper; Hughes, Stephen
2017-08-01
We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) are contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations, which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N = 9, which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from the simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling
Energy Technology Data Exchange (ETDEWEB)
Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)
2014-02-14
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄ < 0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs for membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m ≫ 0, especially at low ionic
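The packing-parameter argument invoked above, p = v/(a0·lc), is easy to illustrate; the molecular volumes, head-group areas, and tail lengths below are rough textbook-order values for generic lipid classes, assumptions on our part rather than numbers from this study.

```python
# Israelachvili's surfactant packing parameter p = v / (a0 * lc), used above to
# rationalise the signs of k-bar and J0^m. Rough heuristic: p < 1/3 spherical
# micelles, p near 1 planar bilayers, p > 1 inverted (e.g. hexagonal) phases.
def packing_parameter(v_nm3, a0_nm2, lc_nm):
    return v_nm3 / (a0_nm2 * lc_nm)

# (tail volume nm^3, head-group area nm^2, tail length nm) -- illustrative only
lipids = {"single-tail detergent": (0.30, 0.65, 1.5),   # cone -> micelles
          "PC (double tail)":      (1.05, 0.65, 1.75),  # cylinder -> bilayer
          "PE (small head)":       (1.05, 0.50, 1.75)}  # inverted cone -> H_II
for name, (v, a0, lc) in lipids.items():
    print(f"{name}: p = {packing_parameter(v, a0, lc):.2f}")
```

The small-head PE entry exceeding p = 1 mirrors the negative monolayer curvature discussed in the abstract.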
Dragović, Ivana; Turajlić, Nina; Pilčević, Dejan; Petrović, Bratislav; Radojević, Dragan
2015-01-01
Fuzzy inference systems (FIS) enable automated assessment and reasoning in a logically consistent manner akin to the way in which humans reason. However, since no conventional fuzzy set theory is in the Boolean frame, it is proposed that Boolean consistent fuzzy logic should be used in the evaluation of rules. The main distinction of this approach is that it requires the execution of a set of structural transformations before the actual values can be introduced, which can, in certain cases, lead to different results. While a Boolean consistent FIS could be used for establishing the diagnostic criteria for any given disease, in this paper it is applied for determining the likelihood of peritonitis, as the leading complication of peritoneal dialysis (PD). Given that patients could be located far away from healthcare institutions (as peritoneal dialysis is a form of home dialysis) the proposed Boolean consistent FIS would enable patients to easily estimate the likelihood of them having peritonitis (where a high likelihood would suggest that prompt treatment is indicated), when medical experts are not close at hand. PMID:27069500
Giorgi, F.; Coppola, E.; Raffaele, F.
2014-10-01
We analyze trends of six daily precipitation-based and physically interconnected hydroclimatic indices in an ensemble of historical and 21st century climate projections under forcing from increasing greenhouse gas (GHG) concentrations (Representative Concentration Pathways (RCP)8.5), along with gridded (land only) observations for the late decades of the twentieth century. The indices include metrics of intensity (SDII) and extremes (R95) of precipitation, dry (DSL), and wet spell length, the hydroclimatic intensity index (HY-INT), and a newly introduced index of precipitation area (PA). All the indices in both the 21st century and historical simulations provide a consistent picture of a predominant shift toward a hydroclimatic regime of more intense, shorter, less frequent, and less widespread precipitation events in response to GHG-induced global warming. The trends are larger and more spatially consistent over tropical than extratropical regions, pointing to the importance of tropical convection in regulating this response, and show substantial regional spatial variability. Observed trends in the indices analyzed are qualitatively and consistently in line with the simulated ones, at least at the global and full tropical scale, further supporting the robustness of the identified prevailing hydroclimatic responses. The HY-INT, PA, and R95 indices show the most consistent response to global warming, and thus offer the most promising tools for formal hydroclimatic model validation and detection/attribution studies. The physical mechanism underlying this response and some of the applications of our results are also discussed.
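Three of the indices named above are simple to compute from a daily series. The sketch below uses a synthetic precipitation record and a 1 mm/day wet-day threshold (a common convention, assumed here), and takes R95 as the 95th percentile of wet-day amounts; the paper's exact index definitions may differ in detail.

```python
import numpy as np

# Sketch of daily-precipitation indices of the kind analysed above, for one
# grid-point series. Threshold and index conventions are our assumptions.
def hydro_indices(pr, wet_thresh=1.0):
    wet = pr >= wet_thresh
    sdii = pr[wet].mean() if wet.any() else 0.0             # mean wet-day intensity
    # longest dry spell: longest run of consecutive sub-threshold days
    dsl = max((len(run) for run in
               "".join("W" if w else "D" for w in wet).split("W")), default=0)
    r95 = np.percentile(pr[wet], 95) if wet.any() else 0.0  # heavy-precip level
    return sdii, dsl, r95

rng = np.random.default_rng(0)
pr = rng.gamma(shape=0.4, scale=8.0, size=365)  # synthetic daily precip (mm)
sdii, dsl, r95 = hydro_indices(pr)
print(f"SDII={sdii:.1f} mm/day, max DSL={dsl} days, R95={r95:.1f} mm")
```

The HY-INT index of the abstract combines normalized intensity and dry-spell metrics of this kind into a single indicator.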
Relativistic Consistent Angular-Momentum Projected Shell-Model:Relativistic Mean Field
Institute of Scientific and Technical Information of China (English)
LI Yan-Song; LONG Gui-Lu
2004-01-01
We develop a relativistic nuclear structure model, the relativistic consistent angular-momentum projected shell model (RECAPS), which combines relativistic mean-field theory with the angular-momentum projection method. In this new model, nuclear ground-state properties are first calculated consistently using relativistic mean-field (RMF) theory. Then the angular-momentum projection method is used to project out states with good angular momentum from a few important configurations. By diagonalizing the Hamiltonian, the energy levels and wave functions are obtained. This model is a new attempt at understanding the nuclear structure of normal nuclei and at predicting the properties of nuclei far from stability. In this paper we describe the treatment of the relativistic mean field. A computer code, RECAPS-RMF, is developed. It solves the relativistic mean field with axially symmetric deformation in a spherical harmonic oscillator basis. Comparisons between our calculations and existing relativistic mean-field calculations are made to test the model. These include the ground-state properties of the spherical nuclei 16O and 208Pb and the deformed nucleus 20Ne. Good agreement is obtained.
Ring current Atmosphere interactions Model with Self-Consistent Magnetic field
Energy Technology Data Exchange (ETDEWEB)
2016-09-09
The Ring current Atmosphere interactions Model with Self-Consistent magnetic field (B) is a unique code that combines a kinetic model of ring current plasma with a three-dimensional force-balanced model of the terrestrial magnetic field. The kinetic portion, RAM, solves the kinetic equation to yield the bounce-averaged distribution function as a function of azimuth, radial distance, energy, and pitch angle for three ion species (H+, He+, and O+) and, optionally, electrons. The domain is a circle in the Solar-Magnetic (SM) equatorial plane with a radial span of 2 to 6.5 RE. It has an energy range of approximately 100 eV to 500 keV. The 3-D force-balanced magnetic field model, SCB, balances the JxB force with the divergence of the general pressure tensor to calculate the magnetic field configuration within its domain. The domain ranges from near the Earth's surface, where the field is assumed dipolar, to the shell created by field lines passing through the SM equatorial plane at a radial distance of 6.5 RE. The two codes work in tandem, with RAM providing anisotropic pressure to SCB and SCB returning the self-consistent magnetic field through which RAM plasma is advected.
Silvis, Maurits H; Verstappen, Roel
2016-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that subgrid-scale models should be consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...
RNA secondary structure modeling at consistent high accuracy using differential SHAPE.
Rice, Greggory M; Leonard, Christopher W; Weeks, Kevin M
2014-06-01
RNA secondary structure modeling is a challenging problem, and recent successes have raised the standards for accuracy, consistency, and tractability. Large increases in accuracy have been achieved by including data on reactivity toward chemical probes: Incorporation of 1M7 SHAPE reactivity data into an mfold-class algorithm results in median accuracies for base pair prediction that exceed 90%. However, a few RNA structures are modeled with significantly lower accuracy. Here, we show that incorporating differential reactivities from the NMIA and 1M6 reagents--which detect noncanonical and tertiary interactions--into prediction algorithms results in highly accurate secondary structure models for RNAs that were previously shown to be difficult to model. For these RNAs, 93% of accepted canonical base pairs were recovered in SHAPE-directed models. Discrepancies between accepted and modeled structures were small and appear to reflect genuine structural differences. Three-reagent SHAPE-directed modeling scales concisely to structurally complex RNAs to resolve the in-solution secondary structure analysis problem for many classes of RNA.
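Chemical-probing reactivities typically enter an mfold-class algorithm as a per-nucleotide pseudo-free-energy change added to each stacked base pair, dG(i) = m·ln(reactivity + 1) + b. The slope/intercept values below are the widely used single-reagent (1M7) parameterization, shown only as an illustration of the mechanism; they are not the three-reagent differential scheme of this paper.

```python
import math

# SHAPE pseudo-free-energy change (kcal/mol) added per stacked nucleotide:
# high reactivity (flexible, likely unpaired) penalises pairing, near-zero
# reactivity rewards it. m and b values are the common 1M7 parameterization,
# used here as an assumption for illustration.
def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    return m * math.log(reactivity + 1.0) + b

for r in (0.0, 0.5, 2.0):  # unreactive -> bonus; highly reactive -> penalty
    print(f"reactivity {r:.1f}: dG_SHAPE = {shape_pseudo_energy(r):+.2f} kcal/mol")
```

A differential (NMIA minus 1M6) signal could be folded in through an analogous additional term.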
Self-consistent field theory based molecular dynamics with linear system-size scaling.
Richters, Dorothee; Kühne, Thomas D
2014-04-01
We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand-canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.
Institute of Scientific and Technical Information of China (English)
ZHOU Qiu-Jua; BI Ya-Jing; XIANG Jun-Feng; TANG Ya-Lin; YANG Qian-Fan; XU Guang-Zhi
2008-01-01
A potential targeting drug delivery system consisting of folate (FA), the targeting molecule, human serum albumin (HSA), the carrier, and mitoxantrone (MTO), the medicine, has been designed. Data obtained by UV absorption, fluorescence, and NMR techniques indicated the formation of ternary complexes and the possible application of FA, MTO, and HSA to building a targeting drug delivery system. Furthermore, a cytotoxicity assay indicated that the toxicity of FA-HSA-MTO against the PC-3 cell line was 79.95%, much higher than that of free MTO tested under exactly the same conditions. This roughly 30% increase in toxicity can be attributed to the targeting effect of FA. Thus, the feasibility and validity of a novel targeting drug delivery system, FA-HSA-MTO, was confirmed.
Gas cooling in semi-analytic models and SPH simulations: are results consistent?
Saro, A; Borgani, S; Dolag, K
2010-01-01
We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical SPH simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: a) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; b) while all stars associated with the BCG were formed in its progenitors i...
A Fully Nonlinear, Dynamically Consistent Numerical Model for Ship Maneuvering in a Seaway
Directory of Open Access Journals (Sweden)
Ray-Qing Lin
2011-01-01
Full Text Available This is the continuation of our research on the development of a fully nonlinear, dynamically consistent, numerical ship motion model (DiSSEL). In this paper we report our results on modeling ship maneuvering in an arbitrary seaway, one of the most challenging and important problems in seakeeping. In our modeling, we developed an adaptive algorithm to maintain dynamical balances numerically as the encounter frequencies (the wave frequencies as measured on the ship) vary with the ship's maneuvering state. The key to this new algorithm is to evaluate the encounter frequency variation differently in the physical domain and in the frequency domain, thus effectively eliminating possible numerical dynamical imbalances. We have tested this algorithm against several well-documented maneuvering experiments, and our results agree very well with the experimental data. In particular, the numerical time series of roll and pitch motions and the numerical ship tracks (i.e., surge, sway, and yaw) are nearly identical to those of the experiments.
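The encounter frequency at the heart of the algorithm above has a simple deep-water form; the heading-angle convention and the numbers below are our assumptions for illustration, not values from the paper.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

# Deep-water encounter frequency seen on a moving ship:
#   omega_e = omega0 - (omega0**2 * U / g) * cos(chi),
# where chi is the wave direction relative to the ship's course
# (180 deg = head seas in the convention assumed here).
def encounter_frequency(omega0, U, chi_deg):
    return omega0 - (omega0**2 * U / g) * math.cos(math.radians(chi_deg))

omega0, U = 0.6, 7.7  # wave frequency (rad/s), ship speed (m/s, ~15 kn)
for chi in (180.0, 90.0, 0.0):  # head, beam, following seas
    print(f"chi={chi:5.1f} deg: omega_e = "
          f"{encounter_frequency(omega0, U, chi):.3f} rad/s")
```

As the ship turns, chi (and hence omega_e) changes continuously, which is exactly why a maneuvering simulation must re-evaluate the balance between domains.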
Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.; Waas, Anthony M.
2013-01-01
A mesh-objective crack band model was implemented within the generalized method of cells micromechanics theory. This model was linked to a macroscale finite element model to predict post-peak strain softening in composite materials. Although a mesh-objective theory was implemented at the microscale, it does not preclude pathological mesh dependence at the macroscale. To ensure mesh objectivity at both scales, the energy density and the energy release rate must be preserved identically across the two scales. This requires a consistent characteristic length, or localization limiter. The effects of scaling (or not scaling) the dimensions of the microscale repeating unit cell (RUC) according to the macroscale element size in a multiscale analysis were investigated using two examples. Additionally, the ramifications of the macroscale element shape, compared to the RUC, were studied.
Consistent neutron star models with magnetic field dependent equations of state
Chatterjee, Debarati; Novak, Jerome; Oertel, Micaela
2014-01-01
We present a self-consistent model for the study of the structure of a neutron star in strong magnetic fields. Starting from a microscopic Lagrangian, this model includes the effect of the magnetic field on the equation of state, the interaction of the electromagnetic field with matter (magnetisation), and anisotropies in the energy-momentum tensor, as well as general relativistic aspects. We build numerical axisymmetric stationary models and show the applicability of the approach with one example quark matter equation of state (EoS) often employed in the recent literature for studies of strongly magnetised neutron stars. For this EoS, the inclusion of the magnetic field dependence or of the magnetisation does not increase the maximum mass significantly, in contrast to what has been claimed in previous studies.
A self consistent chemically stratified atmosphere model for the roAp star 10 Aquilae
Nesvacil, Nicole; Ryabchikova, Tanya A; Kochukhov, Oleg; Akberov, Artur; Weiss, Werner W
2012-01-01
Context: Chemically peculiar A-type (Ap) stars are a subgroup of the CP2 stars which exhibit anomalous overabundances of numerous elements, e.g. Fe, Cr, Sr and rare earth elements. The pulsating subgroup of the Ap stars, the roAp stars, present ideal laboratories to observe and model pulsational signatures as well as the interplay of the pulsations with strong magnetic fields and vertical abundance gradients. Aims: Based on high-resolution spectroscopic observations and observed stellar energy distributions, we construct a self-consistent model atmosphere that accounts for modulations of the temperature-pressure structure caused by vertical abundance gradients for the roAp star 10 Aquilae (HD 176232). We demonstrate that such an analysis can be used to precisely determine the fundamental atmospheric parameters required for pulsation modelling. Methods: Average abundances were derived for 56 species. For Mg, Si, Ca, Cr, Fe, Co, Sr, Pr, and Nd, vertical stratification profiles were empirically derived using the...
A self-consistent model for a longitudinal discharge excited He-Sr recombination laser
Energy Technology Data Exchange (ETDEWEB)
Carman, R.J. (Centre for Lasers and Applications, Macquarie University, Sydney NSW 2109 (AU))
1990-09-01
A computer model has been developed to simulate the plasma kinetics in a high-repetition frequency, discharge excited He-Sr recombination laser. A detailed rate equation analysis, incorporating about 80 collisional and radiative processes, is used to determine the temporal and spatial (radial) behavior of the discharge parameters and the intracavity laser field during the current pulse, recombination phase, and afterglow periods. The set of coupled first-order ordinary differential equations used to describe the plasma and external electrical circuit are integrated over multiple discharge cycles to yield fully self-consistent results. The computer model has been used to simulate the behavior of the laser for a set of standard conditions corresponding to typical operating conditions. The species population densities predicted by the model are compared with radial and time-dependent Hook measurements determined experimentally for the same set of standard conditions.
A heterogeneous traffic flow model consisting of two types of vehicles with different sensitivities
Li, Zhipeng; Xu, Xun; Xu, Shangzhi; Qian, Yeqing
2017-01-01
A heterogeneous car-following model is constructed for traffic flow consisting of low- and high-sensitivity vehicles. The stability criterion of the new model is obtained by using linear stability theory. We derive the neutral stability diagram for the proposed model with five distinct regions, and determine the effect of the percentage of low-sensitivity vehicles on traffic stability in each region. In addition, we further consider the special case in which the number of low-sensitivity vehicles is equal to that of high-sensitivity ones, and explore the dependence of traffic stability on the average value and the standard deviation of the two sensitivities characterizing the two vehicle types. Direct numerical simulation results verify the conclusions of the theoretical analysis.
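The long-wavelength stability criterion for optimal-velocity car-following models, a > 2V'(h*), can be checked numerically. The sketch below is not the authors' exact model: the Bando-type optimal velocity function, the two sensitivity values, and the mean-field averaging of the heterogeneous fleet are all illustrative assumptions.

```python
import numpy as np

def v_optimal(h, hc=2.0):
    # Bando-type optimal velocity function (illustrative choice)
    return np.tanh(h - hc) + np.tanh(hc)

def v_prime(h, hc=2.0, eps=1e-6):
    # numerical derivative of the optimal velocity function
    return (v_optimal(h + eps, hc) - v_optimal(h - eps, hc)) / (2 * eps)

def is_linearly_stable(a_mean, h_star):
    # classical long-wavelength criterion: stable if a > 2 V'(h*)
    return a_mean > 2.0 * v_prime(h_star)

def mixed_sensitivity(p, a_low=1.0, a_high=3.0):
    # hypothetical mean-field average sensitivity for a fleet with a
    # fraction p of low-sensitivity vehicles
    return p * a_low + (1 - p) * a_high

h_star = 2.0  # uniform headway; V'(2.0) = 1 for the tanh form above
print(is_linearly_stable(mixed_sensitivity(0.9), h_star))  # mostly low-sensitivity
print(is_linearly_stable(mixed_sensitivity(0.1), h_star))  # mostly high-sensitivity
```

With these illustrative values, a fleet dominated by low-sensitivity vehicles falls below the threshold and is linearly unstable, which mirrors the qualitative dependence on the low-sensitivity percentage described in the abstract.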
nIFTy cosmology: the clustering consistency of galaxy formation models
Pujol, Arnau; Skibba, Ramin A.; Gaztañaga, Enrique; Benson, Andrew; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofia A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; De Lucia, Gabriella; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Garcia-Bellido, Juan; Gargiulo, Ignacio D.; Gonzalez-Perez, Violeta; Helly, John; Henriques, Bruno M. B.; Hirschmann, Michaela; Knebe, Alexander; Lee, Jaehyun; Mamon, Gary A.; Monaco, Pierluigi; Onions, Julian; Padilla, Nelson D.; Pearce, Frazer R.; Power, Chris; Somerville, Rachel S.; Srisawat, Chaichalit; Thomas, Peter A.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.
2017-07-01
We present a clustering comparison of 12 galaxy formation models [including semi-analytic models (SAMs) and halo occupation distribution (HOD) models] all run on halo catalogues and merger trees extracted from a single Λ cold dark matter N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the two-point correlation functions (2PCF). We also study the implications of the different treatments of orphan galaxies (those not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions, but HOD models and SAMs disagree significantly on the orphan satellites. Although there is very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 h-1 Mpc. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites play an important role in galaxy clustering and are the main cause of the differences in clustering between HOD models and SAMs.
Energy Technology Data Exchange (ETDEWEB)
Guy, Aurélien, E-mail: aurelien.guy@onera.fr; Bourdon, Anne, E-mail: anne.bourdon@lpp.polytechnique.fr; Perrin, Marie-Yvonne, E-mail: marie-yvonne.perrin@ecp.fr [CNRS, UPR 288, Laboratoire d' Énergétique Moléculaire et Macroscopique, Combustion (EM2C), Grande Voie des Vignes, 92295 Châtenay-Malabry (France); Ecole Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry (France)
2015-04-15
In this work, a state-to-state vibrational and electronic collisional model is developed to investigate nonequilibrium phenomena behind a shock wave in an ionized nitrogen flow. In the ionization dynamics behind the shock wave, the electron energy budget is of key importance: the main depletion term is found to correspond to the electronic excitation of N atoms, while the major creation term is initially the electron-vibration term, later replaced by the electron-ion elastic exchange term. Based on these results, a macroscopic multi-internal-temperature model for the vibration of N2 and the electronic levels of N atoms is derived, with several groups of vibrational levels of N2 and electronic levels of N, each with its own internal temperature, to model the shape of the vibrational distribution of N2 and of the electronic excitation of N, respectively. In this model, energy and chemistry source terms are calculated self-consistently from the rate coefficients of the state-to-state database. For the shock wave condition studied, good agreement is observed on the ionization dynamics as well as on the atomic bound-bound radiation between the state-to-state model and the macroscopic multi-internal-temperature model with only one group of vibrational levels of N2 and two groups of electronic levels of N.
2012-06-13
Daily low-dose Bacillus anthracis spore inhalation exposures in the rabbit model. Roy E. Barnewall, Jason E. Comer, Brian D. Miller, Bradford W... Keywords: Bacillus anthracis, inhalation exposures, low-dose, subchronic exposures, spores, anthrax, aerosol system.
Validity test and its consistency in the construction of patient loyalty model
Yanuar, Ferra
2016-04-01
The main objective of the present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was then applied to empirical data for the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, and each factor was measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents who had complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant in measuring their corresponding latent variable. Service quality is measured mostly by tangibles, patient satisfaction mostly by satisfaction with service, and patient loyalty mostly by good service quality. Meanwhile, in the structural equation, this study found that patient loyalty was affected by patient satisfaction positively and directly, while service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also showed that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.
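The bootstrap check of consistency described above can be sketched in miniature. The data, the `loading` function (a plain correlation standing in for a standardized SEM loading), and the effect size are invented for illustration; only the sample size of 394 respondents is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical data: one indicator and its latent-construct proxy
n = 394  # sample size matching the study's respondent count
quality = rng.normal(size=n)
loyalty = 0.6 * quality + rng.normal(scale=0.8, size=n)

def loading(x, y):
    # correlation as a stand-in for a standardized SEM loading
    return np.corrcoef(x, y)[0, 1]

# bootstrap: resample respondents, re-estimate, inspect the spread
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boot.append(loading(quality[idx], loyalty[idx]))
boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate={loading(quality, loyalty):.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

An estimate whose bootstrap interval is narrow and excludes zero is "consistent" in the sense used in the abstract: resampling the respondents does not change the conclusion that the indicator measures its construct.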
Choi, Sung W; Gerencser, Akos A; Ng, Ryan; Flynn, James M; Melov, Simon; Danielson, Steven R; Gibson, Bradford W; Nicholls, David G; Bredesen, Dale E; Brand, Martin D
2012-11-21
Depressed cortical energy supply and impaired synaptic function are predominant associations of Alzheimer's disease (AD). To test the hypothesis that presynaptic bioenergetic deficits are associated with the progression of AD pathogenesis, we compared bioenergetic variables of cortical and hippocampal presynaptic nerve terminals (synaptosomes) from commonly used mouse models with AD-like phenotypes (J20 age 6 months, Tg2576 age 16 months, and APP/PS age 9 and 14 months) to age-matched controls. No consistent bioenergetic deficiencies were detected in synaptosomes from the three models; only APP/PS cortical synaptosomes from 14-month-old mice showed an increase in respiration associated with proton leak. J20 mice were chosen for a highly stringent investigation of mitochondrial function and content. There were no significant differences in the quality of the synaptosomal preparations or the mitochondrial volume fraction. Furthermore, respiratory variables, calcium handling, and membrane potentials of synaptosomes from symptomatic J20 mice under calcium-imposed stress were not consistently impaired. The recovery of marker proteins during synaptosome preparation was the same, ruling out the possibility that the lack of functional bioenergetic defects in synaptosomes from J20 mice was due to the selective loss of damaged synaptosomes during sample preparation. Our results support the conclusion that the intrinsic bioenergetic capacities of presynaptic nerve terminals are maintained in these symptomatic AD mouse models.
Directory of Open Access Journals (Sweden)
Francisco Rodríguez-Trelles
1998-12-01
Full Text Available Current efforts to study the biological effects of global change have focused on ecological responses, particularly shifts in species ranges. Mostly ignored are microevolutionary changes. Genetic changes may be at least as important as ecological ones in determining species' responses. In addition, such changes may be a sensitive indicator of global changes that will provide different information than that provided by range shifts. We discuss potential candidate systems to use in such monitoring programs. Studies of Drosophila subobscura suggest that its chromosomal inversion polymorphisms are responding to global warming. Drosophila inversion polymorphisms can be useful indicators of the effects of climate change on populations and ecosystems. Other species also hold the potential to become important indicators of global change. Such studies might significantly influence ecosystem conservation policies and research priorities.
Buchanan, John J; Dean, Noah
2014-02-01
The experiment undertaken was designed to elucidate the impact of model skill level on observational learning processes. The task was bimanual circle tracing with a 90° relative phase lead of one hand over the other hand. Observer groups watched videos of either an instruction model, a discovery model, or a skilled model. The instruction and skilled model always performed the task with the same movement strategy, the right-arm traced clockwise and the left-arm counterclockwise around circle templates with the right-arm leading. The discovery model used several movement strategies (tracing-direction/hand-lead) during practice. Observation of the instruction and skilled model provided a significant benefit compared to the discovery model when performing the 90° relative phase pattern in a post-observation test. The observers of the discovery model had significant room for improvement and benefited from post-observation practice of the 90° pattern. The benefit of a model is found in the consistency with which that model uses the same movement strategy, and not within the skill level of the model. It is the consistency in strategy modeled that allows observers to develop an abstract perceptual representation of the task that can be implemented into a coordinated action. Theoretically, the results show that movement strategy information (relative motion direction, hand lead) and relative phase information can be detected through visual perception processes and be successfully mapped to outgoing motor commands within an observational learning context.
Hübener, Hannes; Pérez-Osorio, Miguel A.; Ordejón, Pablo; Giustino, Feliciano
2012-06-01
We develop a first-principles computational method for investigating the dielectric screening in extended systems using the self-consistent Sternheimer equation and localized nonorthogonal basis sets. Our approach does not require the explicit calculation of unoccupied electronic states, uses only two-center integrals, and has a theoretical scaling of order O(N3). We demonstrate this method by comparing our calculations for silicon, germanium, diamond, and LiCl with reference plane-wave calculations. We show that accuracy comparable to that of plane-wave calculations can be achieved via a systematic optimization of the basis set.
A Globally Consistent Methodology for an Exposure Model for Natural Catastrophe Risk Assessment
Gunasekera, Rashmin; Ishizawa, Oscar; Pandey, Bishwa; Saito, Keiko
2013-04-01
There is a high demand for the development of a globally consistent and robust exposure data model employing a top-down approach, to be used in national-level catastrophic risk profiling for public sector liability. To this effect, there are currently several initiatives, such as the UN-ISDR Global Assessment Report (GAR) and the Global Exposure Database for the Global Earthquake Model (GED4GEM). However, their consistency and granularity differ from region to region, a problem that is overcome in the proposed approach by using national datasets, for example in the Latin America and Caribbean Region (LCR). The methodology proposed in this paper aims to produce a global open exposure dataset based upon population, country-specific building type distribution and other global/economic indicators, such as World Bank indices, suitable for natural catastrophe risk modelling purposes. The output would be a GIS raster grid at approximately 1 km spatial resolution which would characterize urbanness (building typology distribution, occupancy and use) for each cell at sub-national level, compatible with other global initiatives and datasets. It would make use of datasets on population, census, demographics, buildings and land use/land cover, which are largely available in the public domain. The resultant exposure dataset could be used in conjunction with hazard and vulnerability components to create views of risk for multiple hazards, including earthquake, flood and windstorms. The model, we hope, would also be a step towards future initiatives for open, interchangeable and compatible databases for catastrophe risk modelling. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
Self-consistent description of $\Lambda$ hypernuclei in the quark-meson coupling model
Tsushima, K; Thomas, A W
1997-01-01
The quark-meson coupling model, which has been successfully used to describe the properties of both finite nuclei and infinite nuclear matter, is applied to a study of $\Lambda$ hypernuclei. With the assumption that the (self-consistently) exchanged scalar and vector mesons couple only to the u and d quarks, a very weak spin-orbit force in the $\Lambda$-nucleus interaction is achieved automatically. This can be interpreted as a direct consequence of the quark structure of the $\Lambda$ hyperon. Possible implications and extensions of the present investigation are also discussed.
Premixed Combustion Simulations with a Self-Consistent Plasma Model for Initiation
Energy Technology Data Exchange (ETDEWEB)
Sitaraman, Hariswaran; Grout, Ray
2016-01-08
Combustion simulations of H2-O2 ignition are presented here, with a self-consistent plasma fluid model for ignition initiation. The plasma fluid equations for a nanosecond pulsed discharge are solved and coupled with the governing equations of combustion. The discharge operates through the propagation of a cathode-directed streamer, with radical species produced at the streamer heads. These radical species play an important role in the ignition process. The streamer propagation speeds and radical production rates were found to be sensitive to gas temperature and fuel-oxidizer equivalence ratio. The oxygen radical production rate depends strongly on the equivalence ratio, which subsequently results in faster ignition of leaner mixtures.
Supporting Consistency in Linked Specialized Engineering Models Through Bindings and Updating
Institute of Scientific and Technical Information of China (English)
Albertus H. Olivier; Gert C. van Rooyen; Berthold Firmenich; Karl E. Beucke
2008-01-01
Currently, some commercial software applications support users to work in an integrated environment. However, this is limited to the suite of models provided by the software vendor and consequently it forces all the parties to use the same software. In contrast, the research described in this paper investigates ways of using standard software applications, which may be specialized for different professional domains. These are linked for effective transfer of information and a binding mechanism is provided to support consistency. The proposed solution was implemented using a CAD application and an independent finite element application in order to verify the theoretical aspects of this work.
A “Minsky crisis” in a Stock-Flow Consistent model
Mouakil, Tarik
2014-01-01
This study uses the Stock-Flow Consistent modelling approach to assess the relevance of Minsky's demonstration of his financial instability hypothesis. We show that this demonstration, based on the assumption of a pro-cyclical leverage ratio, is incompatible with the Kaleckian analysis of profits endorsed by Minsky. Therefore we suggest replacing the assumption of a pro-cyclical leverage ratio with one of pro-cyclical short-term borrowing, which also appears in Minsky's work.
Hoteit, Ibrahim
2010-03-02
An eddy-permitting adjoint-based assimilation system has been implemented to estimate the state of the tropical Pacific Ocean. The system uses the Massachusetts Institute of Technology's general circulation model and its adjoint. The adjoint method is used to adjust the model to observations by controlling the initial temperature and salinity; temperature, salinity, and horizontal velocities at the open boundaries; and surface fluxes of momentum, heat, and freshwater. The model is constrained with most of the available data sets in the tropical Pacific, including Tropical Atmosphere and Ocean, ARGO, expendable bathythermograph, and satellite SST and sea surface height data, and climatologies. Results of hindcast experiments in 2000 suggest that the iterated adjoint-based descent is able to significantly improve the model consistency with the multivariate data sets, providing a dynamically consistent realization of the tropical Pacific circulation that generally matches the observations to within specified errors. The estimated model state is evaluated both by comparisons with observations and by checking the controls, the momentum balances, and the representation of small-scale features that were not well sampled by the observations used in the assimilation. As part of these checks, the estimated controls are smoothed and applied in independent model runs to check that small changes in the controls do not greatly change the model hindcast. This is a simple ensemble-based uncertainty analysis. In addition, the original and smoothed controls are applied to a version of the model with doubled horizontal resolution resulting in a broadly similar “downscaled” hindcast, showing that the adjustments are not tuned to a single configuration (meaning resolution, topography, and parameter settings). The time-evolving model state and the adjusted controls should be useful for analysis or to supply the forcing, initial, and boundary conditions for runs of other models.
Keller, D. E.; Fischer, A. M.; Frei, C.; Liniger, M. A.; Appenzeller, C.; Knutti, R.
2014-07-01
Many climate impact assessments over topographically complex terrain require high-resolution precipitation time-series that have a spatio-temporal correlation structure consistent with observations. This consistency is essential for spatially distributed modelling of processes with non-linear responses to precipitation input (e.g. soil water and river runoff modelling). In this regard, weather generators (WGs) designed and calibrated for multiple sites are an appealing technique to stochastically simulate time-series that approximate the observed temporal and spatial dependencies. In this study, we present a stochastic multi-site precipitation generator and validate it over the hydrological catchment Thur in the Swiss Alps. The model consists of several Richardson-type WGs that are run with correlated random number streams reflecting the observed correlation structure among all possible station pairs. A first-order two-state Markov process simulates intermittence of daily precipitation, while precipitation amounts are simulated from a mixture model of two exponential distributions. The model is calibrated separately for each month over the time-period 1961-2011. The WG is skilful at individual sites in representing the annual cycle of the precipitation statistics, such as mean wet day frequency and intensity as well as monthly precipitation sums. It reproduces realistically the multi-day statistics such as the frequencies of dry and wet spell lengths and precipitation sums over consecutive wet days. Substantial added value is demonstrated in simulating daily areal precipitation sums in comparison to multiple WGs that lack the spatial dependency in the stochastic process: the multi-site WG is capable of capturing about 95% of the observed variability in daily area sums, while the summed time-series from multiple single-site WGs only explains about 13%. Limitations of the WG have been detected in reproducing observed variability from year to year, a component that has
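The occurrence/amount structure of a Richardson-type generator can be sketched for a single site. All parameter values below (transition probabilities, mixture weight, mean amounts) are illustrative stand-ins, not the study's calibrated monthly values, and the multi-site correlation of random streams is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative parameters (not the paper's calibrated values)
p01, p11 = 0.3, 0.6              # P(wet | dry yesterday), P(wet | wet yesterday)
w, lam1, lam2 = 0.7, 3.0, 15.0   # mixture weight and mean amounts (mm)

def simulate(n_days):
    wet = np.zeros(n_days, dtype=bool)
    amount = np.zeros(n_days)
    for t in range(1, n_days):
        p = p11 if wet[t - 1] else p01
        wet[t] = rng.random() < p  # first-order two-state Markov occurrence
        if wet[t]:
            # amounts: mixture of two exponentials (light vs heavy rain)
            mean = lam1 if rng.random() < w else lam2
            amount[t] = rng.exponential(mean)
    return wet, amount

wet, amount = simulate(100_000)
# stationary wet-day frequency of the two-state chain: p01 / (1 + p01 - p11)
pi_wet = p01 / (1 + p01 - p11)
print(wet.mean(), pi_wet)  # simulated vs theoretical wet-day frequency
```

In the multi-site version described in the abstract, each station runs this scheme, but the uniform random numbers driving occurrence are drawn from correlated streams so that station pairs reproduce the observed spatial dependence.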
Consistent approach to edge detection using multiscale fuzzy modeling analysis in the human retina
Directory of Open Access Journals (Sweden)
Mehdi Salimian
2012-06-01
Full Text Available Today, many widely used image processing algorithms are based on the human visual system. In this paper a smart edge detection method is presented, based on modeling the performance of simple and complex cells and on the multi-scale image processing carried out in the primary visual cortex. A method for adjusting the parameters of Gabor filters (mathematical models of simple cells) and a non-linear threshold response are presented in order to model simple and complex cells. Also, following the multi-scale analysis conducted in the human retina, the proposed algorithm detects and localizes all edges of small and large structures with high precision. Comparing the results of the proposed method on a standard database with conventional methods shows the higher performance (about 4-13%) and reliability of the proposed method in edge detection and localization.
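A minimal single-scale, single-orientation version of the Gabor-filter stage can be sketched as follows. The kernel size, wavelength, bandwidth, and the synthetic test image are arbitrary choices for illustration, not the paper's tuned parameters, and the rectified magnitude is a crude stand-in for its non-linear complex-cell response.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    # odd-symmetric (sine-phase) Gabor: a standard simple-cell model
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along orientation
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / lam)

def correlate2d(img, k):
    # direct 'valid' cross-correlation; adequate for a small demo image
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# synthetic image with a vertical step edge starting at column 16
img = np.zeros((32, 32))
img[:, 16:] = 1.0

# complex-cell-like energy: rectified response of the oriented filter
k = gabor_kernel()
resp = np.abs(correlate2d(img, k))
edge_col = int(resp.mean(axis=0).argmax()) + k.shape[1] // 2
print(edge_col)
```

The odd-symmetric filter responds maximally where its center straddles the intensity step, so the peak of the rectified response localizes the edge; the multi-scale scheme in the paper repeats this with filter banks at several sizes and orientations.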
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
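The sum of ranking differences (SRD) procedure itself is straightforward to sketch: rank the items by each method and by a reference, then sum the absolute rank differences. The score matrix below is invented, and row-wise averaging is assumed as the reference ranking (the usual choice when no golden standard exists).

```python
import numpy as np

# rows = molecules, columns = scores from three hypothetical models A, B, C
scores = np.array([
    [0.91, 0.88, 0.52],
    [0.75, 0.70, 0.95],
    [0.62, 0.66, 0.60],
    [0.84, 0.80, 0.71],
])

# reference column: row-wise average of all models
reference = scores.mean(axis=1)

def srd(column, reference):
    # sum of absolute rank differences against the reference ranking
    r_col = column.argsort().argsort()  # 0-based ranks of the column
    r_ref = reference.argsort().argsort()
    return int(np.abs(r_col - r_ref).sum())

for name, col in zip("ABC", scores.T):
    print(name, srd(col, reference))
```

A lower SRD means the model's ranking of the molecules is closer to the reference ranking; in the full method, the SRD values are additionally validated against the distribution obtained from random rankings.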
Kim, Younsu; Kim, Sungmin; Boctor, Emad M.
2017-03-01
Ultrasound image-guided needle tracking systems have been widely used due to their cost-effectiveness and nonionizing radiation properties. Various surgical navigation systems have been developed by utilizing state-of-the-art sensor technologies. However, ultrasound transmission beam thickness causes unfair initial evaluation conditions due to inconsistent placement of the target with respect to the ultrasound probe. This inconsistency also brings high uncertainty and results in large standard deviations for each measurement when we compare accuracy with and without guidance. To resolve this problem, we designed a complete evaluation platform utilizing our mid-plane detection and time-of-flight measurement systems. The evaluation system uses a PZT element target and an ultrasound-transmitting needle. In this paper, we evaluated an optical tracker-based surgical ultrasound-guided navigation system, whereby the optical tracker tracks marker frames attached to the ultrasound probe and the needle. We performed ten needle trials of the guidance experiment with a mid-plane adjustment algorithm and with a B-mode segmentation method. With the mid-plane adjustment, the result showed a mean error of 1.62+/-0.72 mm. The mean error increased to 3.58+/-2.07 mm without the mid-plane adjustment. Our evaluation system can reduce the effect of the beam-thickness problem and measure ultrasound image-guided technologies consistently with a minimal standard deviation. Using our novel evaluation system, ultrasound image-guided technologies can be compared under equal initial conditions. Therefore, the error can be evaluated more accurately, and the system provides better analysis of error sources such as ultrasound beam thickness.
A new self-consistent hybrid chemistry model for Mars and cometary environments
Wedlund, Cyril Simon; Kallio, Esa; Jarvinen, Riku; Dyadechkin, Sergey; Alho, Markku
2014-05-01
Over the last 15 years, a 3-D hybrid-PIC planetary plasma interaction modelling platform, named HYB, has been developed and applied to several planetary environments, such as those of Mars, Venus, Mercury, and more recently, the Moon. We present here another evolution of HYB including a fully consistent ionospheric-chemistry package designed to reproduce the main ions at the lower boundary of the model. This evolution, also permitted by the increase in computing power and the switch to spherical coordinates for higher spatial resolution (Dyadechkin et al., 2013), is motivated by the imminent arrival of the Rosetta spacecraft in the vicinity of comet 67P/Churyumov-Gerasimenko. In this presentation we show the application of the new HYB-ionosphere model to 1D and 2D hybrid simulations at Mars above 100 km altitude and demonstrate that, with a limited number of chemical reactions, good agreement with 1D kinetic models can be found. This is a first validation step before applying the model to the 67P/CG comet environment, which, like Mars, is expected to be rich in carbon oxide compounds.
[The model of neurovascular unit in vitro consisting of three cell types].
Khilazheva, E D; Boytsova, E B; Pozhilenkova, E A; Solonchuk, Yu R; Salmina, A B
2015-01-01
There are many ways to model the blood-brain barrier and neurovascular unit in vitro. All existing models have their disadvantages, advantages and some peculiarities of preparation and usage. We obtained a three-cell neurovascular unit model in vitro using progenitor cells isolated from rat embryo brains (Wistar, 14-16 d). After withdrawal of the progenitor cells, the neurospheres were cultured with subsequent differentiation into astrocytes and neurons. Endothelial cells were isolated from embryonic brain as well. During the differentiation of progenitor cells, the astrocyte monolayer forms after 7-9 d, the neuron monolayer after 10-14 d, and the endothelial cell monolayer after 7 d. Our protocol for simultaneous isolation and cultivation of neurons, astrocytes and endothelial cells reduces the time needed to obtain a neurovascular unit model in vitro consisting of three cell types, and reduces the number of animals used. It is also important to note the cerebral origin of all cell types, which is another advantage of our in vitro model.
Application of a Multigrid Method to a Mass-Consistent Diagnostic Wind Model.
Wang, Yansen; Williamson, Chatt; Garvey, Dennis; Chang, Sam; Cogan, James
2005-07-01
A multigrid numerical method has been applied to a three-dimensional, high-resolution diagnostic model for flow over complex terrain using a mass-consistent approach. The theoretical background for the model is based on a variational analysis using mass conservation as a constraint. The model was designed for diagnostic wind simulation at the microscale in complex terrain and in urban areas. The numerical implementation takes advantage of a multigrid method that greatly improves the computation speed. Three preliminary test cases for the model's numerical efficiency and its accuracy are given. The model results are compared with an analytical solution for flow over a hemisphere. Flow over a bell-shaped hill is computed to demonstrate that the numerical method is applicable in the case of parameterized lee vortices. A simulation of the mean wind field in an urban domain has also been carried out and compared with observational data. The comparison indicated that the multigrid method takes only 3%-5% of the time that is required by the traditional Gauss-Seidel method.
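The speed-up reported for the multigrid solver comes from combining cheap relaxation sweeps with coarse-grid corrections. A minimal 1-D V-cycle for a Poisson problem, used here as a stand-in for the model's variational mass-consistency equation (it is not the paper's 3-D terrain-following solver), can be sketched as:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    # lexicographic Gauss-Seidel sweeps for -u'' = f, Dirichlet u[0]=u[-1]=0
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    # one multigrid V-cycle: smooth, restrict residual, correct, smooth
    if len(u) <= 3:
        return gauss_seidel(u, f, h, 50)  # coarsest grid: solve by relaxation
    u = gauss_seidel(u, f, h, 3)
    r = residual(u, f, h)
    rc = r[::2].copy()                    # injection restriction to coarse grid
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    e = np.zeros_like(u)
    e[::2] = ec                           # prolongation: copy coarse points...
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])    # ...and interpolate the fine ones
    u += e
    return gauss_seidel(u, f, h, 3)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)   # exact solution: u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.abs(u - np.sin(np.pi * x)).max())
```

Each V-cycle damps all error wavelengths at once, which is why the paper's solver reaches a given tolerance in a few percent of the time that plain Gauss-Seidel needs: pure relaxation removes smooth error components extremely slowly on the fine grid alone.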
General second order complete active space self-consistent-field solver for large-scale systems
Sun, Qiming
2016-01-01
One challenge for a complete active space self-consistent field (CASSCF) program is to treat transition metal complexes, which are typically medium- or large-size molecular systems with large active spaces. We present an AO-driven second order CASSCF solver to efficiently handle systems which have a large number of AO functions and many active orbitals. This solver allows the user to replace the active-space Full CI solver with any multiconfigurational solver without breaking the quadratic convergence feature. We demonstrate the capability of the CASSCF solver with a study of the Fe(ii)-porphine ground state using the DMRG-CASSCF method for 22 electrons in 27 active orbitals and 3000 basis functions.
Directory of Open Access Journals (Sweden)
Hans-Jörg Rheinberger
2011-06-01
Full Text Available It is generally accepted that the development of the modern sciences is rooted in experiment. Yet for a long time, experimentation did not occupy a prominent role, neither in philosophy nor in history of science. With the 'practical turn' in studying the sciences and their history, this has begun to change. This paper is concerned with systems and cultures of experimentation and the consistencies that are generated within such systems and cultures. The first part of the paper exposes the forms of historical and structural coherence that characterize the experimental exploration of epistemic objects. In the second part, a particular experimental culture in the life sciences is briefly described as an example. A survey will be given of what it means and what it takes to analyze biological functions in the test tube.
Self-Consistent Model for Pulsed Direct-Current N2 Glow Discharge
Institute of Scientific and Technical Information of China (English)
Liu Chengsen; Wang Dezhen
2005-01-01
A self-consistent analysis of a pulsed direct-current (DC) N2 glow discharge is presented. The model is based on a numerical solution of the continuity equations for electrons and ions coupled with Poisson's equation. The spatial-temporal variations of the ionic and electronic densities and the electric field are obtained. The electric field structure exhibits all the characteristic regions of a typical glow discharge (the cathode fall, the negative glow, and the positive column). Current-voltage characteristics of the discharge can be obtained from the model. The calculated current-voltage results, using a constant secondary electron emission coefficient at a gas pressure of 133.32 Pa, are in reasonable agreement with experiment.
Consistency in Regularizations of the Gauged NJL Model at One Loop Level
Battistel, O A
1999-01-01
In this work we revisit questions recently raised in the literature concerning relevant but divergent amplitudes in the gauged NJL model. These questions involve ambiguities and symmetry violations that bear on the model's predictive power at the one-loop level. By means of an alternative prescription for handling divergent amplitudes, we show that it is possible to obtain unambiguous and symmetry-preserving amplitudes. The procedure adopted makes use solely of general properties of an eventual regulator, thus avoiding an explicit form. After a thorough analysis of the problem, we find that there are well-established conditions to be fulfilled by any consistent regularization prescription in order to avoid the problems of concern at the one-loop level.
Self-consistent theory of finite Fermi systems and Skyrme–Hartree–Fock method
Energy Technology Data Exchange (ETDEWEB)
Saperstein, E. E., E-mail: saper@mbslab.kiae.ru; Tolokonnikov, S. V. [National Research Center Kurchatov Institute (Russian Federation)
2016-11-15
Recent results obtained on the basis of the self-consistent theory of finite Fermi systems by employing the energy density functional proposed by Fayans and his coauthors are surveyed. These results are compared with the predictions of Skyrme–Hartree–Fock theory involving several popular versions of the Skyrme energy density functional. Spherical nuclei are predominantly considered. The charge radii of even and odd nuclei and features of low-lying 2⁺ excitations in semimagic nuclei are discussed briefly. The single-particle energies of magic nuclei are examined in more detail with allowance for corrections to mean-field theory that are induced by particle coupling to low-lying collective surface excitations (phonons). The importance of taking into account, in this problem, nonpole (tadpole) diagrams, which are usually disregarded, is emphasized. The spectroscopic factors of magic and semimagic nuclei are also considered. In this problem, only the surface term stemming from the energy dependence induced in the mass operator by the exchange of surface phonons is usually taken into account. The volume contribution associated with the energy dependence initially present in the mass operator within the self-consistent theory of finite Fermi systems because of the exchange of high-lying particle–hole excitations is also included in the spectroscopic factor. The results of the first studies that employed the Fayans energy density functional for deformed nuclei are also presented.
A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.
Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N
2014-01-01
Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.
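A minimal sketch of the hybrid idea, using SQLite with invented table and column names (not the actual CURe schema): stable, frequently queried fields stay in a conventional relational table, while study-specific attributes go into an EAV table, so adding a new attribute needs no schema migration, and the EAV rows are pivoted back into columns at query time.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Stable, frequently queried fields live in a conventional relational table.
cur.execute("""CREATE TABLE patient (
    id INTEGER PRIMARY KEY,
    study_id TEXT NOT NULL,
    enrolled TEXT)""")

# Frequently changing, study-specific fields live in an EAV table.
cur.execute("""CREATE TABLE patient_attr (
    patient_id INTEGER REFERENCES patient(id),
    attribute TEXT NOT NULL,
    value TEXT)""")

cur.execute("INSERT INTO patient VALUES (1, 'CURe-001', '2014-01-01')")
cur.executemany("INSERT INTO patient_attr VALUES (?, ?, ?)",
                [(1, 'smoking_status', 'never'), (1, 'baseline_bmi', '24.3')])

# Pivot the EAV rows back into columns at query time.
rows = cur.execute("""
    SELECT p.study_id,
           MAX(CASE WHEN a.attribute = 'smoking_status' THEN a.value END),
           MAX(CASE WHEN a.attribute = 'baseline_bmi' THEN a.value END)
    FROM patient p JOIN patient_attr a ON a.patient_id = p.id
    GROUP BY p.id""").fetchall()
print(rows)   # [('CURe-001', 'never', '24.3')]
```

The trade-off is the usual EAV one: the back-end schema stays static as requirements change, at the cost of pivoting (and type checking) in the query layer.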
Institute of Scientific and Technical Information of China (English)
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n X_i(y_i − μ(X_i′β)) = 0 for the univariate generalized linear model E(y|X) = μ(X′β). Given uncorrelated residuals {e_i = y_i − μ(X_i′β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^(−1/2)) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = ∑_{i=1}^n X_i X_i′. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^(−1) → 0 as the sample size n → ∞.
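The quasi-likelihood equation can be solved numerically by Newton iteration on the quasi-score. The sketch below uses a logistic mean function μ and simulated data (both illustrative choices, not from the paper) and also computes λ_n, the smallest eigenvalue of S_n, which sets the O_p(λ_n^(−1/2)) convergence rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def mu(t):                                  # logistic mean function (illustrative choice)
    return 1.0 / (1.0 + np.exp(-t))

n = 5000
beta0 = np.array([0.8, -0.5])               # true parameter
X = rng.normal(size=(n, 2))
y = rng.binomial(1, mu(X @ beta0)).astype(float)

beta = np.zeros(2)
for _ in range(25):                         # Newton iteration on the quasi-score
    eta = X @ beta
    score = X.T @ (y - mu(eta))             # sum_i X_i (y_i - mu(X_i' beta))
    w = mu(eta) * (1.0 - mu(eta))           # mu'(X_i' beta)
    J = (X * w[:, None]).T @ X              # negative Jacobian of the score
    step = np.linalg.solve(J, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

lam_n = np.linalg.eigvalsh(X.T @ X).min()   # smallest eigenvalue of S_n
print(beta, lam_n)
```

On data of this size λ_n grows linearly with n, and the Newton root lands within a few multiples of λ_n^(−1/2) of the true parameter, as the rate suggests.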
Towards self-consistent modelling of the Sgr A* accretion flow: linking theory and observation
Roberts, Shawn R.; Jiang, Yan-Fei; Wang, Q. Daniel; Ostriker, Jeremiah P.
2017-04-01
The interplay between supermassive black holes (SMBHs) and their environments is believed to play an essential role in galaxy evolution. The majority of these SMBHs are in the radiatively inefficient accretion phase, where this interplay remains elusive, but suggestively important, due to few observational constraints. To remedy this, we directly fit 2D hydrodynamic simulations to Chandra observations of Sgr A* with Markov chain Monte Carlo sampling, self-consistently modelling the 2D inflow-outflow solution for the first time. We find the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, r_c ≈ 0.056 r_b ≈ 8 × 10⁻³ pc, where r_b is the Bondi radius. Less than 1 per cent of the inflowing gas accretes on to the SMBH, the remainder being ejected in a polar outflow. We decouple the quiescent point-like emission from the spatially extended flow. We find this point-like emission, accounting for ~4 per cent of the quiescent flux, is spectrally too steep to be explained by unresolved flares or bremsstrahlung, but is likely a combination of a relatively steep synchrotron power law and the high-energy tail of inverse-Compton emission. With this self-consistent model of the accretion flow structure, we make predictions for the flow dynamics and discuss how future X-ray spectroscopic observations can further our understanding of the Sgr A* accretion flow.
George, Jeffrey A.
A new nuclear electric propulsion (NEP) systems analysis code is discussed. The new code is modular and consists of a driver code and various subsystem models. The code models five different subsystems: (1) reactor/shield; (2) power conversion; (3) heat rejection; (4) power management and distribution (PMAD); and (5) thrusters. The code optimizes for the following design criteria: minimum mass; minimum radiator area; and low mass/low area. The code also optimizes the following parameters: separation distance; temperature ratio; pressure ratio; and transmission frequency. The discussion is presented in vugraph form.
Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models
Vignal, Philippe
2016-02-11
Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly, as done in sharp-interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems encountered by earlier approaches. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free-energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation, a model put forward to describe microstructure evolution. The algorithm developed conserves mass, guarantees energy stability and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme and relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
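As a minimal illustration of what "thermodynamically consistent time integration" means here, the sketch below applies an Eyre-type convex-concave splitting to the 1-D Allen-Cahn equation (a deliberately simple stand-in for the thesis's phase-field-crystal and isogeometric setting). The convex term is treated implicitly and solved by Newton iteration, and the discrete free energy can be checked to decrease at every step:

```python
import numpy as np

N, L, eps, dt = 128, 2 * np.pi, 0.1, 0.1
h = L / N
rng = np.random.default_rng(1)
u = 0.2 * rng.standard_normal(N)    # noisy initial order parameter

I = np.eye(N)
lap = (np.roll(I, 1, axis=0) - 2 * I + np.roll(I, -1, axis=0)) / h**2  # periodic Laplacian

def energy(v):                      # discrete Ginzburg-Landau free energy
    vx = (np.roll(v, -1) - v) / h
    return h * np.sum(0.25 * (v**2 - 1) ** 2 + 0.5 * eps**2 * vx**2)

E = [energy(u)]
for _ in range(100):
    # Eyre splitting: implicit convex part (Laplacian, cubic), explicit concave part
    v = u.copy()
    for _ in range(30):             # Newton solve of the implicit step
        G = v - u - dt * (eps**2 * (lap @ v) - v**3 + u)
        Jm = I - dt * eps**2 * lap + 3 * dt * np.diag(v**2)
        dv = np.linalg.solve(Jm, -G)
        v += dv
        if np.max(np.abs(dv)) < 1e-12:
            break
    u = v
    E.append(energy(u))
E = np.array(E)
print(E[0], E[-1])
```

The monotone decay of E, which Eyre-type splittings guarantee unconditionally in the time step, is the discrete analogue of the free-energy dissipation the thesis requires of its schemes.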
Yang, Yuyi; Wei, Buqing; Zhao, Yuhua; Wang, Jun
2013-02-01
Azo dyes are toxic and carcinogenic and are often present in industrial effluents. In this research, azoreductase and glucose 1-dehydrogenase were coupled for both continuous generation of the cofactor NADH and azo dye removal. The results show that 85% maximum relative activity of azoreductase in the integrated enzyme system was obtained under the following conditions: 1 U azoreductase : 10 U glucose 1-dehydrogenase, 250 mM glucose, 1.0 mM NAD⁺ and 150 μM methyl red. Sensitivity analysis of the factors affecting dye removal in the enzyme system, examined with an artificial neural network model, shows that the relative importance of the enzyme ratio between azoreductase and glucose 1-dehydrogenase was 22%, that of dye concentration 27%, NAD⁺ concentration 23% and glucose concentration 22%, indicating that none of the variables could be ignored in the enzyme system. Batch results show that the enzyme system has application potential for dye removal.
Berg, Matthew; Hartley, Brian; Richters, Oliver
2015-01-01
By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
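The stock-flow consistency underlying the paper can be seen in miniature in the textbook SIM model of Godley and Lavoie (a standard pedagogical economy, not the authors' multi-sector model): because every monetary flow lands on some balance sheet, household money holdings equal the cumulated government deficit identically at every step.

```python
# Textbook SIM model (Godley & Lavoie): government, households, firms.
# Parameter values are the standard illustrative ones, not taken from the paper.
G, theta, a1, a2 = 20.0, 0.2, 0.6, 0.4   # gov spending, tax rate, propensities to consume
H = 0.0      # household money holdings (the only financial asset)
debt = 0.0   # cumulated government deficits
for t in range(500):
    Y = (G + a2 * H) / (1.0 - a1 * (1.0 - theta))  # period output, solved in closed form
    T = theta * Y                                  # taxes
    YD = Y - T                                     # disposable income
    C = a1 * YD + a2 * H                           # consumption out of income and wealth
    H += YD - C                                    # household saving accumulates as money
    debt += G - T                                  # government deficit accumulates as debt
print(Y, H, debt)
```

Output converges to the stationary state Y* = G/theta = 100 with household wealth H* = 80, and H equals the government debt to machine precision throughout, which is the accounting discipline the stock-flow consistent approach enforces.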
Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.
2011-03-01
The connection between membrane inhomogeneity and the structural basis of lipid rafts has sparked interest in the lateral organization of model lipid bilayers of two and three components. In an effort to investigate anisotropic lipid distribution in mixed bilayers, a self-consistent mean-field theoretical model is applied to palmitoyloleoylphosphatidylcholine (POPC)-palmitoyl sphingomyelin (PSM)-cholesterol mixtures. The compositional dependence of lateral organization in these mixtures is mapped onto a ternary plot. The model utilizes molecular dynamics simulations to estimate interaction parameters and to construct chain conformation libraries. We find that at some concentration ratios the bilayers separate spatially into regions of higher and lower chain order coinciding with areas enriched with PSM and POPC, respectively. To examine the effect of the asymmetric chain structure of POPC on bilayer lateral inhomogeneity, we consider POPC-lipid interactions with and without angular dependence. Results are compared with experimental data and with results from a similar model for mixtures of dioleoylphosphatidylcholine, steroyl sphingomyelin, and cholesterol.
A parameter study of self-consistent disk models around Herbig AeBe stars
Meijer, J; De Koter, A; Dullemond, C P; Van Boekel, R; Waters, L B F M
2008-01-01
We present a parameter study of self-consistent models of protoplanetary disks around Herbig AeBe stars. We use the code developed by Dullemond and Dominik, which solves the 2D radiative transfer problem including an iteration for the vertical hydrostatic structure of the disk. This grid of models will be used for several studies on disk emission and mineralogy in follow-up papers. In this paper we take a first look at the new models, compare them with previous modeling attempts, and focus on the effects of various parameters on the overall structure of the SED that leads to the classification of Herbig AeBe stars into two groups, with a flaring (group I) or self-shadowed (group II) SED. We find that the parameter of overriding importance to the SED is the total mass in grains smaller than 25 μm, confirming the earlier results by Dullemond and Dominik. All other parameters studied have only minor influences, and will alter the SED type only in borderline cases. We find that there is no natural dichotomy between ...
A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints
Directory of Open Access Journals (Sweden)
L. Kantha
2016-01-01
The prevailing constant Λ-G cosmological model agrees with observational evidence, including the observed redshift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10⁻⁵² m⁻²). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest redshift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases with cosmic time (Λ ~ τ⁻², where τ is the normalized cosmic time) and G increases (G ~ τⁿ). The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.
Self-consistent modeling of terahertz waveguide and cavity with frequency-dependent conductivity
Huang, Y. J.; Chu, K. R.; Thumm, M.
2015-01-01
The surface resistance of metals, and hence the Ohmic dissipation per unit area, scales with the square root of the frequency of an incident electromagnetic wave. As is well recognized, this can lead to excessive wall losses at terahertz (THz) frequencies. On the other hand, high-frequency oscillatory motion of conduction electrons tends to mitigate the collisional damping. As a result, the classical theory predicts that metals behave more like a transparent medium at frequencies above the ultraviolet. Such a behavior difference is inherent in the AC conductivity, a frequency-dependent complex quantity commonly used to treat electromagnetics of metals at optical frequencies. The THz region falls in the gap between microwave and optical frequencies. However, metals are still commonly modeled by the DC conductivity in currently active vacuum electronics research aimed at the development of high-power THz sources (notably the gyrotron), although a small reduction of the DC conductivity due to surface roughness is sometimes included. In this study, we present a self-consistent modeling of the gyrotron interaction structures (a metallic waveguide or cavity) with the AC conductivity. The resulting waveguide attenuation constants and cavity quality factors are compared with those of the DC-conductivity model. The reduction in Ohmic losses under the AC-conductivity model is shown to be increasingly significant as the frequency reaches deeper into the THz region. Such effects are of considerable importance to THz gyrotrons for which the minimization of Ohmic losses constitutes a major design consideration.
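The effect can be sketched by inserting the Drude AC conductivity σ(ω) = σ₀/(1 + jωτ) into the surface impedance Z_s = sqrt(jωμ₀/σ) and taking the real part; the copper parameters below are standard textbook values, not the paper's data:

```python
import numpy as np

mu0 = 4e-7 * np.pi
sigma0, tau = 5.8e7, 2.4e-14   # copper: DC conductivity (S/m) and Drude relaxation time (s)

def surface_resistance(f_hz, ac=True):
    w = 2 * np.pi * f_hz
    sigma = sigma0 / (1 + 1j * w * tau) if ac else sigma0   # Drude AC conductivity
    return np.sqrt(1j * w * mu0 / sigma).real               # R_s = Re(Z_s)

for f in (0.1e12, 0.3e12, 1.0e12):
    r_dc = surface_resistance(f, ac=False)
    r_ac = surface_resistance(f, ac=True)
    print(f"{f/1e12:.1f} THz: R_s(DC) = {1e3*r_dc:.1f} mOhm, AC/DC ratio = {r_ac/r_dc:.3f}")
```

With τ ≈ 2.4 × 10⁻¹⁴ s, ωτ ≈ 0.15 at 1 THz and this simple estimate already gives several per cent less Ohmic loss under the AC model than under the DC model, with the gap widening deeper into the THz region, consistent with the trend the abstract describes.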
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
(no author listed)
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989.
[2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447.
[3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502.
[4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368.
[5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232.
[6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45.
[7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974.
[8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model
Borges Sebastião, Israel; Alexeenko, Alina
2016-10-01
The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
Modeling Extreme Solar Energetic Particle Acceleration with Self-Consistent Wave Generation
Arthur, A. D.; le Roux, J. A.
2015-12-01
Observations of extreme solar energetic particle (SEP) events associated with coronal mass ejection driven shocks have detected particle energies up to a few GeV at 1 AU within the first ~10 minutes to 1 hour of shock acceleration. Whether or not acceleration by a single shock is sufficient in these events or if some combination of multiple shocks or solar flares is required is currently not well understood. Furthermore, the observed onset times of the extreme SEP events place the shock in the corona when the particles escape upstream. We have updated our focused transport theory model that has successfully been applied to the termination shock and traveling interplanetary shocks in the past to investigate extreme SEP acceleration in the solar corona. This model solves the time-dependent Focused Transport Equation including particle preheating due to the cross shock electric field and the divergence, adiabatic compression, and acceleration of the solar wind flow. Diffusive shock acceleration of SEPs is included via the first-order Fermi mechanism for parallel shocks. To investigate the effects of the solar corona on the acceleration of SEPs, we have included an empirical model for the plasma number density, temperature, and velocity. The shock acceleration process becomes highly time-dependent due to the rapid variation of these coronal properties with heliocentric distance. Additionally, particle interaction with MHD wave turbulence is modeled in terms of gyroresonant interactions with parallel propagating Alfven waves. However, previous modeling efforts suggest that the background amplitude of the solar wind turbulence is not sufficient to accelerate SEPs to extreme energies over the short time scales observed. To account for this, we have included the transport and self-consistent amplification of MHD waves by the SEPs through wave-particle gyroresonance. We will present the results of this extended model for a single fast quasi-parallel CME driven shock in the
DEFF Research Database (Denmark)
Yang, Laurence; Tan, Justin; O'Brien, Edward J.
2015-01-01
…at the systems level, and provides a basis for computing essential cell functions, is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar…
Bibliographic Relationships in MARC and Consistent with FRBR Model According to RDA Rules
Directory of Open Access Journals (Sweden)
Mahsa Fardehoseiny
2013-03-01
This study investigates the bibliographic relationships in MARC and their consistency with the FRBR model. By establishing the necessary relations between bibliographic records, users can retrieve the information they need faster and more easily, so it is important to make these relationships explicit in existing bibliographic records. The study's purpose was to define the relationships between bibliographic records in the National Library of Iran's OPAC database, using a descriptive content-analysis approach. The online catalog (OPAC) of the National Library of Iran was used to collect information. All records meeting the criteria listed in the final report on IFLA bibliographic relations, for the first-group entities of the FRBR model and the RDA rules, were examined and analyzed. According to this study, if software were developed to transfer data based on the conceptual model and the MARC data that already exist in the National Library's bibliographic database, these relationships would not be transferable; the FRBR-MARC correspondences established in this study required human judgment, and the machine is unable to detect them. The results showed that the relations conveyed from MARC to FRBR covered about 47.70 percent of the MARC fields, whereas from FRBR to MARC, even with all diagnostic effort on MARC relationships, only 31.38 percent of the relations could be covered. Based on real data and usable fields for Boostan-e-Saadi records in MARC format in the National Library of Iran, however, the coverage dropped to 16.95 percent.
Formulation of a self-consistent model for quantum well pin solar cells
Ramey, S.; Khoie, R.
1997-04-01
A self-consistent numerical simulation model for a pin single-cell solar cell is formulated. The solar cell device consists of a p-AlGaAs region, an intrinsic i-AlGaAs/GaAs region with several quantum wells, and an n-AlGaAs region. Our simulator solves a field-dependent Schrödinger equation self-consistently with the Poisson and drift-diffusion equations. Emphasis is given to the study of the capture of electrons by the quantum wells, the escape of electrons from the quantum wells, and the absorption and recombination within the quantum wells. We believe this would be the first such comprehensive model ever reported. The field-dependent Schrödinger equation is solved using the transfer matrix method. The eigenfunctions and eigenenergies obtained are used to calculate the escape rate of electrons from the quantum wells, and the non-radiative recombination rates of electrons at the boundaries of the quantum wells. These rates, together with the capture rates of electrons by the quantum wells, are then used in a self-consistent numerical Poisson-drift-diffusion solver. The resulting field profiles are then used in the field-dependent Schrödinger solver, and the iteration process is repeated until convergence is reached. In a p-AlGaAs i-AlGaAs/GaAs n-AlGaAs cell with an aluminum mole fraction of 0.3 and one 100 Å-wide, 284 meV-deep quantum well, the eigenenergies at zero field are 36 meV, 136 meV, and 267 meV for the first, second and third subbands, respectively. With an electric field of 50 kV/cm, the eigenenergies shift to 58 meV, 160 meV, and 282 meV, respectively. With these eigenenergies, the thermionic escape time of electrons from the GaAs Γ-valley varies from 220 ps to 90 ps for electric fields ranging from 10 to 50 kV/cm. These preliminary results are in good agreement with those reported by other researchers.
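As a rough cross-check of the quoted zero-field subband energies, one can diagonalize a finite-difference effective-mass Hamiltonian for a single 100 Å wide, 284 meV deep well (a deliberately simplified stand-in for the paper's field-dependent transfer-matrix solver; a single GaAs effective mass in both well and barrier is assumed, so the numbers will differ somewhat from the paper's):

```python
import numpy as np

hbar, me, eV = 1.0546e-34, 9.109e-31, 1.602e-19
m = 0.067 * me            # GaAs effective mass, assumed in both well and barrier
V0 = 0.284 * eV           # well depth quoted in the abstract
a = 10e-9                 # 100 Å well width

N, Lbox = 1200, 60e-9     # simulation box with hard walls far from the well
x = np.linspace(-Lbox / 2, Lbox / 2, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) < a / 2, 0.0, V0)

# Tridiagonal finite-difference Hamiltonian, then full diagonalization
t = hbar**2 / (2 * m * dx**2)
H = np.diag(V + 2 * t) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
E = np.linalg.eigvalsh(H)
bound_meV = E[E < V0] / (1e-3 * eV)
print(bound_meV)          # bound subband energies in meV
```

This toy calculation yields three bound subbands below the barrier, in the same energy range as the abstract's 36, 136 and 267 meV values, with the third state sitting close to the top of the well.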
System of systems modeling and analysis.
Energy Technology Data Exchange (ETDEWEB)
Campbell, James E.; Anderson, Dennis James; Longsine, Dennis E. (Intera, Inc., Austin, TX); Shirah, Donald N.
2005-01-01
This report documents the results of an LDRD program entitled 'System of Systems Modeling and Analysis' that was conducted during FY 2003 and FY 2004. Systems that themselves consist of multiple systems (referred to here as System of Systems or SoS) introduce a level of complexity to systems performance analysis and optimization that is not readily addressable by existing capabilities. The objective of the 'System of Systems Modeling and Analysis' project was to develop an integrated modeling and simulation environment that addresses the complex SoS modeling and analysis needs. The approach to meeting this objective involved two key efforts. First, a static analysis approach, called state modeling, has been developed that is useful for analyzing the average performance of systems over defined use conditions. The state modeling capability supports analysis and optimization of multiple systems and multiple performance measures or measures of effectiveness. The second effort involves time simulation which represents every system in the simulation using an encapsulated state model (State Model Object or SMO). The time simulation can analyze any number of systems including cross-platform dependencies and a detailed treatment of the logistics required to support the systems in a defined mission.
Ferri, Nicola; Distasio, Robert A., Jr.; Ambrosetti, Alberto; Car, Roberto; Scheffler, Matthias; Tkatchenko, Alexandre
2015-03-01
Ubiquitous long-range van der Waals (vdW) interactions play a fundamental role in the structure and stability of a wide range of systems. Within the DFT framework, the vdW energy represents a crucial, but tiny, part of the total energy, hence its influence on the electronic density n(r) and electronic properties is typically assumed to be rather small. Here, we address this question via a fully self-consistent (SC) implementation of the interatomic Tkatchenko-Scheffler vdW functional and its extension to surfaces. Self-consistency leads to large changes in the binding energies and electrostatic moments of highly polarizable alkali metal dimers. For some metal surfaces, vdW interactions increase dipole moments and induce non-trivial charge rearrangements, leading to visible changes in the metal work functions. Similar behavior is observed for molecules adsorbed on metals. Our study reveals a non-trivial connection between electrostatics and long-range electron correlation effects.
Thermodynamically consistent modeling for dissolution/growth of bubbles in an incompressible solvent
Bothe, Dieter
2014-01-01
We derive mathematical models of the elementary process of dissolution/growth of bubbles in a liquid under pressure control. The modeling starts with a fully compressible version, both for the liquid and the gas phase so that the entropy principle can be easily evaluated. This yields a full PDE system for a compressible two-phase fluid with mass transfer of the gaseous species. Then the passage to an incompressible solvent in the liquid phase is discussed, where a carefully chosen equation of state for the liquid mixture pressure allows for a limit in which the solvent density is constant. We finally provide a simplification of the PDE system in case of a dilute solution.
Self-consistent second-order Green’s function perturbation theory for periodic systems
Energy Technology Data Exchange (ETDEWEB)
Rusakov, Alexander A., E-mail: rusakov@umich.edu; Zgid, Dominika [Department of Chemistry, University of Michigan, Ann Arbor, Michigan 48109 (United States)
2016-02-07
Despite recent advances, systematic quantitative treatment of the electron correlation problem in extended systems remains a formidable task. Systematically improvable Green’s function methods capable of quantitatively describing weak and at least qualitatively strong correlations appear as promising candidates for computational treatment of periodic systems. We present a periodic implementation of temperature-dependent self-consistent 2nd-order Green’s function (GF2) method, where the self-energy is evaluated in the basis of atomic orbitals. Evaluating the real-space self-energy in atomic orbitals and solving the Dyson equation in k-space are the key components of a computationally feasible algorithm. We apply this technique to the one-dimensional hydrogen lattice — a prototypical crystalline system with a realistic Hamiltonian. By analyzing the behavior of the spectral functions, natural occupations, and self-energies, we claim that GF2 is able to recover metallic, band insulating, and at least qualitatively Mott regimes. We observe that the iterative nature of GF2 is essential to the emergence of the metallic and Mott phases.
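The Dyson step described above (real-space self-energy in atomic orbitals, Dyson equation solved in k-space) is easy to sketch. The fragment below is a minimal illustration, not the authors' implementation: given a Fock-like matrix and a self-energy at one k-point, it solves the Dyson equation on a fermionic Matsubara grid. The matrices are toy placeholders.

```python
import numpy as np

# Minimal sketch of the Dyson solve used in a GF2-like loop:
#   G(k, iw) = [(iw + mu) I - F(k) - Sigma(k, iw)]^{-1}
# Names (Fk, Sigma_k) and all values are illustrative assumptions.

def dyson(Fk, Sigma_k, mu, beta, n_w):
    """Solve the Dyson equation on a fermionic Matsubara grid."""
    n = Fk.shape[0]
    iw = 1j * np.pi * (2 * np.arange(n_w) + 1) / beta  # fermionic frequencies
    G = np.empty((n_w, n, n), dtype=complex)
    for w, z in enumerate(iw):
        G[w] = np.linalg.inv((z + mu) * np.eye(n) - Fk - Sigma_k[w])
    return G

# toy 2x2 Hermitian "Fock" matrix at one k-point, zero self-energy
Fk = np.array([[0.0, 0.5], [0.5, 1.0]])
Sigma = np.zeros((32, 2, 2), dtype=complex)
G = dyson(Fk, Sigma, mu=0.5, beta=10.0, n_w=32)
print(G.shape)  # (32, 2, 2)
```

In a self-consistent GF2 iteration this solve alternates with a self-energy update until the Green's function stops changing.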
Directory of Open Access Journals (Sweden)
Damian M Cummings
2010-05-01
Full Text Available Since the identification of the gene responsible for HD (Huntington's disease), many genetic mouse models have been generated. Each employs a unique approach for delivery of the mutated gene and has a different CAG repeat length and background strain. The resultant diversity in the genetic context and phenotypes of these models has led to extensive debate regarding the relevance of each model to the human disorder. Here, we compare and contrast the striatal synaptic phenotypes of two models of HD, namely the YAC128 mouse, which carries the full-length huntingtin gene on a yeast artificial chromosome, and the CAG140 KI (knock-in) mouse, which carries a human/mouse chimaeric gene that is expressed in the context of the mouse genome, with our previously published data obtained from the R6/2 mouse, which is transgenic for exon 1 mutant huntingtin. We show that striatal MSNs (medium-sized spiny neurons) in YAC128 and CAG140 KI mice have similar electrophysiological phenotypes to that of the R6/2 mouse. These include a progressive increase in membrane input resistance, a reduction in membrane capacitance, a lower frequency of spontaneous excitatory postsynaptic currents and a greater frequency of spontaneous inhibitory postsynaptic currents in a subpopulation of striatal neurons. Thus, despite differences in the context of the inserted gene between these three models of HD, the primary electrophysiological changes observed in striatal MSNs are consistent. The outcomes suggest that the changes are due to the expression of mutant huntingtin and such alterations can be extended to the human condition.
Zimmermann, Eva; Seifert, Udo
2015-02-01
Many single-molecule experiments for molecular motors comprise not only the motor but also large probe particles coupled to it. The theoretical analysis of these assays, however, often takes into account only the degrees of freedom representing the motor. We present a coarse-graining method that maps a model comprising two coupled degrees of freedom which represent motor and probe particle to such an effective one-particle model by eliminating the dynamics of the probe particle in a thermodynamically and dynamically consistent way. The coarse-grained rates obey a local detailed balance condition and reproduce the net currents. Moreover, the average entropy production as well as the thermodynamic efficiency is invariant under this coarse-graining procedure. Our analysis reveals that only by assuming unrealistically fast probe particles, the coarse-grained transition rates coincide with the transition rates of the traditionally used one-particle motor models. Additionally, we find that for multicyclic motors the stall force can depend on the probe size. We apply this coarse-graining method to specific case studies of the F(1)-ATPase and the kinesin motor.
McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter
2016-07-01
Since the turn of the millennium, astronomical archives have begun providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating which optional capabilities in the standards need to be supported and the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data, and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and the needs of the community change.
A three-dimensional PEM fuel cell model with consistent treatment of water transport in MEA
Meng, Hua
In this paper, a three-dimensional PEM fuel cell model with a consistent water transport treatment in the membrane electrode assembly (MEA) has been developed. In this new PEM fuel cell model, the conservation equation of the water concentration is solved in the gas channels, gas diffusion layers, and catalyst layers while a conservation equation of the water content is established in the membrane. These two equations are connected using a set of internal boundary conditions based on the thermodynamic phase equilibrium and flux equality at the interface of the membrane and the catalyst layer. The existing fictitious water concentration treatment, which assumes thermodynamic phase equilibrium between the water content in the membrane phase and the water concentration, is applied in the two catalyst layers to consider water transport in the membrane phase. Since all the other conservation equations are still developed and solved in the single-domain framework without resort to interfacial boundary conditions, the present new PEM fuel cell model is termed as a mixed-domain method. Results from this mixed-domain approach have been compared extensively with those from the single-domain method, showing good accuracy in terms of not only cell performances and current distributions but also water content variations in the membrane.
Consistency of non-flat $\\Lambda$CDM model with the new result from BOSS
Kumar, Suresh
2015-01-01
Using 137,562 quasars in the redshift range $2.1\leq z\leq3.5$ from the Data Release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS)-III, the BOSS-SDSS collaboration estimated the expansion rate $H(z=2.34)=222\pm7$ km/s/Mpc of the Universe, and reported that this value is in tension with the predictions of the flat $\Lambda$CDM model at around the 2.5$\sigma$ level. In this letter, we briefly describe some attempts made in the literature to relieve the tension, and show that the tension can naturally be alleviated in a non-flat $\Lambda$CDM model with positive curvature. However, this idea conflicts with the inflation paradigm, which predicts an almost spatially flat Universe. Nevertheless, the theoretical consistency of the non-flat $\Lambda$CDM model with the new result from BOSS deserves the attention of the community.
Baraffe, I; Méra, D; Chabrier, G; Beaulieu, J P
1998-01-01
We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables ($3
Self-Consistent, Axisymmetric Two-Integral Models of Elliptical Galaxies with Embedded Nuclear Discs
van den Bosch, P. P. J.; de Zeeuw, W.
1996-01-01
Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study th...
Towards Self-Consistent Modelling of the Sgr A* Accretion Flow: Linking Theory and Observation
Roberts, Shawn R; Jiang, Yan-Fei; Ostriker, Jeremiah P
2016-01-01
The interplay between supermassive black holes (SMBHs) and their environments is believed to command an essential role in galaxy evolution. The majority of these SMBHs are in the radiative inefficient accretion phase where this interplay remains elusive, but suggestively important, due to few observational constraints. To remedy this, we directly fit 2-D hydrodynamic simulations to Chandra observations of Sgr A* with Markov Chain Monte Carlo sampling, self-consistently modelling the 2-D inflow-outflow solution for the first time. We find the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, r_c ≈ 0.056 r_b ≈ 8×10^{-3} pc, where r_b is the Bondi radius. Less than 1% of the inflowing gas accretes onto the SMBH, the remainder being ejected in a polar outflow. For the first time...
A self-consistent linear-mode model of stellar convection
Macauslan, J.
1985-01-01
A normal-mode expansion of the linearized fluid equations in terms of a small subset of spherical harmonics can provide a foundation for a physically motivated, self-consistent description of a solar-type convection zone. In the absence of dissipation, a second-order differential equation governs the radial dependence of the modes, so that interpretation of the effects on convection quantities of the normal-form 'potential well' is straightforward. The philosophy is quite different from the more recent work of Narasimha and Antia (1982): all envelopes presented here differ substantially from MLT envelopes, and therefore from theirs, which are constructed to be consistent with MLT. The amplitude of all modes is set by a Kelvin-Helmholtz- ('shear'-) instability argument unrelated to solar observations, with the result that the convection description may be considered to arise from 'first-heuristic-principles'. The thermodynamics modelled vaguely resemble the sun's, and more vigorously convective envelopes show some phenomena qualitatively like solar observations (e.g., atmospheric velocity spectra).
Self-consistent 2-phase AGN torus models: SED library for observers
Siebenmorgen, Ralf; Efstathiou, Andreas
2015-01-01
We assume that dust near active galactic nuclei (AGN) is distributed in a torus-like geometry, which may be described by a clumpy medium or a homogeneous disk or as a combination of the two (i.e. a 2-phase medium). The dust particles considered are fluffy and have higher submillimeter emissivities than grains in the diffuse ISM. The dust-photon interaction is treated in a fully self-consistent three dimensional radiative transfer code. We provide an AGN library of spectral energy distributions (SEDs). Its purpose is to quickly obtain estimates of the basic parameters of the AGN, such as the intrinsic luminosity of the central source, the viewing angle, the inner radius, the volume filling factor and optical depth of the clouds, and the optical depth of the disk midplane, and to predict the flux at yet unobserved wavelengths. The procedure is simple and consists of finding an element in the library that matches the observations. We discuss the general properties of the models and in particular the 10mic. silic...
Directory of Open Access Journals (Sweden)
Jiateng Guo
2016-02-01
Full Text Available Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and is not necessarily robust. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data. This method could also be used in other fields of study, including mining geology and urban geotechnical investigations.
Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech
Directory of Open Access Journals (Sweden)
Petráš Rudolf
2014-06-01
Full Text Available Height-diameter models define the general relationship between the tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), Silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and three-parameter Korf functions. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, so the final growth model has only one or two parameters to be estimated. These “free” parameters are then expressed over the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5 to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. As many as 9 regression models were derived for each species. Although the “goodness of fit” of all models showed that they were generally well suited for the data, the best results were obtained for silver fir: the coefficient of determination ranged from 0.946 to 0.948, RMSE (m) was in the interval 1.94-1.97 and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the regression parameters obtained for beech were the least precise: the coefficient of determination for beech was 0.854-0.860, RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf’s formula produced slightly better estimations than Michailoff’s, and it proved immaterial which estimated parameter was fixed and which parameters
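A minimal sketch of how such a height-diameter function can be fitted, using the two-parameter Michailoff form h(d) = 1.3 + a·exp(-b/d) named above; the data below are synthetic, not the Slovak plot measurements used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the two-parameter Michailoff height-diameter function
# h(d) = 1.3 + a * exp(-b / d) to synthetic (diameter, height) pairs.
# True parameters a=35, b=12 are illustrative assumptions.

def michailoff(d, a, b):
    return 1.3 + a * np.exp(-b / d)

rng = np.random.default_rng(0)
d = rng.uniform(8, 60, 200)                                  # diameters (cm)
h = michailoff(d, 35.0, 12.0) + rng.normal(0, 0.8, d.size)   # heights (m)

(a, b), _ = curve_fit(michailoff, d, h, p0=(30.0, 10.0))
print(round(a, 1), round(b, 1))
```

The generalized models in the paper go one step further and express these free parameters through the quadratic mean diameter, mean height, and stand age.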
Motion of the Philippine Sea plate consistent with the NUVEL-1A model
Zang, Shao Xian; Chen, Qi Yong; Ning, Jie Yuan; Shen, Zheng Kang; Liu, Yong Gang
2002-09-01
We determine Euler vectors for 12 plates, including the Philippine Sea plate (PH), relative to the fixed Pacific plate (PA) by inverting the earthquake slip vectors along the boundaries of the Philippine Sea plate, GPS observed velocities, and 1122 data from the NUVEL-1 and the NUVEL-1A global plate motion model, respectively. This analysis thus also yields Euler vectors for the Philippine Sea plate relative to adjacent plates. Our results are consistent with observed data and can satisfy the geological and geophysical constraints along the Caroline (CR)-PH and PA-CR boundaries. The results also give insight into internal deformation of the Philippine Sea plate. The area enclosed by the Ryukyu Trench-Nankai Trough, Izu-Bonin Trench and GPS stations S102, S063 and Okino Torishima moves uniformly as a rigid plate, but the areas near the Philippine Trench, Mariana Trough and Yap-Palau Trench have obvious deformation.
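The rigid-plate kinematics underlying such Euler-vector inversions reduce to v = ω × r. The sketch below evaluates this forward relation for an illustrative Euler pole (not the paper's PH-PA solution).

```python
import numpy as np

# Surface velocity predicted by a rigid-plate Euler vector: v = omega x r.
# Pole position and rotation rate below are illustrative assumptions.

R_EARTH = 6371e3  # Earth radius, m

def surface_velocity(lat, lon, pole_lat, pole_lon, rate_deg_per_myr):
    """Earth-fixed Cartesian velocity (m/yr) at a surface point."""
    def unit(latd, lond):
        la, lo = np.radians([latd, lond])
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    omega = np.radians(rate_deg_per_myr) / 1e6 * unit(pole_lat, pole_lon)  # rad/yr
    r = R_EARTH * unit(lat, lon)
    return np.cross(omega, r)

# a point in the Philippine Sea region, with a made-up pole at (60N, 30W)
v = surface_velocity(20.0, 135.0, 60.0, -30.0, 1.0)
print(np.linalg.norm(v))  # speed in m/yr
```

Inverting for Euler vectors amounts to finding the ω per plate pair that best reproduces observed slip vectors and GPS velocities through this relation.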
Plasma Processes : A self-consistent kinetic modeling of a 1-D, bounded, plasma in equilibrium
Indian Academy of Sciences (India)
Monojoy Goswami; H Ramachandran
2000-11-01
A self-consistent kinetic treatment is presented here, where the Boltzmann equation is solved for a particle conserving Krook collision operator. The resulting equations have been implemented numerically. The treatment solves for the entire quasineutral column, making no assumptions about λmfp/L, where λmfp is the ion-neutral collision mean free path and L is the size of the device. Coulomb collisions are neglected in favour of collisions with neutrals, and the particle source is modeled as a uniform Maxwellian. Electrons are treated as an inertialess but collisional fluid. The ion distribution function for the trapped and the transiting orbits is obtained. Interesting findings include the anomalous heating of ions as they approach the presheath, the development of strongly non-Maxwellian features near the last mean free path, and strong modifications of the sheath criterion.
Consistent treatment of viscoelastic effects at junctions in one-dimensional blood flow models
Müller, Lucas O.; Leugering, Günter; Blanco, Pablo J.
2016-06-01
While the numerical discretization of one-dimensional blood flow models for vessels with viscoelastic wall properties is widely established, there is still no clear approach on how to couple one-dimensional segments that compose a network of viscoelastic vessels. In particular for Voigt-type viscoelastic models, assumptions with regard to boundary conditions have to be made, which normally result in neglecting the viscoelastic effect at the edge of vessels. Here we propose a coupling strategy that takes advantage of a hyperbolic reformulation of the original model and the inherent information of the resulting system. We show that applying proper coupling conditions is fundamental for preserving the physical coherence and numerical accuracy of the solution in both academic and physiologically relevant cases.
Pranger, C. C.; Le Pourhiet, L.; May, D.; van Dinther, Y.; Gerya, T.
2016-12-01
Subduction zones evolve over millions of years. The state of stress, the distribution of materials, and the strength and structure of the interface between the two plates is intricately tied to a host of time-dependent physical processes, such as damage, friction, (nonlinear) viscous relaxation, and fluid migration. In addition, the subduction interface has a complex three-dimensional geometry that evolves with time and can adjust in response to a changing stress environment or in response to impinging topographical features, and can even branch off as a splay fault. All in all, the behaviour of (large) earthquakes at the millisecond to minute timescale is heavily dependent on the pattern of stress accumulation during the 100 year inter-seismic period, the events occurring on or near the interface in the past thousands of years, as well as the extended geological history of the region. We address the aforementioned modeling requirements by developing a self-consistent 3D staggered grid finite difference continuum description of motion, thermal advection-diffusion, and poro-visco-elastic two-phase flow. Faults are modelled as plastic shear bands that can develop and evolve in response to a changing stress environment without having a prescribed geometry. They obey a Mohr-Coulomb or Drucker-Prager yield criterion and a rate-and-state friction law. For a sound treatment of plasticity, we borrow elements from mechanical engineering, and extend these with high-quality nonlinear iteration schemes and adaptive time-stepping to resolve the rupture process at all time scales. We will present these techniques together with proof-of-concept examples of self-consistently developing seismic cycles in 2D and 3D, including phases of stress accumulation, fault nucleation, dynamic rupture, and healing.
Energy Technology Data Exchange (ETDEWEB)
BRANNON,REBECCA M.
2000-11-01
A theory is developed for the response of moderately porous solids (no more than ≈20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths. As will be discussed in a companion
Gollmer, Anita; Regensburger, Johannes; Maisch, Tim; Bäumler, Wolfgang
2013-07-21
The interaction of singlet oxygen ((1)O2) generated in a photosensitized process with well-known reference photosensitizers Perinaphthenone (PN) and TMPyP is investigated in a model system consisting of fatty acids and the respective exogenous photosensitizer (PS) in solution by direct detection of the luminescence photons of (1)O2 at 1270 nm. Such a model system is a first approach to mimic the complex environment of (1)O2 in a biological cell which consists mainly of water, proteins, sugars and lipids. Firstly, the important issue of oxygen consumption is evaluated which has to be considered during luminescence detection of (1)O2. It is known that the luminescence signal of (1)O2 is dependent on the oxygen concentration of the environment. Cellular components such as lipids represent oxygen consumers due to peroxidation of their unsaturated double bonds. Secondly, the experimental conditions for this model system regarding oxygen consumption are optimized to estimate the rates and rate constants of the coupled system. Thirdly, the triplet decay of the PS can provide more precise information about the actual oxygen concentration close to the PS and can be used, therefore, as a more precise method to determine the oxygen concentration in more complex systems such as a biological cell. The aim is to get a better understanding of photosensitized reactions of (1)O2 with cellular components to further improve methodologies, in particular at a cellular level using luminescence spectroscopy. In conclusion, luminescence detection might be a helpful tool to monitor precisely and promptly changes in oxygen concentration in a complex environment.
Subdiffusion-absorption process in a system consisting of two different media
Kosztołowicz, Tadeusz
2017-02-01
Subdiffusion with reaction A + B → B is considered in a system which consists of two homogeneous media joined together; the A particles are mobile, whereas B are static. Subdiffusion and reaction parameters, which are assumed to be independent of time and space variables, can be different in both media. Particles A move freely across the border between the media. In each part of the system, the process is described by the subdiffusion-reaction equations with a fractional time derivative. By means of the method presented in this paper, we derive both the fundamental solutions (the Green's functions) P(x, t) to the subdiffusion-reaction equations and the boundary conditions at the border between the media. One of the conditions demands the continuity of a flux, and the other one contains the Riemann-Liouville fractional time derivatives ∂^{α1}P(0^+, t)/∂t^{α1} = (D1/D2) ∂^{α2}P(0^-, t)/∂t^{α2}, where the subdiffusion parameters α1, D1 and α2, D2 are defined in the regions x < 0 and x > 0, respectively.
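The Riemann-Liouville fractional derivative appearing in such boundary conditions can be approximated numerically with the Grünwald-Letnikov scheme. The sketch below is our illustration, not the paper's method; it checks the approximation against the known closed form D^α t = t^{1-α}/Γ(2-α).

```python
import math

# Grünwald-Letnikov approximation of the Riemann-Liouville fractional
# derivative: D^alpha f(t) ~ h^{-alpha} * sum_j (-1)^j C(alpha, j) f(t - j h).
# The binomial weights are built with the standard recursion.

def gl_derivative(f, t, alpha, n=4000):
    h = t / n
    w, acc = 1.0, f(t)          # w_0 = 1
    for j in range(1, n + 1):
        w *= (j - 1 - alpha) / j  # w_j = w_{j-1} * (j - 1 - alpha) / j
        acc += w * f(t - j * h)
    return acc / h**alpha

t, alpha = 2.0, 0.5
approx = gl_derivative(lambda s: s, t, alpha)
exact = t**(1 - alpha) / math.gamma(2 - alpha)
print(abs(approx - exact) < 1e-2)
```

The same scheme generalizes to the two-sided boundary condition above, where derivatives of different orders α1 and α2 are taken on either side of the interface.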
Choi, H. J.; Lee, S. B.; Lee, H. G.; Y Back, S.; Kim, S. H.; Kang, H. S.
2017-07-01
The many parts that comprise a large scientific device must be installed and operated at accurate three-dimensional location coordinates (X, Y, and Z), which requires survey and alignment. The location of the aligned parts should not change, in order to ensure that the electron beam parameters (Energy 10 GeV, Charge 200 pC, Bunch Length 60 fs, Emittance X/Y 0.481 μm/0.256 μm) of PAL-XFEL (X-ray Free Electron Laser of the Pohang Accelerator Laboratory) remain stable and the machine can be operated without any problems. As time goes by, however, the ground undergoes uplift and subsidence, which consequently deforms building floors. The deformation of the ground and buildings changes the location of several devices, including magnets and RF accelerator tubes, which eventually leads to alignment errors (∆X, ∆Y, and ∆Z). Once alignment errors occur in these parts, the electron beam deviates from its course and the beam parameters change accordingly. PAL-XFEL has installed the Hydrostatic Leveling System (HLS) to measure and record the vertical change of buildings and ground consistently and systematically, and the Wire Position System (WPS) to measure the two-dimensional changes of girders. This paper introduces the operating principle and design concept of the WPS and discusses the current situation regarding installation and operation.
Self-consistent modeling of CFETR baseline scenarios for steady-state operation
Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team
2017-07-01
Integrated modeling for core plasma is performed to increase confidence in the proposed baseline scenario in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through the consistent iterative calculation of equilibrium, transport, auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain the scenarios with q_min > 2 and fusion power of ~70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of RF current drive for the RF-only scenario is also presented. The simulation workflow for core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.
Wen, Sun; Chen, Shihua; Wang, Changping
2008-04-01
Recently, a large class of dynamical systems has been intensively investigated as models of complex networks, among which there is a class of very common systems with the property of x_k-leading asymptotic stability [R. Zhang, M. Hu, Z. Xu, Phys. Lett. A 368 (2007) 276]. In this Letter, we introduce a new complex network model consisting of such systems and then consider its global synchronization. Based on the LaSalle invariance principle, a global synchronization criterion is derived. We do not assume that the coupling matrix is symmetric or irreducible, so our model is more general than that of [R. Zhang, M. Hu, Z. Xu, Phys. Lett. A 368 (2007) 276]. Moreover, our assumption f ∈ Quad*(θ, P, α) is weaker than the assumption f ∈ Quad(D, P, α) in [W. Lu, T. Chen, Physica D 213 (2006) 214], yet it greatly improves the synchronization results. Numerical simulations with Lorenz systems as the nodes show the effectiveness of the proposed global asymptotic synchronization criteria.
Energy Technology Data Exchange (ETDEWEB)
Wen Sun [College of Mathematics and Statistics, Wuhan University, Wuhan 430072 (China)], E-mail: sunwen_2201@163.com; Chen Shihua [College of Mathematics and Statistics, Wuhan University, Wuhan 430072 (China); Wang Changping [Department of Mathematics and Statistics, Dalhousie University, Halifax NS, B3H 3J5 (Canada)
2008-04-21
Recently, a large class of dynamical systems has been intensively investigated as models of complex networks, among which there is a class of very common systems with the property of x_k-leading asymptotic stability [R. Zhang, M. Hu, Z. Xu, Phys. Lett. A 368 (2007) 276]. In this Letter, we introduce a new complex network model consisting of such systems and then consider its global synchronization. Based on the LaSalle invariance principle, a global synchronization criterion is derived. We do not assume that the coupling matrix is symmetric or irreducible, so our model is more general than that of [R. Zhang, M. Hu, Z. Xu, Phys. Lett. A 368 (2007) 276]. Moreover, our assumption f ∈ Quad*(θ, P, α) is weaker than the assumption f ∈ Quad(D, P, α) in [W. Lu, T. Chen, Physica D 213 (2006) 214], yet it greatly improves the synchronization results. Numerical simulations with Lorenz systems as the nodes show the effectiveness of the proposed global asymptotic synchronization criteria.
Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars
Airapetian, V. S.; Leake, J. E.; Carpenter, Kenneth G.
2015-01-01
We present the first magnetohydrodynamic model of chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using α Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfvén wave dissipation and wave reflection. We demonstrate that, with the effects of ion-neutral collisions on resistivity in the magnetized, weakly ionized chromospheric plasma included and an appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfvén waves can explain the non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of α Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by upward propagating non-linear Alfvén waves, are consistent with observational constraints on the net radiative losses in UV lines and the continuum from α Tau. At the top of the chromosphere, Alfvén waves experience significant reflection, producing downward propagating transverse waves that interact with upward propagating waves and produce velocity shear in the chromosphere. Our simulations also suggest that momentum deposition by non-linear Alfvén waves becomes significant in the outer chromosphere at 1 stellar radius from the photosphere. The calculated terminal velocity and mass loss rate are consistent with the observationally derived wind properties of α Tau.
Hazard-consistent ground motions generated with a stochastic fault-rupture model
Energy Technology Data Exchange (ETDEWEB)
Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-12-15
Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s² peak acceleration, which corresponds to a 10⁻⁴ to 10⁻⁵ annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground motion acceleration generated for 1000–1100 cm/s² range from 1.5 to 3.0, where the deviation is evaluated with peak ground motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to
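The selection loop the abstract describes, drawing stochastic source parameters until the simulated motion lands in the target peak-acceleration range, is essentially rejection sampling. A minimal sketch, with a hypothetical one-parameter PGA model standing in for the full stochastic fault-rupture simulation:

```python
import math
import random

def sample_until_target(pga_model, target=(700.0, 1100.0),
                        max_tries=100_000, rng=None):
    """Draw stochastic source parameters until the simulated peak ground
    acceleration (PGA, cm/s^2) falls in the target range.

    pga_model maps a sampled stress drop (MPa) to a PGA; it is a
    hypothetical stand-in for the full fault-rupture simulation.
    Returns (stress_drop, pga, number_of_tries).
    """
    rng = rng or random.Random(42)
    for n in range(1, max_tries + 1):
        # stress drop sampled lognormally around a 10 MPa median
        stress_drop = rng.lognormvariate(math.log(10.0), 0.5)
        pga = pga_model(stress_drop)
        if target[0] <= pga <= target[1]:
            return stress_drop, pga, n
    raise RuntimeError("no qualifying ground motion found")

# toy scaling PGA ~ stress_drop^(2/3), anchored so 10 MPa -> ~600 cm/s^2
result = sample_until_target(lambda sd: 600.0 * (sd / 10.0) ** (2.0 / 3.0))
```

With realistic simulators the acceptance rate is low, which is why the study needed roughly 110,000 simulations for 200 accepted motions.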
Genome scale models of yeast: towards standardized evaluation and consistent omic integration
DEFF Research Database (Denmark)
Sanchez, Benjamin J.; Nielsen, Jens
2015-01-01
Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are currently used for metabolic engineering and elucidating biological interactions. Here we review the history of yeast's GEMs, focusing on recent developments. We study how these models are typically evaluated, using both descriptive and predictive metrics. Additionally, we analyze the different ways in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted.
Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation
Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras
2016-04-01
into one algorithm. We are working towards presenting the first benchmarked 3D dynamic rupture models as an important step towards seismic cycle modelling of megathrust segmentation in a three-dimensional subduction setting with slow tectonic loading, self consistent fault development, and spontaneous seismicity.
Saro, A.; De Lucia, G.; Borgani, S.; Dolag, K.
2010-08-01
We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical smoothed particle hydrodynamics (SPH) simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. This simplified comparison is thus not meant to be compared with observational data, but is aimed at understanding the level of agreement, at the stripped-down level considered, between two techniques that are widely used to model galaxy formation in a cosmological framework and which present complementary advantages and disadvantages. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: (i) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; (ii) while all stars associated with the BCG were formed in its progenitors in the SAM used here, this holds true only for half of the final BCG stellar mass in the SPH simulation, the remaining half being contributed by tidal stripping of stars from the diffuse stellar component associated with galaxies accreted on the cluster halo; (iii) SPH satellites can lose up to 90 per cent of their stellar mass at the time of accretion, due to tidal stripping, a process not included in the SAM used in this paper; (iv) in the SPH simulation, significant cooling occurs on the most massive satellite galaxies and this lasts for up to 1 Gyr after accretion. This physical process is
DEFF Research Database (Denmark)
Cachorro, Irene Albacete; Daraban, Iulia Maria; Lainé, Guillaume
2013-01-01
and absorption heat pump. The model is validated using data available in the open literature. Overall, this system shows better performance in terms of efficiency and CO2 emissions compared with cogeneration or tri-generation systems. Specifically, it is better suited to applications, such as the food industry, where
Sasaki, T.; Iba, D.; Hongu, J.; Nakamura, M.; Moriwaki, I.
2016-09-01
This paper presents an experimental performance evaluation of a new control system for active mass dampers (AMDs). The proposed control system consists of a position controller and a neural oscillator, and is designed to solve the stroke-limitation problem of the auxiliary mass of an AMD. The neural oscillator, synchronizing with the response of a structure, generates a signal that is used to switch the motion direction of the auxiliary mass and to set its travel distance. According to the generated signal, the position controller drives the auxiliary mass to the target values; the reaction force resulting from the movement of the auxiliary mass is transmitted to the structure and reduces its vibration amplitude. Our previous results showed that the proposed system could reduce the vibration of the structure while the motion of the auxiliary mass was kept within the restriction; however, the control performance was evaluated only numerically. To put the proposed system to practical use, it should be evaluated experimentally. This paper starts by illustrating the relations among the subsystems of the proposed system, and then shows experimental responses of a structure model with the AMD excited by earthquakes on a shaker to confirm the validity of the system.
A self-consistent first-principle based approach to model carrier mobility in organic materials
Energy Technology Data Exchange (ETDEWEB)
Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of each molecule's environment. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight-binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
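Workflows of this kind typically feed the extracted electronic coupling, reorganization energy and site-energy differences into a hopping-rate expression. The abstract does not name the rate formula; a semi-classical Marcus rate is a common choice, sketched here with hypothetical parameter values:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
KB   = 1.380649e-23      # Boltzmann constant, J/K
EV   = 1.602176634e-19   # joules per electron-volt

def marcus_rate(j_ev, lam_ev, dg_ev, temp=300.0):
    """Marcus hopping rate (1/s) between two molecular sites.

    j_ev   : electronic coupling (eV)
    lam_ev : reorganization energy (eV)
    dg_ev  : site-energy difference (eV)
    All parameter values below are illustrative, not taken from the paper.
    """
    j, lam, dg = j_ev * EV, lam_ev * EV, dg_ev * EV
    kt = KB * temp
    prefactor = (2.0 * math.pi / HBAR) * j * j / math.sqrt(4.0 * math.pi * lam * kt)
    return prefactor * math.exp(-(dg + lam) ** 2 / (4.0 * lam * kt))

# typical small-molecule organic semiconductor ballpark
k = marcus_rate(j_ev=0.01, lam_ev=0.2, dg_ev=0.0)
```

Energetic disorder enters through the spread of `dg_ev` across site pairs, which is why the on-site energies computed self-consistently in the embedding step matter so much for the final mobility.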
Self-consistent Keldysh approach to quenches in the weakly interacting Bose-Hubbard model
Lo Gullo, N.; Dell'Anna, L.
2016-11-01
We present a nonequilibrium Green's function approach to study the dynamics following a quench in the weakly interacting Bose-Hubbard model (BHM). The technique is based on the self-consistent solution of a set of equations that represents a particular case of the most general set of Hedin's equations for the interacting single-particle Green's function. We use the ladder approximation as a skeleton diagram for the two-particle scattering amplitude, which, through the self-energy in the Dyson equation, yields the interacting single-particle Green's function. This scheme is then implemented numerically in a parallelized code. We exploit this approach to study the propagation of correlations after a quench in the interaction parameter, in one and two dimensions. In particular, we show how our approach is able to recover the crossover from the ballistic to the diffusive regime with increasing boson-boson interaction. Finally, we discuss the role of a thermal initial state on the dynamics for both one- and two-dimensional BHMs, finding that, surprisingly, at high temperature a ballistic evolution is restored.
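The self-consistent structure (Green's function feeds the self-energy, the Dyson equation returns an updated Green's function, iterate to convergence) can be illustrated with a toy scalar fixed-point loop. The self-energy functional below is a hypothetical stand-in, not the ladder approximation of the paper:

```python
def solve_dyson(g0, sigma_of_g, mix=0.5, tol=1e-10, max_iter=500):
    """Fixed-point iteration for a Dyson-type equation G = G0 + G0*Sigma(G)*G.

    Toy scalar version: sigma_of_g is a user-supplied self-energy functional.
    Linear mixing damps the update for numerical stability.
    """
    g = g0
    for _ in range(max_iter):
        g_new = g0 + g0 * sigma_of_g(g) * g      # Dyson update
        g_new = mix * g_new + (1.0 - mix) * g    # linear mixing
        if abs(g_new - g) < tol:
            return g_new
        g = g_new
    raise RuntimeError("Dyson iteration did not converge")

# illustrative case: Sigma(G) = U*G (Hartree-like), G0 = 1/(w - e0)
U, w, e0 = 0.1, 2.0, 1.0
g0 = 1.0 / (w - e0)
g = solve_dyson(g0, lambda x: U * x)
```

In the actual method the scalar `g` becomes a two-time Green's function on the Keldysh contour and each iteration is a full solution of the Kadanoff-Baym-type equations, but the convergence logic is the same.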
How consistent is cloudiness over Canada from satellite observations and modeling data?
Trishchenko, A. P.; Khlopenkov, K.; Latifovic, R.
2004-05-01
Being among the major modulators of the radiation budget and hydrological cycle, clouds remain a significant challenge for modeling and satellite retrievals. For example, our analysis shows that for Western Canada the systematic difference in total cloud amounts between the NCAR/NCEP Reanalysis-2 and ISCCP reaches 20-30 per cent. Satellite retrievals are especially difficult for Northern climate regions over snow-covered surfaces and during night-time. To better understand these differences and their influence on the Earth's radiation budget at Northern latitudes, we are undertaking a re-analysis of satellite AVHRR data over Canada using improved data processing and cloud detection algorithms. Details of the cloud detection algorithm for day-time and night-time conditions over snow-free and snow-covered surfaces are discussed. Selected results of satellite retrievals for typical summer and winter conditions over Canada are compared to previous analyses, such as the ISCCP and Pathfinder projects. Consistency between our cloud retrievals using AVHRR data and those available from MODIS will also be considered.
Consistency analysis for the performance of planar detector systems used in advanced radiotherapy
Directory of Open Access Journals (Sweden)
Kanan Jassal
2015-03-01
Purpose: To evaluate the performance, in terms of consistency, of a-Si EPID and ion-chamber array detectors for dose verification in advanced radiotherapy. Methods: Planar measurements were made for 250 patients using an ion-chamber array and an a-Si EPID. For pre-treatment verification, the plans were generated on the phantom for re-calculation of doses. The γ-evaluation method, with the criteria dose-difference (DD) ≤ 3% and distance-to-agreement (DTA) ≤ 3 mm, was used for the comparison of measurements. In addition, the central axis (CAX) doses were measured using a 0.125 cc ion chamber and were compared with the central chamber of the array and the dose value correlated with the central pixel of the EPID image. Two types of statistical approach were applied for the analysis. Conventional statistics used analysis of variance (ANOVA) and the unpaired t-test to evaluate the performance of the detectors, while statistical process control (SPC) was utilized to study the statistical variation of the measured data. Control charts (CC) based on the average, standard deviation, and exponentially weighted moving averages (EWMA) were prepared. The capability index (Cpm) was determined as an indicator of the performance consistency of the two systems. Results: Array and EPID measurements had average gamma pass rates of 99.9% ± 0.15% and 98.9% ± 1.06%, respectively. For the point doses, the 0.125 cc chamber results were within 2.1% ± 0.5% of the central chamber of the array. Similarly, CAX doses from the EPID and the chamber matched within 1.5% ± 0.3%. The control charts showed that both detectors were performing optimally and all data points were within ± 5%. The EWMA charts revealed that both detectors had a slow drift about the process mean, but it was well within ± 3%. Further, the higher Cpm values for the EPID demonstrate its higher efficiency for radiotherapy techniques. Conclusion: The performance of both detectors was seen to be of high quality irrespective of the
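For reference, the γ-evaluation combines the DD and DTA criteria into a single index per measurement point, with a point passing when γ ≤ 1. A minimal 1-D sketch with global normalization (the array/EPID comparison itself is 2-D, and the exact normalization convention used in the study is an assumption here):

```python
import numpy as np

def gamma_index(ref_dose, eval_dose, positions, dd=0.03, dta=3.0):
    """Global 1-D gamma index, a simplified sketch.

    ref_dose, eval_dose : dose profiles sampled on the same grid
    positions           : spatial coordinates (mm)
    dd                  : dose-difference criterion as a fraction of max dose
    dta                 : distance-to-agreement criterion (mm)
    """
    norm = dd * ref_dose.max()               # global dose normalization
    gammas = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        dist2 = ((positions - x_r) / dta) ** 2          # DTA term
        dose2 = ((eval_dose - d_r) / norm) ** 2         # DD term
        gammas[i] = np.sqrt((dist2 + dose2).min())      # best match wins
    return gammas

# identical profiles pass everywhere (gamma == 0)
x = np.linspace(0.0, 50.0, 51)
d = np.exp(-((x - 25.0) / 10.0) ** 2)
g = gamma_index(d, d, x)
pass_rate = 100.0 * np.mean(g <= 1.0)
```

The pass rates quoted in the abstract (99.9% and 98.9%) are the fraction of points with γ ≤ 1 under the 3%/3 mm criteria.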
Functional connectivity modeling of consistent cortico-striatal degeneration in Huntington's disease
Directory of Open Access Journals (Sweden)
Imis Dogan
2015-01-01
Huntington's disease (HD) is a progressive neurodegenerative disorder characterized by a complex neuropsychiatric phenotype. In a recent meta-analysis we identified core regions of consistent neurodegeneration in premanifest HD in the striatum and middle occipital gyrus (MOG). For early manifest HD, convergent evidence of atrophy was most prominent in the striatum, motor cortex (M1) and inferior frontal junction (IFJ). The aim of the present study was to functionally characterize this topography of brain atrophy and to investigate the differential connectivity patterns formed by consistent cortico-striatal atrophy regions in HD. Using areas of striatal and cortical atrophy at different disease stages as seeds, we performed task-free resting-state and task-based meta-analytic connectivity modeling (MACM). MACM utilizes the large data source of the BrainMap database and identifies significant areas of above-chance co-activation with the seed region via the activation-likelihood-estimation approach. In order to delineate the functional networks formed by cortical as well as striatal atrophy regions, we computed the conjunction between the co-activation profiles of striatal and cortical seeds in the premanifest and manifest stages of HD, respectively. Functional characterization of the seeds was obtained using the behavioral meta-data of BrainMap. Cortico-striatal atrophy seeds of the premanifest stage of HD showed common co-activation with a rather cognitive network including the striatum, anterior insula, lateral prefrontal, premotor, supplementary motor and parietal regions. A similar but more pronounced co-activation pattern, additionally including the medial prefrontal cortex and thalamic nuclei, was found with striatal and IFJ seeds at the manifest HD stage. The striatum and M1 were functionally connected mainly to premotor and sensorimotor areas, posterior insula, putamen and thalamus. Behavioral characterization of the seeds confirmed that experiments
Dosen, Strahinja; Markovic, Marko; Wille, Nicola; Henkel, Markus; Koppe, Mario; Ninu, Andrei; Frömmel, Cornelius; Farina, Dario
2015-06-01
Prosthesis users usually agree that myoelectric prostheses should be equipped with somatosensory feedback. However, the exact role of feedback and its potential benefits are still elusive. The current study investigates the nature of human control processes within the specific context of routine grasping. Although the latter involves fast feedforward control of the grasping force, the assumption was that feedback would still be useful: it would communicate the outcome of each grasping trial, which the subjects could use to learn an internal model for feedforward control. Nine able-bodied subjects repeatedly produced a desired level of grasping force using different control configurations: feedback versus no feedback, virtual versus real prosthetic hand, and joystick versus myocontrol. The outcome measures were the median and dispersion of the relative force errors. The results demonstrated that the feedback was successful in limiting the variability of routine grasping due to uncertainties in the system and/or the command interface. The internal models of feedforward control could be employed by the subjects to control the prosthesis without loss of performance even after the force feedback was removed. The models were, however, unstable over time, especially with myocontrol. Overall, the study demonstrates that the prosthesis system can be learned by the subjects using feedback. The feedback is also essential to maintain the model, and it could be delivered intermittently. This approach has practical advantages, but the level to which this mechanism can be truly exploited in practice depends directly on the consistency of the prosthesis control interface.
Mutual Inductance Problem for a System Consisting of a Current Sheet and a Thin Metal Plate
Fulton, J. P.; Wincheski, B.; Nath, S.; Namkung, M.
1993-01-01
Rapid inspection of aircraft structures for flaws is of vital importance to the commercial and defense aircraft industry. In particular, inspecting thin aluminum structures for flaws is the focus of a large-scale R&D effort in the nondestructive evaluation (NDE) community. Traditional eddy current methods used today are effective but require long inspection times. New electromagnetic techniques that monitor the normal component of the magnetic field above a sample, using a sheet of current as the excitation, seem promising. This paper is an attempt to understand and analyze the magnetic field distribution due to a current sheet above an aluminum test sample. A simple theoretical model, coupled with a two-dimensional finite element model (FEM) and experimental data, will be presented in the next few sections. A current sheet above a conducting sample generates eddy currents in the material, while a sensor above the current sheet or between the two plates monitors the normal component of the magnetic field. A rivet, or a surface flaw near a rivet, in an aircraft aluminum skin will disturb the magnetic field, which is imaged by the sensor. Initial results showed a strong dependence of the flaw-induced normal magnetic field strength on the thickness and conductivity of the current sheet that could not be accounted for by skin-depth attenuation alone. It was initially believed that the eddy current imaging method explained this dependence on the thickness and conductivity of the current sheet. Further investigation suggested that the complexity associated with the mutual inductance of the system needed to be studied. The next section gives an analytical model to better understand this phenomenon.
National Energy Outlook Modelling System
Energy Technology Data Exchange (ETDEWEB)
Volkers, C.M. [ECN Policy Studies, Petten (Netherlands)
2013-12-15
For over 20 years, the Energy research Centre of the Netherlands (ECN) has been developing the National Energy Outlook Modelling System (NEOMS) for Energy projections and policy evaluations. NEOMS enables 12 energy models of ECN to exchange data and produce consistent and detailed results.
DEFF Research Database (Denmark)
Sogachev, Andrey; Kelly, Mark C.; Leclerc, Monique Y.
2012-01-01
A self-consistent two-equation closure treating buoyancy and plant drag effects has been developed, through consideration of the behaviour of the supplementary equation for the length-scale-determining variable in homogeneous turbulent flow. Being consistent with the canonical flow regimes of gri...
Cellier, Francois E.
1991-01-01
A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.
Thermal X-ray emission from a baryonic jet: a self-consistent multicolour spectral model
Khabibullin, Ildar; Sazonov, Sergey
2015-01-01
We present a publicly available spectral model for thermal X-ray emission from a baryonic jet in an X-ray binary system, inspired by the microquasar SS 433. The jet is assumed to be strongly collimated (half-opening angle Θ ~ 1°) and mildly relativistic (bulk velocity β = V_b/c ~ 0.03-0.3). Its X-ray spectrum is found by integrating over thin slices of constant temperature, radiating in the optically thin coronal regime. The temperature profile along the jet and the corresponding differential emission measure distribution are calculated with full account of gas cooling due to expansion and radiative losses. Since the model predicts both the spectral shape and the luminosity of the jet's emission, its normalisation is not a free parameter if the source distance is known. We also explore the possibility of using simple X-ray observables (such as flux ratios in different energy bands) to constrain physical parameters of the jet (e.g. gas temperature and density at its base) without broad-band fitting of...
A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection
Directory of Open Access Journals (Sweden)
Chia-Hua Cheng
2017-08-01
In today’s rapidly changing economy, technology companies have to make decisions on research and development (R&D) project investment on a routine basis, with such decisions having a direct impact on the company’s profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors, such as resource limitations and differences in assessment, taking into account both qualitative and quantitative criteria. Often, differences in perception by the various stakeholders hinder the attainment of a consensus of opinion and coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of the complex criteria, taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause-and-effect relations among the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combined approach is then used to assist an international brand-name company in prioritizing projects and making project decisions that will maximize returns and ensure sustainability for the company.
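The core idea behind CFPR is that only n-1 pairwise judgements are elicited and the rest of the fuzzy preference matrix is completed by additive transitivity, so inconsistency cannot arise by construction. A sketch with hypothetical judgement values (the [0, 1] rescaling step applied when completed values spill outside the unit interval is omitted):

```python
import numpy as np

def cfpr_matrix(adjacent):
    """Complete a fuzzy preference relation from n-1 adjacent comparisons
    p(i, i+1) using additive transitivity: p_ik = p_ij + p_jk - 0.5.

    adjacent[i] is the expert's preference of alternative i over i+1,
    a value in [0, 1] with 0.5 meaning indifference.
    """
    n = len(adjacent) + 1
    p = np.full((n, n), 0.5)                 # diagonal: indifference
    for i in range(n):
        for j in range(i + 1, n):
            # chain the adjacent judgements along i..j
            p[i, j] = sum(adjacent[i:j]) - (j - i - 1) / 2.0
            p[j, i] = 1.0 - p[i, j]          # additive reciprocity
    return p

adj = [0.6, 0.7, 0.55]                       # hypothetical expert judgements
P = cfpr_matrix(adj)                         # 4x4 consistent relation
```

This is why CFPR-ANP needs fewer comparisons than classical ANP: the expert answers n-1 questions per criterion cluster instead of n(n-1)/2, and no consistency-ratio repair loop is required.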
Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O
2015-08-25
Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.
Self-consistent modelling of line-driven hot-star winds with Monte Carlo radiation hydrodynamics
Noebauer, U M
2015-01-01
Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite-volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multi-line effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. In this work, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach are demonstrated...
Wan, Li; Xu, Shixin; Liao, Maijia; Liu, Chun; Sheng, Ping
2014-01-01
In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the "charge regulation" behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied electric field, a
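The overlap of Debye screening layers in narrow channels described above can be illustrated with a toy calculation. The sketch below is an illustrative finite-difference solution of the *linearized* Poisson-Boltzmann equation, not the authors' CCPB formulation, and all parameter values are hypothetical:

```python
import numpy as np

def pb_channel(width, lambda_d, phi_s=1.0, n=201):
    """Finite-difference solution of the linearized Poisson-Boltzmann equation
    phi'' = phi / lambda_d**2 on a 1D channel, with the surface potential
    phi_s held fixed at both walls (illustrative sketch only)."""
    x = np.linspace(0.0, width, n)
    h = x[1] - x[0]
    a = np.zeros((n, n))
    b = np.zeros(n)
    a[0, 0] = a[n - 1, n - 1] = 1.0          # Dirichlet conditions at the walls
    b[0] = b[n - 1] = phi_s
    for i in range(1, n - 1):
        a[i, i - 1] = a[i, i + 1] = 1.0 / h**2
        a[i, i] = -2.0 / h**2 - 1.0 / lambda_d**2
    return x, np.linalg.solve(a, b)

# Wide channel: the potential is screened to ~0 at mid-channel.
# Narrow channel (width ~ Debye length): the screening layers overlap,
# so the mid-channel potential stays a sizable fraction of the wall value.
_, phi_wide = pb_channel(width=20.0, lambda_d=1.0)
_, phi_narrow = pb_channel(width=2.0, lambda_d=1.0)
```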
Pluralistic Modeling of Complex Systems
Helbing, Dirk
2010-01-01
The modeling of complex systems such as ecological or socio-economic systems can be very challenging. Although various modeling approaches exist, they are generally neither compatible nor mutually consistent, and empirical data often do not allow one to decide which model is the right, best, or most appropriate one. Moreover, as the recent financial and economic crisis shows, relying on a single, idealized model can be very costly. This contribution tries to shed new light on problems that arise when complex systems are modeled. While the arguments can be transferred to many different systems, the related scientific challenges are illustrated for social, economic, and traffic systems. The contribution discusses issues that are sometimes overlooked and tries to overcome some frequent misunderstandings and controversies of the past. At the same time, it is highlighted how some long-standing scientific puzzles may be solved by considering non-linear models of heterogeneous agents with spatio-temporal interactions...
Self-consistent, axisymmetric two-integral models of elliptical galaxies with embedded nuclear discs
van den Bosch, Frank C.; de Zeeuw, P. Tim
1996-01-01
Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study the influence of seeing convolution. The rotation curve of a nuclear disc gives an excellent measure of the central mass-to-light ratio whenever the VPs clearly reveal the narrow, rapidly rotating component associated with the nuclear disc. Steep cusps and seeing convolution both result in central VPs that are dominated by the bulge light, and these VPs barely show the presence of the nuclear disc, impeding measurements of the central rotation velocities of the disc stars. However, if a massive BH is present, the disc compo...
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
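The role of information criteria in penalizing model complexity can be sketched in a few lines. AIC with iid Gaussian errors is assumed here purely for illustration; the paper pairs such criteria with additional hydrologic-consistency constraints:

```python
import math

def aic_gaussian(residuals, n_params):
    """AIC = 2k - 2 ln(L), with ln(L) the maximized log-likelihood for iid
    Gaussian errors and MLE variance sse/n (illustrative special case)."""
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    log_l = -0.5 * n * (math.log(2.0 * math.pi * sse / n) + 1.0)
    return 2.0 * n_params - 2.0 * log_l

# Two candidate structures with identical residuals: the more
# parsimonious one (fewer parameters) gets the lower, i.e. better, AIC.
res = [0.5, -0.3, 0.2, -0.1, 0.4, -0.2]
```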
Graham, Mark J.; Naqvi, Zoon; Encandela, John A.; Bylund, Carma L.; Dean, Randa; Calero-Breckheimer, Ayxa; Schmidt, Hilary J.
2009-01-01
In many parts of the world the practice of medicine and medical education increasingly focus on providing patient care within the context of the larger healthcare system. Our purpose is to solicit the perceptions of all professional stakeholders (e.g. nurses) of the system regarding the U.S. ACGME competency Systems-Based Practice to uncover the extent to…
Energy Technology Data Exchange (ETDEWEB)
Ming, Y; Ramaswamy, V; Donner, L J; Phillips, V T; Klein, S A; Ginoux, P A; Horowitz, L H
2005-05-02
This paper describes a self-consistent prognostic cloud scheme that is able to predict cloud liquid water, amount and droplet number (N_d) from the same updraft velocity field, and is suitable for modeling aerosol-cloud interactions in general circulation models (GCMs). In the scheme, the evolution of droplets fully interacts with the model meteorology. An explicit treatment of cloud condensation nuclei (CCN) activation allows the scheme to take into account the contributions to N_d of multiple types of aerosol (i.e., sulfate, organic and sea-salt aerosols) and kinetic limitations of the activation process. An implementation of the prognostic scheme in the Geophysical Fluid Dynamics Laboratory (GFDL) AM2 GCM yields a vertical distribution of N_d with maxima in the lower troposphere, differing from that obtained through diagnosing N_d empirically from sulfate mass concentrations. As a result, the agreement of model-predicted present-day cloud parameters with satellite measurements is improved compared to using diagnosed N_d. The simulations with pre-industrial and present-day aerosols show that the combined first and second indirect effects of anthropogenic sulfate and organic aerosols give rise to a global annual mean flux change of -1.8 W m^-2, consisting of -2.0 W m^-2 in shortwave and 0.2 W m^-2 in longwave, as the model response alters the cloud field and subsequently the longwave radiation. Liquid water path (LWP) and total cloud amount increase by 19% and 0.6%, respectively. Largely owing to high sulfate concentrations from fossil fuel burning, the Northern Hemisphere mid-latitude land and oceans experience strong cooling. So does the tropical land, which is dominated by biomass-burning organic aerosol. The Northern/Southern Hemisphere and land/ocean ratios are 3.1 and 1.4, respectively. The calculated annual zonal mean flux changes are determined to be statistically significant, exceeding the model's natural
Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.
2003-01-01
In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...
Using open sidewalls for modelling self-consistent lithosphere subduction dynamics
Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.
2012-01-01
Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free
Directory of Open Access Journals (Sweden)
Athanasios A. Pantelous
2010-01-01
In some interesting applications in control and system theory, linear descriptor (singular) matrix differential equations of higher order with time-invariant coefficients and (non-)consistent initial conditions have been used. In this paper, we provide a study of the solution properties of a more general class of Apostol-Kolodner-type equations with consistent and non-consistent initial conditions.
Tsai, Min-hsiu
2012-01-01
This study investigates the consistency between human raters and an automated essay scoring system in grading high school students' English compositions. A total of 923 essays from 23 classes of 12 senior high schools in Taiwan (Republic of China) were obtained and scored manually and electronically. The results show that the consistency between…
Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.
2009-01-01
This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…
Self-consistent tight-binding model of B and N doping in graphene
DEFF Research Database (Denmark)
Pedersen, Thomas Garm; Pedersen, Jesper Goor
2013-01-01
Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization...
Directory of Open Access Journals (Sweden)
P.-P. Mathieu
2012-08-01
The terrestrial biosphere is currently a strong sink for anthropogenic CO2 emissions. Through the radiative properties of CO2, the strength of this sink has a direct influence on the radiative budget of the global climate system. The accurate assessment of this sink and its evolution under a changing climate is, hence, paramount for any efficient management strategies of the terrestrial carbon sink to avoid dangerous climate change. Unfortunately, simulations of carbon and water fluxes with terrestrial biosphere models exhibit large uncertainties. A considerable fraction of this uncertainty reflects uncertainty in the parameter values of the process formulations within the models. This paper describes the systematic calibration of the process parameters of a terrestrial biosphere model against two observational data streams: remotely sensed FAPAR (fraction of absorbed photosynthetically active radiation) provided by the MERIS (ESA's Medium Resolution Imaging Spectrometer) sensor and in situ measurements of atmospheric CO2 provided by the GLOBALVIEW flask sampling network. We use the Carbon Cycle Data Assimilation System (CCDAS) to systematically calibrate some 70 parameters of the terrestrial BETHY (Biosphere Energy Transfer Hydrology) model. The simultaneous assimilation of all observations provides parameter estimates and uncertainty ranges that are consistent with the observational information. In a subsequent step these parameter uncertainties are propagated through the model to uncertainty ranges for predicted carbon fluxes. We demonstrate the consistent assimilation at global scale, where the global MERIS FAPAR product and atmospheric CO2 are used simultaneously. The assimilation improves the match to independent observations. We quantify how MERIS data improve the accuracy of the current and future (net and gross) carbon flux estimates (within and beyond the assimilation period). We further demonstrate the use of an interactive mission benefit
Directory of Open Access Journals (Sweden)
P.-P. Mathieu
2011-11-01
The terrestrial biosphere is currently a strong sink for anthropogenic CO2 emissions. Through the radiative properties of CO2, the strength of this sink has a direct influence on the radiative budget of the global climate system. The accurate assessment of this sink and its evolution under a changing climate is, hence, paramount for any efficient management strategies of the terrestrial carbon sink to avoid dangerous climate change. Unfortunately, simulations of carbon and water fluxes with terrestrial biosphere models exhibit large uncertainties. A considerable fraction of this uncertainty reflects uncertainty in the parameter values of the process formulations within the models. This paper describes the systematic calibration of the process parameters of a terrestrial biosphere model against two observational data streams: remotely sensed FAPAR provided by the MERIS sensor and in situ measurements of atmospheric CO2 provided by the GLOBALVIEW flask sampling network. We use the Carbon Cycle Data Assimilation System (CCDAS) to systematically calibrate some 70 parameters of the terrestrial biosphere model BETHY. The simultaneous assimilation of all observations provides parameter estimates and uncertainty ranges that are consistent with the observational information. In a subsequent step these parameter uncertainties are propagated through the model to uncertainty ranges for predicted carbon fluxes. We demonstrate the consistent assimilation for two different set-ups: first at site-scale, where MERIS FAPAR observations at a range of sites are used as simultaneous constraints, and second at global scale, where the global MERIS FAPAR product and atmospheric CO2 are used simultaneously. On both scales the assimilation improves the match to independent observations. We quantify how MERIS data improve the accuracy of the current and future (net and gross) carbon flux estimates (within and beyond the assimilation period). We further demonstrate the
Directory of Open Access Journals (Sweden)
Ying Jiang
2017-02-01
This paper presents a theoretical formalism for describing systems of semiflexible polymers, which can have density variations due to finite compressibility and exhibit an isotropic-nematic transition. The molecular architecture of the semiflexible polymers is described by a continuum wormlike-chain model. The non-bonded interactions are described through a functional of two collective variables, the local density and local segmental orientation tensor. In particular, the functional depends quadratically on local density-variations and includes a Maier–Saupe-type term to deal with the orientational ordering. The specified density-dependence stems from a free energy expansion, where the free energy of an isotropic and homogeneous homopolymer melt at some fixed density serves as a reference state. Using this framework, a self-consistent field theory is developed, which produces a Helmholtz free energy that can be used for the calculation of the thermodynamics of the system. The thermodynamic properties are analysed as functions of the compressibility of the model, for values of the compressibility realizable in mesoscopic simulations with soft interactions and in actual polymeric materials.
Ferrier, Ken L.; Austermann, Jacqueline; Mitrovica, Jerry X.; Pico, Tamara
2017-10-01
Sea-level changes are of wide interest because they regulate coastal hazards, shape the sedimentary geologic record and are sensitive to climate change. In areas where rivers deliver sediment to marine deltas and fans, sea-level changes are strongly modulated by the deposition and compaction of marine sediment. Deposition affects sea level by increasing the elevation of the seafloor, by perturbing crustal elevation and gravity fields and by reducing the volume of seawater through the incorporation of water into sedimentary pore space. In a similar manner, compaction affects sea level by lowering the elevation of the seafloor and by purging water out of sediments and into the ocean. Here we incorporate the effects of sediment compaction into a gravitationally self-consistent global sea-level model by extending the approach of Dalca et al. (2013). We show that incorporating compaction requires accounting for two quantities that are not included in the Dalca et al. (2013) analysis: the mean porosity of the sediment and the degree of saturation in the sediment. We demonstrate the effects of compaction by modelling sea-level responses to two simplified 122-kyr sediment transfer scenarios for the Amazon River system, one including compaction and one neglecting compaction. These simulations show that the largest effect of compaction is on the thickness of the compacting sediment, an effect that is largest where deposition rates are fastest. Compaction can also produce minor sea-level changes in coastal regions by influencing shoreline migration and the location of seawater loading, which perturbs crustal elevations. By providing a tool for modelling gravitationally self-consistent sea-level responses to sediment compaction, this work offers an improved approach for interpreting the drivers of past sea-level changes.
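The statement that compaction's largest effect is on the thickness of the sediment column can be illustrated with a toy porosity model. The sketch below assumes an Athy-type exponential porosity-depth law, a standard idealization that is not necessarily the porosity treatment of this paper; all numbers are hypothetical:

```python
import math

def compacted_thickness(solid_thickness, phi0=0.6, z0=2000.0, tol=1e-9):
    """Total thickness T of a compacted column whose pore-free (solid) content
    equals solid_thickness, with porosity phi(z) = phi0 * exp(-z / z0).
    Solid content = int_0^T (1 - phi(z)) dz = T - phi0*z0*(1 - exp(-T/z0));
    solve for T by bisection (the solid content grows monotonically with T)."""
    lo, hi = solid_thickness, solid_thickness / (1.0 - phi0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        solid = mid - phi0 * z0 * (1.0 - math.exp(-mid / z0))
        if solid < solid_thickness:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A column holding 1000 m of solid grains occupies far less space once
# compacted than the uncompacted, surface-porosity estimate (2500 m) suggests.
t = compacted_thickness(1000.0)
```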
José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido
2016-04-01
Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. Therefore, increasing the current understanding of such exceptional situations is required, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge shall eventually lead us to produce more reliable projections of the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterisation of events with return periods exceeding a few decades. This study proposes a new approach that allows studying storms based on a synthetic, but physically consistent, database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. However, downscaling the full CESM simulation at such high resolution is infeasible nowadays. Hence, a number of case studies are selected first. This selection is carried out by examining the precipitation averaged over an area encompassing Switzerland in the CESM output. Using a hydrological criterion, precipitation is accumulated over several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this data set can be used as input to hydrological models. Thus, quantile mapping is used to remove such biases. For this task
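Quantile mapping, the bias-correction step named above, can be sketched in a few lines. This is one common empirical flavour of the method; the study's exact implementation may differ, and the synthetic data below are purely illustrative:

```python
import numpy as np

def quantile_map(simulated, observed, values):
    """Empirical quantile mapping: each value is replaced by the observed
    value at the same quantile it occupies in the simulated distribution."""
    sim_sorted = np.sort(simulated)
    obs_sorted = np.sort(observed)
    # Empirical CDF rank of each value within the simulated distribution
    q = np.searchsorted(sim_sorted, values, side="right") / len(sim_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_sorted, q)

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=5000)   # stand-in "observations"
sim = obs * 1.5 + 2.0                              # model output with a wet bias
corrected = quantile_map(sim, obs, sim)            # bias largely removed
```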
A Delay Model of Multiple-Valued Logic Circuits Consisting of Min, Max, and Literal Operations
Takagi, Noboru
Delay models for binary logic circuits have been proposed and their mathematical properties clarified. Kleene's ternary logic is one of the simplest delay models for expressing the transient behavior of binary logic circuits. Goto first applied Kleene's ternary logic to hazard detection in binary logic circuits in 1948. Besides Kleene's ternary logic, there are many other delay models for binary logic circuits, such as Lewis's 5-valued logic. On the other hand, multiple-valued logic circuits have recently come to play an important role in realizing digital circuits, because, for example, they can dramatically reduce the size of a chip. Although multiple-valued logic circuits are becoming more important, there has been little discussion of delay models for multiple-valued logic circuits. In this paper, we therefore introduce a delay model for multiple-valued logic circuits constructed from Min, Max, and Literal operations, and show some of the mathematical properties of this delay model.
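Min and Max over Kleene's three values generalize AND and OR, and a transient "unknown" input that propagates to the output flags a hazard. A minimal sketch of that idea (the Literal operation is omitted, and the numeric encoding of the three values is an assumption for illustration):

```python
# Kleene's ternary values, encoded with the usual order 0 < U < 1,
# where U marks a signal in transition (unknown).
ZERO, U, ONE = 0.0, 0.5, 1.0

def t_min(a, b):      # Min generalizes AND
    return min(a, b)

def t_max(a, b):      # Max generalizes OR
    return max(a, b)

def t_not(a):         # negation keeps U fixed: 1 - 0.5 = 0.5
    return 1.0 - a

# Static-1 hazard example: f = (x AND y) OR (NOT x AND y). With y = 1, the
# steady outputs for x = 0 and x = 1 are both 1, yet while x transitions
# (x = U) the output evaluates to U, exposing the hazard.
def f(x, y):
    return t_max(t_min(x, y), t_min(t_not(x), y))
```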
Olsen, Nikki S; Shorrock, Steven T
2010-03-01
This article evaluates an adaptation of the human factors analysis and classification system (HFACS) adopted by the Australian Defence Force (ADF) to classify factors that contribute to incidents. Three field studies were undertaken to assess the reliability of HFACS-ADF in the context of a particular ADF air traffic control (ATC) unit. Study one was designed to assess inter-coder consensus between many coders for two incident reports. Study two was designed to assess inter-coder consensus between one participant and the previous original analysts for a large set of incident reports. Study three was designed to test intra-coder consistency for four participants over many months. For all studies, agreement was low at the level of both fine-level HFACS-ADF descriptors and high-level HFACS-type categories. A survey of participants suggested that they were not confident that HFACS-ADF could be used consistently. The three field studies reported suggest that the ADF adaptation of HFACS is unreliable for incident analysis at the ATC unit level, and may therefore be invalid in this context. Several reasons for the results are proposed, associated with the underlying HFACS model and categories, the HFACS-ADF adaptations, the context of use, and the conduct of the studies.
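Inter-coder consensus of the kind assessed in these studies is commonly quantified with an agreement statistic such as Cohen's kappa, which corrects raw agreement for chance; assuming kappa here is illustrative, as the article's exact statistic may differ and the category labels below are hypothetical:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from the coders' marginal category frequencies."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two coders classifying six incident factors (hypothetical labels):
a = ["skill", "skill", "violation", "skill", "decision", "violation"]
b = ["skill", "decision", "violation", "skill", "decision", "skill"]
```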
Physically-consistent wall boundary conditions for the k-ω turbulence model
DEFF Research Database (Denmark)
Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl
2010-01-01
A model solving Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used as model validation, with k measured directly from all three components of the fluctuating velocity...
Antoniu, Gabriel; Cudennec, Loïc; Monnet, Sébastien
2006-01-01
This paper addresses the problem of efficient visualization of shared data within code coupling grid applications. These applications are structured as a set of distributed, autonomous, weakly-coupled codes. We focus on the case where the codes are able to interact using the abstraction of a shared data space. We propose an efficient visualization scheme by adapting the mechanisms used to maintain the data consistency. We introduce a new operation called relaxed read, as an extension to the e...
Bigagli, Lorenzo; Papeschi, Fabrizio; Nativi, Stefano; Bastin, Lucy; Masó, Joan
2013-04-01
GeoViQua (QUAlity aware VIsualisation for the Global Earth Observation System of Systems) is an FP7 project aiming at complementing the Global Earth Observation System of Systems (GEOSS) with rigorous data quality specifications and quality-aware capabilities, in order to improve reliability in scientific studies and policy decision-making. GeoViQua main scientific and technical objective is to enhance the GEOSS Common Infrastructure (GCI) providing the user community with innovative quality-aware search and visualization tools, which will be integrated in the GEOPortal, as well as made available to other end-user interfaces. To this end, GeoViQua will promote the extension of the current standard metadata for geographic information with accurate and expressive quality indicators. The project will also contribute to the definition of a quality label, the GEOLabel, reflecting scientific relevance, quality, acceptance and societal needs. The concept of Quality Information is very broad. When talking about the quality of a product, this is not limited to geophysical quality but also includes concepts like mission quality (e.g. data coverage with respect to planning). In general, it provides an indication of the overall fitness for use of a specific type of product. Employing and extending several ISO standards such as 19115, 19157 and 19139, a common set of data quality indicators has been selected to be used within the project. The resulting work, in the form of a data model, is expressed in XML Schema Language and encoded in XML. Quality information can be stated both by data producers and by data users, actually resulting in two conceptually distinct data models, the Producer Quality model and the User Quality model (or User Feedback model). A very important issue concerns the association between the quality reports and the affected products that are target of the report. This association is usually achieved by means of a Product Identifier (PID), but actually just
Woitke, P.; Min, M.; Pinte, C.; Thi, W. -F; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.
2016-01-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on
Vertical Equating: An Empirical Study of the Consistency of Thurstone and Rasch Model Approaches.
Schratz, Mary K.
To explore the appropriateness of the Rasch model for the vertical equating of a multi-level, multi-form achievement test series, both the Rasch model and the traditional Thurstone procedures were applied to the Listening Comprehension subtest scores of the Stanford Achievement Test. Two adjacent levels of these tests were administered in 1981 to…
Self-consistent modelling of hot plasmas within non-extensive Tsallis' thermostatistics
Pain, Jean-Christophe; Gilleron, Franck
2011-01-01
A study of the effects of non-extensivity on the modelling of atomic physics in hot dense plasmas is proposed within Tsallis' statistics. The electronic structure of the plasma is calculated through an average-atom model based on the minimization of the non-extensive free energy.
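Non-extensive Tsallis statistics replace the Boltzmann exponential with the q-exponential, which recovers the ordinary exponential as q → 1. A minimal numerical sketch of that limit (illustrative only; the paper's average-atom model is far more involved):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q) x]^(1/(1-q)),
    defined as 0 where the bracket is non-positive (the q > 1 cutoff
    convention is an assumption here). Reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
```

For q close to 1 the q-exponential is numerically indistinguishable from the Boltzmann factor, while q far from 1 visibly reshapes the distribution's tail.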
Consistent Use of the Kalman Filter in Chemical Transport Models (CTMs) for Deducing Emissions
Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...
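The inverse-modeling idea, deducing emissions from observed concentrations with a Kalman filter, can be sketched for the simplest case: a scalar, constant emission rate observed through a linear CTM sensitivity. All symbols and numbers below are illustrative assumptions, not taken from the report:

```python
import random

def kalman_emission(observations, h, obs_var, x0=0.0, p0=1e6):
    """Scalar Kalman filter estimating an unknown constant emission rate E
    from noisy concentrations c = h*E + noise, where h is the CTM's
    sensitivity of concentration to emissions (toy model)."""
    x, p = x0, p0                           # state estimate and its variance
    for c in observations:
        # the state is modeled as constant, so prediction leaves (x, p) as-is
        k = p * h / (h * h * p + obs_var)   # Kalman gain
        x = x + k * (c - h * x)             # update with the innovation
        p = (1.0 - k * h) * p
    return x, p

random.seed(1)
true_e, h = 50.0, 0.2
obs = [h * true_e + random.gauss(0.0, 0.5) for _ in range(500)]
e_hat, var = kalman_emission(obs, h, obs_var=0.25)  # converges toward 50
```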
DEFF Research Database (Denmark)
Keck, Rolf-Erik; Veldkamp, Dick; Wedel-Heinen, Jens Jakob
This thesis describes the further development and validation of the dynamic meandering wake model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling capabilities... intensity. This power drop is comparable to measurements from the North Hoyle and OWEZ wind farms.
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-10-07
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations.
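The benefit of exact arithmetic can be illustrated on a toy version of the blocked-reaction test. Ignoring reversibility constraints (a simplification relative to MONGOOSE), a reaction is blocked iff its flux vanishes on the entire null space of the stoichiometric matrix, which Fractions let us decide without any floating-point round-off:

```python
from fractions import Fraction

def null_space(s):
    """Null-space basis of a rational matrix via exact Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in s]
    rows, cols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # normalize pivot row
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    basis = []
    for free in (c for c in range(cols) if c not in pivots):
        v = [Fraction(0)] * cols
        v[free] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -m[i][free]
        basis.append(v)
    return basis

def blocked_reactions(s):
    """Reactions whose flux is zero in every steady-state solution S v = 0."""
    basis = null_space(s)
    return [j for j in range(len(s[0])) if all(v[j] == 0 for v in basis)]

# Toy network: metabolites A, B, C; reactions R0: -> A, R1: A -> B,
# R2: B ->, R3: A -> C. C has no consumer, so R3 is blocked.
S = [[1, -1, 0, -1],
     [0, 1, -1, 0],
     [0, 0, 0, 1]]
```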
Jacques, Kevin; Sabariego, Ruth; Geuzaine, Christophe; Gyselinck, Johan
2015-01-01
This paper deals with the implementation of an energy-consistent ferromagnetic hysteresis model in 2D finite element computations. This vector hysteresis model relies on a strong thermodynamic foundation and ensures the closure of minor hysteresis loops. The model accuracy can be increased by controlling the number of intrinsic cell components while parameters can be easily fitted on common material measurements. Here, the native h-based material model is inverted using the Newton-Raphson met...
Towards Automatic Validation and Healing of CityGML Models for Geometric and Semantic Consistency
Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.
2013-09-01
A steadily growing number of application fields for large 3D city models have emerged in recent years. Like in many other domains, data quality is recognized as a key factor for successful business. Quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to define a sound base for data validation of 3D city models. Although the workflow for 3D city models is well established from data acquisition to processing, analysis and visualization, quality management is not yet a standard during this workflow. Processing data sets with unclear specifications leads to erroneous results and application defects. We show that this problem persists even if data are standard-compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.
Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.
2009-01-01
Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.
National Research Council Canada - National Science Library
Chen, Pei; Li, Yongjun
2016-01-01
... the serious deterioration, not only because of the high complexity of the biological system, but there may be few clues and apparent changes appearing until the catastrophic critical transition occurs...
Institute of Scientific and Technical Information of China (English)
ZHA Feng; HU Bai-qing; QIN Fang-jun; LUO Yin-bo
2012-01-01
An effective and flexible rotation and compensation scheme is designed to improve the accuracy of a rotating inertial navigation system (RINS). The accuracy of a single-axial RINS is limited by the errors on the rotating axis. A novel inertial measurement unit (IMU) scheme with error compensation for the rotating axis of a fiber optic gyro (FOG) RINS is presented. In the scheme, two pairs of inertial sensors with similar error characteristics are mounted oppositely on the rotating axes to compensate for the sensor errors. Without any change to the rotation cycle, this scheme improves the system's precision and reliability, and also offers redundancy for the system. The results of a 36 h navigation simulation show that the accuracy of the system is improved notably compared with a normal strapdown INS; moreover, the heading accuracy is increased by a factor of 3 compared with a single-axial RINS, and the position accuracy is improved by 1 order of magnitude.
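The error-cancellation idea behind oppositely mounted sensor pairs can be sketched numerically: two sensors sharing a common bias sense opposite rotation signs, so differencing their outputs cancels the bias while retaining the rate. This is an assumed simplified model for illustration, not the paper's IMU scheme; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 3600, 3600)               # one hour, ~1 Hz samples
true_rate = 0.01 * np.sin(2 * np.pi * t / 600)  # illustrative angular rate, deg/s

bias = 0.05                                  # common bias of the matched pair, deg/s
noise = lambda: rng.normal(0, 0.002, t.size)

# Sensor A senses +rate, sensor B is mounted oppositely and senses -rate;
# both see (approximately) the same bias.
meas_a = true_rate + bias + noise()
meas_b = -true_rate + bias + noise()

# Differencing the pair cancels the common bias; halving recovers the rate.
combined = 0.5 * (meas_a - meas_b)

err_single = np.abs(meas_a - true_rate).mean()   # dominated by the bias
err_pair = np.abs(combined - true_rate).mean()   # dominated by random noise
print(err_single, err_pair)
```

In this toy setup the residual error of the pair is set only by the uncorrelated noise, which is why matched error characteristics on the opposed sensors matter: any bias mismatch survives the differencing.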
Energy Technology Data Exchange (ETDEWEB)
Keck, R.-E.
2013-07-15
This thesis describes the further development and validation of the dynamic wake meandering model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling capability of the dynamic wake meandering model to a level where it is sufficiently mature to be applied in industrial applications and for an augmentation of the IEC standard for wind turbine wake modelling. Based on a comparison of the capabilities of the dynamic wake meandering model to the requirements of the wind industry, four areas were identified as high priorities for further research: 1. the turbulence distribution in a single wake; 2. multiple wake deficits and the build-up of turbulence over a row of turbines; 3. the effect of the atmospheric boundary layer on wake turbulence and wake deficit evolution; 4. atmospheric stability effects on wake deficit evolution and meandering. The conducted research is to a large extent based on detailed wake investigations and reference data generated through computational fluid dynamics simulations, where the wind turbine rotor has been represented by an actuator line model. As a consequence, part of the research also targets the performance of the actuator line model when generating wind turbine wakes in the atmospheric boundary layer. Highlights of the conducted research: 1. A description is given for using the dynamic wake meandering model as a standalone flow solver for the velocity and turbulence distribution, and power production, in a wind farm. The performance of the standalone implementation is validated against field data, higher-order computational fluid dynamics models, as well as the most common engineering wake models in the wind industry. 2. The EllipSys3D actuator line model, including the synthetic methods used to model atmospheric boundary layer shear and turbulence, is verified for modelling the evolution of wind
Directory of Open Access Journals (Sweden)
Healey Sean P
2012-10-01
Full Text Available Abstract Background Lidar height data collected by the Geosciences Laser Altimeter System (GLAS) from 2002 to 2008 has the potential to form the basis of a globally consistent sample-based inventory of forest biomass. GLAS lidar return data were collected globally in spatially discrete full waveform “shots,” which have been shown to be strongly correlated with aboveground forest biomass. Relationships observed at spatially coincident field plots may be used to model biomass at all GLAS shots, and well-established methods of model-based inference may then be used to estimate biomass and variance for specific spatial domains. However, the spatial pattern of GLAS acquisition is neither random across the surface of the earth nor is it identifiable with any particular systematic design. Undefined sample properties therefore hinder the use of GLAS in global forest sampling. Results We propose a method of identifying a subset of the GLAS data which can justifiably be treated as a simple random sample in model-based biomass estimation. The relatively uniform spatial distribution and locally arbitrary positioning of the resulting sample is similar to the design used by the US national forest inventory (NFI. We demonstrated model-based estimation using a sample of GLAS data in the US state of California, where our estimate of biomass (211 Mg/hectare) was within the 1.4% standard error of the design-based estimate supplied by the US NFI. The standard error of the GLAS-based estimate was significantly higher than the NFI estimate, although the cost of the GLAS estimate (excluding costs for the satellite itself was almost nothing, compared to at least US$ 10.5 million for the NFI estimate. Conclusions Global application of model-based estimation using GLAS, while demanding significant consolidation of training data, would improve inter-comparability of international biomass estimates by imposing consistent methods and a globally coherent sample frame. The
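The model-based workflow the abstract describes (fit a height-biomass model at field plots, apply it to a quasi-random sample of lidar shots, then estimate the domain mean) can be illustrated on synthetic data. The linear model, all numbers, and the naive standard error below (which ignores model-parameter uncertainty) are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "field plots": lidar waveform height vs measured biomass (Mg/ha).
h_plot = rng.uniform(5, 40, 60)
bio_plot = 6.0 * h_plot + rng.normal(0, 15, 60)   # assumed linear relation

# Fit the height-biomass model at the plots (ordinary least squares).
A = np.column_stack([np.ones_like(h_plot), h_plot])
coef, *_ = np.linalg.lstsq(A, bio_plot, rcond=None)

# Apply the model to a (pseudo-)random sample of lidar shots in the domain.
h_shots = rng.uniform(5, 40, 2000)
bio_pred = coef[0] + coef[1] * h_shots

# Model-based estimate of mean biomass; naive SE over the shot sample only
# (a full model-based variance would also propagate regression uncertainty).
mean_bio = bio_pred.mean()
se = bio_pred.std(ddof=1) / np.sqrt(bio_pred.size)
print(round(mean_bio, 1), round(se, 2))
```

The validity of treating the shots as a simple random sample is exactly the sample-selection question the paper addresses; the sketch simply takes that property as given.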
An analytical system enabling consistent and long-term measurement of atmospheric dimethyl sulfide
Jang, Sehyun; Park, Ki-Tae; Lee, Kitack; Suh, Young-Sang
2016-06-01
We describe here an analytical system capable of continuous measurement of atmospheric dimethylsulfide (DMS) at pptv levels. The system uses customized devices for detector calibration and for DMS trapping and desorption that are controlled using a data acquisition system (based on Visual Basic 6.0/C 6.0) designed to maximize the efficiency of DMS analysis in a highly sensitive pulsed flame photometric detector housed in a gas chromatograph. The fully integrated system, which can sample approximately 6 L of air during a 1-hr sampling, was used to measure the atmospheric DMS mixing ratio over the Atlantic sector of the Arctic Ocean over 3 full annual growth cycles of phytoplankton in 2010, 2014, and 2015, with minimal routine maintenance and interruptions. During the field campaigns, the measured atmospheric DMS mixing ratio varied over a considerable range, from <1.5 pptv to maximum levels of 298 pptv in 2010, 82 pptv in 2014, and 429 pptv in 2015. The operational period covering the 3 full annual growth cycles of phytoplankton showed that the system is suitable for uninterrupted measurement of atmospheric DMS mixing ratios in extreme environments. Moreover, the findings obtained using the system showed it to be useful in identifying ocean DMS source regions and changes in source strength.
The Twente lower extremity model : consistent dynamic simulation of the human locomotor apparatus
Klein Horsman, Martijn Dirk
2007-01-01
Orthopedic interventions such as tendon transfers have shown to be successful in the treatment of gait disorders. Still, in many cases dysfunctions remained or worsened. To assist clinicians, an interactive tool will be useful that allows evaluation of if-then scenarios with respect to treatment methods. Comprehensive musculoskeletal models have shown a high potential to serve as such a tool. By varying anatomical model parameters, alterations in anatomy due to surgery can be implemented. Inv...
Toward a self-consistent, high-resolution absolute plate motion model for the Pacific
Wessel, Paul; Harada, Yasushi; Kroenke, Loren W.
2006-03-01
The hot spot hypothesis postulates that linear volcanic trails form as lithospheric plates move relative to stationary or slowly moving plumes. Given geometry and ages from several trails, one can reconstruct absolute plate motions (APM) that provide valuable information about past and present tectonism, paleogeography, and volcanism. Most APM models have been designed by fitting small circles to coeval volcanic chain segments and determining stage rotation poles, opening angles, and time intervals. Unlike relative plate motion (RPM) models, such APM models suffer from oversimplicity, self-inconsistencies, inadequate fits to data, and lack of rigorous uncertainty estimates; in addition, they work only for fixed hot spots. Newer methods are now available that overcome many of these limitations. We present a technique that provides high-resolution APM models derived from stationary or moving hot spots (given prescribed paths). The simplest model assumes stationary hot spots, and an example of such a model is presented. Observations of geometry and chronology on the Pacific plate appear well explained by this type of model. Because it is a one-plate model, it does not discriminate between hot spot drift or true polar wander as explanations for inferred paleolatitudes from the Emperor chain. Whether there was significant relative motion within the hot spots under the Pacific plate during the last ˜70 m.y. is difficult to quantify, given the paucity and geological uncertainty of age determinations. Evidence in support of plume drift appears limited to the period before the 47 Ma Hawaii-Emperor Bend and, apart from the direct paleolatitude determinations, may have been somewhat exaggerated.
A consistent hamiltonian treatment of the Thirring-Wess and Schwinger model in the covariant gauge
Martinovič, L'ubomír
2014-06-01
We present a unified Hamiltonian treatment of the massless Schwinger model in the Landau gauge and of its non-gauge counterpart, the Thirring-Wess (TW) model. The operator solution of the Dirac equation has the same structure in both models and identifies free fields as the true dynamical degrees of freedom. The coupled boson field equations (Maxwell and Proca, respectively) can also be solved exactly. The Hamiltonian in the Fock representation is derived for the TW model and its diagonalization via a Bogoliubov transformation is suggested. The axial anomaly is derived in both models directly from the operator solution using a hermitian version of the point-splitting regularization. A subtlety of the residual gauge freedom in the covariant gauge is shown to modify the usual definition of the "gauge-invariant" currents. The consequence is that the axial anomaly and the boson mass generation are restricted to the zero-mode sector only. Finally, we discuss quantization of the unphysical gauge-field components in terms of ghost modes in an indefinite-metric space and sketch the next steps within the finite-volume treatment necessary to fully reveal the physical content of the model in our Hamiltonian formulation.
Hachem, Walid; Mestre, Xavier; Najim, Jamal; Vallet, Pascal
2011-01-01
In array processing, a common problem is to estimate the angles of arrival of $K$ deterministic sources impinging on an array of $M$ antennas, from $N$ observations of the source signal, corrupted by Gaussian noise. The problem reduces to estimating a quadratic form (called the "localization function") of a certain projection matrix related to the source signal empirical covariance matrix. Recently, a new subspace estimation method (called "G-MUSIC") has been proposed, in the context where the number of available samples $N$ is of the same order of magnitude as the number of sensors $M$. In this context, the traditional subspace methods tend to fail because the empirical covariance matrix of the observations is a poor estimate of the source signal covariance matrix. The G-MUSIC method is based on a new consistent estimator of the localization function in the regime where $M$ and $N$ tend to $+\infty$ at the same rate. However, the consistency of the angle estimators was not addressed. The purpose of this paper is ...
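For context, the localization function mentioned above can be illustrated with the classical MUSIC pseudospectrum, which is what G-MUSIC corrects in the large-$M$, large-$N$ regime. The sketch below implements plain MUSIC (not the G-MUSIC estimator) for a uniform linear array with half-wavelength spacing; the scenario parameters are invented.

```python
import numpy as np

def music_spectrum(X, K, angles_deg):
    """Classical MUSIC pseudospectrum for a half-wavelength-spaced uniform
    linear array. X: M x N snapshot matrix, K: number of sources."""
    M, N = X.shape
    R = X @ X.conj().T / N                       # empirical covariance
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :M - K]                            # noise subspace
    m = np.arange(M)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * m * np.sin(th))  # steering vector
        # Localization function: inverse squared projection onto noise subspace.
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# One source at 20 degrees, M = 8 antennas, N = 200 snapshots.
rng = np.random.default_rng(2)
M, N, theta = 8, 200, 20.0
a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta)))
s = rng.normal(size=N) + 1j * rng.normal(size=N)
X = np.outer(a, s) + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

grid = np.arange(-90, 90.5, 0.5)
P = music_spectrum(X, K=1, angles_deg=grid)
print(grid[np.argmax(P)])  # peak near 20 degrees
```

When $N$ is comparable to $M$ rather than much larger, the empirical covariance $R$ is a poor estimate and this plain pseudospectrum degrades, which is the regime that motivates the G-MUSIC correction discussed in the abstract.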
Woitke, P.; Pinte, C.; Thi, W.-F.; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.
2015-01-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. We propose new standard dust opacities for disk models, we present a simplified treatment of PAHs sufficient to reproduce the PAH emission features, and we suggest using a simple treatment of dust settling. We roughly adjust parameters to obtain a model that predicts typical Class II T Tauri star continuum and line observations. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63um, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties (large grains...
Consistent parameter fixing in the quark-meson model with vacuum fluctuations
Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael
2016-08-01
We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.
Shell Effect of Superheavy Nuclei in Self-consistent Mean-Field Models
Institute of Scientific and Technical Information of China (English)
REN Zhong-Zhou; TAI Fei; XU Chang; CHEN Ding-Han; ZHANG Hu-Yong; CAI Xiang-Zhou; SHEN Wen-Qing
2004-01-01
We analyze in detail the numerical results of superheavy nuclei in the deformed relativistic mean-field model and the deformed Skyrme-Hartree-Fock model. The common points and differences of both models are systematically compared and discussed. Their consequences for the stability of superheavy nuclei are explored and explained. The theoretical results are compared with new data on superheavy nuclei from GSI and from Dubna, and reasonable agreement is reached. The nuclear shell effect in the superheavy region is analyzed and discussed. The spherical shell effect disappears in some cases due to the appearance of deformation or superdeformation in the ground states of nuclei, where valence nucleons significantly occupy the intruder levels of nuclei. It is shown for the first time that the significant occupation of valence nucleons on the intruder states plays an important role for the ground-state properties of superheavy nuclei. Nuclei are stable in the deformed or superdeformed configurations. We further point out that one cannot obtain the octupole deformation of even-even nuclei in the present relativistic mean-field model with the σ, ω and ρ mesons because there is no parity-violating interaction and the conservation of parity of even-even nuclei is a basic assumption of the present relativistic mean-field model.
Institute of Scientific and Technical Information of China (English)
Mohamed BALAH; Hamdan Naser AL-GHAMEDY
2004-01-01
The paper presents an approach for the formulation of general laminated shells based on a third order shear deformation theory. These shells undergo finite (unlimited in size) rotations and large overall motions but with small strains. A singularity-free parametrization of the rotation field is adopted. The constitutive equations, derived with respect to laminate curvilinear coordinates,are applicable to shell elements with an arbitrary number of orthotropic layers and where the material principal axes can vary from layer to layer. A careful consideration of the consistent linearization procedure pertinent to the proposed parametrization of finite rotations leads to symmetric tangent stiffness matrices. The matrix formulation adopted here makes it possible to implement the present formulation within the framework of the finite element method as a straightforward task.
User dynamics in a Dutch cafeteria system: consistent choices, inconsistent participation
van der Meer, Peter; van Veen, Kees
2009-01-01
Purpose - This paper aims to contribute to the empirical literature on cafeteria systems within employment relations by analysing employees' decisions on whether or not to participate, which employees chose what options and how the factors vary over time. Design/methodology/approach - The approach t
National Research Council Canada - National Science Library
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-01-01
.... Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic...
Modeling of etch profile evolution including wafer charging effects using self consistent ion fluxes
Energy Technology Data Exchange (ETDEWEB)
Hoekstra, R.J.; Kushner, M.J. [Univ. of Illinois, Urbana, IL (United States). Dept. of Electrical and Computer Engineering
1996-12-31
As high-density plasma reactors become more predominant in industry, the need has intensified for computer-aided design tools which address both equipment issues, such as ion flux uniformity onto the wafer, and process issues, such as etch feature profile evolution. A hierarchy of models has been developed to address these issues with the goal of producing a comprehensive plasma processing design capability. The Hybrid Plasma Equipment Model (HPEM) produces ion and neutral densities, and electric fields in the reactor. The Plasma Chemistry Monte Carlo Model (PCMC) determines the angular and energy distributions of ion and neutral fluxes to the wafer using species source functions, time-dependent bulk electric fields, and sheath potentials from the HPEM. These fluxes are then used by the Monte Carlo Feature Profile Model (MCFP) to determine the time evolution of etch feature profiles. Using this hierarchy, the effects of physical modifications of the reactor, such as changing wafer clamps or electrode structures, on etch profiles can be evaluated. The effects of wafer charging on feature evolution are examined by calculating the fields produced by the charge deposited by ions and electrons within the features. The effects of radial variations and nonuniformity in the angular and energy distributions of the reactive fluxes on feature profiles and feature charging will be discussed for p-Si etching in inductively coupled plasmas (ICP) sustained in chlorine gas mixtures. The effects of over- and under-wafer topography on etch profiles will also be discussed.
Application of a Mass-Consistent Wind Model to Chinook Windstorms
1988-06-01
Baraffe, [No Value]; Alibert, Y.; Mera, D.; Chabrier, G.; Beaulieu, J. P.
1998-01-01
We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables (3
Sahoo, A. K.; Pan, M.; Gao, H.; Wood, E. F.; Houser, P. R.; Lettenmaier, D. P.; Pinker, R.; Kummerow, C. D.
2008-12-01
We aim to develop consistent, long-term Earth System Data Records (ESDRs) for the major components (storages and fluxes) of the terrestrial water cycle at a spatial resolution of 0.5 degrees (latitude-longitude) and for the period 1950 to near-present. The resulting ESDRs are intended to provide a consistent basis for estimating the mean state and variability of the land surface water cycle at the spatial scale of the major global river basins. The ESDRs to be produced include a) surface meteorology (precipitation, air temperature, humidity and wind), b) surface downward radiation (solar and longwave) and c) derived and/or assimilated fluxes and storages such as surface soil moisture storage, total basin water storage, snow water equivalent, storage in large lakes, reservoirs, and wetlands, evapotranspiration, and surface runoff. We construct data records for all variables back to 1950, recognizing that the post-satellite data will be of higher quality than pre-satellite (a reasonable compromise given the need for long-term records to define interannual and interdecadal variability of key water cycle variables). A distinguishing feature will be the inclusion of two variables that reflect the massive effects of anthropogenic manipulation of the terrestrial water cycle, specifically reservoir storage and irrigation water use. The overall goal of the project is to develop long-term, consistent ESDRs for terrestrial water cycle states and variables by updating and extending the Pathfinder data set activities previously funded to the investigators, and by making the data set available to the scientific community and data users via a state-of-the-art internet web-portal. The ESDRs will utilize algorithms and methods that are well documented in the peer-reviewed literature. The ESDRs will merge satellite-derived products with predictions of the same variables by LSMs driven by merged satellite and in situ forcing data sets (most notably precipitation), with the constraint that the
Cosmological evolution and Solar System consistency of massive scalar-tensor gravity
de Pirey Saint Alby, Thibaut Arnoulx; Yunes, Nicolás
2017-09-01
The scalar-tensor theory of Damour and Esposito-Farèse recently gained some renewed interest because of its ability to suppress modifications to general relativity in the weak field, while introducing large corrections in the strong field of compact objects through a process called scalarization. A large sector of this theory that allows for scalarization, however, has been shown to be in conflict with Solar System observations when accounting for the cosmological evolution of the scalar field. We here study an extension of this theory by endowing the scalar field with a mass to determine whether this allows the theory to pass Solar System constraints upon cosmological evolution for a larger sector of coupling parameter space. We show that the cosmological scalar field goes first through a quiescent phase, similar to the behavior of a massless field, but then it enters an oscillatory phase, with an amplitude (and frequency) that decays (and grows) exponentially. We further show that after the field enters the oscillatory phase, its effective energy density and pressure are approximately those of dust, as expected from previous cosmological studies. Due to these oscillations, we show that the scalar field cannot be treated as static today on astrophysical scales, and so we use time-dependent perturbation theory to compute the scalar-field-induced modifications to Solar System observables. We find that these modifications are suppressed when the mass of the scalar field and the coupling parameter of the theory are in a wide range, allowing the theory to pass Solar System constraints, while in principle possibly still allowing for scalarization.
Woitke, P.; Min, M.; Pinte, C.; Thi, W.-F.; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.
2016-02-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on the assumptions about the shape of the disk, the dust opacities, dust settling, and polycyclic aromatic hydrocarbons (PAHs). In particular, we propose new standard dust opacities for disk models, we present a simplified treatment of PAHs in radiative equilibrium which is sufficient to reproduce the PAH emission features, and we suggest using a simple yet physically justified treatment of dust settling. We roughly adjust parameters to obtain a model that predicts continuum and line observations that resemble typical multi-wavelength continuum and line observations of Class II T Tauri stars. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all mainstream continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63 μm, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties, i.e. large grains, often needed to fit the SED, have important consequences for disk chemistry and heating/cooling balance, leading to stronger near- to far-IR emission lines in general. Strong dust settling and missing disk flaring have similar effects on continuum observations, but opposite effects on far-IR gas emission lines. PAH molecules can efficiently shield the gas from stellar UV radiation because of their strong absorption and negligible scattering opacities in comparison to evolved dust. The observable millimetre-slope of the SED can become significantly more gentle in the case of cold disk midplanes, which we find regularly in our T Tauri models
Hydronic distribution system computer model
Energy Technology Data Exchange (ETDEWEB)
Andrews, J.W.; Strasser, J.J.
1994-10-01
A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley. This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.
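A hydronic loop submodel of this kind can be caricatured as a lumped energy balance on the circulating water: boiler input, emitter output to the conditioned zone, and pipe losses to unconditioned space. The sketch below is an assumed minimal model for illustration, not the BNL code; all coefficients are invented.

```python
# Minimal lumped boiler + hydronic loop model: forward-Euler energy balance
# on a single loop-water temperature (illustrative, not the BNL submodel).
CP = 4186.0          # J/(kg K), specific heat of water
M_LOOP = 50.0        # kg of water in the loop (assumed)
UA_EMIT = 120.0      # W/K emitter coupling to the zone (assumed)
UA_LOSS = 8.0        # W/K pipe loss to unconditioned space (assumed)
Q_BOILER = 8000.0    # W boiler output when firing (assumed)
T_ZONE, T_AMB = 20.0, 10.0   # deg C

def step(t_loop, firing, dt=1.0):
    """Advance the loop-water temperature one time step of dt seconds."""
    q_in = Q_BOILER if firing else 0.0
    q_emit = UA_EMIT * (t_loop - T_ZONE)     # useful heat delivered to zone
    q_loss = UA_LOSS * (t_loop - T_AMB)      # distribution loss
    t_next = t_loop + dt * (q_in - q_emit - q_loss) / (M_LOOP * CP)
    return t_next, q_emit, q_loss

t_loop, delivered, lost = 40.0, 0.0, 0.0
for _ in range(3600):                        # one hour with the boiler firing
    t_loop, q_e, q_l = step(t_loop, firing=True)
    delivered += q_e
    lost += q_l

efficiency = delivered / (delivered + lost)  # distribution efficiency
print(round(t_loop, 1), round(efficiency, 3))
```

Even this caricature exposes the quantities an internal consistency check would track: the loop temperature must relax toward the steady state implied by the heat balance, and delivered plus lost energy must equal boiler input minus the change in stored energy.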
Advancing Nucleosynthesis in Self-consistent, Multidimensional Models of Core-Collapse Supernovae
Harris, J Austin; Chertkow, Merek A; Bruenn, Stephen W; Lentz, Eric J; Messer, O E Bronson; Mezzacappa, Anthony; Blondin, John M; Marronetti, Pedro; Yakunin, Konstantin N
2014-01-01
We investigate core-collapse supernova (CCSN) nucleosynthesis in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species $\\alpha$-network. Such a simplified network limits the ability to accurately evolve detailed composition, neutronization and the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks in post-processing nucleosynthesis calculations. Limitations such as poor spatial resolution of the tracer particles, estimation of the expansion timescales, and determination of the "mass-cut" at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of these uncertainties on post-processing nucleosynthesis calculations and implications for future models.
Directory of Open Access Journals (Sweden)
Sam Walcott
2015-11-01
Muscle contracts due to ATP-dependent interactions of myosin motors with thin filaments composed of the proteins actin, troponin, and tropomyosin. Contraction is initiated when calcium binds to troponin, which changes conformation and displaces tropomyosin, a filamentous protein that wraps around the actin filament, thereby exposing myosin binding sites on actin. Myosin motors interact with each other indirectly via tropomyosin, since myosin binding to actin locally displaces tropomyosin and thereby facilitates binding of nearby myosin. Defining and modeling this local coupling between myosin motors is an open problem in muscle modeling and, more broadly, a requirement for understanding the connection between muscle contraction at the molecular and macro scales. It is challenging to directly observe this coupling, and such measurements have only recently been made. Analysis of these data suggests that two myosin heads are required to activate the thin filament. This result contrasts with a theoretical model that reproduces several indirect measurements of coupling between myosins while assuming that a single myosin head can activate the thin filament. To understand this apparent discrepancy, we incorporated the model into stochastic simulations of the experiments, which generated simulated data that were then analyzed identically to the experimental measurements. By varying a single parameter, good agreement between simulation and experiment was established. The conclusion that two myosin molecules are required to activate the thin filament arises from an assumption, made during data analysis, that the intensity of the fluorescent tags attached to myosin varies depending on experimental conditions. We provide an alternative explanation that reconciles theory and experiment without assuming that the intensity of the fluorescent tags varies.
Energy regeneration model of self-consistent field of electron beams into electric power
Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.
2016-04-01
We consider physico-mathematical models of the electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation into the user's electric power grid (the onboard spacecraft network). We perform computer simulations validating the high energy efficiency of the studied processes for application in electric power technology, both for power production and for electric power plants and propulsion installations aboard spacecraft.
Flood damage: a model for consistent, complete and multipurpose scenarios
Directory of Open Access Journals (Sweden)
S. Menoni
2016-12-01
implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
A consistent model for leptogenesis, dark matter and the IceCube signal
Energy Technology Data Exchange (ETDEWEB)
Fiorentin, M. Re [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Niro, V. [Departamento de Física Teórica, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Instituto de Física Teórica UAM/CSIC,Calle Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Fornengo, N. [Dipartimento di Fisica, Università di Torino,via P. Giuria, 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Torino,via P. Giuria, 1, 10125 Torino (Italy)
2016-11-04
We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino $N_1$, thus fixing its mass and lifetime, while the production of $N_1$ in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional $SU(2)_R$ interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the $B-L$ asymmetry dominantly produced by the next-to-lightest neutrino $N_2$. Further consequences and predictions of the model are that: the $N_1$ production implies a specific power-law relation between the reheating temperature of the Universe and the vacuum expectation value of the $SU(2)_R$ triplet; leptogenesis imposes a lower bound on the reheating temperature of the Universe at $7\times 10^9$ GeV. Additionally, the model requires a vanishing absolute neutrino mass scale $m_1\simeq 0$.
Consistent negative response of US crops to high temperatures in observations and crop models
Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja
2017-04-01
High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.
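The per-day sensitivity quoted above lends itself to a back-of-the-envelope check. The sketch below is illustrative only and not one of the ensemble's crop models: it assumes the up-to-6% daily losses compound multiplicatively over the hot days of a season.

```python
def rainfed_yield_factor(daily_tmax_c, loss_per_hot_day=0.06, threshold_c=30.0):
    """Fraction of potential yield remaining after high-temperature damage.

    Assumes each day with Tmax above `threshold_c` removes up to
    `loss_per_hot_day` (6% is the upper bound quoted for rainfed maize and
    soybean); the multiplicative form is an illustrative choice, not the
    crop models' actual damage function.
    """
    hot_days = sum(1 for t in daily_tmax_c if t > threshold_c)
    return (1.0 - loss_per_hot_day) ** hot_days

# e.g. a 90-day season with ten days above 30 degrees C
season = [28.0] * 80 + [33.0] * 10
factor = rainfed_yield_factor(season)
```

Under these assumptions ten hot days already cost roughly half the rainfed yield, which is why the abstract's weak response under irrigation is such a telling contrast.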
Jha, Sanjeev Kumar
2013-01-01
A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential not only to bridge issues of spatial resolution in regional and global climate model simulations but also to assist in feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables with available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.
Fioc, M; Fioc, Michel; Rocca-Volmerange, Brigitte
1999-01-01
We provide here the documentation of the new version of the spectral evolution model PEGASE. PEGASE computes synthetic spectra of galaxies in the UV to near-IR range from 0 to 20 Gyr, for a given stellar IMF and evolutionary scenario (star formation law, infall, galactic winds). The radiation emitted by stars from the main sequence to the pre-supernova or white dwarf stage is calculated, as well as the extinction by dust. A simple modeling of the nebular emission (continuum and lines) is also proposed. PEGASE may be used to model starbursts as well as old galaxies. The main improvements of PEGASE.2 relative to PEGASE.1 (Fioc & Rocca-Volmerange 1997) are the following: (1) The stellar evolutionary tracks of the Padova group for metallicities between 0.0001 and 0.1 have been included; (2) The evolution of the metallicity of the interstellar medium (ISM) due to SNII, SNIa and AGB stars is followed. Stars are formed with the same metallicity as the ISM (instead of a solar metallicity in PEGASE.1), providing thu...
The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture
Directory of Open Access Journals (Sweden)
Parna Kazemian
2014-07-01
The identification of the climate of a particular place and the analysis of the climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of the climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general examination of the obtained information, on the one hand, the range of changes is identified, and, on the other hand, their practical uses in the future are selected. In the present paper, the bioclimatic conditions of Bahar city, according to the 29-year statistics of the synoptic station between 1976 and 2005, were examined using the Olgyay and Mahoney indexes. It should be added that, because of the short distance between Bahar and Hamedan, they share a single synoptic station. The results indicate that Bahar city has dominantly cold weather during most of the months. Therefore, based on the implications of each method, the principles of the suggested architectural design can be integrated and improved in order to achieve sustainable development.
A self-consistent impedance method for electromagnetic surface impedance modeling
Thiel, David V.; Mittra, Raj
2001-01-01
A two-dimensional, self-consistent impedance method has been derived and used to calculate the electromagnetic surface impedance above buried objects at very low frequencies. The earth half space is discretized using an array of impedance elements. Inhomogeneities in the complex permittivity of the earth are reflected in variations in these impedance elements. The magnetic field is calculated for each cell in the solution space using a difference equation derived from Faraday's and Ampere's laws. It is necessary to include an air layer above the earth's surface to allow the scattered magnetic field to be calculated at the surface. The source field is applied above the earth's surface as a Dirichlet boundary condition, whereas the Neumann condition is employed at all other boundaries in the solution space. This, in turn, enables users to use both finite and infinite magnetic field sources as excitations. The technique is shown to be computationally efficient and yields reasonably accurate results when applied to a number of one- and two-dimensional earth structures with a known surface impedance distribution.
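The relaxation idea described above can be caricatured in a few lines. The following is a hedged sketch, not the authors' exact difference equation: it treats the magnetic field on each cell of a 2D impedance network as an impedance-weighted average of its four neighbours, with the Dirichlet source on the top row and Neumann conditions on the remaining boundaries.

```python
import numpy as np

def relax_h_field(impedance, h_source=1.0, n_iter=2000):
    """Iterate the magnetic field on a 2D impedance network (toy sketch).

    Each interior cell is updated to the impedance-weighted average of its
    four neighbours; the source field is imposed on the top row (Dirichlet)
    and zero-gradient (Neumann) conditions are applied on the other edges.
    Illustrative caricature only, not the paper's exact scheme.
    """
    ny, nx = impedance.shape
    w = 1.0 / impedance                 # admittance-like weights
    h = np.zeros((ny, nx))
    h[0, :] = h_source                  # applied source above the surface
    for _ in range(n_iter):
        num = (w[:-2, 1:-1] * h[:-2, 1:-1] + w[2:, 1:-1] * h[2:, 1:-1]
               + w[1:-1, :-2] * h[1:-1, :-2] + w[1:-1, 2:] * h[1:-1, 2:])
        den = w[:-2, 1:-1] + w[2:, 1:-1] + w[1:-1, :-2] + w[1:-1, 2:]
        h[1:-1, 1:-1] = num / den       # Jacobi-style interior update
        h[-1, :] = h[-2, :]             # Neumann: bottom
        h[:, 0], h[:, -1] = h[:, 1], h[:, -2]   # Neumann: sides
        h[0, :] = h_source              # re-impose Dirichlet source
    return h

# homogeneous earth: the field relaxes to the source value everywhere
h = relax_h_field(np.ones((12, 12)))
```

With inhomogeneous `impedance` (buried conductive objects), the relaxed field departs from this uniform solution, which is the signature the surface impedance survey exploits.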
Self-consistent physical parameters for 5 intermediate-age SMC stellar clusters from CMD modelling
Dias, Bruno; Barbuy, Beatriz; Santiago, Basilio; Ortolani, Sergio; Balbinot, Eduardo
2013-01-01
Context. Stellar clusters in the Small Magellanic Cloud (SMC) are useful probes to study the chemical and dynamical evolution of this neighbouring dwarf galaxy, enabling inspection of a large period covering over 10 Gyr. Aims. The main goals of this work are the derivation of age, metallicity, distance modulus, reddening, core radius and central density profile for six sample clusters, in order to place them in the context of the Small Cloud evolution. The studied clusters are: AM 3, HW 1, HW 34, HW 40, Lindsay 2, and Lindsay 3, where HW 1, HW 34, and Lindsay 2 are studied for the first time. Methods. Optical Colour-Magnitude Diagrams (V, B-V CMDs) and radial density profiles were built from images obtained with the 4.1m SOAR telescope, reaching V~23. The determination of structural parameters were carried out applying King profile fitting. The other parameters were derived in a self-consistent way by means of isochrone fitting, which uses the likelihood statistics to identify the synthetic CMDs that best rep...
Ohmacht, Martin
2014-09-09
In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
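The interplay of the generation counter and reclaim pointer can be illustrated with a toy bookkeeping class. Everything below (class name, method names, the draining rule) is a hypothetical simplification for illustration, not the patented hardware design.

```python
class GenerationTracker:
    """Toy sketch of generation-based synchronization bookkeeping.

    Requests in flight are tagged with the current generation; a sync
    closes the generation, and the reclaim pointer advances only over
    fully drained generations, so it never passes the generation counter.
    Hypothetical simplification, not the patented scheme itself.
    """

    def __init__(self, n_generations=4):
        self.n = n_generations
        self.in_flight = [0] * n_generations  # per-generation request counts
        self.gen = 0                          # generation counter
        self.reclaim = 0                      # reclaim pointer

    def issue_request(self):
        """Tag a new memory access with the current generation."""
        self.in_flight[self.gen % self.n] += 1

    def complete_request(self, gen):
        """A tagged access has retired."""
        self.in_flight[gen % self.n] -= 1

    def start_sync(self):
        """A synchronization closes the current generation."""
        self.gen += 1

    def try_reclaim(self):
        """Advance the reclaim pointer over drained generations,
        never passing the generation counter."""
        while self.reclaim < self.gen and self.in_flight[self.reclaim % self.n] == 0:
            self.reclaim += 1
        return self.reclaim

t = GenerationTracker()
t.issue_request()            # one access in flight in generation 0
t.start_sync()               # sync request: close generation 0
stalled = t.try_reclaim()    # cannot reclaim: generation 0 not drained
t.complete_request(0)        # the in-flight access retires
done = t.try_reclaim()       # generation 0 drains; sync can complete
```

The invariant `reclaim <= gen` mirrors the abstract's statement that the generation count and reclaim pointer do not pass one another.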
Energy Technology Data Exchange (ETDEWEB)
Ohmacht, Martin
2017-08-15
In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
Directory of Open Access Journals (Sweden)
Shuichiro Yazawa
2014-06-01
The role of surface-protective additives becomes vital when operating conditions become severe and moving components operate in a boundary lubrication regime. After the protective film is slowly removed by rubbing, it can regenerate through the tribochemical reaction of the additives at the contact. However, regeneration is limited once the additives are totally consumed. On the other hand, there are many hard coatings that protect the steel surface from wear. These can enable the functioning of tribological systems, even in adverse lubrication conditions. However, hard coatings usually make the friction coefficient higher because of their high interfacial shear strength. Amongst hard coatings, diamond-like carbon (DLC) is widely used because of its relatively low friction and superior wear resistance. In practice, conventional lubricants that are essentially formulated for steel/steel surfaces are still used for lubricating machine component surfaces provided with protective coatings, such as DLCs, despite the fact that the surface properties of coatings are quite different from those of steel. It is therefore important that the design of additive molecules and their interaction with coatings be reconsidered. The main aim of this paper is to discuss DLC and additive combinations that enable tribofilm formation and effective lubrication of tribological systems.
A self-consistent 3D model of fluctuations in the helium-ionizing background
Davies, Frederick B.; Furlanetto, Steven R.; Dixon, Keri L.
2017-03-01
Large variations in the effective optical depth of the He II Lyα forest have been observed at z ≳ 2.7, but the physical nature of these variations is uncertain: either the Universe is still undergoing the process of He II reionization, or the Universe is highly ionized but the He II-ionizing background fluctuates significantly on large scales. In an effort to build upon our understanding of the latter scenario, we present a novel model for the evolution of ionizing background fluctuations. Previous models have assumed the mean free path of ionizing photons to be spatially uniform, ignoring the dependence of that scale on the local ionization state of the intergalactic medium (IGM). This assumption is reasonable when the mean free path is large compared to the average distance between the primary sources of He II-ionizing photons, ≳ L⋆ quasars. However, when this is no longer the case, the background fluctuations become more severe, and an accurate description of the average propagation of ionizing photons through the IGM requires additionally accounting for the fluctuations in opacity. We demonstrate the importance of this effect by constructing 3D semi-analytic models of the helium-ionizing background from z = 2.5-3.5 that explicitly include a spatially varying mean free path of ionizing photons. The resulting distribution of effective optical depths at large scales in the He II Lyα forest is very similar to the latest observations with HST/COS at 2.5 ≲ z ≲ 3.5.
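The core feedback, a mean free path that depends on the local ionizing background, can be sketched as a 1D toy fixed-point iteration. The power-law index and exponential attenuation below are illustrative assumptions, not the paper's 3D semi-analytic model.

```python
import numpy as np

def self_consistent_background(x_src, lum, x_grid, mfp0, eta=2.0 / 3.0, n_iter=50):
    """Toy 1D background with an opacity-coupled mean free path.

    gamma(x) = sum_j L_j * exp(-|x - x_j| / mfp(x)), with
    mfp = mfp0 * (gamma / <gamma>)**eta, iterated to self-consistency.
    The attenuation form and index eta are illustrative assumptions.
    """
    gamma = np.ones_like(x_grid)
    for _ in range(n_iter):
        # mean free path grows where the background is strong (more ionized IGM)
        mfp = mfp0 * (gamma / gamma.mean()) ** eta
        d = np.abs(x_grid[:, None] - x_src[None, :])
        gamma = (lum[None, :] * np.exp(-d / mfp[:, None])).sum(axis=1)
    return gamma

# single unit-luminosity source at the origin
x_src = np.array([0.0])
lum = np.array([1.0])
x_grid = np.linspace(-5.0, 5.0, 11)
gamma = self_consistent_background(x_src, lum, x_grid, mfp0=1.0)
```

The feedback amplifies contrast: bright regions keep a long mean free path and stay bright, while shadowed regions see even fewer photons, which is qualitatively the mechanism driving the large optical-depth fluctuations described above.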
Consistency of different tropospheric models and mapping functions for precise GNSS processing
Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio
2017-04-01
The TOmographic Model of the IONospheric electron content (TOMION) software implements simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) in emulated real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In the first scenario, only forward processing was activated and the coordinates of the wide-area GNSS network were loosely constrained, without fixing the carrier-phase ambiguities, for both reference and rover receivers. In the second, precise point positioning (PPP) was implemented, iterating with a fixed coordinate set for the second day. Comparisons between the TOMION, IGS and GIPSY estimates have been performed; for the first, IGS clocks and orbits were considered. The agreement with the GIPSY results appears to be 10 times better than with the IGS final ZTD product, despite IGS products having been used in the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP in the worst cases. Moreover, the Vienna mapping function showed in general somewhat better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm for all studied situations, with a slightly better fit for the Niell one. Further improvement could be achieved for such estimations with coefficients for the Vienna mapping function calculated from ray tracing, as well as by integrating comparative meteorological parameters.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, together with some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the law of the iterated logarithm (LIL) for iid partial sums and thus cannot be improved.
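The quasi-likelihood equation whose root the theorem studies can be made concrete for a Poisson-type GLM with log link. The solver below is a standard Newton-Raphson (Fisher scoring) sketch for illustration; the paper concerns the asymptotics of the root, not how to compute it.

```python
import numpy as np

def solve_quasi_likelihood(X, y, n_iter=25):
    """Solve the quasi-likelihood (score) equation
        sum_i (y_i - mu_i) x_i = 0,   mu_i = exp(x_i' beta),
    for a log-link GLM via Newton-Raphson / Fisher scoring.
    Minimal sketch under an assumed Poisson-type variance function.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)             # the quasi-likelihood equation
        info = X.T @ (mu[:, None] * X)     # Fisher information
        beta = beta + np.linalg.solve(info, score)
    return beta

# simulated check: the solution should approach the true parameter
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([0.5, -0.3])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = solve_quasi_likelihood(X, y)
```

The theorem guarantees that, with probability one, such a root exists for all large n and converges to the true parameter at the stated rate.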
Strong consistency of maximum quasi-likelihood estimates in generalized linear models
Institute of Scientific and Technical Information of China (English)
YIN Changming; ZHAO Lincheng
2005-01-01
In a generalized linear model with $q \times 1$ responses, bounded and fixed $p \times q$ regressors $Z_i$ and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^n Z_i Z_i'$, a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution $\beta_n$ for all large sample sizes $n$, which converges to the true regression parameter $\beta_0$. This result is an essential improvement over the relevant results in the literature.
Energy Technology Data Exchange (ETDEWEB)
Pain, J.C. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France)]. E-mail: jean-christophe.pain@cea.fr; Dejonghe, G. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France); Blenski, T. [CEA/DSM/DRECAM/SPAM, Centre d' Etudes de Saclay, 91191 Gif-sur-Yvette Cedex (France)
2006-05-15
We propose a thermodynamically consistent model involving detailed screened ions, described by superconfigurations, in plasmas. In the present work, the electrons, bound and free, are treated quantum-mechanically so that resonances are carefully taken into account in the self-consistent calculation of the electronic structure of each superconfiguration. The procedure is in some sense similar to the one used in Inferno code developed by D.A. Liberman; however, here we perform this calculation in the ion-sphere model for each superconfiguration. The superconfiguration approximation allows rapid calculation of necessary averages over all possible configurations representing excited states of bound electrons. The model enables a fully quantum-mechanical self-consistent calculation of the electronic structure of ions and provides the relevant thermodynamic quantities (e.g., internal energy, Helmholtz free energy and pressure), together with an improved treatment of pressure ionization. It should therefore give a better insight into the impact of plasma effects on photoabsorption spectra.
A New Algorithm for Self-Consistent 3-D Modeling of Collisions in Dusty Debris Disks
Stark, Christopher C
2009-01-01
We present a new "collisional grooming" algorithm that enables us to model images of debris disks where the collision time is less than the Poynting Robertson time for the dominant grain size. Our algorithm uses the output of a collisionless disk simulation to iteratively solve the mass flux equation for the density distribution of a collisional disk containing planets in 3 dimensions. The algorithm can be run on a single processor in ~1 hour. Our preliminary models of disks with resonant ring structures caused by terrestrial mass planets show that the collision rate for background particles in a ring structure is enhanced by a factor of a few compared to the rest of the disk, and that dust grains in or near resonance have even higher collision rates. We show how collisions can alter the morphology of a resonant ring structure by reducing the sharpness of a resonant ring's inner edge and by smearing out azimuthal structure. We implement a simple prescription for particle fragmentation and show how Poynting-Ro...
A consistent model for $\pi N$ transition distribution amplitudes and backward pion electroproduction
Lansberg, J P; Semenov-Tian-Shansky, K; Szymanowski, L
2011-01-01
The extension of the concept of generalized parton distributions leads to the introduction of baryon-to-meson transition distribution amplitudes (TDAs), non-diagonal matrix elements of the nonlocal three-quark operator between a nucleon and a meson state. We present a general framework for modelling nucleon-to-pion ($\pi N$) TDAs. Our main tool is the spectral representation for $\pi N$ TDAs in terms of quadruple distributions. We propose a factorized Ansatz for quadruple distributions with input from the soft-pion theorem for $\pi N$ TDAs. The spectral representation is complemented with a D-term like contribution from the nucleon exchange in the cross channel. We then study backward pion electroproduction in the QCD collinear factorization approach in which the non-perturbative part of the amplitude involves $\pi N$ TDAs. Within our two-component model for $\pi N$ TDAs we update previous leading-twist estimates of the unpolarized cross section. Finally, we compute the transverse target single spin asymmetry as a fu...
A consistent model for leptogenesis, dark matter and the IceCube signal
Fiorentin, M Re; Fornengo, N
2016-01-01
We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino $N_1$, thus fixing its mass and lifetime, while the production of $N_1$ in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional $SU(2)_R$ interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the $B-L$ asymmetry dominantly produced by the next-to-lightest neutrino $N_2$. Further consequences and predictions of the model are that: the $N_1$ production implies a specific power-law relation between the reheating temperature of the Universe and the va...
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
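The revised Monod equation coupling kinetics and thermodynamics can be sketched as follows; the functional form and all parameter names and values below are illustrative placeholders, not the ones calibrated for Methanosarcina barkeri in the study.

```python
import math

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def revised_monod_rate(k_max, s, k_s, dg_cat, dg_atp=50.0, m_atp=1.0,
                       chi=2.0, temp_k=298.15):
    """Respiration rate from a thermodynamically revised Monod law:

        r = k_max * S / (K_S + S)
                  * max(0, 1 - exp((dG_cat + m * dG_ATP) / (chi * R * T)))

    dg_cat: free energy of catabolism (kJ/mol, negative when favorable);
    dg_atp, m_atp, chi: assumed energy-conservation parameters.
    Illustrative sketch, not the paper's calibrated model.
    """
    f_kinetic = s / (k_s + s)  # classical Monod saturation term
    f_thermo = 1.0 - math.exp((dg_cat + m_atp * dg_atp) / (chi * R * temp_k))
    return k_max * f_kinetic * max(0.0, f_thermo)

# far from equilibrium the thermodynamic factor is ~1 (classical Monod);
# near equilibrium the rate shuts off even at saturating substrate
fast = revised_monod_rate(1.0, 1e9, 1.0, dg_cat=-200.0)
stalled = revised_monod_rate(1.0, 1e9, 1.0, dg_cat=-40.0)
```

In an FBA coupling of this kind, such a rate would constrain the substrate uptake flux, replacing the enzyme-kinetic uptake bounds used by traditional dynamic-FBA.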
DEFF Research Database (Denmark)
Staunstrup, Jørgen
1998-01-01
This paper proposes that interface consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces, it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications is possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
Ataman, Meric; Hernandez Gardiol, Daniel F; Fengos, Georgios; Hatzimanikatis, Vassily
2017-07-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and in integrating experimental data, they are often inconsistent across different studies and laboratories due to different criteria and levels of detail, which can compromise transferability of the findings and also integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently reduced" models will help to clarify and facilitate integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models.
On the (in)consistency of a multi-model ensemble of the past 30 years land surface state.
Dutra, Emanuel; Schellekens, Jaap; Beck, Hylke; Balsamo, Gianpaolo
2016-04-01
Global land-surface and hydrological models are a fundamental tool in understanding the land-surface state and evolution, either coupled to atmospheric models for climate and weather predictions or in stand-alone mode. In this study we take a recently developed dataset consisting of stand-alone simulations by 10 global hydrological and land-surface models sharing the same atmospheric forcing for the period 1979-2012 (the eartH2Observe dataset). This multi-model ensemble provides the first freely available dataset at such a spatial/temporal scale that allows for a characterization of multi-model properties such as inter-model consistency and the error-spread relationship. We present a metric for ensemble consistency based on the concept of potential predictability, which can be interpreted as a proxy for multi-model agreement. Initial results point to low inter-model agreement in polar and tropical regions, the latter also present when comparing globally available precipitation datasets. In addition, the discharge ensemble spread around the ensemble mean was compared to the error of the ensemble mean for several large-scale and small-scale basins. This showed a general under-estimation of the ensemble spread, particularly in tropical basins, suggesting that the current dataset lacks a representation of the precipitation uncertainty in the input meteorological data.
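A common way to quantify potential predictability is the fraction of total variance carried by the ensemble-mean signal. The formulation below is a generic assumption for illustration; the study may define its metric differently.

```python
import numpy as np

def potential_predictability(ensemble):
    """Inter-model consistency as potential predictability.

    `ensemble` has shape (n_models, n_times). The metric is the variance
    of the ensemble-mean (common) signal divided by the mean per-model
    variance: values near 1 indicate the models agree, near 0 that they
    diverge. Generic formulation, assumed here for illustration.
    """
    signal_var = np.var(ensemble.mean(axis=0))     # variance of the common signal
    total_var = np.mean(np.var(ensemble, axis=1))  # average per-model variance
    return signal_var / total_var

# identical models agree perfectly -> metric equals 1
t = np.linspace(0.0, 2.0 * np.pi, 100)
agree = np.stack([np.sin(t)] * 5)
pp = potential_predictability(agree)
```

With uncorrelated noise added per model, the metric falls toward 1/n_models, which is the kind of low-agreement signature the abstract reports for polar and tropical regions.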
Directory of Open Access Journals (Sweden)
Meric Ataman
2017-07-01
Full Text Available Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of an organism from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across studies and laboratories because they are built with different criteria and levels of detail, which can compromise the transferability of findings and the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to capture key properties of the genome-scale models and to preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently reduced" models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extended to genome-scale models.
Gauge propagator and physical consistency of the CPT-even part of the standard model extension
Casana, Rodolfo; Ferreira, Manoel M., Jr.; Gomes, Adalto R.; Pinheiro, Paulo R. D.
2009-12-01
In this work, we explicitly evaluate the gauge propagator of the Maxwell theory supplemented by the CPT-even term of the standard model extension. First, we specialize our evaluation to the parity-odd sector of the tensor Wμνρσ, using a parametrization that retains only the three nonbirefringent coefficients. From the poles of the propagator, it is shown that the physical modes of this electrodynamics are stable, noncausal, and unitary. In the sequel, we evaluate the parity-even gauge propagator using a parametrization that allows us to work with only the isotropic nonbirefringent element. In this case, we show that the physical modes of the parity-even sector of the tensor W are causal, stable, and unitary for a limited range of the isotropic coefficient.
Consistency and normality of Huber-Dutter estimators for partial linear model
Institute of Scientific and Technical Information of China (English)
2008-01-01
For the partial linear model Y = Xτβ0 + g0(T) + ε with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, of the scale σ for the errors, and of the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with the rate of convergence n−1/2, and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without scale parameter and with the ordinary least squares estimator.
Consistency and normality of Huber-Dutter estimators for partial linear model
Institute of Scientific and Technical Information of China (English)
TONG XingWei; CUI HengJian; YU Peng
2008-01-01
For the partial linear model Y = Xτβ0 + g0(T) + ε with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, of the scale σ for the errors, and of the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with the rate of convergence n−1/2, and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without scale parameter and with the ordinary least squares estimator.
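The robust-estimation idea behind the Huber-Dutter approach can be illustrated with a minimal sketch. This is generic Huber M-estimation of a location parameter with the scale fixed at 1; the paper's actual estimator jointly estimates β0, the scale σ, and the B-spline nonparametric part, which the sketch does not attempt.

```python
def huber_location(xs, k=1.345, tol=1e-8, iters=100):
    """Iteratively reweighted least squares under Huber's psi function.

    Observations within k of the current estimate get full weight (quadratic
    loss); larger residuals are downweighted (linear loss), which limits the
    influence of outliers compared with ordinary least squares.
    """
    mu = sorted(xs)[len(xs) // 2]  # start at (an) empirical median
    for _ in range(iters):
        w = [1.0 if abs(x - mu) <= k else k / abs(x - mu) for x in xs]
        new = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new - mu) < tol:
            break
        mu = new
    return mu
```

On clean symmetric data the estimate coincides with the mean; with a gross outlier it stays near the bulk of the data rather than being dragged toward the outlier.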
Quesada, José Manuel; Capote, Roberto; Soukhovitski, Efrem S.; Chiba, Satoshi
2016-03-01
An extension for odd-A actinides of a previously derived dispersive coupled-channel optical model potential (OMP) for 238U and 232Th nuclei is presented. It is used to fit simultaneously all the available experimental databases, including neutron strength functions, for nucleon scattering on 232Th, 233,235,238U and 239Pu nuclei. Quasi-elastic (p,n) scattering data on 232Th and 238U to the isobaric analogue states of the target nucleus are also used to constrain the isovector part of the optical potential. For even-even (odd) actinides almost all low-lying collective levels below 1 MeV (0.5 MeV) of excitation energy are coupled. The OMP parameters show a smooth energy dependence and an energy-independent geometry.
Liuzzi, G.; Masiello, G.; Serio, C.; Venafra, S.; Camy-Peyret, C.
2016-10-01
Spectra observed by the Infrared Atmospheric Sounding Interferometer (IASI) have been used to assess both retrievals and the spectral quality and consistency of current forward models and spectroscopic databases for atmospheric gas line and continuum absorption. The analysis has been performed with thousands of observed spectra over sea surface in the Pacific Ocean close to the Mauna Loa (Hawaii) validation station. A simultaneous retrieval for surface temperature, atmospheric temperature, H2O, HDO, O3 profiles and gas average column abundance of CO2, CO, CH4, SO2, N2O, HNO3, NH3, OCS and CF4 has been performed and compared to in situ observations. The retrieval system considers the full IASI spectrum (all 8461 spectral channels in the range 645-2760 cm-1). We have found that the average column amount of atmospheric greenhouse gases can be retrieved with a precision better than 1% in most cases. The analysis of spectral residuals shows that, after inversion, they are generally reduced to within the IASI radiometric noise. However, larger residuals still appear for many of the most abundant gases, namely H2O, CH4 and CO2. The H2O ν2 spectral region is in general warmer (higher radiance) than the observations. The CO2 ν2 and N2O/CO2 ν3 spectral regions now show consistent behavior for channels probing the troposphere. Updates in CH4 spectroscopy do not seem to improve the residuals. The effect of isotopic fractionation of HDO is evident in the 2500-2760 cm-1 region and in the atmospheric window around 1200 cm-1.
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation showing a number of advantages. Especially aggregation into state variables and aggregation of many events per time-step makes Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
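The core of the Poisson Simulation technique, a macro-model in which each flow of the deterministic model is replaced per time step by a Poisson-distributed event count, can be sketched as below. The SIR epidemic structure and parameter values are illustrative, not from the paper, and the Poisson sampler uses Knuth's algorithm because the Python standard library lacks one.

```python
import math
import random

def poisson(lam, rng):
    """Draw from Poisson(lam) via Knuth's multiplication algorithm
    (adequate for the small per-step rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_sir_macro(s, i, beta, gamma, dt, steps, seed=0):
    """Poisson Simulation of a stochastic SIR macro-model: the state is the
    aggregated counts (s, i); per step, infections ~ Poisson(beta*s*i/n*dt)
    and recoveries ~ Poisson(gamma*i*dt), clipped so counts stay nonnegative."""
    rng = random.Random(seed)
    n = s + i
    for _ in range(steps):
        infections = min(poisson(beta * s * i / n * dt, rng), s)
        recoveries = min(poisson(gamma * i * dt, rng), i)
        s -= infections
        i += infections - recoveries
    return s, i
```

Averaging many such runs recovers the deterministic macro-model, while individual runs retain the demographic stochasticity of the corresponding micro-model.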
Towards a Self Consistent Model of the Thermal Structure of the Venus Atmosphere
Limaye, Sanjay; Vandaele, Ann C.; Wilson, Colin
Nearly three decades ago, an international effort led to the Venus International Reference Atmosphere (VIRA), published in 1985 after the significant data returned by the Pioneer Venus Orbiter and Probes and the earlier Venera missions (Kliore et al., 1985). The vertical thermal structure is one component of the reference model; it relied primarily on the three Pioneer Venus Small Probe and Large Probe profiles as well as several hundred temperature profiles retrieved from Pioneer Venus Orbiter radio occultation data collected during 1978-1982. Since then a huge amount of thermal structure data has been obtained from multiple instruments on ESA's Venus Express (VEX) orbiter mission. The VEX data come from retrieval of temperature profiles from SPICAV/SOIR stellar/solar occultations, VeRa radio occultations and from passive remote sensing by the VIRTIS instrument. The results of these three experiments vary in their intrinsic properties: altitude coverage, spatial and temporal sampling, resolution and accuracy. An international team has been formed with support from the International Space Science Institute (Bern, Switzerland) to consider the observations of the Venus atmospheric structure obtained since the data used for the COSPAR Venus International Reference Atmosphere (Kliore et al., 1985). We report on the progress made by comparing the newer data with the VIRA model and also between different experiments where they overlap. Kliore, A.J., V.I. Moroz, and G.M. Keating, Eds. 1985, VIRA: Venus International Reference Atmosphere, Advances in Space Research, Volume 5, Number 11, 307 pages.
Directory of Open Access Journals (Sweden)
Marco Del Giudice
Full Text Available BACKGROUND: Schizophrenia is a mental disorder marked by an evolutionarily puzzling combination of high heritability, reduced reproductive success, and a remarkably stable prevalence. Recently, it has been proposed that sexual selection may be crucially involved in the evolution of schizophrenia. In the sexual selection model (SSM) of schizophrenia and schizotypy, schizophrenia represents the negative extreme of a sexually selected indicator of genetic fitness and condition. Schizotypal personality traits are hypothesized to increase the sensitivity of the fitness indicator, thus conferring mating advantages on high-fitness individuals but increasing the risk of schizophrenia in low-fitness individuals; the advantages of successful schizotypy would be mediated by enhanced courtship-related traits such as verbal creativity. Thus, schizotypy-increasing alleles would be maintained by sexual selection, and could be selectively neutral or even beneficial, at least in some populations. However, most empirical studies find that the reduction in fertility experienced by schizophrenic patients is not compensated for by increased fertility in their unaffected relatives. This finding has been interpreted as indicating strong negative selection on schizotypy-increasing alleles, and as providing evidence against sexual selection on schizotypy. METHODOLOGY: A simple mathematical model is presented, showing that reduced fertility in the families of schizophrenic patients can coexist with selective neutrality of schizotypy-increasing alleles, or even with positive selection on schizotypy in the general population. If the SSM is correct, studies of patients' families can be expected to underestimate the true fertility associated with schizotypy. SIGNIFICANCE: This paper formally demonstrates that reduced fertility in the families of schizophrenic patients does not constitute evidence against sexual selection on schizotypy-increasing alleles. Furthermore, it suggests
Final Scientific/Technical Report "Arc Tube Coating System for Color Consistency"
Energy Technology Data Exchange (ETDEWEB)
Buelow, Roger; Jenson, Chris; Kazenski, Keith
2013-03-21
DOE has enabled the use of coating materials using low cost application methods on light sources to positively affect the output of those sources. The coatings and light source combinations have shown increased lumen output of LED fixtures (1.5%-2.0%), LED arrays (1.4%) and LED powered remote phosphor systems Philips L-Prize lamp (0.9%). We have also demonstrated lifetime enhancements (3000 hrs vs 8000 hrs) and shifting to higher CRI (51 to 65) in metal halide high intensity discharge lamps with metal oxide coatings. The coatings on LEDs and LED products are significant as the market is moving increasingly more towards LED technology. Enhancements in LED performance are demonstrated in this work through the use of available materials and low cost application processes. EFOI used low refractive index fluoropolymers and low cost dipping processes for application of the material to surfaces related to light transmission of LEDs and LED products. Materials included Teflon AF, an amorphous fluorinated polymer and fluorinated acrylic monomers. The DOE SSL Roadmap sets goals for LED performance moving into the future. EFOI's coating technology is a means to shift the performance curve for LEDs. This is not limited to one type of LED, but is relevant across LED technologies. The metal halide work included the use of sol-gel solutions resulting in silicon dioxide and titanium dioxide coatings on the quartz substrates of the metal halide arc tubes. The coatings were applied using low cost dipping processes.
Modelling Railway Interlocking Systems
DEFF Research Database (Denmark)
Lindegaard, Morten Peter; Viuf, P.; Haxthausen, Anne Elisabeth
2000-01-01
In this report we present a model of interlocking systems, and describe how the model may be validated by simulation. Station topologies are modelled by graphs in which the nodes denote track segments, and the edges denote connectivity for train traffic. Points and signals are modelled by annotatio...
Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars
Airapetian, V S; Carpenter, K G
2014-01-01
We present the first magnetohydrodynamic model of stellar chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using alpha Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfven wave dissipation and wave reflection. We have demonstrated that, due to the inclusion of the effects of ion-neutral collisions on resistivity in the magnetized, weakly ionized chromospheric plasma and the use of appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfven waves can explain the non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of alpha Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by ...
Complementarity of DM searches in a consistent simplified model: the case of Z{sup ′}
Energy Technology Data Exchange (ETDEWEB)
Jacques, Thomas [SISSA and INFN,via Bonomea 265, 34136 Trieste (Italy); Katz, Andrey [Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Morgante, Enrico; Racco, Davide [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Rameez, Mohamed [Département de Physique Nucléaire et Corpusculaire,Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Riotto, Antonio [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland)
2016-10-14
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z{sup ′} mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while the heavy DM annihilation is completely dominated by Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.
Complementarity of DM Searches in a Consistent Simplified Model: the Case of Z'
Jacques, Thomas; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio
2016-01-01
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC and direct detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy $Z'$ mediates the interactions between the SM and the DM. We find that in most cases IceCube provides the strongest bounds on this scenario, while the LHC constraints are only meaningful for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun are either $b \\bar b$ or $t \\bar t$, while the heavy DM annihilation is completely dominated by $Zh$ channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast Ice...
Pisnichenko, I A
2007-01-01
The regional climate model prepared from the Eta WS (workstation) forecast model has been integrated over South America with a horizontal resolution of 40 km for the period 1961-1977. The model was forced at its lateral boundaries by the outputs of HadAMP. The HadAMP data represent a simulation of the modern climate at a resolution of about 150 km. To prepare the climate regional model from the Eta forecast model, new blocks were added and multiple modifications and corrections were made to the original model. The climate Eta model was run on the SX-6 supercomputer. The detailed analysis of the results of the dynamical downscaling experiment includes an investigation of the consistency between the regional model and the AGCM, as well as of the ability of the regional model to resolve important features of climate fields on a finer scale than that resolved by the AGCM. In this work we show the results of our investigation of the consistency of the output fields of the Eta model and HadAMP. We have analysed geo...
Bordin, Lorenzo; Creminelli, Paolo; Mirbabayi, Mehrdad; Noreña, Jorge
2017-03-01
We argue that isotropic scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by ζ and ℛ, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed limit consistency relation holds after angular average over the long mode. The correlation functions of a long-wavelength spherical scalar mode with several short scalar or tensor modes are fixed by the scaling behavior of the correlators of short modes, independently of the solid inflation action or dynamics of reheating.
Self consistent model of core formation and the effective metal-silicate partitioning
Ichikawa, H.; Labrosse, S.; Kameyama, M.
2010-12-01
It has long been known that the formation of the core transforms gravitational energy into heat and is able to heat up the whole Earth by about 2000 K. However, the distribution of this energy within the Earth is still debated and depends on the core formation process considered. Iron rain in the surface magma ocean is thought to be the first mechanism of separation for large planets; iron then coalesces to form a pond at the base of the magma ocean [Stevenson 1990]. The time scale of the separation can be estimated from the falling velocity of the iron phase, estimated by numerical simulation [Ichikawa et al., 2010] as ~10 cm/s for iron droplets of centimeter scale. A simple estimate of the metal-silicate partition from the P-T condition at the base of the magma ocean, which in a single-stage model must lie between the peridotite liquidus and solidus, is inconsistent with the Earth's core-mantle partition: the P-T conditions at which silicate equilibrated with metal are beyond the liquidus or solidus temperature by about ~700 K. For example, estimated P-T conditions are 40 GPa at 3750 K for Wade and Wood, 2005, T ≥ 3600 K for Chabot and Agee, 2003, and 35 GPa at T ≥ 3300 K for Gessmann and Rubie, 2000. Meanwhile, Rubie et al., 2003 showed that metal could not equilibrate with silicate at the base of the magma ocean before crystallization of the silicate. On the other hand, metal-silicate equilibration is achieved in only ~5 s in the iron-rain state. Therefore metal and silicate separate and equilibrate simultaneously at the P-T conditions encountered on the way down to the iron pond. Taking into account the release of gravitational energy, the temperature in the middle of the magma ocean would be higher than the liquidus. Estimating the thermal structure during iron-silicate separation requires the development of a planetary-sized calculation model. However, because of the huge disparity of scales between the cm-sized drops and the magma ocean, a direct
Takahashi, Daisuke A
2015-01-01
The matrix-generalized Bogoliubov-de Gennes systems were recently considered by the present author [arXiv:1509.04242], where time-dependent, self-consistent multi-soliton solutions were constructed by the ansatz method. In this paper, restricting the problem to the static case, we exhaustively determine the self-consistent solutions using inverse scattering theory. Solving the gap equation, we rigorously prove that the self-consistent potential must be reflectionless. As a supplementary topic, we elucidate the relation between the stationary self-consistent potentials and the soliton solutions of the matrix nonlinear Schrödinger equation. Asymptotic formulae of multi-soliton solutions for sufficiently isolated solitons are also presented.
Energy Technology Data Exchange (ETDEWEB)
Sahai, N.; Sverjensky, D.A. [Johns Hopkins Univ., Baltimore, MD (United States)
1997-07-01
Systematic analysis of surface titration data from the literature has been performed for ten oxides (anatase, hematite, goethite, rutile, amorphous silica, quartz, magnetite, δ-MnO2, corundum, and γ-alumina) in ten electrolytes (LiNO3, NaNO3, KNO3, CsNO3, LiCl, NaCl, KCl, CsCl, NaI, and NaClO4) over a wide range of ionic strengths (0.001 M-2.9 M) to establish adsorption equilibrium constants and capacitances consistent with the triple-layer model of surface complexation. Experimental data for the same mineral in different electrolytes, and data for a given mineral/electrolyte system from various investigators, have been compared. In this analysis, the surface protonation constants (Ks,1 and Ks,2) were calculated by combining predicted values of ΔpK (log Ks,2 − log Ks,1) with experimental points of zero charge; site densities were obtained from tritium-exchange experiments reported in the literature, and the outer-layer capacitance (C2) was set at 0.2 F·m−2. 98 refs., 8 figs., 27 tabs.
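The step of splitting a point of zero charge into the two surface protonation constants can be sketched directly from the definitions quoted above (ΔpK = log Ks,2 − log Ks,1, with the pHpzc as their midpoint; sign conventions for the protonation reactions vary between authors, so treat this as a schematic):

```python
def protonation_constants(ph_pzc, delta_pk):
    """Given the point of zero charge and a predicted delta pK, return the
    two surface protonation constants (log Ks,1, log Ks,2) such that
    log Ks,2 - log Ks,1 = delta_pk and their midpoint equals ph_pzc."""
    log_k1 = ph_pzc - delta_pk / 2
    log_k2 = ph_pzc + delta_pk / 2
    return log_k1, log_k2
```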
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Gianluca Nardone
2009-04-01
Full Text Available To foster the development of biomasses for solid fuel, it is fundamental to build up a strategy at the local level in which farms and industrial plants co-exist. To such an aim, it is necessary to implement effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behavior and guarantees the industrial investors constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to fix in an eventual contract so as to maintain the fidelity of the farmers. The farmers have greater flexibility since they can choose the most convenient crop; therefore, their fidelity can be obtained by tying the contractual payments to the price of the main alternative to the energy crop. The results of the study seem to indicate the opportunity to fix a purchase price of the raw material linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energy crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for the farmers to grow sorghum. When the
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Rosaria Viscecchia
2011-02-01
Full Text Available To foster the development of biomasses for solid fuel, it is fundamental to build up a strategy at the local level in which farms and industrial plants co-exist. To such an aim, it is necessary to implement effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behavior and guarantees the industrial investors constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to fix in an eventual contract so as to maintain the fidelity of the farmers. The farmers have greater flexibility since they can choose the most convenient crop; therefore, their fidelity can be obtained by tying the contractual payments to the price of the main alternative to the energy crop. The results of the study seem to indicate the opportunity to fix a purchase price of the raw material linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energy crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for the farmers to grow sorghum. When the
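The contractual indexation to the alternative crop can be sketched as a price formula. The proportional (linear through the origin) form and the anchoring at the reported pair (150 €/t wheat, 50 €/t energy crop) are illustrative assumptions; the study only reports a strong correlation, not this exact functional form.

```python
def energy_crop_price(wheat_price, base_wheat=150.0, base_energy=50.0):
    """Hypothetical indexation of the energy-crop purchase price (€/t) to the
    durum wheat price (€/t), anchored at the study's reported break-even pair.
    Assumes proportionality, which is an illustrative simplification."""
    return base_energy * wheat_price / base_wheat
```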
Directory of Open Access Journals (Sweden)
J. Callies
2011-08-01
Full Text Available A simple model of the thermohaline circulation (THC) is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.
This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.
Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
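The stated power laws can be expressed as a scaling formula. The dimensional prefactor is left arbitrary here (it depends on model details not given in the abstract); only the exponents come from the text.

```python
def overturning(kappa, delta_rho, c=1.0):
    """Two-plane-model scaling of overturning strength: a 2/3-power
    dependence on vertical diffusivity kappa and a 1/3-power dependence on
    the imposed meridional surface density difference delta_rho.
    The prefactor c is an unspecified dimensional constant."""
    return c * kappa ** (2 / 3) * delta_rho ** (1 / 3)
```

For example, doubling the vertical diffusivity strengthens the overturning by a factor 2^(2/3) ≈ 1.59, while an eightfold larger density difference only doubles it.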
Directory of Open Access Journals (Sweden)
J. Callies
2012-01-01
Full Text Available A simple model of the thermohaline circulation (THC) is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.
This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.
Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
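The two power laws stated above combine into a single scaling relation for the overturning strength, Psi ~ c * kappa^(2/3) * (delta_rho)^(1/3). A minimal numerical sketch, where the prefactor c and the sample values of kappa and delta_rho are hypothetical and chosen only to exhibit the exponents:

```python
def overturning_strength(kappa, delta_rho, c=1.0):
    """Two-plane-model scaling: Psi = c * kappa^(2/3) * delta_rho^(1/3).
    c is an unspecified dimensional prefactor (hypothetical here)."""
    return c * kappa ** (2.0 / 3.0) * delta_rho ** (1.0 / 3.0)

# Doubling the vertical diffusivity scales Psi by 2^(2/3) (about 1.59),
# while doubling the surface density difference scales it by only 2^(1/3) (about 1.26).
print(overturning_strength(2e-4, 1.0) / overturning_strength(1e-4, 1.0))
print(overturning_strength(1e-4, 2.0) / overturning_strength(1e-4, 1.0))
```

The asymmetry of the exponents is the point: the circulation is roughly twice as sensitive to vertical diffusivity as to the imposed density contrast.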
Schnell, D J; Galavotti, C; Fishbein, M; Chan, D K
1996-01-01
The stages of behavior change model has been used to understand a variety of health behaviors. Since consistent condom use has been promoted as a risk-reduction behavior for prevention of human immunodeficiency virus (HIV) infection, an algorithm for staging the adoption of consistent condom use during vaginal sex was empirically developed using three considerations: HIV prevention efficacy, analogy with work on staging other health-related behaviors, and condom use data from groups at high risk for HIV infection. This algorithm suggests that the adoption of consistent condom use among persons at high risk can be meaningfully measured with the model. However, variations in the algorithm details affect both the interpretation of stages and apportionment of persons across stages.
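A staging algorithm of this kind can be sketched as a simple decision rule over self-reported behavior and intention. The stage names follow the standard transtheoretical model; the specific criteria and the 6-month action/maintenance boundary below are illustrative assumptions, not the exact algorithm developed by Schnell et al.:

```python
def stage_condom_use(always_uses, months_consistent=0,
                     intends_within_30_days=False, intends_within_6_months=False):
    """Hypothetical staging rule in the spirit of the stages-of-change model.

    Thresholds and question wording are illustrative only; the paper notes
    that such details affect both stage interpretation and apportionment.
    """
    if always_uses:
        # 6 months of sustained behavior is the conventional
        # action/maintenance boundary in stages-of-change work
        return "maintenance" if months_consistent >= 6 else "action"
    if intends_within_30_days:
        return "preparation"
    if intends_within_6_months:
        return "contemplation"
    return "precontemplation"

print(stage_condom_use(True, months_consistent=12))   # long-sustained use
print(stage_condom_use(False, intends_within_6_months=True))
```

The abstract's caveat maps directly onto this sketch: changing any threshold (e.g. 6 months vs. 3) reallocates respondents across stages.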
The relativistic consistent angular-momentum projected shell model study of the N=Z nucleus 52Fe
Institute of Scientific and Technical Information of China (English)
LI YanSong; LONG GuiLu
2009-01-01
The relativistic consistent angular-momentum projected shell model (ReCAPS) is used to study the structure and electromagnetic transitions of the low-lying states in the N=Z nucleus 52Fe. The model calculations show reasonably good agreement with the data. The backbending at 12+ is reproduced, and the energy level structure suggests that neutron-proton interactions play an important role.
Institute of Scientific and Technical Information of China (English)
(no author listed)
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) is obtained in QLNM. In an important case this rate is O(n^(-1/2)(log log n)^(1/2)), which is exactly the law-of-the-iterated-logarithm (LIL) rate for partial sums of i.i.d. variables, and thus cannot be improved.
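The quoted rate can be made concrete with a one-line function (the implied constant is unknown and taken as 1 here). The (log log n)^(1/2) factor grows so slowly that the rate is dominated by n^(-1/2):

```python
import math

def mqle_rate(n):
    """The strong-consistency rate O(n^(-1/2) * (log log n)^(1/2)) quoted
    above, up to an unknown constant (taken as 1 for illustration)."""
    return math.sqrt(math.log(math.log(n)) / n)

# Going from n = 10^3 to n = 10^6 shrinks the bound by roughly 27x,
# close to the pure sqrt(1000) ~ 31.6 factor from n^(-1/2) alone.
print(mqle_rate(10 ** 3), mqle_rate(10 ** 6))
```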
2012-01-01
We present a systematic study of the performance of numerical pseudo-atomic orbital basis sets in the calculation of dielectric matrices of extended systems using the self-consistent Sternheimer approach of [F. Giustino et al., Phys. Rev. B 81 (11), 115105 (2010)]. In order to cover a range of systems, from more insulating to more metallic character, we discuss results for the three semiconductors diamond, silicon, and germanium. Dielectric matrices calculated using our method fall within 1-3...
Yeaman, Andrew R. J.
The Fishbein and Ajzen model of attitude-behavior consistency was applied to 56 undergraduates learning to use a microcomputer. Two levels of context for this act were compared: the students' beliefs about themselves, and their beliefs about people in general. The results indicated that students' beliefs were good predictors of their behavioral…
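The core of the Fishbein-Ajzen model is that behavioral intention is a weighted combination of attitude toward the act and the subjective norm (beliefs about what relevant others think). A minimal sketch; the weights below are hypothetical, whereas the study estimates them empirically from the students' responses:

```python
def behavioral_intention(attitude, subjective_norm, w_att=0.5, w_norm=0.5):
    """Fishbein-Ajzen: BI = w1 * A_act + w2 * SN.
    attitude and subjective_norm are scale scores; the weights w1, w2
    are illustrative placeholders for empirically fitted regression weights."""
    return w_att * attitude + w_norm * subjective_norm

# A student with a positive attitude but an indifferent social norm:
print(behavioral_intention(attitude=2.0, subjective_norm=0.0))
```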
DEFF Research Database (Denmark)
Zahid, F.; Paulsson, Magnus; Polizzi, E.;
2005-01-01
We present a transport model for molecular conduction involving an extended Huckel theoretical treatment of the molecular chemistry combined with a nonequilibrium Green's function treatment of quantum transport. The self-consistent potential is approximated by CNDO (complete neglect of differential...
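The NEGF transport machinery the abstract describes reduces, for a toy system, to the Landauer formula T(E) = Tr[Gamma_L G Gamma_R G†]. The sketch below uses a one-dimensional tight-binding chain as a stand-in for the extended-Huckel device Hamiltonian; all parameter values (on-site energy, hopping, chain length) are illustrative assumptions:

```python
import numpy as np

def transmission(E, n=4, eps=0.0, t=-1.0, eta=1e-6):
    """Coherent transmission through an n-site tight-binding chain coupled
    to two semi-infinite 1D leads, via T = Tr[Gamma_L G Gamma_R G+]."""
    # Retarded surface Green's function of a semi-infinite 1D lead (analytic)
    z = E - eps + 1j * eta
    sq = np.sqrt(z * z - 4 * t * t + 0j)
    g = (z - sq) / (2 * t * t)
    if g.imag > 0:                       # retarded branch has Im g <= 0
        g = (z + sq) / (2 * t * t)
    sigma = t * t * g                    # lead self-energy on each contact site
    # Device Hamiltonian: tridiagonal tight-binding chain
    H = eps * np.eye(n) + t * (np.eye(n, k=1) + np.eye(n, k=-1))
    SL = np.zeros((n, n), complex); SL[0, 0] = sigma
    SR = np.zeros((n, n), complex); SR[-1, -1] = sigma
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - SL - SR)
    GL = 1j * (SL - SL.conj().T)         # broadening: Gamma = i(Sigma - Sigma+)
    GR = 1j * (SR - SR.conj().T)
    return float(np.trace(GL @ G @ GR @ G.conj().T).real)

print(transmission(0.0))   # inside the band a perfect chain transmits fully
print(transmission(3.0))   # outside the band (|E| > 2|t|) transmission vanishes
```

The self-consistent CNDO potential of the actual model would enter by updating H with the converged charge density; that loop is omitted here.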
Sean P. Healey; Paul L. Patterson; Sassan S. Saatchi; Michael A. Lefsky; Andrew J. Lister; Elizabeth A. Freeman
2012-01-01
Lidar height data collected by the Geoscience Laser Altimeter System (GLAS) from 2002 to 2008 have the potential to form the basis of a globally consistent, sample-based inventory of forest biomass. GLAS lidar return data were collected globally in spatially discrete full-waveform "shots," which have been shown to be strongly correlated with aboveground forest...
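The calibration step implied by "strongly correlated" is typically a power-law allometry fitted in log space, after which a sample-based inventory averages predictions over the lidar shots. The sketch below uses entirely synthetic heights and biomass values (a hypothetical allometry biomass = 0.8 * height^1.5 with lognormal scatter), not real GLAS or plot data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for GLAS shots: canopy heights (m) and plot
# biomass (Mg/ha) from a hypothetical allometry with lognormal scatter
height = rng.uniform(5.0, 40.0, 200)
biomass = 0.8 * height ** 1.5 * np.exp(rng.normal(0.0, 0.1, 200))

# Calibrate the power law in log space: log(biomass) = log(a) + b*log(height)
b, log_a = np.polyfit(np.log(height), np.log(biomass), 1)
a = np.exp(log_a)

def predict_biomass(h):
    """Biomass predicted from a lidar height metric via the fitted allometry."""
    return a * np.asarray(h) ** b

# Sample-based inventory: average the model predictions over the shots
mean_biomass = predict_biomass(height).mean()
print(a, b, mean_biomass)
```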
Béghin, Christian
2015-02-01
This model is worked out within the framework of physical mechanisms proposed in previous studies to account for the generation and observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe through Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface between the impacting magnetospheric plasma and Titan's ionosphere. The strongest induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere, from the ionopause down to the moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be enabled by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. A first source of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced in the buried water ocean. The amplitude spectrum of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with the Huygens measurements of the wave-field strength. Pending future in-situ exploration of Titan's lower atmosphere and surface, the Huygens data remain the only experimental means available for constraining the proposed model.
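Two of the abstract's quantities admit quick order-of-magnitude checks: the magnetic pressure of the ~0.3 nT induced field, and the eigenfrequencies of an ideal (lossless, thin-shell) Schumann cavity of Titan's radius. Note the ideal-cavity formula is only a reference point; the observed Titan SR is explicitly atypical and need not match it:

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
R_TITAN = 2575e3              # Titan radius, m
C = 2.998e8                   # speed of light, m/s

# Magnetic pressure of the ~0.3 nT field quoted in the abstract
B = 0.3e-9                    # T
p_mag = B ** 2 / (2 * MU0)    # Pa -- tiny, hence the delicate force balance

def schumann_ideal(n, radius=R_TITAN):
    """Eigenfrequencies f_n = c / (2*pi*R) * sqrt(n*(n+1)) of an ideal
    lossless cavity; a lossy real cavity shifts these values."""
    return C / (2 * math.pi * radius) * math.sqrt(n * (n + 1))

print(p_mag)                  # magnetic pressure of a 0.3 nT field
print(schumann_ideal(1))      # fundamental mode of an ideal Titan-sized cavity
```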