Chip Multithreaded Consistency Model
Institute of Scientific and Technical Information of China (English)
Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang
2008-01-01
Multithreading is the developing trend of high-performance processors, and the memory consistency model is essential to the correctness, performance, and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restrictions imposed on memory event ordering by chip multithreaded consistency are presented and formalized. Using the notion of the critical cycle introduced by Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the correctness criterion of the sequential consistency model. The chip multithreaded consistency model provides a way of achieving higher performance than sequential consistency while ensuring software compatibility: the execution result on a multithreaded processor is the same as the execution result on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed. The Godson-2 SMT processor supports the chip multithreaded consistency model correctly through an exception scheme based on a per-thread sequential memory access queue.
Consistency of non-flat $\Lambda$CDM model with the new result from BOSS
Kumar, Suresh
2015-01-01
Using 137,562 quasars in the redshift range $2.1\leq z\leq3.5$ from Data Release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS)-III, the BOSS-SDSS collaboration estimated the expansion rate of the Universe to be $H(z=2.34)=222\pm7$ km/s/Mpc, and reported that this value is in tension with the predictions of the flat $\Lambda$CDM model at around the 2.5$\sigma$ level. In this letter, we briefly describe some attempts made in the literature to relieve the tension, and show that the tension can naturally be alleviated in a non-flat $\Lambda$CDM model with positive curvature. However, this idea conflicts with the inflation paradigm, which predicts an almost spatially flat Universe. Nevertheless, the theoretical consistency of the non-flat $\Lambda$CDM model with the new result from BOSS deserves the attention of the community.
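The curvature effect described above follows directly from the Friedmann equation, $H(z) = H_0\sqrt{\Omega_m(1+z)^3 + \Omega_k(1+z)^2 + \Omega_\Lambda}$. The sketch below uses assumed, illustrative parameter values ($H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$, and $\Omega_k = -0.1$ for the positively curved case), not the paper's fitted values; it shows that positive spatial curvature lowers the predicted $H(z=2.34)$ toward the BOSS measurement:

```python
import math

def hubble(z, h0=70.0, omega_m=0.3, omega_k=0.0):
    """Friedmann equation for LambdaCDM: H(z) in km/s/Mpc.
    omega_lambda is fixed by the closure relation 1 - omega_m - omega_k."""
    omega_lambda = 1.0 - omega_m - omega_k
    return h0 * math.sqrt(omega_m * (1 + z) ** 3
                          + omega_k * (1 + z) ** 2
                          + omega_lambda)

h_flat = hubble(2.34)                  # flat model: ~241 km/s/Mpc
h_closed = hubble(2.34, omega_k=-0.1)  # positive curvature: ~231 km/s/Mpc
print(round(h_flat, 1), round(h_closed, 1))
```

With these illustrative parameters the flat model sits far above the measured $222\pm7$ km/s/Mpc, while the positively curved model moves noticeably closer, which is the qualitative point of the letter.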
Saro, A.; De Lucia, G.; Borgani, S.; Dolag, K.
2010-08-01
We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by smoothed particle hydrodynamics (SPH) simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists of instantaneously transforming any available cold gas into stars, while neglecting any source of energy feedback. This simplified comparison is thus not meant to be confronted with observational data; rather, it is aimed at understanding the level of agreement, at the stripped-down level considered, between two techniques that are widely used to model galaxy formation in a cosmological framework and that have complementary advantages and disadvantages. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: (i) the star formation histories of the brightest cluster galaxies (BCGs) from the SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; (ii) while all stars associated with the BCG were formed in its progenitors in the SAM used here, this holds true only for half of the final BCG stellar mass in the SPH simulation, the remaining half being contributed by tidal stripping of stars from the diffuse stellar component associated with galaxies accreted onto the cluster halo; (iii) SPH satellites can lose up to 90 per cent of the stellar mass they had at the time of accretion, due to tidal stripping, a process not included in the SAM used in this paper; (iv) in the SPH simulation, significant cooling occurs in the most massive satellite galaxies, and this lasts for up to 1 Gyr after accretion. This physical process is...
Gas cooling in semi-analytic models and SPH simulations: are results consistent?
Saro, A; Borgani, S; Dolag, K
2010-01-01
We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical SPH simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists of instantaneously transforming any available cold gas into stars, while neglecting any source of energy feedback. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: a) the star formation histories of the brightest cluster galaxies (BCGs) from the SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; b) while all stars associated with the BCG were formed in its progenitors i...
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models are defined in natural language, so verification of the consistency of these diagrams is needed in order to identify requirement errors at an early stage of the development process. This verification is difficult owing to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g., a Business Context Diagram). Our method can therefore be used to check the practicability (feasibility) of software architecture models.
Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.
2009-01-01
Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large-scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, the RC, and the plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of magnetosphere-ionosphere coupling. Additionally, a self-consistent, first-principles description of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, the effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.
Self-consistent triaxial models
Sanders, Jason L
2015-01-01
We present self-consistent triaxial stellar systems that have analytic distribution functions (DFs) expressed in terms of the actions. These provide triaxial density profiles with cores or cusps at the centre. They are the first self-consistent triaxial models with analytic DFs suitable for modelling giant ellipticals and dark haloes. Specifically, we study triaxial models that reproduce the Hernquist profile from Williams & Evans (2015), as well as flattened isochrones of the form proposed by Binney (2014). We explore the kinematics and orbital structure of these models in some detail. The models typically become more radially anisotropic on moving outwards, with velocity ellipsoids aligned in Cartesian coordinates in the centre and in spherical polar coordinates in the outer parts. In projection, the ellipticity of the isophotes and the position angle of the major axis of our models generally change with radius. A natural application is therefore to elliptical galaxies that exhibit isophote twisting....
Consistent ranking of volatility models
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2006-01-01
We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can result in an inferior model being chosen as "best" with a probability that converges to one as the sample size increases. We document the practical relevance of this problem in an empirical application and by simulation experiments. Our results provide an additional argument for using the realized variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models; similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable.
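The mechanism behind this inconsistency can be sketched in a few lines. The toy simulation below (our own illustration, not the paper's setup) uses a constant true conditional variance, mean absolute error as an example of an evaluation criterion that is not robust to proxy noise, and the squared return as the volatility proxy; the proxy-based evaluation then prefers a deliberately inferior forecast:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_var = 1.0                      # true conditional variance (constant here)
returns = rng.standard_normal(n)    # r_t with Var(r_t) = 1
proxy = returns ** 2                # squared return: a noisy, unbiased proxy

# Two "models": A forecasts the truth, B is deliberately inferior.
forecast_a, forecast_b = 1.0, 0.45

def mae(forecast, target):
    """Mean absolute error of a constant variance forecast."""
    return float(np.mean(np.abs(forecast - target)))

# Evaluated against the true variance, model A wins...
assert mae(forecast_a, true_var) < mae(forecast_b, true_var)
# ...but against the squared-return proxy the ranking flips, because the
# MAE minimiser is the median of r_t^2 (about 0.455), not its mean.
assert mae(forecast_a, proxy) > mae(forecast_b, proxy)
```

A robust criterion such as MSE on the variance, or a less noisy proxy such as realized variance, avoids this reversal, which is the paper's practical recommendation.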
Energy Technology Data Exchange (ETDEWEB)
Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Riley, W. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-09-03
In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the January, March, and April 2015 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.
Energy Technology Data Exchange (ETDEWEB)
Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-07-07
In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the August and October 2014 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.
Entropy-based consistent model driven architecture
Niepostyn, Stanisław Jerzy
2016-09-01
A description of software architecture is a plan for constructing the IT system, so any gaps in the architecture affect the overall success of the entire project. Most definitions describe software architecture as a set of views which are mutually unrelated, and hence potentially inconsistent. Software architecture completeness is also often described ambiguously. As a result, most methods of building IT systems contain many gaps and ambiguities, presenting obstacles to the automation of software construction. In this article the consistency and completeness of a software architecture are defined mathematically, based on a calculation of the entropy of the architecture description. Following this approach, we also propose a method for automatic verification of the consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of ongoing modelling work for the start of IT system construction, and even assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of this approach is that it facilitates the preparation of a complete and consistent software architecture more effectively, and enables assessing and monitoring the status of ongoing modelling work. We demonstrate this with a few industry examples of IT system designs.
Nie, Guanjun; Shan, Yehua
2014-09-01
Quartz c-axis fabrics are widely used to determine the shear plane in ductile shear zones, based upon the assumption that the shear plane is perpendicular both to the central segment of the quartz c-axis crossed girdle and to the single girdle. In this paper the development of quartz c-axis fabrics under combined simple and pure shear deformation is simulated using the visco-plastic self-consistent (VPSC) model in order to re-examine this assumption. In the case of no or weak dynamic recrystallization, the simulated crossed girdles have a central segment perpendicular or nearly perpendicular to the maximum principal finite strain direction (X) and the XY finite strain plane, and at a variable angle to the imposed kinematic framework that depends on the modeled flow vorticity and finite strain. These crossed girdles have a skeleton symmetrical with respect to the finite strain axes, regardless of the bulk strain and the kinematic vorticity, and rotate with the sense of shear as the bulk strain ratio increases. The larger the vorticity number, the more asymmetrical their legs tend to be. In the case of strong dynamic recrystallization and large bulk strain, under simple shear the crossed girdles switch into single girdles, sub-perpendicular to the shear plane, by losing their weak legs. The numerical results of our models do not confirm the above-mentioned assumption.
Energy Technology Data Exchange (ETDEWEB)
Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States)
2015-12-01
In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for 14 simulated high level waste glasses fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions. The measured chemical composition data are reported and compared with the targeted values for each component of each glass. All of the measured sums of oxides for the study glasses fell within the interval of 96.9 to 100.8 wt %, indicating recovery of all components. Comparisons of the targeted and measured chemical compositions showed that the measured values for the glasses met the targeted concentrations within 10% for those components present at more than 5 wt %. The PCT results were normalized to both the targeted and measured compositions of the study glasses. Several of the glasses exhibited increases in normalized concentrations (NC_i) after the canister centerline cooling (CCC) heat treatment. Five of the glasses, after the CCC heat treatment, had NC_B values that exceeded that of the Environmental Assessment (EA) benchmark glass. These results can be combined with additional characterization, including X-ray diffraction, to determine the cause of the higher release rates.
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function, as well as to estimate thermodynamically feasible values for the parameters of new...
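The kind of relationship such a calibration must enforce can be illustrated with the Wegscheider cycle condition: around any closed reaction cycle, the product of forward rate constants must equal the product of reverse rate constants. The sketch below (a minimal illustration with hypothetical rate constants for a three-reaction cycle, not the paper's optimization formulation) repairs an infeasible parameter set by spreading the violation evenly in log space:

```python
import numpy as np

# Hypothetical rate constants for the cycle A <-> B <-> C <-> A.
k_fwd = np.array([2.0, 5.0, 1.0])
k_rev = np.array([1.0, 4.0, 3.0])

def cycle_gap(kf, kr):
    """Log of the Wegscheider ratio; zero iff thermodynamically consistent."""
    return float(np.sum(np.log(kf)) - np.sum(np.log(kr)))

gap = cycle_gap(k_fwd, k_rev)   # nonzero: this parameter set is infeasible

# Minimal-change fix: distribute the log-gap equally over all six constants
# (an equal-weight least-squares projection in log space).
k_fwd_adj = k_fwd * np.exp(-gap / 6)
k_rev_adj = k_rev * np.exp(+gap / 6)
assert abs(cycle_gap(k_fwd_adj, k_rev_adj)) < 1e-12
```

A full TCMC-style calibration would instead minimize the distance to the estimated parameters subject to such constraints (plus any non-thermodynamic ones), but the feasibility condition being enforced is the same.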
Modeling and Testing Legacy Data Consistency Requirements
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard
2003-01-01
An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking of legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...
A Framework of Memory Consistency Models
Institute of Scientific and Technical Information of China (English)
胡伟武; 施巍松; 等
1998-01-01
Previous descriptions of memory consistency models in shared-memory multiprocessor systems are mainly expressed as constraints on memory access event ordering and hence are hardware-centric. This paper presents a framework of memory consistency models which describes the memory consistency model at the behavior level. Based on the understanding that the behavior of an execution is determined by the execution order of conflicting accesses, a memory consistency model is defined as an interprocessor synchronization mechanism which orders the execution of operations from different processors. The synchronization order of an execution under a given consistency model is also defined. The synchronization order, together with the program order, determines the behavior of an execution. This paper also presents criteria for correct programs and correct implementations of consistency models. Regarding an implementation of a consistency model as a set of memory event ordering constraints, this paper provides a method to prove the correctness of consistency model implementations, and the correctness of the lock-based cache coherence protocol is proved with this method.
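The interplay of program order and execution order that such frameworks formalize can be illustrated with the classic store-buffering litmus test. The sketch below (our own illustration, not the paper's formalism) enumerates every interleaving that preserves each thread's program order and shows that the outcome r1 = r2 = 0 never occurs under sequential consistency, although relaxed models with store buffers do permit it:

```python
from itertools import permutations

# Store-buffering litmus test (x and y start at 0):
#   T1: x = 1; r1 = y        T2: y = 1; r2 = x
T1 = [("store", "x", None), ("load", "y", "r1")]
T2 = [("store", "y", None), ("load", "x", "r2")]

def run(schedule):
    """Execute one sequentially consistent interleaving."""
    mem = {"x": 0, "y": 0}
    regs = {}
    for kind, var, reg in schedule:
        if kind == "store":
            mem[var] = 1
        else:
            regs[reg] = mem[var]
    return regs["r1"], regs["r2"]

outcomes = set()
for order in set(permutations([0, 0, 1, 1])):   # which thread issues next
    iters = [iter(T1), iter(T2)]                # program order per thread
    schedule = [next(iters[t]) for t in order]
    outcomes.add(run(schedule))

assert (0, 0) not in outcomes   # forbidden under sequential consistency
print(sorted(outcomes))         # [(0, 1), (1, 0), (1, 1)]
```

Under a weaker model, a store could be delayed in a per-processor buffer past a later load, so (0, 0) becomes observable; detecting such differences is exactly what behavior-level definitions of consistency models are for.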
Kolokolova, L
2009-01-01
The most successful model of comet dust presents comet particles as aggregates of submicron grains. It qualitatively explains the spectral and angular change in comet brightness and polarization and is consistent with the thermal infrared data and with the composition of comet dust obtained in situ for comet 1P/Halley. However, it has some difficulty providing a quantitative fit to the observational data. Here we present a model that considers comet dust as a mixture of aggregates and compact particles. The model is based on the Giotto and Stardust mission findings that both aggregates (made mainly of organics, silicates, and carbon) and solid silicate particles are present in comet dust. We simulate aggregates as Ballistic Cluster-Cluster Aggregates (BCCA) and compact particles as polydisperse spheroids with some distribution of the aspect ratio. The particles follow a power-law size distribution with power -3, close to the one obtained for comet dust in situ, at ...
Structural Consistency: Enabling XML Keyword Search to Eliminate Spurious Results Consistently
Lee, Ki-Hoon; Han, Wook-Shin; Kim, Min-Soo
2009-01-01
XML keyword search is a user-friendly way to query XML data using only keywords. In XML keyword search, to achieve high precision without sacrificing recall, it is important to remove spurious results not intended by the user. Efforts to eliminate spurious results have enjoyed some success using the concept of the LCA or its variants, SLCA and MLCA. However, existing methods can still return many spurious results. The fundamental cause is that existing methods try to eliminate spurious results locally, without global examination of all the query results; accordingly, some spurious results are not consistently eliminated. In this paper, we propose a novel keyword search method that removes spurious results consistently by exploiting the new concept of structural consistency.
A self-consistent Maltsev pulse model
Buneman, O.
1985-04-01
A self-consistent model for an electron pulse propagating through a plasma is presented. In this model, the charge imbalance between plasma ions, plasma electrons and pulse electrons creates the travelling potential well in which the pulse electrons are trapped.
Consistent quadrupole-octupole collective model
Dobrowolski, A.; Mazurek, K.; Góźdź, A.
2016-11-01
Within this work we present a consistent approach to quadrupole-octupole collective vibrations coupled with the rotational motion. A realistic collective Hamiltonian with variable mass-parameter tensor and potential obtained through the macroscopic-microscopic Strutinsky-like method with particle-number-projected BCS (Bardeen-Cooper-Schrieffer) approach in full vibrational and rotational, nine-dimensional collective space is diagonalized in the basis of projected harmonic oscillator eigensolutions. This orthogonal basis of zero-, one-, two-, and three-phonon oscillator-like functions in vibrational part, coupled with the corresponding Wigner function is, in addition, symmetrized with respect to the so-called symmetrization group, appropriate to the collective space of the model. In the present model it is D4 group acting in the body-fixed frame. This symmetrization procedure is applied in order to provide the uniqueness of the Hamiltonian eigensolutions with respect to the laboratory coordinate system. The symmetrization is obtained using the projection onto the irreducible representation technique. The model generates the quadrupole ground-state spectrum as well as the lowest negative-parity spectrum in 156Gd nucleus. The interband and intraband B (E 1 ) and B (E 2 ) reduced transition probabilities are also calculated within those bands and compared with the recent experimental results for this nucleus. Such a collective approach is helpful in searching for the fingerprints of the possible high-rank symmetries (e.g., octahedral and tetrahedral) in nuclear collective bands.
Self-Consistent Asset Pricing Models
Malevergne, Y
2006-01-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value $\alpha_i$ at the origin between an asset $i$'s return and the proxy's return. Self-consistency also introduces "orthogonality" and "normality" conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy por...
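One consequence of self-consistency is easy to verify numerically: if the market return is exactly the weighted sum of the asset returns, then the weighted average of in-sample OLS betas is exactly 1 and the weighted average of alphas is exactly 0. A minimal sketch with hypothetical weights and simulated returns (an illustration of the orthogonality conditions, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_obs = 5, 1_000
returns = rng.normal(0.001, 0.02, size=(n_obs, n_assets))
weights = np.array([0.40, 0.25, 0.15, 0.12, 0.08])  # hypothetical cap weights

market = returns @ weights   # self-consistency: market = weighted assets

betas = np.empty(n_assets)
alphas = np.empty(n_assets)
for i in range(n_assets):
    cov = np.cov(returns[:, i], market)       # 2x2 sample covariance
    betas[i] = cov[0, 1] / cov[1, 1]          # OLS slope on the market
    alphas[i] = returns[:, i].mean() - betas[i] * market.mean()

# Orthogonality conditions induced by self-consistency (hold exactly):
assert abs(weights @ betas - 1.0) < 1e-8
assert abs(weights @ alphas) < 1e-8
```

Because the identities hold by construction whenever the proxy equals the true market portfolio, any systematic deviation observed with a real proxy signals exactly the proxy mismatch the abstract describes.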
Self-consistent model of fermions
Yershov, V N
2002-01-01
We discuss a composite model of fermions based on three-flavoured preons. We show that the opposite character of the Coulomb and strong interactions between these preons leads to the formation of complex structures reproducing three generations of quarks and leptons with all their quantum numbers and masses. The model is self-consistent (it uses no input parameters). Nevertheless, the masses of the generated structures match the experimental values.
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height and the associated wind velocity, current velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave heights from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...
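As a stripped-down illustration of annual-maximum modelling (not the paper's joint, directional model, and using a simple method-of-moments fit rather than Maximum Likelihood), the sketch below fits a Gumbel distribution to synthetic annual-maximum significant wave heights and evaluates a return level of the kind used in design:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
# Synthetic annual-maximum significant wave heights (m), Gumbel-distributed
# with hypothetical location mu = 6.0 m and scale beta = 0.8 m.
hs_max = 6.0 - 0.8 * np.log(-np.log(rng.uniform(size=50)))

# Method-of-moments Gumbel fit (Euler-Mascheroni constant for the mean).
EULER_GAMMA = 0.5772156649
beta_hat = hs_max.std(ddof=1) * math.sqrt(6) / math.pi
mu_hat = hs_max.mean() - EULER_GAMMA * beta_hat

def return_level(T):
    """Significant wave height exceeded on average once every T years."""
    return mu_hat - beta_hat * math.log(-math.log(1 - 1 / T))

print(round(return_level(100), 2))   # e.g. a 100-year design wave height
```

The paper's contribution lies beyond this marginal fit: it couples the four variables, quantifies statistical uncertainty, and ties the directional distributions to the omnidirectional one, but each directional sector still rests on an extreme-value marginal like the one sketched here.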
Developing consistent pronunciation models for phonemic variants
CSIR Research Space (South Africa)
Davel, M
2006-09-01
... from a lexicon containing variants. In this paper we address both these issues by creating 'pseudo-phonemes' associated with sets of 'generation restriction rules' to model those pronunciations that are consistently realised as two or more...
Are there consistent models giving observable NSI ?
Martinez, Enrique Fernandez
2013-01-01
While the existing direct bounds on neutrino NSI are rather weak, of order $10^{-1}$ for propagation and $10^{-2}$ for production and detection, the close connection, through gauge invariance, between these interactions and new interactions affecting the better-constrained charged lepton sector makes these bounds hard to saturate in realistic models. Indeed, Standard Model extensions leading to neutrino NSI typically imply constraints at the $10^{-3}$ level. The question of whether consistent models leading to observable neutrino NSI exist naturally arises, and was discussed in a dedicated session at NUFACT 11. Here we summarize that discussion.
Chen, Si-Guang; Stradins, Paul; Gregg, Brian A
2005-07-21
An in-depth study of n-type doping in a crystalline perylene diimide organic semiconductor (PPEEB) reveals that electrostatic attractions between the dopant electron and its conjugate dopant cation cause the free carrier density to be much lower than the doping density. Measurements of the dark currents as a function of field, doping density, electrode spacing, and temperature are reported along with preliminary Hall-effect measurements. The activation energy of the current, E_aJ, decreases with increasing field and with increasing dopant density, n_d. It is the measured change in E_aJ with n_d that accounts primarily for the variations between PPEEB films; the two adjustable parameters employed to fit the current-voltage data proved to be almost constants, independent of n_d and temperature. The free electron density and the electron mobility are nonlinearly coupled through their shared dependences on both field and temperature. The data are fit to a modified Poole-Frenkel-like model that is shown to be valid for three important electronic processes in organic (excitonic) semiconductors: excitonic effects, doping, and transport. At room temperature, the electron mobility in PPEEB films is estimated to be 0.3 cm^2/(V s); the fitted value of the mobility for an ideal PPEEB crystal is 3.4 +/- 2.7 cm^2/(V s). The modified Poole-Frenkel factor that describes the field dependence of the current is (2 +/- 1) x 10^-4 eV (cm/V)^1/2. The analytical model is surprisingly accurate for a system that would require a coupled set of nonlinear tensor equations to describe it precisely. Being based on general electrostatic considerations, our model can form the requisite foundation for treatments of more complex systems. Some analogies to adventitiously doped materials such as pi-conjugated polymers are proposed.
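The role of the Poole-Frenkel factor can be made concrete: in a Poole-Frenkel-like model the applied field F lowers the activation energy by beta*sqrt(F), boosting the thermally activated current by exp(beta*sqrt(F)/kT). The sketch below uses the fitted factor beta = 2 x 10^-4 eV (cm/V)^1/2 reported above; the field value and temperature are illustrative choices, not measurements from the study:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
BETA = 2e-4      # modified Poole-Frenkel factor, eV (cm/V)^(1/2)

def barrier_lowering(field):
    """Field-induced reduction of the activation energy, in eV."""
    return BETA * math.sqrt(field)

def current_enhancement(field, temp=300.0):
    """Factor by which the field boosts the thermally activated current."""
    return math.exp(barrier_lowering(field) / (K_B * temp))

print(round(barrier_lowering(1e5), 3))  # ~0.063 eV at 10^5 V/cm
print(current_enhancement(1e5))         # roughly a tenfold boost at 300 K
```

This simple exponential is why even a modest field dependence of E_aJ translates into large changes in the dark current, consistent with the measured trends described in the abstract.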
Consistent estimators in random censorship semiparametric models
Institute of Scientific and Technical Information of China (English)
王启华
1996-01-01
For the fixed design regression model, when the responses Y_i are randomly censored on the right, estimators of the unknown parameter and of the regression function g from the censored observations are defined for the two cases in which the censoring distribution is known and unknown, respectively. Moreover, sufficient conditions are established under which these estimators are strongly consistent and pth (p>2) mean consistent.
Pressure-Balance Consistency in Magnetospheric Modelling
Institute of Scientific and Technical Information of China (English)
肖永登; 陈出新
2003-01-01
There have been many magnetic field models for geophysical and astrophysical bodies. These theoretical or empirical models represent reality very well in some cases, but in other cases they may be far from reality. We argue that these models become more reasonable if they are modified by appropriate coordinate transformations. To demonstrate such a transformation, we use this method to resolve the "pressure-balance inconsistency" problem that occurs when plasma is transported from the outer plasma sheet of the Earth into the inner plasma sheet.
Planck 2013 results. XXXI. Consistency of the Planck data
DEFF Research Database (Denmark)
Ade, P. A. R.; Arnaud, M.; Ashdown, M.
2014-01-01
In this paper, we analyse the level of consistency achieved in the 2013 Planck data. We concentrate on comparisons between the 70, 100, and 143 GHz channel maps and power spectra, particularly over the angular scales of the first and second acoustic peaks, on maps masked for diffuse foreground emission. We find agreement (measured by deviation of the ratio from unity) between the 70 and 100 GHz power spectra averaged over 70 ≤ ℓ ≤ 390 at the 0.8% level, and agreement between the 143 and 100 GHz power spectra of 0.4% over the same ℓ range. These values are within and consistent with the overall uncertainties in calibration given in the Planck 2013 … the 70/100 ratio. Correcting for this, the 70, 100, and 143 GHz power spectra agree to 0.4% over the first two acoustic peaks. The likelihood analysis that produced the 2013 cosmological parameters incorporated uncertainties larger than this. We show explicitly that correction of the missing near-sidelobe power …
Consistent Partial Least Squares Path Modeling
Dijkstra, Theo K.; Henseler, Jörg
2015-01-01
This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the …
Planck 2013 results. XXXI. Consistency of the Planck data
Ade, P A R; Ashdown, M; Aumont, J; Baccigalupi, C; Banday, A.J; Barreiro, R.B; Battaner, E; Benabed, K; Benoit-Levy, A; Bernard, J.P; Bersanelli, M; Bielewicz, P; Bond, J.R; Borrill, J; Bouchet, F.R; Burigana, C; Cardoso, J.F; Catalano, A; Challinor, A; Chamballu, A; Chiang, H.C; Christensen, P.R; Clements, D.L; Colombi, S; Colombo, L.P.L; Couchot, F; Coulais, A; Crill, B.P; Curto, A; Cuttaia, F; Danese, L; Davies, R.D; Davis, R.J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Desert, F.X; Dickinson, C; Diego, J.M; Dole, H; Donzelli, S; Dore, O; Douspis, M; Dupac, X; Ensslin, T.A; Eriksen, H.K; Finelli, F; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Galeotta, S; Ganga, K; Giard, M; Gonzalez-Nuevo, J; Gorski, K.M.; Gratton, S.; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Hansen, F.K; Hanson, D; Harrison, D; Henrot-Versille, S; Herranz, D; Hildebrandt, S.R; Hivon, E; Hobson, M; Holmes, W.A.; Hornstrup, A; Hovest, W.; Huffenberger, K.M; Jaffe, T.R; Jaffe, A.H; Jones, W.C; Keihanen, E; Keskitalo, R; Knoche, J; Kunz, M; Kurki-Suonio, H; Lagache, G; Lahteenmaki, A; Lamarre, J.M; Lasenby, A; Lawrence, C.R; Leonardi, R; Leon-Tavares, J; Lesgourgues, J; Liguori, M; Lilje, P.B; Linden-Vornle, M; Lopez-Caniego, M; Lubin, P.M; Macias-Perez, J.F; Maino, D; Mandolesi, N; Maris, M; Martin, P.G; Martinez-Gonzalez, E; Masi, S; Matarrese, S; Mazzotta, P; Meinhold, P.R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschenes, M.A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Norgaard-Nielsen, H.U; Noviello, F; Novikov, D; Novikov, I; Oxborrow, C.A; Pagano, L; Pajot, F; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Pearson, D; Pearson, T.J; Perdereau, O; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Pratt, G.W; Prunet, S; Puget, J.L; Rachen, J.P; Reinecke, M; Remazeilles, M; 
Renault, C; Ricciardi, S.; Ristorcelli, I; Rocha, G.; Roudier, G; Rubino-Martin, J.A; Rusholme, B; Sandri, M; Scott, D; Stolyarov, V; Sudiwala, R; Sutton, D; Suur-Uski, A.S; Sygnet, J.F; Tauber, J.A; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Wade, L.A; Wandelt, B.D; Wehus, I K; White, S D M; Yvon, D; Zacchei, A; Zonca, A
2014-01-01
The Planck design and scanning strategy provide many levels of redundancy that can be exploited to provide tests of internal consistency. One of the most important is the comparison of the 70 GHz (amplifier) and 100 GHz (bolometer) channels. Based on different instrument technologies, with feeds located differently in the focal plane, analysed independently by different teams using different software, and near the minimum of diffuse foreground emission, these channels are in effect two different experiments. The 143 GHz channel has the lowest noise level on Planck, and is near the minimum of unresolved foreground emission. In this paper, we analyse the level of consistency achieved in the 2013 Planck data. We concentrate on comparisons between the 70, 100, and 143 GHz channel maps and power spectra, particularly over the angular scales of the first and second acoustic peaks, on maps masked for diffuse Galactic emission and for strong unresolved sources. Difference maps covering angular scales from 8°...
Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.; Kozyra, J. W.
2007-01-01
It is well-known that the effects of electromagnetic ion cyclotron (EMIC) waves on ring current (RC) ion and radiation belt (RB) electron dynamics strongly depend on such particle/wave characteristics as the phase-space distribution function, frequency, wavenormal angle, wave energy, and the form of wave spectral energy density. The consequence is that accurate modeling of EMIC waves and RC particles requires robust inclusion of the interdependent dynamics of wave growth/damping, wave propagation, and particles. Such a self-consistent model is being progressively developed by Khazanov et al. [2002, 2006, 2007]. This model is based on a system of coupled kinetic equations for the RC and EMIC wave power spectral density along with the ray tracing equations. Thorne and Horne [2007] (hereafter referred to as TH2007) call the Khazanov et al. [2002, 2006] results into question in their Comment. The points in contention can be summarized as follows. TH2007 claim that: (1) "the important damping of waves by thermal heavy ions is completely ignored", and Landau damping during resonant interaction with thermal electrons is not included in our model; (2) EMIC wave damping due to RC O+ is not included in our simulation; (3) non-linear processes limiting EMIC wave amplitude are not included in our model; (4) growth of the background fluctuations to a physically significant amplitude "must occur during a single transit of the unstable region" with subsequent damping below bi-ion latitudes, and consequently "the bounce averaged wave kinetic equation employed in the code contains a physically erroneous assumption". Our reply will address each of these points as well as other criticisms mentioned in the Comment. TH2007 are focused on two of our papers that are separated by four years. Significant progress in the self-consistent treatment of the RC-EMIC wave system has been achieved during those years. The paper by Khazanov et al. [2006] presents the latest version of our model, and in …
A consistent collinear triad approximation for operational wave models
Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.
2016-08-01
In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.
CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
Liu Jixue; Chen Xiru
2005-01-01
The consistency of the LS estimate of the simple linear EV (errors-in-variables) model is studied. It is shown that, under some common assumptions on the model, weak and strong consistency of the estimate are equivalent, but this is not so for quadratic-mean consistency.
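As a toy illustration of why consistency of LS in errors-in-variables models is delicate, the following sketch simulates a simple linear EV model and shows the well-known attenuation of the naive LS slope when the regressor is observed with error. All parameter values and variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta0, beta1 = 1.0, 2.0          # true intercept and slope (illustrative)
x = rng.normal(0.0, 1.0, n)      # latent regressor
u = rng.normal(0.0, 1.0, n)      # measurement error on x
e = rng.normal(0.0, 0.5, n)      # model error
X = x + u                        # observed, error-contaminated regressor
y = beta0 + beta1 * x + e

# Naive LS on the observed regressor converges to the attenuated slope
# beta1 * var(x) / (var(x) + var(u)) = 2 * 1/(1 + 1) = 1.0, not 2.0,
# which is why consistency of LS in EV models needs extra assumptions.
b1 = np.cov(X, y, bias=True)[0, 1] / np.var(X)
b0 = y.mean() - b1 * X.mean()
print(round(b1, 2))  # approximately 1.0, well below the true slope 2.0
```

With a large sample the attenuation factor dominates any sampling noise, so the bias is visible in a single run.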
Radio data and synchrotron emission in consistent cosmic ray models
Bringmann, Torsten; Lineros, Roberto A
2011-01-01
We consider the propagation of electrons in phenomenological two-zone diffusion models compatible with cosmic-ray nuclear data and compute the diffuse synchrotron emission resulting from their interaction with galactic magnetic fields. We find models in agreement not only with cosmic ray data but also with radio surveys at essentially all frequencies. Requiring such a globally consistent description strongly disfavors both a very large (L>15 kpc) and small (L<1 kpc) effective size of the diffusive halo. This has profound implications for, e.g., indirect dark matter searches.
Logical consistency and sum-constrained linear models
van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.
2006-01-01
A topic that has received quite some attention in the seventies and eighties is logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a …
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
Directory of Open Access Journals (Sweden)
J. C. Hargreaves
2011-03-01
We investigate the consistency of various ensembles of model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day; however, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
Directory of Open Access Journals (Sweden)
J. C. Hargreaves
2011-08-01
We investigate the consistency of various ensembles of climate model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day. However, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
Self consistent tight binding model for dissociable water
Lin, You; Wynveen, Aaron; Halley, J. W.; Curtiss, L. A.; Redfern, P. C.
2012-05-01
We report results of development of a self consistent tight binding model for water. The model explicitly describes the electrons of the liquid self consistently, allows dissociation of the water and permits fast direct dynamics molecular dynamics calculations of the fluid properties. It is parameterized by fitting to first principles calculations on water monomers, dimers, and trimers. We report calculated radial distribution functions of the bulk liquid, a phase diagram and structure of solvated protons within the model as well as ac conductivity of a system of 96 water molecules of which one is dissociated. Structural properties and the phase diagram are in good agreement with experiment and first principles calculations. The estimated DC conductivity of a computational sample containing a dissociated water molecule was an order of magnitude larger than that reported from experiment though the calculated ratio of proton to hydroxyl contributions to the conductivity is very close to the experimental value. The conductivity results suggest a Grotthuss-like mechanism for the proton component of the conductivity.
Short Polymer Modeling using Self-Consistent Integral Equation Method
Kim, Yeongyoon; Park, So Jung; Kim, Jaeup
2014-03-01
Self-consistent field theory (SCFT) is an excellent mean-field theoretical tool for predicting the morphologies of polymer-based materials. In the standard SCFT, the polymer is modeled as a Gaussian chain, which is suitable for a polymer of high molecular weight but not necessarily for one of low molecular weight. To overcome this limitation, Matsen and coworkers have recently developed an SCFT of discrete polymer chains in which one polymer is modeled as a finite number of beads joined by freely jointed bonds of fixed length. In their model, the diffusion equation of the canonical SCFT is replaced by an iterative integral equation, and the full spectral method is used to produce the phase diagram of short block copolymers. In this study, for the finite-length-chain problem, we apply the pseudospectral method, which is the most efficient numerical scheme for solving the iterative integral equation. We use this new numerical method to investigate two different types of polymer bonds: the spring-beads model and the freely-jointed chain model. By comparing these results with those of the Gaussian chain model, the influences of the chain length and the type of bonds on the morphologies of diblock copolymer melts are examined. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (no. 2012R1A1A2043633).
The Self-Consistency Model of Subjective Confidence
Koriat, Asher
2012-01-01
How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen…
Model Checking Data Consistency for Cache Coherence Protocols
Institute of Scientific and Technical Information of China (English)
Hong Pan; Hui-Min Lin; Yi Lv
2006-01-01
A method for automatic verification of cache coherence protocols is presented, in which cache coherence protocols are modeled as concurrent value-passing processes, and control and data consistency requirements are described as formulas in first-order μ-calculus. A model checker is employed to check whether the protocol under investigation satisfies the required properties. Using this method, a data consistency error has been revealed in a well-known cache coherence protocol. The error has been corrected, and the revised protocol has been shown free from data consistency errors for any data domain size, by appealing to the data independence technique.
Creation of Consistent Burn Wounds: A Rat Model
Directory of Open Access Journals (Sweden)
Elijah Zhengyang Cai
2014-07-01
Background Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods Ten male Sprague-Dawley rats were anesthetized and the dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested on its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results The average wound size was 0.9957 cm² (standard deviation [SD] 0.1845; n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined and uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for durations of 5, 10, and 20 seconds, respectively. A burn duration of 5 seconds resulted in full-thickness damage. Burn durations of 10 and 20 seconds resulted in full-thickness damage involving the subjacent skeletal muscle. Conclusions This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model.
An Extended Model Driven Framework for End-to-End Consistent Model Transformation
Directory of Open Access Journals (Sweden)
Mr. G. Ramesh
2016-08-01
Model Driven Development (MDD) enables quick transformation from models to corresponding systems. Forward engineering features of modelling tools can help in generating source code from models. To build a robust system it is important to have consistency checking both within the design models and between the design model and the transformed implementation. Our framework, named Extensible Real Time Software Design Inconsistency Checker (XRTSDIC) and proposed in our previous papers, supports consistency checking in design models. This paper focuses on automatic model transformation. An algorithm and transformation rules for model transformation from UML class diagrams to ERD and SQL are proposed. Model transformation offers many advantages, such as reducing the cost of development, improving quality, enhancing productivity and increasing customer satisfaction. The proposed framework has been enhanced to ensure that transformed implementations conform to their model counterparts, besides checking end-to-end consistency.
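The kind of class-diagram-to-SQL transformation rule the abstract describes can be sketched in miniature as follows. This is an illustrative toy, not the authors' XRTSDIC framework; the type map and function names are assumptions.

```python
# One hypothetical transformation rule: each UML class becomes a table,
# each attribute a column, with a simple UML-to-SQL type map.
TYPE_MAP = {"String": "VARCHAR(255)", "Integer": "INT", "Boolean": "BOOLEAN"}

def class_to_sql(class_name, attributes):
    """attributes: list of (name, uml_type) pairs; the first is the key."""
    cols = [f"  {name} {TYPE_MAP[uml_type]}" for name, uml_type in attributes]
    cols[0] += " PRIMARY KEY"
    return f"CREATE TABLE {class_name} (\n" + ",\n".join(cols) + "\n);"

ddl = class_to_sql("Customer", [("id", "Integer"), ("name", "String")])
print(ddl)
```

A real transformation would also map associations to foreign keys and check the generated schema back against the source model for consistency, which is the end-to-end checking the paper targets.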
Standard Model Vacuum Stability and Weyl Consistency Conditions
DEFF Research Database (Denmark)
Antipin, Oleg; Gillioz, Marc; Krog, Jens;
2013-01-01
At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions.
Quantum monadology: a consistent world model for consciousness and physics.
Nakagomi, Teruaki
2003-04-01
The NL world model presented in the previous paper is embodied by use of relativistic quantum mechanics, which reveals the significance of the reduction of quantum states and the relativity principle, and locates consciousness and the concept of flowing time consistently within physics. This model provides a consistent framework for resolving the apparent incompatibilities between consciousness (as our interior experience) and matter (as described by quantum mechanics and relativity theory). Does matter have an inside? What is the flowing time now? Does physics allow indeterminism by volition? The problem of quantum measurement is also resolved in this model.
Consistency and Reconciliation Model In Regional Development Planning
Directory of Open Access Journals (Sweden)
Dina Suryawati
2016-10-01
The aim of this study was to identify the problems in, and determine a conceptual model of, regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that must be well integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in a region involves a technocratic system with both top-down and bottom-up participation. The two must be balanced, and should neither overlap nor dominate each other. Keywords: regional, development, planning, consistency, reconciliation
Model-Consistent Sparse Estimation through the Bootstrap
Bach, Francis
2009-01-01
We consider the least-square linear regression problem with regularization by the $\\ell^1$-norm, a problem usually referred to as the Lasso. In this paper, we first present a detailed asymptotic analysis of model consistency of the Lasso in low-dimensional settings. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection. For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection procedure, referred to as the Bolasso, is extended to high-dimensional settings by a provably consistent two-step procedure.
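The Bolasso procedure the abstract describes, running the Lasso on bootstrap resamples of the data and intersecting the selected supports, can be sketched as follows. The regularization value and toy data are illustrative assumptions, and scikit-learn's `Lasso` stands in for any Lasso solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bolasso_support(X, y, n_boot=32, alpha=0.05, seed=0):
    """Intersect Lasso supports across bootstrap resamples (Bolasso idea)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    support = np.ones(p, dtype=bool)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # bootstrap resample with replacement
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        support &= np.abs(coef) > 1e-10        # keep only always-selected variables
    return support

# Toy check: y depends on the first two of five predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)
print(bolasso_support(X, y))  # typically only the first two entries are True
```

Intersecting supports removes the spuriously selected variables, since each irrelevant variable only needs to be dropped in one resample to be excluded overall, which is the mechanism behind the consistency result.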
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Casana, Rodolfo; Moreira, Roemir P M
2011-01-01
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor $\kappa_{\mu\nu}$ …
Multiscale Parameter Regionalization for consistent global water resources modelling
Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.
2017-04-01
Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale-independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale-independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation was carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer-function parameters across scales, and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that is consistent across scales, and also allows validation of discharge for smaller catchments, even with calibration at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other …
Emergent Dynamics of a Thermodynamically Consistent Particle Model
Ha, Seung-Yeal; Ruggeri, Tommaso
2017-03-01
We present a thermodynamically consistent particle (TCP) model motivated by the theory of multi-temperature mixtures of fluids in the case of spatially homogeneous processes. The proposed model incorporates the Cucker-Smale (C-S) type flocking model as its isothermal approximation. However, it is more complex than the C-S model, because the mutual interactions are not only "mechanical" but are also affected by the "temperature effect", as individual particles may exhibit distinct internal energies. We develop a framework for asymptotic weak and strong flocking in the context of the proposed model.
Viscoelastic models with consistent hypoelasticity for fluids undergoing finite deformations
Altmeyer, Guillaume; Rouhaud, Emmanuelle; Panicaud, Benoit; Roos, Arjen; Kerner, Richard; Wang, Mingchuan
2015-08-01
Constitutive models of viscoelastic fluids are written with rate-form equations when considering finite deformations. Trying to extend the approach used to model these effects from an infinitesimal deformation to a finite transformation framework, one has to ensure that the tensors and their rates are indifferent with respect to the change of observer and to the superposition with rigid body motions. Frame-indifference problems can be solved with the use of an objective stress transport, but the choice of such an operator is not obvious and the use of certain transports usually leads to physically inconsistent formulation of hypoelasticity. The aim of this paper is to present a consistent formulation of hypoelasticity and to combine it with a viscosity model to construct a consistent viscoelastic model. In particular, the hypoelastic model is reversible.
A self-consistent dynamo model for fully convective stars
Yadav, Rakesh Kumar; Christensen, Ulrich; Morin, Julien; Gastine, Thomas; Reiners, Ansgar; Poppenhaeger, Katja; Wolk, Scott J.
2016-01-01
The tachocline region inside the Sun, where the rigidly rotating radiative core meets the differentially rotating convection zone, is thought to be crucial for generating the Sun's magnetic field. Low-mass fully convective stars do not possess a tachocline and were originally expected to generate only weak small-scale magnetic fields. Observations, however, have painted a different picture of magnetism in rapidly-rotating fully convective stars: (1) Zeeman broadening measurements revealed average surface field of several kiloGauss (kG), which is similar to the typical field strength found in sunspots. (2) Zeeman-Doppler-Imaging (ZDI) technique discovered large-scale magnetic fields with a morphology often similar to the Earth's dipole-dominated field. (3) Comparison of Zeeman broadening and ZDI results showed that more than 80% of the magnetic flux resides at small scales. So far, theoretical and computer simulation efforts have not been able to reproduce these features simultaneously. Here we present a self-consistent global model of magnetic field generation in low-mass fully convective stars. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The ZDI technique applied to synthetic spectropolarimetric data based on our model recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass fully convective stars.
Bolasso: model consistent Lasso estimation through the bootstrap
Bach, Francis
2008-01-01
We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning repository.
Detection and quantification of flow consistency in business process models
DEFF Research Database (Denmark)
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel
2017-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics …
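To make the idea of a flow-direction consistency metric concrete, here is a deliberately simple toy metric, not one of the paper's three: classify each edge of a laid-out process model by its dominant compass direction and report the share of edges following the most common one. Function names and the example layout are assumptions.

```python
from collections import Counter

def flow_consistency(edges):
    """Toy metric: fraction of edges whose dominant direction matches the
    most common direction in the layout. edges is a list of node-position
    pairs ((x1, y1), (x2, y2)), with y growing downwards as in most diagrams."""
    def direction(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        if abs(dx) >= abs(dy):
            return "right" if dx >= 0 else "left"
        return "down" if dy >= 0 else "up"
    counts = Counter(direction(p, q) for p, q in edges)
    return max(counts.values()) / sum(counts.values())

# Three edges flow rightwards, one flows upwards: consistency = 0.75.
edges = [((0, 0), (2, 0)), ((2, 0), (4, 1)), ((4, 1), (6, 1)), ((6, 1), (6, -2))]
print(flow_consistency(edges))
```

A value of 1.0 would mean a perfectly uniform flow direction; the challenges the paper discusses arise from edges with bends, near-diagonal edges, and how to weight them.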
A consistent transported PDF model for treating differential molecular diffusion
Wang, Haifeng; Zhang, Pei
2016-11-01
Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows; it is caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem and yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.
Simplified Models for Dark Matter Face their Consistent Completions
Energy Technology Data Exchange (ETDEWEB)
Goncalves, Dorival [Pittsburgh U.; Machado, Pedro N. [Madrid, IFT; No, Jose Miguel [Sussex U.
2016-11-14
Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.
Towards consistent nuclear models and comprehensive nuclear data evaluations
Energy Technology Data Exchange (ETDEWEB)
Bouland, O [Los Alamos National Laboratory; Hale, G M [Los Alamos National Laboratory; Lynn, J E [Los Alamos National Laboratory; Talou, P [Los Alamos National Laboratory; Bernard, D [FRANCE; Litaize, O [FRANCE; Noguere, G [FRANCE; De Saint Jean, C [FRANCE; Serot, O [FRANCE
2010-01-01
The essence of this paper is to highlight the consistency achieved nowadays in nuclear data and uncertainty assessments in terms of compound-nucleus reaction theory, from the neutron separation energy up to the continuum. By making the theories used in the resolved resonance (R-matrix theory), unresolved resonance (average R-matrix theory) and continuum (optical model) ranges continuous through a generalization of the so-called SPRT method, consistent average parameters are extracted from observed measurements, and the associated covariances are calculated over the whole energy range. This paper recalls, in particular, recent advances in fission cross-section calculations and suggests some hints for future developments.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically observed properties of variance swap dynamics. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying.
Consistency Across Standards or Standards in a New Business Model
Russo, Dane M.
2010-01-01
Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, the balance needed between prescriptive and general standards, enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides include: NASA Procedural Requirements 8705.2B, which identifies human rating standards and requirements; draft health and medical standards for human rating; what has been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of governmental and non-governmental human factors standards.
A detailed self-consistent vertical Milky Way disc model
Directory of Open Access Journals (Sweden)
Gao S.
2012-02-01
We present a self-consistent vertical disc model of the thin and thick discs in the solar vicinity. The model is optimized to fit the local kinematics of main sequence stars by varying the star formation history and the dynamical heating function. The star formation history and the dynamical heating function are not uniquely determined by the local kinematics alone. For four different pairs of input functions we calculate star count predictions at high galactic latitude as a function of colour. The comparison with North Galactic Pole data from SDSS/SEGUE leads to significant constraints on the local star formation history.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
2013-01-01
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...
Self consistent modeling of accretion columns in accretion powered pulsars
Falkner, Sebastian; Schwarm, Fritz-Walter; Wolff, Michael Thomas; Becker, Peter A.; Wilms, Joern
2016-04-01
We combine three physical models to self-consistently derive the observed flux and pulse profiles of the accretion columns of neutron stars. From the thermal and bulk Comptonization model of Becker & Wolff (2006) we obtain seed photon continua produced in the dense inner regions of the accretion column. In a thin outer layer, these seed continua are imprinted with cyclotron resonant scattering features calculated using Monte Carlo simulations. The observed phase- and energy-dependent flux corresponding to these emission profiles is then calculated, taking relativistic light bending into account. We present simulated pulse profiles and the predicted variation of the observable X-ray spectrum with pulse phase.
Warped 5D Standard Model Consistent with EWPT
Cabrer, Joan A; Quiros, Mariano
2011-01-01
For a 5D Standard Model propagating in an AdS background with an IR-localized Higgs, compatibility of bulk KK gauge modes with EWPT yields a phenomenologically unappealing KK spectrum (m > 12.5 TeV) and leads to a "little hierarchy problem". For a bulk Higgs, the solution to the hierarchy problem reduces the previous bound only by a factor of sqrt(3). As a way out, models with an enhanced bulk gauge symmetry SU(2)_R x U(1)_(B-L) were proposed. In this note we describe a much simpler (5D Standard) Model, where the introduction of an enlarged gauge symmetry is no longer required. It is based on a warped gravitational background which departs from AdS at the IR brane, and a bulk propagating Higgs. The model is consistent with EWPT for a range of KK masses within the LHC reach.
Consistent regularization and renormalization in models with inhomogeneous phases
Adhikari, Prabal
2016-01-01
In many models in condensed matter physics and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different consistent ways of regularizing and renormalizing quantum fluctuations, focusing on a symmetric energy cutoff scheme and dimensional regularization. We apply these techniques calculating the vacuum energy in the NJL model in 1+1 dimensions in the large-$N_c$ limit and the 3+1 dimensional quark-meson model in the mean-field approximation both for a one-dimensional chiral-density wave.
Consistent regularization and renormalization in models with inhomogeneous phases
Adhikari, Prabal; Andersen, Jens O.
2017-02-01
In many models in condensed matter and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different ways of consistently regularizing and renormalizing quantum fluctuations, focusing on momentum cutoff, symmetric energy cutoff, and dimensional regularization. We apply these techniques calculating the vacuum energy in the Nambu-Jona-Lasinio model in 1 +1 dimensions in the large-Nc limit and in the 3 +1 dimensional quark-meson model in the mean-field approximation both for a one-dimensional chiral-density wave.
Self-consistent triaxial de Zeeuw-Carollo Models
Thakur, Parijat; Das, Mousumi; Chakraborty, D K; Ann, H B
2007-01-01
We use the usual method of Schwarzschild to construct self-consistent solutions for the triaxial de Zeeuw & Carollo (1996, hereafter ZC96) models with central density cusps. ZC96 models are triaxial generalisations of the spherical $\gamma$-models of Dehnen, whose densities vary as $r^{-\gamma}$ near the center and $r^{-4}$ at large radii and hence possess a central density core for $\gamma=0$ and cusps for $\gamma > 0$. We consider four triaxial models from ZC96: two prolate triaxial models, $(p, q) = (0.65, 0.60)$ with $\gamma = 1.0$ and 1.5, and two oblate triaxial models, $(p, q) = (0.95, 0.60)$ with $\gamma = 1.0$ and 1.5. We compute 4500 orbits in each model for time periods of $10^{5} T_{D}$. We find that a large fraction of the orbits in each model are stochastic, as indicated by their nonzero Lyapunov exponents. The stochastic orbits in each model can sustain regular shapes for $\sim 10^{3} T_{D}$ or longer, which suggests that they diffuse slowly through their allowed phase-space. Except for the oblate triaxial models with $\gamma ...
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Energy Technology Data Exchange (ETDEWEB)
Casana, Rodolfo; Ferreira, Manoel M.; Moreira, Roemir P.M. [Universidade Federal do Maranhao (UFMA), Departamento de Fisica, Sao Luis, MA (Brazil)
2012-07-15
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the analysis of the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, both affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor κ_{μν}. The propagator of the gauge field is explicitly evaluated and expressed in terms of linearly independent symmetric tensors, presenting only one physical mode. The same holds for the scalar propagator. A consistency analysis is performed based on the poles of the propagators. The isotropic parity-even sector is stable, causal and unitary for 0 ≤ κ_{00} < 1. On the other hand, the anisotropic sector is stable and unitary but in general noncausal. Finally, it is shown that this planar model interacting with a λ|φ|⁴-Higgs field supports compact-like vortex configurations.
Self-Consistent Modeling of Reionization in Cosmological Hydrodynamical Simulations
Oñorbe, Jose; Lukić, Zarija
2016-01-01
The ultraviolet background (UVB) emitted by quasars and galaxies governs the ionization and thermal state of the intergalactic medium (IGM), regulates the formation of high-redshift galaxies, and is thus a key quantity for modeling cosmic reionization. The vast majority of cosmological hydrodynamical simulations implement the UVB via a set of spatially uniform photoionization and photoheating rates derived from UVB synthesis models. We show that simulations using canonical UVB rates reionize, and perhaps more importantly spuriously heat, the IGM much earlier (z ~ 15) than they should. This problem arises because at z > 6, where observational constraints are non-existent, the UVB amplitude is far too high. We introduce a new methodology to remedy this issue and generate self-consistent photoionization and photoheating rates to model any chosen reionization history. Following this approach, we run a suite of hydrodynamical simulations of different reionization scenarios, and explore the impact of the timing of ...
Consistent Static Models of Local Thermospheric Composition Profiles
Picone, J M; Drob, D P
2016-01-01
The authors investigate the ideal, nondriven multifluid equations of motion to identify consistent (i.e., truly stationary), mechanically static models for composition profiles within the thermosphere. These physically faithful functions are necessary to define the parametric core of future empirical atmospheric models and climatologies. Based on the strength of interspecies coupling, the thermosphere has three altitude regions: (1) the lower thermosphere, in which the species are strongly coupled and fully mixed; (2) the upper thermosphere (herein z > ~200 km), in which the species flows are approximately uncoupled; and (3) a transition region in between, where the effective species particle mass and the effective species vertical flow interpolate between the solutions for the upper and lower thermosphere. We place this view in the context of current terminology within the community, i.e., a fully mixed (lower) region and an upper region in diffusive equilibrium (DE). The latter condition, DE, currently used in empirical composition models, does not represent a truly static composition profile ...
Thermodynamically consistent model of brittle oil shales under overpressure
Izvekov, Oleg
2016-04-01
The concept of dual porosity is a common way to simulate oil shale production. In the frame of this concept, the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during the pressure reduction that accompanies production. In this work, a new thermodynamically consistent model that generalizes the dual-porosity model is proposed. Its particular features are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to simulate the development of secondary fractures in the low-permeability matrix. Slip along natural fractures is simulated in the frame of plasticity theory with the Drucker-Prager criterion.
A minimal model of self-consistent partial synchrony
Clusella, Pau; Politi, Antonio; Rosenblum, Michael
2016-09-01
We show that self-consistent partial synchrony in globally coupled oscillatory ensembles is a general phenomenon. We analyze in detail the appearance and stability properties of this state in possibly the simplest setup, a biharmonic Kuramoto-Daido phase model, and also demonstrate the effect in limit-cycle relaxational Rayleigh oscillators. Such a regime extends the notion of a splay state from a uniform distribution of phases to an oscillating one. Suitable collective observables, such as the Kuramoto order parameter, allow detecting the presence of an inhomogeneous distribution. The characteristic and most peculiar property of self-consistent partial synchrony is the difference between the frequency of single units and that of the macroscopic field.
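As a concrete illustration of the collective observable mentioned above, the following sketch integrates a biharmonic Kuramoto-Daido model of identical oscillators and evaluates the Kuramoto order parameter. The coupling constants, system size, and integration settings are assumptions made for illustration, not values from the paper.

```python
import cmath
import math
import random

# Euler integration of a biharmonic Kuramoto-Daido phase model with
# identical oscillators (assumed couplings a1, a2; not the paper's values).
# The Kuramoto order parameter Z1 = (1/N) * sum_j exp(i*theta_j) satisfies
# |Z1| ~ 0 for a uniform (splay) phase distribution and |Z1| = 1 for full
# synchrony, so it detects an inhomogeneous phase distribution.

def simulate(n=200, steps=2000, dt=0.01, a1=1.0, a2=0.2, seed=1):
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # Mean-field form: the biharmonic coupling
        # (1/N) sum_j [a1*sin(theta_j - theta_i) + a2*sin(2*theta_j - 2*theta_i)]
        # enters only through the order parameters Z1 and Z2.
        z1 = sum(cmath.exp(1j * t) for t in theta) / n
        z2 = sum(cmath.exp(2j * t) for t in theta) / n
        theta = [t + dt * (a1 * (z1 * cmath.exp(-1j * t)).imag
                           + a2 * (z2 * cmath.exp(-2j * t)).imag)
                 for t in theta]
    return sum(cmath.exp(1j * t) for t in theta) / n

z1 = simulate()
print(abs(z1))  # attractive coupling drives |Z1| toward 1 from random phases
```

With these attractive couplings the ensemble synchronizes, so |Z1| grows from its small random-phase value toward 1; partial synchrony in the paper's sense corresponds to intermediate, oscillating values of this observable.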
Mean-field theory and self-consistent dynamo modeling
Energy Technology Data Exchange (ETDEWEB)
Yoshizawa, Akira; Yokoi, Nobumitsu [Tokyo Univ. (Japan). Inst. of Industrial Science; Itoh, Sanae-I [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)
2001-12-01
Mean-field dynamo theory is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields, such as the geomagnetic field and sunspots, and on the occurrence of flows driven by magnetic fields in planetary and fusion phenomena. (author)
Directory of Open Access Journals (Sweden)
June Ronald K
2009-11-01
Background: Cartilage degeneration via osteoarthritis affects millions of elderly people worldwide, yet the specific contributions of matrix biopolymers toward cartilage viscoelastic properties remain unknown despite 30 years of research. Polymer dynamics theory may enable such an understanding, and predicts that cartilage stress-relaxation will proceed faster when the average polymer length is shortened. Methods: This study tested whether the predictions of polymer dynamics were consistent with changes in cartilage mechanics caused by enzymatic digestion of specific cartilage extracellular matrix molecules. Bovine calf cartilage explants were cultured overnight before being immersed in type IV collagenase, bacterial hyaluronidase, or control solutions. Stress-relaxation and cyclical loading tests were performed after 0, 1, and 2 days of incubation. Results: Stress-relaxation proceeded faster following enzymatic digestion by collagenase and bacterial hyaluronidase after 1 day of incubation (both p ≤ 0.01). The storage and loss moduli at frequencies of 1 Hz and above were smaller after 1 day of digestion by collagenase and bacterial hyaluronidase (all p ≤ 0.02). Conclusion: These results demonstrate that enzymatic digestion alters cartilage viscoelastic properties in a manner consistent with polymer dynamics mechanisms. Future studies may expand the use of polymer dynamics as a microstructural model for understanding the contributions of specific matrix molecules toward tissue-level viscoelastic properties.
Consistency of the tachyon warm inflationary universe models
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiao-Min; Zhu, Jian-Yang, E-mail: zhangxm@mail.bnu.edu.cn, E-mail: zhujy@bnu.edu.cn [Department of Physics, Beijing Normal University, Beijing 100875 (China)
2014-02-01
This study concerns the consistency of the tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied, when the dissipative coefficient Γ = Γ_0 and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of the tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ_0) is usually not a suitable assumption for a warm inflationary model.
A self-consistent spin-diffusion model for micromagnetics
Abert, Claas
2016-12-17
We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization-dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by the current-driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure is investigated as a function of the tilting angle of the magnetization in the different layers. Both examples show good agreement with reference simulations and experiments, respectively.
A proposal for a consistent parametrization of earth models
Forbriger, Thomas; Friederich, Wolfgang
2005-08-01
The current way to parametrize earth models in terms of real-valued seismic velocities and quality factors is incomplete, as it does not specify how complex-valued viscoelastic moduli or complex velocities should be computed from them. Various ways to do this can be found in the literature. Depending on the context, they may specify (1) the real part of the viscoelastic modulus, (2) the absolute value of the viscoelastic modulus, (3) the real part of the complex velocity or (4) the phase velocity of a propagating plane wave. We propose here to exclusively use the first alternative, because it is the only one which allows both a flexible choice of elastic parameters and a mathematically rigorous evaluation of the complex-valued viscoelastic moduli. The other definitions only permit an evaluation of viscoelastic moduli if the tabulated quality factors are directly associated with the listed velocities. Ignoring the subtle differences between these definitions leads to variations in viscoelastic moduli which are second order in 1/Q, where Q is a quality factor. This may be the reason why the topic has never been discussed in the literature. In the case of shallow seismic media, however, where quality factors may assume values of less than 10, the subtle differences become noticeable in synthetic seismograms. It is then essential to use the same definition in all algorithms to make results comparable. Matters become worse for anisotropic media, which are commonly specified in terms of real elastic moduli and quality factors for effective isotropic moduli. In that case, the complex-valued viscoelastic moduli cannot be determined uniquely. However, interpreting the tabulated constants as the real parts of the complex-valued viscoelastic moduli at least allows a consistent definition, which respects the relative magnitude of the anelastic and anisotropic parts compared to the elastic parts. It should be noted that all these considerations apply to complex-valued viscoelastic
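The quantitative claim above, that the definitions differ only at second order in 1/Q, is easy to check numerically. The sketch below compares definition (1), in which the tabulated velocity fixes the real part of the modulus, with definition (2), in which it fixes the absolute value, assuming the usual convention Q = Re(M)/Im(M); the density and velocity values are illustrative, not taken from any published model.

```python
# Compare two of the conventions discussed above for building a complex
# viscoelastic modulus M from a tabulated velocity v and quality factor Q,
# assuming Q = Re(M) / Im(M). The rho and v values are illustrative only.

def modulus_real_part(rho, v, q):
    """Definition (1): rho * v**2 is the real part of M."""
    re_m = rho * v * v
    return complex(re_m, re_m / q)

def modulus_abs_value(rho, v, q):
    """Definition (2): rho * v**2 is the absolute value of M."""
    direction = complex(1.0, 1.0 / q)           # fixes Re(M)/Im(M) = Q
    return rho * v * v * direction / abs(direction)

rho, v = 1800.0, 300.0  # illustrative density (kg/m^3) and S velocity (m/s)
for q in (10.0, 100.0):
    m1 = modulus_real_part(rho, v, q)
    m2 = modulus_abs_value(rho, v, q)
    rel = abs(m1 - m2) / abs(m1)
    # The mismatch scales like 1/(2*Q**2): noticeable for Q ~ 10 in shallow
    # seismics, negligible for crustal Q ~ 100.
    print(q, rel)
```

For Q = 10 the two conventions differ by about half a percent in the modulus, which is the level at which the abstract notes differences become visible in shallow-seismic synthetic seismograms; for Q = 100 the discrepancy drops by two orders of magnitude.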
Loss of fibrinogen in zebrafish results in symptoms consistent with human hypofibrinogenemia.
Directory of Open Access Journals (Sweden)
Andy H Vo
Cessation of bleeding after trauma is a necessary evolutionary vertebrate adaptation for survival. One of the major pathways regulating the response to hemorrhage is the coagulation cascade, which ends with the cleavage of fibrinogen to form a stable clot. Patients with low or absent fibrinogen are at risk for bleeding. While much detailed information is known about fibrinogen regulation and function through studies of humans and mammalian models, bleeding risk in patients cannot always be accurately predicted purely on the basis of fibrinogen levels, suggesting an influence of modifying factors and a need for additional genetic models. The zebrafish has orthologs of the three components of fibrinogen (fga, fgb, and fgg), but it has not yet been shown that zebrafish fibrinogen functions to prevent bleeding in vivo. Here we show that zebrafish fibrinogen is incorporated into an induced thrombus, and that deficiency results in hemorrhage. An Fgb-eGFP fusion protein is incorporated into a developing thrombus induced by laser injury, but causes bleeding in adult transgenic fish. Antisense morpholino knockdown results in intracranial and intramuscular hemorrhage at 3 days post fertilization. The observed phenotypes are consistent with symptoms exhibited by patients with hypo- and afibrinogenemia. These data demonstrate that zebrafish possess highly conserved orthologs of the fibrinogen chains, which function similarly to those of mammals through the formation of a fibrin clot.
Classical and Quantum Consistency of the DGP Model
Nicolis, A; Nicolis, Alberto; Rattazzi, Riccardo
2004-01-01
We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode \pi consistently describes physics in a wide range of regimes, both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance a spherical source in asymptotically flat space. However, we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space, \pi becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed, the tower of higher-dimensional operators expected from a generic UV completion of the model limits predictivity at even larger length scales. We outline ...
Consistent constraints on the Standard Model Effective Field Theory
Berthier, Laure
2015-01-01
We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low energy precision data. We fit one hundred observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cutoff scale is assumed to be large, $\Lambda \gtrsim \, 3 \, {\rm TeV}$. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an $\rm S,T$ analysis is modified by the theory errors we include, as an illustrative example.
Pluralistic and stochastic gene regulation: examples, models and consistent theory.
Salas, Elisa N; Shu, Jiang; Cserhati, Matyas F; Weeks, Donald P; Ladunga, Istvan
2016-06-01
We present a theory of pluralistic and stochastic gene regulation. To bridge the gap between empirical studies and mathematical models, we integrate pre-existing observations with our meta-analyses of the ENCODE ChIP-Seq experiments. Earlier evidence includes fluctuations in levels, location, activity, and binding of transcription factors, variable DNA motifs, and bursts in gene expression. Stochastic regulation is also indicated by frequently subdued effects of knockout mutants of regulators, their evolutionary losses/gains and massive rewiring of regulatory sites. We report widespread pluralistic regulation in ≈800 000 tightly co-expressed pairs of diverse human genes. Typically, half of ≈50 observed regulators bind to both genes reproducibly, twice as many as in independently expressed gene pairs. We also examine the largest set of co-expressed genes, which code for cytoplasmic ribosomal proteins. Numerous regulatory complexes are significantly enriched in ribosomal genes compared to highly expressed non-ribosomal genes. We could not find any DNA-associated, strict-sense master regulator. Despite major fluctuations in transcription factor binding, our machine learning model accurately predicted transcript levels using binding sites of 20+ regulators. Our pluralistic and stochastic theory is consistent with partially random binding patterns, redundancy, stochastic regulator binding, burst-like expression, degeneracy of binding motifs and massive regulatory rewiring during evolution.
DEFF Research Database (Denmark)
Kock, Anders Bredahl
2015-01-01
the tuning parameter by Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form c/T if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned...
Deterministic Consistency: A Programming Model for Shared Memory Parallelism
Aviram, Amittai; Ford, Bryan
2009-01-01
The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...
Energy Technology Data Exchange (ETDEWEB)
Bamba, Kazuharu [Leading Graduate School Promotion Center, Ochanomizu University, 2-1-1 Ohtsuka, Bunkyo-ku, Tokyo 112-8610 (Japan); Department of Physics, Graduate School of Humanities and Sciences, Ochanomizu University, Tokyo 112-8610 (Japan); Nojiri, Shin'ichi [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Nagoya 464-8602 (Japan); Department of Physics, Nagoya University, Nagoya 464-8602 (Japan); Odintsov, Sergei D. [Consejo Superior de Investigaciones Científicas, ICE/CSIC-IEEC, Campus UAB, Facultat de Ciències, Torre C5-Parell-2a pl, E-08193 Bellaterra (Barcelona) (Spain); Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona (Spain); Tomsk State Pedagogical University, 634061 Tomsk (Russian Federation); National Research Tomsk State University, 634050 Tomsk (Russian Federation); King Abdulaziz University, Jeddah (Saudi Arabia)
2014-10-07
We reconstruct scalar field theories to realize inflation compatible with the BICEP2 result as well as the Planck. In particular, we examine the chaotic inflation model, natural (or axion) inflation model, and an inflationary model with a hyperbolic inflaton potential. We perform an explicit approach to find out a scalar field model of inflation in which any observations can be explained in principle.
Consistency of modified MLE in EV model with replicated observations
Institute of Scientific and Technical Information of China (English)
ZHANG; Sanguo
2001-01-01
[1] Kendall, M., Stuart, A., The Advanced Theory of Statistics, Vol. 2, New York: Charles Griffin, 1979.
[2] Anderson, T. W., Estimating linear statistical relationships, Ann. Statist., 1984, 12: 1.
[3] Cui Hengjian, Asymptotic normality of M-estimates in the EV model, Sys. Sci. and Math. Sci., 1997, 10(3): 225.
[4] Madansky, A., The fitting of straight lines when both variables are subject to error, JASA, 1959, 54: 173.
[5] Villegas, C., Maximum likelihood estimations of a linear functional relationship, Ann. Math. Statist., 1961, 32(4): 1048.
[6] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974.
[7] Petrov, V. V., Sums of Independent Random Variables, New York: Springer-Verlag, 1975.
[8] Lai, T. L., Robbins, H., Wei, C. Z., Strong consistency of least squares estimates in multiple regression, J. Multivariate Anal., 1979, 9: 343.
[9] Chen Xiru, On limiting properties of U-statistics and von-Mises statistics, Scientia Sinica (in Chinese), 1980, (6): 522.
Consistency Problem with Tracer Advection in the Atmospheric Model GAMIL
Institute of Scientific and Technical Information of China (English)
ZHANG Kai; WAN Hui; WANG Bin; ZHANG Meigen
2008-01-01
The radon transport test, which is a widely used test case for atmospheric transport models, is carried out to evaluate the tracer advection schemes in the Grid-Point Atmospheric Model of IAP-LASG (GAMIL). Two of the three available schemes in the model are found to be associated with significant biases in the polar regions and in the upper part of the atmosphere, which implies potentially large errors in the simulation of ozone-like tracers. Theoretical analyses show that inconsistency exists between the advection schemes and the discrete continuity equation in the dynamical core of GAMIL and consequently leads to spurious sources and sinks in the tracer transport equation. The impact of this type of inconsistency is demonstrated by idealized tests and identified as the cause of the aforementioned biases. Other potential effects of this inconsistency are also discussed. Results of this study provide some hints for choosing suitable advection schemes in the GAMIL model. At least for the polar-region-concentrated atmospheric components and the closely correlated chemical species, the Flux-Form Semi-Lagrangian advection scheme produces more reasonable simulations of the large-scale transport processes without significantly increasing the computational expense.
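The inconsistency described above can be made concrete: in a flux-form scheme, a tracer with uniform mixing ratio stays exactly uniform only if the tracer fluxes are built from the same mass fluxes used in the discrete continuity equation. A minimal 1-D upwind sketch (toy density and wind fields, not GAMIL's actual discretization):

```python
import random

# 1-D periodic domain: upwind, flux-form transport of air mass (rho)
# and a tracer (rho*q), using the SAME mass fluxes for both.
random.seed(3)
n, dt_dx = 32, 0.2
rho = [1.0 + 0.5 * random.random() for _ in range(n)]   # air density
u = [0.3 + 0.2 * random.random() for _ in range(n)]     # wind (u > 0)
q = [1.0] * n                                           # uniform mixing ratio
rho_q = [r * qi for r, qi in zip(rho, q)]

for _ in range(10):
    f = [u[i] * rho[i] for i in range(n)]     # upwind mass flux at i+1/2
    fq = [f[i] * q[i] for i in range(n)]      # tracer flux from the same f
    rho = [rho[i] - dt_dx * (f[i] - f[i - 1]) for i in range(n)]
    rho_q = [rho_q[i] - dt_dx * (fq[i] - fq[i - 1]) for i in range(n)]
    q = [rq / r for rq, r in zip(rho_q, rho)]

# Consistency with the discrete continuity equation keeps q uniform;
# building fq from a different flux would create spurious sources/sinks.
assert all(abs(qi - 1.0) < 1e-12 for qi in q)
```

Replacing `f` in the tracer update with any other flux approximation breaks this invariant, which is exactly the spurious source/sink mechanism the paper diagnoses.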
A seismologically consistent compositional model of Earth's core.
Badro, James; Côté, Alexander S; Brodholt, John P
2014-05-27
Earth's core is less dense than iron, and therefore it must contain "light elements," such as S, Si, O, or C. We use ab initio molecular dynamics to calculate the density and bulk sound velocity in liquid metal alloys at the pressure and temperature conditions of Earth's outer core. We compare the velocity and density for any composition in the (Fe-Ni, C, O, Si, S) system to radial seismological models and find a range of compositional models that fit the seismological data. We find no oxygen-free composition that fits the seismological data, and therefore our results indicate that oxygen is always required in the outer core. An oxygen-rich core is a strong indication of high-pressure and high-temperature conditions of core differentiation in a deep magma ocean with an FeO concentration (oxygen fugacity) higher than that of the present-day mantle.
Dynamic Consistency between Value and Coordination Models - Research Issues.
Bodenstaff, L.; Wombacher, Andreas; Reichert, M.U.; Meersman, R.; Tari, Z.; Herrero, P.
Inter-organizational business cooperations can be described from different viewpoints, each fulfilling a specific purpose. Since all viewpoints describe the same system, they must not contradict each other and thus must be consistent. Consistency can be checked based on common semantic concepts of the
A more consistent intraluminal rhesus monkey model of ischemic stroke
Institute of Scientific and Technical Information of China (English)
Bo Zhao; Fauzia Akbary; Shengli Li; Jing Lu; Feng Ling; Xunming Ji; Guowei Shang; Jian Chen; Xiaokun Geng; Xin Ye; Guoxun Xu; Ju Wang; Jiasheng Zheng; Hongjun Li
2014-01-01
Endovascular surgery is advantageous in experimentally induced ischemic stroke because it causes fewer cranial traumatic lesions than invasive surgery and can closely mimic the pathophysiology in stroke patients. However, the outcomes are highly variable, which limits the accuracy of evaluations of ischemic stroke studies. In this study, eight healthy adult rhesus monkeys were randomized into two groups with four monkeys in each group: middle cerebral artery occlusion at origin segment (M1) and middle cerebral artery occlusion at M2 segment. The blood flow in the middle cerebral artery was blocked completely for 2 hours using the endovascular microcoil placement technique (1 mm × 10 cm) (undetachable), to establish a model of cerebral ischemia. The microcoil was withdrawn and the middle cerebral artery blood flow was restored. A reversible middle cerebral artery occlusion model was identified by hematoxylin-eosin staining, digital subtraction angiography, magnetic resonance angiography, magnetic resonance imaging, and neurological evaluation. The results showed that the middle cerebral artery occlusion model was successfully established in eight adult healthy rhesus monkeys, and ischemic lesions were apparent in the brain tissue of rhesus monkeys at 24 hours after occlusion. The rhesus monkeys had symptoms of neurological deficits. Compared with the M1 occlusion group, the M2 occlusion group had lower infarction volume and higher neurological scores. These experimental findings indicate that reversible middle cerebral artery occlusion can be produced with the endovascular microcoil technique in rhesus monkeys. The M2 occluded model had less infarction and less neurological impairment, which offers the potential for application in the field of brain injury research.
A more consistent intraluminal rhesus monkey model of ischemic stroke.
Zhao, Bo; Shang, Guowei; Chen, Jian; Geng, Xiaokun; Ye, Xin; Xu, Guoxun; Wang, Ju; Zheng, Jiasheng; Li, Hongjun; Akbary, Fauzia; Li, Shengli; Lu, Jing; Ling, Feng; Ji, Xunming
2014-12-01
Endovascular surgery is advantageous in experimentally induced ischemic stroke because it causes fewer cranial traumatic lesions than invasive surgery and can closely mimic the pathophysiology in stroke patients. However, the outcomes are highly variable, which limits the accuracy of evaluations of ischemic stroke studies. In this study, eight healthy adult rhesus monkeys were randomized into two groups with four monkeys in each group: middle cerebral artery occlusion at origin segment (M1) and middle cerebral artery occlusion at M2 segment. The blood flow in the middle cerebral artery was blocked completely for 2 hours using the endovascular microcoil placement technique (1 mm × 10 cm) (undetachable), to establish a model of cerebral ischemia. The microcoil was withdrawn and the middle cerebral artery blood flow was restored. A reversible middle cerebral artery occlusion model was identified by hematoxylin-eosin staining, digital subtraction angiography, magnetic resonance angiography, magnetic resonance imaging, and neurological evaluation. The results showed that the middle cerebral artery occlusion model was successfully established in eight adult healthy rhesus monkeys, and ischemic lesions were apparent in the brain tissue of rhesus monkeys at 24 hours after occlusion. The rhesus monkeys had symptoms of neurological deficits. Compared with the M1 occlusion group, the M2 occlusion group had lower infarction volume and higher neurological scores. These experimental findings indicate that reversible middle cerebral artery occlusion can be produced with the endovascular microcoil technique in rhesus monkeys. The M2 occluded model had less infarction and less neurological impairment, which offers the potential for application in the field of brain injury research.
Flood damage: a model for consistent, complete and multipurpose scenarios
Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia
2016-12-01
Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the base to address the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. This way post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
Directory of Open Access Journals (Sweden)
Kazuharu Bamba
2014-10-01
We reconstruct scalar field theories to realize inflation compatible with the BICEP2 result as well as the Planck. In particular, we examine the chaotic inflation model, natural (or axion) inflation model, and an inflationary model with a hyperbolic inflaton potential. We perform an explicit approach to find out a scalar field model of inflation in which any observations can be explained in principle.
Self-consistent modelling of resonant tunnelling structures
DEFF Research Database (Denmark)
Fiig, T.; Jauho, A.P.
1992-01-01
We report a comprehensive study of the effects of self-consistency on the I-V characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated ... applied voltages and carrier densities at the emitter-barrier interface. We include the two-dimensional accumulation layer charge and the quantum well charge in our self-consistent scheme. We discuss the evaluation of the current contribution originating from the two-dimensional accumulation layer charges...
Guinot, Vincent
2017-09-01
The Integral Porosity and Dual Integral Porosity two-dimensional shallow water models have been proposed recently as efficient upscaled models for urban floods. Very little is known so far about their consistency and wave propagation properties. Simple numerical experiments show that both models are unusually sensitive to the computational grid. In the present paper, a two-dimensional consistency and characteristic analysis is carried out for these two models. The following results are obtained: (i) the models are almost insensitive to grid design when the porosity is isotropic, (ii) anisotropic porosity fields induce an artificial polarization of the mass/momentum fluxes along preferential directions when triangular meshes are used and (iii) extra first-order derivatives appear in the governing equations when regular, quadrangular cells are used. The hyperbolic system is thus mesh-dependent, and with it the wave propagation properties of the model solutions. Criteria are derived to make the solution less mesh-dependent, but it is not certain that these criteria can be satisfied at all computational points when real-world situations are dealt with.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index ... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...
Is the island universe model consistent with observations?
Piao, Yun-Song
2005-01-01
We study the island universe model, in which initially the universe is in a cosmological constant sea, then the local quantum fluctuations violating the null energy condition create the islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model is regarded as an alternative scenario of the origin of the observable universe.
Consistent Evolution of Software Artifacts and Non-Functional Models
2014-11-14
Ruscio D., Pierantonio A., Arcelli D., Eramo R., Trubiani C., Tucci M. Dipartimento di Ingegneria e Scienze dell’Informazione e Matematica ... Models (SRMs), and (ii) antipattern solutions as Target Role Models (TRMs). Hence, SRM-TRM pairs represent new instruments in the hands of developers to ... helps to identify the antipatterns that more heavily contribute to the violation of performance requirements [10], and (ii) another one aimed at
Towards a self-consistent dynamical nuclear model
Roca-Maza, X.; Niu, Y. F.; Colò, G.; Bortignon, P. F.
2017-04-01
Density functional theory (DFT) is a powerful and accurate tool, exploited in nuclear physics to investigate the ground-state and some of the collective properties of nuclei along the whole nuclear chart. Models based on DFT are not, however, suitable for the description of single-particle dynamics in nuclei. Following the field theoretical approach by A Bohr and B R Mottelson to describe nuclear interactions between single-particle and vibrational degrees of freedom, we have taken important steps towards the building of a microscopic dynamic nuclear model. In connection with this, one important issue that needs to be better understood is the renormalization of the effective interaction in the particle-vibration approach. One possible way to renormalize the interaction is by the so-called subtraction method. In this contribution, we will implement the subtraction method in our model for the first time and study its consequences.
Gas Clumping in Self-Consistent Reionisation Models
Finlator, K; Özel, F; Davé, R
2012-01-01
We use a suite of cosmological hydrodynamic simulations including a self-consistent treatment for inhomogeneous reionisation to study the impact of galactic outflows and photoionisation heating on the volume-averaged recombination rate of the intergalactic medium (IGM). By incorporating an evolving ionising escape fraction and a treatment for self-shielding within Lyman limit systems, we have run the first simulations of "photon-starved" reionisation scenarios that simultaneously reproduce observations of the abundance of galaxies, the optical depth to electron scattering of cosmic microwave background photons $\\tau$, and the effective optical depth to Lyman-$\\alpha$ absorption at z=5. We confirm that an ionising background reduces the clumping factor C by more than 50% by smoothing moderately overdense ($\\Delta$=1--100) regions. Meanwhile, outflows increase clumping only modestly. The clumping factor of ionised gas is much lower than the overall baryonic clumping factor because the most overdense gas is self-shield...
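The clumping factor quoted above is C = ⟨Δ²⟩/⟨Δ⟩² over the gas cells considered. A toy illustration (hypothetical lognormal overdensity field and an assumed self-shielding cutoff at Δ = 100, not the paper's simulation data):

```python
import random

random.seed(1)

def clumping_factor(deltas):
    """C = <Delta^2> / <Delta>^2 over the selected cells."""
    n = len(deltas)
    mean = sum(deltas) / n
    mean_sq = sum(d * d for d in deltas) / n
    return mean_sq / (mean * mean)

# Toy baryon overdensity field Delta = rho/<rho> (lognormal stand-in).
field = [random.lognormvariate(0.0, 1.0) for _ in range(100000)]

c_all = clumping_factor(field)                             # all gas
c_ion = clumping_factor([d for d in field if d <= 100.0])  # 'ionised' only
# Excluding the most overdense (self-shielded) cells lowers C,
# mirroring why the ionised-gas clumping factor is the smaller one.
assert c_ion <= c_all
```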
Modelling plasticity of unsaturated soils in a thermodynamically consistent framework
Coussy, O
2010-01-01
Constitutive equations of unsaturated soils are often derived in a thermodynamically consistent framework through the use of a unique 'effective' interstitial pressure. The latter is naturally chosen as the space-averaged interstitial pressure. However, experimental observations have revealed that two stress state variables are needed to describe the stress-strain-strength behaviour of unsaturated soils. The thermodynamics analysis presented here shows that the most general approach to the behaviour of unsaturated soils actually requires three stress state variables: the suction, which is required to describe the retention properties of the soil, and two effective stresses, which are required to describe the soil deformation at water saturation held constant. Actually, it is shown that a simple assumption related to internal deformation leads to the need of a unique effective stress to formulate the stress-strain constitutive equation describing the soil deformation. An elastoplastic framework is then presented ...
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre M.; Parks, Michael L.
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two and three dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Self-consistent Models of Strong Interaction with Chiral Symmetry
Nambu, Y.; Pascual, P.
1963-04-01
Some simple models of (renormalizable) meson-nucleon interaction are examined in which the nucleon mass is entirely due to interaction and the chiral (γ₅) symmetry is "broken" to become a hidden symmetry. It is found that such a scheme is possible provided that a vector meson is introduced as an elementary field. (auth)
Consistency problems for Heath-Jarrow-Morton interest rate models
Filipović, Damir
2001-01-01
The book is written for a reader with knowledge in mathematical finance (in particular interest rate theory) and elementary stochastic analysis, such as provided by Revuz and Yor (Continuous Martingales and Brownian Motion, Springer 1991). It gives a short introduction both to interest rate theory and to stochastic equations in infinite dimension. The main topic is the Heath-Jarrow-Morton (HJM) methodology for the modelling of interest rates. Experts in SDE in infinite dimension with interest in applications will find here the rigorous derivation of the popular "Musiela equation" (referred to in the book as HJMM equation). The convenient interpretation of the classical HJM set-up (with all the no-arbitrage considerations) within the semigroup framework of Da Prato and Zabczyk (Stochastic Equations in Infinite Dimensions) is provided. One of the principal objectives of the author is the characterization of finite-dimensional invariant manifolds, an issue that turns out to be vital for applications. Finally, ge...
Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems
Garg, Vikram V
2014-09-27
Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.
Michaels, Patrick J; Christy, John R; Herman, Chad S; Liljegren, Lucia M; Annan, James D
2013-01-01
Assessing the consistency between short-term global temperature trends in observations and climate model projections is a challenging problem. While climate models capture many processes governing short-term climate fluctuations, they are not expected to simulate the specific timing of these somewhat random phenomena - the occurrence of which may impact the realized trend. Therefore, to assess model performance, we develop distributions of projected temperature trends from a collection of climate models running the IPCC A1B emissions scenario. We evaluate where observed trends of length 5 to 15 years fall within the distribution of model trends of the same length. We find that current trends lie near the lower limits of the model distributions, with cumulative probability-of-occurrence values typically between 5 percent and 20 percent, and probabilities below 5 percent not uncommon. Our results indicate cause for concern regarding the consistency between climate model projections and observed climate behavior...
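The probability-of-occurrence values above amount to asking where the observed trend sits in the empirical distribution of model-simulated trends. A minimal sketch (hypothetical trend values, not the study's data):

```python
def cumulative_probability(observed, model_trends):
    """Empirical cumulative probability: fraction of model-simulated
    trends at or below the observed trend."""
    below = sum(1 for t in model_trends if t <= observed)
    return below / len(model_trends)

# Hypothetical 10-year temperature trends (K/decade) from an ensemble.
model_trends = [0.28, 0.12, 0.35, 0.22, 0.18, 0.30, 0.25, 0.15, 0.40, 0.20]
observed = 0.13

p = cumulative_probability(observed, model_trends)
# A value near the lower tail (small p) signals tension between the
# model distribution and the observed trend, as in the study above.
print(f"cumulative probability of occurrence: {p:.2f}")  # → 0.10
```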
Aggregated wind power plant models consisting of IEC wind turbine models
DEFF Research Database (Denmark)
Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela
2015-01-01
turbines, parameters and models to represent each individual wind turbine in detail makes it necessary to develop aggregated wind power plant models considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable...
Kukush, A.; Markovsky, I.; Van Huffel, S.
2002-01-01
Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.
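The bias that a corrected contrast function removes is easiest to see in the simplest errors-in-variables setting: with known measurement-error variance σ², the naive least-squares slope sxy/sxx is attenuated toward zero, while sxy/(sxx − σ²) is consistent. A 1-D sketch under these assumptions (synthetic data; the paper's actual estimator targets the rank-deficient fundamental matrix, not a scalar slope):

```python
import random

random.seed(0)
beta_true, sigma2 = 2.0, 0.25   # true slope; known error variance of x
n = 100000
xi = [random.gauss(0.0, 1.0) for _ in range(n)]          # latent regressor
x = [v + random.gauss(0.0, sigma2 ** 0.5) for v in xi]   # observed with error
y = [beta_true * v + random.gauss(0.0, 0.1) for v in xi]

sxx = sum(v * v for v in x) / n
sxy = sum(a * b for a, b in zip(x, y)) / n

beta_naive = sxy / sxx                  # attenuated toward zero
beta_corrected = sxy / (sxx - sigma2)   # corrects the contrast function

print(round(beta_naive, 2), round(beta_corrected, 2))
```

With these parameters the naive slope concentrates near beta_true/(1 + σ²) = 1.6, while the corrected estimator concentrates near the true value 2.0.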
Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications
Directory of Open Access Journals (Sweden)
Xin Zhang
2008-01-01
A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs) using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
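A common predictive contract mechanism of the kind modelled above is dead reckoning: the receiver extrapolates entity state from the last transmitted update, and the sender transmits a new update only when the extrapolation error would exceed a threshold. A minimal 1-D sketch (hypothetical trajectory and threshold values; the trade-off between updates sent and fidelity loss is the quantity the paper's information model measures):

```python
def simulate(positions, threshold):
    """Dead-reckoning sketch: the receiver extrapolates linearly from the
    last transmitted (position, velocity); the sender transmits a fresh
    update only when the extrapolation error exceeds `threshold`.
    Returns (updates sent, mean absolute position error at the receiver)."""
    sent = 1                       # initial state is always transmitted
    base_pos, base_vel, base_t = positions[0], 0.0, 0
    errors = []
    for t, true_pos in enumerate(positions):
        predicted = base_pos + base_vel * (t - base_t)
        err = abs(predicted - true_pos)
        if err > threshold:
            base_vel = true_pos - positions[t - 1]   # one-step velocity
            base_pos, base_t = true_pos, t
            sent += 1
            err = 0.0              # receiver snaps to the new update
        errors.append(err)
    return sent, sum(errors) / len(errors)

# Hypothetical accelerating 1-D trajectory of a game entity.
path = [0.05 * t * t for t in range(50)]
for thr in (0.5, 2.0):
    sent, loss = simulate(path, thr)
    print(f"threshold={thr}: updates={sent}, mean error={loss:.3f}")
```

A looser threshold sends fewer updates at the cost of higher state fidelity loss, which is exactly the compression-style trade-off the paper formalises.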
Consistent phase-change modeling for CO2-based heat mining operation
DEFF Research Database (Denmark)
Singh, Ashok Kumar; Veje, Christian
2017-01-01
–gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state were based on the volume translated Peng–Robinson equation of state and results verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper...
STRONG CONSISTENCY OF M ESTIMATOR IN LINEAR MODEL FOR NEGATIVELY ASSOCIATED SAMPLES
Institute of Scientific and Technical Information of China (English)
Qunying WU
2006-01-01
This paper discusses the strong consistency of the M estimator of the regression parameter in a linear model for negatively associated (NA) samples. As a result, the author extends Theorems 1 and 2 of Shanchao Yang (2002) to NA errors without imposing any extra condition.
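For context, an M estimator replaces the least-squares criterion with a robust loss, and a standard computational route is iteratively reweighted least squares (IRLS). A minimal location-only sketch with the Huber weight function (illustrative data, unrelated to the NA-sample theory above):

```python
def huber_location(xs, c=1.345, iters=50):
    """Location M-estimate with the Huber weight w(r) = min(1, c/|r|),
    computed by iteratively reweighted averaging (IRLS)."""
    mu = sum(xs) / len(xs)           # start from the sample mean
    for _ in range(iters):
        w = [1.0 if abs(x - mu) <= c else c / abs(x - mu) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [1.0, 1.2, 0.8, 1.1, 0.9, 50.0]   # one gross outlier
# The M estimate stays near 1, unlike the sample mean (about 9.2).
print(round(huber_location(data), 2))
```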
Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code
Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.
2017-02-01
Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code in which theory-based models are used, for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport and multimode (MMM95) anomalous transport model is used to compute a core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, and a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than their original designs, i.e. 0.49 (China), 0.66 (Europe), and 0.58 (India) as fractions of the design values. Furthermore, in relation to sensitivity, it is found that increasing values of the auxiliary heating power and the electron line average density from their design values yield an enhancement of fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.
Directory of Open Access Journals (Sweden)
Yamashiro T
2015-02-01
Tsuneo Yamashiro,1 Tetsuhiro Miyara,1 Osamu Honda,2 Noriyuki Tomiyama,2 Yoshiharu Ohno,3 Satoshi Noma,4 Sadayuki Murayama1 On behalf of the ACTIve Study Group 1Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, Nishihara, Okinawa, Japan; 2Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan; 3Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan; 4Department of Radiology, Tenri Hospital, Tenri, Nara, Japan Purpose: To assess the advantages of iterative reconstruction for quantitative computed tomography (CT) analysis of pulmonary emphysema. Materials and methods: Twenty-two patients with pulmonary emphysema underwent chest CT imaging using identical scanners with three different tube currents: 240, 120, and 60 mA. Scan data were converted to CT images using Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) and a conventional filtered-back projection mode. Thus, six scans with and without AIDR3D were generated per patient. All other scanning and reconstruction settings were fixed. The percent low attenuation area (LAA%; < -950 Hounsfield units) and the lung density 15th percentile were automatically measured using a commercial workstation. Comparisons of LAA% and 15th percentile results between scans with and without AIDR3D were made by Wilcoxon signed-rank tests. Associations between body weight and measurement errors among these scans were evaluated by Spearman rank correlation analysis. Results: Overall, scan series without AIDR3D had higher LAA% and lower 15th percentile values than those with AIDR3D at each tube current (P<0.0001). For scan series without AIDR3D, lower tube currents resulted in higher LAA% values and lower 15th percentiles. The extent of emphysema was significantly different between each pair among scans when not using AIDR3D (LAA%, P<0.0001; 15th percentile, P<0.01), but was not
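The two emphysema indices above are simple functionals of the voxel histogram: LAA% is the fraction of lung voxels below −950 HU, and the 15th percentile is read off the density distribution. A minimal sketch (hypothetical voxel values; a nearest-rank percentile is assumed here, whereas commercial workstations may interpolate):

```python
import math

def laa_percent(hu_values, threshold=-950):
    """LAA%: percent of lung voxels below the emphysema threshold (HU)."""
    low = sum(1 for v in hu_values if v < threshold)
    return 100.0 * low / len(hu_values)

def percentile_15(hu_values):
    """15th percentile of the lung density histogram (nearest-rank rule)."""
    ordered = sorted(hu_values)
    rank = max(0, math.ceil(0.15 * len(ordered)) - 1)
    return ordered[rank]

# Hypothetical voxel values (HU) from a single scan.
voxels = [-980, -960, -940, -900, -870, -952, -830, -910, -990, -860]
print(laa_percent(voxels))    # → 40.0 (4 of 10 voxels below -950 HU)
print(percentile_15(voxels))  # → -980
```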
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
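The BIC-based weighting behind Bayesian Model Averaging can be sketched in a few lines. This is a generic illustration of the BMA idea (weights proportional to exp(-BIC/2), then a weighted average of per-model estimates), not the paper's specific prior or estimator:

```python
import math

def bic_model_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    weight_i proportional to exp(-0.5 * (BIC_i - BIC_min))."""
    bmin = min(bics)
    raw = [math.exp(-0.5 * (b - bmin)) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_estimate(estimates, bics):
    """Bayesian-model-averaged point estimate from per-model estimates."""
    return sum(w * e for w, e in zip(bic_model_weights(bics), estimates))
```

Models with smaller BIC dominate the average, which is what lowers the RMSE of the averaged estimator when no single model is clearly best.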
Pons, J M; Pons, Josep M.; Talavera, Pere
2004-01-01
We clarify the existence of two different types of truncations of the field content in a theory, the consistency of each type being achieved by different means. A proof is given of the conditions to have a consistent truncation in the case of dimensional reductions induced by independent Killing vectors. We explain in what sense the tracelessness condition found by Scherk and Schwarz is not only a necessary condition but also a {\it sufficient} one for a consistent truncation. The reduction of the gauge group is fully performed, showing the existence of a sector of rigid symmetries. We show that truncations originated by the introduction of constraints will in general be inconsistent, but this fact does not prevent the possibility of correct upliftings of solutions in some cases. The presence of constraints has dynamical consequences that turn out to play a fundamental role in the correctness of the uplifting procedure.
The fundamental solution for a consistent complex model of the shallow shell equations
Directory of Open Access Journals (Sweden)
Matthew P. Coleman
1999-09-01
Full Text Available The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for the particular cases of the shallow spherical and circular cylindrical shells, and the results of the latter are seen to be in agreement with results appearing elsewhere in the literature.
Tests and applications of self-consistent cranking in the interacting boson model
Kuyucak, S; Kuyucak, Serdar; Sugita, Michiaki
1999-01-01
The self-consistent cranking method is tested by comparing the cranking calculations in the interacting boson model with the exact results obtained from the SU(3) and O(6) dynamical symmetries and from numerical diagonalization. The method is used to study the spin dependence of shape variables in the $sd$ and $sdg$ boson models. When realistic sets of parameters are used, both models lead to similar results: axial shape is retained with increasing cranking frequency while fluctuations in the shape variable $\\gamma$ are slightly reduced.
Self-consistent core-pedestal transport simulations with neural network accelerated models
Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.
2017-08-01
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
High-Dose-Rate Prostate Brachytherapy Consistently Results in High Quality Dosimetry
Energy Technology Data Exchange (ETDEWEB)
White, Evan C.; Kamrava, Mitchell R.; Demarco, John; Park, Sang-June; Wang, Pin-Chieh; Kayode, Oluwatosin; Steinberg, Michael L. [California Endocurietherapy at UCLA, Department of Radiation Oncology, David Geffen School of Medicine of University of California at Los Angeles, Los Angeles, California (United States); Demanes, D. Jeffrey, E-mail: jdemanes@mednet.ucla.edu [California Endocurietherapy at UCLA, Department of Radiation Oncology, David Geffen School of Medicine of University of California at Los Angeles, Los Angeles, California (United States)
2013-02-01
Purpose: We performed a dosimetry analysis to determine how well the goals for clinical target volume coverage, dose homogeneity, and normal tissue dose constraints were achieved with high-dose-rate (HDR) prostate brachytherapy. Methods and Materials: Cumulative dose-volume histograms for 208 consecutively treated HDR prostate brachytherapy implants were analyzed. Planning was based on ultrasound-guided catheter insertion and postoperative CT imaging; the contoured clinical target volume (CTV) was the prostate, a small margin, and the proximal seminal vesicles. Dosimetric parameters analyzed for the CTV were D90, V90, V100, V150, and V200. Doses to the urethra, bladder, bladder balloon, and rectum were evaluated by the dose to 0.1 cm³, 1 cm³, and 2 cm³ of each organ, expressed as a percentage of the prescribed dose. Analysis was stratified according to prostate size. Results: The mean prostate ultrasound volume was 38.7 ± 13.4 cm³ (range: 11.7-108.6 cm³). The mean CTV was 75.1 ± 20.6 cm³ (range: 33.4-156.5 cm³). The mean D90 was 109.2% ± 2.6% (range: 102.3%-118.4%). Ninety-three percent of observed D90 values were between 105% and 115%. The mean V90, V100, V150, and V200 were 99.9% ± 0.05%, 99.5% ± 0.8%, 25.4% ± 4.2%, and 7.8% ± 1.4%. The mean doses to 0.1 cm³, 1 cm³, and 2 cm³ for organs at risk were: urethra: 107.3% ± 3.0%, 101.1% ± 14.6%, and 47.9% ± 34.8%; bladder wall: 79.5% ± 5.1%, 69.8% ± 4.9%, and 64.3% ± 5.0%; bladder balloon: 70.3% ± 6.8%, 59.1% ± 6.6%, and 52.3% ± 6.2%; rectum: 76.3% ± 2.5%, 70.2% ± 3.3%, and 66.3% ± 3.8%. There was no significant difference between D90 and V100 when stratified by prostate size. Conclusions: HDR brachytherapy allows the physician to consistently achieve complete prostate target coverage and maintain normal tissue dose constraints for organs at risk over a wide range of target volumes.
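The DVH summary parameters reported above have standard definitions that are easy to compute once voxel doses are known. A hypothetical simplification assuming equal-volume voxels (real DVH software works on the full dose grid and structure contours):

```python
import numpy as np

def dvh_metrics(dose, prescription):
    """D90 (dose covering the hottest 90% of the volume, as % of
    prescription) and Vp (percent of volume receiving >= p% of the
    prescription) from a flat array of equal-volume voxel doses."""
    d = np.sort(np.asarray(dose, dtype=float))[::-1]  # descending
    n = len(d)
    d90 = 100.0 * d[int(np.ceil(0.9 * n)) - 1] / prescription
    v = {f"V{p}": 100.0 * np.mean(d >= p / 100.0 * prescription)
         for p in (100, 150, 200)}
    return d90, v
```

For example, with ten voxels and a prescription of 100, a voxel set where the ninth-hottest voxel receives 90 yields D90 = 90%.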
Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model
Directory of Open Access Journals (Sweden)
Lidong Zhang
2014-01-01
Full Text Available We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest in a risk-free asset and a risky asset, and that short-selling and borrowing money are allowed. Due to the lack of the iterated-expectation property, the Bellman Optimization Principle does not hold, so we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that they have different advantages: the former maximizes the value function at the initial time t=0, while the latter is time-consistent over the whole time horizon. Finally, numerical analysis is presented for our results.
A thermodynamically consistent phase-field model for two-phase flows with thermocapillary effects
Guo, Zhenlin
2014-01-01
In this paper, we develop a phase-field model for a binary incompressible fluid with thermocapillary effects, which allows different properties (densities, viscosities and heat conductivities) for each component while maintaining thermodynamic consistency. The governing equations of the model, including the Navier-Stokes equations, Cahn-Hilliard equations and energy balance equation, are derived together within a thermodynamic framework based on entropy generation, which guarantees the thermodynamic consistency. A sharp-interface limit analysis is carried out to show that the interfacial conditions of the classical sharp-interface models can be recovered from our phase-field model. Moreover, some numerical examples, including thermocapillary migration of a bubble and thermocapillary convection in a two-layer fluid system, are computed by using a continuous finite element method. The results are compared to existing analytical solutions and theoretical predictions as validations for our mod...
Nonparametric test of consistency between cosmological models and multiband CMB measurements
Aghamousa, Amir
2015-01-01
We present a novel, nonparametric approach to test the consistency of cosmological models with multiband CMB data. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare between the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit $\Lambda$CDM model at $95\% (\sim 2\sigma)$ confidence distance from the center of the nonparametri...
A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems
Directory of Open Access Journals (Sweden)
R. Dimitri
2014-07-01
Full Text Available Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated, and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted here, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.
Lu, Wei; Song, Joo Hyun; Christensen, Gary E.; Parikh, Parag J.; Bradley, Jeffrey D.; Low, Daniel A.
2006-03-01
Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse-consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations, providing more accurate correspondence between two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivery.
A consistent modelling methodology for secondary settling tanks in wastewater treatment.
Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar
2011-03-01
The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This particularly becomes of interest as the state-of-the-art practice is moving towards plant-wide modelling. Then all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. Time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models.
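The process-model/simulation-model distinction above can be illustrated on the settler's 1-D conservation law ∂c/∂t + ∂f(c)/∂z = 0 with a conservative explicit update. This is a toy sketch with an assumed Vesilind-type settling flux and impermeable top/bottom boundaries, not the CMM's actual numerical scheme:

```python
import numpy as np

def settle_step(c, dz, dt, flux):
    """One explicit conservative step for dc/dt + d f(c)/dz = 0 on a
    column of equal cells (z increasing downward). Interface fluxes
    are taken upwind from the cell above, which is adequate for the
    monotone part of the settling flux; walls carry zero flux."""
    f = flux(np.asarray(c, dtype=float))
    fint = np.concatenate(([0.0], f[:-1], [0.0]))  # fluxes at the n+1 interfaces
    return c - dt / dz * (fint[1:] - fint[:-1])    # conservative update
```

Because the interface fluxes telescope and vanish at the walls, total mass is conserved exactly, one of the "fundamental mathematical and physical properties" the authors argue a simulation model must satisfy.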
Moreno Chaparro, Nicolas
2015-06-30
We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria such as the characteristic size of the chain, compressibility, density, and temperature are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥20 times smaller than the original system). We show that our methodology has potential applications modeling systems of high molecular weight molecules at large scales, such as diblock copolymers and DNA.
A new k-epsilon model consistent with Monin-Obukhov similarity theory
DEFF Research Database (Denmark)
van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.
2016-01-01
A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
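In the neutral limit, MOST-consistent inlet profiles for a k-ε model take a standard log-law form. A sketch of those textbook profiles (the κ and C_μ values are typical atmospheric choices, not necessarily those of the paper):

```python
import math

KAPPA = 0.4  # von Karman constant (typical value)

def most_neutral_inlet(z, z0, u_star, c_mu=0.03):
    """Neutral-stratification inlet profiles at height z above a surface
    with roughness length z0 and friction velocity u_star:
      u(z)   = (u*/kappa) * ln(z/z0)
      k(z)   = u*^2 / sqrt(C_mu)        (height-independent)
      eps(z) = u*^3 / (kappa * z)
    """
    u = u_star / KAPPA * math.log(z / z0)
    k = u_star**2 / math.sqrt(c_mu)
    eps = u_star**3 / (KAPPA * z)
    return u, k, eps
```

A k-ε model is "MOST-consistent" when profiles like these remain unchanged as they are convected downstream through an empty fetch.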
A simplified stock-flow consistent post-Keynesian growth model
dos Santos, Claudio H.; Zezza, Gennaro
2005-01-01
A Simplified Stock-Flow Consistent Post-Keynesian Growth Model Claudio H. Dos Santos* and Gennaro Zezza** Abstract: Despite being arguably the most rigorous form of structuralist/post-Keynesian macroeconomics, stock-flow consistent models are quite often complex and difficult to deal with. This paper presents a model that, despite retaining the methodological advantages of the stock-flow consistent method, is intuitive enough to be taught at an undergraduate level. Moreover, the model can eas...
Hess, Julian; Wang, Yongqi
2016-11-01
A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within Project Number WA 2610/3-1.
Towards a self-consistent halo model for the nonlinear large-scale structure
Schmidt, Fabian
2015-01-01
The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: $(i)$ they do not enforce the stress-energy conservation of matter; $(ii)$ they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model ("EHM") that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed, and results of perturbation theory and the effective field theory can in principle be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written he...
Amruth, B R; R., Amruth B.; Patwardhan, Ajay
2006-01-01
Cosmological inflation models modified to include recent cosmological observations have been an active area of research since the WMAP 3 results, which gave us high-precision information about the composition of dark matter, normal matter and dark energy and the anisotropy at the 300,000-year horizon. We work on the inflation models of Guth and Linde and modify them by introducing a doublet scalar field to give normal matter particles and their supersymmetric partners, which result in the normal and dark matter of our universe. We include the cosmological constant term, as the vacuum expectation value of the stress-energy tensor, as the dark energy. We calibrate the parameters of our model using recent observations of density fluctuations. We develop a model which consistently fits the recent observations.
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
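The SRD procedure itself is simple to sketch: rank each method's values and the reference's values, then sum the absolute rank differences. A toy, tie-free version of the SRD idea (real SRD analyses add a permutation-based validation of the resulting distances, omitted here):

```python
def sum_of_ranking_differences(values, reference):
    """SRD of one method's values against a reference ranking.
    Smaller SRD means the method ranks the objects more like the
    reference; assumes no ties among the values."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return sum(abs(a - b) for a, b in zip(ranks(values), ranks(reference)))
```

An identical ordering gives SRD = 0; a fully reversed ordering gives the maximum attainable SRD for that number of objects.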
Self-consistent tight-binding atomic-relaxation model of titanium dioxide
Energy Technology Data Exchange (ETDEWEB)
Schelling, P.K.; Yu, N.; Halley, J.W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States)
1998-07-01
We report a self-consistent tight-binding atomic-relaxation model for titanium dioxide. We fit the parameters of the model to first-principles electronic structure calculations of the band structure and energy as a function of lattice parameters in bulk rutile. We report the method and results for the surface structures and energies of relaxed (110), (100), and (001) surfaces of rutile TiO₂, as well as work functions for these surfaces. Good agreement with first-principles calculations and experiments, where available, is found for these surfaces. We find significant charge transfer (increased covalency) at the surfaces. © 1998 The American Physical Society.
Balawender, K.; Jaworski, A.; Kuszewski, H.; Lejda, K.; Ustrzycki, A.
2016-09-01
Measurement of the pollutants contained in automobile combustion engine exhaust gases is of primary importance in view of their harmful impact on the natural environment. This paper presents results of tests aimed at determining exhaust gas pollutant emissions from a passenger car engine obtained under repeatable conditions on a chassis dynamometer. The test set-up was installed in a controlled climate chamber in which the temperature could be maintained within the range from -20°C to +30°C. The analysis covered emissions of such components as CO, CO2, NOx, CH4, THC, and NMHC. The purpose of the study was to assess the repeatability of results obtained in a number of tests performed as per the NEDC test plan. The study is an introductory stage of a wider research project concerning the effect of climate conditions and fuel type on the emission of pollutants contained in exhaust gases generated by automotive vehicles.
Self-Consistent Ring Current/Electromagnetic Ion Cyclotron Waves Modeling
Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.
2006-01-01
The self-consistent treatment of ring current (RC) ion dynamics and EMIC waves, which are thought to exert important influences on the ion dynamical evolution, is an important missing element in our understanding of the storm- and recovery-time ring current evolution. For example, EMIC waves cause RC decay on a time scale of about one hour or less during the main phase of storms. The oblique EMIC waves damp due to Landau resonance with the thermal plasmaspheric electrons, and subsequent transport of the dissipating wave energy into the ionosphere below causes an ionosphere temperature enhancement. Under certain conditions, relativistic electrons, with energies ≥1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves developed by Khazanov et al. [2002, 2003] to include heavy ions and propagation effects of EMIC waves in the global dynamics of self-consistent RC-EMIC wave coupling. We will present results of our newly developed model at the meeting, focusing mainly on the dynamics of EMIC waves and a comparison of these results with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.
Helm, Benjamin M; Langley, Katherine; Spangler, Brooke; Vergano, Samantha
2014-08-01
Single nucleotide polymorphism microarrays have the ability to reveal parental consanguinity which may or may not be known to healthcare providers. Consanguinity can have significant implications for the health of patients and for individual and family psychosocial well-being. These results often present ethical and legal dilemmas that can have important ramifications. Unexpected consanguinity can be confounding to healthcare professionals who may be unprepared to handle these results or to communicate them to families or other appropriate representatives. There are few published accounts of experiences with consanguinity and SNP arrays. In this paper we discuss three cases where molecular evidence of parental incest was identified by SNP microarray. We hope to further highlight consanguinity as a potential incidental finding, how the cases were handled by the clinical team, and what resources were found to be most helpful. This paper aims to contribute further to professional discourse on incidental findings with genomic technology and how they were addressed clinically. These experiences may provide some guidance on how others can prepare for these findings and help improve practice. As genetic and genomic testing is utilized more by non-genetics providers, we also hope to inform about the importance of engaging with geneticists and genetic counselors when addressing these findings.
Detecting consistent patterns of directional adaptation using differential selection codon models.
Parto, Sahar; Lartillot, Nicolas
2017-06-23
Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
SALT Spectropolarimetry and Self-Consistent SED and Polarization Modeling of Blazars
Böttcher, Markus; van Soelen, Brian; Britto, Richard; Buckley, David; Marais, Johannes; Schutte, Hester
2017-09-01
We report on recent results from a target-of-opportunity program to obtain spectropolarimetry observations with the Southern African Large Telescope (SALT) on flaring gamma-ray blazars. SALT spectropolarimetry and contemporaneous multi-wavelength spectral energy distribution (SED) data are being modelled self-consistently with a leptonic single-zone model. Such modeling provides an accurate estimate of the degree of order of the magnetic field in the emission region and the thermal contributions (from the host galaxy and the accretion disk) to the SED, thus putting strong constraints on the physical parameters of the gamma-ray emitting region. For the specific case of the $\\gamma$-ray blazar 4C+01.02, we demonstrate that the combined SED and spectropolarimetry modeling constrains the mass of the central black hole in this blazar to $M_{\\rm BH} \\sim 10^9 \\, M_{\\odot}$.
Fishkind, Donniell E; Tang, Minh; Vogelstein, Joshua T; Priebe, Carey E
2012-01-01
A stochastic block model consists of a random partition of n vertices into blocks 1,2,...,K for which, conditioned on the partition, every pair of vertices has probability of adjacency entirely determined by the block membership of the two vertices. (The model parameters are K, the distribution of the random partition, and a communication probability matrix M in [0,1]^(K x K) listing the adjacency probabilities associated with all pairs of blocks.) Suppose a realization of the n x n vertex adjacency matrix is observed, but the underlying partition of the vertices into blocks is not observed; the main inferential task is to correctly partition the vertices into the blocks with only a negligible number of vertices misassigned. For this inferential task, Rohe et al. (2011) prove the consistency of spectral partitioning applied to the normalized Laplacian, and Sussman et al. (2011) extend this to prove consistency of spectral partitioning directly on the adjacency matrix; both procedures assume that K and rankM a...
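The adjacency spectral partitioning whose consistency these papers study can be sketched as: embed each vertex via the leading eigenpairs of the adjacency matrix, then cluster the embedded rows. A minimal illustration with a tiny farthest-point-initialized k-means loop, not the authors' exact estimator:

```python
import numpy as np

def spectral_partition(adj, num_blocks, iters=50):
    """Embed vertices by the top-|eigenvalue| eigenvectors of the
    adjacency matrix (scaled by sqrt|eigenvalue|), then cluster the
    rows with Lloyd's k-means using farthest-point initialization."""
    adj = np.asarray(adj, dtype=float)
    vals, vecs = np.linalg.eigh(adj)
    top = np.argsort(np.abs(vals))[::-1][:num_blocks]
    emb = vecs[:, top] * np.sqrt(np.abs(vals[top]))   # spectral embedding
    centers = [emb[0]]                                # farthest-point init
    for _ in range(1, num_blocks):
        d = np.min([((emb - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(emb[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):                            # Lloyd iterations
        labels = np.argmin(((emb[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(num_blocks):
            if np.any(labels == k):
                centers[k] = emb[labels == k].mean(axis=0)
    return labels
```

On a graph with two clearly separated dense blocks, the recovered labels coincide with the true partition up to relabeling, which is the kind of exact-recovery behavior the consistency results formalize.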
A Fully Nonlinear, Dynamically Consistent Numerical Model for Ship Maneuvering in a Seaway
Directory of Open Access Journals (Sweden)
Ray-Qing Lin
2011-01-01
Full Text Available This is the continuation of our research on the development of a fully nonlinear, dynamically consistent, numerical ship motion model (DiSSEL). In this paper we report our results on modeling ship maneuvering in an arbitrary seaway, which is one of the most challenging and important problems in seakeeping. In our modeling, we developed an adaptive algorithm to maintain dynamical balances numerically as the encounter frequencies (the wave frequencies as measured on the ship) vary with the ship's maneuvering state. The key to this new algorithm is to evaluate the encounter frequency variation differently in the physical domain and in the frequency domain, thus effectively eliminating possible numerical dynamical imbalances. We have tested this algorithm with several well-documented maneuvering experiments, and our results agree very well with experimental data. In particular, the numerical time series of roll and pitch motions and the numerical ship tracks (i.e., surge, sway, and yaw) are nearly identical to those of the experiments.
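The encounter frequency at the heart of the algorithm above is a standard seakeeping relation (shown here for deep water; this is the textbook formula, not DiSSEL's internal adaptive evaluation):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def encounter_frequency(omega, ship_speed, heading_deg):
    """Deep-water encounter frequency as measured on a moving ship:
        omega_e = omega - (omega**2 / g) * U * cos(beta)
    with wave frequency omega (rad/s), ship speed U (m/s), and wave
    heading beta (0 deg = following seas, 180 deg = head seas)."""
    beta = math.radians(heading_deg)
    return omega - omega**2 / G * ship_speed * math.cos(beta)
```

Head seas raise the encountered frequency and following seas lower it, which is why a maneuvering (turning) ship sees the encounter frequencies drift continuously, the situation the adaptive algorithm is designed to handle.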
Buchanan, John J; Dean, Noah
2014-02-01
The experiment undertaken was designed to elucidate the impact of model skill level on observational learning processes. The task was bimanual circle tracing with a 90° relative phase lead of one hand over the other. Observer groups watched videos of either an instruction model, a discovery model, or a skilled model. The instruction and skilled models always performed the task with the same movement strategy: the right arm traced clockwise and the left arm counterclockwise around circle templates, with the right arm leading. The discovery model used several movement strategies (tracing direction/hand lead) during practice. Observation of the instruction and skilled models provided a significant benefit compared to the discovery model when performing the 90° relative phase pattern in a post-observation test. The observers of the discovery model had significant room for improvement and benefited from post-observation practice of the 90° pattern. The benefit of a model lies in the consistency with which that model uses the same movement strategy, not in the skill level of the model. It is the consistency in the strategy modeled that allows observers to develop an abstract perceptual representation of the task that can be implemented into a coordinated action. Theoretically, the results show that movement strategy information (relative motion direction, hand lead) and relative phase information can be detected through visual perception processes and successfully mapped to outgoing motor commands within an observational learning context.
Fox-Rabinovitz, Michael S.; Lindzen, Richard S.
1993-01-01
Simple numerical experiments are performed in order to determine the effects of inconsistent combinations of horizontal and vertical resolution in both atmospheric models and observing systems. In both cases, we find that inconsistent spatial resolution is associated with enhanced noise generation. A rather fine horizontal resolution in a satellite-data observing system seems to be excessive when combined with the usually available, relatively coarse vertical resolution. Using horizontal filters of different strengths, adjusted so as to render the effective horizontal resolution more consistent with the vertical resolution of the observing system, may improve the analysis accuracy. When vertical resolution is instead increased for a satellite-data observing system, i.e., with better vertically resolved data, the results differ in that little or no horizontal filtering is needed to make the spatial resolution consistent for the system. The experimental estimates of consistent vertical and effective horizontal resolution obtained here are in general agreement with consistent-resolution estimates previously derived theoretically by the authors.
A Self-Consistent Model for Thermal Oxidation of Silicon at Low Oxide Thickness
Directory of Open Access Journals (Sweden)
Gerald Gerlach
2016-01-01
Full Text Available Thermal oxidation of silicon belongs to the most decisive steps in microelectronic fabrication because it allows the creation of electrically insulating areas which enclose electrically conductive devices and device areas, respectively. Deal and Grove developed the first model (DG-model) for the thermal oxidation of silicon, describing the oxide thickness versus oxidation time relationship with very good agreement for oxide thicknesses of more than 23 nm. Their approach, termed the general relationship, is the basis of many similar investigations. However, measurement results show that the DG-model does not apply to very thin oxides in the range of a few nm. Additionally, it is inherently not self-consistent. The aim of this paper is to develop a self-consistent model based on the continuity equation instead of Fick's law, on which the DG-model rests. As literature data show, the relationship between silicon oxide thickness and oxidation time is governed, down to oxide thicknesses of just a few nm, by a power-of-time law. Given the time-independent surface concentration of oxidants at the oxide surface, Fickian diffusion seems to be negligible for oxidant migration. The oxidant flux has been revealed to be carried by non-Fickian flux processes depending on sites able to lodge dopants (oxidants), the so-called DOCC-sites, as well as on the dopant jump rate.
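For reference, the DG-model relationship and its two classical limits can be sketched in a few lines (the parameter values below are illustrative dry-oxidation textbook numbers, not fitted to this paper's data):

```python
import math

def deal_grove_thickness(t, A=0.165, B=0.0117, tau=0.37):
    """Oxide thickness x (um) from the Deal-Grove relation x^2 + A*x = B*(t + tau),
    solved for x >= 0, with t in hours. A, B, tau are illustrative textbook
    values for dry oxidation (assumed, not from this paper)."""
    return 0.5 * A * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A ** 2) - 1.0)

# The two limits the DG-model captures well:
#   thick oxides: x ~ sqrt(B*t)   (diffusion-limited, parabolic growth)
#   thin oxides:  x ~ (B/A)*t     (reaction-limited, linear growth)
# It is precisely the few-nm regime between these limits where measurements
# instead follow a power-of-time law, motivating the model developed here.
print(deal_grove_thickness(1000.0, tau=0.0), math.sqrt(0.0117 * 1000.0))
```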
RNA secondary structure modeling at consistent high accuracy using differential SHAPE.
Rice, Greggory M; Leonard, Christopher W; Weeks, Kevin M
2014-06-01
RNA secondary structure modeling is a challenging problem, and recent successes have raised the standards for accuracy, consistency, and tractability. Large increases in accuracy have been achieved by including data on reactivity toward chemical probes: incorporation of 1M7 SHAPE reactivity data into an mfold-class algorithm results in median accuracies for base pair prediction that exceed 90%. However, a few RNA structures are modeled with significantly lower accuracy. Here, we show that incorporating differential reactivities from the NMIA and 1M6 reagents, which detect noncanonical and tertiary interactions, into prediction algorithms results in highly accurate secondary structure models for RNAs that were previously shown to be difficult to model. For these RNAs, 93% of accepted canonical base pairs were recovered in SHAPE-directed models. Discrepancies between accepted and modeled structures were small and appear to reflect genuine structural differences. Three-reagent SHAPE-directed modeling scales concisely to structurally complex RNAs to resolve the in-solution secondary structure analysis problem for many classes of RNA.
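The way SHAPE data typically enter an mfold-class algorithm is as a per-nucleotide pseudo-free-energy term added to base-pair stacks. A minimal sketch, using slope/intercept values commonly cited for 1M7 data (an assumption here, not taken from this paper):

```python
import math

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Pseudo-free-energy bonus (kcal/mol) applied to a base pair involving a
    nucleotide with the given SHAPE reactivity: dG = m*ln(reactivity + 1) + b.
    m and b are commonly cited 1M7 values (assumed, not from this paper)."""
    return m * math.log(reactivity + 1.0) + b

# Low reactivity (likely paired) -> favorable (negative) bonus;
# high reactivity (likely unpaired) -> energetic penalty against pairing.
print(shape_pseudo_energy(0.0), shape_pseudo_energy(2.0))
```

Differential (NMIA minus 1M6) reactivities would enter as an additional term of the same form; the exact weighting used in the three-reagent scheme is described in the paper.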
Towards a consistent model of the Galaxy; 2, Derivation of the model
Méra, D; Schäffer, R
1998-01-01
We use the calculations derived in a previous paper (Méra, Chabrier and Schaeffer, 1997), based on observational constraints arising from star counts, microlensing experiments and kinematic properties, to determine the amount of dark matter in the form of stellar and sub-stellar objects in the different parts of the Galaxy. This yields the derivation of different mass-models for the Galaxy. In the light of all the afore-mentioned constraints, we discuss two models that correspond to different conclusions about the nature and the location of the Galactic dark matter. In the first model there is a small amount of dark matter in the disk, and a large fraction of the dark matter in the halo is still undetected and likely to be non-baryonic. The second, less conventional model is consistent with entirely, or at least predominantly, baryonic dark matter, in the form of brown dwarfs in the disk and white dwarfs in the dark halo. We derive observational predictions for these two models which should be verifiabl...
A self-consistent model for a longitudinal discharge excited He-Sr recombination laser
Energy Technology Data Exchange (ETDEWEB)
Carman, R.J. (Centre for Lasers and Applications, Macquarie University, Sydney NSW 2109 (AU))
1990-09-01
A computer model has been developed to simulate the plasma kinetics in a high-repetition frequency, discharge excited He-Sr recombination laser. A detailed rate equation analysis, incorporating about 80 collisional and radiative processes, is used to determine the temporal and spatial (radial) behavior of the discharge parameters and the intracavity laser field during the current pulse, recombination phase, and afterglow periods. The set of coupled first-order ordinary differential equations used to describe the plasma and external electrical circuit are integrated over multiple discharge cycles to yield fully self-consistent results. The computer model has been used to simulate the behavior of the laser for a set of standard conditions corresponding to typical operating conditions. The species population densities predicted by the model are compared with radial and time-dependent Hook measurements determined experimentally for the same set of standard conditions.
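The flavor of such a rate-equation analysis can be conveyed by a single-species caricature: a constant ionization source balanced against two-body recombination, integrated to its self-consistent steady state (all numbers below are illustrative, not the paper's He-Sr rate coefficients):

```python
import math

# Minimal caricature of one rate equation from such a model: electron density
# driven by a constant ionization source S and quadratic (two-body)
# recombination with coefficient alpha. Values are assumed for illustration.
S, alpha = 1e20, 1e-13      # m^-3 s^-1 and m^3 s^-1 (hypothetical)
n_e, dt = 1e15, 1e-7        # initial density (m^-3), time step (s)
for _ in range(100_000):    # forward-Euler integration over 10 ms
    n_e += dt * (S - alpha * n_e ** 2)

n_steady = math.sqrt(S / alpha)   # analytic balance: S = alpha * n^2
print(n_e, n_steady)
```

The full model couples ~80 such source/sink terms across species and radius, plus the external circuit, which is why multiple discharge cycles are needed to reach fully self-consistent results.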
A heterogeneous traffic flow model consisting of two types of vehicles with different sensitivities
Li, Zhipeng; Xu, Xun; Xu, Shangzhi; Qian, Yeqing
2017-01-01
A heterogeneous car-following model is constructed for traffic flow consisting of low- and high-sensitivity vehicles. The stability criterion of the new model is obtained using linear stability theory. We derive the neutral stability diagram for the proposed model, which has five distinct regions, and characterize the effect of the percentage of low-sensitivity vehicles on traffic stability in each region. In addition, we consider the special case in which the number of low-sensitivity vehicles equals the number of high-sensitivity ones, and explore the dependence of traffic stability on the average value and the standard deviation of the two sensitivities characterizing the two vehicle types. Direct numerical simulation results verify the conclusions of the theoretical analysis.
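The homogeneous baseline that such heterogeneous criteria generalize is the classic optimal-velocity (OV) stability condition: uniform flow at headway h is linearly stable iff the sensitivity a exceeds twice the slope of the optimal-velocity function. A sketch with an illustrative tanh-type OV function (not necessarily the one used in this paper):

```python
import math

def ov_velocity(h, v_max=2.0, h_c=4.0):
    """Optimal-velocity function (illustrative tanh form)."""
    return 0.5 * v_max * (math.tanh(h - h_c) + math.tanh(h_c))

def ov_slope(h, v_max=2.0, h_c=4.0):
    """Derivative V'(h) of the optimal-velocity function."""
    return 0.5 * v_max / math.cosh(h - h_c) ** 2

def uniform_flow_stable(a, h):
    """Linear stability of homogeneous uniform flow: stable iff a > 2*V'(h)."""
    return a > 2.0 * ov_slope(h)

h = 4.0   # headway at the inflection point, where V'(h) is maximal
print(uniform_flow_stable(2.5, h), uniform_flow_stable(1.5, h))
```

In the heterogeneous model, the neutral stability boundary additionally depends on the mix of the two sensitivities, which produces the five distinct regions described above.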
Giorgi, F.; Coppola, E.; Raffaele, F.
2014-10-01
We analyze trends of six daily precipitation-based and physically interconnected hydroclimatic indices in an ensemble of historical and 21st century climate projections under forcing from increasing greenhouse gas (GHG) concentrations (Representative Concentration Pathways (RCP)8.5), along with gridded (land only) observations for the late decades of the twentieth century. The indices include metrics of intensity (SDII) and extremes (R95) of precipitation, dry (DSL), and wet spell length, the hydroclimatic intensity index (HY-INT), and a newly introduced index of precipitation area (PA). All the indices in both the 21st century and historical simulations provide a consistent picture of a predominant shift toward a hydroclimatic regime of more intense, shorter, less frequent, and less widespread precipitation events in response to GHG-induced global warming. The trends are larger and more spatially consistent over tropical than extratropical regions, pointing to the importance of tropical convection in regulating this response, and show substantial regional spatial variability. Observed trends in the indices analyzed are qualitatively and consistently in line with the simulated ones, at least at the global and full tropical scale, further supporting the robustness of the identified prevailing hydroclimatic responses. The HY-INT, PA, and R95 indices show the most consistent response to global warming, and thus offer the most promising tools for formal hydroclimatic model validation and detection/attribution studies. The physical mechanism underlying this response and some of the applications of our results are also discussed.
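The daily-precipitation indices involved are straightforward to compute from a station or grid-cell series. A minimal sketch (numpy; definitions follow the usual ETCCDI-style conventions with a 1 mm wet-day threshold, assumed here rather than quoted from the paper):

```python
import numpy as np

def hydro_indices(precip, wet_threshold=1.0):
    """SDII, the R95 wet-day threshold, and mean dry-spell length from a
    daily precipitation series (mm/day)."""
    precip = np.asarray(precip, dtype=float)
    wet = precip >= wet_threshold
    sdii = precip[wet].mean()              # mean intensity on wet days
    r95 = np.percentile(precip[wet], 95)   # heavy-event threshold
    # dry-spell lengths: lengths of runs of consecutive dry days
    padded = np.concatenate(([0], (~wet).astype(int), [0]))
    edges = np.diff(padded)
    lengths = np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)
    return sdii, r95, lengths.mean()

sdii, r95, dsl = hydro_indices([0.0, 0.0, 5.0, 0.0, 10.0, 0.0, 0.0, 0.0, 2.0, 20.0])
print(sdii, dsl)   # 9.25 mm/day and 2.0 days for this toy series
```

HY-INT is then formed by combining normalized intensity and dry-spell length, so a shift toward "more intense, shorter, less frequent" events raises SDII and DSL together.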
Rudzinski, Joseph F.; Kremer, Kurt; Bereau, Tristan
2016-02-01
Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically improves the time scale separation of the slowest processes. Additionally, constraining the forward and backward rates between metastable states leads to slight improvement of their relative stabilities and, thus, refined equilibrium properties of the resulting model. Finally, we find that difficulties in simultaneously describing both the simulated data and the provided constraints can help identify specific limitations of the underlying simulation approach.
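The Markov-state-model quantities being constrained, stationary populations, detailed-balance fluxes, and implied timescales, all follow from the transition matrix. A two-state sketch with an illustrative matrix (not data from the paper):

```python
import numpy as np

# Two-state Markov state model at lag time tau_lag; values are illustrative.
tau_lag = 1.0
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of T for eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Implied timescale of the slowest process from the second eigenvalue:
# t_2 = -tau_lag / ln(lambda_2). Reweighting schemes of the kind described
# above adjust the microscopic transitions so that such timescales (and the
# detailed-balance fluxes pi_i * T_ij) match reference kinetic observables.
lam2 = np.sort(np.real(np.linalg.eigvals(T)))[-2]
t2 = -tau_lag / np.log(lam2)
print(pi, t2)
```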
nIFTy cosmology: the clustering consistency of galaxy formation models
Pujol, Arnau; Skibba, Ramin A.; Gaztañaga, Enrique; Benson, Andrew; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofia A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; De Lucia, Gabriella; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Garcia-Bellido, Juan; Gargiulo, Ignacio D.; Gonzalez-Perez, Violeta; Helly, John; Henriques, Bruno M. B.; Hirschmann, Michaela; Knebe, Alexander; Lee, Jaehyun; Mamon, Gary A.; Monaco, Pierluigi; Onions, Julian; Padilla, Nelson D.; Pearce, Frazer R.; Power, Chris; Somerville, Rachel S.; Srisawat, Chaichalit; Thomas, Peter A.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.
2017-07-01
We present a clustering comparison of 12 galaxy formation models [including semi-analytic models (SAMs) and halo occupation distribution (HOD) models] all run on halo catalogues and merger trees extracted from a single Λ cold dark matter N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the two-point correlation functions (2PCF). We also study the implications of the different treatments of orphan (galaxies not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions, but HOD models and SAMs disagree significantly on the orphan satellites. Although there is a very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 h-1 Mpc. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present a higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites play an important role in galaxy clustering and are the main cause of the differences in the clustering between HOD models and SAMs.
Directory of Open Access Journals (Sweden)
G.Shanmugarathinam
2013-01-01
Full Text Available Caching is one of the important techniques in mobile computing. In caching, frequently accessed data are stored at mobile clients to avoid network traffic and improve performance in mobile computing. In a mobile computing environment, the number of mobile users increases and they request updates from the server, but most of the time the server is busy and a client has to wait for a long time. Cache consistency maintenance is difficult for both the client and the server. This paper proposes a technique using a queuing system consisting of one or more servers that provide services to arriving mobile hosts using agent-based technology. The service mechanism of the queuing system is specified by the number of servers, each server having its own queue; agent-based technology maintains cache consistency between the client and the server. This model saves wireless bandwidth, reduces network traffic, and reduces the workload on the server. The simulation results were compared with the previous technique, and the proposed model shows significantly better performance than the earlier approach.
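The waiting-time benefit of a multi-server queue can be quantified with the standard M/M/c (Erlang C) results; the sketch below is a rough caricature of the setup, with hypothetical arrival and service rates rather than values from the paper:

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving request must queue in an M/M/c system."""
    a = arrival_rate / service_rate          # offered load
    rho = a / servers
    assert rho < 1.0, "queue is unstable"
    top = a ** servers / math.factorial(servers)
    bottom = (1.0 - rho) * sum(a ** k / math.factorial(k)
                               for k in range(servers)) + top
    return top / bottom

def mean_wait(arrival_rate, service_rate, servers):
    """Mean time a request waits before service starts (Wq)."""
    return erlang_c(arrival_rate, service_rate, servers) / (
        servers * service_rate - arrival_rate)

# One server reduces to the classic M/M/1 result Wq = lambda / (mu * (mu - lambda)).
print(mean_wait(2.0, 3.0, 1))
# A second server (or an agent absorbing invalidation traffic) cuts waiting sharply.
print(mean_wait(2.0, 3.0, 2))
```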
Premixed Combustion Simulations with a Self-Consistent Plasma Model for Initiation
Energy Technology Data Exchange (ETDEWEB)
Sitaraman, Hariswaran; Grout, Ray
2016-01-08
Combustion simulations of H2-O2 ignition are presented here, with a self-consistent plasma fluid model for ignition initiation. The plasma fluid equations for a nanosecond pulsed discharge are solved and coupled with the governing equations of combustion. The discharge operates through the propagation of a cathode-directed streamer, with radical species produced at the streamer heads. These radical species play an important role in the ignition process. The streamer propagation speeds and radical production rates were found to be sensitive to gas temperature and fuel-oxidizer equivalence ratio. The oxygen radical production rate depends strongly on equivalence ratio and subsequently results in faster ignition of leaner mixtures.
Energy Technology Data Exchange (ETDEWEB)
Guy, Aurélien, E-mail: aurelien.guy@onera.fr; Bourdon, Anne, E-mail: anne.bourdon@lpp.polytechnique.fr; Perrin, Marie-Yvonne, E-mail: marie-yvonne.perrin@ecp.fr [CNRS, UPR 288, Laboratoire d' Énergétique Moléculaire et Macroscopique, Combustion (EM2C), Grande Voie des Vignes, 92295 Châtenay-Malabry (France); Ecole Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry (France)
2015-04-15
In this work, a state-to-state vibrational and electronic collisional model is developed to investigate nonequilibrium phenomena behind a shock wave in an ionized nitrogen flow. In the ionization dynamics behind the shock wave, the electron energy budget is of key importance, and it is found that the main depletion term corresponds to the electronic excitation of N atoms, while the major creation term is the electron-vibration exchange term at early times, later replaced by the electron-ion elastic exchange term. Based on these results, a macroscopic multi-internal-temperature model for the vibration of N{sub 2} and the electronic levels of N atoms is derived, with several groups of vibrational levels of N{sub 2} and electronic levels of N, each with its own internal temperature, to model the shape of the vibrational distribution of N{sub 2} and of the electronic excitation of N, respectively. In this model, energy and chemistry source terms are calculated self-consistently from the rate coefficients of the state-to-state database. For the shock wave condition studied, good agreement is observed in the ionization dynamics as well as in the atomic bound-bound radiation between the state-to-state model and the macroscopic multi-internal-temperature model with only one group of vibrational levels of N{sub 2} and two groups of electronic levels of N.
Choi, Sung W; Gerencser, Akos A; Ng, Ryan; Flynn, James M; Melov, Simon; Danielson, Steven R; Gibson, Bradford W; Nicholls, David G; Bredesen, Dale E; Brand, Martin D
2012-11-21
Depressed cortical energy supply and impaired synaptic function are predominant associations of Alzheimer's disease (AD). To test the hypothesis that presynaptic bioenergetic deficits are associated with the progression of AD pathogenesis, we compared bioenergetic variables of cortical and hippocampal presynaptic nerve terminals (synaptosomes) from commonly used mouse models with AD-like phenotypes (J20 age 6 months, Tg2576 age 16 months, and APP/PS age 9 and 14 months) to age-matched controls. No consistent bioenergetic deficiencies were detected in synaptosomes from the three models; only APP/PS cortical synaptosomes from 14-month-old mice showed an increase in respiration associated with proton leak. J20 mice were chosen for a highly stringent investigation of mitochondrial function and content. There were no significant differences in the quality of the synaptosomal preparations or the mitochondrial volume fraction. Furthermore, respiratory variables, calcium handling, and membrane potentials of synaptosomes from symptomatic J20 mice under calcium-imposed stress were not consistently impaired. The recovery of marker proteins during synaptosome preparation was the same, ruling out the possibility that the lack of functional bioenergetic defects in synaptosomes from J20 mice was due to the selective loss of damaged synaptosomes during sample preparation. Our results support the conclusion that the intrinsic bioenergetic capacities of presynaptic nerve terminals are maintained in these symptomatic AD mouse models.
ICFD modeling of final settlers - developing consistent and effective simulation model structures
DEFF Research Database (Denmark)
Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham
analysis exercises is kept to a minimum (4). Consequently, detailed information related to, for instance, design boundaries, may be ignored, and their effects may only be accounted for through calibration of model parameters used as catchalls, and by arbitrary amendments of structural uncertainty...... of (6). Further details are shown in (5). Results and discussions Factor screening. Factor screening is carried out by imposing statistically designed moderate (under-loaded) and extreme (under-, critical and overloaded) operational boundary conditions on the 2-D CFD SST model (8). Results obtained...
SELF-CONSISTENT FIELD MODEL OF BRUSHES FORMED BY ROOT-TETHERED DENDRONS
Directory of Open Access Journals (Sweden)
E. B. Zhulina
2015-05-01
Full Text Available We present an analytical self-consistent field (scf) theory that describes planar brushes formed by regularly branched root-tethered dendrons of the second and third generations. The developed approach makes it possible to calculate the scf molecular potential acting at monomers of the tethered chains. In the linear elasticity regime for stretched polymers, the molecular potential has a parabolic shape with the parameter k depending on the architectural parameters of the tethered macromolecules: polymerization degrees of spacers, branching functionalities, and number of generations. For dendrons of the second generation, we formulate a general equation for parameter k and analyze how variations in the architectural parameters of these dendrons affect the molecular potential. For dendrons of the third generation, an analytical expression for parameter k is available only for symmetric macromolecules with equal lengths of all spacers and equal branching functionalities in all generations. We analyze how the thickness of a dendron brush in a good solvent is affected by variations in the chain architecture. Results of the developed scf theory are compared with predictions of the boxlike scaling model. We demonstrate that in the limit of high branching functionalities, the results of both approaches become consistent if the value of the exponent b in the boxlike model is set to unity. In conclusion, we briefly discuss the systems to which the developed scf theory is applicable. These are: planar and concave spherical and cylindrical brushes under various solvent conditions (including solvent-free melted brushes) and brush-like layers of ionic (polyelectrolyte) dendrons.
Khajepor, Sorush; Chen, Baixin
2016-01-01
A method is developed to analytically and consistently implement cubic equations of state into the recently proposed multipseudopotential interaction (MPI) scheme in the class of two-phase lattice Boltzmann (LB) models [S. Khajepor, J. Wen, and B. Chen, Phys. Rev. E 91, 023301 (2015)]10.1103/PhysRevE.91.023301. An MPI forcing term is applied to reduce the constraints on the mathematical shape of the thermodynamically consistent pseudopotentials; this allows the parameters of the MPI forces to be determined analytically without the need of curve fitting or trial and error methods. Attraction and repulsion parts of equations of state (EOSs), representing underlying molecular interactions, are modeled by individual pseudopotentials. Four EOSs, van der Waals, Carnahan-Starling, Peng-Robinson, and Soave-Redlich-Kwong, are investigated and the results show that the developed MPI-LB system can satisfactorily recover the thermodynamic states of interest. The phase interface is predicted analytically and controlled via EOS parameters independently and its effect on the vapor-liquid equilibrium system is studied. The scheme is highly stable to very high density ratios and the accuracy of the results can be enhanced by increasing the interface resolution. The MPI drop is evaluated with regard to surface tension, spurious velocities, isotropy, dynamic behavior, and the stability dependence on the relaxation time.
A Symplectic Multi-Particle Tracking Model for Self-Consistent Space-Charge Simulation
Qiang, Ji
2016-01-01
Symplectic tracking is important in accelerator beam dynamics simulation. So far, to the best of our knowledge, there is no self-consistent symplectic space-charge tracking model available in the accelerator community. In this paper, we present a two-dimensional and a three-dimensional symplectic multi-particle spectral model for space-charge tracking simulation. This model includes both the effect from external fields and the effect of self-consistent space-charge fields using a split-operator method. Such a model preserves the phase space structure and shows much less numerical emittance growth than the particle-in-cell model in the illustrative examples.
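The split-operator idea can be illustrated on the simplest Hamiltonian. The sketch below (plain Python; a one-degree-of-freedom harmonic oscillator standing in for the beam dynamics) contrasts a symplectic kick-drift-kick step with a non-symplectic explicit Euler step, whose energy error grows without bound:

```python
def energy(q, p):
    """Harmonic-oscillator Hamiltonian H = p^2/2 + q^2/2 (omega = 1)."""
    return 0.5 * (p ** 2 + q ** 2)

def leapfrog_step(q, p, dt):
    """Symplectic kick-drift-kick split-operator step: the same splitting
    idea applies when 'kicks' come from external plus space-charge fields."""
    p -= 0.5 * dt * q       # half kick
    q += dt * p             # drift
    p -= 0.5 * dt * q       # half kick
    return q, p

def euler_step(q, p, dt):
    """Non-symplectic explicit Euler step, for comparison."""
    return q + dt * p, p - dt * q

dt, steps = 0.1, 1000
q_lf, p_lf = 1.0, 0.0       # both integrators start at energy 0.5
q_eu, p_eu = 1.0, 0.0
for _ in range(steps):
    q_lf, p_lf = leapfrog_step(q_lf, p_lf, dt)
    q_eu, p_eu = euler_step(q_eu, p_eu, dt)

print(energy(q_lf, p_lf), energy(q_eu, p_eu))
```

The bounded energy error of the symplectic map is the one-particle analogue of the reduced numerical emittance growth reported above for the spectral space-charge model.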
Self-Consistent Model for Pulsed Direct-Current N2 Glow Discharge
Institute of Scientific and Technical Information of China (English)
Liu Chengsen; Wang Dezhen
2005-01-01
A self-consistent analysis of a pulsed direct-current (DC) N2 glow discharge is presented. The model is based on a numerical solution of the continuity equations for electrons and ions coupled with Poisson's equation. The spatial-temporal variations of the ionic and electronic densities and the electric field are obtained. The electric field structure exhibits all the characteristic regions of a typical glow discharge (the cathode fall, the negative glow, and the positive column). Current-voltage characteristics of the discharge can be obtained from the model. The calculated current-voltage results, using a constant secondary electron emission coefficient at a gas pressure of 133.32 Pa, are in reasonable agreement with experiment.
A Globally Consistent Methodology for an Exposure Model for Natural Catastrophe Risk Assessment
Gunasekera, Rashmin; Ishizawa, Oscar; Pandey, Bishwa; Saito, Keiko
2013-04-01
There is a high demand for the development of a globally consistent and robust exposure data model employing a top-down approach, to be used in national-level catastrophe risk profiling for public sector liability. To this effect, there are currently several initiatives such as the UN-ISDR Global Assessment Report (GAR) and the Global Exposure Database for the Global Earthquake Model (GED4GEM). However, consistency and granularity differ from region to region, a problem that is overcome in the proposed approach by using national datasets, for example in the Latin America and the Caribbean Region (LCR). The methodology proposed in this paper aims to produce a global open exposure dataset based upon population, country-specific building type distribution and other global/economic indicators, such as World Bank indices, that are suitable for natural catastrophe risk modelling purposes. The output would be a GIS raster grid at approximately 1 km spatial resolution which would highlight urbanness (building typology distribution, occupancy and use) for each cell at the sub-national level, compatible with other global initiatives and datasets. It would make use of datasets on population, census, demographics, buildings and land use/land cover which are largely available in the public domain. The resultant exposure dataset could be used in conjunction with hazard and vulnerability components to create views of risk for multiple hazards including earthquake, flood and windstorms. We hope the model will also assist in steps towards future initiatives for open, interchangeable and compatible databases for catastrophe risk modelling. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
A CVAR scenario for a standard monetary model using theory-consistent expectations
DEFF Research Database (Denmark)
Juselius, Katarina
2017-01-01
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...
Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.
2011-03-01
The connection between membrane inhomogeneity and the structural basis of lipid rafts has sparked interest in the lateral organization of model lipid bilayers of two and three components. In an effort to investigate anisotropic lipid distribution in mixed bilayers, a self-consistent mean-field theoretical model is applied to palmitoyloleoylphosphatidylcholine (POPC)-palmitoyl sphingomyelin (PSM)-cholesterol mixtures. The compositional dependence of lateral organization in these mixtures is mapped onto a ternary plot. The model utilizes molecular dynamics simulations to estimate interaction parameters and to construct chain conformation libraries. We find that at some concentration ratios the bilayers separate spatially into regions of higher and lower chain order coinciding with areas enriched with PSM and POPC, respectively. To examine the effect of the asymmetric chain structure of POPC on bilayer lateral inhomogeneity, we consider POPC-lipid interactions with and without angular dependence. Results are compared with experimental data and with results from a similar model for mixtures of dioleoylphosphatidylcholine, steroyl sphingomyelin, and cholesterol.
Self-consistent modeling of terahertz waveguide and cavity with frequency-dependent conductivity
Huang, Y. J.; Chu, K. R.; Thumm, M.
2015-01-01
The surface resistance of metals, and hence the Ohmic dissipation per unit area, scales with the square root of the frequency of an incident electromagnetic wave. As is well recognized, this can lead to excessive wall losses at terahertz (THz) frequencies. On the other hand, high-frequency oscillatory motion of conduction electrons tends to mitigate the collisional damping. As a result, the classical theory predicts that metals behave more like a transparent medium at frequencies above the ultraviolet. Such a behavior difference is inherent in the AC conductivity, a frequency-dependent complex quantity commonly used to treat electromagnetics of metals at optical frequencies. The THz region falls in the gap between microwave and optical frequencies. However, metals are still commonly modeled by the DC conductivity in currently active vacuum electronics research aimed at the development of high-power THz sources (notably the gyrotron), although a small reduction of the DC conductivity due to surface roughness is sometimes included. In this study, we present a self-consistent modeling of the gyrotron interaction structures (a metallic waveguide or cavity) with the AC conductivity. The resulting waveguide attenuation constants and cavity quality factors are compared with those of the DC-conductivity model. The reduction in Ohmic losses under the AC-conductivity model is shown to be increasingly significant as the frequency reaches deeper into the THz region. Such effects are of considerable importance to THz gyrotrons for which the minimization of Ohmic losses constitutes a major design consideration.
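The DC-versus-AC distinction comes down to which conductivity enters the surface impedance Z_s = sqrt(j*omega*mu0/sigma). A sketch with the Drude AC conductivity, using rough room-temperature copper values for sigma0 and the relaxation time tau (assumed for illustration; the e^{jwt} engineering sign convention is used):

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def surface_resistance(freq, sigma0=5.8e7, tau=2.5e-14):
    """Surface resistance (ohm) of a metal wall with the Drude AC conductivity
    sigma(w) = sigma0 / (1 + j*w*tau); sigma0 and tau are rough copper values."""
    w = 2.0 * math.pi * freq
    sigma = sigma0 / (1.0 + 1j * w * tau)
    return (cmath.sqrt(1j * w * MU0 / sigma)).real

def surface_resistance_dc(freq, sigma0=5.8e7):
    """Classical skin-effect result with the DC conductivity: sqrt(w*mu0/(2*sigma0))."""
    return math.sqrt(2.0 * math.pi * freq * MU0 / (2.0 * sigma0))

for f in (1e9, 1e11, 1e12):   # 1 GHz, 0.1 THz, 1 THz
    print(f, surface_resistance(f) / surface_resistance_dc(f))
```

At microwave frequencies (w*tau << 1) the two models coincide; deeper into the THz region the AC model predicts a lower surface resistance, consistent with the reduction in Ohmic losses described above.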
Application of a Multigrid Method to a Mass-Consistent Diagnostic Wind Model.
Wang, Yansen; Williamson, Chatt; Garvey, Dennis; Chang, Sam; Cogan, James
2005-07-01
A multigrid numerical method has been applied to a three-dimensional, high-resolution diagnostic model for flow over complex terrain using a mass-consistent approach. The theoretical background for the model is based on a variational analysis using mass conservation as a constraint. The model was designed for diagnostic wind simulation at the microscale in complex terrain and in urban areas. The numerical implementation takes advantage of a multigrid method that greatly improves the computation speed. Three preliminary test cases for the model's numerical efficiency and its accuracy are given. The model results are compared with an analytical solution for flow over a hemisphere. Flow over a bell-shaped hill is computed to demonstrate that the numerical method is applicable in the case of parameterized lee vortices. A simulation of the mean wind field in an urban domain has also been carried out and compared with observational data. The comparison indicated that the multigrid method takes only 3%-5% of the time that is required by the traditional Gauss-Seidel method.
Development of a Kohn-Sham like potential in the Self-Consistent Atomic Deformation Model
Mehl, M. J.; Boyer, L. L.; Stokes, H. T.
1996-01-01
This is a brief description of how to derive the local ``atomic'' potentials from the Self-Consistent Atomic Deformation (SCAD) model density function. Particular attention is paid to the spherically averaged case.
Bayesian nonparametric estimation and consistency of mixed multinomial logit choice models
De Blasi, Pierpaolo; Lau, John W; 10.3150/09-BEJ233
2011-01-01
This paper develops nonparametric estimation for discrete choice models based on the mixed multinomial logit (MMNL) model. It has been shown that MMNL models encompass all discrete choice models derived under the assumption of random utility maximization, subject to the identification of an unknown distribution $G$. Noting the mixture model description of the MMNL, we employ a Bayesian nonparametric approach, using nonparametric priors on the unknown mixing distribution $G$, to estimate choice probabilities. We provide an important theoretical support for the use of the proposed methodology by investigating consistency of the posterior distribution for a general nonparametric prior on the mixing distribution. Consistency is defined according to an $L_1$-type distance on the space of choice probabilities and is achieved by extending to a regression model framework a recent approach to strong consistency based on the summability of square roots of prior probabilities. Moving to estimation, slightly different te...
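To make the mixture structure concrete: the MMNL choice probability is the conditional logit probability averaged over the mixing distribution $G$. The sketch below approximates that integral by Monte Carlo, with a parametric normal mixing distribution standing in for the unknown $G$ and toy attribute values; both are illustrative assumptions, not the paper's nonparametric prior.

```python
import math
import random

def logit_probs(beta, X):
    """Conditional logit choice probabilities for one taste vector beta."""
    utils = [sum(b * x for b, x in zip(beta, xj)) for xj in X]
    m = max(utils)                          # subtract max for numerical stability
    expu = [math.exp(u - m) for u in utils]
    s = sum(expu)
    return [e / s for e in expu]

def mmnl_probs(X, mean, sd, draws=5000, seed=1):
    """Mixed multinomial logit: average the logit probabilities over draws of
    beta ~ N(mean, diag(sd^2)), a Monte Carlo stand-in for integrating over G."""
    rng = random.Random(seed)
    acc = [0.0] * len(X)
    for _ in range(draws):
        beta = [rng.gauss(m, s) for m, s in zip(mean, sd)]
        for j, p in enumerate(logit_probs(beta, X)):
            acc[j] += p
    return [a / draws for a in acc]

# three alternatives, two attributes (e.g. price, quality) -- toy numbers
X = [[1.0, 0.5], [2.0, 1.5], [0.5, 0.2]]
p = mmnl_probs(X, mean=[-1.0, 2.0], sd=[0.5, 0.5])
print([round(pi, 3) for pi in p])
```

The averaged probabilities remain a proper choice distribution; estimation in the paper runs this logic in reverse, inferring $G$ from observed choices.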
Thermodynamically consistent mesoscopic fluid particle models for a van der Waals fluid
Serrano, Mar; Español, Pep
2000-01-01
The GENERIC structure allows for a unified treatment of different discrete models of hydrodynamics. We first propose a finite volume Lagrangian discretization of the continuum equations of hydrodynamics through the Voronoi tessellation. We then show that a slight modification of these discrete equations has the GENERIC structure. The GENERIC structure ensures thermodynamic consistency and allows for the introduction of correct thermal noise. In this way, we obtain a consistent discrete model ...
Motion of the Philippine Sea plate consistent with the NUVEL-1A model
Zang, Shao Xian; Chen, Qi Yong; Ning, Jie Yuan; Shen, Zheng Kang; Liu, Yong Gang
2002-09-01
We determine Euler vectors for 12 plates, including the Philippine Sea plate (PH), relative to the fixed Pacific plate (PA) by inverting the earthquake slip vectors along the boundaries of the Philippine Sea plate, GPS observed velocities, and 1122 data from the NUVEL-1 and the NUVEL-1A global plate motion model, respectively. This analysis thus also yields Euler vectors for the Philippine Sea plate relative to adjacent plates. Our results are consistent with observed data and can satisfy the geological and geophysical constraints along the Caroline (CR)-PH and PA-CR boundaries. The results also give insight into internal deformation of the Philippine Sea plate. The area enclosed by the Ryukyu Trench-Nankai Trough, Izu-Bonin Trench and GPS stations S102, S063 and Okino Torishima moves uniformly as a rigid plate, but the areas near the Philippine Trench, Mariana Trough and Yap-Palau Trench have obvious deformation.
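The surface velocity predicted by an Euler vector is $v = \omega \times r$. The sketch below evaluates that cross product for a purely hypothetical Euler pole and observation point; the pole values are placeholders for illustration, not the estimates obtained in the paper.

```python
import math

EARTH_RADIUS_KM = 6371.0

def unit_position(lat_deg, lon_deg):
    """Unit position vector of a point on a spherical Earth (ECEF axes)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def plate_velocity_mm_per_yr(pole_lat, pole_lon, rate_deg_per_myr, lat, lon):
    """Surface velocity v = omega x r predicted by an Euler vector,
    returned as (vx, vy, vz) in mm/yr in Earth-centered coordinates."""
    w_mag = math.radians(rate_deg_per_myr) / 1e6               # rad/yr
    wx, wy, wz = (w_mag * c for c in unit_position(pole_lat, pole_lon))
    rx, ry, rz = (EARTH_RADIUS_KM * 1e6 * c for c in unit_position(lat, lon))  # mm
    return (wy*rz - wz*ry, wz*rx - wx*rz, wx*ry - wy*rx)

# hypothetical pole at (48 N, 87 W) rotating 1.0 deg/Myr, evaluated at (35 N, 140 E)
v = plate_velocity_mm_per_yr(48.0, -87.0, 1.0, 35.0, 140.0)
speed = math.sqrt(sum(c * c for c in v))
print([round(c, 1) for c in v], round(speed, 1), "mm/yr")
```

Because $v = \omega \times r$, the predicted motion is always tangent to the sphere; inversions like the one in the paper fit the pole location and rotation rate to slip vectors and GPS velocities.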
Institute of Scientific and Technical Information of China (English)
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation $\sum_{i=1}^n X_i(y_i-\mu(X_i'\beta))=0$ for the univariate generalized linear model $E(y|X)=\mu(X'\beta)$. Given uncorrelated residuals $\{e_i=Y_i-\mu(X_i'\beta_0),\ 1\leq i\leq n\}$ and other conditions, we prove that $\hat{\beta}_n-\beta_0=O_p(\lambda_n^{-1/2})$ holds, where $\hat{\beta}_n$ is a root of the above equation, $\beta_0$ is the true value of the parameter $\beta$, and $\lambda_n$ denotes the smallest eigenvalue of the matrix $S_n=\sum_{i=1}^n X_iX_i'$. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of QMLE is $S_n^{-1}\to 0$ as the sample size $n\to\infty$.
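To make the estimating equation concrete, the sketch below solves the quasi-likelihood equation sum_i X_i (y_i - mu(X_i' beta)) = 0 by Newton iteration for a simulated Poisson-type GLM with mu = exp. The exponential link, sample size, and true parameter are illustrative choices, not the paper's setting; the point is that with growing n (so that S_n^{-1} -> 0) the root lands near the true beta.

```python
import math
import random

def solve_quasi_likelihood(X, y, mu=math.exp, dmu=math.exp, iters=25):
    """Newton iteration for sum_i X_i (y_i - mu(X_i' beta)) = 0, 2 covariates."""
    b = [0.0, 0.0]
    for _ in range(iters):
        g = [0.0, 0.0]                       # score (left side of the equation)
        H = [[0.0, 0.0], [0.0, 0.0]]         # negative Jacobian of the score
        for xi, yi in zip(X, y):
            eta = xi[0]*b[0] + xi[1]*b[1]
            r = yi - mu(eta)
            w = dmu(eta)
            for j in range(2):
                g[j] += xi[j] * r
                for k in range(2):
                    H[j][k] += xi[j] * xi[k] * w
        det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
        b[0] += ( H[1][1]*g[0] - H[0][1]*g[1]) / det     # b += H^{-1} g
        b[1] += (-H[1][0]*g[0] + H[0][0]*g[1]) / det
    return b

rng = random.Random(42)
beta0 = [0.5, -0.3]                          # true parameter
X = [[1.0, rng.uniform(-1, 1)] for _ in range(4000)]

def poisson(lam):                            # Knuth's Poisson sampler
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

y = [poisson(math.exp(x[0]*beta0[0] + x[1]*beta0[1])) for x in X]
bhat = solve_quasi_likelihood(X, y)
print([round(b, 3) for b in bhat])
```

Rerunning with larger n shrinks the error roughly like the rate described in the abstract, since the smallest eigenvalue of S_n grows linearly in n here.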
Bibliographic Relationships in MARC and Consistent with FRBR Model According to RDA Rules
Directory of Open Access Journals (Sweden)
Mahsa Fardehoseiny
2013-03-01
This study investigated the bibliographic relationships in MARC and their consistency with the FRBR model. Establishing the necessary relations between bibliographic records helps users retrieve the information they need faster and more easily. The purpose of this study was to identify the relationships between bibliographic records in the OPAC database of the National Library of Iran, using a descriptive content-analysis approach. All records meeting the criteria listed in the final report of the IFLA study on bibliographic relationships, concerning the Group 1 entities of the FRBR model and the RDA rules, were collected from the online catalog (OPAC) of the National Library of Iran and analyzed. The study found that if software were developed to transfer data based on the conceptual model using the MARC data already in the National Library's bibliographic database, these relationships would not be transferable: the relationships between FRBR and MARC could only be established by human judgment, and a machine would be unable to detect them. The results showed that about 47.70 percent of the MARC fields could be mapped from MARC to FRBR, whereas in the other direction, even with all intelligent effort in diagnosing MARC relationships, only 31.38 percent of the relations could be covered through MARC. Based on the real data and usable fields for Boostan-e-Saadi records in MARC format in the National Library of Iran, this figure fell to 16.95 percent.
A parameter study of self-consistent disk models around Herbig AeBe stars
Meijer, J; De Koter, A; Dullemond, C P; Van Boekel, R; Waters, L B F M
2008-01-01
We present a parameter study of self-consistent models of protoplanetary disks around Herbig AeBe stars. We use the code developed by Dullemond and Dominik, which solves the 2D radiative transfer problem including an iteration for the vertical hydrostatic structure of the disk. This grid of models will be used for several studies on disk emission and mineralogy in follow-up papers. In this paper we take a first look at the new models, compare them with previous modeling attempts, and focus on the effects of various parameters on the overall structure of the SED that leads to the classification of Herbig AeBe stars into two groups, with a flaring (group I) or self-shadowed (group II) SED. We find that the parameter of overriding importance to the SED is the total mass in grains smaller than 25 μm, confirming the earlier results by Dullemond and Dominik. All other parameters studied have only minor influences, and will alter the SED type only in borderline cases. We find that there is no natural dichotomy between ...
A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints
Directory of Open Access Journals (Sweden)
L. Kantha
2016-01-01
The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed redshift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^-52 m^-2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest redshift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^-2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.
Motte, Fabrice; Bugler-Lamb, Samuel L.; Falcoz, Quentin
2015-07-01
The attraction of solar energy is greatly enhanced by the possibility of it being used during times of reduced or non-existent solar flux, such as weather-induced intermittences or the darkness of night. Optimizing thermal storage for use in solar energy plants is therefore crucial to the success of this sustainable energy source. Here we present a study of a structured bed filler dedicated to thermocline-type thermal storage, believed to offer greater financial and thermal benefits than other systems currently in use, such as packed-bed thermocline tanks. Several criteria, such as thermocline thickness and thermocline centering, are defined with the purpose of facilitating the assessment of the tank's efficiency, complementing the standard concept of power output. A numerical model is developed that reduces the modeling of such a tank to two dimensions. The structure within the tank is designed to be built from simple bricks harboring rectangular channels through which the solar heat-transfer and storage fluid flows. The model is scrutinized and tested for physical robustness, and the results are presented in this paper. The consistency of the model is achieved within particular ranges for each physical variable.
Formulation of a self-consistent model for quantum well pin solar cells
Ramey, S.; Khoie, R.
1997-04-01
A self-consistent numerical simulation model for a pin single-cell solar cell is formulated. The solar cell device consists of a p-AlGaAs region, an intrinsic i-AlGaAs/GaAs region with several quantum wells, and an n-AlGaAs region. Our simulator solves a field-dependent Schrödinger equation self-consistently with the Poisson and drift-diffusion equations. Emphasis is given to the study of the capture of electrons by the quantum wells, the escape of electrons from the quantum wells, and the absorption and recombination within the quantum wells. We believe this to be the first such comprehensive model ever reported. The field-dependent Schrödinger equation is solved using the transfer matrix method. The eigenfunctions and eigenenergies obtained are used to calculate the escape rate of electrons from the quantum wells and the non-radiative recombination rates of electrons at the boundaries of the quantum wells. These rates, together with the capture rates of electrons by the quantum wells, are then used in a self-consistent numerical Poisson/drift-diffusion solver. The resulting field profiles are then used in the field-dependent Schrödinger solver, and the iteration process is repeated until convergence is reached. In a p-AlGaAs i-AlGaAs/GaAs n-AlGaAs cell with an aluminum mole fraction of 0.3 and one 100 Å-wide, 284 meV-deep quantum well, the eigenenergies at zero field are 36 meV, 136 meV, and 267 meV for the first, second, and third subbands, respectively. With an electric field of 50 kV/cm, the eigenenergies are shifted to 58 meV, 160 meV, and 282 meV, respectively. With these eigenenergies, the thermionic escape time of electrons from the GaAs Γ-valley varies from 220 ps to 90 ps for electric fields ranging from 10 to 50 kV/cm. These preliminary results are in good agreement with those reported by other researchers.
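For orientation on the eigenenergy figures quoted above, the sketch below computes the zero-field bound states of a single finite square well from the standard even/odd transcendental matching equations. This is not the paper's field-dependent transfer-matrix solver: a single effective mass is assumed in well and barrier, and the 284 meV depth and 100 Å width are taken from the abstract purely to check the three-subband structure.

```python
import math

HBAR = 1.0545718e-34      # J s
M0 = 9.10938e-31          # kg
M_EFF = 0.067 * M0        # GaAs effective mass (assumed in barrier too)
EV = 1.602176634e-19

def bound_states_meV(width_m, depth_meV):
    """Bound-state energies of a 1D finite square well, from the standard
    even/odd transcendental equations solved by bisection on each tan branch."""
    a = width_m / 2.0
    v0 = depth_meV * 1e-3 * EV
    z0 = math.sqrt(2.0 * M_EFF * v0) * a / HBAR    # dimensionless well strength

    def g_even(z):   # root <=> z*tan(z) = sqrt(z0^2 - z^2)
        return z * math.tan(z) - math.sqrt(z0*z0 - z*z)

    def g_odd(z):    # root <=> -z*cot(z) = sqrt(z0^2 - z^2)
        return z / math.tan(z) + math.sqrt(z0*z0 - z*z)

    def bisect(fn, lo, hi):
        flo = fn(lo)
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if (fn(mid) > 0) == (flo > 0):
                lo, flo = mid, fn(mid)
            else:
                hi = mid
        return 0.5 * (lo + hi)

    roots, n = [], 0
    while True:      # even-parity branches (n*pi, n*pi + pi/2)
        lo = n*math.pi + 1e-9
        hi = min(n*math.pi + math.pi/2 - 1e-9, z0 - 1e-9)
        if lo >= hi:
            break
        if g_even(lo) * g_even(hi) < 0:
            roots.append(bisect(g_even, lo, hi))
        n += 1
    n = 0
    while True:      # odd-parity branches (n*pi + pi/2, (n+1)*pi)
        lo = n*math.pi + math.pi/2 + 1e-9
        hi = min((n+1)*math.pi - 1e-9, z0 - 1e-9)
        if lo >= hi:
            break
        if g_odd(lo) * g_odd(hi) < 0:
            roots.append(bisect(g_odd, lo, hi))
        n += 1
    return sorted(depth_meV * (z / z0)**2 for z in roots)

levels = bound_states_meV(100e-10, 284.0)    # 100 A wide, 284 meV deep
print([round(e, 1) for e in levels])
```

Even this crude zero-field model yields exactly three subbands at energies in the same neighborhood as the 36, 136, and 267 meV quoted in the abstract; the residual differences plausibly reflect the barrier effective mass and the self-consistent field, which this sketch ignores.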
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
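A minimal sketch of the information-criterion side of the argument, on synthetic data with a Gaussian-error AIC (the paper's hydrologic-consistency constraints and flow-space transformation are not represented here): maximum likelihood alone always prefers the richer model, while AIC trades goodness of fit against the number of parameters.

```python
import math
import random

def fit_polynomial(x, y, degree):
    """Ordinary least squares for a polynomial via the normal equations."""
    k = degree + 1
    A = [[sum(xi**(i + j) for xi in x) for j in range(k)] for i in range(k)]
    b = [sum(yi * xi**i for xi, yi in zip(x, y)) for i in range(k)]
    for col in range(k):                      # Gaussian elimination, partial pivot
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            fpiv = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= fpiv * A[col][c]
            b[r] -= fpiv * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

def aic(x, y, coef):
    """AIC under iid Gaussian errors: n*log(RSS/n) + 2*(#parameters)."""
    n = len(x)
    rss = sum((yi - sum(c * xi**i for i, c in enumerate(coef)))**2
              for xi, yi in zip(x, y))
    return n * math.log(rss / n) + 2 * (len(coef) + 1)   # +1 for sigma

rng = random.Random(7)
x = [i / 50 for i in range(100)]
y = [1.0 + 2.0*xi - 1.5*xi*xi + rng.gauss(0, 0.05) for xi in x]  # true: quadratic
scores = {d: aic(x, y, fit_polynomial(x, y, d)) for d in (1, 2, 5)}
print({d: round(s, 1) for d, s in scores.items()})
```

The correct (quadratic) structure scores decisively better than the underfit linear model; the paper's point is that such criteria must additionally be constrained by hydrologic-consistency tests before model structures are compared.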
Institute of Scientific and Technical Information of China (English)
Yee LEUNG; WU Kefa; DONG Tianxin
2001-01-01
In this paper, a multivariate linear functional relationship model, where the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed. The estimators are shown to be a strongly consistent estimation under some mild conditions on the incidental parameters.
Modeling Extreme Solar Energetic Particle Acceleration with Self-Consistent Wave Generation
Arthur, A. D.; le Roux, J. A.
2015-12-01
Observations of extreme solar energetic particle (SEP) events associated with coronal mass ejection driven shocks have detected particle energies up to a few GeV at 1 AU within the first ~10 minutes to 1 hour of shock acceleration. Whether or not acceleration by a single shock is sufficient in these events or if some combination of multiple shocks or solar flares is required is currently not well understood. Furthermore, the observed onset times of the extreme SEP events place the shock in the corona when the particles escape upstream. We have updated our focused transport theory model that has successfully been applied to the termination shock and traveling interplanetary shocks in the past to investigate extreme SEP acceleration in the solar corona. This model solves the time-dependent Focused Transport Equation including particle preheating due to the cross shock electric field and the divergence, adiabatic compression, and acceleration of the solar wind flow. Diffusive shock acceleration of SEPs is included via the first-order Fermi mechanism for parallel shocks. To investigate the effects of the solar corona on the acceleration of SEPs, we have included an empirical model for the plasma number density, temperature, and velocity. The shock acceleration process becomes highly time-dependent due to the rapid variation of these coronal properties with heliocentric distance. Additionally, particle interaction with MHD wave turbulence is modeled in terms of gyroresonant interactions with parallel propagating Alfven waves. However, previous modeling efforts suggest that the background amplitude of the solar wind turbulence is not sufficient to accelerate SEPs to extreme energies over the short time scales observed. To account for this, we have included the transport and self-consistent amplification of MHD waves by the SEPs through wave-particle gyroresonance. We will present the results of this extended model for a single fast quasi-parallel CME driven shock in the
Silvis, Maurits H
2015-01-01
Assuming a general constitutive relation for the turbulent stresses in terms of the local large-scale velocity gradient, we constructed a class of subgrid-scale models for large-eddy simulation that are consistent with important physical and mathematical properties. In particular, they preserve symmetries of the Navier-Stokes equations and exhibit the proper near-wall scaling. They furthermore show desirable dissipation behavior and are capable of describing nondissipative effects. We provided examples of such physically-consistent models and showed that existing subgrid-scale models do not all satisfy the desired properties.
Plasma Processes : A self-consistent kinetic modeling of a 1-D, bounded, plasma in equilibrium
Indian Academy of Sciences (India)
Monojoy Goswami; H Ramachandran
2000-11-01
A self-consistent kinetic treatment is presented here, where the Boltzmann equation is solved for a particle-conserving Krook collision operator. The resulting equations have been implemented numerically. The treatment solves for the entire quasineutral column, making no assumptions about $\lambda_{\rm mfp}/L$, where $\lambda_{\rm mfp}$ is the ion-neutral collision mean free path and $L$ the size of the device. Coulomb collisions are neglected in favour of collisions with neutrals, and the particle source is modeled as a uniform Maxwellian. Electrons are treated as an inertialess but collisional fluid. The ion distribution function for the trapped and the transiting orbits is obtained. Interesting findings include the anomalous heating of ions as they approach the presheath, the development of strongly non-Maxwellian features near the last mean free path, and strong modifications of the sheath criterion.
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment.
A three-dimensional PEM fuel cell model with consistent treatment of water transport in MEA
Meng, Hua
In this paper, a three-dimensional PEM fuel cell model with a consistent water transport treatment in the membrane electrode assembly (MEA) has been developed. In this new PEM fuel cell model, the conservation equation of the water concentration is solved in the gas channels, gas diffusion layers, and catalyst layers while a conservation equation of the water content is established in the membrane. These two equations are connected using a set of internal boundary conditions based on the thermodynamic phase equilibrium and flux equality at the interface of the membrane and the catalyst layer. The existing fictitious water concentration treatment, which assumes thermodynamic phase equilibrium between the water content in the membrane phase and the water concentration, is applied in the two catalyst layers to consider water transport in the membrane phase. Since all the other conservation equations are still developed and solved in the single-domain framework without resort to interfacial boundary conditions, the present new PEM fuel cell model is termed as a mixed-domain method. Results from this mixed-domain approach have been compared extensively with those from the single-domain method, showing good accuracy in terms of not only cell performances and current distributions but also water content variations in the membrane.
The fundamental solution for a consistent complex model of the shallow shell equations
Matthew P. Coleman
1999-01-01
The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for th...
A self-consistent linear-mode model of stellar convection
Macauslan, J.
1985-01-01
A normal-mode expansion of the linearized fluid equations in terms of a small subset of spherical harmonics can provide a foundation for a physically motivated, self-consistent description of a solar-type convection zone. In the absence of dissipation, a second-order differential equation governs the radial dependence of the modes, so that interpretation of the effects on convection quantities of the normal-form 'potential well' is straightforward. The philosophy is quite different from the more recent work of Narasimha and Antia (1982): all envelopes presented here differ substantially from MLT envelopes and, therefore, from theirs, which are constructed to be consistent with MLT. The amplitude of all modes is set by a Kelvin-Helmholtz ('shear') instability argument unrelated to solar observations, with the result that the convection description may be considered to arise from 'first heuristic principles'. The thermodynamics modelled vaguely resemble the Sun's, and more vigorously convective envelopes show some phenomena qualitatively like solar observations (e.g., atmospheric velocity spectra).
A Self-consistent and Spatially Dependent Model of the Multiband Emission of Pulsar Wind Nebulae
Lu, Fang-Wu; Gao, Quan-Gui; Zhang, Li
2017-01-01
A self-consistent and spatially dependent model is presented to investigate the multiband emission of pulsar wind nebulae (PWNe). In this model, a spherically symmetric system is assumed and the dynamical evolution of the PWN is included. The processes of convection, diffusion, adiabatic loss, radiative loss, and photon-photon pair production are taken into account in the electron's evolution equation, and the processes of synchrotron radiation, inverse Compton scattering, synchrotron self-absorption, and pair production are included in the photon's evolution equation. The two coupled equations are solved simultaneously. The model is applied to explain the observed results of the PWN in MSH 15-52. Our results show that the spectral energy distributions (SEDs) of both electrons and photons are functions of distance. The observed photon SED of MSH 15-52 can be well reproduced in this model. With the parameters obtained by fitting the observed SED, the spatial variations of the photon index and surface brightness observed in the X-ray band can also be well reproduced. Moreover, we derive a present-day diffusion coefficient of MSH 15-52 at the termination shock of κ0 = 6.6 × 10^24 cm^2 s^-1, with a spatial average of 1.4 × 10^25 cm^2 s^-1, and a present-day magnetic field at the termination shock of B0 = 26.6 μG, with a spatially averaged value of 14.9 μG. The spatial changes of the spectral index and surface brightness at different bands are predicted.
Directory of Open Access Journals (Sweden)
Damian M Cummings
2010-05-01
Since the identification of the gene responsible for HD (Huntington's disease), many genetic mouse models have been generated. Each employs a unique approach for delivery of the mutated gene and has a different CAG repeat length and background strain. The resultant diversity in the genetic context and phenotypes of these models has led to extensive debate regarding the relevance of each model to the human disorder. Here, we compare and contrast the striatal synaptic phenotypes of two models of HD, namely the YAC128 mouse, which carries the full-length huntingtin gene on a yeast artificial chromosome, and the CAG140 KI (knock-in) mouse, which carries a human/mouse chimaeric gene that is expressed in the context of the mouse genome, with our previously published data obtained from the R6/2 mouse, which is transgenic for exon 1 mutant huntingtin. We show that striatal MSNs (medium-sized spiny neurons) in YAC128 and CAG140 KI mice have similar electrophysiological phenotypes to that of the R6/2 mouse. These include a progressive increase in membrane input resistance, a reduction in membrane capacitance, a lower frequency of spontaneous excitatory postsynaptic currents, and a greater frequency of spontaneous inhibitory postsynaptic currents in a subpopulation of striatal neurons. Thus, despite differences in the context of the inserted gene between these three models of HD, the primary electrophysiological changes observed in striatal MSNs are consistent. The outcomes suggest that the changes are due to the expression of mutant huntingtin and that such alterations can be extended to the human condition.
Self-consistent models of quasi-relaxed rotating stellar systems
Varri, A L
2012-01-01
Two new families of self-consistent axisymmetric truncated equilibrium models for the description of quasi-relaxed rotating stellar systems are presented. The first extends the spherical King models to the case of solid-body rotation. The second is characterized by differential rotation, designed to be rigid in the central regions and to vanish in the outer parts, where the energy truncation becomes effective. The models are constructed by solving the nonlinear Poisson equation for the self-consistent mean-field potential. For rigidly rotating configurations, the solutions are obtained by an asymptotic expansion on the rotation strength parameter. The differentially rotating models are constructed by means of an iterative approach based on a Legendre series expansion of the density and the potential. The two classes of models exhibit complementary properties. The rigidly rotating configurations are flattened toward the equatorial plane, with deviations from spherical symmetry that increase with the distance f...
Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech
Directory of Open Access Journals (Sweden)
Petráš Rudolf
2014-06-01
Height-diameter models define the general relationship between tree height and diameter at each growth stage of a forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and the three-parameter Korf functions. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, so the final growth model has only one or two parameters to be estimated. These "free" parameters are then expressed in terms of the quadratic mean diameter, height, and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5- to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir, and 5,794 of beech. Nine regression models were derived for each species. Although the goodness of fit of all models showed that they were generally well suited to the data, the best results were obtained for silver fir: the coefficient of determination ranged from 0.946 to 0.948, RMSE (m) was in the interval 1.94-1.97, and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the regression parameter estimates obtained for beech were the least precise: the coefficient of determination for beech was 0.854-0.860, RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf's formula produced slightly better estimates than Michailoff's, and it proved immaterial which estimated parameter was fixed and which parameters
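As an illustration of the two-parameter Michailoff function h = 1.3 + a*exp(-b/d) used above (the data and parameter values below are synthetic, not the paper's): since ln(h - 1.3) is linear in 1/d, the parameters can be recovered by simple linear regression.

```python
import math

def fit_michailoff(d, h):
    """Fit the Michailoff height curve h = 1.3 + a*exp(-b/d) by regressing
    ln(h - 1.3) on 1/d (1.3 m is breast height, where diameter is measured)."""
    xs = [1.0 / di for di in d]
    ys = [math.log(hi - 1.3) for hi in h]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx)**2 for x in xs))
    a = math.exp(my - slope * mx)
    b = -slope
    return a, b

def michailoff(d, a, b):
    return 1.3 + a * math.exp(-b / d)

# synthetic check: generate heights from known parameters, then recover them
a_true, b_true = 38.0, 12.0              # illustrative values, not from the paper
diam = [8, 12, 16, 20, 26, 32, 40, 50]   # diameters (cm)
height = [michailoff(di, a_true, b_true) for di in diam]
a_hat, b_hat = fit_michailoff(diam, height)
print(round(a_hat, 3), round(b_hat, 3))
```

The generalized models in the paper go one step further by expressing such parameters through the quadratic mean diameter, mean height, and stand age, so that the fitted curve passes through the stand means.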
2013-01-01
Michael Falola, Department of Epidemiology, University of Alabama at Birmingham, Birmingham, AL, USA. I read with interest the article "Chronic obstructive pulmonary disease as a cardiovascular risk factor. Results of a case-control study (CONSISTE study)" by de Lucas-Ramos et al.1 In my opinion, the study did not use a case-control design, despite its title.
The Spectrum of the Baryon Masses in a Self-consistent SU(3) Quantum Skyrme Model
Jurciukonis, Darius; Regelskis, Vidas
2012-01-01
The semiclassical SU(3) Skyrme model is traditionally considered as describing a rigid quantum rotator with the profile function being fixed by the classical solution of the corresponding SU(2) Skyrme model. In contrast, we go beyond the classical profile function by quantizing the SU(3) Skyrme model canonically. The quantization of the model is performed in terms of the collective coordinate formalism and leads to the establishment of purely quantum corrections of the model. These new corrections are of fundamental importance. They are crucial in obtaining stable quantum solitons of the quantum SU(3) Skyrme model, thus making the model self-consistent and not dependent on the classical solution of the SU(2) case. We show that such a treatment of the model leads to a family of stable quantum solitons that describe the baryon octet and decuplet and reproduce the experimental values of their masses.
Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong
2016-05-01
The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on the PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual, room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a calculating method for PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than that achieved by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much smaller than that of the auxiliary lens group. A large form parameter of the CPM will introduce large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.
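The claim that a cubic mask remains cubic under uniform thermal expansion can be checked numerically: if every coordinate of the sag profile z = alpha*x**3 scales by s = 1 + eps*dT, the deformed surface is z' = (alpha/s**2)*x'**3, still a pure cubic with a rescaled coefficient. The coefficient, expansion factor and temperature rise below are assumed values, not the paper's:

```python
import numpy as np

alpha = 2.0e-4                       # assumed cubic sag coefficient
s = 1.0 + 7.0e-6 * 60.0              # assumed expansion coeff. and dT = 60 K
x = np.linspace(-5.0, 5.0, 201)
z = alpha * x ** 3                   # room-temperature cubic profile

x_hot, z_hot = s * x, s * z          # uniformly expanded profile
coeffs = np.polyfit(x_hot, z_hot, 3) # fit a cubic to the hot surface
alpha_hot = coeffs[0]                # should equal alpha / s**2 exactly
```

The fit recovers a pure cubic (the quadratic and linear coefficients vanish to machine precision), consistent with the abstract's statement that thermal deformation preserves the cubic form.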
A simplified benchmark Stock-Flow Consistent (SFC) post-Keynesian growth model
Cláudio H. dos Santos; Zezza, Gennaro
2007-01-01
Despite being arguably one of the most active areas of research in heterodox macroeconomics, the study of the dynamic properties of stock-flow consistent (SFC) growth models of financially sophisticated economies is still in its early stages. This paper attempts to offer a contribution to this line of research by presenting a simplified Post-Keynesian SFC growth model with well-defined dynamic properties, and using it to shed light on the merits and limitations of the current heterodox SFC li...
A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models
Holte, Sarah E.
2016-01-01
Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation for parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method since it is a modification of least squares methods known to be biased. Co...
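The direct (non-iterative) idea can be illustrated on a toy linear-in-parameters ODE: estimate derivatives from the data, then solve a single least-squares problem, instead of repeatedly integrating the ODE for each trial parameter. The decay model, rate and grid are invented for illustration, and no bias correction is shown because the data here are noise-free (with noisy data the derivative-based fit is biased, which is what the BCLS modification addresses):

```python
import numpy as np

# Recover k in dx/dt = -k*x directly from observed trajectories.
k_true = 0.7
t = np.linspace(0.0, 4.0, 81)
x = np.exp(-k_true * t)        # noise-free observations for clarity

dxdt = np.gradient(x, t)       # derivative estimate from the data
# Least squares for dxdt ~ -k*x  =>  k = -(x . dxdt) / (x . x)
k_hat = -np.dot(x, dxdt) / np.dot(x, x)
```

One linear solve replaces an iterative shooting or integration loop, which is the appeal of direct methods.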
Comment on Self-Consistent Model of Black Hole Formation and Evaporation
Ho, Pei-Ming
2015-01-01
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
Comment on self-consistent model of black hole formation and evaporation
Energy Technology Data Exchange (ETDEWEB)
Ho, Pei-Ming [Department of Physics and Center for Theoretical Sciences, Center for Advanced Study in Theoretical Sciences,National Taiwan University, Taipei 106, Taiwan, R.O.C. (China)
2015-08-18
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
Spatial coincidence modelling, automated database updating and data consistency in vector GIS.
Kufoniyi, O.
1995-01-01
This thesis presents formal approaches for automated database updating and consistency control in vector-structured spatial databases. To serve as a framework, a conceptual data model is formalized for the representation of geo-data from multiple map layers in which a map layer denotes a set of ter
Song, Y.; Wright, D.
1998-01-01
A formulation of the pressure gradient force for use in models with topography-following coordinates is proposed and diagnostically analyzed by Song. We investigate numerical consistency with respect to global energy conservation, depth-integrated momentum changes, and the representation of the bottom pressure torque.
Subjective Confidence in Perceptual Judgments: A Test of the Self-Consistency Model
Koriat, Asher
2011-01-01
Two questions about subjective confidence in perceptual judgments are examined: the bases for these judgments and the reasons for their accuracy. Confidence in perceptual judgments has been claimed to rest on qualitatively different processes than confidence in memory tasks. However, predictions from a self-consistency model (SCM), which had been…
Self-consistent modeling of radio-frequency plasma generation in stellarators
Moiseenko, V. E.; Stadnik, Yu. S.; Lysoivan, A. I.; Korovin, V. B.
2013-11-01
A self-consistent model of radio-frequency (RF) plasma generation in stellarators in the ion cyclotron frequency range is described. The model includes equations for the particle and energy balance and boundary conditions for Maxwell's equations. The equation of charged particle balance takes into account the influx of particles due to ionization and their loss via diffusion and convection. The equation of electron energy balance takes into account the RF heating power source, as well as energy losses due to the excitation and electron-impact ionization of gas atoms, energy exchange via Coulomb collisions, and plasma heat conduction. The deposited RF power is calculated by solving the boundary problem for Maxwell's equations. When describing the dissipation of the energy of the RF field, collisional absorption and Landau damping are taken into account. At each time step, Maxwell's equations are solved for the current profiles of the plasma density and plasma temperature. The calculations are performed for a cylindrical plasma. The plasma is assumed to be axisymmetric and homogeneous along the plasma column. The system of balance equations is solved using the Crank-Nicolson scheme. Maxwell's equations are solved in a one-dimensional approximation by using the Fourier transformation along the azimuthal and longitudinal coordinates. Results of simulations of RF plasma generation in the Uragan-2M stellarator by using a frame antenna operating at frequencies lower than the ion cyclotron frequency are presented. The calculations show that the slow wave generated by the antenna is efficiently absorbed at the periphery of the plasma column, due to which only a small fraction of the input power reaches the confinement region. As a result, the temperature on the axis of the plasma column remains low, whereas at the periphery it is substantially higher. This leads to strong absorption of the RF field at the periphery via the Landau mechanism.
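A minimal sketch of one Crank-Nicolson time step for the diffusive part of such a balance system (illustrative grid and coefficients, not the stellarator model itself). The scheme averages the explicit and implicit spatial operators, making it second-order accurate in time and unconditionally stable:

```python
import numpy as np

n, D, dt = 50, 1.0, 1e-4
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
r = D * dt / (2.0 * dx * dx)

u = np.sin(np.pi * x)                  # initial profile, zero at both ends

# Solve (I - r*L) u_new = (I + r*L) u, with L the second-difference stencil.
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
L[0, :] = 0.0
L[-1, :] = 0.0                         # Dirichlet boundary rows stay fixed
u_new = np.linalg.solve(np.eye(n) - r * L, (np.eye(n) + r * L) @ u)
```

The profile decays slightly each step (here by roughly exp(-pi^2 * D * dt)) while the boundary values are held fixed, which is the qualitative behavior a balance-equation solver needs at every time step.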
Toward self-consistent tectono-magmatic numerical model of rift-to-ridge transition
Gerya, Taras; Bercovici, David; Liao, Jie
2017-04-01
Natural data from modern and ancient lithospheric extension systems suggest a three-dimensional (3D) character of deformation and a complex relationship between magmatism and tectonics during the entire rift-to-ridge transition. Therefore, self-consistent high-resolution 3D magmatic-thermomechanical numerical approaches stand as a minimum complexity requirement for modeling and understanding of this transition. Here we present results from our new high-resolution 3D finite-difference marker-in-cell rift-to-ridge models, which account for magmatic accretion of the crust and use non-linear strain-weakened visco-plastic rheology of rocks that couples brittle/plastic failure and ductile damage caused by grain size reduction. Numerical experiments suggest that the nucleation of rifting and of ridge-transform patterns is decoupled in both space and time. At intermediate stages, the two patterns can coexist and interact, which triggers development of detachment faults, failed rift arms, hyper-extended margins and oblique proto-transforms. En echelon rift patterns typically develop in the brittle upper-middle crust whereas proto-ridge and proto-transform structures nucleate in the lithospheric mantle. These deep proto-structures propagate upward, inter-connect and rotate toward mature orthogonal ridge-transform patterns on the timescale of millions of years during incipient thermal-magmatic accretion of the new oceanic-like lithosphere. Ductile damage of the extending lithospheric mantle caused by grain size reduction assisted by Zener pinning plays a critical role in the rift-to-ridge transition by stabilizing detachment faults and transform structures. Numerical results compare well with observations from incipient spreading regions and passive continental margins.
Directory of Open Access Journals (Sweden)
Damiano Monelli
2010-11-01
We present here two self-consistent implementations of a short-term earthquake probability (STEP) model that produces daily seismicity forecasts for the area of the Italian national seismic network. Both implementations combine a time-varying and a time-invariant contribution, for which we assume that the instrumental Italian earthquake catalog provides the best information. For the time-invariant contribution, the catalog is declustered using the clustering technique of the STEP model; the smoothed seismicity model is generated from the declustered catalog. The time-varying contribution is what distinguishes the two implementations: (1) for one implementation (STEP-LG), the original model parameterization and estimation is used; (2) for the other (STEP-NG), the mean abundance method is used to estimate aftershock productivity. In the STEP-NG implementation, earthquakes with magnitude up to ML = 6.2 are expected to be less productive compared to the STEP-LG implementation, whereas larger earthquakes are expected to be more productive. We have retrospectively tested the performance of these two implementations and applied likelihood tests to evaluate their consistency with observed earthquakes. Both implementations were consistent with the observed earthquake data in space; STEP-NG performed better than STEP-LG in terms of forecast rates. More generally, we found that testing earthquake forecasts issued at regular intervals does not test the full power of clustering models, and future experiments should allow for more frequent forecasts starting at the times of triggering events.
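The likelihood tests mentioned above are typically Poisson over space (or space-magnitude) bins: a forecast supplies an expected count per bin, and the joint log-likelihood of the observed counts scores it. The rates and counts below are invented to show the mechanics:

```python
import math

def poisson_loglik(rates, counts):
    """Joint Poisson log-likelihood of observed counts given forecast
    rates, summed over independent bins."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

forecast_a = [0.5, 1.2, 0.1, 2.0]   # expected counts per bin (hypothetical)
forecast_b = [1.0, 1.0, 1.0, 1.0]   # a flat reference forecast
observed   = [1, 1, 0, 2]

# The forecast whose rates better match the observations scores higher.
a_wins = poisson_loglik(forecast_a, observed) > poisson_loglik(forecast_b, observed)
```

Comparing two implementations (as STEP-LG vs. STEP-NG above) then amounts to comparing such scores on the same observed catalog.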
Altmeyer, Guillaume; Panicaud, Benoit; Rouhaud, Emmanuelle; Wang, Mingchuan; Roos, Arjen; Kerner, Richard
2016-11-01
When constructing viscoelastic models, rate-form relations appear naturally to relate strain and stress tensors. One has to ensure that these tensors and their rates are indifferent with respect to the change of observers and to the superposition with rigid body motions. Objective transports are commonly accepted to ensure this invariance. However, the large number of transport operators developed makes the choice often difficult for the user and may lead to physically inconsistent formulation of hypoelasticity. In this paper, a methodology based on the use of the Lie derivative is proposed to model consistent hypoelasticity as an equivalent incremental formulation of hyperelasticity. Both models are shown to be reversible and completely equivalent. Extension to viscoelasticity is then proposed from this consistent model by associating consistent hypoelastic models with viscous behavior. As an illustration, Mooney-Rivlin nonlinear elasticity is coupled with Newton viscosity and a Maxwell-like material is investigated. Numerical solutions are then presented to illustrate a viscoelastic material subjected to finite deformations for a large range of strain rates.
Rudzinski, Joseph F; Bereau, Tristan
2016-01-01
Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically ...
Zhang, Zhen; Guo, Chonghui
2016-08-01
Due to the uncertainty of the decision environment and the lack of knowledge, decision-makers may use uncertain linguistic preference relations to express their preferences over alternatives and criteria. For group decision-making problems with preference relations, it is important to consider the individual consistency and the group consensus before aggregating the preference information. In this paper, consistency and consensus models for group decision-making with uncertain 2-tuple linguistic preference relations (U2TLPRs) are investigated. First of all, a formula which can construct a consistent U2TLPR from the original preference relation is presented. Based on the consistent preference relation, the individual consistency index for a U2TLPR is defined. An iterative algorithm is then developed to improve the individual consistency of a U2TLPR. To help decision-makers reach consensus in group decision-making under uncertain linguistic environment, the individual consensus and group consensus indices for group decision-making with U2TLPRs are defined. Based on the two indices, an algorithm for consensus reaching in group decision-making with U2TLPRs is also developed. Finally, two examples are provided to illustrate the effectiveness of the proposed algorithms.
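A scalar sketch of the consistency-index idea follows. The paper's uncertain 2-tuple linguistic machinery is richer; here plain numeric preference values p[i][j] in [0,1] with p[i][j] + p[j][i] = 1 and additive consistency stand in for illustration:

```python
import numpy as np

def consistent_relation(p):
    """Construct an additively consistent relation from p via
    p_bar[i][j] = mean_k (p[i][k] + p[k][j]) - 0.5."""
    n = len(p)
    return np.array([[np.mean([p[i][k] + p[k][j] for k in range(n)]) - 0.5
                      for j in range(n)] for i in range(n)])

def consistency_index(p):
    """1 means perfectly consistent; lower values mean larger deviation
    of p from its constructed consistent counterpart."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.mean(np.abs(consistent_relation(p) - p))

# This relation satisfies p[i][j] = p[i][k] + p[k][j] - 0.5 for all i,j,k:
p = np.array([[0.5, 0.7, 0.9],
              [0.3, 0.5, 0.7],
              [0.1, 0.3, 0.5]])
```

An iterative improvement algorithm like the one in the paper repeatedly moves the original relation toward its constructed consistent counterpart until the index exceeds a threshold.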
Consistency maintenance for constraint in role-based access control model
Institute of Scientific and Technical Information of China (English)
韩伟力; 陈刚; 尹建伟; 董金祥
2002-01-01
Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in RBAC model. Based on researches of constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper introduces briefly the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.
A New Hierarchy of Phylogenetic Models Consistent with Heterogeneous Substitution Rates.
Woodhams, Michael D; Fernández-Sánchez, Jesús; Sumner, Jeremy G
2015-07-01
When the process underlying DNA substitutions varies across evolutionary history, some standard Markov models underlying phylogenetic methods are mathematically inconsistent. The most prominent example is the general time-reversible model (GTR) together with some, but not all, of its submodels. To rectify this deficiency, nonhomogeneous Lie Markov models have been identified as the class of models that are consistent in the face of a changing process of DNA substitutions regardless of taxon sampling. Some well-known models in popular use are within this class, but are either overly simplistic (e.g., the Kimura two-parameter model) or overly complex (the general Markov model). On a diverse set of biological data sets, we test a hierarchy of Lie Markov models spanning the full range of parameter richness. Compared against the benchmark of the ever-popular GTR model, we find that as a whole the Lie Markov models perform well, with the best performing models having 8-10 parameters and the ability to recognize the distinction between purines and pyrimidines. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
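The Kimura two-parameter (K2P) model named above is a convenient member of this class to sketch: transitions (A&lt;-&gt;G, C&lt;-&gt;T) occur at one rate, transversions at another, and branch transition probabilities come from the matrix exponential of the rate matrix. The rates and branch length below are arbitrary:

```python
import numpy as np

def k2p_rate_matrix(a, b):
    """K2P rate matrix in the order A, G, C, T: transitions at rate a,
    transversions at rate b; diagonal set so rows sum to zero."""
    Q = np.array([[0.0, a, b, b],
                  [a, 0.0, b, b],
                  [b, b, 0.0, a],
                  [b, b, a, 0.0]])
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def transition_probs(Q, t):
    """P(t) = expm(Q*t) via eigendecomposition (Q is symmetric here)."""
    w, V = np.linalg.eigh(Q)
    return V @ np.diag(np.exp(w * t)) @ V.T

Q = k2p_rate_matrix(0.4, 0.1)
P = transition_probs(Q, 2.0)   # a stochastic matrix for branch length 2
```

Because K2P rate matrices form a closed family under matrix multiplication of their exponentials, the model stays well-defined even when rates change along the tree, which is the Lie Markov consistency property the abstract describes.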
Self-consistent chaotic transport in a high-dimensional mean-field Hamiltonian map model
Martínez-del-Río, D; Olvera, A; Calleja, R
2016-01-01
Self-consistent chaotic transport is studied in a Hamiltonian mean-field model. The model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of $N$ coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of th...
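The structure of such a self-consistent map system can be sketched as follows. The coupling through the mean phase factor is an assumed minimal form for illustration, not the paper's exact parameterization: the kick amplitude and phase are not constants, as in the standard map, but are recomputed each step from the particles themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
theta = rng.uniform(0.0, 2.0 * np.pi, N)   # angle of each map/particle
p = rng.uniform(-0.5, 0.5, N)              # conjugate momentum

for _ in range(200):
    # Mean field generated collectively by all degrees of freedom.
    mean_field = np.mean(np.exp(1j * theta))
    K = np.abs(mean_field)                 # dynamical kick amplitude
    phase = np.angle(mean_field)           # dynamical kick phase
    p = p + K * np.sin(theta - phase)      # standard-map-like kick
    theta = (theta + p) % (2.0 * np.pi)    # twist step
```

If the particles organize coherently, `K` grows and the kick strengthens; if they phase-mix, `K` collapses, which is the feedback that distinguishes self-consistent transport from the fixed-amplitude standard map.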
How consistent is cloudiness over Canada from satellite observations and modeling data?
Trishchenko, A. P.; Khlopenkov, K.; Latifovic, R.
2004-05-01
Being one of the major modulators of the radiation budget and the hydrological cycle, clouds still pose a significant challenge for modeling and satellite retrievals. For example, our analysis shows that for Western Canada the systematic difference in total cloud amounts between NCAR/NCEP Reanalysis-2 and ISCCP reaches 20-30 per cent. Satellite retrievals are especially difficult for Northern climate regions over snow-covered surfaces and during night-time. To better understand these differences and their influence on the earth radiation budget in Northern latitudes, we are attempting to undertake a re-analysis of satellite AVHRR data over Canada using improved data processing and cloud detection algorithms. Details of the cloud detection algorithm for day-time and night-time conditions over snow-free and snow-covered surfaces are discussed. Selected results of satellite retrievals for typical summer and winter conditions over Canada are compared to previous analyses, such as the ISCCP and Pathfinder projects. Consistency between our cloud retrievals using AVHRR data and those available from MODIS will also be considered.
Institute of Scientific and Technical Information of China (English)
None
2009-01-01
Semiparametric reproductive dispersion nonlinear model (SRDNM) is an extension of nonlinear reproductive dispersion models and semiparametric nonlinear regression models, and includes the semiparametric nonlinear model and the semiparametric generalized linear model as its special cases. Based on the local kernel estimate of the nonparametric component, profile-kernel and backfitting estimators of the parameters of interest are proposed in SRDNM, and a theoretical comparison of both estimators is also investigated in this paper. Under some regularity conditions, the strong consistency and asymptotic normality of the two estimators are proved. It is shown that the backfitting method produces a larger asymptotic variance than the profile-kernel method. A simulation study and a real example are used to illustrate the proposed methodologies.
A control-oriented self-consistent model of an inductively-coupled plasma
Keville, Bernard; Turner, Miles
2009-10-01
An essential first step in the design of real-time control algorithms for plasma processes is to determine dynamical relationships between actuator quantities, such as gas flow rate set points, and plasma states, such as electron density. An ideal first-principles-based, control-oriented model should exhibit the simplicity and computational requirements of an empirical model and, in addition, despite sacrificing first-principles detail, capture enough of the essential physics and chemistry of the process to provide reasonably accurate qualitative predictions. This presentation describes a control-oriented model of a cylindrical low-pressure planar inductive discharge with a stove-top antenna. The model consists of an equivalent circuit coupled to a global model of the plasma chemistry to produce a self-consistent zero-dimensional model of the discharge. The non-local plasma conductivity and the fields in the plasma are determined from the wave equation and the two-term solution of the Boltzmann equation. Expressions for the antenna impedance and the parameters of the transformer equivalent circuit in terms of the isotropic electron distribution and the geometry of the chamber are presented.
Consistent increase in Indian monsoon rainfall and its variability across CMIP-5 models
Directory of Open Access Journals (Sweden)
A. Menon
2013-01-01
The possibility of an impact of global warming on the Indian monsoon is of critical importance for the large population of this region. Future projections within the Coupled Model Intercomparison Project Phase 3 (CMIP-3) showed a wide range of trends with varying magnitude and sign across models. Here the Indian summer monsoon rainfall is evaluated in 20 CMIP-5 models for the period 1850 to 2100. In the new generation of climate models, a consistent increase in seasonal mean rainfall during the summer monsoon periods arises. All models simulate stronger seasonal mean rainfall in the future compared to the historic period under the strongest warming scenario, RCP-8.5. The increase in seasonal mean rainfall is largest for the RCP-8.5 scenario compared to the other RCPs. The interannual variability of the Indian monsoon rainfall also shows a consistent positive trend under unabated global warming. Since both the long-term increase in monsoon rainfall and the increase in its interannual variability are robust across a wide range of models, some confidence can be attributed to these projected trends.
Yeaman, Andrew R. J.
The Fishbein and Ajzen model of attitude-behavior consistency was applied to 56 undergraduates learning to use a microcomputer. Two levels of context for this act were compared: the students' beliefs about themselves, and their beliefs about people in general. The results indicated that students' beliefs were good predictors of their behavioral…
Non-Perturbative Self-Consistent Model in SU(N) Gauge Field Theory
Directory of Open Access Journals (Sweden)
Koshelkin A.V.
2012-06-01
A non-perturbative quasi-classical model in a gauge theory with the Yang-Mills (YM) field is developed. The self-consistent solutions of the Dirac equation in the SU(N) gauge field, which is taken in the eikonal approximation, and of the Yang-Mills equations containing the external fermion current are obtained. It is shown that the developed model has self-consistent solutions of the Dirac and Yang-Mills equations at N ≥ 3. In this way, the solutions take place provided that the fermion and gauge fields exist simultaneously, so that the fermion current completely compensates the current generated by the gauge field due to its self-interaction.
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
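A hedged sketch of an eddy-viscosity term driven by the vortex-stretching magnitude |S·omega|, in the spirit of the model type the abstract mentions; the constant and the normalization are assumptions, not the paper's derivation:

```python
import numpy as np

def eddy_viscosity(G, delta, C=0.5):
    """Toy subgrid eddy viscosity from the velocity gradient tensor G and
    filter width delta, scaled by the vortex stretching |S @ omega|."""
    S = 0.5 * (G + G.T)                      # strain-rate tensor
    omega = np.array([G[2, 1] - G[1, 2],     # vorticity vector from the
                      G[0, 2] - G[2, 0],     # antisymmetric part of G
                      G[1, 0] - G[0, 1]])
    stretching = np.linalg.norm(S @ omega)
    norm_S = np.linalg.norm(S)
    if norm_S == 0.0:
        return 0.0
    # (C*delta)^2 times an assumed velocity-gradient frequency scale.
    return (C * delta) ** 2 * stretching / norm_S
```

Note the built-in physical properties the framework asks for: the viscosity vanishes for solid-body rotation (S = 0) and for irrotational strain (omega = 0), so the model only dissipates where strain and vorticity interact.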
Institute of Scientific and Technical Information of China (English)
John Jack P. RIEGEL III; David DAVISON
2016-01-01
Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D=10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a baseline with a full
Directory of Open Access Journals (Sweden)
John (Jack) P. Riegel III
2016-04-01
Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a
Institute of Scientific and Technical Information of China (English)
YIN Changming; ZHAO Lincheng; WEI Chengdong
2006-01-01
In a generalized linear model with $q \times 1$ responses, bounded and fixed (or adaptive) $p \times q$ regressors $Z_i$ and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_iZ_i'$, a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
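The estimating equation behind a maximum quasi-likelihood estimate can be made concrete with a small numerical sketch. The logit link, simulated data shapes, and Newton iteration below are illustrative assumptions (the paper treats general links and q-dimensional responses), not the authors' setup:

```python
import numpy as np

def mqle_logistic(Z, y, n_iter=25):
    """Newton (Fisher scoring) sketch for the maximum quasi-likelihood
    estimate in a GLM with canonical (logit) link: solves the quasi-score
    equation sum_i Z_i (y_i - mu_i) = 0.
    Z: (n, p) regressors, y: (n,) binary responses."""
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-Z @ beta))   # mean function for the logit link
        W = mu * (1.0 - mu)                    # variance function
        score = Z.T @ (y - mu)                 # quasi-score
        info = Z.T @ (Z * W[:, None])          # information matrix
        beta += np.linalg.solve(info, score)
    return beta
```

For the canonical link the quasi-score coincides with the likelihood score, so this is ordinary Fisher scoring; the consistency result above says such estimates converge to the true parameter vector as n grows.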
A thermodynamically consistent model of the post-translational Kai circadian clock
Lubensky, David K.; ten Wolde, Pieter Rein
2017-01-01
The principal pacemaker of the circadian clock of the cyanobacterium S. elongatus is a protein phosphorylation cycle consisting of three proteins, KaiA, KaiB and KaiC. KaiC forms a homohexamer, with each monomer consisting of two domains, CI and CII. Both domains can bind and hydrolyze ATP, but only the CII domain can be phosphorylated, at two residues, in a well-defined sequence. While this system has been studied extensively, how the clock is driven thermodynamically has remained elusive. Inspired by recent experimental observations and building on ideas from previous mathematical models, we present a new, thermodynamically consistent, statistical-mechanical model of the clock. At its heart are two main ideas: i) ATP hydrolysis in the CI domain provides the thermodynamic driving force for the clock, switching KaiC between an active conformational state in which its phosphorylation level tends to rise and an inactive one in which it tends to fall; ii) phosphorylation of the CII domain provides the timer for the hydrolysis in the CI domain. The model also naturally explains how KaiA, by acting as a nucleotide exchange factor, can stimulate phosphorylation of KaiC, and how the differential affinity of KaiA for the different KaiC phosphoforms generates the characteristic temporal order of KaiC phosphorylation. As the phosphorylation level in the CII domain rises, the release of ADP from CI slows down, making the inactive conformational state of KaiC more stable. In the inactive state, KaiC binds KaiB, which not only stabilizes this state further, but also leads to the sequestration of KaiA, and hence to KaiC dephosphorylation. Using a dedicated kinetic Monte Carlo algorithm, which makes it possible to efficiently simulate this system consisting of more than a billion reactions, we show that the model can describe a wealth of experimental data. PMID:28296888
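The abstract's "dedicated kinetic Monte Carlo algorithm" is not specified here; as a sketch of the underlying idea, the following is a generic Gillespie direct-method simulator, with a hypothetical two-state phosphorylation toggle standing in for the (far larger) Kai reaction network:

```python
import math
import random

def gillespie(propensities, update, state, t_end, seed=1):
    """Direct-method kinetic Monte Carlo (Gillespie) sketch.

    propensities: function state -> list of reaction rates
    update: function (state, reaction_index) -> new state
    Returns the trajectory as a list of (time, state) pairs."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, state)]
    while t < t_end:
        rates = propensities(state)
        total = sum(rates)
        if total == 0.0:
            break  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / total  # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for i, a in enumerate(rates):
            acc += a
            if r <= acc:                 # pick reaction i with probability a/total
                state = update(state, i)
                break
        trajectory.append((t, state))
    return trajectory

# Toy two-state cycle U <-> P with hypothetical rates (not the Kai model)
props = lambda s: [2.0 if s == "U" else 0.0, 1.0 if s == "P" else 0.0]
step = lambda s, i: "P" if i == 0 else "U"
traj = gillespie(props, step, "U", t_end=10.0)
```

The published model's efficiency trick for handling over a billion reactions is not reproduced here; this only illustrates the exact stochastic propagation that such algorithms build on.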
Self-consistent Dark Matter simplified models with an s-channel scalar mediator
Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.
2017-03-01
We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.
Quantal self-consistent cranking model for monopole excitations in even-even light nuclei
Gulshani, P
2014-01-01
In this article, we derive a quantal self-consistent time-reversal invariant cranking model for isoscalar monopole excitation coupled to intrinsic motion in even-even light nuclei. The model uses a wavefunction that is a product of monopole and intrinsic wavefunctions and a constrained variational method to derive, from a many-particle Schrödinger equation, a pair of coupled self-consistent cranking-type Schrödinger equations for the monopole and intrinsic systems. The monopole and intrinsic wavefunctions are coupled to each other by the two cranking equations and their associated parameters and by two constraints imposed on the intrinsic system. For an isotropic Nilsson shell model and an effective residual two-body interaction, the two coupled cranking equations are solved in the Tamm-Dancoff approximation. The strength of the interaction is determined from a Hartree-Fock self-consistency argument. The excitation energy of the first excited state is determined and found to agree closely with those observed ...
Predicting giant magnetoresistance using a self-consistent micromagnetic diffusion model
Abert, Claas; Bruckner, Florian; Vogler, Christoph; Praetorius, Dirk; Suess, Dieter
2015-01-01
We propose a self-consistent micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. The model and its finite-element implementation are validated by current-driven motion of a magnetic vortex structure. Potential calculations for a magnetic multilayer structure with perpendicular current flow confirm experimental findings of a non-sinusoidal dependence of the resistivity on the tilting angle of the magnetization in the different layers. While the sinusoidal dependence is observed in certain material-parameter limits, a realistic choice of these parameters leads to a notably narrower distribution.
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
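As a minimal illustration of the kind of numerical method involved, the sketch below applies a monotone Godunov-type finite-volume scheme to the pure hindered-settling (batch) limit of such a convection PDE. The flux form and all parameter values are hypothetical, and the compression, dispersion, and feed-source terms of the full SST model are omitted:

```python
import numpy as np

def batch_settling(u0, v0=1e-4, n_exp=2.0, H=1.0, T=1000.0, N=100):
    """Explicit Godunov scheme sketch for the Kynch batch-settling equation
    u_t + f(u)_x = 0 with a hindered-settling flux f(u) = v0*u*(1-u)**n_exp.
    Zero flux is imposed at the top (x=0) and bottom (x=H) of the column."""
    f = lambda u: v0 * u * (1.0 - u) ** n_exp
    dx = H / N
    # CFL time step based on a numerical bound on |f'(u)|
    s = np.linspace(0.0, 1.0, 201)
    fmax = np.max(np.abs(np.gradient(f(s), s)))
    dt = 0.5 * dx / fmax

    def godunov(ul, ur):
        # Godunov flux for a possibly non-convex flux: min/max over [ul, ur]
        lo, hi = min(ul, ur), max(ul, ur)
        pts = np.linspace(lo, hi, 21)
        return f(pts).min() if ul <= ur else f(pts).max()

    u = np.full(N, u0)
    t = 0.0
    while t < T:
        F = np.zeros(N + 1)  # interface fluxes; boundary entries stay 0
        for i in range(1, N):
            F[i] = godunov(u[i - 1], u[i])
        u -= dt / dx * (F[1:] - F[:-1])
        t += dt
    return u  # concentration profile, top to bottom
```

The scheme is conservative by construction: with zero-flux boundaries the total mass in the column is preserved exactly, one of the reliability requirements the methodology emphasizes.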
Directory of Open Access Journals (Sweden)
J. G. Fyke
2013-04-01
A new technique for generating preindustrial (1850) ice sheet initial conditions for coupled ice-sheet/climate models is developed and demonstrated over the Greenland Ice Sheet using the Community Earth System Model (CESM). Paleoclimate end-member simulations and ice core data are used to derive continuous surface mass balance fields, which are used to force a long transient ice sheet model simulation. The procedure accounts for the evolution of climate through the last glacial period and converges to a simulated preindustrial (1850) ice sheet that is geometrically and thermodynamically consistent with the simulated 1850 preindustrial CESM state, yet contains a transient memory of past climate that compares well to observations and independent model studies. This allows future coupled ice-sheet/climate projections of climate change to integrate the effect of past climate conditions on the state of the Greenland Ice Sheet, while maintaining system-wide continuity between past and future climate simulations.
Self-consistent tight-binding model of B and N doping in graphene
DEFF Research Database (Denmark)
Pedersen, Thomas Garm; Pedersen, Jesper Goor
2013-01-01
Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization...
Directory of Open Access Journals (Sweden)
Oliver M. D. Lutz
2014-12-01
Especially for larger molecules relevant to life sciences, vibrational self-consistent field (VSCF) calculations can become unmanageably demanding even when only first- and second-order potential coupling terms are considered. This paper investigates to what extent the grid density of the VSCF's underlying potential energy surface can be reduced without sacrificing the accuracy of the resulting wavenumbers. Including single-mode and pair contributions, a reduction to eight points per mode did not introduce a significant deviation but improved the computational efficiency by a factor of four. A mean unsigned deviation of 1.3% from experiment could be maintained for the fifteen molecules under investigation, and the approach was found to be applicable to rigid, semi-rigid and soft vibrational problems alike. Deprotonated phosphoserine, stabilized by two intramolecular hydrogen bonds, was investigated as an exemplary application.
Pisnichenko, I A
2007-01-01
The regional climate model derived from the Eta WS (workstation) forecast model has been integrated over South America with a horizontal resolution of 40 km for the period 1961-1977. The model was forced at its lateral boundaries by the outputs of HadAMP. The HadAMP data represent a simulation of the modern climate with a resolution of about 150 km. In order to prepare the regional climate model from the Eta forecast model, new blocks were added and multiple modifications and corrections were made to the original model. The climate Eta model was run on the SX-6 supercomputer. The detailed analysis of the results of the dynamical downscaling experiment includes an investigation of the consistency between the regional and AGCM models, as well as of the ability of the regional model to resolve important features of climate fields on a finer scale than that resolved by the AGCM. In this work we show the results of our investigation of the consistency of the output fields of the Eta model and HadAMP. We have analysed geo...
Ataman, Meric; Hernandez Gardiol, Daniel F; Fengos, Georgios; Hatzimanikatis, Vassily
2017-07-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise the transferability of the findings and also the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently-reduced" models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models.
On the (in)consistency of a multi-model ensemble of the past 30 years land surface state.
Dutra, Emanuel; Schellekens, Jaap; Beck, Hylke; Balsamo, Gianpaolo
2016-04-01
Global land-surface and hydrological models are a fundamental tool in understanding the land-surface state and evolution, either coupled to atmospheric models for climate and weather predictions or in stand-alone mode. In this study we take a recently developed dataset consisting of stand-alone simulations by 10 global hydrological and land surface models sharing the same atmospheric forcing for the period 1979-2012 (the eart2Observe dataset). This multi-model ensemble provides the first freely available dataset at such a spatial/temporal scale, allowing for a characterization of multi-model properties such as inter-model consistency and the error-spread relationship. We present a metric for ensemble consistency using the concept of potential predictability, which can be interpreted as a proxy for multi-model agreement. Initial results point to regions of low inter-model agreement in the polar and tropical regions, the latter also present when comparing globally available precipitation datasets. In addition, the discharge ensemble spread around the ensemble mean was compared to the error of the ensemble mean for several large-scale and small-scale basins. This showed a general underestimation of the ensemble spread, particularly in tropical basins, suggesting that the current dataset lacks a representation of the precipitation uncertainty in the input meteorological data.
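A spread-error comparison like the one described can be sketched in a few lines. The function below is an illustrative metric (the ratio of ensemble spread to the RMSE of the ensemble mean), not the potential-predictability statistic actually used in the study:

```python
import numpy as np

def spread_error_ratio(ensemble, reference):
    """Sketch of a spread-error consistency check for a multi-model ensemble.

    ensemble: array of shape (n_models, n_times), e.g. simulated discharge
    reference: array of shape (n_times,), e.g. observed discharge
    A well-calibrated ensemble has a ratio near 1; a ratio well below 1
    means the spread under-estimates the error of the ensemble mean
    (the behaviour reported above for tropical basins)."""
    mean = ensemble.mean(axis=0)
    spread = np.sqrt(ensemble.var(axis=0, ddof=1).mean())
    rmse = np.sqrt(((mean - reference) ** 2).mean())
    return spread / rmse
```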
Relativistic Consistent Angular-Momentum Projected Shell-Model: Relativistic Mean Field
Institute of Scientific and Technical Information of China (English)
LI Yan-Song; LONG Gui-Lu
2004-01-01
We develop a relativistic nuclear structure model, the relativistic consistent angular-momentum projected shell model (RECAPS), which combines relativistic mean-field theory with the angular-momentum projection method. In this new model, nuclear ground-state properties are first calculated consistently using relativistic mean-field (RMF) theory. Then the angular-momentum projection method is used to project out states with good angular momentum from a few important configurations. By diagonalizing the Hamiltonian, the energy levels and wave functions are obtained. This model is a new attempt at understanding the nuclear structure of normal nuclei and at predicting nuclear properties of nuclei far from stability. In this paper, we describe the treatment of the relativistic mean field. A computer code, RECAPS-RMF, is developed. It solves the relativistic mean field with axially symmetric deformation in the spherical harmonic oscillator basis. Comparisons between our calculations and existing relativistic mean-field calculations are made to test the model. These include the ground-state properties of the spherical nuclei 16O and 208Pb and the deformed nucleus 20Ne. Good agreement is obtained.
Ring current Atmosphere interactions Model with Self-Consistent Magnetic field
Energy Technology Data Exchange (ETDEWEB)
2016-09-09
The Ring current Atmosphere interactions Model with Self-Consistent magnetic field (B) is a unique code that combines a kinetic model of ring current plasma with a three-dimensional force-balanced model of the terrestrial magnetic field. The kinetic portion, RAM, solves the kinetic equation to yield the bounce-averaged distribution function as a function of azimuth, radial distance, energy and pitch angle for three ion species (H+, He+, and O+) and, optionally, electrons. The domain is a circle in the Solar-Magnetic (SM) equatorial plane with a radial span of 2 to 6.5 RE. It has an energy range of approximately 100 eV to 500 keV. The 3-D force-balanced magnetic field model, SCB, balances the J×B force with the divergence of the general pressure tensor to calculate the magnetic field configuration within its domain. The domain ranges from near the Earth's surface, where the field is assumed dipolar, to the shell created by field lines passing through the SM equatorial plane at a radial distance of 6.5 RE. The two codes work in tandem, with RAM providing anisotropic pressure to SCB and SCB returning the self-consistent magnetic field through which RAM plasma is advected.
Silvis, Maurits H; Verstappen, Roel
2016-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that subgrid-scale models should be consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...
Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.; Waas, Anthony M.
2013-01-01
A mesh-objective crack band model was implemented within the generalized method of cells micromechanics theory. This model was linked to a macroscale finite element model to predict post-peak strain softening in composite materials. Although a mesh-objective theory was implemented at the microscale, it does not preclude pathological mesh dependence at the macroscale. To ensure mesh objectivity at both scales, the energy density and the energy release rate must be preserved identically across the two scales. This requires a consistent characteristic length, or localization limiter. The effects of scaling (or not scaling) the dimensions of the microscale repeating unit cell (RUC) according to the macroscale element size in a multiscale analysis were investigated using two examples. Additionally, the ramifications of the macroscale element shape, compared to the RUC, were studied.
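The role of the characteristic length can be illustrated with the classic crack band bookkeeping: to keep the energy release rate mesh independent, the energy dissipated per unit volume must scale inversely with the localization length. The linear softening law and parameter names below are a hypothetical sketch, not the generalized-method-of-cells implementation:

```python
def crack_band_softening_slope(G_f, sigma_c, E, h):
    """Crack band sketch: choose the post-peak softening modulus so that the
    energy dissipated per unit volume, g_f = G_f / h, is independent of the
    element (band) size h. Linear softening is assumed.

    G_f: fracture energy per unit area, sigma_c: peak stress,
    E: elastic modulus, h: characteristic element length.
    Returns the (negative) tangent modulus of the softening branch."""
    g_f = G_f / h                        # energy density available to dissipate
    eps_c = sigma_c / E                  # strain at peak stress
    eps_f = 2.0 * g_f / sigma_c          # total strain at complete failure
    if eps_f <= eps_c:
        raise ValueError("element too large for this G_f: snap-back; reduce h")
    return -sigma_c / (eps_f - eps_c)    # slope of the linear softening branch
```

Because eps_f shrinks as h grows, larger elements soften more steeply, so the total dissipated energy per unit crack area stays equal to G_f regardless of mesh size, which is the localization-limiter idea referenced above.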
Consistent neutron star models with magnetic field dependent equations of state
Chatterjee, Debarati; Novak, Jerome; Oertel, Micaela
2014-01-01
We present a self-consistent model for the study of the structure of a neutron star in strong magnetic fields. Starting from a microscopic Lagrangian, this model includes the effect of the magnetic field on the equation of state, the interaction of the electromagnetic field with matter (magnetisation), and anisotropies in the energy-momentum tensor, as well as general relativistic aspects. We build numerical axisymmetric stationary models and show the applicability of the approach with one example quark matter equation of state (EoS) often employed in the recent literature for studies of strongly magnetised neutron stars. For this EoS, the inclusion of the magnetic field dependence or the magnetisation does not increase the maximum mass significantly, in contrast to what has been claimed by previous studies.
A self consistent chemically stratified atmosphere model for the roAp star 10 Aquilae
Nesvacil, Nicole; Ryabchikova, Tanya A; Kochukhov, Oleg; Akberov, Artur; Weiss, Werner W
2012-01-01
Context: Chemically peculiar A-type (Ap) stars are a subgroup of the CP2 stars which exhibit anomalous overabundances of numerous elements, e.g. Fe, Cr, Sr and rare earth elements. The pulsating subgroup of the Ap stars, the roAp stars, presents ideal laboratories to observe and model pulsational signatures as well as the interplay of the pulsations with strong magnetic fields and vertical abundance gradients. Aims: Based on high resolution spectroscopic observations and observed stellar energy distributions, we construct a self-consistent model atmosphere, which accounts for modulations of the temperature-pressure structure caused by vertical abundance gradients, for the roAp star 10 Aquilae (HD 176232). We demonstrate that such an analysis can be used to precisely determine the fundamental atmospheric parameters required for pulsation modelling. Methods: Average abundances were derived for 56 species. For Mg, Si, Ca, Cr, Fe, Co, Sr, Pr, and Nd, vertical stratification profiles were empirically derived using the...
Energy Technology Data Exchange (ETDEWEB)
Ming, Y; Ramaswamy, V; Donner, L J; Phillips, V T; Klein, S A; Ginoux, P A; Horowitz, L H
2005-05-02
This paper describes a self-consistent prognostic cloud scheme that is able to predict cloud liquid water, amount and droplet number ($N_d$) from the same updraft velocity field, and is suitable for modeling aerosol-cloud interactions in general circulation models (GCMs). In the scheme, the evolution of droplets fully interacts with the model meteorology. An explicit treatment of cloud condensation nuclei (CCN) activation allows the scheme to take into account the contributions to $N_d$ of multiple types of aerosol (i.e., sulfate, organic and sea-salt aerosols) and kinetic limitations of the activation process. An implementation of the prognostic scheme in the Geophysical Fluid Dynamics Laboratory (GFDL) AM2 GCM yields a vertical distribution of $N_d$ characterized by maxima in the lower troposphere, differing from that obtained by diagnosing $N_d$ empirically from sulfate mass concentrations. As a result, the agreement of model-predicted present-day cloud parameters with satellite measurements is improved compared to using diagnosed $N_d$. The simulations with pre-industrial and present-day aerosols show that the combined first and second indirect effects of anthropogenic sulfate and organic aerosols give rise to a global annual mean flux change of $-1.8$ W m$^{-2}$, consisting of $-2.0$ W m$^{-2}$ in shortwave and $0.2$ W m$^{-2}$ in longwave, as the model response alters the cloud field and subsequently the longwave radiation. Liquid water path (LWP) and total cloud amount increase by 19% and 0.6%, respectively. Largely owing to high sulfate concentrations from fossil fuel burning, the Northern Hemisphere mid-latitude land and oceans experience strong cooling, as does the tropical land, which is dominated by biomass burning organic aerosol. The Northern/Southern Hemisphere and land/ocean ratios are 3.1 and 1.4, respectively. The calculated annual zonal mean flux changes are determined to be statistically significant, exceeding the model's natural
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling
Energy Technology Data Exchange (ETDEWEB)
Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)
2014-02-14
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli $k_c$ and $\bar{k}$ and the preferred monolayer curvature $J_0^m$, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of $k_c$ and the area compression modulus $k_A$ are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for $\bar{k}$ and $J_0^m$ can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both $\bar{k}$ and $J_0^m$ change sign with relevant parameter changes. Although typically $\bar{k}<0$, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in $J_0^m \gg 0$, especially at low ionic
Connolly, Mark; He, Xing; Gonzalez, Nestor; Vespa, Paul; DiStefano, Joe; Hu, Xiao
2014-03-01
Due to the inaccessibility of the cranial vault, it is difficult to study cerebral blood flow dynamics directly. A mathematical model can be useful to study these dynamics. The model presented here is a novel combination of a one-dimensional fluid flow model representing the major vessels of the circle of Willis (CoW), with six individually parameterized auto-regulatory models of the distal vascular beds. This model has the unique ability to simulate high temporal resolution flow and velocity waveforms, amenable to pulse-waveform analysis, as well as sophisticated phenomena such as auto-regulation. Previous work with human patients has shown that vasodilation induced by CO2 inhalation causes 12 consistent pulse-waveform changes as measured by the morphological clustering and analysis of intracranial pressure algorithm. To validate this model, we simulated vasodilation and successfully reproduced 9 out of the 12 pulse-waveform changes. A subsequent sensitivity analysis found that these 12 pulse-waveform changes were most affected by the parameters associated with the shape of the smooth muscle tension response and vessel elasticity, providing insight into the physiological mechanisms responsible for observed changes in the pulse-waveform shape.
Validity test and its consistency in the construction of patient loyalty model
Yanuar, Ferra
2016-04-01
The main objective of the present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was then applied to empirical data on the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, and each factor was measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents who had complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant in measuring their corresponding latent variable. Service quality is most strongly measured by tangibles, patient satisfaction by satisfaction with service, and patient loyalty by good service quality. Meanwhile, in the structural equations, this study found that patient loyalty was affected positively and directly by patient satisfaction. Service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also proved that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.
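The bootstrap consistency check mentioned at the end can be sketched generically. The function below resamples respondents and recomputes a loading proxy; using a plain correlation rather than a fitted structural-equation loading is a simplifying assumption for illustration:

```python
import numpy as np

def bootstrap_ci(x, y, stat=lambda a, b: np.corrcoef(a, b)[0, 1],
                 n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap sketch of a validity-consistency check:
    resample respondents with replacement, recompute the statistic
    (here a correlation standing in for an indicator loading), and
    report its mean and a (1 - alpha) confidence interval."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)     # resample respondents
        stats[b] = stat(x[idx], y[idx])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return stats.mean(), (lo, hi)
```

A loading whose bootstrap interval stays well away from zero across resamples is "consistent" in the sense used above: the validity conclusion does not depend on the particular sample drawn.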
Directory of Open Access Journals (Sweden)
Roy E Barnewall
2012-06-01
Repeated low-level exposures to Bacillus anthracis could occur before or after the remediation of an environmental release. This is especially true for persistent agents such as Bacillus anthracis spores, the causative agent of anthrax. Studies were conducted to examine the aerosol methods needed for consistent daily low aerosol concentrations to deliver a low dose (less than $10^6$ colony forming units, CFU) of B. anthracis spores, and included a pilot feasibility characterization study, an acute exposure study, and a multiple fifteen-day exposure study. This manuscript focuses on the state-of-the-science aerosol methodologies used to generate and aerosolize consistent daily low aerosol concentrations and resultant low inhalation doses. The pilot feasibility characterization study determined that the aerosol system was consistent and capable of producing very low aerosol concentrations. In the acute, single-day exposure experiment, targeted inhaled doses of $1 \times 10^2$, $1 \times 10^3$, $1 \times 10^4$, and $1 \times 10^5$ CFU were used. In the multiple daily exposure experiment, rabbits were exposed over multiple days to targeted inhaled doses of $1 \times 10^2$, $1 \times 10^3$, and $1 \times 10^4$ CFU. In all studies, targeted inhaled doses remained fairly consistent from rabbit to rabbit and day to day. The aerosol system produced aerosolized spores within the optimal mass median aerodynamic diameter particle size range to reach deep lung alveoli. Consistency of the inhaled dose was aided by monitoring and recording respiratory parameters during the exposure with real-time plethysmography. Overall, the presented results show that the animal aerosol system was stable and highly reproducible between different studies and multiple exposure days.
Self-consistent Spectral Functions in the $O(N)$ Model from the FRG
Strodthoff, Nils
2016-01-01
We present the first self-consistent direct calculation of a spectral function in the framework of the Functional Renormalization Group. The study is carried out in the relativistic $O(N)$ model, where the full momentum dependence of the propagators in the complex plane as well as momentum-dependent vertices are considered. The analysis is supplemented by a comparative study of the Euclidean momentum dependence and of the complex momentum dependence on the level of spectral functions. This work lays the groundwork for the computation of full spectral functions in more complex systems.
Self-consistent description of $\\Lambda$ hypernuclei in the quark-meson coupling model
Tsushima, K; Thomas, A W
1997-01-01
The quark-meson coupling model, which has been successfully used to describe the properties of both finite nuclei and infinite nuclear matter, is applied to a study of $\\Lambda$ hypernuclei. With the assumption that the (self-consistent) exchanged scalar, and vector, mesons couple only to the u and d quarks, a very weak spin-orbit force in the $\\Lambda$-nucleus interaction is achieved automatically. This can be interpreted as a direct consequence of the quark structure of the $\\Lambda$ hyperon. Possible implications and extensions of the present investigation are also discussed.
Supporting Consistency in Linked Specialized Engineering Models Through Bindings and Updating
Institute of Scientific and Technical Information of China (English)
Albertus H. Olivier; Gert C. van Rooyen; Berthold Firmenich; Karl E. Beucke
2008-01-01
Currently, some commercial software applications allow users to work in an integrated environment. However, this is limited to the suite of models provided by the software vendor, and consequently it forces all parties to use the same software. In contrast, the research described in this paper investigates ways of using standard software applications, which may be specialized for different professional domains. These are linked for effective transfer of information, and a binding mechanism is provided to support consistency. The proposed solution was implemented using a CAD application and an independent finite element application in order to verify the theoretical aspects of this work.
A “Minsky crisis” in a Stock-Flow Consistent model
Mouakil, Tarik
2014-01-01
This study uses the Stock-Flow Consistent modelling approach to assess the relevance of Minsky's demonstration of his financial instability hypothesis. We show that this demonstration, based on the assumption of a pro-cyclical leverage ratio, is incompatible with the Kaleckian analysis of profits endorsed by Minsky. We therefore suggest replacing the assumption of a pro-cyclical leverage ratio with one of pro-cyclical short-term borrowing, which also appears in Minsky's work.
Keller, D. E.; Fischer, A. M.; Frei, C.; Liniger, M. A.; Appenzeller, C.; Knutti, R.
2014-07-01
Many climate impact assessments over topographically complex terrain require high-resolution precipitation time-series that have a spatio-temporal correlation structure consistent with observations. This consistency is essential for spatially distributed modelling of processes with non-linear responses to precipitation input (e.g. soil water and river runoff modelling). In this regard, weather generators (WGs) designed and calibrated for multiple sites are an appealing technique to stochastically simulate time-series that approximate the observed temporal and spatial dependencies. In this study, we present a stochastic multi-site precipitation generator and validate it over the hydrological catchment Thur in the Swiss Alps. The model consists of several Richardson-type WGs that are run with correlated random number streams reflecting the observed correlation structure among all possible station pairs. A first-order two-state Markov process simulates the intermittence of daily precipitation, while precipitation amounts are simulated from a mixture model of two exponential distributions. The model is calibrated separately for each month over the time period 1961-2011. The WG is skilful at individual sites in representing the annual cycle of the precipitation statistics, such as mean wet-day frequency and intensity as well as monthly precipitation sums. It realistically reproduces multi-day statistics such as the frequencies of dry and wet spell lengths and precipitation sums over consecutive wet days. Substantial added value is demonstrated in simulating daily areal precipitation sums in comparison to multiple WGs that lack the spatial dependency in the stochastic process: the multi-site WG is capable of capturing about 95% of the observed variability in daily areal sums, while the summed time-series from multiple single-site WGs explains only about 13%. Limitations of the WG have been detected in reproducing observed variability from year to year, a component that has
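The generator described combines three ingredients: a first-order two-state Markov chain for wet/dry occurrence, a two-component exponential mixture for wet-day amounts, and correlated random number streams for the inter-station dependence. A minimal sketch of that construction, with a Gaussian copula standing in for the correlated streams; all parameter values are illustrative, not calibrated to the Thur catchment:

```python
import math
import numpy as np

def simulate_precip(n_days, corr, p_wd, p_ww, w1, mu1, mu2, seed=0):
    """Sketch of a multi-site Richardson-type precipitation generator:
    - occurrence: first-order two-state Markov chain per site
      (p_wd = P(wet | dry yesterday), p_ww = P(wet | wet yesterday)),
    - amounts: mixture of two exponentials (weight w1, means mu1, mu2),
    - spatial dependence: correlated uniform streams from a Gaussian
      copula built on the station-pair correlation matrix `corr`."""
    rng = np.random.default_rng(seed)
    n_sites = corr.shape[0]
    chol = np.linalg.cholesky(corr)
    wet = np.zeros(n_sites, dtype=bool)
    precip = np.zeros((n_days, n_sites))
    for t in range(n_days):
        z = chol @ rng.standard_normal(n_sites)              # correlated Gaussians
        u = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
        wet = u < np.where(wet, p_ww, p_wd)                  # correlated occurrence
        comp_mean = np.where(rng.random(n_sites) < w1, mu1, mu2)
        precip[t] = np.where(wet, rng.exponential(comp_mean), 0.0)
    return precip

corr = np.array([[1.0, 0.6], [0.6, 1.0]])                    # two hypothetical stations
sim = simulate_precip(365, corr, p_wd=0.3, p_ww=0.6, w1=0.7, mu1=2.0, mu2=12.0)
```

Monthly calibration, as in the paper, would simply fit (p_wd, p_ww, w1, mu1, mu2) separately per calendar month.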
Tides, Rotation Or Anisotropy? Self-consistent Nonspherical Models For Globular Clusters
Varri, Anna L.; Bertin, G.
2011-01-01
Spherical models of quasi-relaxed stellar systems provide a successful zeroth-order description of globular clusters. Yet, the great progress made in recent years in the acquisition of detailed information on the structure of these stellar systems calls for a renewed effort on the side of modeling. In particular, more general analytical models would allow us to address the long-standing issue of the physical origin of the deviations from spherical symmetry of globular clusters, which can now be properly measured. In fact, it remains to be established which of external tides, internal rotation, and pressure anisotropy causes the observed flattening. In this paper we focus on the first two physical ingredients. We start by briefly describing a recently studied family of triaxial models that incorporate in a self-consistent way the tidal effects of the host galaxy, as a collisionless analogue of the Roche problem (Varri & Bertin ApJ 2009). We then present two new families of axisymmetric models in which the deviations from spherical symmetry are induced by the presence of internal rotation. The first is an extension of the well-known family of King models to the case of axisymmetric equilibria flattened by solid-body rotation. The second family is characterized by differential rotation, designed to be rigid in the center and to vanish in the outer parts, where the imposed truncation in phase space becomes effective. For possible application to globular clusters, models of interest should be those, in both families, characterized by low values of the rotation strength parameter and quasi-spherical shape. For general interest in stellar dynamics, we show that, for high values of that parameter, the differentially rotating models may exhibit unexpected morphologies, even with a toroidal core.
Edrisi, Siroos; Bidhendi, Norollah Kasiri; Haghighi, Maryam
2017-01-01
Effective thermal conductivity of porous media was modeled based on a self-consistent method. This model accurately estimates the heat transfer between the insulator surface and air cavities. In this method, the pore size and shape, the temperature gradient, and other thermodynamic properties of the fluid were taken into consideration. The results are validated against experimental data for fire bricks used in cracking furnaces at the olefin plant of the Maroon petrochemical complex, as well as against data published for the polyurethane foams (synthetic polymers) IPTM and IPM. The model predictions show good agreement with the experimental data, with thermal conductivity deviating by <1%.
A new self-consistent hybrid chemistry model for Mars and cometary environments
Wedlund, Cyril Simon; Kallio, Esa; Jarvinen, Riku; Dyadechkin, Sergey; Alho, Markku
2014-05-01
Over the last 15 years, a 3-D hybrid-PIC planetary plasma interaction modelling platform named HYB has been developed and applied to several planetary environments, such as those of Mars, Venus, Mercury and, more recently, the Moon. We present here another evolution of HYB, including a fully consistent ionospheric-chemistry package designed to reproduce the main ions at the lower boundary of the model. This evolution, also made possible by the increase in computing power and the switch to spherical coordinates for higher spatial resolution (Dyadechkin et al., 2013), is motivated by the imminent arrival of the Rosetta spacecraft in the vicinity of comet 67P/Churyumov-Gerasimenko. In this presentation we show the application of the new HYB-ionosphere model to 1D and 2D hybrid simulations at Mars above 100 km altitude and demonstrate that, with a limited number of chemical reactions, good agreement with 1D kinetic models may be found. This is a first validation step before applying the model to the 67P/CG comet environment, which, like Mars, is expected to be rich in carbon oxide compounds.
[A model of the neurovascular unit in vitro consisting of three cell types].
Khilazheva, E D; Boytsova, E B; Pozhilenkova, E A; Solonchuk, Yu R; Salmina, A B
2015-01-01
There are many ways to model the blood-brain barrier and the neurovascular unit in vitro. All existing models have their disadvantages, advantages, and peculiarities of preparation and usage. We obtained a three-cell neurovascular unit model in vitro using progenitor cells isolated from rat embryo brains (Wistar, 14-16 d). After withdrawal of the progenitor cells, the neurospheres were cultured with subsequent differentiation into astrocytes and neurons. Endothelial cells were also isolated from embryonic brain. During the differentiation of the progenitor cells, the astrocyte monolayer forms after 7-9 d, the neuron monolayer after 10-14 d, and the endothelial cell monolayer after 7 d. Our protocol for simultaneous isolation and cultivation of neurons, astrocytes, and endothelial cells reduces the time needed to obtain a neurovascular unit model in vitro consisting of three cell types, and reduces the number of animals used. It is also important to note the cerebral origin of all three cell types, which is a further advantage of our in vitro model.
Kitadai, Norio
2014-04-01
Prediction of the thermodynamic behavior of biomolecules at high temperature and pressure is fundamental to understanding the role of hydrothermal systems in the origin and evolution of life on the primitive Earth. However, the available thermodynamic datasets for amino acids, essential components for life, cannot accurately represent the experimentally observed polymerization behavior of amino acids under hydrothermal conditions. This report presents the thermodynamic data and the revised HKF parameters for the simplest amino acid "Gly" and its polymers (GlyGly, GlyGlyGly and DKP) based on experimental thermodynamic data from the literature. Values for the ionization states of Gly (Gly(+) and Gly(-)) and Gly peptides (GlyGly(+), GlyGly(-), GlyGlyGly(+), and GlyGlyGly(-)) were also retrieved from reported experimental data by combining group-additivity algorithms. The obtained dataset enables prediction of the polymerization behavior of Gly as a function of temperature and pH, consistent with experimentally obtained results in the literature. The revised thermodynamic data for zwitterionic Gly, GlyGly, and DKP were also used to estimate the energetics of amino acid polymerization into proteins. Results show that the Gibbs energy necessary to synthesize a mole of peptide bonds is more than 10 kJ mol^-1 lower than previously estimated over a wide range of temperatures (e.g., 28.3 kJ mol^-1 → 17.1 kJ mol^-1 at 25 °C and 1 bar). Protein synthesis under abiotic conditions might therefore be more feasible than earlier studies have suggested.
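The quoted Gibbs energies translate directly into equilibrium constants via K = exp(−ΔG/RT), so the size of the revision can be made concrete. A short sketch using only the two values cited in the abstract (28.3 vs. 17.1 kJ mol^-1 at 25 °C, 1 bar):

```python
import math

R = 8.314462  # gas constant, J/(mol K)

def equilibrium_constant(dG_kJ_mol, T_K=298.15):
    """K = exp(-dG / (R T)) for a reaction Gibbs energy given in kJ/mol."""
    return math.exp(-dG_kJ_mol * 1000.0 / (R * T_K))

K_old = equilibrium_constant(28.3)  # earlier estimate of peptide-bond synthesis
K_new = equilibrium_constant(17.1)  # revised estimate from this work
ratio = K_new / K_old               # roughly a 90-fold shift toward polymerization
```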
Consistency in Regularizations of the Gauged NJL Model at One Loop Level
Battistel, O A
1999-01-01
In this work we revisit questions recently raised in the literature associated with relevant but divergent amplitudes in the gauged NJL model. The questions raised involve ambiguities and symmetry violations which concern the model's predictive power at the one-loop level. Our study shows, by means of an alternative prescription for handling divergent amplitudes, that it is possible to obtain unambiguous and symmetry-preserving amplitudes. The procedure adopted makes use solely of general properties of an eventual regulator, thus avoiding an explicit form. We find, after a thorough analysis of the problem, that there are well-established conditions to be fulfilled by any consistent regularization prescription in order to avoid the problems of concern at the one-loop level.
Macro-particle FEL model with self-consistent spontaneous radiation
Litvinenko, Vladimir N
2015-01-01
Spontaneous radiation plays an important role in SASE FELs and storage ring FELs operating in giant pulse mode. It defines the correlation function of the FEL radiation as well as many of its spectral features. Simulations of these systems using randomly distributed macro-particles with charge much higher than that of a single electron create the problem of anomalously strong spontaneous radiation, limiting the capabilities of many FEL codes. In this paper we present a self-consistent macro-particle model which provides statistically exact simulation of multi-mode, multi-harmonic and multi-frequency short-wavelength 3-D FELs, including high-power and saturation effects. The use of macro-particle clones allows both spontaneous and induced radiation to be treated in the same fashion. Simulations using this model do not require a seed and provide the complete temporal and spatial structure of the FEL optical field.
Scale-consistent two-way coupling of land-surface and atmospheric models
Schomburg, A.; Venema, V.; Ament, F.; Simmer, C.
2009-04-01
Processes at the land surface and in the atmosphere act on different spatial scales. While in the atmosphere small-scale heterogeneity is smoothed out quickly by turbulent mixing, this is not the case at the land surface, where small-scale variability of orography, land cover, soil texture, soil moisture etc. varies only slowly in time. For the modelling of the fluxes between the land surface and the atmosphere it is consequently more scale-consistent to model the surface processes at a higher spatial resolution than the atmospheric processes. The mosaic approach is one way to deal with this problem. Using this technique, the Soil Vegetation Atmosphere Transfer (SVAT) scheme is solved on a higher resolution than the atmosphere, which is possible since a SVAT module generally demands considerably less computation time than the atmospheric part. The upscaling of the turbulent fluxes of sensible and latent heat at the interface to the atmosphere is realized by averaging; due to the nonlinearities involved, this is a more sensible approach than averaging the soil properties and computing the fluxes in a second step. The atmospheric quantities are usually assumed to be homogeneous for all soil subpixels pertaining to one coarse atmospheric grid box. In this work, the aim is to develop a downscaling approach in which the atmospheric quantities at the lowest model layer are disaggregated before they enter the SVAT module at the higher mosaic resolution. The overall aim is a better simulation of the heat fluxes, which play an important role for the energy and moisture budgets at the surface. The disaggregation rules for the atmospheric variables will depend on high-resolution surface properties and the current atmospheric conditions. To reduce biases due to nonlinearities we will add small-scale variability according to such rules, as well as noise for the variability we cannot explain. The model used in this work is the COSMO-model, the weather forecast model (and regional
Microwave air plasmas in capillaries at low pressure I. Self-consistent modeling
Coche, P.; Guerra, V.; Alves, L. L.
2016-06-01
This work presents the self-consistent modeling of micro-plasmas generated in dry air using microwaves (2.45 GHz excitation frequency) within capillaries. The model couples the system of rate balance equations for the most relevant neutral and charged species of the plasma to the homogeneous electron Boltzmann equation. The maintenance electric field is self-consistently calculated adopting a transport theory for low to intermediate pressures, taking into account the presence of O- ions in addition to several positive ions, the dominant species being O2+, NO+ and O+. The low-pressure, small-radius conditions considered yield very intense reduced electric fields (~600-1500 Td), coherent with species losses controlled by transport and wall recombination, and kinetic mechanisms strongly dependent on electron-impact collisions. The charged-particle transport losses are strongly influenced by the presence of the negative ion, despite its low density (~10% of the electron density). For electron densities in the range (1-4) × 10^12 cm^-3, the system exhibits high dissociation degrees for O2 (~20-70%, depending on the working conditions, in contrast with the ~0.1% dissociation obtained for N2), a high concentration of O2(a) (~10^14 cm^-3) and NO(X) (5 × 10^14 cm^-3), and low ozone production (<10^-3 %).
Towards self-consistent modelling of the Sgr A* accretion flow: linking theory and observation
Roberts, Shawn R.; Jiang, Yan-Fei; Wang, Q. Daniel; Ostriker, Jeremiah P.
2017-04-01
The interplay between supermassive black holes (SMBHs) and their environments is believed to command an essential role in galaxy evolution. The majority of these SMBHs are in the radiatively inefficient accretion phase, where this interplay remains elusive, but suggestively important, due to few observational constraints. To remedy this, we directly fit 2D hydrodynamic simulations to Chandra observations of Sgr A* with Markov chain Monte Carlo sampling, self-consistently modelling the 2D inflow-outflow solution for the first time. We find the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, rc ≈ 0.056 rb ≈ 8 × 10^-3 pc, where rb is the Bondi radius. Less than 1 per cent of the inflowing gas accretes on to the SMBH, the remainder being ejected in a polar outflow. We decouple the quiescent point-like emission from the spatially extended flow. We find this point-like emission, accounting for ~4 per cent of the quiescent flux, is spectrally too steep to be explained by unresolved flares or bremsstrahlung, and is likely a combination of a relatively steep synchrotron power law and the high-energy tail of inverse-Compton emission. With this self-consistent model of the accretion flow structure, we make predictions for the flow dynamics and discuss how future X-ray spectroscopic observations can further our understanding of the Sgr A* accretion flow.
Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models
Vignal, Philippe
2016-02-11
Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp-interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation, a model put forward to describe microstructure evolution. The algorithm developed conserves mass, guarantees energy stability, and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature on energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and are second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
Energy Technology Data Exchange (ETDEWEB)
Sidhu, D.P.
1980-09-01
I discuss a left-right-symmetric model of weak and electromagnetic interactions which is consistent with the results of all weak-interaction experiments, including observed parity violation in eN interactions. The model is essentially indistinguishable from the Weinberg-Salam (WS) model at low energies and differs from it significantly at high q^2. Of the two neutral bosons (Z_1, Z_2) of the model, M_Z1 ≈ M_Z of the WS model and M_Z2 ≈ 2.5 M_Z1 ≈ 230 GeV. The prospects of distinguishing the two classes of models in e+e- experiments at LEP and in pp and p̄p colliding-beam experiments at ISABELLE are also discussed.
Directory of Open Access Journals (Sweden)
de Lucas-Ramos P
2012-10-01
Pilar de Lucas-Ramos,1,* Jose Luis Izquierdo-Alonso,2,* Jose Miguel Rodriguez-Gonzalez Moro,1 Jesus Fernandez Frances,2 Paz Vaquero Lozano,1 Jose M Bellón-Cano,1,3 CONSISTE study group. 1Servicio de Neumologia, Hospital General Universitario Gregorio Maranon, Madrid, 2Servicio de Neumologia, Hospital Universitario de Guadalajara, Guadalajara, 3Unidad de Investigacion, Hospital General Universitario Gregorio Maranon, Madrid, Spain. *These authors contributed equally to this work. Introduction: Chronic obstructive pulmonary disease (COPD) patients present a high prevalence of cardiovascular disease. This excess of comorbidity could be related to a common pathogenic mechanism, but it could also be explained by the existence of common risk factors. The objective of this study was to determine whether COPD patients present greater cardiovascular comorbidity than control subjects and whether COPD can be considered a risk factor per se. Methods: 1200 COPD patients and 300 control subjects were recruited for this multicenter, cross-sectional, case-control study. Results: Compared with the control group, the COPD group showed a significantly higher prevalence of ischemic heart disease (12.5% versus 4.7%; P < 0.0001), cerebrovascular disease (10% versus 2%; P < 0.0001), and peripheral vascular disease (16.4% versus 4.1%; P < 0.001). In the univariate risk analysis, COPD, hypertension, diabetes, obesity, and dyslipidemia were risk factors for ischemic heart disease. In the multivariate analysis adjusted for the remaining factors, COPD was still an independent risk factor (odds ratio: 2.23; 95% confidence interval: 1.18-4.24; P = 0.014). Conclusion: COPD patients show a high prevalence of cardiovascular disease, higher than expected given their age and the coexistence of classic cardiovascular risk factors. Keywords: COPD, cardiovascular risk, ischemic heart disease
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
(no author listed)
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989.
[2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447.
[3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502.
[4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368.
[5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232.
[6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45.
[7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974.
[8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model
Borges Sebastião, Israel; Alexeenko, Alina
2016-10-01
The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
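The two-step idea described (pre-compute state-specific forward reaction probabilities from equilibrium 0-D simulations, then reuse them as sampling weights for the post-reaction vibrational states of the reverse reaction) can be sketched in miniature. The harmonic-oscillator populations, characteristic temperature, and probability values below are illustrative stand-ins, not data from the paper:

```python
import math
import random

def boltzmann_populations(theta_v, T, n_levels):
    """Equilibrium vibrational populations of a simple harmonic oscillator
    with characteristic temperature theta_v (K) at translational temperature T."""
    w = [math.exp(-v * theta_v / T) for v in range(n_levels)]
    s = sum(w)
    return [x / s for x in w]

def sample_post_reaction_level(forward_prob, theta_v, T, rng=random.random):
    """Pick a post-reaction vibrational level for the reverse reaction with
    weight proportional to (state-specific forward probability) x (equilibrium
    population), so that detailed balance is approached near equilibrium."""
    pops = boltzmann_populations(theta_v, T, len(forward_prob))
    weights = [p * n for p, n in zip(forward_prob, pops)]
    r = rng() * sum(weights)
    acc = 0.0
    for v, w in enumerate(weights):
        acc += w
        if r <= acc:
            return v
    return len(weights) - 1

# forward_prob[v] would come from equilibrium 0-D DSMC runs; values made up here
level = sample_post_reaction_level([0.01, 0.05, 0.2, 0.5], theta_v=5985.0, T=3000.0)
```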
Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency
Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.
2013-09-01
A steadily growing number of application fields for large 3D city models has emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established from data acquisition to processing, analysis and visualization, quality management is not yet standard within this workflow. Processing data sets with unclear specification leads to erroneous results and application defects. We show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.
Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.
2012-01-01
The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-est...
Directory of Open Access Journals (Sweden)
A. Mairesse
2013-12-01
The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern, but the models underestimate the magnitude of some observed changes, and large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation using a data-assimilation method based on a particle filter. In one simulation all 50 proxy-based records are used, while in the other two only the continental or oceanic proxy-based records constrain the model results. As expected, data assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at midlatitudes that warms northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is incompatible either with the signal recorded by some other proxy-based records or with the model physics.
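The particle-filter assimilation step used to constrain LOVECLIM can be illustrated in miniature: propagate an ensemble of model states, weight each member by the likelihood of a proxy-based value, and resample. The AR(1) "model", observation values, and error scale below are toy stand-ins, not LOVECLIM quantities:

```python
import numpy as np

def particle_filter_step(particles, model_step, obs, obs_std, rng):
    """One assimilation cycle of a bootstrap particle filter:
    propagate each ensemble member, weight it by the Gaussian likelihood
    of the proxy-based observation, then resample with those weights."""
    particles = np.array([model_step(p, rng) for p in particles])
    w = np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

rng = np.random.default_rng(1)
toy_step = lambda x, rng: 0.9 * x + rng.normal(0.0, 0.5)  # toy AR(1) "climate model"
ens = rng.normal(0.0, 2.0, size=200)                      # initial ensemble
for obs in [1.0, 1.2, 0.8]:                               # toy "proxy records"
    ens = particle_filter_step(ens, toy_step, obs, obs_std=0.3, rng=rng)
```

After a few cycles the ensemble concentrates near the observed signal, which is the mechanism by which the assimilated simulations are pulled toward the proxy reconstructions.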
Consistent approach to edge detection using multiscale fuzzy modeling analysis in the human retina
Directory of Open Access Journals (Sweden)
Mehdi Salimian
2012-06-01
Today, many widely used image processing algorithms based on the human visual system have been developed. In this paper a smart edge detector is presented, based on modeling the responses of simple and complex cells and on the multi-scale image analysis performed in the primary visual cortex. A method for adjusting the parameters of Gabor filters (mathematical models of simple cells) and a nonlinear threshold response are proposed in order to model simple and complex cells. Owing to the multi-scale analysis, modeled on that conducted in the human retina, the proposed algorithm detects and localizes the edges of both small and large structures with high precision. Comparing the results of the proposed method against conventional methods on a reliable database shows the higher performance (about 4-13%) and reliability of the proposed method in edge detection and localization.
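A minimal version of the pipeline described (Gabor filters as simple-cell models, a quadrature-pair energy as a complex-cell-like nonlinearity) might look as follows; the filter size, wavelength, and orientations are illustrative, and the paper's specific threshold rule is not reproduced:

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, psi=0.0):
    """2-D Gabor kernel, a standard mathematical model of a simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * xr / lam + psi)

def correlate2(img, k):
    """Plain cross-correlation with edge padding (avoids fake border edges)."""
    kh, kw = k.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

def edge_energy(img, thetas, lam=4.0, sigma=2.0, size=9):
    """Complex-cell-like energy: quadrature pair (psi = 0 and pi/2) per
    orientation, then the maximum response across orientations."""
    energy = np.zeros(img.shape)
    for th in thetas:
        even = correlate2(img, gabor_kernel(size, th, lam, sigma, 0.0))
        odd = correlate2(img, gabor_kernel(size, th, lam, sigma, np.pi / 2.0))
        energy = np.maximum(energy, np.hypot(even, odd))
    return energy

img = np.zeros((32, 32))
img[:, 16:] = 1.0                                   # vertical step edge
e = edge_energy(img, thetas=[0.0, np.pi / 2.0])     # energy peaks near the edge
```

A multi-scale variant would simply repeat `edge_energy` for several (lam, sigma, size) combinations and merge the maps.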
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-10-07
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations.
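The exact-arithmetic point can be made concrete: over the rationals, whether a reaction is blocked (zero flux in every steady-state flux vector) is a crisp linear-algebra question with no floating-point ambiguity. A minimal sketch using Python's `fractions`, assuming for simplicity that all reactions are reversible and unbounded, which is far less general than MONGOOSE itself:

```python
from fractions import Fraction

def null_space(S):
    """Exact null-space basis of a stoichiometric matrix (metabolites x
    reactions) via Gauss-Jordan elimination over the rationals."""
    rows = [[Fraction(x) for x in row] for row in S]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]       # normalize pivot row
        for i in range(m):
            if i != r and rows[i][c] != 0:                # eliminate the column
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:                                       # one basis vector per free column
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -rows[i][fc]
        basis.append(v)
    return basis

def blocked_reactions(S):
    """A reaction is blocked iff its flux is zero in every steady-state mode."""
    basis = null_space(S)
    return [j for j in range(len(S[0])) if all(v[j] == 0 for v in basis)]

# Toy network: A <-> B (r0, r1) and B -> C (r2); C has no outlet,
# so r2 can carry no steady-state flux and is blocked.
S = [[-1, 1, 0],   # A
     [1, -1, -1],  # B
     [0, 0, 1]]    # C
blocked = blocked_reactions(S)  # → [2]
```

With `Fraction` arithmetic the answer is identical on every machine, which is exactly the reproducibility argument made for exact over floating-point analysis.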
CMS standard model Higgs boson results
Directory of Open Access Journals (Sweden)
Garcia-Abia Pablo
2013-11-01
In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.
Zimmermann, Eva; Seifert, Udo
2015-02-01
Many single-molecule experiments for molecular motors comprise not only the motor but also large probe particles coupled to it. The theoretical analysis of these assays, however, often takes into account only the degrees of freedom representing the motor. We present a coarse-graining method that maps a model comprising two coupled degrees of freedom which represent motor and probe particle to such an effective one-particle model by eliminating the dynamics of the probe particle in a thermodynamically and dynamically consistent way. The coarse-grained rates obey a local detailed balance condition and reproduce the net currents. Moreover, the average entropy production as well as the thermodynamic efficiency is invariant under this coarse-graining procedure. Our analysis reveals that only under the assumption of unrealistically fast probe particles do the coarse-grained transition rates coincide with the transition rates of the traditionally used one-particle motor models. Additionally, we find that for multicyclic motors the stall force can depend on the probe size. We apply this coarse-graining method to specific case studies of the F(1)-ATPase and the kinesin motor.
McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter
2016-07-01
Since the turn of the millennium, astronomical archives have increasingly provided data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating which optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data, and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and the needs of the community change.
Baraffe, I; Méra, D; Chabrier, G; Beaulieu, J P
1998-01-01
We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables ($3
Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül
2017-05-01
A summary of the main concepts on global ionospheric map(s) [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter, and difference of slant TEC based on independent global positioning system (GPS) data, dSTEC-GPS) is performed. It is based on 26 GPS receivers distributed worldwide and mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM 'UQRG' computed by UPC) is shown. Typical error RMS values of 2 TECU for VTEC-altimeter and 0.5 TECU for dSTEC-GPS assessments are found. And, as expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, most evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and in general for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
Towards Self-Consistent Modelling of the Sgr A* Accretion Flow: Linking Theory and Observation
Roberts, Shawn R; Jiang, Yan-Fei; Ostriker, Jeremiah P
2016-01-01
The interplay between supermassive black holes (SMBHs) and their environments is believed to play an essential role in galaxy evolution. The majority of these SMBHs are in the radiatively inefficient accretion phase, where this interplay remains elusive, but suggestively important, owing to the scarcity of observational constraints. To remedy this, we directly fit 2-D hydrodynamic simulations to Chandra observations of Sgr A* with Markov Chain Monte Carlo sampling, self-consistently modelling the 2-D inflow-outflow solution for the first time. We find the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, r$_c$ $\approx$ 0.056 r$_b$ $\approx8\times10^{-3}$ pc, where r$_b$ is the Bondi radius. Less than 1\% of the inflowing gas accretes onto the SMBH, the remainder being ejected in a polar outflow. For the first time...
The self-consistent field model for Fermi systems with account of three-body interactions
Directory of Open Access Journals (Sweden)
Yu.M. Poluektov
2015-12-01
Full Text Available On the basis of a microscopic model of the self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is constructed and the quasiparticle equations of motion are obtained. It is shown that a delta-like three-body interaction makes no contribution to the self-consistent field, and that the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of the contribution from three-body forces. The effective mass and pressure are numerically calculated for a potential of the "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, an interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, while an attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of the system with interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.
Self-consistent 2-phase AGN torus models: SED library for observers
Siebenmorgen, Ralf; Efstathiou, Andreas
2015-01-01
We assume that dust near active galactic nuclei (AGN) is distributed in a torus-like geometry, which may be described by a clumpy medium or a homogeneous disk or as a combination of the two (i.e. a 2-phase medium). The dust particles considered are fluffy and have higher submillimeter emissivities than grains in the diffuse ISM. The dust-photon interaction is treated in a fully self-consistent three-dimensional radiative transfer code. We provide an AGN library of spectral energy distributions (SEDs). Its purpose is to quickly obtain estimates of the basic parameters of the AGN, such as the intrinsic luminosity of the central source, the viewing angle, the inner radius, the volume filling factor and optical depth of the clouds, and the optical depth of the disk midplane, and to predict the flux at yet unobserved wavelengths. The procedure is simple and consists of finding an element in the library that matches the observations. We discuss the general properties of the models and in particular the 10mic. silic...
Yuan, Yao-Ming; Jiang, Rui; Hu, Mao-Bin; Wu, Qing-Song; Wang, Ruili
2009-06-01
In this paper, we have investigated traffic flow characteristics in a traffic system consisting of a mixture of adaptive cruise control (ACC) vehicles and manual-controlled (manual) vehicles, using a hybrid modelling approach. In the hybrid approach, (i) the manual vehicles are described by a cellular automaton (CA) model, which can reproduce different traffic states (i.e., free flow, synchronised flow, and jam) as well as probabilistic traffic breakdown phenomena; (ii) the ACC vehicles are simulated by a car-following model, which removes the artificial velocity fluctuations due to intrinsic randomisation in the CA model. We have studied the traffic breakdown probability from free flow to congested flow, and the phase transition probability from synchronised flow to jam, in the mixed traffic system. The results are compared with those of a system in which both ACC vehicles and manual vehicles are simulated by CA models. The qualitative and quantitative differences are indicated.
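The cellular-automaton side of such a hybrid scheme is typically a Nagel-Schreckenberg-type model. The sketch below is a generic NaSch update on a ring road, not the specific CA of the paper; the function and parameter names are illustrative assumptions:

```python
import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=random):
    """One update of a Nagel-Schreckenberg cellular automaton on a ring road.
    pos/vel are per-vehicle cell indices and speeds (cells per step)."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])   # vehicles by position
    new_vel = vel[:]
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                       # vehicle ahead (periodic)
        gap = (pos[j] - pos[i] - 1) % road_len       # empty cells in between
        v = min(vel[i] + 1, v_max)                   # 1. acceleration
        v = min(v, gap)                              # 2. braking (no collision)
        if v > 0 and rng.random() < p_slow:          # 3. random slowdown
            v -= 1
        new_vel[i] = v
    new_pos = [(pos[i] + new_vel[i]) % road_len for i in range(n)]  # 4. move
    return new_pos, new_vel
```

With `p_slow = 0` the update is deterministic, which is the sense in which a smooth car-following model for the ACC vehicles "removes" the intrinsic randomisation mentioned in the abstract.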
Pranger, C. C.; Le Pourhiet, L.; May, D.; van Dinther, Y.; Gerya, T.
2016-12-01
Subduction zones evolve over millions of years. The state of stress, the distribution of materials, and the strength and structure of the interface between the two plates is intricately tied to a host of time-dependent physical processes, such as damage, friction, (nonlinear) viscous relaxation, and fluid migration. In addition, the subduction interface has a complex three-dimensional geometry that evolves with time and can adjust in response to a changing stress environment or in response to impinging topographical features, and can even branch off as a splay fault. All in all, the behaviour of (large) earthquakes at the millisecond to minute timescale is heavily dependent on the pattern of stress accumulation during the 100 year inter-seismic period, the events occurring on or near the interface in the past thousands of years, as well as the extended geological history of the region. We address the aforementioned modeling requirements by developing a self-consistent 3D staggered grid finite difference continuum description of motion, thermal advection-diffusion, and poro-visco-elastic two-phase flow. Faults are modelled as plastic shear bands that can develop and evolve in response to a changing stress environment without having a prescribed geometry. They obey a Mohr-Coulomb or Drucker-Prager yield criterion and a rate-and-state friction law. For a sound treatment of plasticity, we borrow elements from mechanical engineering, and extend these with high-quality nonlinear iteration schemes and adaptive time-stepping to resolve the rupture process at all time scales. We will present these techniques together with proof-of-concept examples of self-consistently developing seismic cycles in 2D and 3D, including phases of stress accumulation, fault nucleation, dynamic rupture, and healing.
Directory of Open Access Journals (Sweden)
A. Mairesse
2013-07-01
Full Text Available The mid-Holocene (6 thousand years before present) is a key period to study the consistency between model results and proxy data, as it corresponds to a standard test for models and a reasonable number of proxy records are available. Taking advantage of this relatively large amount of information, we have first compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern but that the models underestimate the magnitude of some observed changes, and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation, using a data assimilation method based on a particle filter. In one simulation all 50 proxies are used, while in the other two only the continental or the oceanic proxies constrain the model results. This assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at mid-latitudes that warms northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxies whose reconstructed signal is incompatible either with the one recorded by some other proxies or with the model physics.
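A particle filter of the kind used to constrain LOVECLIM can be sketched generically. The following is a minimal bootstrap-filter cycle (propagate, weight, resample) in stdlib Python; the scalar state and Gaussian observation error are simplifying assumptions for illustration, not the paper's actual assimilation code:

```python
import math
import random

def particle_filter_step(particles, obs, obs_sigma, forward, rng=random):
    """One assimilation cycle of a bootstrap particle filter:
    propagate each particle, weight it by the observation likelihood,
    then resample so that particles close to the observation survive."""
    # 1. propagate with the (possibly stochastic) forward model
    particles = [forward(x, rng) for x in particles]
    # 2. Gaussian observation likelihood as importance weight
    w = [math.exp(-0.5 * ((x - obs) / obs_sigma) ** 2) for x in particles]
    total = sum(w)
    cumsum, s = [], 0.0
    for wi in w:
        s += wi / total
        cumsum.append(s)
    # 3. stratified resampling: one uniform draw per stratum [i/n, (i+1)/n)
    n, idx, resampled = len(particles), 0, []
    for i in range(n):
        u = (i + rng.random()) / n
        while idx < n - 1 and cumsum[idx] < u:
            idx += 1
        resampled.append(particles[idx])
    return resampled
```

In the paper's setting the "particles" are full model states of LOVECLIM and the weights come from proxy-model misfits, but the propagate-weight-resample structure is the same.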
Directory of Open Access Journals (Sweden)
Sohail eEjaz
2015-03-01
Full Text Available Introduction and Objectives: Selective neuronal loss (SNL) in the reperfused penumbra may impact clinical recovery and is thus important to investigate. Brief proximal middle cerebral artery occlusion (MCAo) results in predominantly striatal SNL, yet cortical damage is more relevant given its behavioral implications and that thrombolytic therapy mainly rescues the cortex. Distal temporary MCAo (tMCAo) does target the cortex, but the optimal occlusion duration that results in isolated SNL has not been determined. In the present study we assessed different distal tMCAo durations looking for consistently pure SNL. Methods: Microclip distal tMCAo (md-tMCAo) was performed in ~6-month-old male spontaneously hypertensive rats (SHRs). We previously reported that 45 min md-tMCAo in SHRs results in pan-necrosis in the majority of subjects. Accordingly, three shorter MCAo durations were investigated here in decremental succession, namely 30, 22 and 15 min (n = 3, 3 and 7 subjects, respectively). Recanalization was confirmed by MR angiography just prior to brain collection at 28 days, and T2-weighted MRI was obtained for characterization of ischemic lesions. NeuN, OX42 and GFAP immunohistochemistry appraised changes in neurons, microglia and astrocytes, respectively. Ischemic lesions were categorized into three main types: 1) pan-necrosis; 2) partial infarction; and 3) SNL. Results: Pan-necrosis or partial infarction was present in all 30 min and 22 min subjects, but not in the 15 min group (p < 0.001), in which isolated cortical SNL was consistently present. MRI revealed characteristic hyperintense abnormalities in all rats with pan-necrosis or partial infarction, but no change in any 15 min subject. Conclusions: We found that 15 min distal MCAo consistently resulted in pure cortical SNL, whereas durations equal to or longer than 22 min consistently resulted in infarcts. This model may be of use to study the pathophysiology of cortical SNL and its prevention by appropriate
Energy Technology Data Exchange (ETDEWEB)
BRANNON,REBECCA M.
2000-11-01
A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths. As will be discussed in a companion
Directory of Open Access Journals (Sweden)
Jiateng Guo
2016-02-01
Full Text Available Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and is not necessarily robust. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data. This method could also be used in other fields of study, including mining geology and urban geotechnical investigations.
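The positional part of the interpolation step can be sketched with a plain Gaussian radial-basis-function interpolant (true HRBF additionally constrains gradients from the attitude data, which this toy omits). The function name, kernel choice, and width parameter are illustrative assumptions, not the paper's implementation:

```python
import math

def rbf_interpolate(points, values, query, eps=1.0):
    """Plain RBF interpolation with a Gaussian kernel: solve K w = f for the
    weights, then evaluate the weighted kernel sum at the query point."""
    def kernel(p, q):
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        return math.exp(-eps * d2)

    n = len(points)
    # augmented kernel matrix [K | f], solved by Gauss-Jordan elimination
    A = [[kernel(points[i], points[j]) for j in range(n)] + [values[i]]
         for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))  # partial pivoting
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    w = [A[i][n] / A[i][i] for i in range(n)]
    return sum(w[i] * kernel(query, points[i]) for i in range(n))
```

Because the Gaussian kernel matrix is positive definite for distinct points, the interpolant reproduces the data values exactly at the data points; an HRBF adds rows tying the interpolant's gradient to measured orientations.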
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally-constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
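The observation that the mass-flux formulation is mode decomposition with segmentally constant modes can be made concrete with a toy Galerkin projection: projecting a gridded field onto indicator-function modes simply returns segment means. A minimal sketch, with the segmentation assumed given (in a real mass-flux scheme the segments would be the convective plumes and their environment):

```python
def piecewise_constant_projection(field, segments):
    """Galerkin projection of a gridded field onto segmentally constant modes.
    Each mode is the indicator function of one segment, so the projection
    coefficient for a segment is just the mean of the field over it."""
    coeffs = []
    recon = [0.0] * len(field)          # reconstruction from the modes
    for seg in segments:
        mean = sum(field[i] for i in seg) / len(seg)
        coeffs.append(mean)
        for i in seg:
            recon[i] = mean
    return coeffs, recon
```

A field that is already constant on each segment is reproduced exactly; everything else is filtered down to its per-segment averages, which is precisely the geometrical constraint (the segmentally-constant approximation) the review identifies.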
Consistent treatment of viscoelastic effects at junctions in one-dimensional blood flow models
Müller, Lucas O.; Leugering, Günter; Blanco, Pablo J.
2016-06-01
While the numerical discretization of one-dimensional blood flow models for vessels with viscoelastic wall properties is widely established, there is still no clear approach on how to couple one-dimensional segments that compose a network of viscoelastic vessels. In particular for Voigt-type viscoelastic models, assumptions with regard to boundary conditions have to be made, which normally result in neglecting the viscoelastic effect at the edge of vessels. Here we propose a coupling strategy that takes advantage of a hyperbolic reformulation of the original model and the inherent information of the resulting system. We show that applying proper coupling conditions is fundamental for preserving the physical coherence and numerical accuracy of the solution in both academic and physiologically relevant cases.
Institute of Scientific and Technical Information of China (English)
Lan Chao-Hui; Lan Chao-Zhen; Hu Xi-Wei; Chen Zhao-Quan; Liu Ming-Hai
2009-01-01
A self-consistent and three-dimensional (3D) model of argon discharge in a large-scale rectangular surface-wave plasma (SWP) source is presented in this paper, which is based on the finite-difference time-domain (FDTD) approximation to Maxwell's equations self-consistently coupled with a fluid model for plasma evolution. The discharge characteristics at an input microwave power of 1200 W and a filling gas pressure of 50 Pa in the SWP source are analyzed. The simulation shows the time evolution of deposited power density at different stages, and the 3D distributions of electron density and temperature in the chamber at steady state. In addition, the results show that there is a peak of plasma density approximately at a vertical distance of 3 cm from the quartz window.
Self-consistent modeling of CFETR baseline scenarios for steady-state operation
Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team
2017-07-01
Integrated modeling for core plasma is performed to increase confidence in the proposed baseline scenario in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through the consistent iterative calculation of equilibrium, transport, auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain the scenarios with q min > 2 and fusion power of ˜70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of RF current drive for the RF-only scenario is also presented. The simulation workflow for core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.
Energy Technology Data Exchange (ETDEWEB)
Keck, R.-E.
2013-07-15
turbine wake turbulence by comparison to field data and wind tunnel experiments. 3. A two-dimensional eddy viscosity model is implemented to govern the distribution of turbulent stresses in the wake deficit. The modified eddy viscosity model improves the least-square fit of the velocity field in the wake by ~13% when compared to higher-order models. 4. A method is proposed to couple the increased turbulence level experienced by a turbine operating in waked conditions to the downstream wake evolution of the wake-affected turbine. The intra-turbine turbulence coupling improved the fit of the turbulence distribution by ~40% and the wind speed distribution by ~30% over a row of eight turbines. 5. The effect of the atmospheric shear on the turbulent stresses in the wake is captured by including a local strain-rate contribution for the ambient shear gradient. This results in more realistic turbulent stress levels in regions of small wake deficit gradients; this is particularly important in the far-wake region, where atmospheric shear gradients are an important contribution to the local strain-rate. 6. A method to include the effect of atmospheric stability on the wake deficit evolution and wake meandering is described. Including the atmospheric stability effects improved the model prediction of the mean velocity field by ~19% and of the turbulence distribution by ~28% in unstable atmospheric conditions compared to actuator line results. The power production by a row of wind turbines aligned with the wind direction is reduced by ~10% in very stable conditions compared to very unstable conditions at the same turbulence intensity. This power drop is comparable to measurements from the North Hoyle and OWEZ wind farms. (Author)
Institute of Scientific and Technical Information of China (English)
Xiu-Li Sun; Wen-Yin Zhang; Jin-Zhao Wu
2004-01-01
In this paper an event-based operational interleaving semantics is proposed for real-time processes, for which action refinement and a denotational true concurrency semantics are developed and defined in terms of timed event structures. The authors characterize the timed event traces that are generated by the operational semantics in a denotational way, and show that this operational semantics is consistent with the denotational semantics in the sense that they generate the same set of timed event traces, thereby eliminating the gap between the true concurrency and interleaving semantics.
Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars
Airapetian, V. S.; Leake, J. E.; Carpenter, Kenneth G.
2015-01-01
We present the first magnetohydrodynamic model of the stellar chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using α Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfvén wave dissipation and wave reflection. We have demonstrated that due to inclusion of the effects of ion-neutral collisions in magnetized weakly ionized chromospheric plasma on resistivity and the appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfvén waves can explain non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of α Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by upward propagating non-linear Alfvén waves, are consistent with observational constraints on the net radiative losses in UV lines and the continuum from α Tau. At the top of the chromosphere, Alfvén waves experience significant reflection, producing downward propagating transverse waves that interact with upward propagating waves and produce velocity shear in the chromosphere. Our simulations also suggest that momentum deposition by non-linear Alfvén waves becomes significant in the outer chromosphere at 1 stellar radius from the photosphere. The calculated terminal velocity and the mass loss rate are consistent with the observationally derived wind properties in α Tau.
Hazard-consistent ground motions generated with a stochastic fault-rupture model
Energy Technology Data Exchange (ETDEWEB)
Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-12-15
Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s² peak acceleration, which corresponds to a 10⁻⁴ to 10⁻⁵ annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground motion acceleration generated for the 1000–1100 cm/s² range are from 1.5 to 3.0, where the deviation is evaluated with peak ground motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to
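The generation procedure described above is essentially accept-reject Monte Carlo over stochastic source parameters: keep drawing parameter sets and simulating until enough motions fall in the target peak-acceleration window. A schematic sketch in which the actual ground-motion simulator is replaced by a user-supplied callable; all names are illustrative assumptions:

```python
import random

def generate_hazard_consistent_motions(n_target, peak_range, simulate_peak,
                                       draw_params, rng=random):
    """Accept-reject Monte Carlo: draw stochastic seismic-source parameters,
    simulate the resulting peak ground acceleration, and keep only motions
    whose peak falls in the target range (cf. the ~110,000 simulations
    needed for 200 accepted motions in the abstract)."""
    accepted, trials = [], 0
    lo, hi = peak_range
    while len(accepted) < n_target:
        params = draw_params(rng)       # stochastic source characteristics
        peak = simulate_peak(params)    # placeholder for the full simulation
        trials += 1
        if lo <= peak <= hi:
            accepted.append((params, peak))
    return accepted, trials
```

The ratio `trials / n_target` is the Monte Carlo cost of hazard consistency; in the paper's case it is roughly 550 simulations per accepted motion.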
Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation
Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras
2016-04-01
into one algorithm. We are working towards presenting the first benchmarked 3D dynamic rupture models as an important step towards seismic cycle modelling of megathrust segmentation in a three-dimensional subduction setting with slow tectonic loading, self consistent fault development, and spontaneous seismicity.
Shell Effect of Superheavy Nuclei in Self-consistent Mean-Field Models
Institute of Scientific and Technical Information of China (English)
REN Zhong-Zhou; TAI Fei; XU Chang; CHEN Ding-Han; ZHANG Hu-Yong; CAI Xiang-Zhou; SHEN Wen-Qing
2004-01-01
We analyze in detail the numerical results of superheavy nuclei in the deformed relativistic mean-field model and the deformed Skyrme-Hartree-Fock model. The common points and differences of both models are systematically compared and discussed. Their consequences for the stability of superheavy nuclei are explored and explained. The theoretical results are compared with new data on superheavy nuclei from GSI and from Dubna, and reasonable agreement is reached. The nuclear shell effect in the superheavy region is analyzed and discussed. The spherical shell effect disappears in some cases due to the appearance of deformation or superdeformation in the ground states of nuclei, where valence nucleons significantly occupy the intruder levels. It is shown for the first time that the significant occupation of the intruder states by valence nucleons plays an important role in the ground-state properties of superheavy nuclei. Nuclei are stable in the deformed or superdeformed configurations. We further point out that one cannot obtain the octupole deformation of even-even nuclei in the present relativistic mean-field model with the σ, ω and ρ mesons, because there is no parity-violating interaction and the conservation of parity of even-even nuclei is a basic assumption of the model.
Cafiso, Salvatore; Di Graziano, Alessandro; Di Silvestro, Giacomo; La Cava, Grazia; Persaud, Bhagwant
2010-07-01
In Europe, approximately 60% of road accident fatalities occur on two-lane rural roads. Thus, research to develop and enhance explanatory and predictive models for this road type continues to be of interest in mitigating these accidents. To this end, this paper describes a novel and extensive data collection and modeling effort to define accident models for two-lane road sections based on a unique combination of exposure, geometry, consistency and context variables directly related to the safety performance. The first part of the paper documents how these were identified for the segmentation of highways into homogeneous sections. Next is a description of the extensive data collection effort that utilized differential kinematic GPS surveys to define the horizontal alignment variables, and road safety inspections (RSIs) to quantify the other road characteristics related to safety. The final part of the paper focuses on the calibration of models for estimating the expected number of accidents on homogeneous sections that can be characterized by constant values of the explanatory variables. Several candidate models were considered for calibration using the Generalized Linear Modeling (GLM) approach. After considering the statistical significance of the parameters related to exposure, geometry, consistency and context factors, and goodness of fit statistics, 19 models were ranked and three were selected as the recommended models. The first of the three is a base model, with length and traffic as the only predictor variables; since these variables are the only ones likely to be available network-wide, this base model can be used in an empirical Bayesian calculation to conduct network screening for ranking "sites with promise" of safety improvement. The other two models represent the best statistical fits with different combinations of significant variables related to exposure, geometry, consistency and context factors. These multiple variable models can be used, with
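The empirical Bayes network-screening step that the base model enables combines the model prediction with each site's observed count using the standard negative binomial weighting; a minimal sketch with made-up numbers (in practice the overdispersion parameter k comes from the calibrated model):

```python
def eb_estimate(mu, observed, k):
    """Empirical Bayes safety estimate (Hauer's weighting): shrink the
    observed accident count toward the base-model prediction mu, with
    the weight on mu falling as the negative binomial overdispersion k
    or the prediction itself grows."""
    w = 1.0 / (1.0 + k * mu)
    return w * mu + (1.0 - w) * observed

# Hypothetical section: the base model predicts 5 accidents, 10 were observed.
expected = eb_estimate(5.0, 10.0, 0.2)  # w = 0.5, so the estimate is 7.5
```

Sections would then be ranked by the excess of the EB estimate over the model prediction, the usual "potential for safety improvement" screen.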
Strong consistency of maximum quasi-likelihood estimates in generalized linear models
Institute of Scientific and Technical Information of China (English)
YIN Changming; ZHAO Lincheng
2005-01-01
In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors Z_i, and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^{n} Z_i Z_i′, a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution β̂_n for all large sample sizes n, which converges to the true regression parameter β₀. This result is an essential improvement over the relevant results in the literature.
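As a toy instance of this estimating framework (not the paper's general setting, which allows q × 1 responses and matrix regressors), the scalar quasi-likelihood equation ∑_i z_i (y_i − μ(z_i β)) = 0 with a logistic mean function can be solved by Newton iteration, and the solution lands near the true parameter once n is large; all data below are simulated:

```python
import math
import random

def solve_quasi_likelihood(z, y, beta0=0.0, iters=50):
    """Newton iteration on the scalar quasi-likelihood (score) equation
    sum_i z_i * (y_i - mu(z_i * beta)) = 0 with logistic mean
    mu(t) = 1 / (1 + exp(-t))."""
    beta = beta0
    for _ in range(iters):
        score = deriv = 0.0
        for zi, yi in zip(z, y):
            mu = 1.0 / (1.0 + math.exp(-zi * beta))
            score += zi * (yi - mu)
            deriv -= zi * zi * mu * (1.0 - mu)  # d(score)/d(beta)
        beta -= score / deriv
    return beta

rng = random.Random(1)
true_beta = 1.5
z = [rng.uniform(-2.0, 2.0) for _ in range(5000)]
y = [1.0 if rng.random() < 1.0 / (1.0 + math.exp(-zi * true_beta)) else 0.0
     for zi in z]
beta_hat = solve_quasi_likelihood(z, y)
```

The paper's contribution is the strong (almost sure) version of exactly this convergence, under much weaker conditions than the toy example requires.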
A self-consistent first-principle based approach to model carrier mobility in organic materials
Energy Technology Data Exchange (ETDEWEB)
Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the material's morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional theory with semi-empirical tight-binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
Self-consistent Keldysh approach to quenches in the weakly interacting Bose-Hubbard model
Lo Gullo, N.; Dell'Anna, L.
2016-11-01
We present a nonequilibrium Green's-function approach to study the dynamics following a quench in the weakly interacting Bose-Hubbard model (BHM). The technique is based on the self-consistent solution of a set of equations which represents a particular case of the most general set of Hedin's equations for the interacting single-particle Green's function. We use the ladder approximation as a skeleton diagram for the two-particle scattering amplitude, which enters, through the self-energy in the Dyson equation, the determination of the interacting single-particle Green's function. This scheme is then implemented numerically by a parallelized code. We exploit this approach to study the correlation propagation after a quench in the interaction parameter, in one and two dimensions. In particular, we show how our approach is able to recover the crossover from the ballistic to the diffusive regime as the boson-boson interaction is increased. Finally, we also discuss the role of a thermal initial state in the dynamics of both one- and two-dimensional BHMs, finding that, surprisingly, at high temperature a ballistic evolution is restored.
Self-consistent model of a solid for the description of lattice and magnetic properties
Balcerzak, T.; Szałowski, K.; Jaščur, M.
2017-03-01
In the paper a self-consistent theoretical description of the lattice and magnetic properties of a model system with magnetoelastic interaction is presented. The dependence of the magnetic exchange integrals on the distance between interacting spins is assumed, which couples the magnetic and lattice subsystems. The framework is based on summation of the Gibbs free energies for the lattice and magnetic subsystems. On the basis of the minimization principle for the Gibbs energy, a set of equations of state for the system is derived. These equations of state combine the parameters describing the elastic properties (relative volume deformation) and the magnetic properties (magnetization changes). The formalism is extensively illustrated with numerical calculations performed for a system of ferromagnetically coupled spins S=1/2 localized at the sites of a simple cubic lattice. In particular, the significant influence of the magnetic subsystem on the elastic properties is demonstrated. It manifests itself in significant modification of such quantities as the relative volume deformation, thermal expansion coefficient or isothermal compressibility, especially in the vicinity of the magnetic phase transition. On the other hand, the influence of the lattice subsystem on the magnetic one is also evident. It takes, for example, the form of a dependence of the critical (Curie) temperature and of the magnetization itself on the external pressure, which is thoroughly investigated.
An internally consistent inverse model to calculate ridge-axis hydrothermal fluxes
Coogan, L. A.; Dosso, S.
2010-12-01
Fluid and chemical fluxes from high-temperature, on-axis, hydrothermal systems at mid-ocean ridges have been estimated in a number of ways. These generally use simple mass balances based on either vent fluid compositions or the compositions of altered sheeted dikes. Here we combine these approaches in an internally consistent model. Seawater is assumed to enter the crust and react with the sheeted dike complex at high temperatures. Major element fluxes for both the rock and fluid are calculated from balanced stoichiometric reactions. These reactions include end-member components of the minerals plagioclase, pyroxene, amphibole, chlorite and epidote along with pure anhydrite, quartz, pyrite, pyrrhotite, titanite, magnetite, ilmenite and ulvospinel and the fluid species H2O, Mg2+, Ca2+, Fe2+, Na+, Si4+, H2S, H+ and H2. Trace element abundances (Li, B, K, Rb, Cs, Sr, Ba, U, Tl, Mn, Cu, Zn, Co, Ni, Pb and Os) and isotopic ratios (Li, B, O, Sr, Tl, Os) are calculated from simple mass balance of a fluid-rock reaction. A fraction of the Cu, Zn, Pb, Co, Ni, Os and Mn in the fluid after fluid-rock reaction is allowed to precipitate during discharge before the fluid reaches the seafloor. S-isotopes are tied to mineralogical reactions involving S-bearing phases. The free parameters in the model are the amounts of each mineralogical reaction that occurs, the amounts of the metals precipitated during discharge, and the water-to-rock ratio. These model parameters, and their uncertainties, are constrained by: (i) mineral abundances and mineral major element compositions in altered dikes from ODP Hole 504B and the Pito and Hess Deep tectonic windows (EPR crust); (ii) changes in dike bulk-rock trace element and isotopic compositions from these locations relative to fresh MORB glass compositions; and (iii) published vent fluid compositions from basalt-hosted high-temperature ridge axis hydrothermal systems. Using a numerical inversion algorithm, the probability density of different
A self-consistent model for the evolution of the gas produced in the debris disc of β Pictoris
Kral, Q.; Wyatt, M.; Carswell, R. F.; Pringle, J. E.; Matrà, L.; Juhász, A.
2016-09-01
This paper presents a self-consistent model for the evolution of gas produced in the debris disc of β Pictoris. Our model proposes that atomic carbon and oxygen are created from the photodissociation of CO, which is itself released from volatile-rich bodies in the debris disc due to grain-grain collisions or photodesorption. While the CO lasts less than one orbit, the atomic gas evolves by viscous spreading resulting in an accretion disc inside the parent belt and a decretion disc outside. The temperature, ionization fraction and population levels of carbon and oxygen are followed with the photodissociation region model CLOUDY, which is coupled to a dynamical viscous α model. We present new gas observations of β Pic, of C I observed with Atacama Pathfinder EXperiment and O I observed with Herschel, and show that these along with published C II and CO observations can all be explained with this new model. Our model requires a viscosity α > 0.1, similar to that found in sufficiently ionized discs of other astronomical objects; we propose that the magnetorotational instability is at play in this highly ionized and dilute medium. This new model can be tested from its predictions for high-resolution ALMA observations of C I. We also constrain the water content of the planetesimals in β Pic. The scenario proposed here might be at play in all debris discs and this model could be used more generally on all discs with C, O or CO detections.
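The viscous spreading of the secondary gas described above can be caricatured with a simple diffusion sketch; the real model solves the full viscous disc equation coupled to CLOUDY, while here plain 1D diffusion with a steady source cell merely illustrates how gas released at the parent belt spreads both inward and outward (all grid sizes and rates are arbitrary):

```python
def spread_gas(nr=60, steps=4000, dt=2e-4, nu=1.0, src_i=30):
    """Schematic stand-in for viscous spreading: explicit 1D diffusion
    of a surface-density proxy with a steady source at the parent-belt
    cell. The real model evolves the full viscous disc equation; this
    only illustrates gas released at the belt spreading both ways."""
    sigma = [0.0] * nr
    for _ in range(steps):
        sigma[src_i] += dt  # steady gas production at the belt
        prev = sigma[:]
        for i in range(1, nr - 1):
            sigma[i] = prev[i] + nu * dt * (prev[i-1] - 2.0*prev[i] + prev[i+1])
    return sigma

sigma = spread_gas()
```

The explicit scheme is stable here because nu*dt is far below the usual 0.5 limit for a unit grid spacing; the surface density peaks at the belt and decays monotonically toward the inner (accretion) and outer (decretion) regions.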
THE GAMMA-RAY AND NEUTRINO SKY: A CONSISTENT PICTURE OF FERMI-LAT, MILAGRO, AND ICECUBE RESULTS
Energy Technology Data Exchange (ETDEWEB)
Gaggero, Daniele; Urbano, Alfredo; Valli, Mauro [SISSA and INFN, via Bonomea 265, I-34136 Trieste (Italy); Grasso, Dario; Marinelli, Antonio, E-mail: d.gaggero@uva.nl, E-mail: alfredo.leonardo.urbano@cern.ch, E-mail: mauro.valli@sissa.it, E-mail: dario.grasso@pi.infn.it, E-mail: antonio.marinelli@pi.infn.it [INFN and Dipartimento di Fisica “E. Fermi,” Pisa University, Largo B. Pontecorvo 3, I-56127 Pisa (Italy)
2015-12-20
We compute the γ-ray and neutrino diffuse emission of the Galaxy on the basis of a recently proposed phenomenological model characterized by radially dependent cosmic-ray (CR) transport properties. We show how this model, designed to reproduce both Fermi-LAT γ-ray data and local CR observables, naturally reproduces the anomalous TeV diffuse emission observed by Milagro in the inner Galactic plane. Above 100 TeV our picture predicts a neutrino flux that is about five (two) times larger than the neutrino flux computed with conventional models in the Galactic Center region (full-sky). This emission explains up to ∼25% of the flux measured by IceCube; we reproduce the full-sky IceCube spectrum by adding an extragalactic component derived from the muon-neutrino flux in the northern hemisphere. We also present precise predictions for the Galactic plane region, where the flux is dominated by the Galactic emission.
Functional connectivity modeling of consistent cortico-striatal degeneration in Huntington's disease
Directory of Open Access Journals (Sweden)
Imis Dogan
2015-01-01
Huntington's disease (HD) is a progressive neurodegenerative disorder characterized by a complex neuropsychiatric phenotype. In a recent meta-analysis we identified core regions of consistent neurodegeneration in premanifest HD in the striatum and middle occipital gyrus (MOG). For early manifest HD, convergent evidence of atrophy was most prominent in the striatum, motor cortex (M1) and inferior frontal junction (IFJ). The aim of the present study was to functionally characterize this topography of brain atrophy and to investigate differential connectivity patterns formed by consistent cortico-striatal atrophy regions in HD. Using areas of striatal and cortical atrophy at different disease stages as seeds, we performed task-free resting-state and task-based meta-analytic connectivity modeling (MACM). MACM utilizes the large data source of the BrainMap database and identifies significant areas of above-chance co-activation with the seed region via the activation-likelihood-estimation approach. In order to delineate functional networks formed by cortical as well as striatal atrophy regions, we computed the conjunction between the co-activation profiles of striatal and cortical seeds in the premanifest and manifest stages of HD, respectively. Functional characterization of the seeds was obtained using the behavioral meta-data of BrainMap. Cortico-striatal atrophy seeds of the premanifest stage of HD showed common co-activation with a rather cognitive network including the striatum, anterior insula, lateral prefrontal, premotor, supplementary motor and parietal regions. A similar but more pronounced co-activation pattern, additionally including the medial prefrontal cortex and thalamic nuclei, was found with striatal and IFJ seeds at the manifest HD stage. The striatum and M1 were functionally connected mainly to premotor and sensorimotor areas, posterior insula, putamen and thalamus. Behavioral characterization of the seeds confirmed that experiments
The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture
Directory of Open Access Journals (Sweden)
Parna Kazemian
2014-07-01
The identification of the climate of a particular place and the analysis of climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of the climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general examination of the obtained information, on the one hand, the range of changes is identified, and, on the other hand, their practical uses in the future are selected. In the present paper, the bioclimatic conditions of Bahar city, according to the 29-year statistics of the synoptic station between 1976 and 2005, were examined using the Olgyay and Mahoney indexes. It should be added that, because of the short distance between Bahar and Hamedan, they share a single synoptic station. The results indicate that Bahar city has dominantly cold weather during most of the months. Therefore, based on the implications of each method, the principles of the suggested architectural design can be integrated and improved in order to achieve sustainable development.
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
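The revised Monod equation mentioned above multiplies the usual kinetic saturation term by a thermodynamic factor that vanishes when the catabolic energy yield just balances the cost of ATP synthesis; a sketch after the Jin-Bethke form, with all parameter values illustrative rather than taken from the paper:

```python
import math

R_KJ = 8.314e-3  # gas constant, kJ/(mol*K)

def revised_monod_rate(k_max, conc, k_s, dg_redox, dg_atp, m_atp,
                       chi=1.0, temp=298.15):
    """Revised Monod rate after Jin and Bethke: the kinetic saturation
    factor conc/(k_s + conc) times a thermodynamic factor F_T that goes
    to zero when the redox energy yield (dg_redox, kJ/mol, negative for
    a favorable reaction) just pays the ATP synthesis cost
    (m_atp * dg_atp)."""
    f_kinetic = conc / (k_s + conc)
    f_thermo = 1.0 - math.exp((dg_redox + m_atp * dg_atp) / (chi * R_KJ * temp))
    return k_max * f_kinetic * max(f_thermo, 0.0)

# Far from equilibrium the thermodynamic factor is ~1; at break-even it is 0.
r_fast = revised_monod_rate(10.0, 1.0, 1.0, -100.0, 45.0, 1)
r_eq = revised_monod_rate(10.0, 1.0, 1.0, -45.0, 45.0, 1)
```

A rate computed this way would set the substrate-uptake bound handed to the FBA step, which is the coupling the abstract describes between PHREEQC-style geochemistry and COBRA-style metabolic modeling.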
DEFF Research Database (Denmark)
Sogachev, Andrey; Kelly, Mark C.; Leclerc, Monique Y.
2012-01-01
A self-consistent two-equation closure treating buoyancy and plant drag effects has been developed, through consideration of the behaviour of the supplementary equation for the length-scale-determining variable in homogeneous turbulent flow. Being consistent with the canonical flow regimes of gri...
Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.
2012-01-01
The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-esteem assessments, whereas state factors accounted for about 16% of the variance in repeated assessments of latent self-esteem. The stability of individual differences in self-esteem increased with age consistent with the cumulative continuity principle of personality development. PMID:23180899
A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection
Directory of Open Access Journals (Sweden)
Chia-Hua Cheng
2017-08-01
In today's rapidly changing economy, technology companies have to make decisions on research and development (R&D) project investment on a routine basis, with such decisions having a direct impact on the company's profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors, such as resource limitations and differences in assessment, with consideration of both qualitative and quantitative criteria. Often, differences in perception by the various stakeholders hinder the attainment of a consensus of opinion and coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of the complex criteria, taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause-and-effect relations representing the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in the information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combination approach is then used to assist an international brand-name company in prioritizing projects and making project decisions that will maximize returns and ensure sustainability for the company.
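The consistency device behind CFPR can be shown compactly: from only the n−1 adjacent judgements, additive transitivity fills in the rest of the preference matrix, which is how the method cuts the number of pairwise comparisons relative to a full ANP survey. A minimal sketch:

```python
def cfpr_matrix(adjacent):
    """Complete a consistent fuzzy preference relation from the n-1
    adjacent judgements p(i, i+1) in [0, 1], using additive
    transitivity p(i, k) = p(i, j) + p(j, k) - 0.5 and additive
    reciprocity p(j, i) = 1 - p(i, j)."""
    n = len(adjacent) + 1
    p = [[0.5] * n for _ in range(n)]
    for i, v in enumerate(adjacent):
        p[i][i + 1] = v
        p[i + 1][i] = 1.0 - v
    for span in range(2, n):  # fill longer-range entries from shorter ones
        for i in range(n - span):
            j = i + span
            p[i][j] = p[i][j - 1] + p[j - 1][j] - 0.5
            p[j][i] = 1.0 - p[i][j]
    return p

# Three criteria, two expert judgements: the third comparison is implied.
p = cfpr_matrix([0.7, 0.6])  # p[0][2] comes out as 0.7 + 0.6 - 0.5 = 0.8
```

Entries produced this way can stray outside [0, 1] for strong preference chains; the CFPR literature then rescales the whole matrix with a linear transform, a step omitted in this sketch.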
Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers
Cartar, William; Mørk, Jesper; Hughes, Stephen
2017-08-01
We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) are contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N =9 , which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also
Reed, S.C.; Vitousek, P.M.; Cleveland, C.C.
2011-01-01
Accurately predicting the effects of global change on net carbon (C) exchange between terrestrial ecosystems and the atmosphere requires a more complete understanding of how nutrient availability regulates both plant growth and heterotrophic soil respiration. Models of soil development suggest that the nature of nutrient limitation changes over the course of ecosystem development, transitioning from nitrogen (N) limitation in 'young' sites to phosphorus (P) limitation in 'old' sites. However, previous research has focused primarily on plant responses to added nutrients, and the applicability of nutrient limitation-soil development models to belowground processes has not been thoroughly investigated. Here, we assessed the effects of nutrients on soil C cycling in three different forests that occupy a 4 million year substrate age chronosequence where tree growth is N limited at the youngest site, co-limited by N and P at the intermediate-aged site, and P limited at the oldest site. Our goal was to use short-term laboratory soil C manipulations (using 14C-labeled substrates) and longer-term intact soil core incubations to compare belowground responses to fertilization with aboveground patterns. When nutrients were applied with labile C (sucrose), patterns of microbial nutrient limitation were similar to plant patterns: microbial activity was limited more by N than by P in the young site, and P was more limiting than N in the old site. However, in the absence of C additions, increased respiration of native soil organic matter only occurred with simultaneous additions of N and P. Taken together, these data suggest that altered nutrient inputs into ecosystems could have dissimilar effects on C cycling above- and belowground, that nutrients may differentially affect the fate of different soil C pools, and that future changes to the net C balance of terrestrial ecosystems will be partially regulated by soil nutrient status. © 2010 US Government.
A self-consistent impedance method for electromagnetic surface impedance modeling
Thiel, David V.; Mittra, Raj
2001-01-01
A two-dimensional, self-consistent impedance method has been derived and used to calculate the electromagnetic surface impedance above buried objects at very low frequencies. The earth half space is discretized using an array of impedance elements. Inhomogeneities in the complex permittivity of the earth are reflected in variations in these impedance elements. The magnetic field is calculated for each cell in the solution space using a difference equation derived from Faraday's and Ampere's laws. It is necessary to include an air layer above the earth's surface to allow the scattered magnetic field to be calculated at the surface. The source field is applied above the earth's surface as a Dirichlet boundary condition, whereas the Neumann condition is employed at all other boundaries in the solution space. This, in turn, enables users to use both finite and infinite magnetic field sources as excitations. The technique is shown to be computationally efficient and yields reasonably accurate results when applied to a number of one- and two-dimensional earth structures with a known surface impedance distribution.
Directory of Open Access Journals (Sweden)
Sam Walcott
2015-11-01
Muscle contracts due to ATP-dependent interactions of myosin motors with thin filaments composed of the proteins actin, troponin, and tropomyosin. Contraction is initiated when calcium binds to troponin, which changes conformation and displaces tropomyosin, a filamentous protein that wraps around the actin filament, thereby exposing myosin binding sites on actin. Myosin motors interact with each other indirectly via tropomyosin, since myosin binding to actin locally displaces tropomyosin and thereby facilitates binding of nearby myosin. Defining and modeling this local coupling between myosin motors is an open problem in muscle modeling and, more broadly, a requirement for understanding the connection between muscle contraction at the molecular and macro scale. It is challenging to directly observe this coupling, and such measurements have only recently been made. Analysis of these data suggests that two myosin heads are required to activate the thin filament. This result contrasts with a theoretical model that reproduces several indirect measurements of myosin coupling while assuming that a single myosin head can activate the thin filament. To understand this apparent discrepancy, we incorporated the model into stochastic simulations of the experiments, which generated simulated data that were then analyzed identically to the experimental measurements. By varying a single parameter, good agreement between simulation and experiment was established. The conclusion that two myosin molecules are required to activate the thin filament arises from an assumption, made during data analysis, that the intensity of the fluorescent tags attached to myosin varies depending on experimental condition. We provide an alternative explanation that reconciles theory and experiment without assuming that the intensity of the fluorescent tags varies.
A self-consistent 3D model of fluctuations in the helium-ionizing background
Davies, Frederick B.; Furlanetto, Steven R.; Dixon, Keri L.
2017-03-01
Large variations in the effective optical depth of the He II Lyα forest have been observed at z ≳ 2.7, but the physical nature of these variations is uncertain: either the Universe is still undergoing the process of He II reionization, or the Universe is highly ionized but the He II-ionizing background fluctuates significantly on large scales. In an effort to build upon our understanding of the latter scenario, we present a novel model for the evolution of ionizing background fluctuations. Previous models have assumed the mean free path of ionizing photons to be spatially uniform, ignoring the dependence of that scale on the local ionization state of the intergalactic medium (IGM). This assumption is reasonable when the mean free path is large compared to the average distance between the primary sources of He II-ionizing photons, ≳ L⋆ quasars. However, when this is no longer the case, the background fluctuations become more severe, and an accurate description of the average propagation of ionizing photons through the IGM requires additionally accounting for the fluctuations in opacity. We demonstrate the importance of this effect by constructing 3D semi-analytic models of the helium-ionizing background from z = 2.5-3.5 that explicitly include a spatially varying mean free path of ionizing photons. The resulting distribution of effective optical depths at large scales in the He II Lyα forest is very similar to the latest observations with HST/COS at 2.5 ≲ z ≲ 3.5.
Consistency of different tropospheric models and mapping functions for precise GNSS processing
Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio
2017-04-01
The TOmographic Model of the IONospheric electron content (TOMION) software implements simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In a first scenario, only forward processing was activated and the coordinates of the Wide Area GNSS network were loosely constrained, without fixing the carrier phase ambiguities, for both reference and rover receivers. In a second one, precise point positioning (PPP) was implemented, iterating with a fixed coordinate set for the second day. Comparisons between the TOMION, IGS and GIPSY estimates have been performed, and for the first, IGS clocks and orbits were considered. The agreement with GIPSY results seems to be 10 times better than with the IGS final ZTD product, despite IGS products having been considered for the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP, in the worst cases. Moreover, the Vienna mapping function generally showed better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm in all studied situations, with slightly better performance for the Niell one. Further improvement could be achieved for such estimations with coefficients for the Vienna mapping function calculated from ray tracing, as well as by integrating comparative meteorological parameters.
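The Niell and Vienna mapping functions compared above share the same normalised three-term continued-fraction form; they differ in how the coefficients a, b, c are obtained (Niell from latitude/day-of-year tables, Vienna from numerical weather model data). A sketch with illustrative, non-official coefficients:

```python
import math

def mapping_function(elev_rad, a, b, c):
    """Normalised three-term continued fraction shared by the Niell and
    Vienna tropospheric mapping functions: equals 1 at zenith and scales
    the zenith delay to a given elevation angle."""
    s = math.sin(elev_rad)
    numerator = 1.0 + a / (1.0 + b / (1.0 + c))
    denominator = s + a / (s + b / (s + c))
    return numerator / denominator

# Illustrative (not Niell/Vienna) coefficients: the factor is 1 at zenith and
# grows toward ~10 at 5 degrees elevation, which is why ZTD errors map
# strongly into low-elevation observations.
mf_zenith = mapping_function(math.radians(90.0), 1.2e-3, 2.9e-3, 62.6e-3)
mf_low = mapping_function(math.radians(5.0), 1.2e-3, 2.9e-3, 62.6e-3)
```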
Wan, Li; Xu, Shixin; Liao, Maijia; Liu, Chun; Sheng, Ping
2014-01-01
In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the "charge regulation" behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied electric field, a
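The comparison between channel width and screening length that governs when the Debye layers overlap can be made concrete with the textbook Debye-length formula; the constants below are standard assumed values, not parameters of the paper's model:

```python
import math

# Physical constants (SI) and water at room temperature (assumed values)
E_CHARGE = 1.602e-19            # elementary charge [C]
K_B = 1.381e-23                 # Boltzmann constant [J/K]
N_A = 6.022e23                  # Avogadro number [1/mol]
EPS_WATER = 78.5 * 8.854e-12    # permittivity of water [F/m]

def debye_length(c_molar, T=298.0):
    """Debye screening length of a symmetric 1:1 electrolyte,
    lambda_D = sqrt(eps*kB*T / (2*n*e^2)), with n the number density
    of each ion species."""
    n = c_molar * 1000.0 * N_A   # mol/L -> ions per m^3 (each species)
    return math.sqrt(EPS_WATER * K_B * T / (2.0 * n * E_CHARGE**2))

# The channel-width comparison in the abstract is against this scale:
# ~3 nm at 10 mM, so a 100 nm channel is "wide" but a 5 nm one is not.
lam = debye_length(0.01)
```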
Kral, Quentin; Carswell, Robert; Pringle, Jim; Matra, Luca; Juhasz, Attila
2016-01-01
This paper presents a self-consistent model for the evolution of gas produced in the debris disc of $\\beta$ Pictoris. Our model proposes that atomic carbon and oxygen are created from the photodissociation of CO, which is itself released from volatile-rich bodies in the debris disc due to grain-grain collisions or photodesorption. While the CO lasts less than one orbit, the atomic gas evolves by viscous spreading resulting in an accretion disc inside the parent belt and a decretion disc outside. The temperature, ionisation fraction and population levels of carbon and oxygen are followed with the photodissociation region model Cloudy, which is coupled to a dynamical viscous $\\alpha$ model. We present new gas observations of $\\beta$ Pic, of C I observed with APEX and O I observed with Herschel, and show that these along with published C II and CO observations can all be explained with this new model. Our model requires a viscosity $\\alpha$ > 0.1, similar to that found in sufficiently ionised discs of other astr...
Bonnet-Lebrun, Anne-Sophie
2017-03-17
Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modelled communities - i.e., with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities - from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (shape of phylogenetic tree and species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in pre-equilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of empirical communities that we have studied can, to a large extent, be explained through a purely neutral model under pre-equilibrium conditions. This article is protected by copyright. All rights reserved.
Dai, Junyi; Kerestes, Rebecca; Upton, Daniel J.; Busemeyer, Jerome R.; Stout, Julie C.
2015-01-01
The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning (EVL) model and the prospect valence learning (PVL) model, have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models. PMID:25814963
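The winning model's three ingredients can be sketched as follows; function names, parameter values, and the single-curvature simplification are illustrative assumptions, not the authors' exact specification:

```python
import math

def prospect_utility(gain, loss, alpha, lam):
    """Prospect-type utility treating gains and losses separately
    (a single curvature parameter is assumed here for simplicity)."""
    return gain ** alpha - lam * (loss ** alpha)

def decay_reinforcement(E, choice, utility, decay):
    """Decay-reinforcement rule: every expectancy decays each trial and
    the chosen option additionally absorbs the trial's utility."""
    return [decay * e + (utility if i == choice else 0.0)
            for i, e in enumerate(E)]

def choice_probabilities(E, theta):
    """Trial-independent choice rule: softmax with a constant sensitivity
    theta that does not grow with trial number."""
    w = [math.exp(theta * e) for e in E]
    total = sum(w)
    return [x / total for x in w]

# One trial over four decks: deck 0 pays 100 but loses 250
E = [0.0, 0.0, 0.0, 0.0]
p_before = choice_probabilities(E, theta=0.5)   # uniform before learning
u = prospect_utility(gain=100.0, loss=250.0, alpha=0.5, lam=1.5)
E = decay_reinforcement(E, choice=0, utility=u, decay=0.8)
p_after = choice_probabilities(E, theta=0.5)    # deck 0 now less likely
```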
Directory of Open Access Journals (Sweden)
Junyi eDai
2015-03-01
Full Text Available The Iowa Gambling Task (IGT and the Soochow Gambling Task (SGT are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL and the prospect valence learning model (PVL, have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79 and 27 control participants (mean age 35; SD 10.44 completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
Groenendijk, Peter; van der Sleen, Peter; Vlam, Mart; Bunyavejchewin, Sarayudh; Bongers, Frans; Zuidema, Pieter A
2015-10-01
The important role of tropical forests in the global carbon cycle makes it imperative to assess changes in their carbon dynamics for accurate projections of future climate-vegetation feedbacks. Forest monitoring studies conducted over the past decades have found evidence for both increasing and decreasing growth rates of tropical forest trees. The limited duration of these studies restrained analyses to decadal scales, and it is still unclear whether growth changes occurred over longer time scales, as would be expected if CO2 -fertilization stimulated tree growth. Furthermore, studies have so far dealt with changes in biomass gain at forest-stand level, but insights into species-specific growth changes - that ultimately determine community-level responses - are lacking. Here, we analyse species-specific growth changes on a centennial scale, using growth data from tree-ring analysis for 13 tree species (~1300 trees), from three sites distributed across the tropics. We used an established (regional curve standardization) and a new (size-class isolation) growth-trend detection method and explicitly assessed the influence of biases on the trend detection. In addition, we assessed whether aggregated trends were present within and across study sites. We found evidence for decreasing growth rates over time for 8-10 species, whereas increases were noted for two species and one showed no trend. Additionally, we found evidence for weak aggregated growth decreases at the site in Thailand and when analysing all sites simultaneously. The observed growth reductions suggest deteriorating growth conditions, perhaps due to warming. However, other causes cannot be excluded, such as recovery from large-scale disturbances or changing forest dynamics. Our findings contrast growth patterns that would be expected if elevated CO2 would stimulate tree growth. These results suggest that commonly assumed growth increases of tropical forests may not occur, which could lead to erroneous
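The regional curve standardization (RCS) method mentioned above can be sketched in a few lines: average ring width by cambial age, then divide each observed ring by the expected width at its age. The data and helper names below are illustrative:

```python
from collections import defaultdict

def regional_curve(ring_series):
    """Mean ring width at each cambial age across all sampled trees;
    this curve captures the ontogenetic (age-related) growth trend."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for series in ring_series:
        for age, width in enumerate(series):
            sums[age] += width
            counts[age] += 1
    return {age: sums[age] / counts[age] for age in sums}

def rcs_indices(series, curve):
    """Dividing each observed ring by the expected width at that age removes
    the age trend; any remaining trend against calendar year is the
    candidate growth change."""
    return [width / curve[age] for age, width in enumerate(series)]

# Two toy trees, ring widths listed by cambial age (year of tree life)
trees = [[2.0, 1.5, 1.0, 0.8], [2.4, 1.7, 1.2, 0.8]]
curve = regional_curve(trees)
indices = rcs_indices(trees[0], curve)
```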
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2015-03-01
Regional climate modelling sometimes requires that the regional model be nudged towards the large-scale driving data to avoid the development of inconsistencies between them. These inconsistencies are known to produce large surface temperature and rainfall artefacts. It is therefore essential to keep the synoptic circulation within the simulation domain consistent with the synoptic circulation at the domain boundaries. Nudging techniques, initially developed for data assimilation purposes, are increasingly used in regional climate modelling and offer a workaround to this issue. In this context, several questions on the "optimal" use of nudging are still open. In this study we focus on a specific question: which variable should be nudged in order to keep the regional model as consistent as possible with the driving fields? For that, a "Big Brother Experiment", where a reference atmospheric state is known, is conducted using the Weather Research and Forecasting (WRF) model over the Euro-Mediterranean region. A set of 22 3-month simulations is performed with different sets of nudged variables and nudging options (no nudging, indiscriminate nudging, spectral nudging) for summer and winter. The results show that nudging clearly improves the model's capacity to reproduce the reference fields. However, the skill scores depend on the set of variables used to nudge the regional climate simulations. Nudging the tropospheric horizontal wind is by far the key choice for correctly simulating surface temperature, wind, and rainfall. To a lesser extent, nudging tropospheric temperature also contributes to significantly improving the simulations. Indeed, nudging tropospheric wind or temperature directly impacts the simulation of the tropospheric geopotential height and thus the synoptic-scale atmospheric circulation. Nudging moisture improves the precipitation, but the impact on the other fields (wind and temperature) is not significant. As
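The difference between indiscriminate and spectral nudging can be sketched as two relaxation tendency terms; the relaxation time and cutoff wavenumber below are illustrative assumptions, not the WRF settings used in the study:

```python
import numpy as np

def grid_nudge_tendency(u, u_ref, tau):
    """Indiscriminate (grid-point) nudging: Newtonian relaxation of the
    full field toward the driving data with time scale tau."""
    return (u_ref - u) / tau

def spectral_nudge_tendency(u, u_ref, tau, k_max):
    """Spectral nudging: relax only wavenumbers <= k_max, leaving the
    small scales free to develop their own regional detail."""
    dhat = np.fft.rfft(u_ref - u)
    dhat[k_max + 1:] = 0.0            # keep only the large-scale part
    return np.fft.irfft(dhat, n=u.size) / tau

# 1D toy field: the model agrees with the driver at large scales but has
# extra small-scale detail, which spectral nudging leaves untouched.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u_ref = np.sin(x)                     # large-scale driving field
u = u_ref + 0.2 * np.sin(20.0 * x)    # model with small-scale detail
full = grid_nudge_tendency(u, u_ref, tau=6.0 * 3600.0)
spectral = spectral_nudge_tendency(u, u_ref, tau=6.0 * 3600.0, k_max=3)
```

The grid-point tendency would damp the model's own wavenumber-20 detail, while the spectral tendency is essentially zero here since the model-driver difference lives entirely above the cutoff.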
Choi, Ho-Meoyng
2014-01-01
We discuss the link between the chiral symmetry of QCD and the numerical results of the light-front quark model (LFQM), analyzing both the two-point and three-point functions of a pseudoscalar meson from the perspective of the vacuum fluctuation consistent with the chiral symmetry of QCD. The two-point and three-point functions are exemplified in this work by the twist-2 and twist-3 distribution amplitudes of a pseudoscalar meson and the pion elastic form factor, respectively. The present analysis of the pseudoscalar meson is commensurate with the previous analysis of the vector meson two-point function and fortifies our observation that the light-front quark model with effective degrees of freedom represented by the constituent quark and antiquark may provide a view of an effective zero-mode cloud around the quark and antiquark inside the meson. Consequently, the constituents dressed by the zero-mode cloud may be expected to satisfy the chiral symmetry of QCD. Our results appear consistent with this expectation...
A consistent use of the Gurson-Tvergaard-Needleman damage model for the R-curve calculation
Directory of Open Access Journals (Sweden)
Gabriele Cricrì
2013-04-01
Full Text Available The scope of the present work is to point out a consistent simulation procedure for quasi-static fracture processes, starting from the micro-structural characteristics of the material. To this aim, a local nine-parameter Gurson-Tvergaard-Needleman (GTN) damage law has been used. The damage parameters depend on the micro-structural characteristics and must be calculated, measured or appropriately tuned. This can be done, as proposed by the author, by using an appropriately tuned GTN model for the representative volume element simulations, in order to enrich the original damage model by also considering the defect size distribution. Once all the material parameters are determined, an MT fracture test has been simulated with a FE code to calculate the R-curve in an aeronautical Al-based alloy. The simulation procedure produced results in very good agreement with the experimental data.
Leboissertier, Anthony; Okong'O, Nora; Bellan, Josette
2005-01-01
Large-eddy simulation (LES) is conducted of a three-dimensional temporal mixing layer whose lower stream is initially laden with liquid drops which may evaporate during the simulation. The gas-phase equations are written in an Eulerian frame for two perfect gas species (carrier gas and vapour emanating from the drops), while the liquid-phase equations are written in a Lagrangian frame. The effect of drop evaporation on the gas phase is considered through mass, species, momentum and energy source terms. The drop evolution is modelled using physical drops, or using computational drops to represent the physical drops. Simulations are performed using various LES models previously assessed on a database obtained from direct numerical simulations (DNS). These LES models are for: (i) the subgrid-scale (SGS) fluxes and (ii) the filtered source terms (FSTs) based on computational drops. The LES, which are compared to filtered-and-coarsened (FC) DNS results at the coarser LES grid, are conducted with 64 times fewer grid points than the DNS, and up to 64 times fewer computational than physical drops. It is found that both constant-coefficient and dynamic Smagorinsky SGS-flux models, though numerically stable, are overly dissipative and damp generated small-resolved-scale (SRS) turbulent structures. Although the global growth and mixing predictions of LES using Smagorinsky models are in good agreement with the FC-DNS, the spatial distributions of the drops differ significantly. In contrast, the constant-coefficient scale-similarity model and the dynamic gradient model perform well in predicting most flow features, with the latter model having the advantage of not requiring a priori calibration of the model coefficient. The ability of the dynamic models to determine the model coefficient during LES is found to be essential since the constant-coefficient gradient model, although more accurate than the Smagorinsky model, is not consistently numerically stable despite using DNS
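For reference, the constant-coefficient Smagorinsky SGS-flux model discussed above can be sketched in 2D; the coefficient value and grid scale are illustrative assumptions (the dynamic variant instead computes the coefficient on the fly from a test filter, which is why it needs no a priori calibration):

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Constant-coefficient Smagorinsky SGS eddy viscosity in 2D:
    nu_t = (cs * delta)^2 * |S|, with |S| = sqrt(2 * S_ij * S_ij)
    built from the resolved strain-rate tensor."""
    s_xx, s_yy = dudx, dvdy
    s_xy = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s_xx**2 + s_yy**2 + 2.0 * s_xy**2))
    return (cs * delta)**2 * s_mag

# Pure shear du/dy = 1: |S| = 1, so nu_t reduces to (cs * delta)^2
nu_t = smagorinsky_nu_t(0.0, 1.0, 0.0, 0.0, delta=0.1)
```

Because nu_t is always non-negative, this model can only drain energy from the resolved scales, which is consistent with the over-dissipative behaviour reported above.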
Cook, J W S; Dendy, R O
2010-01-01
We present particle-in-cell (PIC) simulations of minority energetic protons in deuterium plasmas, which demonstrate a collective instability responsible for emission near the lower hybrid frequency and its harmonics. The simulations capture the lower hybrid drift instability in a regime relevant to tokamak fusion plasmas, and show further that the excited electromagnetic fields collectively and collisionlessly couple free energy from the protons to directed electron motion. This results in an asymmetric tail antiparallel to the magnetic field. We focus on obliquely propagating modes under conditions approximating the outer mid-plane edge in a large tokamak, through which there pass confined centrally born fusion products on banana orbits that have large radial excursions. A fully self-consistent electromagnetic relativistic PIC code representing all vector field quantities and particle velocities in three dimensions as functions of a single spatial dimension is used to model this situation, by evolving the in...
Towards a Self Consistent Model of the Thermal Structure of the Venus Atmosphere
Limaye, Sanjay; Vandaele, Ann C.; Wilson, Colin
Nearly three decades ago, an international effort led to the adoption of the Venus International Reference Atmosphere (VIRA), published in 1985 after the significant data returned by the Pioneer Venus Orbiter and Probes and the earlier Venera missions (Kliore et al., 1985). The vertical thermal structure is one component of the reference model, which relied primarily on the three Pioneer Venus Small Probe profiles and the Large Probe profile, as well as several hundred temperature profiles retrieved from Pioneer Venus Orbiter radio occultation data collected during 1978-1982. Since then a huge amount of thermal structure data has been obtained from multiple instruments on ESA's Venus Express (VEX) orbiter mission. The VEX data come from retrieval of temperature profiles from SPICAV/SOIR stellar/solar occultations, VeRa radio occultations and from passive remote sensing by the VIRTIS instrument. The results of these three experiments vary in their intrinsic properties: altitude coverage, spatial and temporal sampling, resolution, and accuracy. An international team has been formed with support from the International Space Science Institute (Bern, Switzerland) to consider the observations of the Venus atmospheric structure obtained since the data used for the COSPAR Venus International Reference Atmosphere (Kliore et al., 1985). We report on the progress made by comparing the newer data with the VIRA model and also between different experiments where there is overlap. Kliore, A.J., V.I. Moroz, and G.M. Keating, Eds. 1985, VIRA: Venus International Reference Atmosphere, Advances in Space Research, Volume 5, Number 11, 307 pages.
Directory of Open Access Journals (Sweden)
Marco Del Giudice
Full Text Available BACKGROUND: Schizophrenia is a mental disorder marked by an evolutionarily puzzling combination of high heritability, reduced reproductive success, and a remarkably stable prevalence. Recently, it has been proposed that sexual selection may be crucially involved in the evolution of schizophrenia. In the sexual selection model (SSM) of schizophrenia and schizotypy, schizophrenia represents the negative extreme of a sexually selected indicator of genetic fitness and condition. Schizotypal personality traits are hypothesized to increase the sensitivity of the fitness indicator, thus conferring mating advantages on high-fitness individuals but increasing the risk of schizophrenia in low-fitness individuals; the advantages of successful schizotypy would be mediated by enhanced courtship-related traits such as verbal creativity. Thus, schizotypy-increasing alleles would be maintained by sexual selection, and could be selectively neutral or even beneficial, at least in some populations. However, most empirical studies find that the reduction in fertility experienced by schizophrenic patients is not compensated for by increased fertility in their unaffected relatives. This finding has been interpreted as indicating strong negative selection on schizotypy-increasing alleles, and as providing evidence against sexual selection on schizotypy. METHODOLOGY: A simple mathematical model is presented, showing that reduced fertility in the families of schizophrenic patients can coexist with selective neutrality of schizotypy-increasing alleles, or even with positive selection on schizotypy in the general population. If the SSM is correct, studies of patients' families can be expected to underestimate the true fertility associated with schizotypy. SIGNIFICANCE: This paper formally demonstrates that reduced fertility in the families of schizophrenic patients does not constitute evidence against sexual selection on schizotypy-increasing alleles. Furthermore, it suggests
Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.
2003-01-01
In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...
Using open sidewalls for modelling self-consistent lithosphere subduction dynamics
Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.
2012-01-01
Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free
Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.
2009-01-01
This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…
Directory of Open Access Journals (Sweden)
Ahmad El Sayed
2014-01-01
Full Text Available A lifted H2/N2 turbulent jet flame issuing into a vitiated coflow is investigated using the conditional moment closure. The conditional velocity (CV) and the conditional scalar dissipation rate (CSDR) submodels are chosen such that they are fully consistent with the moments of the presumed β probability density function (PDF). The CV is modelled using the PDF-gradient diffusion model. Two CSDR submodels based on the double integration of the homogeneous and inhomogeneous mixture fraction PDF transport equations are implemented. The effect of CSDR modelling is investigated over a range of coflow temperatures (Tc) and the stabilisation mechanism is determined from the analysis of the transport budgets and the history of radical build-up ahead of the stabilisation height. For all Tc, the balance between chemistry, axial convection, and micromixing, and the absence of axial diffusion upstream of the stabilisation height indicate that the flame is stabilised by autoignition. This conclusion is confirmed by the rapid build-up of HO2 ahead of H, O, and OH. The inhomogeneous CSDR modelling yields higher dissipation levels at the most reactive mixture fraction, which results in longer ignition delays and larger liftoff heights. The effect of the spurious sources arising from homogeneous modelling is found to be small but non-negligible, most notably within the flame zone.
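The presumed β-PDF underlying the CSDR submodels above is fixed entirely by the mixture-fraction mean and variance; a minimal sketch of that moment-to-shape mapping (variable names are assumed):

```python
import math

def beta_shape_params(z_mean, z_var):
    """Shape parameters of the presumed beta PDF of mixture fraction from
    its mean and variance (requires 0 < z_var < z_mean * (1 - z_mean))."""
    g = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * g, (1.0 - z_mean) * g

def beta_pdf(z, a, b):
    """Beta density evaluated in log space for numerical robustness."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1.0) * math.log(z)
                    + (b - 1.0) * math.log(1.0 - z))

a, b = beta_shape_params(z_mean=0.3, z_var=0.01)
# The mean is recovered exactly: a / (a + b) = z_mean
```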
A self-consistent model for the discharge kinetics in a high-repetition-rate copper-vapor laser
Energy Technology Data Exchange (ETDEWEB)
Carman, R.J.; Brown, D.J.W.; Piper, J.A. (Macquarie Univ., Sydney (Australia). Centre for Lasers and Applications)
1994-08-01
A self-consistent computer model has been developed to simulate the discharge kinetics and lasing characteristics of a copper-vapor laser (CVL) for typical operating conditions. Using a detailed rate-equation analysis, the model calculates the spatio-temporal evolution of the population densities of 11 atomic and ionic copper levels and four neon levels, and includes 70 collisional and radiative processes in addition to radial particle transport. The long-term evolution of the plasma is taken into account by integrating the set of coupled rate equations describing the discharge and electrical circuit through multiple excitation-afterglow cycles. A time-dependent two-electron-group model, based on a bi-Maxwellian electron energy distribution function, has been used to evaluate the energy partitioning between the copper vapor and the neon buffer gas. The behavior of the plasma in the cooler end regions of the discharge tube near the electrodes, where the plasma kinetics are dominated by the buffer gas, has also been modeled. Results from the model have been compared to experimental data for a narrow-bore (φ = 1.8 cm) CVL operating under optimum conditions.
Self-consistent, axisymmetric two-integral models of elliptical galaxies with embedded nuclear discs
van den Bosch, Frank C.; de Zeeuw, P. Tim
1996-01-01
Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study the influence of seeing convolution. The rotation curve of a nuclear disc gives an excellent measure of the central mass-to-light ratio whenever the VPs clearly reveal the narrow, rapidly rotating component associated with the nuclear disc. Steep cusps and seeing convolution both result in central VPs that are dominated by the bulge light, and these VPs barely show the presence of the nuclear disc, impeding measurements of the central rotation velocities of the disc stars. However, if a massive BH is present, the disc compo...
A Delay Model of Multiple-Valued Logic Circuits Consisting of Min, Max, and Literal Operations
Takagi, Noboru
Delay models for binary logic circuits have been proposed and their mathematical properties clarified. Kleene's ternary logic is one of the simplest delay models able to express the transient behavior of binary logic circuits. Goto first applied Kleene's ternary logic to hazard detection in binary logic circuits in 1948. Besides Kleene's ternary logic, there are many delay models of binary logic circuits, such as Lewis's 5-valued logic. On the other hand, multiple-valued logic circuits have recently come to play an important role in realizing digital circuits, for example because they can dramatically reduce the size of a chip. Although multiple-valued logic circuits are becoming more important, there has been little discussion of delay models for them. In this paper, we therefore introduce a delay model for multiple-valued logic circuits constructed from Min, Max, and Literal operations, and we show some of the mathematical properties of our delay model.
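The interval-style evaluation that such a delay model formalises can be sketched as follows; representing a transient signal as an interval of possible logic values is an illustrative simplification, not the paper's exact definition:

```python
R = 4  # radix: truth values 0 .. R-1

def mv_min(a, b):
    """Min gate lifted to intervals (lo, hi) of possible transient values;
    min is monotone, so evaluating the endpoints is exact."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def mv_max(a, b):
    """Max gate lifted to intervals; also monotone."""
    return (max(a[0], b[0]), max(a[1], b[1]))

def mv_literal(x, a, b):
    """Literal operator: outputs R-1 when a <= value <= b, else 0. It is
    not monotone, so every value in the transient interval is enumerated."""
    outs = {R - 1 if a <= v <= b else 0 for v in range(x[0], x[1] + 1)}
    return (min(outs), max(outs))

stable = (2, 2)      # settled signal at logic value 2
transient = (1, 3)   # signal changing from 1 to 3

out_min = mv_min(stable, transient)     # output may be anywhere in 1..2
out_lit = mv_literal(transient, 2, 2)   # output uncertain over 0..3: a hazard
```

The non-monotone Literal is what makes hazard analysis interesting here: a smoothly changing input can produce a full-swing uncertain output, which a binary-style ternary model cannot express.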
Physically-consistent wall boundary conditions for the k-ω turbulence model
DEFF Research Database (Denmark)
Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl
2010-01-01
A model solving Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used as model validation, with k measured directly from all three components of the fluctuating velocity...
Antoniu, Gabriel; Cudennec, Loïc; Monnet, Sébastien
2006-01-01
This paper addresses the problem of efficient visualization of shared data within code coupling grid applications. These applications are structured as a set of distributed, autonomous, weakly-coupled codes. We focus on the case where the codes are able to interact using the abstraction of a shared data space. We propose an efficient visualization scheme by adapting the mechanisms used to maintain the data consistency. We introduce a new operation called relaxed read, as an extension to the e...
Cano, Zach; Johansson Andreas, K. G.; Maeda, Keiichi
2016-04-01
We present an analytical model that considers energy arising from a magnetar central engine. The results of fitting this model to the optical and X-ray light curves of five long-duration γ-ray bursts (LGRBs) and two ultralong GRBs (ULGRBs), including their associated supernovae (SNe), show that emission from a magnetar central engine cannot be solely responsible for powering an LGRB-SN. While the early afterglow (AG)-dominated phase can be well described with our model, the predicted SN luminosity is underluminous by a factor of 3-17. We use this as compelling evidence that additional sources of heating must be present to power an LGRB-SN, which we argue must be radioactive heating. Our self-consistent modelling approach was able to successfully describe all phases of ULGRB 111209A/SN 2011kl, from the early AG to the later SN, where we determined for the magnetar central engine a magnetic field strength of 1.1-1.3 × 10^15 G, an initial spin period of 11.5-13.0 ms, a spin-down time of 4.8-6.5 d, and an initial energy of 1.2-1.6 × 10^50 erg. These values are entirely consistent with those determined by other authors. The luminosity of a magnetar-powered SN is directly related to how long the central engine is active, where central engines with longer durations give rise to brighter SNe. The spin-down time-scales of superluminous supernovae (SLSNe) are of order months to years, which provides a natural explanation as to why SN 2011kl was less luminous than SLSNe that are also powered by emission from magnetar central engines.
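The fitted initial energy in this abstract can be sanity-checked with the standard rotational-energy formula E = ½IΩ², Ω = 2π/P, using the canonical neutron-star moment of inertia I ≈ 10^45 g cm² (an assumed value, not one quoted by the authors):

```python
import math

# Order-of-magnitude check, not the authors' fitting code: rotational
# energy of a magnetar with the mid-range fitted spin period.
I = 1.0e45                  # g cm^2, canonical NS moment of inertia (assumed)
P = 12.0e-3                 # s, mid-range of the fitted 11.5-13.0 ms
Omega = 2.0 * math.pi / P   # angular frequency, rad/s
E = 0.5 * I * Omega**2      # rotational energy, erg
print(f"{E:.2e}")           # ~1.4e50 erg, inside the fitted 1.2-1.6e50 range
```

The agreement with the fitted 1.2-1.6 × 10^50 erg range is what one expects if the initial energy reservoir is indeed the magnetar's spin.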
Woitke, P.; Min, M.; Pinte, C.; Thi, W. -F; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.
2016-01-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on
O. Fovet; L. Ruiz; M. Hrachowitz; M. Faucheux; C. Gascuel-Odoux
2015-01-01
While most hydrological models reproduce the general flow dynamics, they frequently fail to adequately mimic system-internal processes. In particular, the relationship between storage and discharge, which often follows annual hysteretic patterns in shallow hard-rock aquifers, is rarely considered in modelling studies. One main reason is that catchment storage is...
Vertical Equating: An Empirical Study of the Consistency of Thurstone and Rasch Model Approaches.
Schratz, Mary K.
To explore the appropriateness of the Rasch model for the vertical equating of a multi-level, multi-form achievement test series, both the Rasch model and the traditional Thurstone procedures were applied to the Listening Comprehension subtest scores of the Stanford Achievement Test. Two adjacent levels of these tests were administered in 1981 to…
Self-consistent modelling of hot plasmas within non-extensive Tsallis' thermostatistics
Pain, Jean-Christophe; Gilleron, Franck
2011-01-01
A study of the effects of non-extensivity on the modelling of atomic physics in hot dense plasmas is proposed within Tsallis' statistics. The electronic structure of the plasma is calculated through an average-atom model based on the minimization of the non-extensive free energy.
CONSISTENT USE OF THE KALMAN FILTER IN CHEMICAL TRANSPORT MODELS (CTMS) FOR DEDUCING EMISSIONS
Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...
DEFF Research Database (Denmark)
Keck, Rolf-Erik; Veldkamp, Dick; Wedel-Heinen, Jens Jakob
This thesis describes the further development and validation of the dynamic meandering wake model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling capabil... ...intensity. This power drop is comparable to measurements from the North Hoyle and OWEZ wind farms.
Self consistent model of core formation and the effective metal-silicate partitioning
Ichikawa, H.; Labrosse, S.; Kameyama, M.
2010-12-01
It has long been known that the formation of the core transforms gravitational energy into heat and is able to heat up the whole Earth by about 2000 K. However, the distribution of this energy within the Earth is still debated and depends on the core formation process considered. Iron rain in the surface magma ocean is thought to be the first mechanism of separation for large planets; the iron then coalesces to form a pond at the base of the magma ocean [Stevenson 1990]. The time scale of the separation can be estimated from the falling velocity of the iron phase, which numerical simulation [Ichikawa et al., 2010] puts at ~10 cm/s for iron droplets of centimeter scale. A simple estimate of the metal-silicate partition from the P-T condition at the base of the magma ocean, which in a single-stage model must lie between the peridotite liquidus and solidus, is inconsistent with Earth's core-mantle partition: the P-T conditions at which silicate equilibrated with metal are beyond the liquidus or solidus temperature by about ~700 K. For example, estimated P-T conditions are 40 GPa at 3750 K for Wade and Wood, 2005; T ≥ 3600 K for Chabot and Agee, 2003; and 35 GPa at T ≥ 3300 K for Gessmann and Rubie, 2000. Meanwhile, Rubie et al., 2003 showed that metal could not equilibrate with silicate at the base of the magma ocean before crystallization of silicate. On the other hand, metal-silicate equilibration is achieved in only ~5 s in the iron-rain state. Therefore metal and silicate separate and equilibrate with each other simultaneously, at the P-T conditions encountered on the way down to the iron pond. Taking into account the release of gravitational energy, the temperature of the middle of the magma ocean would be higher than the liquidus. Estimation of the thermal structure during iron-silicate separation requires the development of a planetary-sized calculation model. However, because of the huge disparity of scales between the cm-sized drops and the magma ocean, a direct
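The contrast between the two time scales in this abstract can be made explicit with a back-of-the-envelope calculation; the magma-ocean depth below is an illustrative assumption, not a value from the paper, while the settling velocity and equilibration time are the ones quoted above.

```python
# Back-of-the-envelope comparison of settling vs. equilibration time scales.
v_settle = 10.0    # cm/s, falling velocity of cm-scale iron droplets (quoted)
depth = 1.0e8      # cm (1000 km) -- assumed magma-ocean depth, illustrative
t_settle = depth / v_settle   # descent time to the iron pond, seconds
t_equil = 5.0                 # s, metal-silicate equilibration time (quoted)

print(t_settle)               # 1e7 s, i.e. months of descent
print(t_settle / t_equil)     # equilibration is ~2 million times faster
```

This is why the abstract concludes that metal and silicate equilibrate during descent rather than only at the base of the magma ocean.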
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Gianluca Nardone
2009-04-01
Full Text Available To foster the development of biomasses for solid fuel, it is fundamental to build up a strategy at the local level in which farms co-exist with industrial firms. To this aim, it is necessary to implement effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behavior and guarantees the industrial investments constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to fix in an eventual contract so as to maintain the fidelity of the farmers. The farmers have the greater flexibility, since they can choose the most convenient crop; their fidelity can therefore be obtained by tying the contractual payments to the price of the main alternative crop to the energy one. The results of the study seem to indicate the opportunity to fix a purchase price of the raw materials linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energy crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for the farmers to grow sorghum. When the
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Rosaria Viscecchia
2011-02-01
Full Text Available To foster the development of biomasses for solid fuel, it is fundamental to build up a strategy at the local level in which farms co-exist with industrial firms. To this aim, it is necessary to implement effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behavior and guarantees the industrial investments constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to fix in an eventual contract so as to maintain the fidelity of the farmers. The farmers have the greater flexibility, since they can choose the most convenient crop; their fidelity can therefore be obtained by tying the contractual payments to the price of the main alternative crop to the energy one. The results of the study seem to indicate the opportunity to fix a purchase price of the raw materials linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energy crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for the farmers to grow sorghum. When the
Han, D.; Wang, J.
2015-12-01
The moon-plasma interactions and the resulting surface charging have been subjects of extensive recent investigations. While many particle-in-cell (PIC) based simulation models have been developed, all existing PIC simulation models treat the surface of the Moon as a boundary condition to the plasma flow. In such models, the surface of the Moon is typically limited to simple geometry configurations, the surface floating potential is calculated from a simplified current balance condition, and the electric field inside the regolith layer cannot be resolved. This paper presents a new full-particle PIC model to simulate local-scale plasma flow and surface charging. A major feature of this new model is that the surface is treated as an "interface" between two media rather than a boundary, and the simulation domain includes not only the plasma but also the regolith layer and the bedrock underneath it. There are no limitations on the surface shape. An immersed-finite-element field solver is applied which calculates the regolith surface floating potential and the electric field inside the regolith layer directly from local charge deposition. The material property of the regolith layer is also explicitly included in the simulation. This new model is capable of providing a self-consistent solution to the plasma flow field, lunar surface charging, and the electric field inside the regolith layer and the bedrock for realistic surface terrain. This new model is applied to simulate lunar surface-plasma interactions and surface charging under various ambient plasma conditions. The focus is on the lunar terminator region, where the combined effects from the low sun elevation angle and the localized plasma wake generated by plasma flow over a rugged terrain can generate strongly differentially charged surfaces and complex dust dynamics. We discuss the effects of the regolith properties and regolith layer charging on the plasma flow field, dust levitation, and dust transport.
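The "simplified current balance condition" that this abstract contrasts with the new interface treatment can be sketched as follows. The current densities and electron temperature are illustrative assumptions (photoemission and secondary emission are neglected), not lunar data.

```python
import math

# Hedged sketch of the simplified current-balance boundary condition used
# by earlier PIC models (NOT this paper's immersed-finite-element method).
# The surface charges negative until the Boltzmann-suppressed electron
# current equals the ion current:
#     J_e0 * exp(phi / Te) = J_i   =>   phi = Te * ln(J_i / J_e0)

def floating_potential(J_e0, J_i, Te_eV):
    """Floating potential (volts) from electron/ion current balance."""
    return Te_eV * math.log(J_i / J_e0)

# electrons are much faster, so J_e0 >> J_i and phi comes out negative;
# the numbers below are illustrative, in arbitrary consistent units
phi = floating_potential(J_e0=1.0, J_i=0.02, Te_eV=10.0)
print(round(phi, 2))  # -39.12
```

The paper's point is that this single algebraic balance cannot capture differential charging across rugged terrain or fields inside the regolith, which is why the surface is instead resolved as an interface.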
Sarofim, M. C.; Martinich, J.; Waldhoff, S.; DeAngelo, B. J.; McFarland, J.; Jantarasami, L.; Shouse, K.; Crimmins, A.; Li, J.
2014-12-01
The Climate Change Impacts and Risk Analysis (CIRA) project establishes a new multi-model framework to systematically assess the physical impacts, economic damages, and risks from climate change. The primary goal of this framework is to estimate the degree to which climate change impacts and damages in the United States are avoided or reduced in the 21st century under multiple greenhouse gas (GHG) emissions mitigation scenarios. The first phase of the CIRA project is a modeling exercise that included two integrated assessment models and 15 sectoral models encompassing five broad impacts sectors: water resources, electric power, infrastructure, human health, and ecosystems. Three consistent socioeconomic and climate scenarios are used to analyze the benefits of global GHG mitigation targets: a reference scenario and two policy scenarios with total radiative forcing targets in 2100 of 4.5 W/m2 and 3.7 W/m2. In this exercise, the implications of key uncertainties are explored, including climate sensitivity, climate model, natural variability, and model structures and parameters. This presentation describes the motivations and goals of the CIRA project; the design and academic contribution of the first CIRA modeling exercise; and briefly summarizes several papers published in a special issue of Climatic Change. The results across impact sectors show that GHG mitigation provides benefits to the United States that increase over time, the effects of climate change can be strongly influenced by near-term policy choices, adaptation can reduce net damages, and impacts exhibit spatial and temporal patterns that may inform mitigation and adaptation policy discussions.
Jacques, Kevin; Sabariego, Ruth; Geuzaine, Christophe; Gyselinck, Johan
2015-01-01
This paper deals with the implementation of an energy-consistent ferromagnetic hysteresis model in 2D finite element computations. This vector hysteresis model relies on a strong thermodynamic foundation and ensures the closure of minor hysteresis loops. The model accuracy can be increased by controlling the number of intrinsic cell components while parameters can be easily fitted on common material measurements. Here, the native h-based material model is inverted using the Newton-Raphson met...
Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen
2016-07-01
The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our
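The point-wise scoring procedure described above can be sketched on synthetic data; the grid size, ensemble size, field values, and threshold below are made-up illustrations, not CESM-POP output or the paper's tuned settings.

```python
import random
import statistics

# Illustrative sketch of ensemble-based consistency testing on a fake
# 64-point "grid": the ensemble defines a per-point mean and deviation,
# and a new run is flagged by the fraction of points whose standard
# score exceeds a threshold.
random.seed(0)
N_POINTS, N_MEMBERS = 64, 40
ensemble = [[random.gauss(15.0, 0.5) for _ in range(N_POINTS)]
            for _ in range(N_MEMBERS)]
mean = [statistics.fmean(m[i] for m in ensemble) for i in range(N_POINTS)]
sdev = [statistics.stdev([m[i] for m in ensemble]) for i in range(N_POINTS)]

def fail_fraction(run, threshold=3.0):
    """Fraction of grid points whose |standard score| exceeds threshold."""
    scores = (abs((v - mu) / sd) for v, mu, sd in zip(run, mean, sdev))
    return sum(s > threshold for s in scores) / N_POINTS

ok_run = [random.gauss(15.0, 0.5) for _ in range(N_POINTS)]  # consistent run
biased = [v + 5.0 for v in ok_run]                           # ~10-sigma shift
print(fail_fraction(ok_run), fail_fraction(biased))
```

A statistically consistent run yields a near-zero failing fraction, while a systematically shifted run fails at essentially every grid point.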
Dosen, Strahinja; Markovic, Marko; Wille, Nicola; Henkel, Markus; Koppe, Mario; Ninu, Andrei; Frömmel, Cornelius; Farina, Dario
2015-06-01
Prosthesis users usually agree that myoelectric prostheses should be equipped with somatosensory feedback. However, the exact role of feedback and potential benefits are still elusive. The current study investigates the nature of human control processes within a specific context of routine grasping. Although the latter includes a fast feedforward control of the grasping force, the assumption was that the feedback would still be useful; it would communicate the outcome of the grasping trial, which the subjects could use to learn an internal model of feedforward control. Nine able-bodied subjects produced repeatedly a desired level of grasping force using different control configurations: feedback versus no-feedback, virtual versus real prosthetic hand, and joystick versus myocontrol. The outcome measures were the median and dispersion of the relative force errors. The results demonstrated that the feedback was successful in limiting the variability of the routine grasping due to uncertainties in the system and/or the command interface. The internal models of feedforward control could be employed by the subjects to control the prosthesis without the loss of performance even after the force feedback was removed. The models were, however, unstable over time, especially with myocontrol. Overall, the study demonstrates that the prosthesis system can be learned by the subjects using feedback. The feedback is also essential to maintain the model, and it could be delivered intermittently. This approach has practical advantages, but the level to which this mechanism can be truly exploited in practice depends directly on the consistency of the prosthesis control interface.
Interpreting Results from the Multinomial Logit Model
DEFF Research Database (Denmark)
Wulff, Jesper
2015-01-01
This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there see...
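A minimal sketch of what the MLM estimates, and what the interpretation guidelines are built on, is the set of predicted choice probabilities; the coefficients and covariates below are made up for illustration, and outcome 0 is taken as the base category by convention.

```python
import math

# Illustrative sketch (not from the article): predicted probabilities in a
# multinomial logit model. The base outcome has utility 0; each other
# outcome j has linear utility x . beta_j. All numbers are assumptions.

def mnl_probs(x, betas):
    """Softmax over linear utilities, with the base outcome fixed at 0."""
    utils = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(utils)                       # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utils]
    s = sum(exps)
    return [e / s for e in exps]

# three strategic choices, two covariates, made-up coefficients
betas = [[0.8, -0.3],    # outcome 1 vs. base
         [-0.5, 1.2]]    # outcome 2 vs. base
p = mnl_probs([1.0, 2.0], betas)
print([round(v, 3) for v in p])  # probabilities over the three outcomes
```

Interpreting raw MLM coefficients is hard precisely because a coefficient's sign need not match the direction of the probability change; working with predicted probabilities like these is the usual remedy.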
Energy Technology Data Exchange (ETDEWEB)
Morscheidt, W.; Hassouni, K. [Laboratoire d' Ingenierie des Materiaux et des Hautes Pressions, CNRS-UPN, 93 - Villetaneuse (France); Amouroux, J.; Arefi-Khonsari, F. [Universite Pierre et Marie Curie, Lab. de Genie des Procedes Plasmas, 75 - Paris (France)
2001-07-01
A one-dimensional self-consistent numerical model of argon-oxygen glow discharges obtained in parallel-plate capacitively coupled devices is presented. This model includes a discharge module that solves the coupled set of charged-species continuity equations, the electron energy transport equation and Poisson's equation. It also includes a neutral-species transport-chemistry module that solves the stationary continuity equations of these species. The chemistry and the electron energy losses through inelastic collisions were described by a 14-species, 62-reaction thermochemical model. Results obtained from simulations performed for a feed gas composition of 66% oxygen-34% argon and several discharge pressures are discussed. These results mainly show that for pressures below 200 mTorr the electron-impact ionization, dissociation and excitation processes take place mainly in the center of the discharge, while at higher pressures these processes take place at the discharge edges. The discharges obtained in the low-pressure regime are electronegative, O⁻ being the major negative ion, while at higher pressures the plasma is electropositive. The axial profiles of the major charged species show substantial non-uniformity, with pronounced maxima in the center of the discharge at low pressure. At high pressures, these profiles are more uniform in the ambipolar plasma region and decrease sharply in the sheath. (authors)
The Twente lower extremity model : consistent dynamic simulation of the human locomotor apparatus
Klein Horsman, Martijn Dirk
2007-01-01
Orthopedic interventions such as tendon transfers have shown to be successful in the treatment of gait disorders. Still, in many cases dysfunctions remained or worsened. To assist clinicians, an interactive tool will be useful that allows evaluation of if-then scenarios with respect to treatment methods. Comprehensive musculoskeletal models have shown a high potential to serve as such a tool. By varying anatomical model parameters, alterations in anatomy due to surgery can be implemented. Inv...
Toward a self-consistent, high-resolution absolute plate motion model for the Pacific
Wessel, Paul; Harada, Yasushi; Kroenke, Loren W.
2006-03-01
The hot spot hypothesis postulates that linear volcanic trails form as lithospheric plates move relative to stationary or slowly moving plumes. Given geometry and ages from several trails, one can reconstruct absolute plate motions (APM) that provide valuable information about past and present tectonism, paleogeography, and volcanism. Most APM models have been designed by fitting small circles to coeval volcanic chain segments and determining stage rotation poles, opening angles, and time intervals. Unlike relative plate motion (RPM) models, such APM models suffer from oversimplicity, self-inconsistencies, inadequate fits to data, and lack of rigorous uncertainty estimates; in addition, they work only for fixed hot spots. Newer methods are now available that overcome many of these limitations. We present a technique that provides high-resolution APM models derived from stationary or moving hot spots (given prescribed paths). The simplest model assumes stationary hot spots, and an example of such a model is presented. Observations of geometry and chronology on the Pacific plate appear well explained by this type of model. Because it is a one-plate model, it does not discriminate between hot spot drift or true polar wander as explanations for inferred paleolatitudes from the Emperor chain. Whether there was significant relative motion within the hot spots under the Pacific plate during the last ˜70 m.y. is difficult to quantify, given the paucity and geological uncertainty of age determinations. Evidence in support of plume drift appears limited to the period before the 47 Ma Hawaii-Emperor Bend and, apart from the direct paleolatitude determinations, may have been somewhat exaggerated.
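The elementary operation underlying any APM reconstruction, fitting stage rotations and applying them to points on the plate, is rotation about an Euler pole. The sketch below implements it via Rodrigues' rotation formula; the pole and angle are illustrative, not values from any published Pacific APM model.

```python
import math

# Hedged sketch: rotate a surface point (lon, lat) about an Euler pole
# by a given opening angle, using Rodrigues' rotation formula on unit
# vectors. All coordinates in degrees; values below are illustrative.

def to_xyz(lon, lat):
    lo, la = math.radians(lon), math.radians(lat)
    return (math.cos(la) * math.cos(lo),
            math.cos(la) * math.sin(lo),
            math.sin(la))

def to_lonlat(x, y, z):
    return (math.degrees(math.atan2(y, x)), math.degrees(math.asin(z)))

def rotate(point, pole, angle_deg):
    """Rotate (lon, lat) about Euler pole (lon, lat) by angle_deg."""
    v, k = to_xyz(*point), to_xyz(*pole)
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    kxv = (k[1] * v[2] - k[2] * v[1],       # cross product k x v
           k[2] * v[0] - k[0] * v[2],
           k[0] * v[1] - k[1] * v[0])
    kdv = sum(a * b for a, b in zip(k, v))  # dot product k . v
    w = tuple(v[i] * c + kxv[i] * s + k[i] * kdv * (1.0 - c)
              for i in range(3))
    return to_lonlat(*w)

# a 90-degree stage rotation about the north pole shifts longitude by 90
print(rotate((0.0, 0.0), (0.0, 90.0), 90.0))
```

A stage-rotation APM model is, in essence, a sequence of such rotations with fitted poles, angles, and time intervals; the newer methods in the paper replace the small-circle fitting step, not this primitive.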
Daily Low-Dose Bacillus anthracis Spore Inhalation Exposures in the Rabbit Model
Barnewall, Roy E.; Comer, Jason E.; Miller, Brian D.
2012-06-13
Keywords: Bacillus anthracis, inhalation exposures, low-dose, subchronic exposures, spores, anthrax, aerosol system
A consistent hamiltonian treatment of the Thirring-Wess and Schwinger model in the covariant gauge
Martinovič, L'ubomír
2014-06-01
We present a unified Hamiltonian treatment of the massless Schwinger model in the Landau gauge and of its non-gauge counterpart, the Thirring-Wess (TW) model. The operator solution of the Dirac equation has the same structure in both models and identifies free fields as the true dynamical degrees of freedom. The coupled boson field equations (Maxwell and Proca, respectively) can also be solved exactly. The Hamiltonian in Fock representation is derived for the TW model and its diagonalization via a Bogoliubov transformation is suggested. The axial anomaly is derived in both models directly from the operator solution, using a hermitian version of the point-splitting regularization. A subtlety of the residual gauge freedom in the covariant gauge is shown to modify the usual definition of the "gauge-invariant" currents. The consequence is that the axial anomaly and the boson mass generation are restricted to the zero-mode sector only. Finally, we discuss quantization of the unphysical gauge-field components in terms of ghost modes in an indefinite-metric space and sketch the next steps within the finite-volume treatment necessary to fully reveal the physical content of the model in our Hamiltonian formulation.
Hachem, Walid; Mestre, Xavier; Najim, Jamal; Vallet, Pascal
2011-01-01
In array processing, a common problem is to estimate the angles of arrival of $K$ deterministic sources impinging on an array of $M$ antennas from $N$ observations of the source signal, corrupted by Gaussian noise. The problem reduces to estimating a quadratic form (called the "localization function") of a certain projection matrix related to the source signal empirical covariance matrix. Recently, a new subspace estimation method (called "G-MUSIC") has been proposed for the context where the number of available samples $N$ is of the same order of magnitude as the number of sensors $M$. In this context, the traditional subspace methods tend to fail because the empirical covariance matrix of the observations is a poor estimate of the source signal covariance matrix. The G-MUSIC method is based on a new consistent estimator of the localization function in the regime where $M$ and $N$ tend to $+\infty$ at the same rate. However, the consistency of the angle estimator was not addressed. The purpose of this paper is ...
Woitke, P; Pinte, C; Thi, W -F; Kamp, I; Rab, C; Anthonioz, F; Antonellini, S; Baldovin-Saavedra, C; Carmona, A; Dominik, C; Dionatos, O; Greaves, J; Güdel, M; Ilee, J D; Liebhart, A; Ménard, F; Rigon, L; Waters, L B F M; Aresu, G; Meijerink, R; Spaans, M
2015-01-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. We propose new standard dust opacities for disk models, we present a simplified treatment of PAHs sufficient to reproduce the PAH emission features, and we suggest using a simple treatment of dust settling. We roughly adjust parameters to obtain a model that predicts typical Class II T Tauri star continuum and line observations. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63um, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties (large grains...
Consistent parameter fixing in the quark-meson model with vacuum fluctuations
Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael
2016-08-01
We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.
A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.
Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N
2014-01-01
Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.
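The hybrid idea described above can be sketched with an in-memory SQLite database; the table and column names are illustrative assumptions, not the CURe project's actual schema.

```python
import sqlite3

# Hedged sketch of a hybrid EAV-relational layout: stable core entities
# live in conventional relational tables, while frequently changing study
# attributes go into a generic entity-attribute-value table, so adding a
# new attribute requires no schema migration.

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# relational part: static, strongly typed, efficiently queried
cur.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, code TEXT, dob TEXT)")

# EAV part: one row per (entity, attribute, value) triple
cur.execute("""CREATE TABLE observation (
                   entity_id INTEGER REFERENCES patient(id),
                   attribute TEXT NOT NULL,
                   value     TEXT)""")

cur.execute("INSERT INTO patient VALUES (1, 'P001', '1980-01-01')")
cur.executemany("INSERT INTO observation VALUES (?, ?, ?)",
                [(1, "systolic_bp", "120"),
                 (1, "new_biomarker_x", "0.7")])  # added with no migration

rows = cur.execute("""SELECT p.code, o.attribute, o.value
                      FROM patient p
                      JOIN observation o ON o.entity_id = p.id
                      ORDER BY o.attribute""").fetchall()
print(rows)
```

The trade-off the paper navigates is visible even here: the EAV rows are maximally flexible but untyped, so the relational side is kept for the stable, performance-critical core.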
Gritsenko, O. V.; Rubio, A.; Balbás, L. C.; Alonso, J. A.
1993-03-01
The model Coulomb pair-correlation functions proposed several years ago by Gritsenko, Bagaturyants, Kazansky, and Zhidomirov are incorporated into the self-consistent local-density approximation (LDA) scheme for electronic systems. Different correlation functions satisfying well-established local boundary conditions and integral conditions have been tested by performing LDA calculations for closed-shell atoms. Those correlation functions contain a single parameter which can be optimized by fitting the atomic correlation energies to empirical data. In this way, a single (universal) value of the parameter is found to give a very good fit for all the atoms studied. The results provide a substantial improvement of calculated correlation energies as compared to the usual LDA functionals and the scheme should be useful for molecular and cluster calculations.
Institute of Scientific and Technical Information of China (English)
Mohamed BALAH; Hamdan Naser AL-GHAMEDY
2004-01-01
The paper presents an approach for the formulation of general laminated shells based on a third-order shear deformation theory. These shells undergo finite (unlimited in size) rotations and large overall motions but with small strains. A singularity-free parametrization of the rotation field is adopted. The constitutive equations, derived with respect to laminate curvilinear coordinates, are applicable to shell elements with an arbitrary number of orthotropic layers, where the material principal axes can vary from layer to layer. A careful consideration of the consistent linearization procedure pertinent to the proposed parametrization of finite rotations leads to symmetric tangent stiffness matrices. The matrix formulation adopted here makes it possible to implement the present formulation within the framework of the finite element method as a straightforward task.
Modeling of etch profile evolution including wafer charging effects using self consistent ion fluxes
Energy Technology Data Exchange (ETDEWEB)
Hoekstra, R.J.; Kushner, M.J. [Univ. of Illinois, Urbana, IL (United States). Dept. of Electrical and Computer Engineering
1996-12-31
As high density plasma reactors become more predominant in industry, the need has intensified for computer aided design tools which address both equipment issues such as ion flux uniformity onto the wafer and process issues such as etch feature profile evolution. A hierarchy of models has been developed to address these issues with the goal of producing a comprehensive plasma processing design capability. The Hybrid Plasma Equipment Model (HPEM) produces ion and neutral densities, and electric fields in the reactor. The Plasma Chemistry Monte Carlo Model (PCMC) determines the angular and energy distributions of ion and neutral fluxes to the wafer using species source functions, time dependent bulk electric fields, and sheath potentials from the HPEM. These fluxes are then used by the Monte Carlo Feature Profile Model (MCFP) to determine the time evolution of etch feature profiles. Using this hierarchy, the effects of physical modifications of the reactor, such as changing wafer clamps or electrode structures, on etch profiles can be evaluated. The effects of wafer charging on feature evolution are examined by calculating the fields produced by the charge deposited by ions and electrons within the features. The effects of radial variations and nonuniformity in the angular and energy distributions of the reactive fluxes on feature profiles and feature charging will be discussed for p-Si etching in inductively-coupled plasmas (ICP) sustained in chlorine gas mixtures. The effects of over- and under-wafer topography on etch profiles will also be discussed.
Application of a Mass-Consistent Wind Model to Chinook Windstorms
1988-06-01
Meteor., 6, 837--344. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1980: A practical method for estimating wind characteristics at... Project 8349, Menlo Park, CA, 94025. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1982: A diagnostic model for estimating winds
Baraffe, I.; Alibert, Y.; Mera, D.; Chabrier, G.; Beaulieu, J. P.
1998-01-01
We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables (3
Energy Technology Data Exchange (ETDEWEB)
Weimer-Jehle, Wolfgang; Wassermann, Sandra; Kosow, Hannah [Internationales Zentrum fuer Kultur- und Technikforschung an der Univ. Stuttgart (Germany). ZIRN Interdisziplinaerer Forschungsschwerpunkt Risiko und Nachhaltige Technikentwicklung
2011-04-15
Model-based environmental scenarios normally require multiple framework assumptions regarding future social, political and economic developments (external developments). In most cases these framework assumptions are highly uncertain. Furthermore, different external developments are not isolated from each other, and their interdependences can often be described by qualitative judgments only. If the internal consistency of framework assumptions is not methodologically addressed, environmental models risk being based on inconsistent combinations of framework assumptions which do not reflect existing relations between the respective factors in an appropriate way. This report aims at demonstrating how consistent context scenarios can be developed with the help of cross-impact balance analysis (CIB). This method allows not only for the internal consistency of the framework assumptions of a single model but also for the overall consistency of the framework assumptions of several modeling instruments, supporting the integrated interpretation of the results of different models. In order to demonstrate the method, in a first step, ten common framework assumptions were chosen and their possible future developments until 2030 were described. In a second step, a qualitative impact network was developed based on expert elicitation. The impact network provided the basis for a qualitative but systematic analysis of the internal consistency of combinations of framework assumptions. This analysis was carried out with the CIB method and resulted in a set of consistent context scenarios. These scenarios can be used as an informative background for defining framework assumptions for environmental models at the UBA. (orig.)
Woitke, P.; Min, M.; Pinte, C.; Thi, W.-F.; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.
2016-02-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on the assumptions about the shape of the disk, the dust opacities, dust settling, and polycyclic aromatic hydrocarbons (PAHs). In particular, we propose new standard dust opacities for disk models, we present a simplified treatment of PAHs in radiative equilibrium which is sufficient to reproduce the PAH emission features, and we suggest using a simple yet physically justified treatment of dust settling. We roughly adjust parameters to obtain a model that predicts continuum and line observations that resemble typical multi-wavelength continuum and line observations of Class II T Tauri stars. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all mainstream continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63 μm, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties, i.e. large grains, often needed to fit the SED, have important consequences for disk chemistry and heating/cooling balance, leading to stronger near- to far-IR emission lines in general. Strong dust settling and missing disk flaring have similar effects on continuum observations, but opposite effects on far-IR gas emission lines. PAH molecules can efficiently shield the gas from stellar UV radiation because of their strong absorption and negligible scattering opacities in comparison to evolved dust. The observable millimetre-slope of the SED can become significantly more gentle in the case of cold disk midplanes, which we find regularly in our T Tauri models
Genome scale models of yeast: towards standardized evaluation and consistent omic integration
DEFF Research Database (Denmark)
Sanchez, Benjamin J.; Nielsen, Jens
2015-01-01
Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are currently used for metabolic engineering and elucidating biological interactions. Here we review the history of yeast's GEMs, focusing on recent developments. We study how these models are typically evaluated, using both descriptive and predictive metrics. Additionally, we analyze the different ways in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted.
Advancing Nucleosynthesis in Self-consistent, Multidimensional Models of Core-Collapse Supernovae
Harris, J Austin; Chertkow, Merek A; Bruenn, Stephen W; Lentz, Eric J; Messer, O E Bronson; Mezzacappa, Anthony; Blondin, John M; Marronetti, Pedro; Yakunin, Konstantin N
2014-01-01
We investigate core-collapse supernova (CCSN) nucleosynthesis in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species $\\alpha$-network. Such a simplified network limits the ability to accurately evolve detailed composition, neutronization and the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks in post-processing nucleosynthesis calculations. Limitations such as poor spatial resolution of the tracer particles, estimation of the expansion timescales, and determination of the "mass-cut" at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of these uncertainties on post-processing nucleosynthesis calculations and implications for future models.
Thermodynamically consistent modeling for dissolution/growth of bubbles in an incompressible solvent
Bothe, Dieter
2014-01-01
We derive mathematical models of the elementary process of dissolution/growth of bubbles in a liquid under pressure control. The modeling starts with a fully compressible version, both for the liquid and the gas phase so that the entropy principle can be easily evaluated. This yields a full PDE system for a compressible two-phase fluid with mass transfer of the gaseous species. Then the passage to an incompressible solvent in the liquid phase is discussed, where a carefully chosen equation of state for the liquid mixture pressure allows for a limit in which the solvent density is constant. We finally provide a simplification of the PDE system in case of a dilute solution.
Self-Consistent, Axisymmetric Two-Integral Models of Elliptical Galaxies with Embedded Nuclear Discs
van den Bosch, P. P. J.; de Zeeuw, W.
1996-01-01
Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study th...
Energy regeneration model of self-consistent field of electron beams into electric power
Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.
2016-04-01
We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation into the user's electric power grid (onboard spacecraft network). We perform computer simulations validating the high energy efficiency of the studied processes for application in electric power technology, as well as in electric power plants and propulsion installations on board spacecraft.
Flood damage: a model for consistent, complete and multipurpose scenarios
Directory of Open Access Journals (Sweden)
S. Menoni
2016-12-01
implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
A consistent model for leptogenesis, dark matter and the IceCube signal
Energy Technology Data Exchange (ETDEWEB)
Fiorentin, M. Re [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Niro, V. [Departamento de Física Teórica, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Instituto de Física Teórica UAM/CSIC,Calle Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Fornengo, N. [Dipartimento di Fisica, Università di Torino,via P. Giuria, 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Torino,via P. Giuria, 1, 10125 Torino (Italy)
2016-11-04
We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino N_1, thus fixing its mass and lifetime, while the production of N_1 in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional SU(2)_R interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the B−L asymmetry dominantly produced by the next-to-lightest neutrino N_2. Further consequences and predictions of the model are that: the N_1 production implies a specific power-law relation between the reheating temperature of the Universe and the vacuum expectation value of the SU(2)_R triplet; and leptogenesis imposes a lower bound on the reheating temperature of the Universe of 7×10^9 GeV. Additionally, the model requires a vanishing absolute neutrino mass scale m_1 ≃ 0.
Consistent negative response of US crops to high temperatures in observations and crop models
Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja
2017-04-01
High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.
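The quoted sensitivity can be illustrated with a back-of-the-envelope calculation. This is a hedged sketch, not the ensemble's crop models: the 6% figure is the upper bound stated above, and the multiplicative accumulation over hot days and the sample temperature series are illustrative assumptions.

```python
# Illustrative accumulation of the reported "up to 6% yield loss per day
# above 30 degC" (rainfed maize/soybean). The multiplicative form and the
# sample season are assumptions for demonstration only.
def relative_yield(daily_tmax_c, threshold_c=30.0, loss_per_hot_day=0.06):
    """Return the fraction of potential yield remaining after the season."""
    y = 1.0
    for t in daily_tmax_c:
        if t > threshold_c:
            y *= (1.0 - loss_per_hot_day)
    return y

season = [28.0, 31.5, 33.0, 29.0, 30.5]  # three days exceed 30 degC
print(round(relative_yield(season), 4))
```

Three hot days under these assumptions already cost roughly 17% of the potential yield, which conveys why the number of days above the threshold, rather than mean temperature, is the relevant predictor.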
Jha, Sanjeev Kumar
2013-01-01
A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential to not only bridge issues of spatial resolution in regional and global climate model simulations but also in feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables with available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.
Fioc, M; Fioc, Michel; Rocca-Volmerange, Brigitte
1999-01-01
We provide here the documentation of the new version of the spectral evolution model PEGASE. PEGASE computes synthetic spectra of galaxies in the UV to near-IR range from 0 to 20 Gyr, for a given stellar IMF and evolutionary scenario (star formation law, infall, galactic winds). The radiation emitted by stars from the main sequence to the pre-supernova or white dwarf stage is calculated, as well as the extinction by dust. A simple modeling of the nebular emission (continuum and lines) is also proposed. PEGASE may be used to model starbursts as well as old galaxies. The main improvements of PEGASE.2 relative to PEGASE.1 (Fioc & Rocca-Volmerange 1997) are the following: (1) The stellar evolutionary tracks of the Padova group for metallicities between 0.0001 and 0.1 have been included; (2) The evolution of the metallicity of the interstellar medium (ISM) due to SNII, SNIa and AGB stars is followed. Stars are formed with the same metallicity as the ISM (instead of a solar metallicity in PEGASE.1), providing thu...
Self-consistent physical parameters for 5 intermediate-age SMC stellar clusters from CMD modelling
Dias, Bruno; Barbuy, Beatriz; Santiago, Basilio; Ortolani, Sergio; Balbinot, Eduardo
2013-01-01
Context. Stellar clusters in the Small Magellanic Cloud (SMC) are useful probes to study the chemical and dynamical evolution of this neighbouring dwarf galaxy, enabling inspection of a large period covering over 10 Gyr. Aims. The main goals of this work are the derivation of age, metallicity, distance modulus, reddening, core radius and central density profile for six sample clusters, in order to place them in the context of the Small Cloud evolution. The studied clusters are: AM 3, HW 1, HW 34, HW 40, Lindsay 2, and Lindsay 3, where HW 1, HW 34, and Lindsay 2 are studied for the first time. Methods. Optical Colour-Magnitude Diagrams (V, B-V CMDs) and radial density profiles were built from images obtained with the 4.1m SOAR telescope, reaching V~23. The determination of structural parameters were carried out applying King profile fitting. The other parameters were derived in a self-consistent way by means of isochrone fitting, which uses the likelihood statistics to identify the synthetic CMDs that best rep...
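The structural-parameter step above rests on fitting an empirical King (1962) surface-density profile to the observed radial density profile. A minimal sketch of that idea follows; the synthetic data, the fixed normalisation and tidal radius, and the crude grid search over the core radius are our assumptions, not the authors' fitting procedure.

```python
import math

# King (1962) projected density profile and a toy core-radius fit.
def king_profile(r, k, r_core, r_tidal):
    """Projected number density at radius r (same units as the radii)."""
    if r >= r_tidal:
        return 0.0
    a = 1.0 / math.sqrt(1.0 + (r / r_core) ** 2)
    b = 1.0 / math.sqrt(1.0 + (r_tidal / r_core) ** 2)
    return k * (a - b) ** 2

def fit_core_radius(radii, densities, k, r_tidal, grid):
    """Pick the core radius minimising the sum of squared residuals."""
    def sse(rc):
        return sum((king_profile(r, k, rc, r_tidal) - d) ** 2
                   for r, d in zip(radii, densities))
    return min(grid, key=sse)

# Synthetic profile generated with r_core = 2.0, then recovered by the fit.
radii = [0.5, 1.0, 2.0, 4.0, 8.0]
truth = [king_profile(r, 100.0, 2.0, 20.0) for r in radii]
grid = [0.5 + 0.1 * i for i in range(40)]
print(fit_core_radius(radii, truth, 100.0, 20.0, grid))
```

A real analysis would fit k, r_core and r_tidal jointly with a proper optimiser and account for background contamination, but the structure of the problem is the one shown.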
Hydraulic fracture model comparison study: Complete results
Energy Technology Data Exchange (ETDEWEB)
Warpinski, N.R. [Sandia National Labs., Albuquerque, NM (United States); Abou-Sayed, I.S. [Mobil Exploration and Production Services (United States); Moschovidis, Z. [Amoco Production Co. (US); Parker, C. [CONOCO (US)
1993-02-01
Large quantities of natural gas exist in low permeability reservoirs throughout the US. Characteristics of these reservoirs, however, make production difficult and often uneconomic, and stimulation is required. Because of the diversity of application, hydraulic fracture design models must be able to account for widely varying rock properties, reservoir properties, in situ stresses, fracturing fluids, and proppant loads. As a result, fracture simulation has emerged as a highly complex endeavor that must be able to describe many different physical processes. The objective of this study was to develop a comparative study of hydraulic-fracture simulators in order to provide stimulation engineers with the necessary information to make rational decisions on the type of models most suited for their needs. This report compares the fracture modeling results of twelve different simulators, some of them run in different modes, for eight separate design cases. Comparisons of length, width, height, net pressure, maximum width at the wellbore, average width at the wellbore, and average width in the fracture have been made, both for the final geometry and as a function of time. For the models in this study, differences in fracture length, height and width are often greater than a factor of two. In addition, several comparisons of the same model with different options show a large variability in model output depending upon the options chosen. Two comparisons were made of the same model run by different companies; in both cases the agreement was good. 41 refs., 54 figs., 83 tabs.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification and some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the LIL for iid partial sums and thus cannot be improved.
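The quasi-likelihood equation being solved is sum_i (y_i − mu_i(beta)) x_i = 0. A minimal sketch of finding its root by Newton's method follows, for a one-parameter model with a log link; the data, the link choice and the noise-free responses are illustrative assumptions, not part of the paper's asymptotic analysis.

```python
import math

# Newton iteration for the quasi-likelihood (score) equation
#   sum_i (y_i - exp(x_i * beta)) * x_i = 0
# of a one-parameter Poisson-type GLM with log link.
def solve_quasi_likelihood(xs, ys, beta=0.0, iters=50):
    for _ in range(iters):
        score = sum((y - math.exp(x * beta)) * x for x, y in zip(xs, ys))
        info = sum(math.exp(x * beta) * x * x for x in xs)  # -d(score)/d(beta)
        beta += score / info
    return beta

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
true_beta = 1.2
ys = [math.exp(x * true_beta) for x in xs]  # noise-free responses
print(round(solve_quasi_likelihood(xs, ys), 6))
```

With noise-free responses the iteration recovers the generating parameter exactly; the theorem above concerns how fast the analogous root approaches the truth as n grows when the responses are noisy.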
Energy Technology Data Exchange (ETDEWEB)
Pain, J.C. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France)]. E-mail: jean-christophe.pain@cea.fr; Dejonghe, G. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France); Blenski, T. [CEA/DSM/DRECAM/SPAM, Centre d' Etudes de Saclay, 91191 Gif-sur-Yvette Cedex (France)
2006-05-15
We propose a thermodynamically consistent model involving detailed screened ions, described by superconfigurations, in plasmas. In the present work, the electrons, bound and free, are treated quantum-mechanically so that resonances are carefully taken into account in the self-consistent calculation of the electronic structure of each superconfiguration. The procedure is in some sense similar to the one used in Inferno code developed by D.A. Liberman; however, here we perform this calculation in the ion-sphere model for each superconfiguration. The superconfiguration approximation allows rapid calculation of necessary averages over all possible configurations representing excited states of bound electrons. The model enables a fully quantum-mechanical self-consistent calculation of the electronic structure of ions and provides the relevant thermodynamic quantities (e.g., internal energy, Helmholtz free energy and pressure), together with an improved treatment of pressure ionization. It should therefore give a better insight into the impact of plasma effects on photoabsorption spectra.
Performance results of HESP physical model
Chanumolu, Anantha; Thirupathi, Sivarani; Jones, Damien; Giridhar, Sunetra; Grobler, Deon; Jakobsson, Robert
2017-02-01
As a continuation to the published work on the model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the glass data accuracy, and the handling of discrepancies. It is observed through simulations that discrepancies in the glass data can be identified but not quantified, so having accurate glass data is important; such data can be obtained from the glass manufacturers. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. The accuracy of the model predictions, the comparison of its wavelength calibration with conventional empirical fitting, the behaviour of the open parameters in optimization, the model's ability to track instrumental drifts in the spectrum, and the performance of the double fibres are discussed. It is observed that the optimized model is able to predict to high accuracy the drifts in the spectrum from environmental fluctuations. It is also observed that the pattern in the spectral drifts across the 2D spectrum, which varies from image to image, is predictable with the optimized model. We also discuss possible science cases where the model can contribute.
A New Algorithm for Self-Consistent 3-D Modeling of Collisions in Dusty Debris Disks
Stark, Christopher C
2009-01-01
We present a new "collisional grooming" algorithm that enables us to model images of debris disks where the collision time is less than the Poynting Robertson time for the dominant grain size. Our algorithm uses the output of a collisionless disk simulation to iteratively solve the mass flux equation for the density distribution of a collisional disk containing planets in 3 dimensions. The algorithm can be run on a single processor in ~1 hour. Our preliminary models of disks with resonant ring structures caused by terrestrial mass planets show that the collision rate for background particles in a ring structure is enhanced by a factor of a few compared to the rest of the disk, and that dust grains in or near resonance have even higher collision rates. We show how collisions can alter the morphology of a resonant ring structure by reducing the sharpness of a resonant ring's inner edge and by smearing out azimuthal structure. We implement a simple prescription for particle fragmentation and show how Poynting-Ro...
A consistent model for \\pi N transition distribution amplitudes and backward pion electroproduction
Lansberg, J P; Semenov-Tian-Shansky, K; Szymanowski, L
2011-01-01
The extension of the concept of generalized parton distributions leads to the introduction of baryon to meson transition distribution amplitudes (TDAs), non-diagonal matrix elements of the nonlocal three quark operator between a nucleon and a meson state. We present a general framework for modelling nucleon to pion ($\\pi N$) TDAs. Our main tool is the spectral representation for \\pi N TDAs in terms of quadruple distributions. We propose a factorized Ansatz for quadruple distributions with input from the soft-pion theorem for \\pi N TDAs. The spectral representation is complemented with a D-term like contribution from the nucleon exchange in the cross channel. We then study backward pion electroproduction in the QCD collinear factorization approach in which the non-perturbative part of the amplitude involves \\pi N TDAs. Within our two component model for \\pi N TDAs we update previous leading-twist estimates of the unpolarized cross section. Finally, we compute the transverse target single spin asymmetry as a fu...
Thermal X-ray emission from a baryonic jet: a self-consistent multicolour spectral model
Khabibullin, Ildar; Sazonov, Sergey
2015-01-01
We present a publicly-available spectral model for thermal X-ray emission from a baryonic jet in an X-ray binary system, inspired by the microquasar SS 433. The jet is assumed to be strongly collimated (half-opening angle $\\Theta\\sim 1\\deg$) and mildly relativistic (bulk velocity $\\beta=V_{b}/c\\sim 0.03-0.3$). Its X-ray spectrum is found by integrating over thin slices of constant temperature, radiating in optically thin coronal regime. The temperature profile along the jet and corresponding differential emission measure distribution are calculated with full account for gas cooling due to expansion and radiative losses. Since the model predicts both the spectral shape and luminosity of the jet's emission, its normalisation is not a free parameter if the source distance is known. We also explore the possibility of using simple X-ray observables (such as flux ratios in different energy bands) to constrain physical parameters of the jet (e.g. gas temperature and density at its base) without broad-band fitting of...
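The core of the construction above is a sum of optically thin emission over constant-temperature slices along the jet. The following is a hedged sketch of that integration only: the power-law density and temperature profiles, the sqrt(T) free-free-like emissivity, and the arbitrary units are our illustrative assumptions, not the published model.

```python
import math

# Toy slice-by-slice integration of optically thin emission along a
# collimated jet. Profiles and emissivity law are illustrative assumptions:
# n ~ z^-2 (conical expansion), T ~ z^-4/3 (adiabatic, gamma = 5/3),
# emissivity per unit volume ~ n^2 * sqrt(T), slice volume ~ z^2 * dz.
def jet_luminosity(n_base, t_base, z_base, z_max, n_slices=1000):
    """Total emission (arbitrary units) from z_base to z_max."""
    lum = 0.0
    dz = (z_max - z_base) / n_slices
    for i in range(n_slices):
        z = z_base + (i + 0.5) * dz           # slice midpoint
        n = n_base * (z_base / z) ** 2        # density drop from expansion
        t = t_base * (z_base / z) ** (4.0 / 3)  # expansion cooling
        lum += n * n * math.sqrt(t) * z * z * dz
    return lum

print(jet_luminosity(1.0, 1.0, 1.0, 10.0))
```

Because the emissivity scales as n^2, the normalisation of such a model is fixed once the base density and distance are known, which is why the authors can treat it as non-free.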
Chen, She; Nobelen, J. C. P. Y.; Nijdam, S.
2017-09-01
Ionic wind is produced by a corona discharge when gaseous ions are accelerated in the electric field and transfer their momentum to neutral molecules by collisions. This technique is promising because a gas flow can be generated without the need for moving parts and can be easily miniaturized. The basic theory of ionic wind sounds simple but the details are far from clear. In our experiment, a negative DC voltage is applied to a needle-cylinder electrode geometry. Hot wire anemometry is used to measure the flow velocity at the downstream exit of the cylinder. The flow velocity fluctuates but the average velocity increases with the voltage. The current consists of a regular train of pulses with short rise time, the well-known Trichel pulses. To reveal the ionic wind mechanism in the Trichel pulse stage, a three-species corona model coupled with gas dynamics is built. The drift-diffusion equations of the plasma together with the Navier–Stokes equations of the flow are solved in COMSOL Multiphysics. The electric field, net number density of charged species, electrohydrodynamic (EHD) body force and flow velocity are calculated in detail by a self-consistent model. Multiple time scales are employed: hundreds of microseconds for the plasma characteristics and longer time scales (∼1 s) for the flow behavior. We found that the flow velocity as well as the EHD body force have opposite directions in the ionization region close to the tip and the ion drift region further away from the tip. The calculated mean current, Trichel pulse frequency and flow velocity are very close to our experimental results. Furthermore, in our simulations we were able to reproduce the mushroom-like minijets observed in experiments.
DEFF Research Database (Denmark)
Staunstrup, Jørgen
1998-01-01
This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications is possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
Gauge propagator and physical consistency of the CPT-even part of the standard model extension
Casana, Rodolfo; Ferreira, Manoel M., Jr.; Gomes, Adalto R.; Pinheiro, Paulo R. D.
2009-12-01
In this work, we explicitly evaluate the gauge propagator of Maxwell theory supplemented by the CPT-even term of the standard model extension. First, we specialize our evaluation to the parity-odd sector of the tensor Wμνρσ, using a parametrization that retains only the three nonbirefringent coefficients. From the poles of the propagator, it is shown that the physical modes of this electrodynamics are stable, noncausal and unitary. In the sequel, we work out the parity-even gauge propagator using a parametrization that allows us to work with only the isotropic nonbirefringent element. In this case, we show that the physical modes of the parity-even sector of the tensor W are causal, stable and unitary for a limited range of the isotropic coefficient.
Consistency and normality of Huber-Dutter estimators for partial linear model
Institute of Scientific and Technical Information of China (English)
TONG XingWei; CUI HengJian; YU Peng
2008-01-01
For the partial linear model Y = Xτβ0 + g0(T) + ε with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, the scale σ of the errors, and the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with rate of convergence n-1/2, and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without scale parameter and the ordinary least squares estimator.
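As an illustration of the robust fitting idea behind such estimators, the following sketch fits the parametric part of a partial linear model by iteratively reweighted least squares with Huber weights. It is not the authors' procedure: a plain polynomial basis stands in for the paper's B-splines, and the tuning constant, basis degree, and data-generating process are illustrative assumptions.

```python
import numpy as np

def huber_weights(r, c=1.345):
    # Huber weights w(r) = min(1, c/|r|); c = 1.345 is the usual tuning constant
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def fit_partial_linear(X, T, Y, degree=5, n_iter=50):
    """IRLS Huber fit of Y = X beta + g(T) + eps, with g approximated by a
    polynomial basis (a stand-in for the paper's B-splines)."""
    B = np.vander(T, degree + 1)          # basis for the nonparametric part g
    Z = np.hstack([X, B])                 # joint design matrix
    theta = np.linalg.lstsq(Z, Y, rcond=None)[0]
    for _ in range(n_iter):
        r = Y - Z @ theta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale via MAD
        w = huber_weights(r / s)
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(Z * sw[:, None], Y * sw, rcond=None)[0]
    return theta[:X.shape[1]]             # estimated beta

rng = np.random.default_rng(0)
n, beta_true = 500, np.array([1.5, -2.0])
X = rng.normal(size=(n, 2))
T = rng.uniform(-1, 1, size=n)
eps = rng.standard_t(df=2, size=n)        # heavy-tailed errors
Y = X @ beta_true + np.sin(np.pi * T) + eps
beta_hat = fit_partial_linear(X, T, Y)
print(beta_hat)
```

Despite the heavy-tailed t(2) errors, the Huber weights keep the estimate of β close to the truth, which is the practical point of such robust estimators.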
Quesada, José Manuel; Capote, Roberto; Soukhovitski, Efrem S.; Chiba, Satoshi
2016-03-01
An extension for odd-A actinides of a previously derived dispersive coupledchannel optical model potential (OMP) for 238U and 232Th nuclei is presented. It is used to fit simultaneously all the available experimental databases including neutron strength functions for nucleon scattering on 232Th, 233,235,238U and 239Pu nuclei. Quasi-elastic (p,n) scattering data on 232Th and 238U to the isobaric analogue states of the target nucleus are also used to constrain the isovector part of the optical potential. For even-even (odd) actinides almost all low-lying collective levels below 1 MeV (0.5 MeV) of excitation energy are coupled. OMP parameters show a smooth energy dependence and energy independent geometry.
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans or animals, where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, one can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
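A minimal sketch of the Poisson Simulation idea described above: each aggregated flow of a macro-model is drawn per time step as a Poisson variate whose rate varies dynamically with the state, so the macro-model inherits the event-level stochasticity of a micro-model. The birth and death rates, step size, and horizon below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def poisson_macro_sim(n0, birth, death, dt, steps, rng):
    """Macro-level birth-death model in which each flow over a time step
    is drawn as Po(rate * dt) rather than taken as a deterministic flow."""
    n = n0
    path = [n]
    for _ in range(steps):
        births = rng.poisson(birth * n * dt)   # Poisson-distributed inflow
        deaths = rng.poisson(death * n * dt)   # Poisson-distributed outflow
        n = max(n + births - deaths, 0)        # population cannot go negative
        path.append(n)
    return np.array(path)

rng = np.random.default_rng(1)
runs = np.array([poisson_macro_sim(100, 0.10, 0.08, 0.1, 200, rng)[-1]
                 for _ in range(2000)])
# Deterministic (ODE) limit for comparison: n(t) = n0 * exp((b - d) * t)
print(runs.mean())
```

Averaged over many replications the Poisson macro-model tracks the deterministic exponential-growth limit (about 149 individuals at t = 20 here), while individual runs retain realistic fluctuations.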
Modeling Malaysia's Energy System: Some Preliminary Results
Directory of Open Access Journals (Sweden)
Ahmad M. Yusof
2011-01-01
Full Text Available Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that solely caters to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from their sources (import and mining) through conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration with the economic sectors is done exogenously by specifying the annual sectoral energy demand levels. The model in turn optimizes the energy variables for a specified objective function to meet those demands. Results: By minimizing the inter-temporal petroleum product imports for the crude oil system, the annual extraction level of Tapis blend is projected at 579,600 barrels per day. The aggregate demand for petroleum products is projected to grow at 2.1% per year, while motor gasoline and diesel constitute 42 and 38% of the petroleum products demand mix, respectively, over the five-year planning period. Petroleum product imports are expected to grow at 6.0% per year. Conclusion: The preliminary results indicate that the model performs as expected. Thus other types of energy carriers such as natural gas, coal and biomass will be added to the energy system for the overall development of the Malaysia energy model.
Bigagli, Lorenzo; Papeschi, Fabrizio; Nativi, Stefano; Bastin, Lucy; Masó, Joan
2013-04-01
a few products are annotated with their PID; recent studies show that on a total of about 100000 Clearinghouse products, only 37 have the Product Identifier. Furthermore the association should be persistent within the GeoViQua scope. GeoViQua architecture is built on the brokering approach successfully experimented within the EuroGEOSS project and realized by the GEO DAB (Discovery and Access Broker). Part of the GEOSS Common Infrastructure (GCI), the GEO DAB allows for harmonization and distribution in a transparent way for both users and data providers. This way, GeoViQua can effectively complement and extend the GEO DAB obtaining a Quality-augmentation broker (GeoViQua Broker) which plays a central role in ensuring the consistency of the Producer and User quality models. This work is focused on the typical use case in which the GeoViQua Broker performs data discovery from different data providers, and then integrates in the Quality Information Model the producer quality report with the feedback given by users. In particular, this work highlights the problems faced by the GeoViQua Broker and the techniques adopted to ensure consistency and persistency also for quality reports whose target products are not annotated with a PID. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 265178.
A physiological production model for cacao : results of model simulations
Zuidema, P.A.; Leffelaar, P.A.
2002-01-01
CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
Modelling rainfall erosion resulting from climate change
Kinnell, Peter
2016-04-01
It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
Simulation Modeling of Radio Direction Finding Results
Directory of Open Access Journals (Sweden)
K. Pelikan
1994-12-01
Full Text Available It is sometimes difficult to determine analytically the error probabilities of direction finding results when evaluating algorithms of practical interest. Probabilistic simulation models are described in this paper that can be used to study the error performance of new direction finding systems or of geographical modifications to existing configurations.
Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars
Airapetian, V S; Carpenter, K G
2014-01-01
We present the first magnetohydrodynamic model of stellar chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using alpha Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfven wave dissipation and wave reflection. We have demonstrated that, owing to the inclusion of the effects of ion-neutral collisions on the resistivity of magnetized, weakly ionized chromospheric plasma, together with appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfven waves can explain the non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of alpha Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by ...
Complementarity of DM searches in a consistent simplified model: the case of Z{sup ′}
Energy Technology Data Exchange (ETDEWEB)
Jacques, Thomas [SISSA and INFN,via Bonomea 265, 34136 Trieste (Italy); Katz, Andrey [Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Morgante, Enrico; Racco, Davide [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Rameez, Mohamed [Département de Physique Nucléaire et Corpusculaire,Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Riotto, Antonio [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland)
2016-10-14
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z{sup ′} mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while the heavy DM annihilation is completely dominated by Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.
Complementarity of DM Searches in a Consistent Simplified Model: the Case of Z'
Jacques, Thomas; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio
2016-01-01
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC and direct detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy $Z'$ mediates the interactions between the SM and the DM. We find that in most cases IceCube provides the strongest bounds on this scenario, while the LHC constraints are only meaningful for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun are either $b \\bar b$ or $t \\bar t$, while the heavy DM annihilation is completely dominated by $Zh$ channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast Ice...
Energy Technology Data Exchange (ETDEWEB)
Powell, Brian [Clemson Univ., SC (United States); Kaplan, Daniel I [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Arai, Yuji [Univ. of Illinois, Urbana-Champaign, IL (United States); Becker, Udo [Univ. of Michigan, Ann Arbor, MI (United States); Ewing, Rod [Stanford Univ., CA (United States)
2016-12-29
This university-led SBR project is a collaboration led by Dr. Brian Powell (Clemson University) with co-principal investigators Dan Kaplan (Savannah River National Laboratory), Yuji Arai (presently at the University of Illinois), Udo Becker (U of Michigan) and Rod Ewing (presently at Stanford University). Hypothesis: The underlying hypothesis of this work is that strong interactions of plutonium with mineral surfaces are due to formation of inner sphere complexes with a limited number of high-energy surface sites, which results in sorption hysteresis where Pu(IV) is the predominant sorbed oxidation state. The energetic favorability of the Pu(IV) surface complex is strongly influenced by positive sorption entropies, which are mechanistically driven by displacement of solvating water molecules from the actinide and mineral surface during sorption. Objectives: The overarching objective of this work is to examine Pu(IV) and Pu(V) sorption to pure metal (oxyhydr)oxide minerals and sediments using variable temperature batch sorption, X-ray absorption spectroscopy, electron microscopy, and quantum-mechanical and empirical-potential calculations. The data will be compiled into a self-consistent surface complexation model. The novelty of this effort lies largely in the manner the information from these measurements and calculations will be combined into a model that will be used to evaluate the thermodynamics of plutonium sorption reactions as well as predict sorption of plutonium to sediments from DOE sites using a component additivity approach.
Bordin, Lorenzo; Creminelli, Paolo; Mirbabayi, Mehrdad; Noreña, Jorge
2017-03-01
We argue that isotropic scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by ζ and ℛ, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed-limit consistency relation holds after angular average over the long mode. The correlation functions of a long-wavelength spherical scalar mode with several short scalar or tensor modes are fixed by the scaling behavior of the correlators of short modes, independently of the solid inflation action or dynamics of reheating.
Raftari, Behrouz; Vuik, Kees
2015-01-01
The charging of insulating samples degrades the quality and complicates the interpretation of images in scanning electron microscopy and is important in other applications, such as particle detectors. In this paper we analyze this nontrivial phenomenon on different time scales employing the drift-diffusion-reaction approach augmented with the trapping rate equations and a realistic semi-empirical source function describing the pulsed nature of the electron beam. We consider both the fast processes following the impact of a single primary electron, the slower dynamics resulting from the continuous bombardment of a sample, and the eventual approach to the steady-state regime.
Gaggero, Daniele; Marinelli, Antonio; Urbano, Alfredo; Valli, Mauro
2015-01-01
In this Letter we propose a novel interpretation of the anomalous TeV gamma-ray diffuse emission observed by Milagro in the inner Galactic plane consistent with the signal reported by H.E.S.S. in the Galactic ridge; remarkably, our picture also accounts for a relevant portion of the neutrino flux measured by IceCube. Our scenario is based on a recently proposed phenomenological model characterized by radially-dependent cosmic-ray (CR) transport properties. Designed to reproduce both Fermi-LAT gamma-ray data and local CR observables, this model offers for the first time a self-consistent picture of both the GeV and the TeV sky.
Directory of Open Access Journals (Sweden)
J. Callies
2011-08-01
Full Text Available A simple model of the thermohaline circulation (THC is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.
This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.
Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
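The reported power laws are those of the classical advective-diffusive scaling argument, which can be checked numerically. The sketch below is a simplified stand-in for the paper's two-plane equations: it combines a thermal-wind-type balance psi ~ Δb·h² with a vertical advective-diffusive balance psi ~ κ/h, solves for the pycnocline depth h, and fits the resulting log-log slopes.

```python
import numpy as np

def overturning(kappa, db):
    """Combine the two scaling balances
         thermal wind:        psi = db * h**2
         advective-diffusive: psi = kappa / h
       by taking the positive root h of db * h**3 = kappa."""
    h = np.cbrt(kappa / db)
    return db * h**2

# Slope of log(psi) vs log(kappa) at fixed density difference
kappas = np.logspace(-5, -4, 20)
slope_kappa = np.polyfit(np.log(kappas),
                         np.log(overturning(kappas, db=0.01)), 1)[0]

# Slope of log(psi) vs log(db) at fixed diffusivity
dbs = np.logspace(-3, -2, 20)
slope_db = np.polyfit(np.log(dbs),
                      np.log(overturning(1e-4, dbs)), 1)[0]
print(slope_kappa, slope_db)   # -> 2/3 and 1/3
```

Eliminating h gives psi = κ^(2/3)·Δb^(1/3), i.e. exactly the 2/3 and 1/3 exponents quoted in the abstract.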
Schnell, D J; Galavotti, C; Fishbein, M; Chan, D K
1996-01-01
The stages of behavior change model has been used to understand a variety of health behaviors. Since consistent condom use has been promoted as a risk-reduction behavior for prevention of human immunodeficiency virus (HIV) infection, an algorithm for staging the adoption of consistent condom use during vaginal sex was empirically developed using three considerations: HIV prevention efficacy, analogy with work on staging other health-related behaviors, and condom use data from groups at high risk for HIV infection. This algorithm suggests that the adoption of consistent condom use among persons at high risk can be meaningfully measured with the model. However, variations in the algorithm details affect both the interpretation of stages and apportionment of persons across stages.
The relativistic consistent angular-momentum projected shell model study of the N=Z nucleus 52Fe
Institute of Scientific and Technical Information of China (English)
LI YanSong; LONG GuiLu
2009-01-01
The relativistic consistent angular-momentum projected shell model (ReCAPS) is used to study the structure and electromagnetic transitions of the low-lying states in the N=Z nucleus 52Fe. The model calculations show reasonably good agreement with the data. The backbending at 12+ is reproduced, and the energy-level structure suggests that neutron-proton interactions play important roles.
Institute of Scientific and Technical Information of China (English)
(author not listed)
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) is obtained in QLNM. In an important case, this rate is O(n^(-1/2)(log log n)^(1/2)), which is exactly the rate given by the law of the iterated logarithm for partial sums of i.i.d. variables, and thus cannot be improved.
DEFF Research Database (Denmark)
Zahid, F.; Paulsson, Magnus; Polizzi, E.;
2005-01-01
We present a transport model for molecular conduction involving an extended Huckel theoretical treatment of the molecular chemistry combined with a nonequilibrium Green's function treatment of quantum transport. The self-consistent potential is approximated by CNDO (complete neglect of differential...
Postmus, B.R.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2008-01-01
We have constructed a model to predict the properties of non-ionic (alkyl-ethylene oxide) (C(n)E(m)) surfactants, both in aqueous solutions and near a silica surface, based upon the self-consistent field theory using the Scheutjens-Fleer discretisation scheme. The system has the pH and the ionic
Carneiro, D F; Sampaio, M D; Nemes, M C
2003-01-01
We compute the three-loop $\\beta$ function of the Wess-Zumino model to motivate implicit regularization (IR) as a consistent and practical momentum-space framework for studying supersymmetric quantum field theories. In this framework, which works essentially in the physical dimension of the theory, we show that ultraviolet divergences are clearly disentangled from infrared divergences. We obtain consistent results which motivate the method as a good choice for studying supersymmetry anomalies in quantum field theories.
Houston, J B; Kenworthy, K E
2000-03-01
Strategies for the prediction of in vivo drug clearance from in vitro drug metabolite kinetic data are well established for the rat. In this animal species, metabolism rate-substrate concentration relationships can commonly be described by the classic hyperbola consistent with the Michaelis-Menten model and simple scaling of the parameter intrinsic clearance (CL(int) - the ratio of V(max) to K(m)) is particularly valuable. The in vitro scaling of kinetic data from human tissue is more complex, particularly as many substrates for cytochrome P450 (CYP) 3A4, the dominant human CYP, show nonhyperbolic metabolism rate-substrate concentration curves. This review critically examines these types of data, which require the adoption of an enzyme model with multiple sites showing cooperative binding for the drug substrate, and considers the constraints this kinetic behavior places on the prediction of in vivo pharmacokinetic characteristics, such as metabolic stability and inhibitory drug interaction potential. The cases of autoactivation and autoinhibition are discussed; the former results in an initial lag in the rate-substrate concentration profile to generate a sigmoidal curve whereas the latter is characterized by a convex curve as V(max) is not maintained at high substrate concentrations. When positive cooperativity occurs, we suggest the use of CL(max), the maximal clearance resulting from autoactivation, as a substitute for CL(int). The impact of heteroactivation on this approach is also of importance. In the case of negative cooperativity, care in using the V(max)/K(m) approach to CL(int) determination must be taken. Examples of substrates displaying each type of kinetic behavior are discussed for various recombinant CYP enzymes, and possible artifactual sources of atypical rate-concentration curves are outlined. Finally, the consequences of ignoring atypical Michaelis-Menten kinetic relationships are examined, and the inconsistencies reported for both different
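The contrast drawn above between hyperbolic Michaelis-Menten kinetics and sigmoidal (autoactivated) kinetics, and the use of a maximal clearance in place of CL(int), can be illustrated numerically. Parameter values below are arbitrary illustrations, not drawn from the review.

```python
import numpy as np

def michaelis_menten(s, vmax, km):
    # classic hyperbola: v = Vmax * S / (Km + S)
    return vmax * s / (km + s)

def hill(s, vmax, s50, n):
    # sigmoidal kinetics; n > 1 corresponds to positive cooperativity
    return vmax * s**n / (s50**n + s**n)

vmax, km = 10.0, 5.0
s = np.logspace(-3, 3, 2000)

# Hyperbolic case: clearance v/S is maximal as S -> 0 and equals
# CLint = Vmax / Km
cl_mm = michaelis_menten(s, vmax, km) / s
print(cl_mm.max(), vmax / km)

# Sigmoidal case: v/S peaks at a finite concentration; this peak is the
# CLmax suggested as a substitute for CLint under autoactivation
cl_hill = hill(s, vmax, s50=5.0, n=2.0) / s
s_peak = s[np.argmax(cl_hill)]
print(cl_hill.max(), s_peak)
```

For n = 2 the clearance curve v/S peaks at S = S50 with value Vmax/(2·S50), showing concretely why a single CL(int) extrapolated from low substrate concentrations misrepresents an autoactivated enzyme.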
Using a Theory-Consistent CVAR Scenario to Test an Exchange Rate Model Based on Imperfect Knowledge
Directory of Open Access Journals (Sweden)
Katarina Juselius
2017-07-01
Full Text Available A theory-consistent CVAR scenario describes a set of testable regularities one should expect to see in the data if the basic assumptions of the theoretical model are empirically valid. Using this method, the paper demonstrates that all basic assumptions about the shock structure and steady-state behavior of an imperfect-knowledge-based model for exchange rate determination can be formulated as testable hypotheses on common stochastic trends and cointegration. This model obtains remarkable support for almost every testable hypothesis and is able to adequately account for the long persistent swings in the real exchange rate.
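The basic testable regularity behind such hypotheses — two I(1) series sharing a common stochastic trend, so that a linear combination of them is stationary — can be sketched in a few lines. This is a deliberately simplified Engle-Granger-style two-step check, not the paper's full CVAR/Johansen framework, and all series are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# A random-walk "common stochastic trend" shared by two series: each series
# is I(1) on its own, but the combination y - 2x is stationary.
trend = np.cumsum(rng.normal(size=n))
x = trend + rng.normal(scale=0.5, size=n)
y = 2.0 * trend + rng.normal(scale=0.5, size=n)

def ar1_coeff(z):
    """OLS estimate of phi in z_t = phi * z_{t-1} + e_t (no intercept)."""
    return np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

# Step 1: cointegrating regression y = b * x + u (through the origin)
b = np.dot(x, y) / np.dot(x, x)
resid = y - b * x
# Step 2: x itself is near a unit root, while the residual is far from one
print(ar1_coeff(x), ar1_coeff(resid))
```

The AR(1) coefficient of x sits essentially at 1 (the common trend), while the residual's is near 0: exactly the pattern of "common stochastic trends plus cointegration" that the CVAR scenario turns into formal hypotheses.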
Institute of Scientific and Technical Information of China (English)
TANG NianSheng; CHEN XueDong; WANG XueRen
2009-01-01
Semiparametric reproductive dispersion nonlinear model (SRDNM) is an extension of nonlinear reproductive dispersion models and semiparametric nonlinear regression models, and includes the semiparametric nonlinear model and the semiparametric generalized linear model as its special cases. Based on the local kernel estimate of the nonparametric component, profile-kernel and backfitting estimators of the parameters of interest are proposed in SRDNM, and a theoretical comparison of both estimators is also investigated in this paper. Under some regularity conditions, strong consistency and asymptotic normality of the two estimators are proved. It is shown that the backfitting method produces a larger asymptotic variance than the profile-kernel method. A simulation study and a real example are used to illustrate the proposed methodologies.
Lin, M. C.; Verboncoeur, J.
2016-10-01
A maximum electron current transmitted through a planar diode gap is limited by the space charge of electrons dwelling across the gap region, the so-called space-charge-limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL limit can be dramatically raised, enhancing the transmitted electron current. In this work, we have developed a relativistic self-consistent model for studying the enhancement of maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a benchmark for verification of simulation codes, as well as a basis for extension to higher dimensions.
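For reference, the electron-only baseline that the ion flow raises is the classical one-dimensional nonrelativistic Child-Langmuir limit. The sketch below evaluates the standard planar-diode formula (not the paper's self-consistent two-species model):

```python
import math

def child_langmuir_j(voltage, gap):
    """Nonrelativistic 1-D space-charge-limited current density (A/m^2)
    for a planar vacuum diode: J = (4/9) eps0 sqrt(2e/m) V^(3/2) / d^2."""
    eps0 = 8.854e-12      # vacuum permittivity, F/m
    e = 1.602e-19         # elementary charge, C
    m = 9.109e-31         # electron mass, kg
    return (4.0 / 9.0) * eps0 * math.sqrt(2.0 * e / m) * voltage**1.5 / gap**2

j = child_langmuir_j(voltage=1.0e3, gap=1.0e-3)
print(j)   # ~7.4e4 A/m^2 for a 1 kV gap of 1 mm
```

The familiar compact form J ≈ 2.33e-6 · V^(3/2)/d² (SI units) gives the same value; any current above this for a pure electron flow must come from charge neutralization such as the counter-streaming ions studied in the paper.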
Exact results for the one dimensional asymmetric exclusion model
Derrida, B.; Evans, M. R.; Hakim, V.; Pasquier, V.
1993-11-01
The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices.
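For small systems the steady state described above can be checked by brute force: enumerate all configurations, build the continuous-time master equation for the open-boundary model (injection rate α, bulk hops, extraction rate β), and relax to the stationary distribution. The sketch below is such a check, not the matrix-product solution itself; for α = β = 1 the matrix-product result gives a current equal to a ratio of Catalan numbers, which the brute-force computation reproduces.

```python
from itertools import product

def tasep_stationary(L, alpha, beta, hop=1.0):
    """Stationary distribution of the open-boundary TASEP, obtained by
    explicit Euler iteration of the master equation over all 2**L states."""
    states = list(product((0, 1), repeat=L))
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    rates = [[0.0] * n for _ in range(n)]   # rates[i][j]: rate i -> j
    for s in states:
        i = idx[s]
        if s[0] == 0:                        # injection at the left boundary
            rates[i][idx[(1,) + s[1:]]] += alpha
        if s[-1] == 1:                       # extraction at the right boundary
            rates[i][idx[s[:-1] + (0,)]] += beta
        for k in range(L - 1):               # hard-core hops to the right
            if s[k] == 1 and s[k + 1] == 0:
                t = list(s); t[k], t[k + 1] = 0, 1
                rates[i][idx[tuple(t)]] += hop
    p, dt = [1.0 / n] * n, 0.05
    for _ in range(20000):                   # relax to the steady state
        q = list(p)
        for i in range(n):
            q[i] -= dt * sum(rates[i]) * p[i]
            for j in range(n):
                if rates[i][j]:
                    q[j] += dt * rates[i][j] * p[i]
        p = q
    return states, p

# Steady-state current through the left bond: J = alpha * P(first site empty).
# For L = 3, alpha = beta = 1 the exact value is C_3/C_4 = 5/14.
states, p = tasep_stationary(3, 1.0, 1.0)
current = sum(pi for s, pi in zip(states, p) if s[0] == 0)
```

This kind of small-L enumeration is how the matrix-product weights were originally cross-checked; it becomes infeasible beyond a dozen sites, which is exactly where the algebraic solution pays off.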
Exact results for the one dimensional asymmetric exclusion model
Energy Technology Data Exchange (ETDEWEB)
Derrida, B.; Evans, M.R.; Pasquier, V. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Service de Physique Theorique; Hakim, V. [Ecole Normale Superieure, 75 - Paris (France)
1993-12-31
The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author).
The Danish national passenger model – Model specification and results
DEFF Research Database (Denmark)
Rich, Jeppe; Hansen, Christian Overgaard
2016-01-01
The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...
Energy Technology Data Exchange (ETDEWEB)
Andrade, Maria Celia Ramos; Ludwig, Gerson Otto [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Plasma]. E-mail: mcr@plasma.inpe.br
2004-07-01
Different bootstrap current formulations are implemented in a self-consistent equilibrium calculation obtained from a direct variational technique in fixed boundary tokamak plasmas. The total plasma current profile is supposed to have contributions of the diamagnetic, Pfirsch-Schlueter, and the neoclassical Ohmic and bootstrap currents. The Ohmic component is calculated in terms of the neoclassical conductivity, compared here among different expressions, and the loop voltage determined consistently in order to give the prescribed value of the total plasma current. A comparison among several bootstrap current models for different viscosity coefficient calculations and distinct forms for the Coulomb collision operator is performed for a variety of plasma parameters of the small aspect ratio tokamak ETE (Experimento Tokamak Esferico) at the Associated Plasma Laboratory of INPE, in Brazil. We have performed this comparison for the ETE tokamak so that the differences among all the models reported here, mainly regarding plasma collisionality, can be better illustrated. The dependence of the bootstrap current ratio upon some plasma parameters in the frame of the self-consistent calculation is also analysed. We emphasize in this paper what we call the Hirshman-Sigmar/Shaing model, valid for all collisionality regimes and aspect ratios, and a fitted formulation proposed by Sauter, which has the same range of validity but is faster to compute than the previous one. The advantages or possible limitations of all these different formulations for the bootstrap current estimate are analysed throughout this work. (author)
Wang, Bo; Bauer, Sebastian
2016-04-01
Geological models are a prerequisite for exploring possible uses of the subsurface and evaluating induced impacts. Subsurface geological models often show strong complexity in geometry and hydraulic connectivity because of their heterogeneous nature. In order to model that complexity, the corner point grid approach has been applied by geologists for decades. The corner point grid utilizes a set of hexahedral blocks to represent geological formations. Due to the appearance of eroded geological layers, some edges of these blocks may be collapsed and the blocks thus degenerate. This leads to inconsistencies and makes it impossible to use the corner point grid directly with a finite-element-based simulator. Therefore, in this study, we introduce a workflow for transferring heterogeneous geological models to consistent finite element models. In the corner point grid, hexahedral blocks without collapsed edges are converted to hexahedral elements directly; if they degenerate, each block is divided into prism, pyramid, and tetrahedral elements depending on its individual degeneracy. This approach consistently converts any degenerated corner point grid into a consistent hybrid finite element mesh. Along with the above converting scheme, the corresponding heterogeneous geological data, e.g. permeability and porosity, can be transferred as well. Moreover, well trajectories designed in the corner point grid can be resampled to the nodes in the finite element mesh, which represent the locations for source terms along the well path. As a proof of concept, we implement the workflow in the framework of transferring models from Petrel to the finite element OpenGeoSys simulator. As application scenario we choose a deep geothermal reservoir operation in the North German Basin. A well doublet is defined in a saline aquifer in the Rhaetian formation, which has a depth of roughly 4000 m. The geometric model shows all kinds of degenerated blocks due to eroded layers and the
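The block-splitting decision described above hinges on how many of a block's eight corners remain distinct after edges collapse. The sketch below is a deliberately simplified illustration of that classification step (real corner-point converters such as the Petrel-to-OpenGeoSys workflow must also inspect which faces collapsed, handle partial pinch-outs, and split the remaining degenerate cases into several elements):

```python
def classify_block(corners, tol=1e-9):
    """Classify a corner-point block by its number of distinct corners.

    `corners` is a list of eight (x, y, z) tuples. A fully eroded edge
    collapses two corners onto each other, so the count of distinct
    points suggests which finite element type the block maps to.
    """
    distinct = []
    for c in corners:
        if all(max(abs(a - b) for a, b in zip(c, d)) > tol for d in distinct):
            distinct.append(c)
    mapping = {8: "hexahedron", 6: "prism", 5: "pyramid", 4: "tetrahedron"}
    return mapping.get(len(distinct), "degenerate (needs splitting)")

# A regular block ...
hexa = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
# ... and a block whose top face is pinched onto one edge (two corner pairs
# coincide), leaving six distinct corners -> prism
prism = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (0, 0, 1), (1, 0, 1), (1, 0, 1), (0, 0, 1)]
```

A seven-corner block falls through to the "needs splitting" branch, which is where the prism/pyramid/tetrahedron subdivision of the paper's workflow takes over.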
Neradilek, Moni B; Polissar, Nayak L; Einstein, Daniel R; Glenny, Robb W; Minard, Kevin R; Carson, James P; Jiao, Xiangmin; Jacob, Richard E; Cox, Timothy C; Postlethwait, Edward M; Corley, Richard A
2012-06-01
We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it, and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis.
Mathematical Existence Results for the Doi-Edwards Polymer Model
Chupin, Laurent
2017-01-01
In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two dimensional case, without any restriction on the smallness of the data.
Berg, Matthew; Hartley, Brian; Richters, Oliver
2015-01-01
By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
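The "stock-flow consistent" discipline underlying such models is a pair of accounting identities: in the transactions-flow matrix, every flow row sums to zero across sectors (every payment is someone's receipt), and every sector column sums to zero once changes in stocks are included (all funds are accounted for). A minimal sketch of that consistency check, with a hypothetical three-sector toy economy (the sectors and flow values are invented for illustration, not taken from the paper's model):

```python
def is_stock_flow_consistent(matrix, tol=1e-9):
    """Check the defining identities of a transactions-flow matrix:
    each flow row and each sector column must sum to zero."""
    rows_ok = all(abs(sum(row)) < tol for row in matrix)
    cols_ok = all(abs(sum(col)) < tol for col in zip(*matrix))
    return rows_ok and cols_ok

# Toy economy. Columns: households, firms, banks.
# Sign convention: outflows negative, inflows positive.
flows = [
    [-40.0, +40.0,  0.0],   # consumption: households pay, firms receive
    [+38.0, -38.0,  0.0],   # wages: firms pay, households receive
    [ +1.0,  -2.0, +1.0],   # interest on deposits and loans
    [ +1.0,   0.0, -1.0],   # change in household deposits (a stock change)
    [  0.0,   0.0,  0.0],   # change in firm loans (none this period)
]
```

Note that the interest row balances at a positive rate while every column still closes, which is the accounting core of the paper's argument that a stationary economy does not require 0% interest; the full model adds the input-output and physical-flow layers on top of this skeleton.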
Cappelluti, Federica; Ma, Shuai; Pugliese, Diego; Sacco, Adriano; Lamberti, Andrea; Ghione, Giovanni; Tresso, Elena
2013-09-21
A numerical device-level model of dye-sensitized solar cells (DSCs) is presented, which self-consistently couples a physics-based description of the photoactive layer with a compact circuit-level description of the passive parts of the cell. The opto-electronic model of the nanoporous dyed film includes a detailed description of photogeneration and trap-limited kinetics, and a phenomenological description of nonlinear recombination. Numerical simulations of the dynamic small-signal behavior of DSCs, accounting for trapping and nonlinear recombination mechanisms, are reported for the first time and validated against experiments. The model is applied to build a consistent picture of the static and dynamic small-signal performance of nanocrystalline TiO2-based DSCs under different incident illumination intensity and direction, analyzed in terms of current-voltage characteristic, Incident Photon to Current Efficiency, and Electrochemical Impedance Spectroscopy. This is achieved with a reliable extraction and validation of a unique set of model parameters against a large enough set of experimental data. Such a complete and validated description allows us to gain a detailed view of the cell collection efficiency dependence on different operating conditions. In particular, based on dynamic numerical simulations, we provide for the first time a sound support to the interpretation of the diffusion length, in the presence of nonlinear recombination and non-uniform electron density distribution, as derived from small-signal characterization techniques and clarify its correlation with different estimation methods based on spectral measurements.
Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.
2007-01-01
We present the new version of the ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows the simultaneous self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting for the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows re-evaluating the thermalization efficiency of the absorbed solar ultraviolet energy and the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.
Metzger, S.; Xu, K.; Desai, A. R.; Taylor, J. R.; Kljun, N.; Schneider, D.; Kampe, T. U.; Fox, A. M.
2013-12-01
Process-based models, such as land surface models (LSMs), allow insight into the spatio-temporal distribution of stocks and the exchange of nutrients, trace gases etc. among environmental compartments. More recently, LSMs have also become capable of assimilating time series of in-situ reference observations. This enables calibrating the underlying functional relationships to site-specific characteristics, or constraining the model results after each time step in an attempt to minimize drift. The spatial resolution of LSMs is typically on the order of 10^2-10^4 km2, which is suitable for linking regional to continental scales and beyond. However, continuous in-situ observations of relevant stock and exchange variables, such as tower-based eddy-covariance (EC) fluxes, represent orders of magnitude smaller spatial scales (10^-6-10^1 km2). During data assimilation, this significant gap in spatial representativeness is typically either neglected, or side-stepped using simple tiling approaches. Moreover, at 'coarse' resolutions, a single LSM evaluation per time step implies linearity among the underlying functional relationships as well as among the sub-grid land cover fractions. This, however, is not warranted for land-atmosphere exchange processes over more complex terrain. Hence, it is desirable to explicitly consider spatial variability at LSM sub-grid scales. Here we present a procedure that determines from a single EC tower the spatially integrated probability density function (PDF) of the surface-atmosphere exchange for individual land covers. These PDFs allow quantifying the expected value, as well as spatial variability over a target domain, can be assimilated in tiling-capable LSMs, and mitigate linearity assumptions at 'coarse' resolutions. The procedure is based on the extraction and extrapolation of environmental response functions (ERFs), for which a technical-oriented companion poster is submitted. In short, the subsequent steps are: (i) Time
Linden, Tim; Anderson, Brandon
2010-01-01
A generic prediction in the paradigm of weakly interacting dark matter is the production of relativistic particles from dark matter pair-annihilation in regions of high dark matter density. Ultra-relativistic electrons and positrons produced in the center of the Galaxy by dark matter annihilation should produce a diffuse synchrotron emission. While the spectral shape of the synchrotron dark matter haze depends on the particle model (and secondarily on the galactic magnetic fields), the morphology of the haze depends primarily on (1) the dark matter density distribution, (2) the galactic magnetic field morphology, and (3) the diffusion model for high-energy cosmic-ray leptons. Interestingly, an unidentified excess of microwave radiation with characteristics similar to those predicted by dark matter models has been claimed to exist near the galactic center region in the data reported by the WMAP satellite, and dubbed the "WMAP haze". In this study, we carry out a self-consistent treatment of the variables enume...
Energy Technology Data Exchange (ETDEWEB)
Filanovich, A.N., E-mail: a.n.filanovich@urfu.ru; Povzner, A.A., E-mail: a.a.povzner@urfu.ru
2016-06-15
A self-consistent thermodynamic model of PuCoGa₅ is developed, which for the first time takes into account the anharmonicity of both acoustic phonons, described within a Debye model, and optical phonons, considered in an Einstein approximation. Within the framework of this model, we have calculated the temperature dependencies of lattice contributions to heat capacity, bulk modulus, volumetric coefficient of thermal expansion, Debye and Einstein temperatures and their Grüneisen parameters. The electronic heat capacity of PuCoGa₅ is obtained, which demonstrates an unusual temperature dependence with two maxima. In addition, it is shown that an abnormal low temperature behavior of the bulk modulus of PuCoGa₅ is not caused by the effects of lattice anharmonicity and is most likely due to the valence fluctuations, which is in agreement with previous studies.
Greczynski, G.; Hultman, L.
2016-11-01
We present first self-consistent modelling of x-ray photoelectron spectroscopy (XPS) Ti 2p, N 1s, O 1s, and C 1s core level spectra with a cross-peak quantitative agreement for a series of TiN thin films grown by dc magnetron sputtering and oxidized to different extent by varying the venting temperature Tv of the vacuum chamber before removing the deposited samples. So-obtained film series constitute a model case for XPS application studies, where certain degree of atmosphere exposure during sample transfer to the XPS instrument is unavoidable. The challenge is to extract information about surface chemistry without invoking destructive pre-cleaning with noble gas ions. All TiN surfaces are thus analyzed in the as-received state by XPS using monochromatic Al Kα radiation (hν = 1486.6 eV). Details of line shapes and relative peak areas obtained from deconvolution of the reference Ti 2p and N 1 s spectra representative of a native TiN surface serve as an input to model complex core level signals from air-exposed surfaces, where contributions from oxides and oxynitrides make the task very challenging considering the influence of the whole deposition process at hand. The essential part of the presented approach is that the deconvolution process is not only guided by the comparison to the reference binding energy values that often show large spread, but in order to increase reliability of the extracted chemical information the requirement for both qualitative and quantitative self-consistency between component peaks belonging to the same chemical species is imposed across all core-level spectra (including often neglected O 1s and C 1s signals). The relative ratios between contributions from different chemical species vary as a function of Tv presenting a self-consistency check for our model. We propose that the cross-peak self-consistency should be a prerequisite for reliable XPS peak modelling as it enhances credibility of obtained chemical information, while relying
Energy Technology Data Exchange (ETDEWEB)
NONE
2000-03-01
A microwave consistency meter with the world's smallest caliber of 50 mm (non-sanitary type) was developed and brought to market for general industrial fields, with the food industry as the main target. The commercialization of this model made inline consistency measurement possible in flow-rate processes such as foodstuff material processing. In addition, since the maximum fluid conductivity specification is set to 15 mS/cm, the applicable range of consistency measurements is expanded, and high-consistency measurement has become possible, which could not be realized with conventional consistency meters. Application to diversified processes, such as chemical plants, is therefore possible. (translated by NEDO)
Kou, Jisheng
2016-01-01
A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is a latest alternative over the NPT-based framework to model the realistic fluids. The proposed model uses the Helmholtz free energy rather than Gibbs free energy in the NPT-based framework. Different from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then we derive a transport equation of the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of two fluids. A relation between the pressure gradient and chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which dem...
Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model
Hazan, Aurélien
2016-01-01
We show that a steady-state stock-flow consistent macroeconomic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to specify the dynamics. Several exact and approximate methods to compute the volume, inspired by operations research and the analysis of metabolic networks, are compared. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
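The geometric object here is a polytope {x : Ax ≤ b} of feasible flow vectors, and "volume" is the quantity of interest. As a minimal sketch of the idea, the code below estimates a polytope volume by rejection sampling inside a bounding box; the paper's exact and hit-and-run-style methods are needed in realistic dimensions, where rejection sampling degrades rapidly, and the simplex used here is a toy stand-in for an economic constraint set.

```python
import random

def polytope_volume_mc(constraints, bounds, n_samples=200000, seed=42):
    """Estimate the volume of {x : a.x <= b for each (a, b) in constraints}
    by uniform rejection sampling inside an axis-aligned bounding box."""
    rng = random.Random(seed)
    box_volume = 1.0
    for lo, hi in bounds:
        box_volume *= hi - lo
    hits = 0
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if all(sum(a * xi for a, xi in zip(row, x)) <= b
               for row, b in constraints):
            hits += 1
    return box_volume * hits / n_samples

# Toy budget-style constraint set: x, y, z >= 0 and x + y + z <= 1,
# a simplex of exact volume 1/6.
constraints = [([1.0, 1.0, 1.0], 1.0)]
bounds = [(0.0, 1.0)] * 3
vol = polytope_volume_mc(constraints, bounds)
```

A shrinking estimated volume under tightened constraints is the "fragility" signal the abstract refers to: fewer feasible steady-state flow configurations remain.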
McGurk, B. J.; Painter, T. H.
2014-12-01
Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpacks have been available to compare with modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for spring of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness in a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration that was based on 30 years of inflow to Hetch Hetchy produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of a detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons of observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snowmelt runoff.
Energy Technology Data Exchange (ETDEWEB)
Gloaguen, D.; Guillen, R. [Laboratoire d'Applications des Materiaux a la Mecanique (LAMM), C.R.T.T., Boulevard de l'Universite, B.P. 406, 44602 Saint-Nazaire cedex (France); Francois, M. [Laboratoire des Systemes Mecaniques et d'Ingenierie Simultanee (LASMIS), Universite de Technologie de Troyes, 11 rue Marie Curie, B.P. 2060, 10010 Troyes (France); Royer, J. [Laboratoire Mecanique et Materiaux (LMM), Ecole Centrale de Nantes, 1 rue de la Noe, B.P. 92101, 44321 Nantes cedex 03 (France)
2002-09-16
Internal stresses due to anisotropic thermal and plastic properties were investigated in rolled α-titanium. The thermal stresses induced by a cooling process were predicted using a self-consistent model and compared with experimental results obtained by X-ray diffraction. A study of the elastoplastic response after uniaxial loading was performed along the rolling and the transverse direction of the sheet. Using an elastoplastic self-consistent model, the predicted results were compared with X-ray diffraction and mechanical tests. Theoretical and experimental results agree in their tendencies. The comparison between ε_φψ versus sin²ψ and simulations confirms that prismatic slip is the main active deformation mode. (Abstract Copyright [2002], Wiley Periodicals, Inc.)
DEFF Research Database (Denmark)
Peña, N.A.; Anton, A.; Fantke, Peter
2016-01-01
for the different metal-related processes and interactions. The proposed framework takes into consideration the speciation of the metals to accurately describe the soil processes (runoff and leaching). The processes involving degradation are assumed to be insignificant for metals, and volatilization is only accounted...... for special cases (i.e. mercury). Finally, a new module of erosion is included in the modified PestLCI model, because the transport of soil particles to which the metals are bound needs to be considered as a potential source of emissions to surface water. In conclusion, we provide a starting point to better...
Modeling Malaysia's Energy System: Some Preliminary Results
Ahmad M. Yusof
2011-01-01
Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that solely caters to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from its sources (import and mining) through some conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration to the economic sectors is done exogene...
Engineering Glass Passivation Layers -Model Results
Energy Technology Data Exchange (ETDEWEB)
Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.
2011-08-08
The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate, sulfate, and phosphate rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solutions vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan
Directory of Open Access Journals (Sweden)
N Rahimipour
2015-07-01
The classical J1-J2 Heisenberg model on a bipartite lattice exhibits Néel order. However, if the antiferromagnetic interactions between next-nearest neighbors (nnn) are increased with respect to the nearest neighbors (nn), a frustration effect arises. In such situations, new phases can appear, such as ordered phases with coplanar or spiral ordering and disordered phases such as spin liquids. In this paper we use the self-consistent Gaussian approximation to study the J1-J2 Heisenberg model on honeycomb and diamond lattices. We find spin liquid phases such as the ring-liquid and pancake-liquid in the honeycomb lattice. Also, for the diamond lattice, we show that the degeneracy of the ground state can be lifted by thermal fluctuations through the order-by-disorder mechanism.
Self-consistent modelling of line-driven hot-star winds with Monte Carlo radiation hydrodynamics
Noebauer, U M
2015-01-01
Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite-volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multi-line effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. In this work, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach is dem...
DEFF Research Database (Denmark)
Thomsen, Christa; Nielsen, Anne Ellerup
2006-01-01
of a case study showing that companies use different and not necessarily consistent strategies for reporting on CSR. Finally, the implications for managerial practice are discussed. The chapter concludes by highlighting the value and awareness of the discourse and the discourse types adopted......This chapter first outlines theory and literature on CSR and Stakeholder Relations focusing on the different perspectives and the contextual and dynamic character of the CSR concept. CSR reporting challenges are discussed and a model of analysis is proposed. Next, our paper presents the results...... in the reporting material. By implementing consistent discourse strategies that interact according to a well-defined pattern or order, it is possible to communicate a strong social commitment on the one hand, and to take into consideration the expectations of the shareholders and the other stakeholders...
José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido
2016-04-01
Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. Therefore, increasing the current understanding of such exceptional situations is required, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge shall eventually lead us to produce more reliable projections about the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterization of events with return periods exceeding a few decades. This study proposes a new approach that allows studying storms based on a synthetic, but physically consistent, database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. However, downscaling the full CESM simulation at such high resolution is infeasible nowadays. Hence, a number of case studies are previously selected. This selection is carried out by examining the precipitation averaged over an area encompassing Switzerland in the ESM. Using a hydrological criterion, precipitation is accumulated over several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this data set can be used as input in hydrological models. Thus, quantile mapping is used to remove such biases. For this task
Quantitative magnetospheric models: results and perspectives.
Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team
Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into a global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in the global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in the diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.
Basurah, Hassan M.; Ali, Alaa; Dopita, Michael A.; Alsulami, R.; Amer, Morsi A.; Alruhaili, A.
2016-05-01
We present integral field unit (IFU) spectroscopy and self-consistent photoionization modelling for a sample of four southern Galactic planetary nebulae (PNe) with supposed weak emission-line central stars. The Wide Field Spectrograph on the ANU 2.3 m telescope has been used to provide IFU spectroscopy for NGC 3211, NGC 5979, My 60, and M 4-2 covering the spectral range of 3400-7000 Å. All objects are high-excitation non-Type I PNe, with strong He II emission, strong [Ne V] emission, and weak low-excitation lines. They all appear to be predominantly optically thin nebulae excited by central stars with Teff > 10^5 K. Three PNe of the sample have central stars which have been previously classified as weak emission-line stars (WELS), and the fourth also shows the characteristic recombination lines of a WELS. However, the spatially resolved spectroscopy shows that rather than arising in the central star, the C IV and N III recombination line emission is distributed in the nebula, and in some cases concentrated in discrete nebular knots. This may suggest that the WELS classification is spurious, and that, rather, these lines arise from (possibly chemically enriched) pockets of nebular gas. Indeed, from careful background subtraction we were able to identify three of the sample as being hydrogen rich O(H)-Type. We have constructed fully self-consistent photoionization models for each object. This allows us to independently determine the chemical abundances in the nebulae, to provide new model-dependent distance estimates, and to place the central stars on the Hertzsprung-Russell diagram. All four PNe have similar initial mass (1.5 < M/M⊙ < 2.0) and are at a similar evolutionary stage.
Ferrier, Ken L.; Austermann, Jacqueline; Mitrovica, Jerry X.; Pico, Tamara
2017-10-01
Sea-level changes are of wide interest because they regulate coastal hazards, shape the sedimentary geologic record and are sensitive to climate change. In areas where rivers deliver sediment to marine deltas and fans, sea-level changes are strongly modulated by the deposition and compaction of marine sediment. Deposition affects sea level by increasing the elevation of the seafloor, by perturbing crustal elevation and gravity fields and by reducing the volume of seawater through the incorporation of water into sedimentary pore space. In a similar manner, compaction affects sea level by lowering the elevation of the seafloor and by purging water out of sediments and into the ocean. Here we incorporate the effects of sediment compaction into a gravitationally self-consistent global sea-level model by extending the approach of Dalca et al. (2013). We show that incorporating compaction requires accounting for two quantities that are not included in the Dalca et al. (2013) analysis: the mean porosity of the sediment and the degree of saturation in the sediment. We demonstrate the effects of compaction by modelling sea-level responses to two simplified 122-kyr sediment transfer scenarios for the Amazon River system, one including compaction and one neglecting compaction. These simulations show that the largest effect of compaction is on the thickness of the compacting sediment, an effect that is largest where deposition rates are fastest. Compaction can also produce minor sea-level changes in coastal regions by influencing shoreline migration and the location of seawater loading, which perturbs crustal elevations. By providing a tool for modelling gravitationally self-consistent sea-level responses to sediment compaction, this work offers an improved approach for interpreting the drivers of past sea-level changes.
Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-07
The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of these tests is to assess the model's behavioral properties. They evaluate whether the predicted impacts are reasonable from a qualitative perspective, i.e., whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed to improve its behavior, both qualitatively and quantitatively.
Energy Technology Data Exchange (ETDEWEB)
Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.
2012-04-24
We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.
Directory of Open Access Journals (Sweden)
P.-P. Mathieu
2012-08-01
Full Text Available The terrestrial biosphere is currently a strong sink for anthropogenic CO2 emissions. Through the radiative properties of CO2, the strength of this sink has a direct influence on the radiative budget of the global climate system. The accurate assessment of this sink and its evolution under a changing climate is, hence, paramount for any efficient management strategies of the terrestrial carbon sink to avoid dangerous climate change. Unfortunately, simulations of carbon and water fluxes with terrestrial biosphere models exhibit large uncertainties. A considerable fraction of this uncertainty reflects uncertainty in the parameter values of the process formulations within the models. This paper describes the systematic calibration of the process parameters of a terrestrial biosphere model against two observational data streams: remotely sensed FAPAR (fraction of absorbed photosynthetically active radiation) provided by the MERIS (ESA's Medium Resolution Imaging Spectrometer) sensor, and in situ measurements of atmospheric CO2 provided by the GLOBALVIEW flask sampling network. We use the Carbon Cycle Data Assimilation System (CCDAS) to systematically calibrate some 70 parameters of the terrestrial BETHY (Biosphere Energy Transfer Hydrology) model. The simultaneous assimilation of all observations provides parameter estimates and uncertainty ranges that are consistent with the observational information. In a subsequent step these parameter uncertainties are propagated through the model to uncertainty ranges for predicted carbon fluxes. We demonstrate the consistent assimilation at global scale, where the global MERIS FAPAR product and atmospheric CO2 are used simultaneously. The assimilation improves the match to independent observations. We quantify how MERIS data improve the accuracy of the current and future (net and gross) carbon flux estimates (within and beyond the assimilation period). We further demonstrate the use of an interactive mission benefit
Directory of Open Access Journals (Sweden)
P.-P. Mathieu
2011-11-01
Full Text Available The terrestrial biosphere is currently a strong sink for anthropogenic CO2 emissions. Through the radiative properties of CO2, the strength of this sink has a direct influence on the radiative budget of the global climate system. The accurate assessment of this sink and its evolution under a changing climate is, hence, paramount for any efficient management strategies of the terrestrial carbon sink to avoid dangerous climate change. Unfortunately, simulations of carbon and water fluxes with terrestrial biosphere models exhibit large uncertainties. A considerable fraction of this uncertainty reflects uncertainty in the parameter values of the process formulations within the models. This paper describes the systematic calibration of the process parameters of a terrestrial biosphere model against two observational data streams: remotely sensed FAPAR provided by the MERIS sensor, and in situ measurements of atmospheric CO2 provided by the GLOBALVIEW flask sampling network. We use the Carbon Cycle Data Assimilation System (CCDAS) to systematically calibrate some 70 parameters of the terrestrial biosphere model BETHY. The simultaneous assimilation of all observations provides parameter estimates and uncertainty ranges that are consistent with the observational information. In a subsequent step these parameter uncertainties are propagated through the model to uncertainty ranges for predicted carbon fluxes. We demonstrate the consistent assimilation for two different set-ups: first at site scale, where MERIS FAPAR observations at a range of sites are used as simultaneous constraints, and second at global scale, where the global MERIS FAPAR product and atmospheric CO2 are used simultaneously. On both scales the assimilation improves the match to independent observations. We quantify how MERIS data improve the accuracy of the current and future (net and gross) carbon flux estimates (within and beyond the assimilation period). We further demonstrate the
Noble, Pascal
2012-01-01
In this paper we derive consistent shallow water equations for thin films of power-law fluids flowing down an incline. These models account for the streamwise diffusion of momentum, which is important for accurately describing the full dynamics of thin film flows when instabilities such as roll-waves arise. The models are validated through a comparison with the Orr-Sommerfeld equations for large-scale perturbations. We only consider laminar flow, for which the boundary layer arising from the interaction of the flow with the bottom surface has an influence over the whole direction transverse to the flow. In this case the very concept of a thin film, and its relation to the long-wave asymptotics, naturally leads to flow conditions around a uniform free-surface Poiseuille flow. The apparent viscosity diverges at the free surface, which, in turn, introduces a singularity into the formulation of the Orr-Sommerfeld equations and into the derivation of shallow water models. We remove this singularity by introducing a weaker formulation of Cauc...
Hossain, M.; Steinmann, P.
2014-04-01
A physically-based small strain curing model has been developed and discussed in our previous contribution (Hossain et al. in Comput Mech 43:769-779, 2009a) which was extended later for finite strain elasticity and viscoelasticity including shrinkage in Hossain et al. (Comput Mech 44(5):621-630, 2009b) and in Hossain et al. (Comput Mech 46(3):363-375, 2010), respectively. The previously proposed constitutive models for curing processes are based on the temporal evolution of the material parameters, namely the shear modulus and the relaxation time (in the case of viscoelasticity). In the current paper, a thermodynamically consistent small strain constitutive model is formulated that is directly based on the degree of cure, a key parameter in the curing (reaction) kinetics. The new formulation is also in line with the earlier proposed hypoelastic approach. The curing process of polymers is a complex phenomenon involving a series of chemical reactions which transform a viscoelastic fluid into a viscoelastic solid during which the temperature, the chemistry and the mechanics are coupled. Part I of this work will deal with an isothermal viscoelastic formulation including shrinkage effects whereas the following Part II will give emphasis on the thermomechanical coupled approach. Some representative numerical examples conclude the paper and show the capability of the newly proposed constitutive formulation to capture major phenomena observed during the curing processes of polymers.
Jang, Seung Woo; Kotani, Takao; Kino, Hiori; Kuroki, Kazuhiko; Han, Myung Joon
2015-07-24
Despite decades of progress, an understanding of unconventional superconductivity still remains elusive. An important open question concerns the material dependence of the superconducting properties. Using the quasiparticle self-consistent GW (QSGW) method, we re-examine the electronic structure of copper oxide high-Tc materials. We show that QSGW captures several important features that are distinct from the conventional LDA results. The energy-level splitting between d(x²-y²) and d(3z²-r²) is significantly enlarged and the van Hove singularity point is lowered. The calculated results compare better than LDA with recent experimental results from resonant inelastic X-ray scattering and angle-resolved photoemission experiments. This agreement with the experiments supports the previously suggested two-band theory for the material dependence of the superconducting transition temperature, Tc.
Morgan, D. J.; Chamberlain, K. J.; Wilson, C. J. N.
2014-12-01
Diffusion modelling of elemental gradients across compositional zones within crystals is frequently used to investigate timescales of various magmatic processes. In most cases, however, only a single crystal phase is used for this modelling. The ~0.76 Ma Bishop Tuff (Long Valley, eastern California) in later parts of its eruptive sequence has zoned orthopyroxene, quartz and sanidine. It thus provides an unusual opportunity to compare the modelled timescales from each phase, and assess the limitations of single-phase diffusion modelling in lower-temperature, rhyolitic volcanic systems. The presence of a late-stage compositionally distinct melt (the 'bright-rim' melt) mixing into the lower parts of the Bishop magma chamber has been noted by many authors [e.g. Wark et al. 2007, Geology 35, 235; Roberge et al. 2013, CMP 165, 237; Chamberlain et al. 2014, J Petrol 55, 395] in later-erupted material discharged from vents along the northern ring fracture of the caldera. Here we present the results of 1D diffusion modelling of Ba and Sr in sanidine, Ti in quartz and Fe-Mg interdiffusion in orthopyroxene in samples from later-erupted ignimbrite packages in the tuff. Timescales from diffusion modelling of Fe-Mg interdiffusion in orthopyroxene are Bishop Tuff eruption. We highlight the importance of having a good understanding of the assumptions made and uncertainties in diffusion coefficients when undertaking such modelling, especially in examples where only one phase is available for diffusion modelling.
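The 1D diffusion modelling described above can be illustrated with a toy compositional step profile: the analytic error-function solution gives the profile shape for a given diffusion time, and the timescale is recovered by finding the time that best fits a "measured" profile. The diffusivity, units and grid below are illustrative assumptions, not the paper's calibrated diffusion coefficients.

```python
import numpy as np
from math import erf

def step_profile(x, t, D, c_left=1.0, c_right=0.0):
    """Analytic solution for diffusion across an initial concentration
    step at x = 0: C(x, t) = mid + amp * erf(x / (2*sqrt(D*t)))."""
    w = 2.0 * np.sqrt(D * t)
    mid = 0.5 * (c_left + c_right)
    amp = 0.5 * (c_right - c_left)
    return mid + amp * np.array([erf(xi / w) for xi in x])

def fit_timescale(x, c, D):
    """Grid-search the diffusion time whose profile best matches the data."""
    times = np.logspace(0, 4, 400)          # candidate times, e.g. in years
    resid = [np.sum((step_profile(x, t, D) - c) ** 2) for t in times]
    return times[int(np.argmin(resid))]

# illustrative numbers only: D in um^2/yr, x in um across a crystal rim
D = 0.01
x = np.linspace(-20.0, 20.0, 81)
observed = step_profile(x, 500.0, D)        # synthetic "measured" profile
t_est = fit_timescale(x, observed, D)
```

The fitted time trades off directly against the assumed D, which is why the abstract stresses uncertainties in diffusion coefficients: a factor-of-two error in D maps into a factor-of-two error in the timescale.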
Energy Technology Data Exchange (ETDEWEB)
Nam, Boda; Hwang, Jung Hwa [Dept. of Radiology, Soonchunhyang University Hospital, Seoul (Korea, Republic of); Lee, Young Mok [Bangbae GF Allergy Clinic, Seoul (Korea, Republic of); Park, Jai Soung [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of); Jou, Sung Shick [Dept. of Radiology, Soonchunhyang University Cheonan Hospital, Cheonan (Korea, Republic of); Kim, Young Bae [Dept. of Preventive Medicine, Soonchunhyang University College of Medicine, Cheonan (Korea, Republic of)
2015-09-15
We compared the clinical and quantitative CT measurement parameters between chronic obstructive pulmonary disease (COPD) patients with and without consistent clinical symptoms and pulmonary function results. This study included 60 patients having a clinical diagnosis of COPD, who underwent chest CT scan and pulmonary function tests. These 60 patients were classified into typical and atypical groups, which were further sub-classified into 4 groups, based on their dyspnea score and the result of pulmonary function tests [typical 1: mild dyspnea and pulmonary function impairment (PFI); typical 2: severe dyspnea and PFI; atypical 1: mild dyspnea and severe PFI; atypical 2: severe dyspnea and mild PFI]. Quantitative measurements of the CT data for emphysema, bronchial wall thickness and air-trapping were performed using software analysis. Comparative statistical analysis was performed between the groups. The CT emphysema index correlated well with the results of the pulmonary functional test (typical 1 vs. atypical 1, p = 0.032), and the bronchial wall area ratio correlated with the dyspnea score (typical 1 vs. atypical 2, p = 0.033). CT air-trapping index also correlated with the results of the pulmonary function test (typical 1 vs. atypical 1, p = 0.012) and dyspnea score (typical 1 vs. atypical 2, p = 0.000), and was found to be the most significant parameter between the typical and atypical groups. Quantitative CT measurements for emphysema and airways correlated well with the dyspnea score and pulmonary function results in patients with COPD. Air-trapping was the most significant parameter between the typical vs. atypical group of COPD patients.
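A quantitative CT emphysema index of the kind used above is commonly computed as the fraction of lung voxels below a Hounsfield-unit threshold (the "low attenuation area" fraction, often at -950 HU). The sketch below uses synthetic HU values and a conventional threshold as assumptions; it is not the paper's specific software pipeline.

```python
import numpy as np

def emphysema_index(hu, threshold=-950):
    """Fraction of lung voxels below a HU threshold (LAA%).
    -950 HU is a commonly used inspiratory cutoff for emphysema;
    this is an illustrative assumption, not this study's exact method."""
    hu = np.asarray(hu)
    return float((hu < threshold).mean())

# synthetic lung: mostly normal parenchyma near -850 HU,
# plus a 15% emphysematous component near -975 HU
rng = np.random.default_rng(7)
normal = rng.normal(-850.0, 40.0, 8500)
emph = rng.normal(-975.0, 15.0, 1500)
lung = np.concatenate([normal, emph])
ei = emphysema_index(lung)   # roughly the emphysematous fraction
```

An analogous air-trapping index can be computed on expiratory scans with a higher threshold (e.g. -856 HU), which is the kind of parameter the study found most discriminating.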
Energy Technology Data Exchange (ETDEWEB)
Gepraegs, R.; Schmitz, G.; Peters, D. [Institut fuer Atmosphaerenphysik, Kuehlungsborn (Germany)
1997-12-31
A 2D version of the ECHAM T21 climate model has been developed. The new model includes an efficient spectral transport scheme with implicit diffusion. Furthermore, the photodissociation and chemistry of the NCAR 2D model have been incorporated. A self-consistent parametrization scheme is used for the eddy heat and momentum fluxes in the troposphere. It is based on the heat flux parametrization of Branscome and a mixing-length formulation for quasi-geostrophic vorticity. Above 150 hPa the mixing coefficient K_yy is prescribed. Some of the model results are discussed, especially concerning the impact of aircraft NO_x emissions on the model chemistry. (author) 6 refs.
Fedele, Renato; De Nicola, Sergio; Shukla, P K; Jovanovic, Dusan
2011-01-01
The thermal wave model is used to study the strong self-consistent plasma wake field interaction (transverse effects) between a strongly magnetized plasma and a relativistic electron/positron beam travelling along the external magnetic field, in the long-beam limit, in terms of a nonlocal NLS equation and the virial equation. In the linear regime, vortices are predicted in terms of Laguerre-Gauss beams characterized by non-zero orbital angular momentum (vortex charge). In the nonlinear regime, criteria for collapse and stable oscillations are established and the thin plasma lens mechanism is investigated for beam sizes much greater than the plasma wavelength. Beam squeezing and the self-pinching equilibrium are predicted for beam sizes much smaller than the plasma wavelength, taking the aberrationless solution of the nonlocal nonlinear Schrödinger equation.
Energy Technology Data Exchange (ETDEWEB)
Johnson, B. C.; Melosh, H. J. [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Lisse, C. M. [JHU-APL, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Chen, C. H. [STScI, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Wyatt, M. C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Thebault, P. [LESIA, Observatoire de Paris, F-92195 Meudon Principal Cedex (France); Henning, W. G. [NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Gaidos, E. [Department of Geology and Geophysics, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Elkins-Tanton, L. T. [Department of Terrestrial Magnetism, Carnegie Institution for Science, Washington, DC 20015 (United States); Bridges, J. C. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); Morlok, A., E-mail: johns477@purdue.edu [Department of Physical Sciences, Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom)
2012-12-10
Spectral modeling of the large infrared excess in the Spitzer IRS spectra of HD 172555 suggests that there is more than 10^19 kg of submicron dust in the system. Using physical arguments and constraints from observations, we rule out the possibility of the infrared excess being created by a magma ocean planet or a circumplanetary disk or torus. We show that the infrared excess is consistent with a circumstellar debris disk or torus, located at ~6 AU, that was created by a planetary-scale hypervelocity impact. We find that radiation pressure should remove submicron dust from the debris disk in less than one year. However, the system's mid-infrared photometric flux, dominated by submicron grains, has been stable within 4% over the last 27 years, from the Infrared Astronomical Satellite (1983) to WISE (2010). Our new spectral modeling work and calculations of the radiation pressure on fine dust in HD 172555 provide a self-consistent explanation for this apparent contradiction. We also explore the unconfirmed claim that ~10^47 molecules of SiO vapor are needed to explain an emission feature at ~8 μm in the Spitzer IRS spectrum of HD 172555. We find that unless there are ~10^48 atoms or 0.05 M⊕ of atomic Si and O vapor in the system, SiO vapor should be destroyed by photo-dissociation in less than 0.2 years. We argue that a second plausible explanation for the ~8 μm feature can be emission from solid SiO, which naturally occurs in submicron silicate "smokes" created by quickly condensing vaporized silicate.
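The radiation-pressure argument above rests on the standard ratio of radiation pressure to gravity for a geometrically absorbing grain, beta = 3L / (16*pi*G*M*c*rho*s); grains with beta > 0.5 released from circular orbits are unbound. The sketch below evaluates this formula; the stellar luminosity, mass and grain density are round illustrative values for an early-type star, not the paper's adopted parameters.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
L_sun = 3.828e26   # solar luminosity, W
M_sun = 1.989e30   # solar mass, kg

def beta(s, L=9.5 * L_sun, M=1.7 * M_sun, rho=3000.0):
    """Radiation pressure / gravity for a grain of radius s [m].
    L, M, rho are illustrative assumptions for an A-type star and
    silicate dust, not values taken from the paper."""
    return 3.0 * L / (16.0 * math.pi * G * M * c * rho * s)

def blowout_radius(L=9.5 * L_sun, M=1.7 * M_sun, rho=3000.0):
    """Grain radius at which beta = 0.5 (the blowout limit)."""
    return 3.0 * L / (16.0 * math.pi * G * M * c * rho * 0.5)

s_blow = blowout_radius()   # metres; submicron grains fall below this
```

With these numbers the blowout radius comes out at a few microns, so submicron grains have beta well above 0.5, consistent with the abstract's point that radiation pressure should clear them within a year.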
Directory of Open Access Journals (Sweden)
Seiya Nishiyama
2009-01-01
Full Text Available The maximally-decoupled method has been considered as a theory applying a basic idea of an integrability condition to certain multiply parametrized symmetries. The method is regarded as a mathematical tool to describe a symmetry of a collective submanifold in which a canonicity condition makes the collective variables an orthogonal coordinate system. For this aim we adopt a concept of curvature unfamiliar in the conventional time-dependent (TD) self-consistent field (SCF) theory. Our basic idea lies in the introduction of a sort of Lagrangian manner, familiar from fluid dynamics, to describe a collective coordinate system. This manner enables us to take a one-form which is linearly composed of a TD SCF Hamiltonian and infinitesimal generators induced by collective-variable differentials of a canonical transformation on a group. The integrability condition of the system reads: the curvature C = 0. Our method is constructed so as to manifest the structure of the group under consideration. To go beyond the maximally-decoupled method, we have aimed to construct an SCF theory, i.e., a υ (external-parameter)-dependent Hartree-Fock (HF) theory. Toward such an ultimate goal, the υ-HF theory has been reconstructed on an affine Kac-Moody algebra along the lines of soliton theory, using infinite-dimensional fermions. An infinite-dimensional fermion operator is introduced through a Laurent expansion of finite-dimensional fermion operators with respect to the degrees of freedom of the fermions related to a υ-dependent potential with a Υ-periodicity. A bilinear equation for the υ-HF theory has been transcribed onto the corresponding τ-function using the regular representation for the group and the Schur polynomials. The υ-HF SCF theory on an infinite-dimensional Fock space F∞ leads to a dynamics on an infinite-dimensional Grassmannian Gr∞ and may describe more precisely such a dynamics on the group manifold. A finite-dimensional Grassmannian is identified with a Gr
Modeling clicks beyond the first result page
Chuklin, A.; Serdyukov, P.; de Rijke, M.
2013-01-01
Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more
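The pagination behaviour described above can be sketched with a simple cascade-style click simulation: the user scans results top-down, clicks according to per-result attractiveness, and crosses each page boundary (clicks "next") only with some probability. All parameters and the attractiveness function are illustrative assumptions, not the paper's fitted click model.

```python
import random

def simulate_session(attract, page_size=10, p_next=0.3,
                     rng=random.Random(42)):
    """Simplified cascade-style session: scan results in rank order,
    click with probability attract[rank], and continue past a page
    boundary with probability p_next. Returns the clicked ranks."""
    clicks = []
    for rank, a in enumerate(attract):
        if rank > 0 and rank % page_size == 0:
            if rng.random() > p_next:      # user abandons at the boundary
                break
        if rng.random() < a:
            clicks.append(rank)
    return clicks

# illustrative: attractiveness decays with rank over two result pages
attract = [0.5 / (1 + r) for r in range(20)]
sessions = [simulate_session(attract) for _ in range(2000)]
# fraction of sessions with at least one click beyond the first page
page2_rate = sum(any(r >= 10 for r in s) for s in sessions) / len(sessions)
```

Even this toy model shows why second-page clicks are sparse: they require both pagination and a click on a low-attractiveness result, which is exactly the data-sparsity problem click models beyond the first page must handle.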
Dreizler, S.; Wolff, B.
1999-08-01
We present a multi-wavelength spectral analysis of the DA white dwarf G 191-B2B. The employed atmospheric models account for gravitational settling and radiative levitation, which are, for the first time, calculated self-consistently with the atmospheric structure. The resulting spectra can reproduce the complete EUVE spectrum and the ultraviolet lines of iron. Some restrictions regarding the UV lines of other elements (C, N, O, Ni), however, still remain. In contrast to homogeneous models, it is not necessary to introduce additional photospheric or interstellar absorbers to account for the high opacity at lambda
Wang, Fei; Xiong, Zi-Yue; Li, Ping; Yang, Hua; Gao, Wen; Li, Hui-Jun
2017-01-05
Chromatographic fingerprint has been extensively used as a comprehensive approach for quality evaluation of herbal medicines (HMs). However, similar chemical profiles do not always mean similar efficacies. The present work, taking Sophora flower-bud and Sophora flower as a typical case, attempts to develop a rational strategy based on fingerprint-activity relationship modeling to realize quality evaluation from chemical consistency to effective consistency. A total of 57 batches of Sophora samples were collected and their antioxidant and hyaluronidase inhibitory activities were measured. Chemical fingerprints were established by high performance liquid chromatography (HPLC) coupled with photodiode array (PDA) detector and quadrupole time-of-flight mass spectrometry (Q-TOF MS), and similarity analyses were calculated based on eight common characteristic peaks. Subsequently, three principal bioactive markers were discovered by correlating biological effects with chemical fingerprints via partial least squares regression (PLSR) and back propagation-artificial neural network modeling (BP-ANN). The selected markers were quantified by the 'single standard to determine multi-components' method, and then the quantitative data as well as their bioactive properties were subjected to principal component analysis to generate two clear-cut groups. This study not only demonstrates the necessity of effective consistency besides chemical consistency in the quality evaluation of HMs, but also provides an applicable strategy to screen out efficacy-associated markers by fingerprint-activity relationship modeling.
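The fingerprint-activity modelling step above regresses measured bioactivity on the areas of the common fingerprint peaks and ranks peaks by the magnitude of their standardized coefficients. The sketch below uses ordinary least squares on synthetic data as a simple stand-in for the paper's PLSR and BP-ANN models; the batch count, peak count, and "active" peaks are fabricated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
# 57 batches x 8 common peaks: synthetic fingerprint peak areas
X = rng.lognormal(mean=2.0, sigma=0.3, size=(57, 8))
true_w = np.array([0.9, 0.0, 0.0, 0.7, 0.0, 0.0, 0.5, 0.0])  # 3 active peaks
y = X @ true_w + rng.normal(0.0, 0.5, 57)   # synthetic antioxidant activity

# centre and scale, then regress activity on peak areas
# (OLS here as a simple stand-in for PLSR/BP-ANN)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# candidate bioactive markers: peaks with the largest |coefficient|
markers = np.argsort(-np.abs(coef))[:3]
```

PLSR would be preferred over OLS when peaks are strongly collinear, which is common in chromatographic fingerprints; the ranking logic, however, is the same.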
Blade element momentum modeling of inflow with shear in comparison with advanced model results
DEFF Research Database (Denmark)
Aagaard Madsen, Helge; Riziotis, V.; Zahle, Frederik
2012-01-01
There seems to be a significant uncertainty in aerodynamic and aeroelastic simulations on megawatt turbines operating in inflow with considerable shear, in particular with the engineering blade element momentum (BEM) model commonly implemented in the aeroelastic design codes used by industry. ... Computations with advanced vortex and computational fluid dynamics models are used to provide improved insight into the complex flow phenomena and rotor aerodynamics caused by the sheared inflow. One consistent result from the advanced models is the variation of induced velocity as a function of azimuth when ... a higher power than in uniform flow. On the basis of the consistent azimuthal induction variations seen in the advanced model results, three different BEM implementation methods are discussed and tested in the same aeroelastic code. A full local BEM implementation on an elemental stream tube in both ...
Banerjee, S.; Hassenklover, E.; Kleijn, J.M.; Cohen Stuart, M.A.; Leermakers, F.A.M.
2013-01-01
This paper presents experimental and modeling results on water–CO2 interfacial tension (IFT) together with wettability studies of water on both hydrophilic and hydrophobic surfaces immersed in CO2. CO2–water IFT measurements showed that the IFT decreased with increasing pressure
Shiota, D; Chen, P F; Yamamoto, T T; Sakajiri, T; Shibata, K; Shiota, Daikou; Isobe, Hiroaki; Yamamoto, Tetsuya T.; Sakajiri, Takuma; Shibata, Kazunari
2005-01-01
We performed magnetohydrodynamic simulations of coronal mass ejections (CMEs) and associated giant arcade formations, and the results suggest new interpretations of observations of CMEs. We ran two cases of the simulation: with and without heat conduction. Comparing the results of the two cases, we found that the reconnection rate in the conductive case is slightly higher than in the adiabatic case, and that the temperature of the loop top is consistent with the theoretical value predicted by the Yokoyama-Shibata scaling law. The dynamical properties such as velocity and magnetic fields are similar in the two cases, whereas thermal properties such as temperature and density are very different. In both cases, slow shocks associated with magnetic reconnection propagate from the reconnection region along the magnetic field lines around the flux rope, and the shock fronts form spiral patterns. Just outside the slow shocks, the plasma density decreases a great deal. The soft X-ray images synthesized from t...
Engineering model development and test results
Wellman, John A.
1993-08-01
The correctability of the primary mirror spherical error in the Wide Field/Planetary Camera (WF/PC) is sensitive to the precise alignment of the incoming aberrated beam onto the corrective elements. Articulating fold mirrors that provide +/- 1 milliradian of tilt in 2 axes are required to allow for alignment corrections in orbit as part of the fix for the Hubble space telescope. An engineering study was made by Itek Optical Systems and the Jet Propulsion Laboratory (JPL) to investigate replacement of fixed fold mirrors within the existing WF/PC optical bench with articulating mirrors. The study contract developed the base line requirements, established the suitability of lead magnesium niobate (PMN) actuators and evaluated several tilt mechanism concepts. Two engineering model articulating mirrors were produced to demonstrate the function of the tilt mechanism to provide +/- 1 milliradian of tilt, packaging within the space constraints and manufacturing techniques including the machining of the invar tilt mechanism and lightweight glass mirrors. The success of the engineering models led to the follow on design and fabrication of 3 flight mirrors that have been incorporated into the WF/PC to be placed into the Hubble Space Telescope as part of the servicing mission scheduled for late 1993.
Zhao, Jianshi; Cai, Ximing; Wang, Zhongjing
2013-07-15
Water allocation can be undertaken through administered systems (AS), market-based systems (MS), or a combination of the two. The debate on the performance of the two systems has lasted for decades but still calls for attention in both research and practice. This paper compares water users' behavior under AS and MS through a consistent agent-based modeling framework for water allocation analysis that incorporates variables particular to both MS (e.g., water trade and trading prices) and AS (water use violations and penalties/subsidies). Analogous to the economic theory of water markets under MS, the theory of rational violation justifies the exchange of entitled water under AS through the use of cross-subsidies. Under water stress conditions, a unique water allocation equilibrium can be achieved by following a simple bargaining rule that does not depend upon initial market prices under MS, or initial economic incentives under AS. The modeling analysis shows that the behavior of water users (agents) depends on transaction, or administrative, costs, as well as their autonomy. Reducing transaction costs under MS or administrative costs under AS will mitigate the effect that equity constraints (originating with primary water allocation) have on the system's total net economic benefits. Moreover, hydrologic uncertainty is shown to increase market prices under MS and penalties/subsidies under AS and, in most cases, also increases transaction, or administrative, costs.
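The bargaining dynamics described above can be illustrated with a deliberately minimal two-agent sketch (not the authors' agent-based framework; the quadratic benefit functions, parameter values, and stopping rule are hypothetical): users exchange water until their marginal benefits differ by no more than the per-unit transaction (or administrative) cost, so a larger cost leaves a larger unrealized-gains wedge.

```python
# Toy illustration (not the authors' model): two water users trade until
# their marginal benefits differ by no more than the per-unit transaction
# cost. Benefit functions b_i(w) = a_i*w - 0.5*c_i*w**2 and all parameter
# values are hypothetical.

def marginal_benefit(a, c, w):
    return a - c * w

def trade_to_equilibrium(a1, c1, a2, c2, w1, w2, tcost, step=1e-4):
    """Shift water toward the higher-marginal-benefit user until the
    gap falls within the transaction cost."""
    while True:
        mb1 = marginal_benefit(a1, c1, w1)
        mb2 = marginal_benefit(a2, c2, w2)
        if abs(mb1 - mb2) <= tcost:
            return w1, w2
        if mb1 > mb2:
            w1, w2 = w1 + step, w2 - step
        else:
            w1, w2 = w1 - step, w2 + step

w1, w2 = trade_to_equilibrium(a1=10, c1=1.0, a2=6, c2=1.0,
                              w1=4.0, w2=4.0, tcost=0.1)
# With zero transaction cost the equilibrium would equalize marginal
# benefits (here at w1 = 6, w2 = 2); a positive cost stops trading early.
```

The same loop read with "penalty/subsidy" in place of "transaction cost" gives the AS analogue the abstract describes.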
Directory of Open Access Journals (Sweden)
Ying Jiang
2017-02-01
This paper presents a theoretical formalism for describing systems of semiflexible polymers, which can have density variations due to finite compressibility and exhibit an isotropic-nematic transition. The molecular architecture of the semiflexible polymers is described by a continuum wormlike-chain model. The non-bonded interactions are described through a functional of two collective variables, the local density and the local segmental orientation tensor. In particular, the functional depends quadratically on local density variations and includes a Maier-Saupe-type term to deal with the orientational ordering. The specified density dependence stems from a free energy expansion, where the free energy of an isotropic and homogeneous homopolymer melt at some fixed density serves as a reference state. Using this framework, a self-consistent field theory is developed, which produces a Helmholtz free energy that can be used for the calculation of the thermodynamics of the system. The thermodynamic properties are analysed as functions of the compressibility of the model, for values of the compressibility realizable in mesoscopic simulations with soft interactions and in actual polymeric materials.
Energy Technology Data Exchange (ETDEWEB)
Waldhoff, Stephanie T.; Martinich, Jeremy; Sarofim, Marcus; DeAngelo, B. J.; McFarland, Jim; Jantarasami, Lesley; Shouse, Kate C.; Crimmins, Allison; Ohrel, Sara; Li, Jia
2015-07-01
The Climate Change Impacts and Risk Analysis (CIRA) modeling exercise is a unique contribution to the scientific literature on climate change impacts, economic damages, and risk analysis that brings together multiple, national-scale models of impacts and damages in an integrated and consistent fashion to estimate climate change impacts, damages, and the benefits of greenhouse gas (GHG) mitigation actions in the United States. The CIRA project uses three consistent socioeconomic, emissions, and climate scenarios across all models to estimate the benefits of GHG mitigation policies: a Business-As-Usual (BAU) scenario and two policy scenarios with radiative forcing (RF) stabilization targets of 4.5 W/m2 and 3.7 W/m2 in 2100. CIRA was also designed to specifically examine the sensitivity of results to uncertainties around climate sensitivity and differences in model structure. The goals of the CIRA project are to 1) build a multi-model framework to produce estimates of multiple risks and impacts in the U.S., 2) determine to what degree risks and damages across sectors may be lowered from the BAU to the policy scenarios, 3) evaluate key sources of uncertainty along the causal chain, and 4) provide information for multiple audiences and clearly communicate the risks and damages of climate change and the potential benefits of mitigation. This paper describes the motivations, goals, and design of the CIRA modeling exercise and introduces the subsequent papers in this special issue.
Microplasticity of MMC. Experimental results and modelling
Energy Technology Data Exchange (ETDEWEB)
Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))
1993-11-01
The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion, which has a gradient in the density of thermally induced dislocations. (orig.)
Bast, Radovan; Thorvaldsen, Andreas J.; Ringholm, Magnus; Ruud, Kenneth
2009-02-01
We present the first analytic calculations of the second hyperpolarizability in a relativistic framework. The calculations are made possible by our recent developments of a response theory built on a quasienergy formalism, in which the basis set may be both time and perturbation dependent. The approach is formulated for an arbitrary self-consistent field state in the atomic orbital basis. The implementation consists of a stand-alone code that only requires the unperturbed density in the atomic orbital basis as input, as well as a linear response solver by which we can determine the perturbed density matrices to different orders, at each new order solving equations that have the same structure as the linear response equation. Using these features of our formalism, we extend in this paper our approach to the relativistic domain, utilizing both two- and four-component relativistic wave functions. We apply the formalism to the calculation of the electronic and pure vibrational contributions to the second hyperpolarizability tensor for the hydrogen halides. Our results demonstrate that relativistic effects can be substantial for frequency-dependent second hyperpolarizabilities. Due to changes in the pole structure when going to the relativistic domain, the relativistic corrections to the hyperpolarizabilities are not transferable between different optical processes, except for very low frequencies.
Energy Technology Data Exchange (ETDEWEB)
Bast, Radovan; Thorvaldsen, Andreas J.; Ringholm, Magnus [Centre for Theoretical and Computational Chemistry (CTCC), Department of Chemistry, University of Tromso, N-9037 Tromso (Norway); Ruud, Kenneth [Centre for Theoretical and Computational Chemistry (CTCC), Department of Chemistry, University of Tromso, N-9037 Tromso (Norway)], E-mail: kenneth.ruud@chem.uit.no
2009-02-17
Tsutsumi, D.
2015-12-01
To mitigate sediment-related disasters triggered by rainfall events, it is necessary to predict landslide occurrence and the subsequent debris flow behavior. Many landslide analysis methods have been developed and proposed by numerous researchers over several decades. Among them, distributed slope stability models that simulate the temporal and spatial instability of local slopes are essential for early warning and evacuation in areas at the lower parts of hill-slopes. In the present study, a distributed, physically based landslide analysis method is developed, consisting of a contour line-based method that subdivides a watershed into stream tubes, and a slope stability analysis in which a critical slip surface is searched for to identify the location and shape of the most unstable slip surface in each stream tube. A target watershed is divided into stream tubes using GIS techniques, groundwater flow in each stream tube during a rainfall event is analyzed by a kinematic wave model, and the slope stability of each stream tube is calculated by the simplified Janbu method, searching for a critical slip surface using a dynamic programming method. Compared to previous methods that assume an infinite slope for the stability analysis, the proposed method has the advantage of simulating landslides more accurately in space and time, and of estimating the amount of collapsed slope mass, which can be delivered to a debris flow simulation model as input data. We applied this method to a small watershed on Izu Oshima, Tokyo, Japan, where shallow and wide landslides triggered by heavy rainfall, and the subsequent debris flows, struck Oshima Town in 2013. The figure shows the temporal and spatial change of the simulated groundwater level and landslide distribution. The simulated landslides correspond to the uppermost part of the actual landslide area, and the timing of their occurrence agrees well with the actual landslides.
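For contrast with the critical-slip-surface search described above, the conventional infinite-slope stability model that the proposed method improves upon can be sketched as follows (a hedged baseline only; all soil parameters below are illustrative, not from the study):

```python
# Baseline infinite-slope factor of safety (the conventional model that
# distributed critical-slip-surface methods improve upon). All numbers
# are illustrative, not from the paper.
import math

def infinite_slope_fos(c, phi_deg, gamma, z, beta_deg, h_w, gamma_w=9.81):
    """Factor of safety of an infinite slope with a water table h_w [m]
    above the slip plane (Mohr-Coulomb strength: cohesion c [kPa],
    friction angle phi [deg]; unit weights in kN/m^3, depths in m)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    sigma = gamma * z * math.cos(beta) ** 2            # total normal stress
    tau = gamma * z * math.sin(beta) * math.cos(beta)  # driving shear stress
    u = gamma_w * h_w * math.cos(beta) ** 2            # pore water pressure
    return (c + (sigma - u) * math.tan(phi)) / tau

# Rising groundwater lowers the factor of safety toward failure (FoS < 1).
dry = infinite_slope_fos(c=5.0, phi_deg=30, gamma=18.0, z=2.0, beta_deg=35, h_w=0.0)
wet = infinite_slope_fos(c=5.0, phi_deg=30, gamma=18.0, z=2.0, beta_deg=35, h_w=1.5)
```

Coupling such a stability check to a kinematic-wave groundwater model, as the abstract describes, amounts to feeding a time-varying h_w into the calculation for each stream tube.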
Steiner, James F.; García, Javier A.; Eikmann, Wiebke; McClintock, Jeffrey E.; Brenneman, Laura W.; Dauser, Thomas; Fabian, Andrew C.
2017-02-01
Continuum and reflection spectral models have each been widely employed in measuring the spins of accreting black holes. However, the two approaches have not been implemented together in a photon-conserving, self-consistent framework. We develop such a framework using the black hole X-ray binary GX 339-4 as a touchstone source, and we demonstrate three important ramifications. (1) Compton scattering of reflection emission in the corona is routinely ignored, but is an essential consideration given that reflection is linked to the regimes with strongest Comptonization. Properly accounting for this causes the inferred reflection fraction to increase substantially, especially for the hard state. Another important impact of the Comptonization of reflection emission by the corona is the downscattered tail. Downscattering has the potential to mimic the relativistically broadened red wing of the Fe line associated with a spinning black hole. (2) Recent evidence for a reflection component with a harder spectral index than the power-law continuum is naturally explained as Compton-scattered reflection emission. (3) Photon conservation provides an important constraint on the hard state's accretion rate. For bright hard states, we show that disk truncation to large scales $R \gg R_{\rm ISCO}$ is unlikely, as this would require accretion rates far in excess of the observed $\dot{M}$ of the brightest soft states. Our principal conclusion is that when modeling relativistically broadened reflection, spectral models should allow for coronal Compton scattering of the reflection features and, when possible, take advantage of the additional constraining power from linking to the thermal disk component.
Wurz, P.; Whitby, J. A.; Rohner, U.; Martín-Fernández, J. A.; Lammer, H.; Kolb, C.
2010-10-01
A Monte-Carlo model of exospheres (Wurz and Lammer, 2003) was extended by treating the ion-induced sputtering process, photon-stimulated desorption, and micro-meteorite impact vaporisation quantitatively in a self-consistent way, starting with the actual release of particles from the mineral surface of Mercury. Based on available literature data we established a global model for the surface mineralogy of Mercury and from that derived the average elemental composition of the surface. This model serves as a tool to estimate densities of species in the exosphere depending on the release mechanism and the associated physical parameters quantitatively describing the particle release from the surface. Our calculation shows that the total contribution to the exospheric density at the Hermean surface by solar wind sputtering is about 4×10^7 m^-3, which is much less than the experimental upper limit of the exospheric density of 10^12 m^-3. The total calculated exospheric density from micro-meteorite impact vaporisation is about 1.6×10^8 m^-3, also much less than the observed value. We conclude that solar wind sputtering and micro-meteorite impact vaporisation contribute only a small fraction of Mercury's exosphere, at least close to the surface. Because of the considerably larger scale height of atoms released via sputtering into the exosphere, sputtered atoms start to dominate the exosphere at altitudes exceeding around 1000 km, with the exception of some light and abundant species released thermally, e.g. H2 and He. Because of Mercury's strong gravitational field not all particles released by sputtering and micro-meteorite impact escape. Over extended time scales this will lead to an alteration of the surface composition.
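The altitude crossover noted above follows from the barometric scale height H = kT/(mg): a release process with a higher characteristic energy produces a population whose density falls off more slowly with altitude. A rough sketch under assumed, illustrative values (sodium atoms, a dayside surface temperature, an effective sputtering temperature of a few eV, and round surface densities; none of these numbers are taken from the paper):

```python
# Why sputtered atoms dominate at altitude: a release process with higher
# characteristic energy has a larger scale height H = kT/(m*g), so its
# exponential density profile falls off more slowly. All numbers below are
# illustrative assumptions, not values from the paper.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66054e-27    # atomic mass unit, kg
G_MERCURY = 3.7      # Mercury surface gravity, m/s^2

def scale_height(T, m_amu, g=G_MERCURY):
    """Barometric scale height in metres."""
    return K_B * T / (m_amu * AMU * g)

def density(n0, h, H):
    """Barometric density profile n(h) = n0 * exp(-h/H)."""
    return n0 * math.exp(-h / H)

H_thermal = scale_height(T=575.0, m_amu=23.0)   # thermally released Na
H_sputter = scale_height(T=2.0e4, m_amu=23.0)   # ~ few-eV sputtered Na

# Even if sputtered Na is far less dense at the surface, its larger scale
# height lets it overtake the thermal population at high altitude.
h = 1.0e6  # 1000 km
n_th = density(1e10, h, H_thermal)
n_sp = density(4e7, h, H_sputter)
```

With these assumed inputs the sputtered population is orders of magnitude denser than the thermal one at 1000 km, despite being much rarer at the surface, which is the qualitative behavior the abstract describes.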
Liuzzi, G.; Masiello, G.; Serio, C.; Venafra, S.; Camy-Peyret, C.
2016-10-01
Spectra observed by the Infrared Atmospheric Sounding Interferometer (IASI) have been used to assess both retrievals and the spectral quality and consistency of current forward models and spectroscopic databases for atmospheric gas line and continuum absorption. The analysis has been performed with thousands of observed spectra over the sea surface in the Pacific Ocean close to the Mauna Loa (Hawaii) validation station. A simultaneous retrieval of surface temperature, atmospheric temperature, H2O, HDO, and O3 profiles, and of the gas average column abundances of CO2, CO, CH4, SO2, N2O, HNO3, NH3, OCS and CF4 has been performed and compared to in situ observations. The retrieval system considers the full IASI spectrum (all 8461 spectral channels in the range 645-2760 cm^-1). We have found that the average column amount of atmospheric greenhouse gases can be retrieved with a precision better than 1% in most cases. The analysis of spectral residuals shows that, after inversion, they are generally reduced to within the IASI radiometric noise. However, larger residuals still appear for many of the most abundant gases, namely H2O, CH4 and CO2. The calculated H2O ν2 spectral region is in general warmer (higher radiance) than the observations. The CO2 ν2 and N2O/CO2 ν3 spectral regions now show a consistent behavior for channels probing the troposphere. Updates in CH4 spectroscopy do not seem to improve the residuals. The effect of isotopic fractionation of HDO is evident in the 2500-2760 cm^-1 region and in the atmospheric window around 1200 cm^-1.
Wersal, C.; Ricci, P.; Loizu, J.
2017-04-01
A refined two-point model is derived from the drift-reduced Braginskii equations for the limited tokamak scrape-off layer (SOL) by balancing the parallel and perpendicular transport of plasma and heat and taking into account the plasma–neutral interaction. The model estimates the electron temperature drop along a field line, from a region far from the limiter to the limiter plates. Self-consistent first-principles turbulence simulations of the SOL plasma including its interaction with neutral atoms are performed with the GBS code and compared to the refined two-point model. The refined two-point model is shown to be in very good agreement with the turbulence simulation results.
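For orientation, the standard (unrefined) two-point model that the paper refines links the upstream and target electron temperatures through parallel Spitzer heat conduction, T_u^{7/2} = T_t^{7/2} + 7 q_par L / (2 kappa_0). A minimal sketch with illustrative input values (not taken from the GBS simulations):

```python
# Standard two-point model of the SOL: upstream and target electron
# temperatures linked by parallel Spitzer heat conduction,
#   T_u^{7/2} = T_t^{7/2} + 7*q_par*L/(2*kappa_0).
# The heat flux, connection length, and target temperature below are
# illustrative, not values from the paper.

KAPPA_0 = 2000.0  # approximate Spitzer conductivity prefactor, W m^-1 eV^-7/2

def upstream_temperature(T_target_eV, q_parallel, L_parallel):
    """Electron temperature [eV] far from the limiter, given the target
    temperature [eV], parallel heat flux [W/m^2], and connection length [m]."""
    return (T_target_eV ** 3.5
            + 7.0 * q_parallel * L_parallel / (2.0 * KAPPA_0)) ** (2.0 / 7.0)

T_u = upstream_temperature(T_target_eV=10.0, q_parallel=5.0e7, L_parallel=20.0)
dT = T_u - 10.0   # temperature drop along the field line
```

The refined model of the paper adds perpendicular transport and plasma-neutral interaction to this balance; the sketch only shows the conduction-limited core of the idea.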
Budkov, Yu. A.; Nogovitsyn, E. A.; Kiselev, M. G.
2013-04-01
A theoretical approach to calculating the thermodynamic and structural functions of solutions of polyelectrolytes, based on the Gaussian equivalent representation for the calculation of functional integrals, is proposed. A new analytical result of this work is the explicit account of counterions, along with an equation for the gyration radius of a polymer chain as a function of the concentrations of monomers and added low-molecular salt. An equation of state is obtained within the proposed model. Our theoretical results are used to describe the thermodynamic and structural properties of an aqueous solution of sodium polystyrene sulfonate with additions of NaCl.
Béghin, Christian
2015-02-01
This model is worked out in the framework of physical mechanisms proposed in previous studies to account for the generation and observation of an atypical Schumann resonance (SR) during the descent of the Huygens probe in Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an extremely low frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere, from the ionopause down to the Moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be allowed by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, a first cause of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with the measurements of the Huygens wave-field strength. Pending the expected future in situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.
Greco, Cristina; Jiang, Ying; Chen, Jeff Z. Y.; Kremer, Kurt; Daoulas, Kostas Ch.
2016-11-01
Self-Consistent Field (SCF) theory serves as an efficient tool for studying the mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory, which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of the strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate for the missing correlations. Increasing σ reduces correlations, and SCF reproduces the MC free energy well.
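The thermodynamic integration idea used above can be illustrated on a toy system where the answer is known analytically (this generic sketch is not the paper's external-field TI scheme): for U(x; l) = l*k*x^2/2, the ensemble average <dU/dl> = kT/(2l), so integrating it from l0 to 1 must recover dF = (kT/2) ln(1/l0).

```python
# Thermodynamic-integration sketch (generic, not the paper's LC scheme):
# F(1) - F(l0) = integral over l of <dU/dl>_l. For a 1D harmonic degree
# of freedom U(x; l) = l*k*x^2/2, the ensemble average <dU/dl> = kT/(2l)
# is known exactly, so the TI result can be checked analytically.
import math

def ti_free_energy(l0, kT=1.0, n=10000):
    """Trapezoidal TI of <dU/dl> = kT/(2l) over the coupling path [l0, 1]."""
    dl = (1.0 - l0) / n
    total = 0.0
    for i in range(n + 1):
        l = l0 + i * dl
        weight = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        total += weight * kT / (2.0 * l)
    return total * dl

dF = ti_free_energy(l0=0.1)
exact = 0.5 * math.log(1.0 / 0.1)   # (kT/2) * ln(1/l0)
```

In a real simulation the integrand <dU/dl> is a sampled ensemble average rather than a closed form, and the paper's scheme additionally threads the path through an external-field state to avoid crossing the first-order transition.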
Basurah, Hassan M; Dopita, Michael A; Alsulami, R; Amer, Morsi A; Alruhaili, A
2016-01-01
We present integral field unit (IFU) spectroscopy and self-consistent photoionisation modelling for a sample of four southern Galactic planetary nebulae (PNe) with supposed weak emission-line (WEL) central stars. The Wide Field Spectrograph (WiFeS) on the ANU 2.3 m telescope has been used to provide IFU spectroscopy for NGC 3211, NGC 5979, My 60, and M 4-2, covering the spectral range 3400-7000 Å. All objects are high-excitation non-Type I PNe, with strong He II emission, strong [Ne V] emission, and weak low-excitation lines. They all appear to be predominantly optically thin nebulae excited by central stars with $T_{\rm eff} > 10^5$ K. Three PNe of the sample have central stars which have been previously classified as weak emission-line stars (WELS), and the fourth also shows the characteristic recombination lines of a WELS. However, the spatially resolved spectroscopy shows that rather than arising in the central star, the C IV and N III recombination line emission is distributed in the nebula, and in s...
Kral, Quentin; Charnoz, Sébastien
2013-01-01
In most current debris disc models, the dynamical and collisional evolutions are studied separately, with N-body and statistical codes, respectively, because of stringent computational constraints. We present here LIDT-DD, the first code able to mix both approaches in a fully self-consistent way. Our aim is for it to be generic enough to be applied to any astrophysical case where we expect dynamics and collisions to be deeply interlocked with one another: planets in discs, violent massive breakups, destabilized planetesimal belts, exozodiacal discs, etc. The code takes its basic architecture from the LIDT3D algorithm developed by Charnoz et al. (2012) for protoplanetary discs, but has been strongly modified and updated in order to handle the very constraining specificities of debris disc physics: high-velocity fragmenting collisions, radiation-pressure-affected orbits, absence of gas, etc. In LIDT-DD, grains of a given size at a given location in a disc are grouped into "super-particles", whose orb...
Energy Technology Data Exchange (ETDEWEB)
Sahai, N.; Sverjensky, D.A. [Johns Hopkins Univ., Baltimore, MD (United States)
1997-07-01
Systematic analysis of surface titration data from the literature has been performed for ten oxides (anatase, hematite, goethite, rutile, amorphous silica, quartz, magnetite, δ-MnO2, corundum, and γ-alumina) in ten electrolytes (LiNO3, NaNO3, KNO3, CsNO3, LiCl, NaCl, KCl, CsCl, NaI, and NaClO4) over a wide range of ionic strengths (0.001 M-2.9 M) to establish adsorption equilibrium constants and capacitances consistent with the triple-layer model of surface complexation. Experimental data for the same mineral in different electrolytes and data for a given mineral/electrolyte system from various investigators have been compared. In this analysis, the surface protonation constants (K_s,1 and K_s,2) were calculated by combining predicted values of ΔpK (log K_s,2 - log K_s,1) with experimental points of zero charge; site densities were obtained from tritium-exchange experiments reported in the literature, and the outer-layer capacitance (C_2) was set at 0.2 F·m^-2. 98 refs., 8 figs., 27 tabs.
Churchill, Nathan W; Madsen, Kristoffer; Mørup, Morten
2016-10-01
The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize these functional relationships, although it is an ongoing challenge to develop robust, interpretable models for high-dimensional fMRI data. Gaussian mixture models (GMMs) are a powerful tool for parcellating the brain, based on the similarity of voxel time series. However, conventional GMMs have limited parametric flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension of the GMM framework simultaneously estimates spatial clustering and the most consistent group functional connectivity structure. It also explicitly models network variability, based on voxel- and subject-specific network scaling profiles. We compared the FSIM to the standard GMM in a predictive cross-validation framework and examined the importance of different model parameters, using both simulated and experimental resting-state data. The reliability of parcellations is not significantly altered by the flexibility of the FSIM, whereas voxel- and subject-specific network scaling profiles significantly improve the ability to predict functional connectivity in independent test data. Moreover, the FSIM provides a set of interpretable parameters to characterize both consistent and variable aspects of functional connectivity structure. As an example of its utility, we use subject-specific network profiles to identify brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group
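The GMM parcellation idea underlying FSIM can be sketched in a deliberately reduced form (this toy is not the authors' model: each voxel is summarized by a single scalar feature, there are only two components, and the data are synthetic): fit a two-component 1D Gaussian mixture by expectation-maximization and read the cluster means off as "parcel" centers.

```python
# Minimal GMM-parcellation sketch (far simpler than FSIM): voxels are
# summarized by one scalar feature each and clustered with a two-component
# 1D Gaussian mixture fitted by EM. Pure-stdlib toy on synthetic data.
import math, random

def em_gmm_1d(xs, iters=200):
    """Fit a 2-component 1D Gaussian mixture by expectation-maximization."""
    mu = [min(xs), max(xs)]      # crude initialization at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return mu, var, pi

random.seed(0)
# Two synthetic "parcels": voxel features drawn around -2 and +2
xs = ([random.gauss(-2, 0.5) for _ in range(200)]
      + [random.gauss(2, 0.5) for _ in range(200)])
mu, var, pi = em_gmm_1d(xs)
```

FSIM extends this basic machinery with multivariate time-series clustering, an interregional connectivity structure, and voxel- and subject-specific scaling profiles, none of which appear in the sketch.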
Directory of Open Access Journals (Sweden)
D. Vatvani
2012-07-01
results obtained, we conclude that, for a good reproduction of the storm surges under hurricane conditions, Makin's new drag parameterization is preferable to the traditional Charnock relation. Furthermore, these results encourage us to continue the studies and establish the effect of Makin's improved wind drag parameterization in the wave model.
The results from this study will be used to evaluate the relevance of extending the present approach towards the implementation of a similar wind drag parameterization in the SWAN wave model, in line with our aim to apply a consistent wind drag formulation throughout the entire storm surge modelling approach.
Chialvo, Ariel A; Moucka, Filip; Vlcek, Lukas; Nezbeda, Ivo
2015-04-16
We developed the Gaussian charge-on-spring (GCOS) version of the original self-consistent field implementation of the Gaussian Charge Polarizable (GCP) water model and tested its accuracy in representing the polarization behavior of the original model involving smeared charges and induced dipole moments. For that purpose, we adapted the recently proposed multiple-particle-move (MPM) scheme within the Gibbs and isochoric-isothermal ensemble Monte Carlo methods for the efficient simulation of polarizable fluids. We assessed the accuracy of the GCOS representation by a direct comparison of the resulting vapor-liquid phase envelope, microstructure, and relevant microscopic descriptors of water polarization along the orthobaric curve against the corresponding quantities from the actual GCP water model.
Garrett, T. J.
2014-12-01
Studies of the response of global climate to anthropogenic activities rely upon scenarios for future human activity to provide a range of possible trajectories for greenhouse gas emissions over the coming century. Sophisticated integrated models are used to explore not only what will happen, but what should happen in order to optimize societal well-being. Hundreds of equations might be used to account for the interplay between human decisions, technological change, and macroeconomic principles. In contrast, the model equations used to describe geophysical phenomena look very different because they are a) purely deterministic and b) consistent with basic thermodynamic laws. This inconsistency between macroeconomics and physics suggests a rather unhappy marriage. During the Anthropocene the evolution of humanity and our environment will become increasingly intertwined. Representing such a coupling suggests a need for a common theoretical basis. To this end, the approach described here is to treat civilization like any other physical process, that is, as an open, non-equilibrium thermodynamic system that dissipates energy and diffuses matter in order to sustain existing circulations and to further its material growth. Theoretical arguments and over 40 years of measurements show that a very general representation of global economic wealth (not GDP) has been tied to rates of global primary energy consumption through a constant 7.1 ± 0.1 mW per inflation-adjusted 2005 US dollar. This link between physics and economics leads to very simple expressions for how fast civilization and its rate of energy consumption grow. These are expressible as a function of rates of energy and material resource discovery and depletion, and of the magnitude of externally imposed decay. The equations are validated through hindcasts that show, for example, that economic conditions in the 1950s can be invoked to make remarkably accurate forecasts of present rates of global GDP growth and primary energy
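The central empirical relation above reduces to simple arithmetic (a hedged sketch; the ~17 TW consumption figure used below is a round illustrative value, not the paper's data): with λ = 7.1 mW per 2005 US dollar, a given primary energy consumption rate implies a unique level of cumulative, inflation-adjusted wealth, and vice versa.

```python
# Sketch of the paper's empirical constant: global primary energy
# consumption rate is proportional to cumulative inflation-adjusted
# wealth, with lambda = 7.1 mW per 2005 US dollar. The 17 TW input is a
# round illustrative figure, not a number from the paper.

LAMBDA = 7.1e-3  # W per 2005 USD

def wealth_from_power(power_watts):
    """Cumulative wealth (2005 USD) implied by an energy consumption rate."""
    return power_watts / LAMBDA

def power_from_wealth(wealth_usd):
    """Energy consumption rate (W) implied by a level of wealth."""
    return LAMBDA * wealth_usd

# ~17 TW of primary energy consumption implies roughly 2.4e15 (2005) USD
# of accumulated global wealth under this relation.
implied_wealth = wealth_from_power(17e12)
```

In the paper's framework, growth rates of wealth and of energy consumption are then locked together through this same constant, which is what makes the 1950s-based hindcasts possible.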