Self-consistent modelling of resonant tunnelling structures
DEFF Research Database (Denmark)
Fiig, T.; Jauho, A.P.
1992-01-01
We report a comprehensive study of the effects of self-consistency on the I-V characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated...... applied voltages and carrier densities at the emitter-barrier interface. We include the two-dimensional accumulation-layer charge and the quantum-well charge in our self-consistent scheme. We discuss the evaluation of the current contribution originating from the two-dimensional accumulation-layer charges...
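The simultaneous Schrödinger-Poisson solution described above can be sketched as a damped fixed-point loop. The toy below (dimensionless units, a single filled level, hard-wall boundaries, and an arbitrary coupling constant, all assumptions of this sketch) illustrates the self-consistency cycle only; it is not the authors' effective-mass heterostructure code:

```python
import numpy as np

# Toy 1D Schrödinger-Poisson self-consistency loop (dimensionless units).
n, L, coupling = 200, 10.0, 0.05   # grid size, box length, Hartree coupling
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

def ground_state(V):
    """Lowest eigenpair of -(1/2) d2/dx2 + V on a hard-wall grid."""
    H = (np.diag(1.0 / h**2 + V)
         + np.diag(-0.5 / h**2 * np.ones(n - 1), 1)
         + np.diag(-0.5 / h**2 * np.ones(n - 1), -1))
    E, psi = np.linalg.eigh(H)
    return E[0], psi[:, 0] / np.sqrt(h)   # unit-normalized on the grid

def hartree(rho):
    """Solve -V'' = coupling * rho with V = 0 at both walls."""
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, coupling * rho)

V = np.zeros(n)
for it in range(300):
    E0, phi = ground_state(V)          # Schrödinger step
    V_new = hartree(phi**2)            # Poisson step from the density
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = 0.7 * V + 0.3 * V_new          # damped mixing for stability
print(f"self-consistent after {it} iterations, E0 = {E0:.4f}")
```

A real device calculation would add the band profile from the applied bias, the accumulation-layer and quantum-well charges, and open boundary conditions, but the alternation between the two equations is the same.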
Chip Multithreaded Consistency Model
Institute of Scientific and Technical Information of China (English)
Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang
2008-01-01
Multithreading is the development trend of high-performance processors, and the memory consistency model is essential to the correctness, performance, and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restrictions that chip multithreaded consistency imposes on memory event ordering are presented and formalized. Using the critical-cycle idea of Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the correctness criterion of the sequential consistency model. The chip multithreaded consistency model offers higher performance than sequential consistency while ensuring software compatibility: the execution result on a multithreaded processor is the same as on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed; Godson-2 supports the model correctly through an exception scheme based on each thread's sequential memory access queue.
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models with human interaction restricted to a minimum. These abstract models are based on the UML language; however, the semantics of UML models is defined in natural language. Consequently, the consistency of these diagrams must be verified in order to identify requirement errors at an early stage of the development process. This verification is difficult because of the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams derived from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Our method can therefore be used to check the practicability (feasibility) of software architecture models.
RNA secondary structure modeling at consistent high accuracy using differential SHAPE.
Rice, Greggory M; Leonard, Christopher W; Weeks, Kevin M
2014-06-01
RNA secondary structure modeling is a challenging problem, and recent successes have raised the standards for accuracy, consistency, and tractability. Large increases in accuracy have been achieved by including data on reactivity toward chemical probes: incorporation of 1M7 SHAPE reactivity data into an mfold-class algorithm results in median accuracies for base pair prediction that exceed 90%. However, a few RNA structures are modeled with significantly lower accuracy. Here, we show that incorporating differential reactivities from the NMIA and 1M6 reagents, which detect noncanonical and tertiary interactions, into prediction algorithms results in highly accurate secondary structure models for RNAs that were previously shown to be difficult to model. For these RNAs, 93% of accepted canonical base pairs were recovered in SHAPE-directed models. Discrepancies between accepted and modeled structures were small and appear to reflect genuine structural differences. Three-reagent SHAPE-directed modeling scales concisely to structurally complex RNAs to resolve the in-solution secondary structure analysis problem for many classes of RNA.
ICFD modeling of final settlers - developing consistent and effective simulation model structures
DEFF Research Database (Denmark)
Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham
analysis exercises is kept to a minimum (4). Consequently, detailed information related to, for instance, design boundaries, may be ignored, and their effects may only be accounted for through calibration of model parameters used as catchalls, and by arbitrary amendments of structural uncertainty...... of (6). Further details are shown in (5). Results and discussions Factor screening. Factor screening is carried out by imposing statistically designed moderate (under-loaded) and extreme (under-, critical and overloaded) operational boundary conditions on the 2-D CFD SST model (8). Results obtained...
Motte, Fabrice; Bugler-Lamb, Samuel L.; Falcoz, Quentin
2015-07-01
The attraction of solar energy is greatly enhanced by the possibility of using it during periods of reduced or non-existent solar flux, such as weather-induced intermittency or the darkness of night. Optimizing thermal storage is therefore crucial to the success of solar energy plants. Here we present a study of a structured bed filler dedicated to thermocline-type thermal storage, believed to surpass, in both financial and thermal terms, other systems currently in use such as packed-bed thermocline tanks. Several criteria, such as thermocline thickness and thermocline centering, are defined to facilitate assessment of the efficiency of the tank and to complement the standard concept of power output. A numerical model is developed that reduces the modelling of such a tank to two dimensions. The structure within the tank is designed to be built from simple bricks harboring rectangular channels through which the solar heat-transfer and storage fluid flows. The model is scrutinized and tested for physical robustness, and the results are presented in this paper. Consistency of the model is achieved within particular ranges of each physical variable.
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling
Energy Technology Data Exchange (ETDEWEB)
Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)
2014-02-14
To understand how lipid architecture determines lipid bilayer structure and mechanics, we implement a molecularly detailed model that uses self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness and the head-group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and the parameters that control the lipid tail and head-group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head-group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄ < 0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs for membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints at the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m ≫ 0, especially at low ionic...
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-10-07
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations.
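The sensitivity to arithmetic that motivates MONGOOSE can be seen in miniature: a stoichiometric coefficient assembled from repeated decimal fractions is exactly 1 in rational arithmetic but not in IEEE doubles, so an exact blocked-reaction test and a floating-point one disagree. The tiny balance below is an illustration of the failure mode only, not the MONGOOSE algorithm:

```python
from fractions import Fraction

# A metabolite coefficient assembled from ten 0.1-stoichiometry contributions.
float_coeff = sum(0.1 for _ in range(10))              # 0.9999999999999999
exact_coeff = sum(Fraction(1, 10) for _ in range(10))  # exactly 1

def blocked(coeff):
    # The 2x2 system [[1, 1], [coeff, 1]] v = 0 forces v = 0 (a "blocked"
    # reaction pair) exactly when its determinant 1 - coeff is zero.
    return 1 - coeff == 0

print(blocked(float_coeff), blocked(exact_coeff))  # False True
```

The float analysis declares the reaction unblocked by a spurious 1e-16 residual; the exact analysis gives the structurally correct answer regardless of solver tolerances.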
Towards a self-consistent halo model for the nonlinear large-scale structure
Schmidt, Fabian
2015-01-01
The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: $(i)$ they do not enforce the stress-energy conservation of matter; $(ii)$ they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model ("EHM") that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed, and results of perturbation theory and the effective field theory can in principle be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written he...
Self-consistent triaxial models
Sanders, Jason L
2015-01-01
We present self-consistent triaxial stellar systems that have analytic distribution functions (DFs) expressed in terms of the actions. These provide triaxial density profiles with cores or cusps at the centre. They are the first self-consistent triaxial models with analytic DFs suitable for modelling giant ellipticals and dark haloes. Specifically, we study triaxial models that reproduce the Hernquist profile from Williams & Evans (2015), as well as flattened isochrones of the form proposed by Binney (2014). We explore the kinematics and orbital structure of these models in some detail. The models typically become more radially anisotropic on moving outwards, and have velocity ellipsoids aligned in Cartesian coordinates in the centre and aligned in spherical polar coordinates in the outer parts. In projection, the ellipticity of the isophotes and the position angle of the major axis of our models generally change with radius, so a natural application is to elliptical galaxies that exhibit isophote twisting....
Towards a Self Consistent Model of the Thermal Structure of the Venus Atmosphere
Limaye, Sanjay; Vandaele, Ann C.; Wilson, Colin
Nearly three decades ago, an international effort led to the Venus International Reference Atmosphere (VIRA), published in 1985 after the significant data returned by the Pioneer Venus Orbiter and Probes and the earlier Venera missions (Kliore et al., 1985). The vertical thermal structure is one component of the reference model; it relied primarily on the profiles from the three Pioneer Venus Small Probes and the Large Probe, as well as several hundred temperature profiles retrieved from Pioneer Venus Orbiter radio occultation data collected during 1978-1982. Since then, a large amount of thermal structure data has been obtained from multiple instruments on ESA's Venus Express (VEX) orbiter mission. The VEX data come from retrievals of temperature profiles from SPICAV/SOIR stellar/solar occultations, VeRa radio occultations, and passive remote sensing by the VIRTIS instrument. The results of these three experiments differ in their intrinsic properties: altitude coverage, spatial and temporal sampling, resolution, and accuracy. An international team has been formed, with support from the International Space Science Institute (Bern, Switzerland), to consider the observations of the Venus atmospheric structure obtained since the data used for the COSPAR Venus International Reference Atmosphere (Kliore et al., 1985). We report on the progress made by comparing the newer data with the VIRA model, and between the different experiments where they overlap. Kliore, A.J., V.I. Moroz, and G.M. Keating, Eds. 1985, VIRA: Venus International Reference Atmosphere, Advances in Space Research, Volume 5, Number 11, 307 pages.
National Research Council Canada - National Science Library
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-01-01
.... Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic...
Self-consistent model of fermions
Yershov, V N
2002-01-01
We discuss a composite model of fermions based on three-flavoured preons. We show that the opposite character of the Coulomb and strong interactions between these preons leads to the formation of complex structures reproducing three generations of quarks and leptons with all their quantum numbers and masses. The model is self-consistent (it uses no input parameters). Nevertheless, the masses of the generated structures match the experimental values.
Energy Technology Data Exchange (ETDEWEB)
Barik, N.; Jena, S.N.
1982-11-01
We show here that the relativistic consistency of an effective power-law potential V(r) = A r^ν + V_0 (with A, ν > 0), used successfully to describe the heavy-meson spectra, in generating Dirac bound states of QQ-bar and Qq-bar systems implies, and at the same time is implied by, an equally mixed vector-scalar Lorentz structure, which was observed phenomenologically in the fine-hyperfine splittings of meson spectra.
Consistent ranking of volatility models
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2006-01-01
We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can...
Consistent ranking of volatility models
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2006-01-01
result in an inferior model being chosen as "best" with a probability that converges to one as the sample size increases. We document the practical relevance of this problem in an empirical application and by simulation experiments. Our results provide an additional argument for using the realized...... variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable.......We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can...
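The mechanism behind such ranking inversions can be demonstrated in a few lines: under a non-robust loss such as MAE, a forecast shrunk toward the median of the squared-return proxy beats the true conditional variance when judged against the proxy, yet loses when judged against the true variance. All numbers below, including the 0.455 shrinkage factor (roughly the median of a chi-squared(1) variable), are illustrative assumptions rather than anything taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
h = 0.5 + rng.random(n)                   # true conditional variances
r2 = h * rng.standard_normal(n) ** 2      # squared returns: unbiased, noisy proxy

f_true = h                                 # forecast A: the true variance
f_biased = 0.455 * h                       # forecast B: shrunk toward the median

def mae(forecast, target):
    return np.mean(np.abs(forecast - target))

# Against the noisy proxy the biased forecast looks better ...
print(mae(f_biased, r2) < mae(f_true, r2))   # True
# ... but against the true variance the ranking flips.
print(mae(f_true, h) < mae(f_biased, h))     # True
```

Under MSE with a conditionally unbiased proxy the ranking is preserved (the robust case), which is one way to see why the less noisy realized-variance proxy is preferred in out-of-sample evaluations.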
Jang, Seung Woo; Kotani, Takao; Kino, Hiori; Kuroki, Kazuhiko; Han, Myung Joon
2015-07-24
Despite decades of progress, an understanding of unconventional superconductivity remains elusive. An important open question concerns the material dependence of superconducting properties. Using the quasiparticle self-consistent GW (QSGW) method, we re-examine the electronic structure of copper oxide high-Tc materials. We show that QSGW captures several important features that are distinct from conventional LDA results. The energy-level splitting between d(x²-y²) and d(3z²-r²) is significantly enlarged, and the van Hove singularity point is lowered. The calculated results compare better than LDA with recent experimental results from resonant inelastic x-ray scattering and angle-resolved photoemission experiments. This agreement with the experiments supports the previously suggested two-band theory for the material dependence of the superconducting transition temperature Tc.
Entropy-based consistent model driven architecture
Niepostyn, Stanisław Jerzy
2016-09-01
A description of software architecture is a plan of IT system construction; any gaps in the architecture therefore affect the overall success of the entire project. Most definitions describe software architecture as a set of views which are mutually unrelated, and hence potentially inconsistent, and software architecture completeness is also often described ambiguously. As a result, most methods of building IT systems contain many gaps and ambiguities, presenting obstacles to the automation of software building. In this article the consistency and completeness of software architecture are defined mathematically, based on the entropy of the architecture description. Following this approach, we also propose a method for automatic verification of the consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of ongoing modelling work for the start of IT system building, and even assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of this approach is that it facilitates the preparation of complete and consistent software architecture more effectively, and enables assessment and monitoring of the ongoing modelling work. We demonstrate this with a few industry examples of IT system designs.
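As a loose illustration of the idea of scoring an architecture description by its entropy (the paper's actual FBS metric is not reproduced here, and the element tags below are invented), one can measure how evenly a model's elements are spread across the Functionality, Behaviour and Structure views:

```python
import math
from collections import Counter

# Hypothetical inventory of model elements, each tagged by its FBS view:
# Functionality (F), Behaviour (B) or Structure (S).
elements = ["F", "F", "B", "B", "B", "S"]

def view_entropy(tags):
    """Shannon entropy (bits) of the distribution of elements over views."""
    counts = Counter(tags)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Balanced views approach the maximum log2(3) ~ 1.585 bits; a strongly
# skewed distribution flags an under-elaborated view.
print(round(view_entropy(elements), 3))
```

A single-view description scores zero entropy, which in this toy reading would signal a maximally incomplete architecture.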
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...... velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave...... height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...
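As a sketch of the Maximum Likelihood calibration step mentioned above, the following fits a Gumbel distribution, a common model for annual maxima, using the standard fixed-point iteration for the scale parameter. The data are synthetic (location 6 m, scale 0.8 m are invented values), not the North Sea measurements used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic annual-maximum significant wave heights via the Gumbel inverse CDF.
mu_true, beta_true = 6.0, 0.8
x = mu_true - beta_true * np.log(-np.log(rng.random(2000)))

beta = x.std()                     # starting guess for the scale
for _ in range(500):
    w = np.exp(-x / beta)
    beta_next = x.mean() - (x * w).sum() / w.sum()   # MLE fixed-point step
    if abs(beta_next - beta) < 1e-12:
        break
    beta = beta_next
mu = -beta * np.log(w.mean())      # location follows in closed form

print(f"mu = {mu:.2f} m, beta = {beta:.2f} m")
```

A full metocean model would fit the joint (and directional) distributions of wave height, wind, current, and water level, but each marginal calibration follows this pattern.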
Self-consistent structure of metallic hydrogen
Straus, D. M.; Ashcroft, N. W.
1977-01-01
A calculation is presented of the total energy of metallic hydrogen for a family of face-centered tetragonal lattices carried out within the self-consistent phonon approximation. The energy of proton motion is large and proper inclusion of proton dynamics alters the structural dependence of the total energy, causing isotropic lattices to become favored. For the dynamic lattice the structural dependence of terms of third and higher order in the electron-proton interaction is greatly reduced from static lattice equivalents.
Polotsky, A.; Charlaganov, M.; Xu, Y.P.; Leermakers, F.A.M.; Daoud, M.; Muller, A.H.E.; Dotera, T.; Borisov, O.V.
2008-01-01
We present theoretical arguments and experimental evidence for a longitudinal instability in core-shell cylindrical polymer brushes with a solvophobic inner (core) block and a solvophilic outer (shell) block in selective solvents. The two-gradient self-consistent field Scheutjens-Fleer (SCF-SF)
Borisov, O.V.; Zhulina, E.B.; Leermakers, F.A.M.; Muller, A.H.E.
2011-01-01
We present an overview of statistical thermodynamic theories that describe the self-assembly of amphiphilic ionic/hydrophobic diblock copolymers in dilute solution. Block copolymers with both strongly and weakly dissociating (pH-sensitive) ionic blocks are considered. We focus mostly on structural
Modeling and Testing Legacy Data Consistency Requirements
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard
2003-01-01
An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult....... This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its...... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...
A Framework of Memory Consistency Models
Institute of Scientific and Technical Information of China (English)
胡伟武; 施巍松; 等
1998-01-01
Previous descriptions of memory consistency models in shared-memory multiprocessor systems are mainly expressed as constraints on memory access event ordering and hence are hardware-centric. This paper presents a framework of memory consistency models which describes the memory consistency model at the behaviour level. Based on the understanding that the behaviour of an execution is determined by the execution order of conflicting accesses, a memory consistency model is defined as an interprocessor synchronization mechanism which orders the execution of operations from different processors. The synchronization order of an execution under a given consistency model is also defined. The synchronization order, together with the program order, determines the behaviour of an execution. This paper also presents criteria for correct programs and correct implementations of consistency models. Regarding an implementation of a consistency model as a set of memory event ordering constraints, this paper provides a method to prove the correctness of consistency model implementations, and the correctness of the lock-based cache coherence protocol is proved with this method.
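The view of a consistency model as a constraint on the ordering of conflicting accesses can be made concrete with the classic store-buffering litmus test: enumerating every interleaving that respects program order shows exactly which outcomes sequential consistency (SC) permits. A minimal brute-force sketch (not taken from the paper):

```python
from itertools import permutations

# Store-buffering litmus test. Thread A: x=1; r1=y.  Thread B: y=1; r2=x.
# Under SC the outcome r1 == r2 == 0 must never occur.
threads = {
    "A": [("write", "x", 1), ("read", "y", "r1")],
    "B": [("write", "y", 1), ("read", "x", "r2")],
}

def sc_outcomes():
    ops = [(t, i) for t in threads for i in range(len(threads[t]))]
    seen = set()
    for order in permutations(ops):
        # Keep only interleavings that respect each thread's program order.
        if any(order.index((t, 0)) > order.index((t, 1)) for t in threads):
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for t, i in order:
            kind, var, val = threads[t][i]
            if kind == "write":
                mem[var] = val
            else:
                regs[val] = mem[var]
        seen.add((regs["r1"], regs["r2"]))
    return seen

print(sorted(sc_outcomes()))  # (0, 0) is absent: forbidden under SC
```

Relaxed hardware models (and the chip multithreaded model discussed earlier in this listing) may permit the (0, 0) outcome that SC forbids, which is precisely what such ordering constraints are meant to capture.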
A self-consistent Maltsev pulse model
Buneman, O.
1985-04-01
A self-consistent model for an electron pulse propagating through a plasma is presented. In this model, the charge imbalance between plasma ions, plasma electrons and pulse electrons creates the travelling potential well in which the pulse electrons are trapped.
Self-Consistent Asset Pricing Models
Malevergne, Y
2006-01-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alpha's and beta's of the factor model are unobservable. Self-consistency leads to renormalized beta's with zero effective alpha's, which are observable with standard OLS regressions. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value $\\alpha_i$ at the origin between an asset $i$'s return and the proxy's return. Self-consistency also introduces ``orthogonality'' and ``normality'' conditions linking the beta's, alpha's (as well as the residuals) and the weights of the proxy por...
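The self-consistency conditions mentioned above have a simple in-sample counterpart that can be checked numerically: when the market return is constructed as a weighted average of the very asset returns regressed on it, the weighted OLS betas sum to exactly one and the weighted alphas vanish identically. The weights and returns below are arbitrary illustrative draws:

```python
import numpy as np

rng = np.random.default_rng(2)
n_assets, T = 5, 1000
w = rng.random(n_assets)
w /= w.sum()                                  # portfolio weights
r = 0.02 * rng.standard_normal((T, n_assets)) # asset returns
m = r @ w                                     # self-consistent "market" return

# OLS of each asset on the market it helps constitute.
mc = m - m.mean()
betas = (mc @ (r - r.mean(axis=0))) / (mc @ mc)
alphas = r.mean(axis=0) - betas * m.mean()

print(f"weighted beta = {w @ betas:.6f}, weighted alpha = {w @ alphas:.2e}")
```

The identities hold by construction (the weighted sum of regressands is the regressor), which is the algebraic core of the "orthogonality" and "normality" conditions the abstract refers to; with a proxy different from the true market they break, producing the non-zero alphas discussed above.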
Developing consistent pronunciation models for phonemic variants
CSIR Research Space (South Africa)
Davel, M
2006-09-01
Full Text Available from a lexicon containing variants. In this paper we address both these issues by creating ‘pseudo-phonemes’ associated with sets of ‘generation restriction rules’ to model those pronunciations that are consistently realised as two or more...
Are there consistent models giving observable NSI ?
Martinez, Enrique Fernandez
2013-01-01
While the existing direct bounds on neutrino NSI are rather weak, of order 10^-1 for propagation and 10^-2 for production and detection, the close connection, through gauge invariance, between these interactions and NSI affecting the better-constrained charged lepton sector makes these bounds hard to saturate in realistic models. Indeed, Standard Model extensions leading to neutrino NSI typically imply constraints at the 10^-3 level. The question of whether consistent models leading to observable neutrino NSI exist naturally arises, and was discussed in a dedicated session at NUFACT 11. Here we summarize that discussion.
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Full Text Available Abstract. Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
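The kind of constraint such a calibration must enforce can be illustrated with the Wegscheider cycle condition: around any closed reaction cycle, the product of forward rate constants must equal the product of reverse ones. The sketch below checks the condition for a made-up 3-cycle and applies a crude even-log-spreading repair as a stand-in for the constrained optimization used by TCMC (the rate values are invented):

```python
import math

# A 3-reaction cycle A<->B<->C<->A with independently fitted rate constants.
kf = [2.0, 5.0, 1.0]   # forward rate constants
kr = [1.0, 2.0, 4.0]   # reverse rate constants

def cycle_gap(kf, kr):
    """log(prod kf / prod kr); zero iff the cycle is thermodynamically feasible."""
    return math.log(math.prod(kf) / math.prod(kr))

gap = cycle_gap(kf, kr)    # log(10/8) > 0: the fitted cycle is infeasible

# Crude repair: spread the correction evenly over the forward log-rates.
kf_feasible = [k * math.exp(-gap / len(kf)) for k in kf]
print(f"gap before = {gap:.4f}, after = {cycle_gap(kf_feasible, kr):.1e}")
```

TCMC instead minimizes the distance to the fitted parameters subject to all such cycle constraints at once (plus any extra constraints), but the feasibility condition being restored is the same.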
Consistent quadrupole-octupole collective model
Dobrowolski, A.; Mazurek, K.; Góźdź, A.
2016-11-01
Within this work we present a consistent approach to quadrupole-octupole collective vibrations coupled with rotational motion. A realistic collective Hamiltonian with a variable mass-parameter tensor and a potential obtained through the macroscopic-microscopic Strutinsky-like method, with a particle-number-projected BCS (Bardeen-Cooper-Schrieffer) approach, in the full vibrational and rotational nine-dimensional collective space, is diagonalized in the basis of projected harmonic oscillator eigensolutions. This orthogonal basis of zero-, one-, two-, and three-phonon oscillator-like functions in the vibrational part, coupled with the corresponding Wigner function, is in addition symmetrized with respect to the so-called symmetrization group appropriate to the collective space of the model; in the present model it is the D4 group acting in the body-fixed frame. This symmetrization procedure is applied in order to ensure the uniqueness of the Hamiltonian eigensolutions with respect to the laboratory coordinate system. The symmetrization is obtained using the projection onto the irreducible representation technique. The model generates the quadrupole ground-state spectrum as well as the lowest negative-parity spectrum in the 156Gd nucleus. The interband and intraband B(E1) and B(E2) reduced transition probabilities are also calculated within those bands and compared with the recent experimental results for this nucleus. Such a collective approach is helpful in searching for the fingerprints of possible high-rank symmetries (e.g., octahedral and tetrahedral) in nuclear collective bands.
DEFF Research Database (Denmark)
Jensen, Mette Krog; Khaliullin, Renat; Schieber, Jay D.
2012-01-01
Linear viscoelastic (LVE) measurements as well as non-linear elongation measurements have been performed on stoichiometrically imbalanced polymeric networks to gain insight into the structural influence on the rheological response (Jensen et al., Rheol Acta 49(1):1–13, 2010). In particular, we se...
Consistent estimators in random censorship semiparametric models
Institute of Scientific and Technical Information of China (English)
王启华
1996-01-01
For the fixed-design regression model where the Y_i are randomly censored on the right, estimators of the unknown parameter and the regression function g from censored observations are defined in two cases, where the censoring distribution is known and unknown, respectively. Moreover, sufficient conditions are established under which these estimators are strongly consistent and pth (p>2) mean consistent.
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Casana, Rodolfo; Moreira, Roemir P M
2011-01-01
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor $\kappa_{\mu\nu}$...
Consistency and Reconciliation Model In Regional Development Planning
Directory of Open Access Journals (Sweden)
Dina Suryawati
2016-10-01
Full Text Available The aim of this study was to identify the problems in, and determine a conceptual model of, regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that must be well integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in the region involves a technocratic system together with both top-down and bottom-up participation. These must be balanced and should neither overlap nor dominate one another. Keywords: regional, development, planning, consistency, reconciliation
Pressure-Balance Consistency in Magnetospheric Modelling
Institute of Scientific and Technical Information of China (English)
肖永登; 陈出新
2003-01-01
There have been many magnetic field models for geophysical and astrophysical bodies. These theoretical or empirical models represent reality very well in some cases, but in other cases they may be far from reality. We argue that these models will become more reasonable if they are modified by some coordinate transformations. In order to demonstrate the transformation, we use this method to resolve the "pressure-balance inconsistency" problem that occurs when plasma is transported from the outer plasma sheet of the Earth into the inner plasma sheet.
Consistent Partial Least Squares Path Modeling
Dijkstra, Theo K.; Henseler, Jörg
2015-01-01
This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the...
Structural Consistency: Enabling XML Keyword Search to Eliminate Spurious Results Consistently
Lee, Ki-Hoon; Han, Wook-Shin; Kim, Min-Soo
2009-01-01
XML keyword search is a user-friendly way to query XML data using only keywords. In XML keyword search, to achieve high precision without sacrificing recall, it is important to remove spurious results not intended by the user. Efforts to eliminate spurious results have enjoyed some success by using the concept of the LCA or its variants, the SLCA and MLCA. However, existing methods can still return many spurious results. The fundamental cause of spurious results is that existing methods try to eliminate them locally, without a global examination of all the query results; accordingly, some spurious results are not consistently eliminated. In this paper, we propose a novel keyword search method that removes spurious results consistently by exploiting the new concept of structural consistency.
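The LCA-style machinery underlying such methods can be illustrated with a small sketch (our own illustration, not the paper's proposed method): elements are labeled with Dewey ids, the LCA of a set of keyword-matching nodes is their longest common label prefix, and the SLCA filter keeps only candidates with no candidate descendant. The document, tags and keywords below are invented for the example.

```python
import xml.etree.ElementTree as ET
from itertools import product

DOC = """
<bib>
  <book><title>XML keyword search</title><author>Lee</author></book>
  <book><title>Database systems</title><author>Han</author></book>
</bib>
"""

def dewey_index(root):
    """Label every element with a Dewey id (tuple of child positions)."""
    labels = {(): root}
    stack = [((), root)]
    while stack:
        label, node = stack.pop()
        for i, child in enumerate(node):
            labels[label + (i,)] = child
            stack.append((label + (i,), child))
    return labels

def slca(labels, keywords):
    """Smallest LCAs over all combinations of keyword-matching nodes."""
    matches = []
    for kw in keywords:
        hits = [lab for lab, el in labels.items() if kw in (el.text or "")]
        if not hits:
            return set()
        matches.append(hits)

    def lcp(labs):  # longest common prefix of Dewey ids = their LCA
        prefix = labs[0]
        for lab in labs[1:]:
            k = 0
            while k < min(len(prefix), len(lab)) and prefix[k] == lab[k]:
                k += 1
            prefix = prefix[:k]
        return prefix

    candidates = {lcp(list(combo)) for combo in product(*matches)}
    # keep only the *smallest* LCAs: drop any candidate that has a strict
    # descendant which is also a candidate (those are spurious ancestors)
    return {c for c in candidates
            if not any(d != c and d[:len(c)] == c for d in candidates)}

root = ET.fromstring(DOC)
labels = dewey_index(root)
result = slca(labels, ["keyword", "Lee"])        # both hit the first book
elements = {labels[lab].tag for lab in result}   # -> the enclosing <book>
```

Here both keywords occur inside the first `book`, so the SLCA is that `book` element rather than the uninformative document root.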
A self-consistent spin-diffusion model for micromagnetics
Abert, Claas
2016-12-17
We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure in dependence of the tilting angle of the magnetization in the different layers is investigated. Both examples show good agreement with reference simulations and experiments respectively.
Structures, profile consistency, and transport scaling in electrostatic convection
DEFF Research Database (Denmark)
Bian, N.H.; Garcia, O.E.
2005-01-01
It is shown that for interchange modes, profile consistency is in fact due to mixing by persistent large-scale convective cells. This mechanism is not a turbulent diffusion, cannot occur in collisionless systems, and is the analog of the well-known laminar "magnetic flux expulsion" in magnetohydrodynamics. This expulsion process involves a "pinch" across closed streamlines and further results in the formation of pressure fingers along the separatrix of the convective cells. By nature, these coherent structures are dissipative, because the mixing process that leads to their formation relies on a finite amount of collisional diffusion. Numerical simulations of two-dimensional interchange modes confirm the role of laminar expulsion by convective cells for profile consistency and structure formation. They also show that the finger-like pressure structures ultimately control the rate of heat transport across the plasma layer...
Self consistent tight binding model for dissociable water
Lin, You; Wynveen, Aaron; Halley, J. W.; Curtiss, L. A.; Redfern, P. C.
2012-05-01
We report results of the development of a self-consistent tight binding model for water. The model explicitly describes the electrons of the liquid self-consistently, allows dissociation of the water, and permits fast direct-dynamics molecular dynamics calculations of the fluid properties. It is parameterized by fitting to first principles calculations on water monomers, dimers, and trimers. We report calculated radial distribution functions of the bulk liquid, a phase diagram, and the structure of solvated protons within the model, as well as the ac conductivity of a system of 96 water molecules of which one is dissociated. Structural properties and the phase diagram are in good agreement with experiment and first principles calculations. The estimated dc conductivity of a computational sample containing a dissociated water molecule was an order of magnitude larger than that reported from experiment, though the calculated ratio of proton to hydroxyl contributions to the conductivity is very close to the experimental value. The conductivity results suggest a Grotthuss-like mechanism for the proton component of the conductivity.
CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
Liu Jixue; Chen Xiru
2005-01-01
The consistency of the LS estimate in the simple linear EV model is studied. It is shown that, under some common assumptions on the model, weak and strong consistency of the estimate are equivalent, but this is not so for quadratic-mean consistency.
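The consistency question in errors-in-variables (EV) models is easy to make concrete: the naive LS slope computed from an error-contaminated covariate converges not to the true slope but to an attenuated value. The short simulation below is a generic illustration of this classical fact, not the paper's estimator; all numerical values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = 2.0                       # true slope
x = rng.normal(0.0, 1.0, n)      # true covariate, variance 1
u = rng.normal(0.0, 1.0, n)      # measurement error, variance 1
w = x + u                        # observed, error-contaminated covariate
y = beta * x + rng.normal(0.0, 0.5, n)

# naive LS slope from regressing y on the observed w
slope = np.cov(w, y, bias=True)[0, 1] / np.var(w)

# plim of the naive slope: beta * var(x) / (var(x) + var(u)) = 2 * 1/2 = 1
attenuated = beta * 1.0 / (1.0 + 1.0)
```

With equal signal and error variances the naive slope converges to half the true slope, which is why EV models need specially constructed estimators before any consistency result can hold.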
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Energy Technology Data Exchange (ETDEWEB)
Casana, Rodolfo; Ferreira, Manoel M.; Moreira, Roemir P.M. [Universidade Federal do Maranhao (UFMA), Departamento de Fisica, Sao Luis, MA (Brazil)
2012-07-15
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor κ_{μν}. The propagator of the gauge field is explicitly evaluated and expressed in terms of linearly independent symmetric tensors, presenting only one physical mode. The same holds for the scalar propagator. A consistency analysis is performed based on the poles of the propagators. The isotropic parity-even sector is stable, causal and unitary for 0 ≤ κ_{00} < 1. On the other hand, the anisotropic sector is stable and unitary but in general noncausal. Finally, it is shown that this planar model interacting with a λ|φ|⁴-Higgs field supports compact-like vortex configurations. (orig.)
Consistency analysis of a nonbirefringent Lorentz-violating planar model
Casana, Rodolfo; Ferreira, Manoel M.; Moreira, Roemir P. M.
2012-07-01
In this work we analyze the physical consistency of a nonbirefringent Lorentz-violating planar model via the pole structure of its Feynman propagators. The nonbirefringent planar model, obtained from the dimensional reduction of the CPT-even gauge sector of the standard model extension, is composed of a gauge field and a scalar field, affected by Lorentz-violating (LIV) coefficients encoded in the symmetric tensor κ_{μν}. The propagator of the gauge field is explicitly evaluated and expressed in terms of linearly independent symmetric tensors, presenting only one physical mode. The same holds for the scalar propagator. A consistency analysis is performed based on the poles of the propagators. The isotropic parity-even sector is stable, causal and unitary for 0 ≤ κ_{00} < 1. On the other hand, the anisotropic sector is stable and unitary but in general noncausal. Finally, it is shown that this planar model interacting with a λ|φ|⁴-Higgs field supports compact-like vortex configurations.
Configuration of Self-consistent Flows in a Hole Structure
Hasegawa, Hiroki; Ishiguro, Seiji
2016-10-01
Self-consistent particle flows in a hole structure have been studied with a three-dimensional electrostatic plasma particle simulation code. In our previous study, we investigated kinetic effects on plasma blob dynamics with the particle simulation code. In this study, we have improved the code in order to investigate hole propagation dynamics. Here, a hole is an intermittent filamentary structure aligned with the magnetic field in the peripheral plasma of magnetic confinement fusion devices, in which the plasma density is lower than that of the background plasma. In the simulation, a hole structure is initially set as a cylindrical form elongated between both end plates and propagates in the grad-B direction. The simulation confirms that a spiral current system is formed in a hole structure. An investigation into the effect of impurities on the flow configuration will also be reported. Supported by NIFS Collaboration Research programs (NIFS15KNSS058, NIFS14KNXN279, NIFS15KNTS039, NIFS15KNTS040, and NIFS16KNTT038).
Logical consistency and sum-constrained linear models
van Perlo -ten Kleij, Frederieke; Steerneman, A.G.M.; Koning, Ruud H.
2006-01-01
A topic that has received quite some attention in the seventies and eighties is the logical consistency of sum-constrained linear models. Loosely defined, a sum-constrained model is logically consistent if the restrictions on the parameters and explanatory variables are such that the sum constraint is a...
The Self-Consistency Model of Subjective Confidence
Koriat, Asher
2012-01-01
How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen…
Model Checking Data Consistency for Cache Coherence Protocols
Institute of Scientific and Technical Information of China (English)
Hong Pan; Hui-Min Lin; Yi Lv
2006-01-01
A method for the automatic verification of cache coherence protocols is presented, in which cache coherence protocols are modeled as concurrent value-passing processes, and control and data consistency requirements are described as formulas in first-order μ-calculus. A model checker is employed to check whether the protocol under investigation satisfies the required properties. Using this method, a data consistency error has been revealed in a well-known cache coherence protocol. The error has been corrected, and the revised protocol has been shown free from data consistency errors for any data domain size by appealing to the data independence technique.
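The explicit-state flavor of such verification can be sketched in miniature (a toy two-cache MSI-like protocol of our own invention, not the paper's value-passing processes or μ-calculus machinery): enumerate all reachable protocol states and assert that no state ever holds two modified copies of the data.

```python
from collections import deque

I, S, M = "I", "S", "M"   # Invalid, Shared, Modified

def step(state, cache, op):
    """Bus transitions of a toy MSI-like protocol for two caches."""
    caches = list(state)
    other = 1 - cache
    if op == "read":
        if caches[cache] == I:          # read miss
            if caches[other] == M:
                caches[other] = S       # owner writes back and downgrades
            caches[cache] = S
    elif op == "write":
        caches[cache] = M               # requester gains exclusive ownership
        caches[other] = I               # the other copy is invalidated
    return tuple(caches)

def reachable(start=(I, I)):
    """Breadth-first exploration of the protocol's state space."""
    seen, queue = {start}, deque([start])
    while queue:
        st = queue.popleft()
        for cache in (0, 1):
            for op in ("read", "write"):
                nxt = step(st, cache, op)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

states = reachable()
# data consistency property: never two modified (dirty) copies at once
violations = [st for st in states if st.count(M) > 1]
```

A real model checker does the same reachability computation symbolically and over unbounded data domains, which is where the data independence argument of the paper comes in.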
On the internal consistency of the term structure of forecasts of housing starts
DEFF Research Database (Denmark)
Pierdzioch, C.; Rulke, J. C.; Stadtmann, G.
2013-01-01
We use the term structure of forecasts of housing starts to test for the rationality of forecasts. Our test is based on the idea that short-term and long-term forecasts should be internally consistent. We test the internal consistency of forecasts using data for Australia, Canada, Japan and the United States. Using a simple model of forecast formation, we find that forecasts are not internally consistent, leading to a rejection of forecast rationality.
Standard Model Vacuum Stability and Weyl Consistency Conditions
DEFF Research Database (Denmark)
Antipin, Oleg; Gillioz, Marc; Krog, Jens;
2013-01-01
At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions.
Quantum monadology: a consistent world model for consciousness and physics.
Nakagomi, Teruaki
2003-04-01
The NL world model presented in the previous paper is embodied by use of relativistic quantum mechanics, which reveals the significance of the reduction of quantum states and of the relativity principle, and locates consciousness and the concept of flowing time consistently within physics. This model provides a consistent framework for resolving the apparent incompatibilities between consciousness (as our interior experience) and matter (as described by quantum mechanics and relativity theory). Does matter have an inside? What is the flowing time now? Does physics allow indeterminism by volition? The problem of quantum measurement is also resolved in this model.
Model-Consistent Sparse Estimation through the Bootstrap
Bach, Francis
2009-01-01
We consider the least-square linear regression problem with regularization by the $\\ell^1$-norm, a problem usually referred to as the Lasso. In this paper, we first present a detailed asymptotic analysis of model consistency of the Lasso in low-dimensional settings. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection. For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection procedure, referred to as the Bolasso, is extended to high-dimensional settings by a provably consistent two-step procedure.
Multiscale Parameter Regionalization for consistent global water resources modelling
Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.
2017-04-01
Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale-independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale-independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation was carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer function parameters across scales, and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that are consistent across scales and also allows validation of discharge for smaller catchments, even with calibrations at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other...
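The core idea of MPR, applying globally calibrated transfer-function coefficients to fine-scale predictors and then upscaling with a fixed operator, can be sketched as follows (a schematic illustration with invented coefficients and grid sizes, not the PCR-GLOBWB implementation). With a linear transfer function and block-mean upscaling, the coarse parameters derived from upscaled predictors coincide with the upscaled fine-scale parameters, which is the sense in which the parameterization stays consistent across scales.

```python
import numpy as np

def transfer_function(sand, a=0.5, b=2.0):
    """Map a fine-scale predictor (here: sand fraction) to a model
    parameter using *global* coefficients a, b (hypothetical values)."""
    return a + b * sand

def upscale(field, block):
    """Fixed upscaling operator: block-mean aggregation."""
    n = field.shape[0] // block
    return field[:n * block, :n * block].reshape(n, block, n, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)
sand_fine = rng.uniform(0.0, 1.0, size=(64, 64))        # e.g. 1 km predictor grid
param_fine = transfer_function(sand_fine)               # 1 km parameter field

# 8x coarser grid: upscale the predictor, then apply the SAME coefficients
param_coarse = transfer_function(upscale(sand_fine, 8))
```

Because the transfer function is linear and the upscaling operator is a block mean, applying the coefficients before or after upscaling gives identical coarse fields, so coefficients calibrated at the coarse scale transfer directly to the fine scale.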
Emergent Dynamics of a Thermodynamically Consistent Particle Model
Ha, Seung-Yeal; Ruggeri, Tommaso
2017-03-01
We present a thermodynamically consistent particle (TCP) model motivated by the theory of multi-temperature mixture of fluids in the case of spatially homogeneous processes. The proposed model incorporates the Cucker-Smale (C-S) type flocking model as its isothermal approximation. However, it is more complex than the C-S model, because the mutual interactions are not only " mechanical" but are also affected by the "temperature effect" as individual particles may exhibit distinct internal energies. We develop a framework for asymptotic weak and strong flocking in the context of the proposed model.
Viscoelastic models with consistent hypoelasticity for fluids undergoing finite deformations
Altmeyer, Guillaume; Rouhaud, Emmanuelle; Panicaud, Benoit; Roos, Arjen; Kerner, Richard; Wang, Mingchuan
2015-08-01
Constitutive models of viscoelastic fluids are written with rate-form equations when considering finite deformations. Trying to extend the approach used to model these effects from an infinitesimal-deformation to a finite-transformation framework, one has to ensure that the tensors and their rates are indifferent with respect to a change of observer and to superposition with rigid body motions. Frame-indifference problems can be solved with the use of an objective stress transport, but the choice of such an operator is not obvious, and the use of certain transports usually leads to a physically inconsistent formulation of hypoelasticity. The aim of this paper is to present a consistent formulation of hypoelasticity and to combine it with a viscosity model to construct a consistent viscoelastic model. In particular, the hypoelastic model is reversible.
Thermodynamically consistent mesoscopic fluid particle models for a van der Waals fluid
Serrano, Mar; Español, Pep
2000-01-01
The GENERIC structure allows for a unified treatment of different discrete models of hydrodynamics. We first propose a finite volume Lagrangian discretization of the continuum equations of hydrodynamics through the Voronoi tessellation. We then show that a slight modification of these discrete equations has the GENERIC structure. The GENERIC structure ensures thermodynamic consistency and allows for the introduction of correct thermal noise. In this way, we obtain a consistent discrete model ...
Bolasso: model consistent Lasso estimation through the bootstrap
Bach, Francis
2008-01-01
We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning rep...
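The Bolasso procedure itself is easy to sketch: run the Lasso on bootstrap resamples of the data and intersect the estimated supports (a minimal illustration with a hand-rolled coordinate-descent Lasso and invented problem sizes; the paper's experiments use other data and tuning).

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent: (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b                                  # current residual
    for _ in range(n_iter):
        for j in range(p):
            # partial residual correlation for coordinate j
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += X[:, j] * (b[j] - new)            # keep residual in sync
            b[j] = new
    return b

rng = np.random.default_rng(2)
n, p = 300, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[0], beta[1] = 3.0, -2.0                       # true support {0, 1}
y = X @ beta + rng.normal(0.0, 0.5, n)

# Bolasso: intersect Lasso supports across bootstrap replications
support = set(range(p))
for _ in range(16):
    idx = rng.integers(0, n, n)                    # bootstrap resample
    b = lasso_cd(X[idx], y[idx], lam=0.1)
    support &= {j for j in range(p) if abs(b[j]) > 1e-8}
```

The intersection prunes irrelevant variables that any single Lasso run may pick up with positive probability, while the relevant variables, selected with probability tending to one, survive every replication.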
Detection and quantification of flow consistency in business process models
DEFF Research Database (Denmark)
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel
2017-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to study this effect systematically, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and, second, to show how such features can be quantified into computational metrics which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics...
A consistent transported PDF model for treating differential molecular diffusion
Wang, Haifeng; Zhang, Pei
2016-11-01
Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.
Simplified Models for Dark Matter Face their Consistent Completions
Energy Technology Data Exchange (ETDEWEB)
Goncalves, Dorival [Pittsburgh U.; Machado, Pedro N. [Madrid, IFT; No, Jose Miguel [Sussex U.
2016-11-14
Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.
Simplified Models for Dark Matter Face their Consistent Completions
Goncalves, Dorival; No, Jose Miguel
2016-01-01
Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.
Towards consistent nuclear models and comprehensive nuclear data evaluations
Energy Technology Data Exchange (ETDEWEB)
Bouland, O [Los Alamos National Laboratory; Hale, G M [Los Alamos National Laboratory; Lynn, J E [Los Alamos National Laboratory; Talou, P [Los Alamos National Laboratory; Bernard, D [FRANCE; Litaize, O [FRANCE; Noguere, G [FRANCE; De Saint Jean, C [FRANCE; Serot, O [FRANCE
2010-01-01
The essence of this paper is to highlight the consistency achieved nowadays in nuclear data and uncertainty assessments in terms of compound nucleus reaction theory, from the neutron separation energy to the continuum. By making the theories used in the resolved resonance (R-matrix theory), unresolved resonance (average R-matrix theory) and continuum (optical model) ranges continuous through a generalization of the so-called SPRT method, consistent average parameters are extracted from observed measurements, and the associated covariances are therefore calculated over the whole energy range. This paper recalls, in particular, recent advances in fission cross section calculations and suggests some hints for future developments.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically ... options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically ... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying.
Self-consistent treatment of V-groove quantum wire band structure in the nonparabolic approximation
Directory of Open Access Journals (Sweden)
Crnjanski Jasna V.
2004-01-01
Full Text Available The self-consistent nonparabolic calculation of a V-groove quantum wire (VQWR) band structure is presented. A comparison with the parabolic flat-band model of the VQWR shows that both the self-consistency and the nonparabolicity shift subband edges, in some cases even in opposite directions. These shifts indicate that for an accurate description of intersubband absorption, both effects have to be taken into account.
Consistency Across Standards or Standards in a New Business Model
Russo, Dane M.
2010-01-01
Presentation topics include: standards in a changing business model, the new National Space Policy as a driver of change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, the need for a balance between prescriptive and general standards, enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides cover: NASA Procedural Requirements 8705.2B, which identifies human rating standards and requirements; draft health and medical standards for human rating; what has been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of governmental and non-governmental human factors standards.
A detailed self-consistent vertical Milky Way disc model
Directory of Open Access Journals (Sweden)
Gao S.
2012-02-01
Full Text Available We present a self-consistent vertical disc model of the thin and thick discs in the solar vicinity. The model is optimized to fit the local kinematics of main-sequence stars by varying the star formation history and the dynamical heating function. The star formation history and the dynamical heating function are not uniquely determined by the local kinematics alone. For four different pairs of input functions we calculate star count predictions at high galactic latitude as a function of colour. The comparison with North Galactic Pole data from SDSS/SEGUE leads to significant constraints on the local star formation history.
Radio data and synchrotron emission in consistent cosmic ray models
Bringmann, Torsten; Lineros, Roberto A
2011-01-01
We consider the propagation of electrons in phenomenological two-zone diffusion models compatible with cosmic-ray nuclear data and compute the diffuse synchrotron emission resulting from their interaction with galactic magnetic fields. We find models in agreement not only with cosmic ray data but also with radio surveys at essentially all frequencies. Requiring such a globally consistent description strongly disfavors both a very large (L>15 kpc) and small (L<1 kpc) effective size of the diffusive halo. This has profound implications for, e.g., indirect dark matter searches.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options ...
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
2013-01-01
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options ...
Self consistent modeling of accretion columns in accretion powered pulsars
Falkner, Sebastian; Schwarm, Fritz-Walter; Wolff, Michael Thomas; Becker, Peter A.; Wilms, Joern
2016-04-01
We combine three physical models to self-consistently derive the observed flux and pulse profiles of neutron stars' accretion columns. From the thermal and bulk Comptonization model of Becker & Wolff (2006) we obtain seed photon continua produced in the dense inner regions of the accretion column. In a thin outer layer these seed continua are imprinted with cyclotron resonant scattering features calculated using Monte Carlo simulations. The observed phase- and energy-dependent flux corresponding to these emission profiles is then calculated, taking relativistic light bending into account. We present simulated pulse profiles and the predicted dependence of the observable X-ray spectrum on pulse phase.
A consistent collinear triad approximation for operational wave models
Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.
2016-08-01
In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation, which assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation), to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent: energy transfers become unbounded in the limit of unidirectional waves (narrow aperture), and energy transfers are underestimated in short-crested wave conditions. We propose a modification to the collinear approximation that removes this inconsistency and makes it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust, providing an efficient approximation to model nonlinear triad effects in operational wave models.
Warped 5D Standard Model Consistent with EWPT
Cabrer, Joan A; Quiros, Mariano
2011-01-01
For a 5D Standard Model propagating in an AdS background with an IR localized Higgs, compatibility of bulk KK gauge modes with EWPT yields a phenomenologically unappealing KK spectrum (m > 12.5 TeV) and leads to a "little hierarchy problem". For a bulk Higgs the solution to the hierarchy problem reduces the previous bound only by sqrt(3). As a way out, models with an enhanced bulk gauge symmetry SU(2)_R x U(1)_(B-L) were proposed. In this note we describe a much simpler (5D Standard) Model, where introduction of an enlarged gauge symmetry is no longer required. It is based on a warped gravitational background which departs from AdS at the IR brane and a bulk propagating Higgs. The model is consistent with EWPT for a range of KK masses within the LHC reach.
Consistent regularization and renormalization in models with inhomogeneous phases
Adhikari, Prabal
2016-01-01
In many models in condensed matter physics and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different consistent ways of regularizing and renormalizing quantum fluctuations, focusing on a symmetric energy cutoff scheme and dimensional regularization. We apply these techniques to calculate the vacuum energy in the NJL model in 1+1 dimensions in the large-$N_c$ limit and in the 3+1 dimensional quark-meson model in the mean-field approximation, in both cases for a one-dimensional chiral-density wave.
Consistent regularization and renormalization in models with inhomogeneous phases
Adhikari, Prabal; Andersen, Jens O.
2017-02-01
In many models in condensed matter and high-energy physics, one finds inhomogeneous phases at high density and low temperature. These phases are characterized by a spatially dependent condensate or order parameter. A proper calculation requires that one takes the vacuum fluctuations of the model into account. These fluctuations are ultraviolet divergent and must be regularized. We discuss different ways of consistently regularizing and renormalizing quantum fluctuations, focusing on momentum cutoff, symmetric energy cutoff, and dimensional regularization. We apply these techniques to calculate the vacuum energy in the Nambu-Jona-Lasinio model in 1+1 dimensions in the large-N_c limit and in the 3+1 dimensional quark-meson model in the mean-field approximation, in both cases for a one-dimensional chiral-density wave.
Self-consistent triaxial de Zeeuw-Carollo Models
Thakur, Parijat; Das, Mousumi; Chakraborty, D K; Ann, H B
2007-01-01
We use the standard method of Schwarzschild to construct self-consistent solutions for the triaxial de Zeeuw & Carollo (1996, ZC96) models with central density cusps. ZC96 models are triaxial generalisations of the spherical $\gamma$-models of Dehnen, whose densities vary as $r^{-\gamma}$ near the center and $r^{-4}$ at large radii and hence possess a central density core for $\gamma=0$ and cusps for $\gamma > 0$. We consider four triaxial models from ZC96, two prolate triaxial models: $(p, q) = (0.65, 0.60)$ with $\gamma = 1.0$ and 1.5, and two oblate triaxial models: $(p, q) = (0.95, 0.60)$ with $\gamma = 1.0$ and 1.5. We compute 4500 orbits in each model for time periods of $10^{5} T_{D}$. We find that a large fraction of the orbits in each model are stochastic, as indicated by their nonzero Lyapunov exponents. The stochastic orbits in each model can sustain regular shapes for $\sim 10^{3} T_{D}$ or longer, which suggests that they diffuse slowly through their allowed phase space. Except for the oblate triaxial models with $\gamma ...
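The spherical $\gamma$-models of Dehnen that the ZC96 models generalise have a closed-form density profile. A minimal sketch of that profile (not the authors' code; the total mass `M` and scale radius `a` are illustrative defaults):

```python
import math

def dehnen_density(r, gamma, M=1.0, a=1.0):
    """Spherical Dehnen gamma-model density: varies as r^-gamma near the
    center and r^-4 at large radii (Dehnen 1993)."""
    return ((3.0 - gamma) * M * a
            / (4.0 * math.pi * r**gamma * (r + a)**(4.0 - gamma)))

# Inner slope: for gamma = 1, halving r roughly doubles the density,
# while in the outer region halving r multiplies it by roughly 2**4.
inner_ratio = dehnen_density(1e-4, 1.0) / dehnen_density(2e-4, 1.0)
outer_ratio = dehnen_density(1e4, 1.0) / dehnen_density(2e4, 1.0)
```

For $\gamma = 0$ the inner profile flattens to a core as $r \to 0$, matching the cored case discussed above.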
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
Directory of Open Access Journals (Sweden)
J. C. Hargreaves
2011-03-01
We investigate the consistency of various ensembles of model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day; however, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
Are paleoclimate model ensembles consistent with the MARGO data synthesis?
Directory of Open Access Journals (Sweden)
J. C. Hargreaves
2011-08-01
We investigate the consistency of various ensembles of climate model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the LGM and the present day. However, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
Self-Consistent Modeling of Reionization in Cosmological Hydrodynamical Simulations
Oñorbe, Jose; Lukić, Zarija
2016-01-01
The ultraviolet background (UVB) emitted by quasars and galaxies governs the ionization and thermal state of the intergalactic medium (IGM), regulates the formation of high-redshift galaxies, and is thus a key quantity for modeling cosmic reionization. The vast majority of cosmological hydrodynamical simulations implement the UVB via a set of spatially uniform photoionization and photoheating rates derived from UVB synthesis models. We show that simulations using canonical UVB rates reionize, and perhaps more importantly, spuriously heat the IGM much earlier (z ~ 15) than they should. This problem arises because at z > 6, where observational constraints are non-existent, the UVB amplitude is far too high. We introduce a new methodology to remedy this issue, and generate self-consistent photoionization and photoheating rates to model any chosen reionization history. Following this approach, we run a suite of hydrodynamical simulations of different reionization scenarios, and explore the impact of the timing of ...
Consistent Static Models of Local Thermospheric Composition Profiles
Picone, J M; Drob, D P
2016-01-01
The authors investigate the ideal, nondriven multifluid equations of motion to identify consistent (i.e., truly stationary), mechanically static models for composition profiles within the thermosphere. These physically faithful functions are necessary to define the parametric core of future empirical atmospheric models and climatologies. Based on the strength of interspecies coupling, the thermosphere has three altitude regions: (1) the lower thermosphere, in which the species are strongly coupled and fully mixed; (2) the upper thermosphere (herein z > ~200 km), in which the species flows are approximately uncoupled; and (3) a transition region in between, where the effective species particle mass and the effective species vertical flow interpolate between the solutions for the upper and lower thermosphere. We place this view in the context of current terminology within the community, i.e., a fully mixed (lower) region and an upper region in diffusive equilibrium (DE). The latter condition, DE, currently used in empirical composition models, does not represent a truly static composition profile ...
Thermodynamically consistent model of brittle oil shales under overpressure
Izvekov, Oleg
2016-04-01
The concept of dual porosity is a common way to simulate oil shale production. In this concept the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during the pressure reduction that accompanies production. In this work a new thermodynamically consistent model which generalizes the dual-porosity model is proposed. The particular features of the model are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to simulate the development of secondary fractures in the low-permeability matrix. Slip along natural fractures is simulated within plasticity theory with the Drucker-Prager criterion.
A minimal model of self-consistent partial synchrony
Clusella, Pau; Politi, Antonio; Rosenblum, Michael
2016-09-01
We show that self-consistent partial synchrony in globally coupled oscillatory ensembles is a general phenomenon. We analyze in detail appearance and stability properties of this state in possibly the simplest setup of a biharmonic Kuramoto-Daido phase model as well as demonstrate the effect in limit-cycle relaxational Rayleigh oscillators. Such a regime extends the notion of splay state from a uniform distribution of phases to an oscillating one. Suitable collective observables such as the Kuramoto order parameter allow detecting the presence of an inhomogeneous distribution. The characteristic and most peculiar property of self-consistent partial synchrony is the difference between the frequency of single units and that of the macroscopic field.
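The collective observable mentioned above, the Kuramoto order parameter, distinguishes a uniform splay state from a (partially) synchronized one. A minimal sketch, assuming only the standard definition Z = ⟨exp(iθ)⟩:

```python
import cmath
import math

def order_parameter(phases):
    """Modulus of the Kuramoto order parameter Z = <exp(i*theta)>:
    0 for a uniform (splay) phase distribution, 1 for full synchrony."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)

N = 1000
splay = [2.0 * math.pi * k / N for k in range(N)]  # uniformly spread phases
sync = [0.3] * N                                   # all oscillators at one phase
```

An oscillating, inhomogeneous phase distribution, as in self-consistent partial synchrony, yields an intermediate, time-dependent |Z|.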
Short Polymer Modeling using Self-Consistent Integral Equation Method
Kim, Yeongyoon; Park, So Jung; Kim, Jaeup
2014-03-01
Self-consistent field theory (SCFT) is an excellent mean field theoretical tool for predicting the morphologies of polymer based materials. In the standard SCFT, the polymer is modeled as a Gaussian chain, which is suitable for a polymer of high molecular weight but not necessarily for a polymer of low molecular weight. In order to overcome this limitation, Matsen and coworkers have recently developed SCFT of discrete polymer chains in which one polymer is modeled as a finite number of beads joined by freely jointed bonds of fixed length. In their model, the diffusion equation of the canonical SCFT is replaced by an iterative integral equation, and the full spectral method is used for the production of the phase diagram of short block copolymers. In this study, for the finite length chain problem, we apply a pseudospectral method, which is the most efficient numerical scheme, to solve the iterative integral equation. We use this new numerical method to investigate two different types of polymer bonds: a spring-bead model and a freely jointed chain model. By comparing these results with those of the Gaussian chain model, we examine how the chain length and the type of bonds influence the morphologies of diblock copolymer melts. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (no. 2012R1A1A2043633).
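The iterative integral equation for a discrete chain replaces the diffusion equation by a convolution against the bond transition function; pseudospectrally, this is a multiplication in Fourier space. A sketch for the freely jointed chain, whose 3D rigid-bond kernel has Fourier transform sin(kb)/(kb); the grid size, box length, and bond length `b` here are illustrative, not taken from the paper:

```python
import numpy as np

def fjc_bond_step(q, box, b):
    """One pseudospectral propagator step for a freely jointed chain:
    convolve the partial partition function q with the rigid-bond kernel,
    whose 3D Fourier transform is sinc(k*b) = sin(k*b)/(k*b)."""
    n = q.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kb = np.sqrt(kx**2 + ky**2 + kz**2) * b
    kernel = np.ones_like(kb)          # sinc(0) = 1 at the k = 0 mode
    nz = kb > 0
    kernel[nz] = np.sin(kb[nz]) / kb[nz]
    return np.fft.ifftn(np.fft.fftn(q) * kernel).real

rng = np.random.default_rng(0)
q0 = rng.random((16, 16, 16))
q1 = fjc_bond_step(q0, box=8.0, b=1.0)  # one bead added to the chain
```

In a full SCFT iteration each step would also multiply by the Boltzmann factor of the mean field before the convolution; note that the kernel's k = 0 value of 1 preserves the spatial mean of q.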
Mean-field theory and self-consistent dynamo modeling
Energy Technology Data Exchange (ETDEWEB)
Yoshizawa, Akira; Yokoi, Nobumitsu [Tokyo Univ. (Japan). Inst. of Industrial Science; Itoh, Sanae-I [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)
2001-12-01
Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)
A CVAR scenario for a standard monetary model using theory-consistent expectations
DEFF Research Database (Denmark)
Juselius, Katarina
2017-01-01
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...
Consistency of the tachyon warm inflationary universe models
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiao-Min; Zhu, Jian-Yang, E-mail: zhangxm@mail.bnu.edu.cn, E-mail: zhujy@bnu.edu.cn [Department of Physics, Beijing Normal University, Beijing 100875 (China)
2014-02-01
This study concerns the consistency of the tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied, when the dissipative coefficient Γ = Γ_0 and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of the tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ_0) is usually not a suitable assumption for a warm inflationary model.
Classical and Quantum Consistency of the DGP Model
Nicolis, Alberto; Rattazzi, Riccardo
2004-01-01
We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode π consistently describes physics in a wide range of regimes, both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance the spherical source in asymptotically flat space. However, we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space π becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed, the tower of higher dimensional operators which is expected by a generic UV completion of the model limits predictivity at even larger length scales. We outline ...
Consistent constraints on the Standard Model Effective Field Theory
Berthier, Laure
2015-01-01
We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I, and LEP II, as well as low energy precision data. We fit one hundred observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cut-off scale is assumed to be large, $\Lambda \gtrsim 3\,{\rm TeV}$. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an $S,T$ analysis is modified by the theory errors we include as an illustrative example.
Creation of Consistent Burn Wounds: A Rat Model
Directory of Open Access Journals (Sweden)
Elijah Zhengyang Cai
2014-07-01
Background: Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods: Ten male Sprague-Dawley rats were anesthetized and the dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested under its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology, and depth. Results: Average wound size was 0.9957 cm² (standard deviation [SD] 0.1845; n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well defined, uniformly brown, with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for durations of 5, 10, and 20 seconds, respectively. A burn duration of 5 seconds resulted in partial-thickness damage. Burn durations of 10 and 20 seconds resulted in full-thickness damage involving the subjacent skeletal muscle. Conclusions: This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model.
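Reporting wound size as a mean and sample standard deviation, as above, is straightforward to reproduce. A sketch with hypothetical measurements (the listed areas are made up for illustration; the study's actual pooled values are mean 0.9957 cm², SD 0.1845, n = 30):

```python
import statistics

# Hypothetical wound-area measurements in cm^2 (illustrative only)
areas = [0.93, 1.02, 0.88, 1.10, 0.97, 1.05]

mean_area = statistics.mean(areas)
sd_area = statistics.stdev(areas)   # sample SD (n - 1 in the denominator)
```

The sample (n - 1) standard deviation is the conventional choice when, as here, the measured wounds are treated as a sample of all wounds the method could produce.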
A self-consistent dynamo model for fully convective stars
Yadav, Rakesh Kumar; Christensen, Ulrich; Morin, Julien; Gastine, Thomas; Reiners, Ansgar; Poppenhaeger, Katja; Wolk, Scott J.
2016-01-01
The tachocline region inside the Sun, where the rigidly rotating radiative core meets the differentially rotating convection zone, is thought to be crucial for generating the Sun's magnetic field. Low-mass fully convective stars do not possess a tachocline and were originally expected to generate only weak small-scale magnetic fields. Observations, however, have painted a different picture of magnetism in rapidly rotating fully convective stars: (1) Zeeman broadening measurements revealed average surface fields of several kilogauss (kG), which is similar to the typical field strength found in sunspots. (2) The Zeeman-Doppler imaging (ZDI) technique discovered large-scale magnetic fields with a morphology often similar to the Earth's dipole-dominated field. (3) Comparison of Zeeman broadening and ZDI results showed that more than 80% of the magnetic flux resides at small scales. So far, theoretical and computer simulation efforts have not been able to reproduce these features simultaneously. Here we present a self-consistent global model of magnetic field generation in low-mass fully convective stars. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The ZDI technique applied to synthetic spectropolarimetric data based on our model recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass fully convective stars.
A Symplectic Multi-Particle Tracking Model for Self-Consistent Space-Charge Simulation
Qiang, Ji
2016-01-01
Symplectic tracking is important in accelerator beam dynamics simulation. So far, to the best of our knowledge, there is no self-consistent symplectic space-charge tracking model available in the accelerator community. In this paper, we present a two-dimensional and a three-dimensional symplectic multi-particle spectral model for space-charge tracking simulation. This model includes both the effect from external fields and the effect of self-consistent space-charge fields using a split-operator method. Such a model preserves the phase space structure and shows much less numerical emittance growth than the particle-in-cell model in the illustrative examples.
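The split-operator idea described above alternates between a field kick and a drift map; each sub-map is symplectic, so their composition preserves phase-space structure. A one-degree-of-freedom sketch (a harmonic force stands in for the external and space-charge fields; this is not the authors' spectral model):

```python
def leapfrog_step(x, p, dt, k=1.0, m=1.0):
    """One second-order split-operator (kick-drift-kick) step for
    H = p^2 / (2m) + k * x^2 / 2. Each sub-map is exactly symplectic."""
    p -= 0.5 * dt * k * x   # half kick from the potential part
    x += dt * p / m         # full drift from the kinetic part
    p -= 0.5 * dt * k * x   # half kick
    return x, p

x, p = 1.0, 0.0
e0 = 0.5 * p * p + 0.5 * x * x
for _ in range(10_000):
    x, p = leapfrog_step(x, p, 0.01)
# Symplectic hallmark: the energy error stays bounded, with no secular drift
energy_error = abs(0.5 * p * p + 0.5 * x * x - e0)
```

The bounded energy error mirrors the suppressed numerical emittance growth the paper reports relative to non-symplectic particle-in-cell tracking.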
Pluralistic and stochastic gene regulation: examples, models and consistent theory.
Salas, Elisa N; Shu, Jiang; Cserhati, Matyas F; Weeks, Donald P; Ladunga, Istvan
2016-06-01
We present a theory of pluralistic and stochastic gene regulation. To bridge the gap between empirical studies and mathematical models, we integrate pre-existing observations with our meta-analyses of the ENCODE ChIP-Seq experiments. Earlier evidence includes fluctuations in the levels, location, activity, and binding of transcription factors, variable DNA motifs, and bursts in gene expression. Stochastic regulation is also indicated by the frequently subdued effects of knockout mutants of regulators, their evolutionary losses/gains, and massive rewiring of regulatory sites. We report widespread pluralistic regulation in ≈800 000 tightly co-expressed pairs of diverse human genes. Typically, half of the ≈50 observed regulators bind to both genes reproducibly, twice as many as in independently expressed gene pairs. We also examine the largest set of co-expressed genes, which code for cytoplasmic ribosomal proteins. Numerous regulatory complexes are highly significantly enriched in ribosomal genes compared to highly expressed non-ribosomal genes. We could not find any DNA-associated, strict-sense master regulator. Despite major fluctuations in transcription factor binding, our machine learning model accurately predicted transcript levels using the binding sites of 20+ regulators. Our pluralistic and stochastic theory is consistent with partially random binding patterns, redundancy, stochastic regulator binding, burst-like expression, degeneracy of binding motifs, and massive regulatory rewiring during evolution.
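Predicting transcript levels from regulator binding sites can be sketched generically with ordinary least squares on a synthetic binding matrix. This is an illustration of the general idea only; the paper's actual machine learning model, features, and data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_regulators = 200, 20

# Synthetic binary design matrix: X[g, r] = 1 if regulator r binds near gene g
X = rng.integers(0, 2, size=(n_genes, n_regulators)).astype(float)
true_w = rng.normal(size=n_regulators)             # hidden per-regulator effects
y = X @ true_w + 0.01 * rng.normal(size=n_genes)   # noisy "transcript levels"

# Recover the per-regulator weights by least squares
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With many more genes than regulators, the binding matrix is well conditioned and the fitted weights closely recover the generating effects, which is the sense in which 20+ regulators can suffice despite noisy binding.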
Hazard consistent structural demands and in-structure design response spectra
Energy Technology Data Exchange (ETDEWEB)
Houston, Thomas W [Los Alamos National Laboratory; Costantino, Michael C [Los Alamos National Laboratory; Costantino, Carl J [Los Alamos National Laboratory
2009-01-01
Current analysis methodology for the Soil Structure Interaction (SSI) analysis of nuclear facilities is specified in ASCE Standard 4. This methodology is based on the use of deterministic procedures with the intention that enough conservatism is included in the specified procedures to achieve an 80% probability of non-exceedance in the computed response of a Structure, System, or Component, given a mean seismic design input. Recently developed standards are aimed at achieving performance-based, risk-consistent seismic designs that meet specified target performance goals. These design approaches rely upon accurately characterizing the probability (hazard) level of system demands due to seismic loads, consistent with Probabilistic Seismic Hazard Analyses. This paper examines the adequacy of the deterministic SSI procedures described in ASCE 4-98 to achieve an 80th percentile Non-Exceedance Probability (NEP) in structural demand, given a mean seismic input motion. The study demonstrates that the deterministic procedures provide computed in-structure response spectra that are near or greater than the target 80th percentile NEP for site profiles other than those resulting in high levels of radiation damping. The deterministic procedures do not appear to be as robust in predicting peak accelerations, which correlate to structural demands within the structure.
Consistency of modified MLE in EV model with replicated observations
Institute of Scientific and Technical Information of China (English)
ZHANG Sanguo
2001-01-01
[1] Kendall, M., Stuart, A., The Advanced Theory of Statistics, Vol. 2, New York: Charles Griffin, 1979. [2] Anderson, T. W., Estimating linear statistical relationships, Ann. Statist., 1984, 12: 1. [3] Cui Hengjian, Asymptotic normality of M-estimates in the EV model, Sys. Sci. and Math. Sci., 1997, 10(3): 225. [4] Madansky, A., The fitting of straight lines when both variables are subject to error, JASA, 1959, 54: 173. [5] Villegas, C., Maximum likelihood estimations of a linear functional relationship, Ann. Math. Statist., 1961, 32(4): 1048. [6] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974. [7] Petrov, V. V., Sums of Independent Random Variables, New York: Springer-Verlag, 1975. [8] Lai, T. L., Robbins, H., Wei, C. Z., Strong consistency of least squares estimates in multiple regression, J. Multivariate Anal., 1979, 9: 343. [9] Chen Xiru, On limiting properties of U-statistics and von-Mises statistics, Scientia Sinica (in Chinese), 1980, (6): 522.
Dynamic Consistency between Value and Coordination Models - Research Issues.
Bodenstaff, L.; Wombacher, Andreas; Reichert, M.U.; Meersman, R.; Tari, Z.; Herrero, P.
Inter-organizational business cooperations can be described from different viewpoints, each fulfilling a specific purpose. Since all viewpoints describe the same system they must not contradict each other and thus must be consistent. Consistency can be checked based on common semantic concepts of the ...
A proposal for a consistent parametrization of earth models
Forbriger, Thomas; Friederich, Wolfgang
2005-08-01
The current way to parametrize earth models in terms of real-valued seismic velocities and quality factors is incomplete as it does not specify how complex-valued viscoelastic moduli or complex velocities should be computed from them. Various ways to do this can be found in the literature. Depending on the context they may specify (1) the real part of the viscoelastic modulus, (2) the absolute value of the viscoelastic modulus, (3) the real part of complex velocity or (4) the phase velocity of a propagating plane wave. We propose here to exclusively use the first alternative because it is the only one which allows both a flexible choice of elastic parameters and a mathematically rigorous evaluation of the complex-valued viscoelastic moduli. The other definitions only permit an evaluation of viscoelastic moduli if the tabulated quality factors are directly associated with the listed velocities. Ignoring the subtle differences between the three definitions leads to variations in viscoelastic moduli which are second order in 1/Q where Q is a quality factor. This may be the reason why the topic has never been discussed in the literature. In case of shallow seismic media, however, where quality factors may assume values of less than 10, the subtle differences become noticeable in synthetic seismograms. It is then essential to use the same definition in all algorithms to make results comparable. Matters become worse for anisotropic media, which are commonly specified in terms of real elastic moduli and quality factors for effective isotropic moduli. In that case, the complex-valued viscoelastic moduli cannot be determined uniquely. However, interpreting the tabulated constants as the real parts of the complex-valued viscoelastic moduli at least allows a consistent definition, which respects the relative magnitude of the anelastic and anisotropic parts compared to the elastic parts. It should be noted that all these considerations apply to complex-valued viscoelastic
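Under the first (recommended) convention above, the tabulated velocity fixes the real part of the viscoelastic modulus and Q fixes the ratio of real to imaginary part. A sketch of that definition (the density, velocity, and Q values below are illustrative):

```python
def viscoelastic_modulus(rho, v, Q):
    """Complex viscoelastic modulus under definition (1): the tabulated
    (real) velocity v specifies Re(M) = rho * v**2, and the quality
    factor specifies Q = Re(M) / Im(M)."""
    m_real = rho * v**2
    return complex(m_real, m_real / Q)

# A low-Q shallow-seismic example: rho in kg/m^3, v in m/s
M = viscoelastic_modulus(rho=2000.0, v=300.0, Q=5.0)
```

Note that |M| = Re(M) · sqrt(1 + 1/Q²) differs from Re(M) only at second order in 1/Q, which is why the competing definitions listed above disagree noticeably only for very low Q.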
Self-consistent tight-binding atomic-relaxation model of titanium dioxide
Energy Technology Data Exchange (ETDEWEB)
Schelling, P.K.; Yu, N.; Halley, J.W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States)
1998-07-01
We report a self-consistent tight-binding atomic-relaxation model for titanium dioxide. We fit the parameters of the model to first-principles electronic structure calculations of the band structure and energy as a function of lattice parameters in bulk rutile. We report the method and results for the surface structures and energies of relaxed (110), (100), and (001) surfaces of rutile TiO2 as well as work functions for these surfaces. Good agreement with first-principles calculations and experiments, where available, is found for these surfaces. We find significant charge transfer (increased covalency) at the surfaces. © 1998 The American Physical Society
Spatial coincidence modelling, automated database updating and data consistency in vector GIS.
Kufoniyi, O.
1995-01-01
This thesis presents formal approaches for automated database updating and consistency control in vector- structured spatial databases. To serve as a framework, a conceptual data model is formalized for the representation of geo-data from multiple map layers in which a map layer denotes a set of ter
Heinkel, Florian; Gsponer, Jörg
2016-01-29
The mapping of folding landscapes remains an important challenge in protein chemistry. Pulsed oxidative labeling of exposed residues and their detection via mass spectrometry provide new means of taking time-resolved "snapshots" of the structural changes that occur during protein folding. However, such experiments have so far been interpreted only qualitatively. Here, we report the detailed structural interpretation of mass spectrometry data from fast photochemical oxidation of proteins (FPOP) experiments at atomic resolution in a biased molecular dynamics approach. We are able to calculate structures of the early folding intermediate of the model system barstar that are fully consistent with FPOP data and Φ values. Furthermore, structures calculated with both FPOP data and Φ values are significantly less compact and have fewer helical residues than intermediate structures calculated with Φ values only. This improves the agreement with the experimental β-Tanford value and CD measurements. The restraints that we introduce facilitate the structural interpretation of FPOP data and provide new means for refined structure calculations of transiently sampled states on protein folding landscapes.
Rosa, Mónica; Tiago, João M; Singh, Satish K; Geraldes, Vítor; Rodrigues, Miguel A
2016-10-01
The quality of lyophilized products is dependent on the ice structure formed during the freezing step. Herein, we evaluate the importance of the air gap at the bottom of lyophilization vials for consistent nucleation, ice structure, and cake appearance. The bottom of lyophilization vials was modified by attaching a rectified aluminum disc with an adhesive material. Freezing was studied for normal and converted vials, with different volumes of solution, varying initial solution temperature (from 5°C to 20°C) and shelf temperature (from -20°C to -40°C). The impact of the air gap on the overall heat transfer was interpreted with the assistance of a computational fluid dynamics model. Converted vials caused nucleation at the bottom and decreased the nucleation time by up to one order of magnitude. The formation of ice crystals unidirectionally structured from bottom to top led to a honeycomb-structured cake after lyophilization of a solution with 4% mannitol. The primary drying time was reduced by approximately 35%. Converted vials that were frozen radially instead of bottom-up showed similar improvements compared with normal vials but very poor cake quality. Overall, the curvature of the bottom of glass vials presents a considerable threat to consistency by delaying nucleation and causing radial ice growth. Rectifying the vial bottom with an adhesive material proved to be a relatively simple alternative to overcome this inconsistency.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index ... on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across ...
Consistent Data Assimilation of Structural Isotopes: 23Na and 56Fe
Energy Technology Data Exchange (ETDEWEB)
Giuseppe Palmiotti
2010-09-01
A new approach, consistent data assimilation, is proposed that links integral experiment results to the basic nuclear parameters employed by evaluators to generate ENDF/B point-energy files, in order to improve them. Practical examples are provided for the structural materials 23Na and 56Fe. The sodium neutron propagation experiments, EURACOS and JANUS-8, are used to improve, via modifications of 23Na nuclear parameters (such as the scattering radius, resonance parameters, optical model parameters, statistical Hauser-Feshbach model parameters, and preequilibrium exciton model parameters), the agreement between calculation and experiment for a series of measured reaction-rate detector slopes. For the 56Fe case, the EURACOS and ZPR3 assembly 54 experiments are used. Results have shown inconsistencies in the set of nuclear parameters used, so that further investigation is needed. Future work involves comparison of the results against more traditional multigroup adjustments, and extension to other isotopes of interest to the reactor community.
Is the island universe model consistent with observations?
Piao, Yun-Song
2005-01-01
We study the island universe model, in which the universe is initially in a cosmological constant sea; local quantum fluctuations violating the null energy condition then create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.
An Extended Model Driven Framework for End-to-End Consistent Model Transformation
Directory of Open Access Journals (Sweden)
Mr. G. Ramesh
2016-08-01
Model Driven Development (MDD) enables quick transformation from models to corresponding systems. Forward engineering features of modelling tools can help in generating source code from models. To build a robust system it is important to have consistency checking within the design models, as well as between the design model and the transformed implementation. Our framework, named Extensible Real Time Software Design Inconsistency Checker (XRTSDIC) and proposed in our previous papers, supports consistency checking in design models. This paper focuses on automatic model transformation. An algorithm and transformation rules for model transformation from UML class diagrams to ERD and SQL are proposed. The model transformation bestows many advantages, such as reducing development cost, improving quality, enhancing productivity and increasing customer satisfaction. The proposed framework has been enhanced to ensure that transformed implementations conform to their model counterparts, besides checking end-to-end consistency.
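A UML-class-to-SQL transformation of the kind described can be sketched in a few lines. This is a minimal, hypothetical rule for illustration: the type map, the surrogate-key naming convention, and the `class_to_sql` helper are assumptions, not XRTSDIC's actual rule set.

```python
# Illustrative type map from UML attribute types to SQL column types (assumed).
TYPE_MAP = {"String": "VARCHAR(255)", "Integer": "INT", "Boolean": "BOOLEAN"}

def class_to_sql(uml_class):
    """Generate a CREATE TABLE statement from a UML class description.

    A hypothetical rule: each class maps to one table with a synthetic
    primary key named <class>_id, plus one column per attribute.
    """
    columns = [f"{uml_class['name'].lower()}_id INT PRIMARY KEY"]
    columns += [f"{attr} {TYPE_MAP[typ]}" for attr, typ in uml_class["attributes"]]
    return f"CREATE TABLE {uml_class['name']} (\n  " + ",\n  ".join(columns) + "\n);"

order = {"name": "Order", "attributes": [("customer", "String"), ("paid", "Boolean")]}
print(class_to_sql(order))
```

Checking end-to-end consistency then amounts to verifying that every class, attribute and association in the design model has a corresponding table, column and foreign key in the generated schema, and vice versa.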
Consistent Evolution of Software Artifacts and Non-Functional Models
2014-11-14
Ruscio D., Pierantonio A., Arcelli D., Eramo R., Trubiani C., Tucci M. Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica ... Models (SRMs), and (ii) antipattern solutions as Target Role Models (TRMs). Hence, SRM-TRM pairs represent new instruments in the hands of developers to ... helps to identify the antipatterns that more heavily contribute to the violation of performance requirements [10], and (ii) another one aimed at ...
Integrated materials–structural models
DEFF Research Database (Denmark)
Stang, Henrik; Geiker, Mette Rica
2008-01-01
Reliable service life models for load carrying structures are significant elements in the evaluation of the performance and sustainability of existing and new structures. Furthermore, reliable service life models are prerequisites for the evaluation of the sustainability of maintenance strategies, repair works and strengthening methods for structures. A very significant part of the infrastructure consists of reinforced concrete structures. Even though reinforced concrete structures typically are very competitive, certain concrete structures suffer from various types of degradation. A framework ...
Towards a self-consistent dynamical nuclear model
Roca-Maza, X.; Niu, Y. F.; Colò, G.; Bortignon, P. F.
2017-04-01
Density functional theory (DFT) is a powerful and accurate tool, exploited in nuclear physics to investigate the ground-state and some of the collective properties of nuclei along the whole nuclear chart. Models based on DFT are not, however, suitable for the description of single-particle dynamics in nuclei. Following the field theoretical approach by A. Bohr and B. R. Mottelson to describe nuclear interactions between single-particle and vibrational degrees of freedom, we have taken important steps towards building a microscopic dynamic nuclear model. In connection with this, one important issue that needs to be better understood is the renormalization of the effective interaction in the particle-vibration approach. One possible way to renormalize the interaction is by the so-called subtraction method. In this contribution, we will implement the subtraction method in our model for the first time and study its consequences.
Gas Clumping in Self-Consistent Reionisation Models
Finlator, K; Özel, F; Davé, R
2012-01-01
We use a suite of cosmological hydrodynamic simulations including a self-consistent treatment for inhomogeneous reionisation to study the impact of galactic outflows and photoionisation heating on the volume-averaged recombination rate of the intergalactic medium (IGM). By incorporating an evolving ionising escape fraction and a treatment for self-shielding within Lyman limit systems, we have run the first simulations of "photon-starved" reionisation scenarios that simultaneously reproduce observations of the abundance of galaxies, the optical depth to electron scattering of cosmic microwave background photons τ, and the effective optical depth to Lyman-α absorption at z = 5. We confirm that an ionising background reduces the clumping factor C by more than 50% by smoothing moderately overdense (Δ = 1-100) regions. Meanwhile, outflows increase clumping only modestly. The clumping factor of ionised gas is much lower than the overall baryonic clumping factor because the most overdense gas is self-shield...
Modelling plasticity of unsaturated soils in a thermodynamically consistent framework
Coussy, O
2010-01-01
Constitutive equations of unsaturated soils are often derived in a thermodynamically consistent framework through the use of a unique 'effective' interstitial pressure. The latter is naturally chosen as the space-averaged interstitial pressure. However, experimental observations have revealed that two stress state variables are needed to describe the stress-strain-strength behaviour of unsaturated soils. The thermodynamic analysis presented here shows that the most general approach to the behaviour of unsaturated soils actually requires three stress state variables: the suction, which is required to describe the retention properties of the soil, and two effective stresses, which are required to describe the soil deformation at constant water saturation. It is then shown that a simple assumption related to internal deformation leads to the need for a unique effective stress to formulate the stress-strain constitutive equation describing the soil deformation. An elastoplastic framework is then presented ...
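For concreteness, the space-averaged interstitial pressure underlying the single-effective-stress approach can be written in standard unsaturated-soil notation (this is the common Bishop-type expression, given here as background rather than taken from the paper; compression is taken positive):

```latex
% Space-averaged interstitial pressure over water (w) and air (a) phases,
% with water saturation S_w:
\pi^{\ast} = S_w\, u_w + (1 - S_w)\, u_a
% Resulting single effective stress (compression positive):
\sigma'_{ij} = \sigma_{ij} - \pi^{\ast}\, \delta_{ij}
% Suction, the additional state variable required for retention behaviour:
s = u_a - u_w
```

The abstract's argument is that this single σ′ is insufficient in general: the suction s must be retained as an independent state variable alongside (in the most general case) two effective stresses.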
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre M.; Parks, Michael L.
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Consistency Problem with Tracer Advection in the Atmospheric Model GAMIL
Institute of Scientific and Technical Information of China (English)
ZHANG Kai; WAN Hui; WANG Bin; ZHANG Meigen
2008-01-01
The radon transport test, which is a widely used test case for atmospheric transport models, is carried out to evaluate the tracer advection schemes in the Grid-Point Atmospheric Model of IAP-LASG (GAMIL). Two of the three available schemes in the model are found to be associated with significant biases in the polar regions and in the upper part of the atmosphere, which implies potentially large errors in the simulation of ozone-like tracers. Theoretical analyses show that inconsistency exists between the advection schemes and the discrete continuity equation in the dynamical core of GAMIL and consequently leads to spurious sources and sinks in the tracer transport equation. The impact of this type of inconsistency is demonstrated by idealized tests and identified as the cause of the aforementioned biases. Other potential effects of this inconsistency are also discussed. Results of this study provide some hints for choosing suitable advection schemes in the GAMIL model. At least for the polar-region-concentrated atmospheric components and the closely correlated chemical species, the Flux-Form Semi-Lagrangian advection scheme produces more reasonable simulations of the large-scale transport processes without significantly increasing the computational expense.
Self-consistent Models of Strong Interaction with Chiral Symmetry
Nambu, Y.; Pascual, P.
1963-04-01
Some simple models of (renormalizable) meson-nucleon interaction are examined in which the nucleon mass is entirely due to interaction and the chiral (γ5) symmetry is "broken" to become a hidden symmetry. It is found that such a scheme is possible provided that a vector meson is introduced as an elementary field. (auth)
Predicting giant magnetoresistance using a self-consistent micromagnetic diffusion model
Abert, Claas; Bruckner, Florian; Vogler, Christoph; Praetorius, Dirk; Suess, Dieter
2015-01-01
We propose a self-consistent micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. Potential calculations for a magnetic multilayer structure with perpendicular current flow confirm experimental findings of a non-sinusoidal dependence of the resistivity on the tilting angle of the magnetization in the different layers. While a sinusoidal dependence is observed for certain material parameter limits, a realistic choice of these parameters leads to a notably narrower distribution.
A seismologically consistent compositional model of Earth's core.
Badro, James; Côté, Alexander S; Brodholt, John P
2014-05-27
Earth's core is less dense than iron, and therefore it must contain "light elements," such as S, Si, O, or C. We use ab initio molecular dynamics to calculate the density and bulk sound velocity in liquid metal alloys at the pressure and temperature conditions of Earth's outer core. We compare the velocity and density for any composition in the (Fe-Ni, C, O, Si, S) system to radial seismological models and find a range of compositional models that fit the seismological data. We find no oxygen-free composition that fits the seismological data, and therefore our results indicate that oxygen is always required in the outer core. An oxygen-rich core is a strong indication of high-pressure and high-temperature conditions of core differentiation in a deep magma ocean with an FeO concentration (oxygen fugacity) higher than that of the present-day mantle.
A more consistent intraluminal rhesus monkey model of ischemic stroke
Institute of Scientific and Technical Information of China (English)
Bo Zhao; Fauzia Akbary; Shengli Li; Jing Lu; Feng Ling; Xunming Ji; Guowei Shang; Jian Chen; Xiaokun Geng; Xin Ye; Guoxun Xu; Ju Wang; Jiasheng Zheng; Hongjun Li
2014-01-01
Endovascular surgery is advantageous in experimentally induced ischemic stroke because it causes fewer cranial traumatic lesions than invasive surgery and can closely mimic the pathophysiology in stroke patients. However, the outcomes are highly variable, which limits the accuracy of evaluations of ischemic stroke studies. In this study, eight healthy adult rhesus monkeys were randomized into two groups with four monkeys in each group: middle cerebral artery occlusion at origin segment (M1) and middle cerebral artery occlusion at M2 segment. The blood flow in the middle cerebral artery was blocked completely for 2 hours using the endovascular microcoil placement technique (1 mm × 10 cm) (undetachable), to establish a model of cerebral ischemia. The microcoil was withdrawn and the middle cerebral artery blood flow was restored. A reversible middle cerebral artery occlusion model was identified by hematoxylin-eosin staining, digital subtraction angiography, magnetic resonance angiography, magnetic resonance imaging, and neurological evaluation. The results showed that the middle cerebral artery occlusion model was successfully established in eight adult healthy rhesus monkeys, and ischemic lesions were apparent in the brain tissue of rhesus monkeys at 24 hours after occlusion. The rhesus monkeys had symptoms of neurological deficits. Compared with the M1 occlusion group, the M2 occlusion group had lower infarction volume and higher neurological scores. These experimental findings indicate that reversible middle cerebral artery occlusion can be produced with the endovascular microcoil technique in rhesus monkeys. The M2 occluded model had less infarction and less neurological impairment, which offers the potential for application in the field of brain injury research.
A more consistent intraluminal rhesus monkey model of ischemic stroke.
Zhao, Bo; Shang, Guowei; Chen, Jian; Geng, Xiaokun; Ye, Xin; Xu, Guoxun; Wang, Ju; Zheng, Jiasheng; Li, Hongjun; Akbary, Fauzia; Li, Shengli; Lu, Jing; Ling, Feng; Ji, Xunming
2014-12-01
Endovascular surgery is advantageous in experimentally induced ischemic stroke because it causes fewer cranial traumatic lesions than invasive surgery and can closely mimic the pathophysiology in stroke patients. However, the outcomes are highly variable, which limits the accuracy of evaluations of ischemic stroke studies. In this study, eight healthy adult rhesus monkeys were randomized into two groups with four monkeys in each group: middle cerebral artery occlusion at origin segment (M1) and middle cerebral artery occlusion at M2 segment. The blood flow in the middle cerebral artery was blocked completely for 2 hours using the endovascular microcoil placement technique (1 mm × 10 cm) (undetachable), to establish a model of cerebral ischemia. The microcoil was withdrawn and the middle cerebral artery blood flow was restored. A reversible middle cerebral artery occlusion model was identified by hematoxylin-eosin staining, digital subtraction angiography, magnetic resonance angiography, magnetic resonance imaging, and neurological evaluation. The results showed that the middle cerebral artery occlusion model was successfully established in eight adult healthy rhesus monkeys, and ischemic lesions were apparent in the brain tissue of rhesus monkeys at 24 hours after occlusion. The rhesus monkeys had symptoms of neurological deficits. Compared with the M1 occlusion group, the M2 occlusion group had lower infarction volume and higher neurological scores. These experimental findings indicate that reversible middle cerebral artery occlusion can be produced with the endovascular microcoil technique in rhesus monkeys. The M2 occluded model had less infarction and less neurological impairment, which offers the potential for application in the field of brain injury research.
Flood damage: a model for consistent, complete and multipurpose scenarios
Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia
2016-12-01
Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the base to address the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. This way post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
Deterministic Consistency: A Programming Model for Shared Memory Parallelism
Aviram, Amittai; Ford, Bryan
2009-01-01
The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...
Consistency problems for Heath-Jarrow-Morton interest rate models
Filipović, Damir
2001-01-01
The book is written for a reader with knowledge in mathematical finance (in particular interest rate theory) and elementary stochastic analysis, such as provided by Revuz and Yor (Continuous Martingales and Brownian Motion, Springer 1991). It gives a short introduction both to interest rate theory and to stochastic equations in infinite dimension. The main topic is the Heath-Jarrow-Morton (HJM) methodology for the modelling of interest rates. Experts in SDE in infinite dimension with interest in applications will find here the rigorous derivation of the popular "Musiela equation" (referred to in the book as HJMM equation). The convenient interpretation of the classical HJM set-up (with all the no-arbitrage considerations) within the semigroup framework of Da Prato and Zabczyk (Stochastic Equations in Infinite Dimensions) is provided. One of the principal objectives of the author is the characterization of finite-dimensional invariant manifolds, an issue that turns out to be vital for applications. Finally, ge...
Self-consistent core-pedestal transport simulations with neural network accelerated models
Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.
2017-08-01
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
Aggregated wind power plant models consisting of IEC wind turbine models
DEFF Research Database (Denmark)
Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela
2015-01-01
turbines, parameters and models to represent each individual wind turbine in detail makes it necessary to develop aggregated wind power plant models considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable...
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
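The interplay between an information criterion and a "principle of hydrologic consistency" can be sketched as a two-stage filter: first discard candidate structures that fail a consistency test, then rank the survivors by the criterion. The sketch below is a toy illustration with synthetic data, not the authors' actual procedure; the AIC form, the volume-balance consistency test, and the candidate models are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" daily flows and two hypothetical candidate simulations.
obs = rng.gamma(2.0, 5.0, size=200)
sim_a = obs + rng.normal(0.0, 1.0, size=200)   # close fit, 4 parameters
sim_b = np.full_like(obs, obs.mean())          # climatology, 1 parameter

def aic(obs, sim, n_params):
    """Akaike information criterion under i.i.d. Gaussian errors (assumed)."""
    n = len(obs)
    sse = np.sum((obs - sim) ** 2)
    return n * np.log(sse / n) + 2 * n_params

def hydrologically_consistent(obs, sim, tol=0.1):
    """Toy consistency test: simulated total volume within 10% of observed."""
    return abs(sim.sum() - obs.sum()) / obs.sum() < tol

candidates = {"A": (sim_a, 4), "B": (sim_b, 1)}
admissible = {name: aic(obs, sim, k)
              for name, (sim, k) in candidates.items()
              if hydrologically_consistent(obs, sim)}
best = min(admissible, key=admissible.get)
print(f"admissible: {sorted(admissible)}, selected: {best}")
```

Note that model B passes the volume-balance test despite being useless for dynamics, which is exactly why consistency tests complement, rather than replace, the likelihood-based ranking.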
Amazon Forests Maintain Consistent Canopy Structure and Greenness During the Dry Season
Morton, Douglas C.; Nagol, Jyoteshwar; Carabajal, Claudia C.; Rosette, Jacqueline; Palace, Michael; Cook, Bruce D.; Vermote, Eric F.; Harding, David J.; North, Peter R. J.
2014-01-01
The seasonality of sunlight and rainfall regulates net primary production in tropical forests. Previous studies have suggested that light is more limiting than water for tropical forest productivity, consistent with greening of Amazon forests during the dry season in satellite data. We evaluated four potential mechanisms for the seasonal green-up phenomenon, including increases in leaf area or leaf reflectance, using a sophisticated radiative transfer model and independent satellite observations from lidar and optical sensors. Here we show that the apparent green-up of Amazon forests in optical remote sensing data resulted from seasonal changes in near-infrared reflectance, an artefact of variations in sun-sensor geometry. Correcting this bidirectional reflectance effect eliminated seasonal changes in surface reflectance, consistent with independent lidar observations and model simulations with unchanging canopy properties. The stability of Amazon forest structure and reflectance over seasonal timescales challenges the paradigm of light-limited net primary production in Amazon forests and enhanced forest growth during drought conditions. Correcting optical remote sensing data for artefacts of sun-sensor geometry is essential to isolate the response of global vegetation to seasonal and interannual climate variability.
Consistency of Semantic Meaning with Structure andFunction in English Writing
Institute of Scientific and Technical Information of China (English)
马文静; 张丽婧
2015-01-01
Writing is one of the skills most difficult to train and develop. Although writing can be circumvented in many cases by some people, modern civilization is imposing increasing demands on our ability to write, and write well. In English writing, consistency of semantic meaning with structure and function is a very important criterion for evaluating the quality of writing. In students' English academic writing there are different writing types, and the consistency of semantic meaning with structure and function varies among them. In this thesis, the consistency of semantic meaning with structure and function in essay, academic and report writing is analyzed.
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
Validity test and its consistency in the construction of patient loyalty model
Yanuar, Ferra
2016-04-01
The main objective of the present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The method of estimation was then applied to empirical data on the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, and each factor was measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents with complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant in measuring their corresponding latent variable. Service quality is measured most strongly by tangibles, patient satisfaction by satisfaction with service, and patient loyalty by good service quality. Meanwhile, in the structural equations, this study found that patient loyalty was affected positively and directly by patient satisfaction, while service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also showed that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.
Energy Technology Data Exchange (ETDEWEB)
Mantz, A.B.; /KIPAC, Menlo Park /Stanford U., Phys. Dept.; Allen, S.W.; /KIPAC, Menlo Park /Stanford U., Phys. Dept. /SLAC; Morris, R.Glenn; /KIPAC, Menlo Park /SLAC
2016-07-15
This is the fifth in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. Our sample comprises 40 clusters identified as being dynamically relaxed and hot in Papers I and II of this series. Here we use constraints on cluster mass profiles from X-ray data to test some of the basic predictions of cosmological structure formation in the cold dark matter (CDM) paradigm. We present constraints on the concentration–mass relation for massive clusters, finding a power-law mass dependence with a slope of κm = -0.16 ± 0.07, in agreement with CDM predictions. For this relaxed sample, the relation is consistent with a constant as a function of redshift (power-law slope with 1 + z of κζ = -0.17 ± 0.26), with an intrinsic scatter of σln c = 0.16 ± 0.03. We investigate the shape of cluster mass profiles over the radial range probed by the data (typically ~50 kpc–1 Mpc), and test for departures from the simple Navarro–Frenk–White (NFW) form, for which the logarithmic slope of the density profile tends to -1 at small radii. Specifically, we consider as alternatives the generalized NFW (GNFW) and Einasto parametrizations. For the GNFW model, we find an average value of (minus) the logarithmic inner slope of β = 1.02 ± 0.08, with an intrinsic scatter of σβ = 0.22 ± 0.07, while in the Einasto case we constrain the average shape parameter to be α = 0.29 ± 0.04 with an intrinsic scatter of σα = 0.12 ± 0.04. Our results are thus consistent with the simple NFW model on average, but we clearly detect the presence of intrinsic, cluster-to-cluster scatter about the average.
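The NFW and GNFW profile shapes referred to above are straightforward to verify numerically. The sketch below (with illustrative parameter values, not the paper's fitted values) checks that the GNFW logarithmic density slope tends to -β at small radii and -3 at large radii:

```python
import numpy as np

def gnfw_density(r, rho0=1.0, r_s=1.0, beta=1.0):
    # Generalized NFW profile: rho = rho0 / ((r/r_s)^beta (1 + r/r_s)^(3 - beta));
    # beta = 1 recovers the standard NFW form
    x = r / r_s
    return rho0 / (x**beta * (1.0 + x)**(3.0 - beta))

def log_slope(r, beta=1.0, eps=1e-5):
    # Numerical logarithmic slope d ln(rho) / d ln(r) at radius r
    r1, r2 = r * (1.0 - eps), r * (1.0 + eps)
    num = np.log(gnfw_density(r2, beta=beta)) - np.log(gnfw_density(r1, beta=beta))
    return num / (np.log(r2) - np.log(r1))

inner = log_slope(1e-6)   # inner slope, tends to -beta = -1 for NFW
outer = log_slope(1e6)    # outer slope, tends to -3
```

With beta near the paper's average of 1.02, the inner slope is close to the canonical NFW value of -1, consistent with the reported result.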
Self-consistent chaotic transport in a high-dimensional mean-field Hamiltonian map model
Martínez-del-Río, D; Olvera, A; Calleja, R
2016-01-01
Self-consistent chaotic transport is studied in a Hamiltonian mean-field model. The model provides a simplified description of transport in marginally stable systems, including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean field that couples all the degrees of freedom. The model is formulated as a large set of $N$ coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant as in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system, which appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduce a non-autonomous map that allows a detailed study of th...
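A system of this type can be sketched as follows. This is our own minimal illustration, with an assumed coupling constant and an assumed mean-field definition (the ensemble order parameter); it is not the authors' exact formulation:

```python
import numpy as np

def mean_field(theta, coupling=0.5):
    # The perturbation amplitude and phase are set self-consistently by the
    # order parameter of the particle ensemble (assumed form for this sketch)
    z = np.mean(np.exp(1j * theta))
    return coupling * np.abs(z), np.angle(z)

def step(theta, p, coupling=0.5):
    # One iteration of N coupled standard-like area-preserving twist maps;
    # unlike the standard map, the kick amplitude and phase are dynamical
    a, phi = mean_field(theta, coupling)
    p_new = p + a * np.sin(theta - phi)          # mean-field kick
    theta_new = (theta + p_new) % (2.0 * np.pi)  # twist
    return theta_new, p_new

# Evolve an ensemble of 1000 particles from near-uniform initial conditions
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 1000)
p = rng.normal(0.0, 0.1, 1000)
for _ in range(100):
    theta, p = step(theta, p)
```

Tracking `mean_field(theta)` over iterations is the natural diagnostic for the asymptotic periodic behavior and macro-particle trapping described in the abstract.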
Relativistic Consistent Angular-Momentum Projected Shell Model: Relativistic Mean Field
Institute of Scientific and Technical Information of China (English)
LI Yan-Song; LONG Gui-Lu
2004-01-01
We develop a relativistic nuclear structure model, the relativistic consistent angular-momentum projected shell model (RECAPS), which combines relativistic mean-field theory with the angular-momentum projection method. In this new model, nuclear ground-state properties are first calculated consistently using relativistic mean-field (RMF) theory. Then the angular-momentum projection method is used to project out states with good angular momentum from a few important configurations. By diagonalizing the Hamiltonian, the energy levels and wave functions are obtained. This model is a new attempt at understanding the nuclear structure of normal nuclei and at predicting the properties of nuclei far from stability. In this paper, we describe the treatment of the relativistic mean field. A computer code, RECAPS-RMF, is developed. It solves the relativistic mean field with axially symmetric deformation in the spherical harmonic oscillator basis. Comparisons between our calculations and existing relativistic mean-field calculations are made to test the model. These include the ground-state properties of the spherical nuclei 16O and 208Pb and of the deformed nucleus 20Ne. Good agreement is obtained.
On the consistency of Monte Carlo track structure DNA damage simulations
Energy Technology Data Exchange (ETDEWEB)
Pater, Piotr, E-mail: piotr.pater@mail.mcgill.ca; Seuntjens, Jan; El Naqa, Issam [McGill University, Montreal, Quebec H3G 1A4 (Canada); Bernal, Mario A. [Instituto de Fisica Gleb Wataghin, Universidade Estadual de Campinas, Campinas 13083-859 (Brazil)
2014-12-15
Purpose: Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in published literature in DNA strand break (SB) yields and selected methodologies. Methods: The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28 keV–220 keV were superimposed on a geometric DNA model composed of 2.7 × 10^6 nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy E_TH. The SB frequencies and complexities in nucleosomes as a function of incident electron energies were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Results: Comparisons of different nonuniform dose distributions lacking charged particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give similar SB yields as the one based on energy deposit when E_TH ≈ 10.79 eV, but deviate significantly for higher E_TH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs and the degree of complexity of the damage in these nucleosomes diminish as the incident electron energy increases. DNA damage classification into SSB and DSB is highly dependent on the definitions of these higher order structures and their
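The sensitivity of SSB/DSB counts to the clustering definitions can be illustrated with a toy classifier. The rule set below is our own simplification (with a hypothetical 10 bp separation threshold), not the authors' methodology:

```python
def classify_breaks(breaks, max_sep=10):
    """Classify strand breaks (SBs) into SSBs and DSBs.

    breaks: list of (strand, position) tuples, strand in {0, 1}.
    A DSB is counted when two SBs on opposite strands lie within
    max_sep base pairs of each other; all remaining unpaired SBs
    are counted as SSBs. Both the pairing rule and the threshold
    are illustrative assumptions.
    """
    breaks = sorted(breaks, key=lambda b: b[1])
    used = [False] * len(breaks)
    dsb = 0
    for i, (s_i, x_i) in enumerate(breaks):
        if used[i]:
            continue
        for j in range(i + 1, len(breaks)):
            s_j, x_j = breaks[j]
            if x_j - x_i > max_sep:
                break  # positions are sorted; no closer partner exists
            if not used[j] and s_j != s_i:
                used[i] = used[j] = True
                dsb += 1
                break
    ssb = used.count(False)
    return ssb, dsb

# Two opposite-strand SBs 5 bp apart pair into one DSB;
# the isolated SB at position 500 remains an SSB
ssb, dsb = classify_breaks([(0, 100), (1, 105), (0, 500)])
```

Changing `max_sep` or the pairing rule changes the SSB/DSB split for the same underlying SB list, which is exactly the definitional sensitivity the abstract warns about.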
Ikonić, Z.; Harrison, P.; Kelsall, R. W.
2004-12-01
Analysis of hole transport in cascaded p-Si/SiGe quantum well structures is performed using self-consistent rate-equation simulations. The hole subband structure is calculated using the 6×6 k·p model, and then used to find carrier relaxation rates due to alloy disorder, acoustic and optical phonon scattering, as well as hole-hole scattering. The simulation accounts for the in-plane k-space anisotropy of both the hole subband structure and the scattering rates. Results are presented for prototype THz Si/SiGe quantum cascade structures.
Energy Technology Data Exchange (ETDEWEB)
Belenkov, E. A., E-mail: belenkov@csu.ru [Chelyabinsk State University (Russian Federation); Mavrinskii, V. V. [Nosov Magnitogorsk State Technical University (Russian Federation); Belenkova, T. E.; Chernov, V. M. [Chelyabinsk State University (Russian Federation)
2015-05-15
A model scheme is proposed for obtaining layered compounds consisting of carbon atoms in the sp- and sp2-hybridized states. This model is used to establish the possible existence of the following seven basic structural modifications of graphyne: α-, β1-, β2-, β3-, γ1-, γ2-, and γ3-graphyne. The polymorphic modifications β3-graphyne and γ3-graphyne are described. The basic structural modifications of graphyne contain diatomic polyyne chains and consist only of carbon atoms in two different crystallographically equivalent states. Other, nonbasic structural modifications of graphyne can be formed via the elongation of the carbyne chains that connect three-coordinated carbon atoms and via the formation of graphyne layers with a mixed structure consisting of basic layer fragments, such as α-β-graphyne, α-γ-graphyne, and β-γ-graphyne. The semiempirical quantum-mechanical MNDO, AM1, and PM3 methods and ab initio STO6-31G basis calculations are used to find geometrically optimized structures of the basic graphyne layers, their structural parameters, and their energies of sublimation. The energy of sublimation is found to be maximal for γ2-graphyne, which should therefore be the most stable structural modification of graphyne.
Self-consistent description of Λ hypernuclei in the quark-meson coupling model
Tsushima, K; Thomas, A W
1997-01-01
The quark-meson coupling model, which has been successfully used to describe the properties of both finite nuclei and infinite nuclear matter, is applied to a study of Λ hypernuclei. With the assumption that the (self-consistently) exchanged scalar and vector mesons couple only to the u and d quarks, a very weak spin-orbit force in the Λ-nucleus interaction is achieved automatically. This can be interpreted as a direct consequence of the quark structure of the Λ hyperon. Possible implications and extensions of the present investigation are also discussed.
Self-consistent Dark Matter simplified models with an s-channel scalar mediator
Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.
2017-03-01
We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.
Akiba, Miki; Okada, Susumu
2017-10-01
Using the density functional theory with generalized gradient approximation, we studied the energetics and electronic structures of nanoscale rotors consisting of tryptycene and hydrocarbon molecules with respect to their mutual orientation. Energy barriers for the rotational motion of an attached hydrocarbon molecule range from 40 to 200 meV, depending on the attached molecular species and arrangements. The electronic structure of the nanoscale molecular rotors does not depend on the rotational angle of the attached hydrocarbon molecules.
Moreno Chaparro, Nicolas
2015-06-30
We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, in which the properties governing the phase equilibria, such as the characteristic size of the chain, compressibility, density, and temperature, are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥20 times fewer than in the original system). We show that our methodology has potential applications in modeling systems of high molecular weight molecules at large scales, such as diblock copolymers and DNA.
Consistent neutron star models with magnetic field dependent equations of state
Chatterjee, Debarati; Novak, Jerome; Oertel, Micaela
2014-01-01
We present a self-consistent model for the study of the structure of a neutron star in strong magnetic fields. Starting from a microscopic Lagrangian, this model includes the effect of the magnetic field on the equation of state, the interaction of the electromagnetic field with matter (magnetisation), and anisotropies in the energy-momentum tensor, as well as general relativistic aspects. We build numerical axisymmetric stationary models and show the applicability of the approach with one example quark matter equation of state (EoS) often employed in the recent literature for studies of strongly magnetised neutron stars. For this EoS, the effect of inclusion of magnetic field dependence or the magnetisation do not increase the maximum mass significantly in contrast to what has been claimed by previous studies.
A self consistent chemically stratified atmosphere model for the roAp star 10 Aquilae
Nesvacil, Nicole; Ryabchikova, Tanya A; Kochukhov, Oleg; Akberov, Artur; Weiss, Werner W
2012-01-01
Context: Chemically peculiar A-type (Ap) stars are a subgroup of the CP2 stars which exhibit anomalous overabundances of numerous elements, e.g. Fe, Cr, Sr and rare earth elements. The pulsating subgroup of the Ap stars, the roAp stars, present ideal laboratories to observe and model pulsational signatures as well as the interplay of the pulsations with strong magnetic fields and vertical abundance gradients. Aims: Based on high-resolution spectroscopic observations and observed stellar energy distributions, we construct a self-consistent model atmosphere that accounts for modulations of the temperature-pressure structure caused by vertical abundance gradients for the roAp star 10 Aquilae (HD 176232). We demonstrate that such an analysis can be used to precisely determine the fundamental atmospheric parameters required for pulsation modelling. Methods: Average abundances were derived for 56 species. For Mg, Si, Ca, Cr, Fe, Co, Sr, Pr, and Nd, vertical stratification profiles were empirically derived using the...
A new k-epsilon model consistent with Monin-Obukhov similarity theory
DEFF Research Database (Denmark)
van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.
2016-01-01
A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
Locally self-consistent Green’s function approach to the electronic structure problem
DEFF Research Database (Denmark)
Abrikosov, I.A.; Simak, S.I.; Johansson, B.;
1997-01-01
The locally self-consistent Green's function (LSGF) method is an order-N method for calculation of the electronic structure of systems with an arbitrary distribution of atoms of different kinds on an underlying crystal lattice. For each atom Dyson's equation is used to solve the electronic multiple...
A simplified stock-flow consistent post-Keynesian growth model
dos Santos, Claudio H.; Zezza, Gennaro
2005-01-01
Despite being arguably the most rigorous form of structuralist/post-Keynesian macroeconomics, stock-flow consistent models are quite often complex and difficult to deal with. This paper presents a model that, despite retaining the methodological advantages of the stock-flow consistent method, is intuitive enough to be taught at an undergraduate level. Moreover, the model can eas...
Ab initio self-consistent x-ray absorption fine structure analysis for metalloproteins.
Dimakis, Nicholas; Bunker, Grant
2006-12-01
X-ray absorption fine structure is a powerful tool for probing the structures of metals in proteins in both crystalline and noncrystalline environments. Until recently, a fundamental problem in biological XAFS has been that ad hoc assumptions must be made concerning the vibrational properties of the amino acid residues that are coordinated to the metal to fit the data. Here, an automatic procedure for accurate structural determination of active sites of metalloproteins is presented. It is based on direct multiple-scattering simulation of experimental X-ray absorption fine structure spectra combining electron multiple scattering calculations with density functional theory calculations of vibrational modes of amino acid residues and the genetic algorithm differential evolution to determine a global minimum in the space of fitting parameters. Structure determination of the metalloprotein active site is obtained through a self-consistent iterative procedure with only minimal initial information.
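Differential evolution, the global optimizer named above, is a standard population-based algorithm. The sketch below is a generic DE/rand/1/bin implementation shown minimizing a simple test function; it is not the authors' XAFS fitting code:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    # Classic DE/rand/1/bin scheme: mutate with a scaled difference vector,
    # apply binomial crossover, then keep the trial only if it improves
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(bounds)
    x = lo + rng.random((pop, dim)) * (hi - lo)
    fx = np.array([f(xi) for xi in x])
    for _ in range(gens):
        for i in range(pop):
            idx = [j for j in range(pop) if j != i]
            a, b, c = x[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # ensure at least one mutated gene
            trial = np.where(cross, mutant, x[i])
            f_trial = f(trial)
            if f_trial <= fx[i]:
                x[i], fx[i] = trial, f_trial
    best = int(np.argmin(fx))
    return x[best], fx[best]

# Illustrative run: the sphere function has its global minimum at the origin
x_best, f_best = differential_evolution(lambda v: float(np.sum(v**2)),
                                        [(-5, 5), (-5, 5)])
```

In the paper's setting, `f` would be the XAFS fit residual over the structural parameter space; here it is a stand-in test function.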
Guinot, Vincent
2017-09-01
The Integral Porosity and Dual Integral Porosity two-dimensional shallow water models have been proposed recently as efficient upscaled models for urban floods. Very little is known so far about their consistency and wave propagation properties. Simple numerical experiments show that both models are unusually sensitive to the computational grid. In the present paper, a two-dimensional consistency and characteristic analysis is carried out for these two models. The following results are obtained: (i) the models are almost insensitive to grid design when the porosity is isotropic, (ii) anisotropic porosity fields induce an artificial polarization of the mass/momentum fluxes along preferential directions when triangular meshes are used and (iii) extra first-order derivatives appear in the governing equations when regular, quadrangular cells are used. The hyperbolic system is thus mesh-dependent, and with it the wave propagation properties of the model solutions. Criteria are derived to make the solution less mesh-dependent, but it is not certain that these criteria can be satisfied at all computational points when real-world situations are dealt with.
Structuring very large domain models
DEFF Research Database (Denmark)
Störrle, Harald
2010-01-01
View/viewpoint approaches like IEEE 1471-2000 or Kruchten's 4+1 view model are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and on consistency between multiple views, practical questions such as the structuring a...
On the Lagrangian structure of 3D consistent systems of asymmetric quad-equations
Boll, Raphael
2011-01-01
Recently, the first-named author gave a classification of 3D consistent 6-tuples of quad-equations with the tetrahedron property; several novel asymmetric 6-tuples have been found. Due to 3D consistency, these 6-tuples can be extended to discrete integrable systems on Z^m. We establish Lagrangian structures and flip-invariance of the action functional for the class of discrete integrable systems involving equations for which some of the biquadratics are non-degenerate and some are degenerate. This class covers, among others, some of the above mentioned novel systems.
Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.
2011-03-01
The connection between membrane inhomogeneity and the structural basis of lipid rafts has sparked interest in the lateral organization of model lipid bilayers of two and three components. In an effort to investigate anisotropic lipid distribution in mixed bilayers, a self-consistent mean-field theoretical model is applied to palmitoyloleoylphosphatidylcholine (POPC)-palmitoyl sphingomyelin (PSM)-cholesterol mixtures. The compositional dependence of lateral organization in these mixtures is mapped onto a ternary plot. The model utilizes molecular dynamics simulations to estimate interaction parameters and to construct chain conformation libraries. We find that at some concentration ratios the bilayers separate spatially into regions of higher and lower chain order coinciding with areas enriched with PSM and POPC, respectively. To examine the effect of the asymmetric chain structure of POPC on bilayer lateral inhomogeneity, we consider POPC-lipid interactions with and without angular dependence. Results are compared with experimental data and with results from a similar model for mixtures of dioleoylphosphatidylcholine, steroyl sphingomyelin, and cholesterol.
A parameter study of self-consistent disk models around Herbig AeBe stars
Meijer, J; De Koter, A; Dullemond, C P; Van Boekel, R; Waters, L B F M
2008-01-01
We present a parameter study of self-consistent models of protoplanetary disks around Herbig AeBe stars. We use the code developed by Dullemond and Dominik, which solves the 2D radiative transfer problem including an iteration for the vertical hydrostatic structure of the disk. This grid of models will be used for several studies on disk emission and mineralogy in follow-up papers. In this paper we take a first look at the new models, compare them with previous modeling attempts, and focus on the effects of various parameters on the overall structure of the SED that leads to the classification of Herbig AeBe stars into two groups, with a flaring (group I) or self-shadowed (group II) SED. We find that the parameter of overriding importance to the SED is the total mass in grains smaller than 25 μm, confirming the earlier results by Dullemond and Dominik. All other parameters studied have only minor influences, and will alter the SED type only in borderline cases. We find that there is no natural dichotomy between ...
Comparative Protein Structure Modeling Using MODELLER.
Webb, Benjamin; Sali, Andrej
2016-06-20
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.
Self-Consistent Model for Pulsed Direct-Current N2 Glow Discharge
Institute of Scientific and Technical Information of China (English)
Liu Chengsen; Wang Dezhen
2005-01-01
A self-consistent analysis of a pulsed direct-current (DC) N2 glow discharge is presented. The model is based on a numerical solution of the continuity equations for electrons and ions coupled with Poisson's equation. The spatial-temporal variations of the ionic and electronic densities and the electric field are obtained. The electric field structure exhibits all the characteristic regions of a typical glow discharge (the cathode fall, the negative glow, and the positive column). Current-voltage characteristics of the discharge can be obtained from the model. The calculated current-voltage results using a constant secondary electron emission coefficient for a gas pressure of 133.32 Pa are in reasonable agreement with experiment.
Macro-particle FEL model with self-consistent spontaneous radiation
Litvinenko, Vladimir N
2015-01-01
Spontaneous radiation plays an important role in SASE FELs and storage ring FELs operating in giant-pulse mode. It defines the correlation function of the FEL radiation as well as many of its spectral features. Simulations of these systems using randomly distributed macro-particles with charge much higher than that of a single electron create the problem of anomalously strong spontaneous radiation, limiting the capabilities of many FEL codes. In this paper we present a self-consistent macro-particle model which provides statistically exact simulation of multi-mode, multi-harmonic and multi-frequency short-wavelength 3-D FELs, including high-power and saturation effects. The use of macro-particle clones allows both spontaneous and induced radiation to be treated in the same fashion. Simulations using this model do not require a seed and provide the complete temporal and spatial structure of the FEL optical field.
Keller, D. E.; Fischer, A. M.; Frei, C.; Liniger, M. A.; Appenzeller, C.; Knutti, R.
2014-07-01
Many climate impact assessments over topographically complex terrain require high-resolution precipitation time-series that have a spatio-temporal correlation structure consistent with observations. This consistency is essential for spatially distributed modelling of processes with non-linear responses to precipitation input (e.g. soil water and river runoff modelling). In this regard, weather generators (WGs) designed and calibrated for multiple sites are an appealing technique to stochastically simulate time-series that approximate the observed temporal and spatial dependencies. In this study, we present a stochastic multi-site precipitation generator and validate it over the hydrological catchment Thur in the Swiss Alps. The model consists of several Richardson-type WGs that are run with correlated random number streams reflecting the observed correlation structure among all possible station pairs. A first-order two-state Markov process simulates the intermittence of daily precipitation, while precipitation amounts are simulated from a mixture model of two exponential distributions. The model is calibrated separately for each month over the period 1961-2011. The WG is skilful at individual sites in representing the annual cycle of precipitation statistics, such as mean wet-day frequency and intensity as well as monthly precipitation sums. It realistically reproduces multi-day statistics such as the frequencies of dry and wet spell lengths and precipitation sums over consecutive wet days. Substantial added value is demonstrated in simulating daily areal precipitation sums in comparison to multiple WGs that lack the spatial dependency in the stochastic process: the multi-site WG is capable of capturing about 95% of the observed variability in daily areal sums, while summed time-series from multiple single-site WGs only explain about 13%. Limitations of the WG have been detected in reproducing observed variability from year to year, a component that has
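The core mechanism described above (correlated random streams driving a two-state Markov occurrence process, with wet-day amounts from a mixture of two exponentials) can be sketched as follows. All parameter values are illustrative assumptions, not the calibrated values from the study:

```python
import math
import numpy as np

# Map standard normals to uniforms via the normal CDF
_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def simulate_precip(n_days, corr, p_wd=0.3, p_ww=0.6, w1=0.7,
                    mu1=2.0, mu2=12.0, seed=0):
    # Richardson-type multi-site generator (sketch): correlated Gaussian
    # streams (via Cholesky) are mapped to uniforms; a first-order two-state
    # Markov chain decides wet/dry per site; wet-day amounts are drawn from
    # a mixture of two exponentials (means mu1, mu2, mixing weight w1)
    rng = np.random.default_rng(seed)
    n_sites = corr.shape[0]
    L = np.linalg.cholesky(corr)
    u_occ = _norm_cdf(rng.standard_normal((n_days, n_sites)) @ L.T)
    u_amt = _norm_cdf(rng.standard_normal((n_days, n_sites)) @ L.T)
    wet = np.zeros((n_days, n_sites), dtype=bool)
    for t in range(n_days):
        # transition probability depends on yesterday's state at each site
        p = np.where(wet[t - 1], p_ww, p_wd) if t > 0 else np.full(n_sites, p_wd)
        wet[t] = u_occ[t] < p
    comp = rng.random((n_days, n_sites)) < w1   # pick mixture component
    mu = np.where(comp, mu1, mu2)
    amounts = -mu * np.log(1.0 - u_amt)         # inverse-CDF exponential draw
    return np.where(wet, amounts, 0.0)

# One year at two sites with strongly correlated streams
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
precip = simulate_precip(365, corr)
```

Because the occurrence and amount streams are correlated across sites, daily areal sums retain inter-station dependence, which single-site WGs run independently would miss.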
Structural consistency analysis of recombinant and wild-type human serum albumin
Cao, Hui-Ling; Sun, Li-Hua; Liu, Li; Li, Jian; Tang, Lin; Guo, Yun-Zhu; Mei, Qi-Bing; He, Jian-Hua; Yin, Da-Chuan
2017-01-01
Recombinant human serum albumin (rHSA) is a potential alternative to human serum albumin (HSA) that may ease the severe worldwide shortage of HSA. In theory, rHSA and HSA are identical, and since structure determines function, a 3D structural consistency analysis of rHSA and HSA is of utmost importance as the basis of their functional consistency. In this paper, the crystal structures of rHSA at a resolution limit of 2.22 Å and of HSA at 2.30 Å were determined by X-ray diffraction (XRD) and deposited in the Protein Data Bank (PDB) with accession codes 4G03 (rHSA) and 4G04 (HSA). The differences between rHSA and HSA were systematically analyzed in terms of crystallization behavior, diffraction data, and three-dimensional (3D) structure. The superimposed contrast analysis indicated that rHSA and HSA achieve a structural similarity of 99%, with an r.m.s. deviation of 0.397 Å over the corresponding Cα atoms. In addition, the number of α-helices in the rHSA or HSA molecule was verified to be 30. As a result, rHSA can potentially replace HSA. The study provides a theoretical and experimental basis for clinical and additional applications of rHSA, and is also a good example of the application of genetic engineering.
Non-zero density-velocity consistency relations for large scale structures
Rizzo, Luca Alberto; Valageas, Patrick
2016-01-01
We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross-correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the non-linear regime and for biased galaxy fields. They can be used to constrain non-standard cosmological scenarios or the large-scale galaxy bias.
Fishkind, Donniell E; Tang, Minh; Vogelstein, Joshua T; Priebe, Carey E
2012-01-01
A stochastic block model consists of a random partition of n vertices into blocks 1,2,...,K for which, conditioned on the partition, every pair of vertices has probability of adjacency entirely determined by the block membership of the two vertices. (The model parameters are K, the distribution of the random partition, and a communication probability matrix M in [0,1]^(K x K) listing the adjacency probabilities associated with all pairs of blocks.) Suppose a realization of the n x n vertex adjacency matrix is observed, but the underlying partition of the vertices into blocks is not observed; the main inferential task is to correctly partition the vertices into the blocks with only a negligible number of vertices misassigned. For this inferential task, Rohe et al. (2011) prove the consistency of spectral partitioning applied to the normalized Laplacian, and Sussman et al. (2011) extend this to prove consistency of spectral partitioning directly on the adjacency matrix; both procedures assume that K and rankM a...
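The generative model defined above can be sampled directly; a minimal pure-Python sampler is sketched below. For simplicity the block sizes are fixed rather than drawn from a distribution over partitions, and the parameter values are hypothetical.

```python
import random


def sample_sbm(block_sizes, M, seed=0):
    """Sample an undirected stochastic block model.

    block_sizes: number of vertices in each of the K blocks (fixed here,
    rather than random, for simplicity).
    M: K x K symmetric matrix of adjacency probabilities between blocks.
    Returns the n x n adjacency matrix and the true block labels.
    """
    rng = random.Random(seed)
    labels = [k for k, size in enumerate(block_sizes) for _ in range(size)]
    n = len(labels)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # adjacency depends only on the block memberships of i and j
            if rng.random() < M[labels[i]][labels[j]]:
                A[i][j] = A[j][i] = 1
    return A, labels
```

The inferential task in the abstract is then to recover `labels` from `A` alone, e.g. by spectral partitioning of the adjacency matrix or normalized Laplacian.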
Directory of Open Access Journals (Sweden)
Ajay Kumar Bakhla
2013-01-01
Background: As there are no instruments to measure psychological wellness or distress in visually impaired students, we studied the internal consistency and factor structure of the 12-item General Health Questionnaire (GHQ-12) in visually impaired children. Materials and Methods: Internal consistency analysis (Cronbach's alpha and item-total correlation) and exploratory factor analysis (principal component analysis) were carried out to identify the factor structure of the GHQ-12. Results: All items of the GHQ-12 were significantly associated with each other, and the Cronbach's alpha coefficient for the scale was 0.7. Principal component analysis yielded a three-factor solution that accounted for 47.92% of the total variance. The factors, 'general well-being', 'depression' and 'cognitive', had Cronbach's alpha coefficients of 0.70, 0.59 and 0.34, respectively. Conclusion: Our findings suggest that the GHQ-12 is a reliable scale with adequate internal consistency and a multidimensional factor structure in visually impaired students.
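Cronbach's alpha, the internal-consistency statistic reported in this and the following questionnaire study, is computed directly from item scores; the toy data in the usage example are illustrative, not from either study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance(totals)).

    items: one list of scores per item, all over the same respondents.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))
```

For two perfectly correlated items, e.g. `cronbach_alpha([[1, 2, 3, 4, 5], [2, 4, 6, 8, 10]])`, the statistic is 8/9 ≈ 0.89; uncorrelated items drive it toward 0.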
The relativistic consistent angular-momentum projected shell model study of the N=Z nucleus 52Fe
Institute of Scientific and Technical Information of China (English)
LI YanSong; LONG GuiLu
2009-01-01
The relativistic consistent angular-momentum projected shell model (RECAPS) is used in the study of the structure and electromagnetic transitions of the low-lying states in the N=Z nucleus 52Fe. The model calculations show reasonably good agreement with the data. The backbending at 12+ is reproduced, and the energy-level structure suggests that neutron-proton interactions play important roles.
Regularized Structural Equation Modeling.
Jacobucci, Ross; Grimm, Kevin J; McArdle, John J
A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
Tides, Rotation Or Anisotropy? Self-consistent Nonspherical Models For Globular Clusters
Varri, Anna L.; Bertin, G.
2011-01-01
Spherical models of quasi-relaxed stellar systems provide a successful zeroth-order description of globular clusters. Yet the great progress made in recent years in the acquisition of detailed information about the structure of these stellar systems calls for a renewed effort on the modelling side. In particular, more general analytical models would allow us to address the long-standing issue of the physical origin of the deviations of globular clusters from spherical symmetry, which can now be properly measured. In fact, it remains to be established which of external tides, internal rotation, and pressure anisotropy causes the observed flattening. In this paper we focus on the first two physical ingredients. We start by briefly describing a recently studied family of triaxial models that incorporate in a self-consistent way the tidal effects of the host galaxy, as a collisionless analogue of the Roche problem (Varri & Bertin ApJ 2009). We then present two new families of axisymmetric models in which the deviations from spherical symmetry are induced by the presence of internal rotation. The first is an extension of the well-known family of King models to the case of axisymmetric equilibria flattened by solid-body rotation. The second family is characterized by differential rotation, designed to be rigid in the center and to vanish in the outer parts, where the imposed truncation in phase space becomes effective. For possible application to globular clusters, the models of interest are those, in both families, characterized by low values of the rotation strength parameter and quasi-spherical shapes. For general interest in stellar dynamics, we show that, for high values of that parameter, the differentially rotating models may exhibit unexpected morphologies, even with a toroidal core.
DEFF Research Database (Denmark)
Kock, Anders Bredahl
2015-01-01
the tuning parameter by Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form c/T if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned...
Werheit, Helmut
2016-10-01
The complex, highly distorted structure of boron carbide is composed of B12 and B11C icosahedra and CBC, CBB and B□B linear elements, whose concentrations each depend on the chemical composition. These concentrations are shown to be consistent with lattice parameters, fine-structure data and chemical composition. The respective impacts on the lattice parameters are estimated and discussed. Considering the contributions of the different structural components to the energy of the overall structure makes the structure and its variation within the homogeneity range plausible; in particular that of B4.3C, which represents the carbon-rich limit of the homogeneity range. Virtually replacing the B□B components in B4.3C by CBC yields the hypothetical, moderately distorted B4.0C (structure formula (B11C)CBC). The related reduction of lattice parameters is compatible with recently reported, uncommonly prepared single crystals whose compositions deviate from B4.3C.
Towards a consistent model of the Galaxy; 2, Derivation of the model
Méra, D; Schäffer, R
1998-01-01
We use the calculations derived in a previous paper (Méra, Chabrier and Schaeffer, 1997), based on observational constraints arising from star counts, microlensing experiments and kinematic properties, to determine the amount of dark matter under the form of stellar and sub-stellar objects in the different parts of the Galaxy. This yields the derivation of different mass-models for the Galaxy. In the light of all the afore-mentioned constraints, we discuss two models that correspond to different conclusions about the nature and the location of the Galactic dark matter. In the first model there is a small amount of dark matter in the disk, and a large fraction of the dark matter in the halo is still undetected and likely to be non-baryonic. The second, less conventional model is consistent with entirely, or at least predominantly baryonic dark matter, under the form of brown dwarfs in the disk and white dwarfs in the dark halo. We derive observational predictions for these two models which should be verifiabl...
DeGiorgio, Michael; Rosenberg, Noah A
2016-08-01
In the last few years, several statistically consistent consensus methods for species tree inference have been devised that are robust to the gene tree discordance caused by incomplete lineage sorting in unstructured ancestral populations. One source of gene tree discordance that has only recently been identified as a potential obstacle for phylogenetic inference is ancestral population structure. In this article, we describe a general model of ancestral population structure, and by relying on a single carefully constructed example scenario, we show that the consensus methods Democratic Vote, STEAC, STAR, R(∗) Consensus, Rooted Triple Consensus, Minimize Deep Coalescences, and Majority-Rule Consensus are statistically inconsistent under the model. We find that among the consensus methods evaluated, the only method that is statistically consistent in the presence of ancestral population structure is GLASS/Maximum Tree. We use simulations to evaluate the behavior of the various consensus methods in a model with ancestral population structure, showing that as the number of gene trees increases, estimates on the basis of GLASS/Maximum Tree approach the true species tree topology irrespective of the level of population structure, whereas estimates based on the remaining methods only approach the true species tree topology if the level of structure is low. However, through simulations using species trees both with and without ancestral population structure, we show that GLASS/Maximum Tree performs unusually poorly on gene trees inferred from alignments with little information. This practical limitation of GLASS/Maximum Tree together with the inconsistency of other methods prompts the need for both further testing of additional existing methods and development of novel methods under conditions that incorporate ancestral population structure.
Self-consistent modeling of terahertz waveguide and cavity with frequency-dependent conductivity
Huang, Y. J.; Chu, K. R.; Thumm, M.
2015-01-01
The surface resistance of metals, and hence the Ohmic dissipation per unit area, scales with the square root of the frequency of an incident electromagnetic wave. As is well recognized, this can lead to excessive wall losses at terahertz (THz) frequencies. On the other hand, high-frequency oscillatory motion of conduction electrons tends to mitigate the collisional damping. As a result, the classical theory predicts that metals behave more like a transparent medium at frequencies above the ultraviolet. Such a behavior difference is inherent in the AC conductivity, a frequency-dependent complex quantity commonly used to treat electromagnetics of metals at optical frequencies. The THz region falls in the gap between microwave and optical frequencies. However, metals are still commonly modeled by the DC conductivity in currently active vacuum electronics research aimed at the development of high-power THz sources (notably the gyrotron), although a small reduction of the DC conductivity due to surface roughness is sometimes included. In this study, we present a self-consistent modeling of the gyrotron interaction structures (a metallic waveguide or cavity) with the AC conductivity. The resulting waveguide attenuation constants and cavity quality factors are compared with those of the DC-conductivity model. The reduction in Ohmic losses under the AC-conductivity model is shown to be increasingly significant as the frequency reaches deeper into the THz region. Such effects are of considerable importance to THz gyrotrons for which the minimization of Ohmic losses constitutes a major design consideration.
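The frequency dependence discussed above can be illustrated with the simple Drude form of the AC conductivity and the good-conductor surface impedance. The copper-like sigma_dc and relaxation time tau below are round illustrative numbers (not values from the paper), and the sign convention assumes an exp(+i*omega*t) time dependence.

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)


def sigma_ac(sigma_dc, tau, freq):
    """Drude AC conductivity, exp(+i*omega*t) convention:
    sigma(omega) = sigma_dc / (1 + i*omega*tau)."""
    omega = 2.0 * math.pi * freq
    return sigma_dc / (1.0 + 1j * omega * tau)


def surface_resistance(sigma, freq):
    """Rs = Re sqrt(i*omega*mu0 / sigma), the good-conductor surface
    impedance; accepts a real (DC) or complex (AC) conductivity."""
    omega = 2.0 * math.pi * freq
    return cmath.sqrt(1j * omega * MU0 / sigma).real


# Copper-like illustrative values (assumed, order-of-magnitude only)
SIGMA_DC, TAU = 5.8e7, 2.5e-14
```

With these numbers, the AC and DC models give essentially the same Rs at 1 GHz, while at 1 THz (omega*tau ≈ 0.16) the AC model predicts a visibly lower surface resistance, consistent with the reduced Ohmic loss the abstract describes.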
Towards self-consistent modelling of the Sgr A* accretion flow: linking theory and observation
Roberts, Shawn R.; Jiang, Yan-Fei; Wang, Q. Daniel; Ostriker, Jeremiah P.
2017-04-01
The interplay between supermassive black holes (SMBHs) and their environments is believed to play an essential role in galaxy evolution. The majority of these SMBHs are in the radiatively inefficient accretion phase, where this interplay remains elusive, though suggestively important, due to few observational constraints. To remedy this, we directly fit 2D hydrodynamic simulations to Chandra observations of Sgr A* with Markov chain Monte Carlo sampling, self-consistently modelling the 2D inflow-outflow solution for the first time. We find that the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, rc ≈ 0.056 rb ≈ 8 × 10-3 pc, where rb is the Bondi radius. Less than 1 per cent of the inflowing gas accretes on to the SMBH, the remainder being ejected in a polar outflow. We decouple the quiescent point-like emission from the spatially extended flow. We find that this point-like emission, accounting for ˜4 per cent of the quiescent flux, is spectrally too steep to be explained by unresolved flares or bremsstrahlung, but is likely a combination of a relatively steep synchrotron power law and the high-energy tail of inverse-Compton emission. With this self-consistent model of the accretion flow structure, we make predictions for the flow dynamics and discuss how future X-ray spectroscopic observations can further our understanding of the Sgr A* accretion flow.
Hedlund, Ann; Ateg, Mattias; Andersson, Ing-Marie; Rosén, Gunnar
2010-04-01
Workers' motivation to actively take part in improvements to the work environment is assumed to be important for the efficiency of investments for that purpose. That gives rise to the need for a tool to measure this motivation. A questionnaire to measure motivation for improvements to the work environment has been designed. Internal consistency and test-retest reliability of the domains of the questionnaire have been measured, and the factorial structure has been explored, from the answers of 113 employees. The internal consistency is high (0.94), as well as the correlation for the total score (0.84). Three factors are identified accounting for 61.6% of the total variance. The questionnaire can be a useful tool in improving intervention methods. The expectation is that the tool can be useful, particularly with the aim of improving efficiency of companies' investments for work environment improvements. Copyright 2010 Elsevier Ltd. All rights reserved.
Globular structures of a helix-coil copolymer: Self-consistent treatment
Nowak, C.; Rostiashvili, V. G.; Vilgis, T. A.
2007-01-01
A self-consistent-field theory was developed in the grand canonical ensemble formulation to study transitions in a helix-coil multiblock globule. Helical and coil parts are treated as stiff rods and self-avoiding walks of variable lengths, respectively. The resulting field theory takes into account, in addition to the conventional Zimm-Bragg [J. Chem. Phys. 31, 526 (1959)] parameters, three-dimensional interaction terms. The appropriate differential equations which determine the self-consistent fields were solved numerically with the finite element method. Three different phase states are found: open chain, amorphous globule, and nematic liquid-crystalline (LC) globule. The LC-globule formation is driven by the interplay between the hydrophobic helical segment attraction and the anisotropic globule surface energy, which is of entropic nature. The full phase diagram of the helix-coil copolymer was calculated and thoroughly discussed. The suggested theory shows a clear interplay between secondary and tertiary structures in globular homopolypeptides.
Directory of Open Access Journals (Sweden)
Paul M W Hackett
2016-03-01
When behaviour is interpreted in a reliable manner (i.e., robustly across different situations and times), its explained meaning may be seen to possess hermeneutic consistency. In this essay I present an evaluation of the hermeneutic consistency that I propose may be present when the research tool known as the mapping sentence is used to create generic structural ontologies. I also claim that theoretical and empirical validity is a likely result of employing the mapping sentence in research design and interpretation. These claims are non-contentious within the realm of quantitative psychological and behavioural research. However, I extend the scope both of facet-theory-based research and of claims for its structural utility, reliability and validity to philosophical and qualitative investigations. I assert that the hermeneutic consistency of a structural ontology is a product of the structural representation's ontological components and the mereological relationships between these ontological sub-units: the mapping sentence seminally allows for the depiction of such structure.
Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code
Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.
2017-02-01
Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport model and the multimode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, a pedestal width scaling and an infinite-n ballooning pressure gradient model, together with a pedestal density model based on a line-average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than in the original designs; expressed as fractions of the design values they are 0.49 (China), 0.66 (Europe), and 0.58 (India). Furthermore, in relation to sensitivity, it is found that increasing the auxiliary heating power and the electron line-average density from their design values yields an enhancement of fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.
Scale-consistent two-way coupling of land-surface and atmospheric models
Schomburg, A.; Venema, V.; Ament, F.; Simmer, C.
2009-04-01
Processes at the land surface and in the atmosphere act on different spatial scales. While in the atmosphere small-scale heterogeneity is smoothed out quickly by turbulent mixing, this is not the case at the land surface, where the small-scale variability of orography, land cover, soil texture, soil moisture etc. varies only slowly in time. For modelling the fluxes between the land surface and the atmosphere it is consequently more scale-consistent to model the surface processes at a higher spatial resolution than the atmospheric processes. The mosaic approach is one way to deal with this problem. Using this technique, the Soil Vegetation Atmosphere Transfer (SVAT) scheme is solved at a higher resolution than the atmosphere, which is feasible since a SVAT module generally demands considerably less computation time than the atmospheric part. The upscaling of the turbulent fluxes of sensible and latent heat at the interface to the atmosphere is realized by averaging; due to the nonlinearities involved, this is a more sensible approach than averaging the soil properties and computing the fluxes in a second step. The atmospheric quantities are usually assumed to be homogeneous for all soil sub-pixels pertaining to one coarse atmospheric grid box. In this work, the aim is to develop a downscaling approach in which the atmospheric quantities at the lowest model layer are disaggregated before they enter the SVAT module at the higher mosaic resolution. The overall aim is a better simulation of the heat fluxes, which play an important role for the energy and moisture budgets at the surface. The disaggregation rules for the atmospheric variables will depend on high-resolution surface properties and the current atmospheric conditions. To reduce biases due to nonlinearities we will add small-scale variability according to such rules, as well as noise for the variability we cannot explain. The model used in this work is the COSMO-model, the weather forecast model (and regional
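The nonlinearity argument above (average the tile fluxes, rather than computing one flux from averaged surface properties) can be seen with a toy bulk flux formula. The stability-dependent exchange coefficient below is a made-up illustration, not the COSMO parameterization.

```python
def bulk_flux(c0, wind, t_surf, t_air):
    """Toy bulk sensible-heat flux with a stability-dependent exchange
    coefficient (illustrative only): unstable tiles (Ts > Ta) exchange
    more efficiently, which makes the flux nonlinear in Ts."""
    c_h = c0 * (1.0 + 0.5 * max(0.0, t_surf - t_air))
    return c_h * wind * (t_surf - t_air)


def mosaic_flux(c0, wind, t_surf_tiles, t_air):
    """Mosaic approach: flux on each high-resolution tile, then average."""
    return sum(bulk_flux(c0, wind, ts, t_air) for ts in t_surf_tiles) / len(t_surf_tiles)


def aggregated_flux(c0, wind, t_surf_tiles, t_air):
    """Naive approach: average the surface first, then compute one flux."""
    ts_mean = sum(t_surf_tiles) / len(t_surf_tiles)
    return bulk_flux(c0, wind, ts_mean, t_air)
```

With one warm and one cold tile around a neutral mean, the aggregated flux is exactly zero while the tile-wise fluxes do not cancel, which is why the mosaic averaging order matters.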
Beyond Poisson-Boltzmann: fluctuations and fluid structure in a self-consistent theory.
Buyukdagli, S; Blossey, R
2016-09-01
Poisson-Boltzmann (PB) theory is the classic approach to soft matter electrostatics and has been applied to numerous physical chemistry and biophysics problems. Its essential limitations are in its neglect of correlation effects and fluid structure. Recently, several theoretical insights have allowed the formulation of approaches that go beyond PB theory in a systematic way. In this topical review, we provide an update on the developments achieved in the self-consistent formulations of correlation-corrected Poisson-Boltzmann theory. We introduce a corresponding system of coupled non-linear equations for both continuum electrostatics with a uniform dielectric constant, and a structured solvent (a dipolar Coulomb fluid), including non-local effects. While the approach is only approximate and also limited to corrections in the so-called weak fluctuation regime, it allows us to include physically relevant effects, as we show for a range of applications of these equations.
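As a numerical baseline for the correlation-corrected theories reviewed above, the linearized PB (Debye-Hueckel) equation in one dimension, phi'' = kappa^2 * phi, can be solved with a short finite-difference sketch. The boundary conditions and units are illustrative, not from the review.

```python
def solve_debye_huckel(phi0, kappa, L, n):
    """Finite-difference solution of phi'' = kappa^2 * phi on [0, L]
    with phi(0) = phi0 and phi(L) = 0 (linearized PB / Debye-Hueckel).
    Uses n interior points and the Thomas tridiagonal algorithm."""
    h = L / (n + 1)
    # tridiagonal system: -phi[i-1] + (2 + (kappa*h)**2) phi[i] - phi[i+1] = rhs
    a = [-1.0] * n                       # sub-diagonal
    b = [2.0 + (kappa * h) ** 2] * n     # diagonal
    c = [-1.0] * n                       # super-diagonal
    d = [0.0] * n
    d[0] = phi0                          # left boundary enters the rhs

    for i in range(1, n):                # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]

    phi = [0.0] * n                      # back substitution
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return [phi0] + phi + [0.0]
```

The discrete solution decays monotonically away from the charged wall over a screening length 1/kappa, matching the analytic sinh profile; the correlation-corrected theories replace this single linear equation with the coupled non-linear system described in the abstract.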
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Meldrum, Andrew
This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...
Energy Technology Data Exchange (ETDEWEB)
Pain, J.C. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France)]. E-mail: jean-christophe.pain@cea.fr; Dejonghe, G. [CEA/DIF, B.P. 12, 91680 Bruyeres-le-Chatel Cedex (France); Blenski, T. [CEA/DSM/DRECAM/SPAM, Centre d' Etudes de Saclay, 91191 Gif-sur-Yvette Cedex (France)
2006-05-15
We propose a thermodynamically consistent model of detailed screened ions in plasmas, described by superconfigurations. In the present work, the electrons, bound and free, are treated quantum-mechanically, so that resonances are carefully taken into account in the self-consistent calculation of the electronic structure of each superconfiguration. The procedure is in some sense similar to the one used in the Inferno code developed by D.A. Liberman; here, however, we perform this calculation within the ion-sphere model for each superconfiguration. The superconfiguration approximation allows rapid calculation of the necessary averages over all possible configurations representing excited states of bound electrons. The model enables a fully quantum-mechanical self-consistent calculation of the electronic structure of ions and provides the relevant thermodynamic quantities (e.g., internal energy, Helmholtz free energy and pressure), together with an improved treatment of pressure ionization. It should therefore give better insight into the impact of plasma effects on photoabsorption spectra.
Toward self-consistent tectono-magmatic numerical model of rift-to-ridge transition
Gerya, Taras; Bercovici, David; Liao, Jie
2017-04-01
Natural data from modern and ancient lithospheric extension systems suggest a three-dimensional (3D) character of deformation and a complex relationship between magmatism and tectonics during the entire rift-to-ridge transition. Self-consistent high-resolution 3D magmatic-thermomechanical numerical approaches therefore stand as a minimum complexity requirement for modeling and understanding this transition. Here we present results from our new high-resolution 3D finite-difference marker-in-cell rift-to-ridge models, which account for magmatic accretion of the crust and use a non-linear strain-weakened visco-plastic rheology of rocks that couples brittle/plastic failure and ductile damage caused by grain size reduction. Numerical experiments suggest that the nucleation of rifting and ridge-transform patterns is decoupled in both space and time. At intermediate stages, the two patterns can coexist and interact, which triggers the development of detachment faults, failed rift arms, hyper-extended margins and oblique proto-transforms. En echelon rift patterns typically develop in the brittle upper-middle crust, whereas proto-ridge and proto-transform structures nucleate in the lithospheric mantle. These deep proto-structures propagate upward, interconnect and rotate toward a mature orthogonal ridge-transform pattern on the timescale of millions of years during incipient thermal-magmatic accretion of the new oceanic-like lithosphere. Ductile damage of the extending lithospheric mantle caused by grain size reduction assisted by Zener pinning plays a critical role in the rift-to-ridge transition by stabilizing detachment faults and transform structures. Numerical results compare well with observations from incipient spreading regions and passive continental margins.
Consistency relations for large scale structures with primordial non-Gaussianities
Valageas, Patrick; Nishimichi, Takahiro
2016-01-01
We investigate how the consistency relations of large-scale structures are modified when the initial density field is not Gaussian. We consider both scenarios where the primordial density field can be written as a nonlinear functional of a Gaussian field and more general scenarios where the probability distribution of the primordial density field can be expanded around the Gaussian distribution, up to all orders over $\\delta_{L0}$. Working at linear order over the non-Gaussianity parameters $f_{\\rm NL}^{(n)}$ or $S_n$, we find that the consistency relations for the matter density fields are modified as they include additional contributions that involve all-order mixed linear-nonlinear correlations $\\langle \\prod \\delta_L \\prod \\delta \\rangle$. We derive the conditions needed to recover the simple Gaussian form of the consistency relations. This corresponds to scenarios that become Gaussian in the squeezed limit. Our results also apply to biased tracers, and velocity or momentum cross-correlations.
Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.
2009-01-01
Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large-scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, the effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.
Development of a Kohn-Sham like potential in the Self-Consistent Atomic Deformation Model
Mehl, M J; Stokes, H T
1996-01-01
This is a brief description of how to derive the local "atomic" potentials from the Self-Consistent Atomic Deformation (SCAD) model density function. Particular attention is paid to the spherically averaged case.
Bayesian nonparametric estimation and consistency of mixed multinomial logit choice models
De Blasi, Pierpaolo; Lau, John W; 10.3150/09-BEJ233
2011-01-01
This paper develops nonparametric estimation for discrete choice models based on the mixed multinomial logit (MMNL) model. It has been shown that MMNL models encompass all discrete choice models derived under the assumption of random utility maximization, subject to the identification of an unknown distribution $G$. Noting the mixture model description of the MMNL, we employ a Bayesian nonparametric approach, using nonparametric priors on the unknown mixing distribution $G$, to estimate choice probabilities. We provide an important theoretical support for the use of the proposed methodology by investigating consistency of the posterior distribution for a general nonparametric prior on the mixing distribution. Consistency is defined according to an $L_1$-type distance on the space of choice probabilities and is achieved by extending to a regression model framework a recent approach to strong consistency based on the summability of square roots of prior probabilities. Moving to estimation, slightly different te...
Pranger, C. C.; Le Pourhiet, L.; May, D.; van Dinther, Y.; Gerya, T.
2016-12-01
Subduction zones evolve over millions of years. The state of stress, the distribution of materials, and the strength and structure of the interface between the two plates are intricately tied to a host of time-dependent physical processes, such as damage, friction, (nonlinear) viscous relaxation, and fluid migration. In addition, the subduction interface has a complex three-dimensional geometry that evolves with time and can adjust in response to a changing stress environment or in response to impinging topographical features, and can even branch off as a splay fault. All in all, the behaviour of (large) earthquakes at the millisecond to minute timescale is heavily dependent on the pattern of stress accumulation during the 100 year inter-seismic period, the events occurring on or near the interface in the past thousands of years, as well as the extended geological history of the region. We address the aforementioned modeling requirements by developing a self-consistent 3D staggered grid finite difference continuum description of motion, thermal advection-diffusion, and poro-visco-elastic two-phase flow. Faults are modelled as plastic shear bands that can develop and evolve in response to a changing stress environment without having a prescribed geometry. They obey a Mohr-Coulomb or Drucker-Prager yield criterion and a rate-and-state friction law. For a sound treatment of plasticity, we borrow elements from mechanical engineering, and extend these with high-quality nonlinear iteration schemes and adaptive time-stepping to resolve the rupture process at all time scales. We will present these techniques together with proof-of-concept examples of self-consistently developing seismic cycles in 2D and 3D, including phases of stress accumulation, fault nucleation, dynamic rupture, and healing.
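The rate-and-state friction law referred to above is commonly taken in the Dieterich ("aging law") form; a minimal sketch with illustrative parameter values (not those of the model described in the abstract):

```python
import numpy as np

MU0, A, B, DC, V0 = 0.6, 0.010, 0.015, 1e-4, 1e-6   # illustrative values

def friction(v, theta):
    """Rate-and-state friction coefficient, Dieterich form."""
    return MU0 + A * np.log(v / V0) + B * np.log(V0 * theta / DC)

def evolve_state(theta, v, dt):
    """Aging-law state evolution, explicit Euler: dtheta/dt = 1 - v*theta/Dc."""
    return theta + dt * (1.0 - v * theta / DC)

# At steady sliding, theta_ss = Dc/v and friction reduces to
# mu_ss = mu0 + (a - b)*ln(v/v0): velocity-weakening when b > a.
v = 1e-5
mu_ss = friction(v, DC / v)
```

With b > a, steady-state friction decreases with slip rate, the ingredient that allows stick-slip (seismic) cycles to nucleate.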
Consistent truncations of M-theory for general SU(2) structures
Triendl, Hagen
2015-01-01
In seven dimensions any spin manifold admits an SU(2) structure and therefore very general M-theory compactifications have the potential to allow for a reduction to N=4 gauged supergravity. We perform this general SU(2) reduction and give the relation of SU(2) torsion classes and fluxes to gaugings in the N=4 theory. We furthermore show explicitly that this reduction is a consistent truncation of the eleven-dimensional theory; in other words, classical solutions of the reduced theory also solve the eleven-dimensional equations of motion. This reduction generalizes previous M-theory reductions on Tri-Sasakian manifolds and type IIA reductions on Calabi-Yau manifolds of vanishing Euler number. Moreover, it can also be applied to compactifications on certain G2 holonomy manifolds and to more general flux backgrounds.
Paul, Ashesh
2016-01-01
Employing the Sagdeev pseudo-potential technique, the ion acoustic solitary structures have been investigated in an unmagnetized collisionless plasma consisting of adiabatic warm ions, nonthermal electrons and isothermal positrons. The qualitatively different compositional parameter spaces clearly indicate the existence domains of solitons and double layers with respect to any parameter of the present plasma system. The present system supports the negative potential double layer, which always restricts the occurrence of negative potential solitons. The system also supports positive potential double layers when the ratio of the average thermal velocity of positrons to that of electrons is less than a critical value. However, there exists a parameter regime for which the positive potential double layer is unable to restrict the occurrence of positive potential solitary waves, and in this region of the parameter space, there exist positive potential solitary waves after the formation of a positive potential double ...
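The pseudo-potential for the warm-ion/nonthermal-electron/positron plasma studied here is more involved; purely as an illustration of the Sagdeev technique, the textbook case of cold ions and Boltzmann electrons has a pseudo-potential whose upper Mach-number limit for solitons follows from V(M^2/2, M) = 0:

```python
import math

def sagdeev_V(phi, M):
    """Sagdeev pseudo-potential, cold ions + Boltzmann electrons
    (normalized units); a soliton needs V(phi) < 0 on (0, phi_amp)."""
    return M**2 * (1.0 - math.sqrt(1.0 - 2.0 * phi / M**2)) \
        - (math.exp(phi) - 1.0)

def max_mach(lo=1.2, hi=2.0, tol=1e-10):
    """Upper Mach limit from V(M**2/2, M) = 0, i.e. M**2 + 1 = exp(M**2/2),
    found by bisection."""
    f = lambda M: M**2 + 1.0 - math.exp(M**2 / 2.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

M_max = max_mach()   # classic result, about 1.585
```

Existence-domain analyses like the one in the abstract repeat this root-finding over each compositional parameter.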
Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems
Garg, Vikram V
2014-09-27
Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint-inconsistent and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.
Retinal anatomy of the New Zealand kiwi: structural traits consistent with their nocturnal behavior.
Corfield, Jeremy R; Parsons, Stuart; Harimoto, Yoshitetsu; Acosta, Monica L
2015-04-01
Kiwi (Apteryx spp.) have a visual system unlike that of other nocturnal birds, and have specializations to their auditory, olfactory, and tactile systems. Eye size, binocular visual fields and visual brain centers in kiwi are proportionally the smallest yet recorded among birds. Given the many unique features of the kiwi visual system, we examined the laminar organization of the kiwi retina to determine if they evolved increased light sensitivity with a shift to a nocturnal niche or if they retained features of their diurnal ancestor. The laminar organization of the kiwi retina was consistent with an ability to detect low light levels similar to that of other nocturnal species. In particular, the retina appeared to have a high proportion of rod photoreceptors as compared to diurnal species, as evidenced by a thick outer nuclear layer, and also numerous thin photoreceptor segments intercalated among the conical shaped cone photoreceptor inner segments. Therefore, the retinal structure of kiwi was consistent with increased light sensitivity, although other features of the visual system, such as eye size, suggest a reduced reliance on vision. The unique combination of a nocturnal retina and smaller than expected eye size, binocular visual fields, and brain regions make the kiwi visual system unlike that of any bird examined to date. Whether these features of their visual system are an evolutionary design that meets their specific visual needs or are a remnant of a kiwi ancestor that relied more heavily on vision is yet to be determined.
Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-01
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhoff (PBE) generalized-gradient-approximation in a modified global hybrid functional with a relatively large amount of non-local Fock-exchange. The orbitals are expanded in Ahlrichs-type valence-double zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods
Michaels, Patrick J; Christy, John R; Herman, Chad S; Liljegren, Lucia M; Annan, James D
2013-01-01
Assessing the consistency between short-term global temperature trends in observations and climate model projections is a challenging problem. While climate models capture many processes governing short-term climate fluctuations, they are not expected to simulate the specific timing of these somewhat random phenomena - the occurrence of which may impact the realized trend. Therefore, to assess model performance, we develop distributions of projected temperature trends from a collection of climate models running the IPCC A1B emissions scenario. We evaluate where observed trends of length 5 to 15 years fall within the distribution of model trends of the same length. We find that current trends lie near the lower limits of the model distributions, with cumulative probability-of-occurrence values typically between 5 percent and 20 percent, and probabilities below 5 percent not uncommon. Our results indicate cause for concern regarding the consistency between climate model projections and observed climate behavior...
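The comparison reduces to locating an observed trend within the empirical distribution of model trends of the same length; a sketch with synthetic series (the data below are fabricated for illustration, not the study's):

```python
import numpy as np

def lsq_trend(y):
    """Ordinary least-squares slope of a series against its time index."""
    return np.polyfit(np.arange(len(y), dtype=float), y, 1)[0]

def trend_percentile(obs, runs, length):
    """Cumulative probability of the observed trend over the last
    `length` points within the distribution of model trends of the
    same length."""
    obs_tr = lsq_trend(obs[-length:])
    model_tr = np.array([lsq_trend(r[-length:]) for r in runs])
    return float((model_tr < obs_tr).mean())

rng = np.random.default_rng(1)
t = np.arange(15, dtype=float)
runs = 0.02 * t + rng.normal(0.0, 0.1, size=(100, 15))  # synthetic projections
obs = 0.005 * t + rng.normal(0.0, 0.1, size=15)         # synthetic observations
p = trend_percentile(obs, runs, 15)
```

Values of p near 0 or 1 flag observed trends in the tails of the model distribution, the situation the abstract reports.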
Analytical theory of self-consistent current structures in a collisionless plasma
Kocharovsky, V. V.; Kocharovsky, Vl V.; Martyanov, V. Yu; Tarasov, S. V.
2017-03-01
The most-studied classes of exact solutions to Vlasov–Maxwell equations for stationary neutral current structures in a collisionless relativistic plasma, which allow the particle distribution functions (PDFs) to be chosen at will, are reviewed. A general classification is presented of the current sheets and filaments described by the method of invariants of motion of particles whose PDF is symmetric in a certain way in coordinate and momentum spaces. The possibility is discussed of using these explicit solutions to model the observed and/or expected features of current structures in cosmic and laboratory plasmas. Also addressed are the formation of the magnetic field and the analytical description of the so-called Weibel instability in a plasma with an arbitrary PDF.
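The best-known member of this class of exact Vlasov–Maxwell equilibria is the Harris current sheet, B_x(z) = B0 tanh(z/L), carried by the current density j_y = (B0 / mu0 L) sech^2(z/L); a sketch checking Ampere's law numerically in normalized units (values illustrative):

```python
import numpy as np

B0, L, MU0 = 1.0, 1.0, 1.0   # normalized field, sheet half-width, permeability

def B_x(z):
    """Harris-sheet magnetic field profile, B_x(z) = B0*tanh(z/L)."""
    return B0 * np.tanh(z / L)

def j_y(z):
    """Supporting current density from Ampere's law, mu0*j_y = dB_x/dz."""
    return (B0 / (MU0 * L)) / np.cosh(z / L) ** 2

z = np.linspace(-5.0, 5.0, 2001)
err = np.max(np.abs(np.gradient(B_x(z), z) - MU0 * j_y(z)))
```

The underlying kinetic equilibrium uses shifted-Maxwellian PDFs built from the invariants of motion (energy and canonical momentum), the same construction the review classifies for general PDFs.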
Kilcoyne, Michelle; Shashkov, Alexander S; Senchenkova, Sof'ya A; Knirel, Yuriy A; Vinogradov, Evgeny V; Radziejewska-Lebrecht, Joanna; Galimska-Stypa, Regina; Savage, Angela V
2002-10-01
The lipopolysaccharide of the bacterium Morganella morganii (strain KF 1676, RK 4222) yielded two polysaccharides, PS1 and PS2, when subjected to mild acid degradation followed by GPC. The polysaccharides were studied by 1H and 13C NMR spectroscopy, including two-dimensional COSY, TOCSY, NOESY, 1H,13C HMQC, and HMBC experiments. Each polysaccharide was found to contain a disaccharide repeating unit consisting of two higher sugars, 5-acetamidino-7-acetamido-3,5,7,9-tetradeoxy-L-glycero-D-galacto-non-2-ulosonic acid (a derivative of 8-epilegionaminic acid, 8eLeg5Am7Ac) and 2-acetamido-4-C-(3'-carboxamide-2',2'-dihydroxypropyl)-2,6-dideoxy-D-galactose (shewanellose, She). The two polysaccharides differ only in the ring size of shewanellose. Shewanellose has been previously identified in a phenol-soluble polysaccharide from Shewanella putrefaciens A6, which shows a close structural similarity to PS2.
Institute of Scientific and Technical Information of China (English)
Yee LEUNG; WU Kefa; DONG Tianxin
2001-01-01
In this paper, a multivariate linear functional relationship model, where the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed. The estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.
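In a functional relationship (errors-in-variables) model both coordinates are observed with error, so ordinary least squares is inconsistent; a minimal total-least-squares sketch via the SVD (the paper's estimator, which allows an unrestricted error covariance matrix, is more general):

```python
import numpy as np

def tls_line(x, y):
    """Total-least-squares line a*x + b*y = c treating both coordinates
    as noisy: the normal (a, b) is the right singular vector of the
    centered data matrix belonging to the smallest singular value."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    a, b = Vt[-1]
    return a, b, a * x.mean() + b * y.mean()

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
x = t + rng.normal(0.0, 0.01, t.size)              # noise in both variables
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.01, t.size)
a, b, c = tls_line(x, y)
slope = -a / b                                     # as y = slope*x + intercept
```

This minimizes orthogonal distances to the line, unlike ordinary regression, which minimizes vertical distances only.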
Silvis, Maurits H
2015-01-01
Assuming a general constitutive relation for the turbulent stresses in terms of the local large-scale velocity gradient, we constructed a class of subgrid-scale models for large-eddy simulation that are consistent with important physical and mathematical properties. In particular, they preserve symmetries of the Navier-Stokes equations and exhibit the proper near-wall scaling. They furthermore show desirable dissipation behavior and are capable of describing nondissipative effects. We provided examples of such physically-consistent models and showed that existing subgrid-scale models do not all satisfy the desired properties.
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment.
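A global dispersion measure and a local deceleration index of the kind described can be sketched from an operating-speed profile as follows (the profile values and metric definitions are illustrative, not the paper's calibrated model):

```python
import numpy as np

def consistency_metrics(v85):
    """Two simple measures from an operating-speed profile v85
    (85th-percentile speeds in km/h along the alignment)."""
    sigma = np.std(v85)                             # global dispersion
    decel = np.max(np.maximum(0.0, -np.diff(v85)))  # largest local speed drop
    return sigma, decel

profile = np.array([92.0, 90.0, 85.0, 70.0, 72.0, 88.0, 91.0])
sigma, decel = consistency_metrics(profile)
```

Large dispersion or large local drops (here a 15 km/h deceleration into the fourth section) indicate an inconsistent alignment, which the paper then relates to crash frequency.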
The fundamental solution for a consistent complex model of the shallow shell equations
Matthew P. Coleman
1999-01-01
The calculation of the Fourier transforms of the fundamental solution in shallow shell theory was ostensibly accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for th...
A self-consistent first-principle based approach to model carrier mobility in organic materials
Energy Technology Data Exchange (ETDEWEB)
Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model by using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details in the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory varying over several orders of magnitude in the mobility without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight-binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multidisciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
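Hopping-transport workflows of this kind typically evaluate site-to-site rates from a Marcus-type expression in the electronic coupling, reorganization energy, and site-energy difference; a sketch of the standard Marcus form with illustrative parameters (the exact rate expression used in the paper is not reproduced here):

```python
import numpy as np

HBAR = 6.582119569e-16   # eV*s
KB = 8.617333262e-5      # eV/K

def marcus_rate(J, lam, dE, T=300.0):
    """Marcus hopping rate (1/s) between two molecular sites.

    J   : electronic coupling (eV)
    lam : reorganization energy (eV)
    dE  : site-energy difference (eV)
    """
    pref = (2.0 * np.pi / HBAR) * J**2 / np.sqrt(4.0 * np.pi * lam * KB * T)
    return pref * np.exp(-(dE + lam)**2 / (4.0 * lam * KB * T))

k_fwd = marcus_rate(1e-3, 0.2, 0.05)   # illustrative coupling/energies
```

Rates for all molecular pairs in the morphology then feed a master-equation or analytic mobility expression; note the detailed-balance property k(dE)/k(-dE) = exp(-dE/kT).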
Kukush, A.; Markovsky, I.; Van Huffel, S.
2002-01-01
Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.
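The ordinary total-least-squares baseline against which the corrected estimator is compared is essentially the eight-point algorithm with a rank-2 projection; a sketch on synthetic noise-free correspondences from two identity-calibrated cameras (the camera setup is assumed for illustration):

```python
import numpy as np

def fundamental_matrix(p1, p2):
    """Plain eight-point estimate of F (the TLS-style baseline, no bias
    correction), with the rank-2 constraint enforced afterwards.
    p1, p2 : (N, 2) matched points, N >= 8, satisfying x2' F x1 = 0."""
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of the design matrix
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                        # project onto rank-2 matrices
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(3)
Xw = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
p1 = Xw[:, :2] / Xw[:, 2:]                   # first camera at the origin
p2 = (Xw[:, :2] + [1.0, 0.0]) / Xw[:, 2:]    # second camera translated in x
F = fundamental_matrix(p1, p2)
h1 = np.column_stack([p1, np.ones(20)])
h2 = np.column_stack([p2, np.ones(20)])
resid = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1)).max()
```

With noisy points this estimator is biased, which is the inconsistency the corrected contrast function in the paper removes.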
A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection
Directory of Open Access Journals (Sweden)
Chia-Hua Cheng
2017-08-01
In today's rapidly changing economy, technology companies have to make decisions on research and development (R&D) project investment on a routine basis, with such decisions having a direct impact on the company's profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors, such as resource limitations and differences in assessment, with consideration of both qualitative and quantitative criteria. Often, differences in perception by the various stakeholders hinder the attainment of a consensus of opinion and coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of the complex criteria, taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause and effect relations representing the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combination approach is then used to assist an international brand-name company to prioritize projects and make project decisions that will maximize returns and ensure sustainability for the company.
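The consistency property that CFPR exploits is additive transitivity, which completes a full fuzzy preference relation from the n-1 comparisons in one row; a minimal sketch (entries falling outside [0, 1] would require the normalization step described in the CFPR literature):

```python
import numpy as np

def complete_cfpr(first_row):
    """Complete a consistent fuzzy preference relation from the n-1
    comparisons in its first row via additive transitivity:
    p[i][j] = p[0][j] - p[0][i] + 0.5."""
    p0 = np.asarray(first_row, dtype=float)
    return p0[None, :] - p0[:, None] + 0.5

# the expert only compares criterion 0 against the others
P = complete_cfpr([0.5, 0.6, 0.7])
```

Because the remaining entries are derived rather than elicited, the matrix is consistent by construction, which is how CFPR-ANP cuts both the inconsistency problem and the number of pairwise comparisons.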
Bechtel, B.; Pesaresi, M.; See, L.; Mills, G.; Ching, J.; Alexander, P. J.; Feddema, J. J.; Florczyk, A. J.; Stewart, I.
2016-06-01
Although more than half of the Earth's population live in urban areas, we know remarkably little about most cities and what we do know is incomplete (lack of coverage) and inconsistent (varying definitions and scale). While there have been considerable advances in the derivation of a global urban mask using satellite information, the complexity of urban structures, the heterogeneity of materials, and the multiplicity of spectral properties have impeded the derivation of universal urban structural types (UST). Further, the variety of UST typologies severely limits the comparability of such studies and although a common and generic description of urban structures is an essential requirement for the universal mapping of urban structures, such a standard scheme is still lacking. More recently, there have been two developments in urban mapping that have the potential for providing a standard approach: the Local Climate Zone (LCZ) scheme (used by the World Urban Database and Access Portal Tools project) and the Global Human Settlement Layer (GHSL) methodology by JRC. In this paper the LCZ scheme and the GHSL LABEL product were compared for selected cities. The comparison between both datasets revealed a good agreement at city and coarse scale, while the contingency at pixel scale was limited due to the mismatch in grid resolution and typology. At a 1 km scale, built-up as well as open and compact classes showed very good agreement in terms of correlation coefficient and mean absolute distance, spatial pattern, and radial distribution as a function of distance from town, which indicates that a decomposition relevant for modelling applications could be derived from both. On the other hand, specific problems were found for both datasets, which are discussed along with their general advantages and disadvantages as a standard for UST classification in urban remote sensing.
Hazard-consistent ground motions generated with a stochastic fault-rupture model
Energy Technology Data Exchange (ETDEWEB)
Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-12-15
obtain these acceleration deviations. A similar tendency can be found for some other seismic-source characteristics, meaning that ground motions obtained in this study cannot be generated by simulations of deterministic fault-rupture models with averaged seismic-source characteristics. Generated ground motions incorporate differences between each seismic-source characteristic, and they are effectively available for PRAs of structures.
Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation
Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras
2016-04-01
At subduction megathrusts, propagation of large ruptures may be confined between the up-dip and down-dip limits of the seismogenic zone. This opens a primary role for lateral rupture dimensions to control the magnitude and severity of megathrust earthquakes. The goal of this study is to improve our understanding of the ways in which the inherent variability of the subduction interface may influence the degree of interseismic locking, and the propensity of a rupture to propagate over regions of variable slip potential. The global absence of a historic record sufficiently long to base risk assessment on, makes us rely on numerical modelling as a way to extend our understanding of the spatio-temporal occurrence of earthquakes. However, the complex interaction of the subduction stress environment, the variability of the subduction interface, and the structure and deformation of the crustal wedge has made it very difficult to construct comprehensive numerical models of megathrust segmentation. We develop and exploit the power of a plastic 3D continuum representation of the subduction megathrust, as well as off-megathrust faulting to model the long-term tectonic build-up of stresses, and their sudden seismic release. The sheer size of the 3D problem, and the time scales covering those of tectonics as well as seismology, force us to explore efficient and accurate physical and numerical techniques. We thus focused our efforts on developing a staggered grid finite difference code that makes use of the PETSc library for massively parallel computing. The code incorporates a newly developed automatic discretization algorithm, which enables it to handle a wide variety of equations with relative ease. The different physical and numerical ingredients - like attenuating visco-elasto-plastic materials, frictional weakening and inertially driven seismic release, and adaptive time marching schemes - most of which have been implemented and benchmarked individually - are now combined
Self-consistent models of quasi-relaxed rotating stellar systems
Varri, A L
2012-01-01
Two new families of self-consistent axisymmetric truncated equilibrium models for the description of quasi-relaxed rotating stellar systems are presented. The first extends the spherical King models to the case of solid-body rotation. The second is characterized by differential rotation, designed to be rigid in the central regions and to vanish in the outer parts, where the energy truncation becomes effective. The models are constructed by solving the nonlinear Poisson equation for the self-consistent mean-field potential. For rigidly rotating configurations, the solutions are obtained by an asymptotic expansion on the rotation strength parameter. The differentially rotating models are constructed by means of an iterative approach based on a Legendre series expansion of the density and the potential. The two classes of models exhibit complementary properties. The rigidly rotating configurations are flattened toward the equatorial plane, with deviations from spherical symmetry that increase with the distance f...
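The Legendre series expansion used for the differentially rotating models amounts to projecting an axisymmetric function of mu = cos(theta) onto Legendre polynomials; a minimal sketch with an illustrative equatorially flattened profile:

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coeffs(f, lmax, n=400):
    """Coefficients c_l of f(mu) in a Legendre series,
    c_l = (2l + 1)/2 * integral_{-1}^{1} f(mu) P_l(mu) dmu,
    computed by Gauss-Legendre quadrature."""
    mu, w = L.leggauss(n)
    fv = f(mu)
    return np.array([(2 * l + 1) / 2.0
                     * np.sum(w * fv * L.Legendre.basis(l)(mu))
                     for l in range(lmax + 1)])

# an equatorially flattened axisymmetric profile: 1 - 0.3*mu**2,
# which is exactly 0.9*P0 - 0.2*P2 since mu**2 = (2*P2 + 1)/3
c = legendre_coeffs(lambda mu: 1.0 - 0.3 * mu**2, 4)
```

In the iterative scheme of the paper, both density and potential are expanded this way and the Poisson equation is solved order by order until self-consistency.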
Mantz, Adam B; Morris, R Glenn
2016-01-01
This is the fifth in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. Our sample comprises 40 clusters identified as being dynamically relaxed and hot in Papers I and II of this series. Here we use constraints on cluster mass profiles from X-ray data to test some of the basic predictions of cosmological structure formation in the Cold Dark Matter (CDM) paradigm. We present constraints on the concentration-mass relation for massive clusters, finding a power-law mass dependence with a slope of $\kappa_m=-0.16\pm0.07$, in agreement with CDM predictions. For this relaxed sample, the relation is consistent with a constant as a function of redshift (power-law slope with $1+z$ of $\kappa_\zeta=-0.17\pm0.26$), with an intrinsic scatter of $\sigma_{\ln c}=0.16\pm0.03$. We investigate the shape of cluster mass profiles over the radial range probed by the data (typically $\sim50$ kpc--1 Mpc), and test for departures from the simple Navarro, Frenk & White (NFW...
On Dowell's simplification for acoustic cavity-structure interaction and consistent alternatives.
Ginsberg, Jerry H
2010-01-01
A widely employed description of the acoustical response in a cavity whose walls are compliant, which was first proposed by Dowell and Voss [(1962). AIAA J. 1, 476-477], uses the modes of the corresponding cavity with rigid walls as basis functions for a series representation of the pressure. It yields a velocity field that is not compatible with the movement of the boundary, and the system equations do not satisfy the principle of reciprocity. The simplified formulation is compared to consistent solutions of the coupled field equations in the time and frequency domains. In addition, this paper introduces an extension of the Ritz series method to fluid-structure coupled systems that satisfies all continuity conditions by imposing constraint equations to enforce any such conditions that are not identically satisfied by the series. A slender waveguide terminated by an oscillator is analyzed by each method. The simplified formulation is found to be very accurate for light fluid loading, except for the pressure field at frequencies below the fundamental rigid-cavity resonance, whereas the Ritz series solution is found to be extremely accurate in all cases.
Sovinová, Hana; Csémy, Ladislav
2010-09-01
The primary aim of the study is to examine the psychometric properties and the structure of the Czech version of the Alcohol Use Disorders Identification Test (AUDIT), and to estimate the rate of risky, harmful and problematic alcohol consumers. Two large data sets were analyzed. The first was based on the application of the AUDIT as a part of a general population survey (N = 1,326; age range 18-64); the second represents data gathered by general practitioners (GPs) in the context of a pilot screening and brief advice (SBA) project in the area of Greater Prague (N = 2,589). Analyses of reliability showed satisfactory internal consistency of the AUDIT (Cronbach's alpha = 0.83 for the population survey and 0.77 for the survey based on SBA). Principal component analyses suggest a two-factor solution, in which one factor represents drinking patterns and the second alcohol-related problems or symptoms of dependence. The principal component analyses of both data sets led to similar factor formation. A total of 19% of the general population sample was classified as risky or harmful drinkers and 2% as problem drinkers. These figures were slightly lower in the sample of patients of general practitioners. The Czech version of the AUDIT appears to be a sound screening instrument. The properties of the instrument suggest the usefulness of the summary score for identification of the level of risk.
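Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from an item-score matrix. A minimal sketch; the scores below are invented toy data, and the real AUDIT has ten items:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy data: 6 respondents x 4 items (AUDIT items are scored 0-4)
scores = np.array([[0, 1, 0, 1],
                   [1, 2, 1, 2],
                   [2, 3, 2, 3],
                   [0, 0, 1, 0],
                   [3, 3, 3, 4],
                   [1, 1, 2, 1]])
alpha = cronbach_alpha(scores)  # high here, since the toy items co-vary strongly
```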
Wallerstein, D. V.; Lahey, R. S.; Haggenmacher, G. W.
1977-01-01
Many of the practical aspects and problems of ensuring the integrity of a structural model are discussed, as well as the steps which have been taken in the NASTRAN system to assure that these checks can be routinely performed. Model integrity as used applies not only to the structural model but also to the loads applied to the model. Emphasis is also placed on the fact that when dealing with substructure analysis, all of the checking procedures discussed should be applied at the lowest level of substructure prior to any coupling.
The Spectrum of the Baryon Masses in a Self-consistent SU(3) Quantum Skyrme Model
Jurciukonis, Darius; Regelskis, Vidas
2012-01-01
The semiclassical SU(3) Skyrme model is traditionally considered as describing a rigid quantum rotator with the profile function being fixed by the classical solution of the corresponding SU(2) Skyrme model. In contrast, we go beyond the classical profile function by quantizing the SU(3) Skyrme model canonically. The quantization of the model is performed in terms of the collective coordinate formalism and leads to the establishment of purely quantum corrections of the model. These new corrections are of fundamental importance. They are crucial in obtaining stable quantum solitons of the quantum SU(3) Skyrme model, thus making the model self-consistent and not dependent on the classical solution of the SU(2) case. We show that such a treatment of the model leads to a family of stable quantum solitons that describe the baryon octet and decuplet and reproduce the experimental values of their masses.
A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems
Directory of Open Access Journals (Sweden)
R. Dimitri
2014-07-01
Full Text Available Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models under mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated, and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZM, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.
Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers
Cartar, William; Mørk, Jesper; Hughes, Stephen
2017-08-01
We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) are contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N = 9, which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also
A consistent modelling methodology for secondary settling tanks in wastewater treatment.
Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar
2011-03-01
The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This particularly becomes of interest as the state-of-the-art practice is moving towards plant-wide modelling. Then all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. Time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models.
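For the settling-tank PDE, the "simulation model" the methodology calls for is typically a monotone finite-volume scheme. A minimal sketch of a Godunov-type update for a scalar conservation law u_t + f(u)_z = 0, with an illustrative hindered-settling flux that is an assumption of this sketch, not the paper's constitutive function:

```python
import numpy as np

def batch_flux(u, v0=1.0):
    # Illustrative Kynch-style hindered-settling flux (an assumption here)
    return v0 * u * (1.0 - u)

def godunov_flux(ul, ur, f=batch_flux, n=64):
    # Godunov numerical flux: min of f over [ul, ur] if ul <= ur, else max
    s = np.linspace(min(ul, ur), max(ul, ur), n)
    return f(s).min() if ul <= ur else f(s).max()

def step(u, dz, dt):
    # One explicit update on a 1-D column; boundary cells held fixed
    F = np.array([godunov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
    unew = u.copy()
    unew[1:-1] -= dt / dz * (F[1:] - F[:-1])
    return unew

u0 = np.linspace(0.0, 1.0, 11)   # initial concentration profile
u1 = step(u0, dz=0.1, dt=0.05)   # CFL satisfied: (dt/dz) * max|f'| = 0.5
```

Monotone schemes like this one preserve the physically admissible range of concentrations, which is exactly the kind of guarantee the authors note older ad hoc simulation models lack.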
Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications
Directory of Open Access Journals (Sweden)
Xin Zhang
2008-01-01
Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.
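The predictive contract mechanisms analysed above (dead reckoning being the classic example) can be sketched as follows; the names and threshold value are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class EntityState:
    pos: float   # 1-D position, for simplicity
    vel: float

def dead_reckon(last_sent: EntityState, dt: float) -> float:
    """Receiver-side prediction: extrapolate the last transmitted state."""
    return last_sent.pos + last_sent.vel * dt

def needs_update(true_pos: float, predicted_pos: float,
                 threshold: float = 0.5) -> bool:
    """The sender transmits only when prediction error breaks the contract."""
    return abs(true_pos - predicted_pos) > threshold

# In the information-theoretic view, every suppressed update trades away
# state fidelity; the threshold bounds the reconstruction error.
predicted = dead_reckon(EntityState(pos=0.0, vel=2.0), dt=1.0)
```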
Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong
2016-05-01
The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on the PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual and room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a calculating method for PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than the PSF consistency by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much slighter than that of the auxiliary lens group. A large form parameter of the CPM will introduce large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.
Using a Theory-Consistent CVAR Scenario to Test an Exchange Rate Model Based on Imperfect Knowledge
Directory of Open Access Journals (Sweden)
Katarina Juselius
2017-07-01
Full Text Available A theory-consistent CVAR scenario describes a set of testable regularities one should expect to see in the data if the basic assumptions of the theoretical model are empirically valid. Using this method, the paper demonstrates that all basic assumptions about the shock structure and steady-state behavior of an imperfect-knowledge-based model of exchange rate determination can be formulated as testable hypotheses on common stochastic trends and cointegration. The model obtains remarkable support for almost every testable hypothesis and is able to account adequately for the long, persistent swings in the real exchange rate.
Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model
Directory of Open Access Journals (Sweden)
Lidong Zhang
2014-01-01
Full Text Available We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest in a risk-free asset and a risky asset; short-selling and borrowing money are allowed. Because the iterated-expectation property fails, the Bellman Optimization Principle does not hold, so we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that each has a different advantage: the former maximizes the value function at the initial time t=0, whereas the latter is time-consistent over the whole time horizon. Finally, numerical analysis is presented for our results.
A thermodynamically consistent phase-field model for two-phase flows with thermocapillary effects
Guo, Zhenlin
2014-01-01
In this paper, we develop a phase-field model for a binary incompressible fluid with thermocapillary effects, which allows different properties (densities, viscosities and heat conductivities) for each component while maintaining thermodynamic consistency. The governing equations of the model, including the Navier-Stokes equations, Cahn-Hilliard equations and energy balance equation, are derived together within a thermodynamic framework based on entropy generation, which guarantees the thermodynamic consistency. A sharp-interface limit analysis is carried out to show that the interfacial conditions of the classical sharp-interface models can be recovered from our phase-field model. Moreover, some numerical examples, including thermocapillary migration of a bubble and thermocapillary convection in a two-layer fluid system, are computed using a continuous finite element method. The results are compared to existing analytical solutions and theoretical predictions as validations for our mod...
Nonparametric test of consistency between cosmological models and multiband CMB measurements
Aghamousa, Amir
2015-01-01
We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology, considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure of the agreement between the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit $\Lambda$CDM model at $95\% (\sim 2\sigma)$ confidence distance from the center of the nonparametri...
Revised self-consistent continuum solvation in electronic-structure calculations
Andreussi, Oliviero; Marzari, Nicola
2011-01-01
The solvation model proposed by Fattebert and Gygi [Journal of Computational Chemistry 23, 662 (2002)] and Scherlis et al. [Journal of Chemical Physics 124, 074103 (2006)] is reformulated, overcoming some of the numerical limitations encountered and extending its range of applicability. We first recast the problem in terms of induced polarization charges that act as a direct mapping of the self-consistent continuum dielectric; this allows us to define a functional form for the dielectric that is well behaved both in the high-density region of the nuclear charges and in the low-density region where the electronic wavefunctions decay into the solvent. Second, we outline an iterative procedure to solve the Poisson equation for the quantum fragment embedded in the solvent that does not require multi-grid algorithms, is trivially parallel, and can be applied to any Bravais crystallographic system. Last, we capture some of the non-electrostatic or cavitation terms via a combined use of the quantum volume and quantum s...
A simplified benchmark Stock-Flow Consistent (SFC) post-Keynesian growth model
Cláudio H. dos Santos; Zezza, Gennaro
2007-01-01
Despite being arguably one of the most active areas of research in heterodox macroeconomics, the study of the dynamic properties of stock-flow consistent (SFC) growth models of financially sophisticated economies is still in its early stages. This paper attempts to offer a contribution to this line of research by presenting a simplified Post-Keynesian SFC growth model with well-defined dynamic properties, and using it to shed light on the merits and limitations of the current heterodox SFC li...
A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models
Holte, Sarah E.
2016-01-01
Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation of parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method, since it is a modification of least squares methods known to be biased. Co...
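The idea of a direct (non-iterative) estimator can be illustrated on the scalar model dx/dt = -k x: approximate the derivative numerically, then solve for k by ordinary least squares. This sketch omits the bias correction that defines BCLS; with noisy data, the finite-difference derivatives make the naive estimator below biased, which is the problem the paper addresses.

```python
import numpy as np

# Synthetic noiseless data from dx/dt = -k*x with k = 0.7, x(0) = 1
k_true = 0.7
t = np.linspace(0.0, 4.0, 81)
x = np.exp(-k_true * t)

# Direct least squares: regress finite-difference derivatives on the state
dxdt = np.gradient(x, t)
k_hat = -np.sum(dxdt * x) / np.sum(x * x)
```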
Structural Equation Model Trees
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2013-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…
Self-consistent modelling of hot plasmas within non-extensive Tsallis' thermostatistics
Pain, Jean-Christophe; Gilleron, Franck
2011-01-01
A study of the effects of non-extensivity on the modelling of atomic physics in hot dense plasmas is proposed within Tsallis' statistics. The electronic structure of the plasma is calculated through an average-atom model based on the minimization of the non-extensive free energy.
Comment on Self-Consistent Model of Black Hole Formation and Evaporation
Ho, Pei-Ming
2015-01-01
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
Consistent phase-change modeling for CO2-based heat mining operation
DEFF Research Database (Denmark)
Singh, Ashok Kumar; Veje, Christian
2017-01-01
–gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state were based on the volume translated Peng–Robinson equation of state and results verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper...
Comment on self-consistent model of black hole formation and evaporation
Energy Technology Data Exchange (ETDEWEB)
Ho, Pei-Ming [Department of Physics and Center for Theoretical Sciences, Center for Advanced Study in Theoretical Sciences,National Taiwan University, Taipei 106, Taiwan, R.O.C. (China)
2015-08-18
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
Song, Y.; Wright, D.
1998-01-01
A formulation of the pressure gradient force for use in models with topography-following coordinates is proposed and diagnostically analyzed by Song. We investigate numerical consistency with respect to global energy conservation, depth-integrated momentum changes, and the representation of the bottom pressure torque.
Subjective Confidence in Perceptual Judgments: A Test of the Self-Consistency Model
Koriat, Asher
2011-01-01
Two questions about subjective confidence in perceptual judgments are examined: the bases for these judgments and the reasons for their accuracy. Confidence in perceptual judgments has been claimed to rest on qualitatively different processes than confidence in memory tasks. However, predictions from a self-consistency model (SCM), which had been…
STRONG CONSISTENCY OF M ESTIMATOR IN LINEAR MODEL FOR NEGATIVELY ASSOCIATED SAMPLES
Institute of Scientific and Technical Information of China (English)
Qunying WU
2006-01-01
This paper discusses the strong consistency of M estimator of regression parameter in linear model for negatively associated samples. As a result, the author extends Theorem 1 and Theorem 2 of Shanchao YANG (2002) to the NA errors without necessarily imposing any extra condition.
Functional connectivity modeling of consistent cortico-striatal degeneration in Huntington's disease
Directory of Open Access Journals (Sweden)
Imis Dogan
2015-01-01
Full Text Available Huntington's disease (HD) is a progressive neurodegenerative disorder characterized by a complex neuropsychiatric phenotype. In a recent meta-analysis we identified core regions of consistent neurodegeneration in premanifest HD in the striatum and middle occipital gyrus (MOG). For early manifest HD, convergent evidence of atrophy was most prominent in the striatum, motor cortex (M1) and inferior frontal junction (IFJ). The aim of the present study was to functionally characterize this topography of brain atrophy and to investigate differential connectivity patterns formed by consistent cortico-striatal atrophy regions in HD. Using areas of striatal and cortical atrophy at different disease stages as seeds, we performed task-free resting-state and task-based meta-analytic connectivity modeling (MACM). MACM utilizes the large data source of the BrainMap database and identifies significant areas of above-chance co-activation with the seed region via the activation-likelihood-estimation approach. In order to delineate functional networks formed by cortical as well as striatal atrophy regions, we computed the conjunction between the co-activation profiles of striatal and cortical seeds in the premanifest and manifest stages of HD, respectively. Functional characterization of the seeds was obtained using the behavioral meta-data of BrainMap. Cortico-striatal atrophy seeds of the premanifest stage of HD showed common co-activation with a rather cognitive network including the striatum, anterior insula, lateral prefrontal, premotor, supplementary motor and parietal regions. A similar but more pronounced co-activation pattern, additionally including the medial prefrontal cortex and thalamic nuclei, was found with striatal and IFJ seeds at the manifest HD stage. The striatum and M1 were functionally connected mainly to premotor and sensorimotor areas, posterior insula, putamen and thalamus. Behavioral characterization of the seeds confirmed that experiments
Antoniu, Gabriel; Cudennec, Loïc; Monnet, Sébastien
2006-01-01
This paper addresses the problem of efficient visualization of shared data within code coupling grid applications. These applications are structured as a set of distributed, autonomous, weakly-coupled codes. We focus on the case where the codes are able to interact using the abstraction of a shared data space. We propose an efficient visualization scheme by adapting the mechanisms used to maintain the data consistency. We introduce a new operation called relaxed read, as an extension to the e...
Directory of Open Access Journals (Sweden)
Damiano Monelli
2010-11-01
Full Text Available We present here two self-consistent implementations of a short-term earthquake probability (STEP) model that produces daily seismicity forecasts for the area of the Italian national seismic network. Both implementations combine a time-varying and a time-invariant contribution, for which we assume that the instrumental Italian earthquake catalog provides the best information. For the time-invariant contribution, the catalog is declustered using the clustering technique of the STEP model; the smoothed seismicity model is generated from the declustered catalog. The time-varying contribution is what distinguishes the two implementations: (1) for one implementation (STEP-LG), the original model parameterization and estimation is used; (2) for the other (STEP-NG), the mean abundance method is used to estimate aftershock productivity. In the STEP-NG implementation, earthquakes with magnitude up to ML = 6.2 are expected to be less productive compared to the STEP-LG implementation, whereas larger earthquakes are expected to be more productive. We have retrospectively tested the performance of these two implementations and applied likelihood tests to evaluate their consistency with observed earthquakes. Both implementations were consistent with the observed earthquake data in space, and STEP-NG performed better than STEP-LG in terms of forecast rates. More generally, we found that testing earthquake forecasts issued at regular intervals does not test the full power of clustering models, and future experiments should allow for more frequent forecasts starting at the times of triggering events.
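Likelihood tests of this kind typically score a forecast by the joint Poisson log-likelihood of the observed counts in each space-magnitude bin. A minimal sketch with invented rates and counts:

```python
import math

def poisson_loglik(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed bin counts given forecast rates."""
    return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
               for lam, n in zip(forecast_rates, observed_counts))

rates = [0.2, 1.5, 0.8]   # hypothetical forecast rates per bin
obs = [0, 2, 1]           # hypothetical observed earthquake counts
ll = poisson_loglik(rates, obs)
```

In CSEP-style testing, this likelihood is compared against its distribution under catalogs simulated from the forecast to decide whether the model is consistent with the observations.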
Altmeyer, Guillaume; Panicaud, Benoit; Rouhaud, Emmanuelle; Wang, Mingchuan; Roos, Arjen; Kerner, Richard
2016-11-01
When constructing viscoelastic models, rate-form relations appear naturally to relate strain and stress tensors. One has to ensure that these tensors and their rates are indifferent with respect to the change of observers and to the superposition with rigid body motions. Objective transports are commonly accepted to ensure this invariance. However, the large number of transport operators developed makes the choice often difficult for the user and may lead to physically inconsistent formulation of hypoelasticity. In this paper, a methodology based on the use of the Lie derivative is proposed to model consistent hypoelasticity as an equivalent incremental formulation of hyperelasticity. Both models are shown to be reversible and completely equivalent. Extension to viscoelasticity is then proposed from this consistent model by associating consistent hypoelastic models with viscous behavior. As an illustration, Mooney-Rivlin nonlinear elasticity is coupled with Newton viscosity and a Maxwell-like material is investigated. Numerical solutions are then presented to illustrate a viscoelastic material subjected to finite deformations for a large range of strain rates.
Urada, Lianne A; Morisky, Donald E; Hernandez, Laufred I; Strathdee, Steffanie A
2013-02-01
This paper examined socio-structural factors of consistent condom use among female entertainment workers at high risk for acquiring HIV in Metro Manila, Quezon City, Philippines. Entertainers, aged 18 and over, from 25 establishments (spa/saunas, night clubs, karaoke bars), who traded sex during the previous 6 months, underwent cross-sectional surveys. The 143 entertainers (42% not always using condoms, 58% always using condoms) had median age (23), duration in sex work (7 months), education (9 years), and 29% were married/had live-in boyfriends. In a logistic multiple regression model, social-structural vs. individual factors were associated with inconsistent condom use: being forced/deceived into sex work, less manager contact, less STI/HIV prevention knowledge acquired from medical personnel/professionals, not following a co-workers' condom use advice, and an interaction between establishment type and alcohol use with establishment guests. Interventions should consider the effects of physical (force/deception into work), social (peer, manager influence), and policy (STI/HIV prevention knowledge acquired from medical personnel/professionals) environments on consistent condom use.
Lu, Wei; Song, Joo Hyun; Christensen, Gary E.; Parikh, Parag J.; Bradley, Jeffrey D.; Low, Daniel A.
2006-03-01
Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations providing more accurate correspondence between two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivering.
Rudzinski, Joseph F; Bereau, Tristan
2016-01-01
Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically ...
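The Markov state modeling framework underlying this method starts from a transition-count matrix estimated from discretized trajectories. A minimal sketch of that baseline step (the reweighting of transitions itself is beyond this snippet; the trajectory is toy data):

```python
import numpy as np

def estimate_msm(traj, n_states, lag=1):
    """Row-normalized transition matrix from a discrete state trajectory."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-lag], traj[lag:]):
        counts[i, j] += 1.0   # count observed transitions at the given lag
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

traj = [0, 0, 1, 1, 0, 2, 2, 1, 0, 0, 1, 2]  # toy discretized trajectory
T = estimate_msm(traj, n_states=3)
```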
Zhang, Zhen; Guo, Chonghui
2016-08-01
Due to the uncertainty of the decision environment and the lack of knowledge, decision-makers may use uncertain linguistic preference relations to express their preferences over alternatives and criteria. For group decision-making problems with preference relations, it is important to consider the individual consistency and the group consensus before aggregating the preference information. In this paper, consistency and consensus models for group decision-making with uncertain 2-tuple linguistic preference relations (U2TLPRs) are investigated. First of all, a formula which can construct a consistent U2TLPR from the original preference relation is presented. Based on the consistent preference relation, the individual consistency index for a U2TLPR is defined. An iterative algorithm is then developed to improve the individual consistency of a U2TLPR. To help decision-makers reach consensus in group decision-making under uncertain linguistic environment, the individual consensus and group consensus indices for group decision-making with U2TLPRs are defined. Based on the two indices, an algorithm for consensus reaching in group decision-making with U2TLPRs is also developed. Finally, two examples are provided to illustrate the effectiveness of the proposed algorithms.
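For a crisp numeric preference relation, the construct-a-consistent-relation / measure-deviation / blend-toward-it loop described above can be sketched directly. The uncertain 2-tuple linguistic machinery of the paper is deliberately stripped away here; the additive-consistency formula and the blending parameter theta are standard choices in this literature, not necessarily the paper's.

```python
# Reciprocal fuzzy preference relation: p_ij in [0,1], p_ij + p_ji = 1.
n = 3
P = [[0.5, 0.7, 0.9],
     [0.3, 0.5, 0.6],
     [0.1, 0.4, 0.5]]

def consistent_version(P):
    """Additively consistent relation built from P:
    pbar_ij = (1/n) * sum_k (p_ik + p_kj - 0.5)."""
    n = len(P)
    return [[sum(P[i][k] + P[k][j] - 0.5 for k in range(n)) / n
             for j in range(n)] for i in range(n)]

def consistency_index(P):
    """1 minus the mean absolute deviation from the consistent version."""
    n = len(P)
    Pbar = consistent_version(P)
    dev = sum(abs(P[i][j] - Pbar[i][j]) for i in range(n) for j in range(n))
    return 1.0 - dev / (n * n)

ci = consistency_index(P)

# One iteration of consistency improvement: blend toward the consistent
# version; for reciprocal P this provably shrinks the deviation by (1-theta).
theta = 0.5
Pbar = consistent_version(P)
P2 = [[(1 - theta) * P[i][j] + theta * Pbar[i][j] for j in range(n)]
      for i in range(n)]
```

Repeating the blending step until the index exceeds a threshold is the shape of the iterative algorithm; the paper's version additionally handles interval bounds and linguistic 2-tuples.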
The fundamental solution for a consistent complex model of the shallow shell equations
Directory of Open Access Journals (Sweden)
Matthew P. Coleman
1999-09-01
Full Text Available The calculation of the Fourier transforms of the fundamental solution in shallow shell theory was ostensibly accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for the particular cases of the shallow spherical and circular cylindrical shells, and the results of the latter are seen to be in agreement with results appearing elsewhere in the literature.
Tests and applications of self-consistent cranking in the interacting boson model
Kuyucak, S; Kuyucak, Serdar; Sugita, Michiaki
1999-01-01
The self-consistent cranking method is tested by comparing the cranking calculations in the interacting boson model with the exact results obtained from the SU(3) and O(6) dynamical symmetries and from numerical diagonalization. The method is used to study the spin dependence of shape variables in the $sd$ and $sdg$ boson models. When realistic sets of parameters are used, both models lead to similar results: axial shape is retained with increasing cranking frequency while fluctuations in the shape variable $\gamma$ are slightly reduced.
Consistency maintenance for constraint in role-based access control model
Institute of Scientific and Technical Information of China (English)
韩伟力; 陈刚; 尹建伟; 董金祥
2002-01-01
Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in RBAC model. Based on researches of constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper introduces briefly the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.
A New Hierarchy of Phylogenetic Models Consistent with Heterogeneous Substitution Rates.
Woodhams, Michael D; Fernández-Sánchez, Jesús; Sumner, Jeremy G
2015-07-01
When the process underlying DNA substitutions varies across evolutionary history, some standard Markov models underlying phylogenetic methods are mathematically inconsistent. The most prominent example is the general time-reversible model (GTR) together with some, but not all, of its submodels. To rectify this deficiency, nonhomogeneous Lie Markov models have been identified as the class of models that are consistent in the face of a changing process of DNA substitutions regardless of taxon sampling. Some well-known models in popular use are within this class, but are either overly simplistic (e.g., the Kimura two-parameter model) or overly complex (the general Markov model). On a diverse set of biological data sets, we test a hierarchy of Lie Markov models spanning the full range of parameter richness. Compared against the benchmark of the ever-popular GTR model, we find that as a whole the Lie Markov models perform well, with the best performing models having 8-10 parameters and the ability to recognize the distinction between purines and pyrimidines. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
Hirai, Kenta; Mita, Akira
2016-04-01
Against a social background of repeated large earthquakes and of cheating in design and construction, structural health monitoring (SHM) systems are attracting strong attention and have reached a practical phase. An SHM system consisting of a small number of sensors has been introduced into six tall buildings in the Shinjuku area. Such small-sensor-count SHM systems face two major issues. First, the optimal number of sensors and their locations are not well defined; in practice, sensor placement is determined from rough prediction and experience. Second, there are uncertainties in the estimation results produced by the SHM systems. The purpose of this research is therefore to provide useful information for increasing the reliability of SHM systems and to improve estimation results through uncertainty analysis. The key damage index used here is the inter-story drift angle. The uncertainties considered are the number of sensors, earthquake motion characteristics, noise in the data, the error between the numerical model and the real building, and parameter nonlinearity; the influence of each factor on estimation accuracy is analyzed. The analysis helps in designing sensor systems that balance cost and accuracy. Because of the constraint on the number of sensors, estimates produced by the SHM system tend to be smaller than the true values. To overcome this problem, a compensation algorithm is discussed and presented, and its usefulness is demonstrated for 40-story steel and reinforced-concrete building models with nonlinear response.
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
The semiparametric reproductive dispersion nonlinear model (SRDNM) is an extension of nonlinear reproductive dispersion models and semiparametric nonlinear regression models, and includes the semiparametric nonlinear model and the semiparametric generalized linear model as special cases. Based on the local kernel estimate of the nonparametric component, profile-kernel and backfitting estimators of the parameters of interest are proposed in SRDNM, and a theoretical comparison of the two estimators is also investigated in this paper. Under some regularity conditions, strong consistency and asymptotic normality of the two estimators are proved. It is shown that the backfitting method produces a larger asymptotic variance than the profile-kernel method. A simulation study and a real example are used to illustrate the proposed methodologies.
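The local kernel estimate of the nonparametric component is, at its core, a kernel-weighted average. A minimal Nadaraya-Watson smoother (Gaussian kernel, hand-picked bandwidth) shows the ingredient that both the profile-kernel and backfitting estimators build on; the data and bandwidth below are illustrative, not the paper's estimator.

```python
import math

def nw_estimate(x0, xs, ys, h):
    """Nadaraya-Watson local kernel estimate of E[y | x = x0] with a
    Gaussian kernel of bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Noise-free data on a smooth curve: the smoother should come close,
# up to a small bias of order h^2.
xs = [i / 20 for i in range(21)]
ys = [math.sin(2 * x) for x in xs]
est = nw_estimate(0.5, xs, ys, h=0.05)
```

Profile-kernel estimation plugs such a smoother into the parametric likelihood for each candidate parameter value, while backfitting alternates between the parametric and nonparametric fits; the asymptotic-variance comparison in the paper is about exactly this difference.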
Detecting consistent patterns of directional adaptation using differential selection codon models.
Parto, Sahar; Lartillot, Nicolas
2017-06-23
Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
Hess, Julian; Wang, Yongqi
2016-11-01
A new mixture model for granular-fluid flows, thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Beyond the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, used to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane treated with the depth-integrated model. The results give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within Project Number WA 2610/3-1.
A control-oriented self-consistent model of an inductively-coupled plasma
Keville, Bernard; Turner, Miles
2009-10-01
An essential first step in the design of real-time control algorithms for plasma processes is to determine dynamical relationships between actuator quantities, such as gas flow rate set points, and plasma states, such as electron density. An ideal first-principles-based, control-oriented model should exhibit the simplicity and computational requirements of an empirical model and, despite sacrificing first-principles detail, capture enough of the essential physics and chemistry of the process to provide reasonably accurate qualitative predictions. This presentation describes a control-oriented model of a cylindrical low-pressure planar inductive discharge with a stove-top antenna. The model consists of an equivalent circuit coupled to a global model of the plasma chemistry, producing a self-consistent zero-dimensional model of the discharge. The non-local plasma conductivity and the fields in the plasma are determined from the wave equation and the two-term solution of the Boltzmann equation. Expressions for the antenna impedance and the parameters of the transformer equivalent circuit in terms of the isotropic electron distribution and the geometry of the chamber are presented.
Consistent increase in Indian monsoon rainfall and its variability across CMIP-5 models
Directory of Open Access Journals (Sweden)
A. Menon
2013-01-01
Full Text Available The possibility of an impact of global warming on the Indian monsoon is of critical importance for the large population of this region. Future projections within the Coupled Model Intercomparison Project Phase 3 (CMIP-3) showed a wide range of trends with varying magnitude and sign across models. Here the Indian summer monsoon rainfall is evaluated in 20 CMIP-5 models for the period 1850 to 2100. In the new generation of climate models, a consistent increase in seasonal mean rainfall during the summer monsoon periods arises. All models simulate stronger seasonal mean rainfall in the future compared to the historic period under the strongest warming scenario, RCP-8.5. The increase in seasonal mean rainfall is largest for the RCP-8.5 scenario compared to the other RCPs. The interannual variability of the Indian monsoon rainfall also shows a consistent positive trend under unabated global warming. Since both the long-term increase in monsoon rainfall and the increase in interannual variability in the future are robust across a wide range of models, some confidence can be attributed to these projected trends.
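"Consistent across models" here means that every model's fitted long-term slope has the same (positive) sign. That check is mechanical, as the sketch below shows on synthetic stand-in series; the trends, amplitudes, and baseline are invented numbers, not CMIP-5 output.

```python
import math

years = list(range(1850, 2101))

def model_series(trend, amp, phase):
    """Synthetic seasonal-mean rainfall: linear trend + 'interannual' wiggle."""
    return [7.0 + trend * (y - 1850) + amp * math.sin(0.7 * y + phase)
            for y in years]

models = [model_series(0.004, 0.5, 0.0),
          model_series(0.006, 0.6, 1.0),
          model_series(0.005, 0.4, 2.0)]

def linear_trend(vals):
    """Least-squares slope of vals against year."""
    n = len(years)
    ym = sum(years) / n
    vm = sum(vals) / n
    num = sum((y - ym) * (v - vm) for y, v in zip(years, vals))
    den = sum((y - ym) ** 2 for y in years)
    return num / den

trends = [linear_trend(m) for m in models]
consistent_increase = all(t > 0 for t in trends)
```

With real multi-model output, the same sign test would be applied to each model's fitted slope of seasonal means, and the interannual variability claim to the slope of a detrended rolling standard deviation.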
Directory of Open Access Journals (Sweden)
J. Callies
2011-08-01
Full Text Available A simple model of the thermohaline circulation (THC) is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.
This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.
Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
Directory of Open Access Journals (Sweden)
J. Callies
2012-01-01
Full Text Available A simple model of the thermohaline circulation (THC) is formulated, with the objective to represent explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.
This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.
Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
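The quoted power laws can be turned into arithmetic. Writing the overturning strength as psi = C * kappa^(2/3) * delta_rho^(1/3), with the prefactor C left arbitrary (it is not given in the abstract), ratios between experiments are fixed regardless of C:

```python
# Scaling of overturning strength with vertical diffusivity (kappa) and
# imposed meridional surface density difference (delta_rho), as reported.
def overturning(kappa, delta_rho, C=1.0):
    return C * kappa ** (2.0 / 3.0) * delta_rho ** (1.0 / 3.0)

base = overturning(1e-4, 2.0)

# Doubling vertical diffusivity -> factor 2^(2/3) ~ 1.587.
ratio_kappa = overturning(2e-4, 2.0) / base
# Doubling the surface density difference -> factor 2^(1/3) ~ 1.260.
ratio_rho = overturning(1e-4, 4.0) / base
```

This is why the diffusivity dependence dominates sensitivity experiments: the same factor-of-two change moves the overturning by 59% via kappa but only 26% via the density forcing.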
Non-Perturbative Self-Consistent Model in SU(N) Gauge Field Theory
Directory of Open Access Journals (Sweden)
Koshelkin A.V.
2012-06-01
Full Text Available A non-perturbative quasi-classical model in a gauge theory with the Yang-Mills (YM) field is developed. The self-consistent solutions of the Dirac equation in the SU(N) gauge field, which is treated in the eikonal approximation, and of the Yang-Mills equations containing the external fermion current are obtained. It is shown that the developed model has self-consistent solutions of the Dirac and Yang-Mills equations at N ≥ 3. These solutions exist provided that the fermion and gauge fields are present simultaneously, so that the fermion current completely compensates the current generated by the gauge field through its self-interaction.
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
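The vortex-stretching-based eddy viscosity mentioned at the end can be unpacked: from the velocity gradient G = grad(u) one forms the strain-rate tensor S and the vorticity vector w, and the model keys its dissipation to the vortex-stretching magnitude |S w|, which vanishes in purely rotational (and two-component) flow. The sketch below computes only that magnitude; the model constant, length scale, and exact normalization are the authors' choices and are not reproduced here.

```python
# G[i][j] = du_i/dx_j, a 3x3 velocity-gradient tensor.
def strain_and_vorticity(G):
    S = [[0.5 * (G[i][j] + G[j][i]) for j in range(3)] for i in range(3)]
    w = [G[2][1] - G[1][2],  # curl components of u
         G[0][2] - G[2][0],
         G[1][0] - G[0][1]]
    return S, w

def vortex_stretching_magnitude(G):
    S, w = strain_and_vorticity(G)
    Sw = [sum(S[i][j] * w[j] for j in range(3)) for i in range(3)]
    return sum(x * x for x in Sw) ** 0.5

# Pure rotation (no strain): the measure correctly switches off.
G_rot = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
mag_rot = vortex_stretching_magnitude(G_rot)

# A vortex aligned with an extensional strain direction: |S w| > 0.
G_stretch = [[1.0, 0.0, 0.0], [0.0, -0.5, -1.0], [0.0, 1.0, -0.5]]
mag_stretch = vortex_stretching_magnitude(G_stretch)
```

Note that |S w| also vanishes for plane shear, where the vorticity is perpendicular to the plane of strain; this built-in selectivity is one reason vortex-stretching-based measures behave well near walls.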
Institute of Scientific and Technical Information of China (English)
John Jack P. RIEGEL III; David DAVISON
2016-01-01
Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D=10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a baseline with a full
Directory of Open Access Journals (Sweden)
John (Jack P. Riegel III
2016-04-01
Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth of penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a
Institute of Scientific and Technical Information of China (English)
YIN; Changming; ZHAO; Lincheng; WEI; Chengdong
2006-01-01
In a generalized linear model with $q \times 1$ responses, bounded and fixed (or adaptive) $p \times q$ regressors $Z_i$, and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i'$, a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
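In generic notation (the paper's exact weighting may differ), the maximum quasi-likelihood estimate $\hat\beta_n$ is the root of the quasi-score equations built from the link mean function $\mu$:

```latex
\sum_{i=1}^{n} Z_i \bigl( y_i - \mu(Z_i' \beta) \bigr) = 0 ,
\qquad Z_i \in \mathbb{R}^{p \times q},\ y_i \in \mathbb{R}^{q},\ \beta \in \mathbb{R}^{p}.
```

The divergence of $\lambda_{\min}\bigl(\sum_{i=1}^{n} Z_i Z_i'\bigr)$ is what guarantees that the accumulated design information grows without bound, which is the engine behind strong consistency of $\hat\beta_n$.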
A thermodynamically consistent model of the post-translational Kai circadian clock
Lubensky, David K.; ten Wolde, Pieter Rein
2017-01-01
The principal pacemaker of the circadian clock of the cyanobacterium S. elongatus is a protein phosphorylation cycle consisting of three proteins, KaiA, KaiB and KaiC. KaiC forms a homohexamer, with each monomer consisting of two domains, CI and CII. Both domains can bind and hydrolyze ATP, but only the CII domain can be phosphorylated, at two residues, in a well-defined sequence. While this system has been studied extensively, how the clock is driven thermodynamically has remained elusive. Inspired by recent experimental observations and building on ideas from previous mathematical models, we present a new, thermodynamically consistent, statistical-mechanical model of the clock. At its heart are two main ideas: i) ATP hydrolysis in the CI domain provides the thermodynamic driving force for the clock, switching KaiC between an active conformational state in which its phosphorylation level tends to rise and an inactive one in which it tends to fall; ii) phosphorylation of the CII domain provides the timer for the hydrolysis in the CI domain. The model also naturally explains how KaiA, by acting as a nucleotide exchange factor, can stimulate phosphorylation of KaiC, and how the differential affinity of KaiA for the different KaiC phosphoforms generates the characteristic temporal order of KaiC phosphorylation. As the phosphorylation level in the CII domain rises, the release of ADP from CI slows down, making the inactive conformational state of KaiC more stable. In the inactive state, KaiC binds KaiB, which not only stabilizes this state further, but also leads to the sequestration of KaiA, and hence to KaiC dephosphorylation. Using a dedicated kinetic Monte Carlo algorithm, which makes it possible to efficiently simulate this system consisting of more than a billion reactions, we show that the model can describe a wealth of experimental data. PMID:28296888
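The "dedicated kinetic Monte Carlo algorithm" is, at bottom, a Gillespie-type loop: draw an exponential waiting time from the current rate, fire a transition, repeat. The two-state caricature below shows only that loop; the actual model tracks hexamers, two phosphorylation sites per monomer, conformational state, and KaiA/KaiB binding, and the rates here are invented.

```python
import random

# A single KaiC site hopping between unphosphorylated (U) and
# phosphorylated (P) with rates k_up, k_down.
def gillespie_fraction_p(k_up, k_down, t_end, seed=1):
    """Kinetic Monte Carlo run; returns the fraction of time spent in P."""
    rng = random.Random(seed)
    t, state = 0.0, 0            # 0 = U, 1 = P
    time_in_p = 0.0
    while True:
        rate = k_up if state == 0 else k_down
        dt = rng.expovariate(rate)           # exponential waiting time
        if t + dt > t_end:
            if state == 1:
                time_in_p += t_end - t
            return time_in_p / t_end
        if state == 1:
            time_in_p += dt
        t += dt
        state = 1 - state                    # fire the transition

frac_p = gillespie_fraction_p(k_up=2.0, k_down=1.0, t_end=10000.0)
# Long-run occupancy of P should approach k_up / (k_up + k_down) = 2/3.
```

The event-driven structure is what makes billions of reactions tractable: computational cost scales with the number of events fired, not with a fixed time step.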
Structures in Molecular Clouds: Modeling
Energy Technology Data Exchange (ETDEWEB)
Kane, J O; Mizuta, A; Pound, M W; Remington, B A; Ryutov, D D
2006-04-20
We attempt to predict the observed morphology, column density, and velocity gradient of Pillar II of the Eagle Nebula, using Rayleigh-Taylor (RT) models in which growth is seeded by an initial perturbation in density or in the shape of the illuminated surface, and cometary models in which structure arises from an initially spherical cloud with a dense core. To mitigate the suppression of RT growth by recombination, we use a large cylindrical model volume containing the illuminating source, the self-consistently evolving ablated outflow, and the photon flux field, and use initial clouds of finite lateral extent. An RT model shows no growth, while a cometary model appears to be more successful at reproducing observations.
Self-Consistent Ring Current/Electromagnetic Ion Cyclotron Waves Modeling
Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.
2006-01-01
The self-consistent treatment of the RC ion dynamics and EMIC waves, which are thought to exert important influences on the ion dynamical evolution, is an important missing element in our understanding of the storm- and recovery-time ring current evolution. For example, the EMIC waves cause the RC to decay on a time scale of about one hour or less during the main phase of storms. The oblique EMIC waves damp due to Landau resonance with the thermal plasmaspheric electrons, and subsequent transport of the dissipating wave energy into the ionosphere below causes an ionospheric temperature enhancement. Under certain conditions, relativistic electrons, with energies ≥1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves developed by Khazanov et al. [2002, 2003] to include heavy ions and the propagation effects of EMIC waves in the global dynamics of self-consistent RC-EMIC wave coupling. The results of our newly developed model will be presented at the meeting, focusing mainly on the dynamics of EMIC waves and on a comparison of these results with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.
Quantal self-consistent cranking model for monopole excitations in even-even light nuclei
Gulshani, P
2014-01-01
In this article, we derive a quantal self-consistent time-reversal-invariant cranking model for isoscalar monopole excitation coupled to intrinsic motion in even-even light nuclei. The model uses a wavefunction that is a product of monopole and intrinsic wavefunctions and a constrained variational method to derive, from a many-particle Schrödinger equation, a pair of coupled self-consistent cranking-type Schrödinger equations for the monopole and intrinsic systems. The monopole and intrinsic wavefunctions are coupled to each other by the two cranking equations and their associated parameters and by two constraints imposed on the intrinsic system. For an isotropic Nilsson shell model and an effective residual two-body interaction, the two coupled cranking equations are solved in the Tamm-Dancoff approximation. The strength of the interaction is determined from a Hartree-Fock self-consistency argument. The excitation energy of the first excited state is determined and found to agree closely with those observed ...
A Self-Consistent Model for Thermal Oxidation of Silicon at Low Oxide Thickness
Directory of Open Access Journals (Sweden)
Gerald Gerlach
2016-01-01
Full Text Available Thermal oxidation of silicon belongs to the most decisive steps in microelectronic fabrication because it allows creating electrically insulating areas which enclose electrically conductive devices and device areas, respectively. Deal and Grove developed the first model (DG-model) for the thermal oxidation of silicon, describing the oxide thickness versus oxidation time relationship with very good agreement for oxide thicknesses of more than 23 nm. Their approach, termed the general relationship, is the basis of many similar investigations. However, measurement results show that the DG-model does not apply to very thin oxides in the range of a few nm. Additionally, it is inherently not self-consistent. The aim of this paper is to develop a self-consistent model that is based on the continuity equation instead of Fick's law, as the DG-model is. As literature data show, the relationship between silicon oxide thickness and oxidation time is governed, down to oxide thicknesses of just a few nm, by a power-of-time law. Given the time-independent surface concentration of oxidants at the oxide surface, Fickian diffusion seems to be negligible for oxidant migration. The oxidant flux has been revealed to be carried by non-Fickian flux processes depending on sites able to lodge dopants (oxidants), the so-called DOCC-sites, as well as on the dopant jump rate.
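For reference, the DG-model relationship the paper starts from is x² + Ax = B(t + τ), with a linear limit x ≈ (B/A)t at short times and a parabolic limit x ≈ √(Bt) at long times; the paper's point is that below a few nm this form gives way to a power-of-time law. A small sketch (the constants are of the order reported for dry oxidation, used here only as illustration):

```python
import math

def oxide_thickness(t, A, B, x0=0.0):
    """Deal-Grove solution of x^2 + A*x = B*(t + tau), solved for x,
    with tau chosen so that x(0) = x0 (x in um, t in hours)."""
    tau = (x0 ** 2 + A * x0) / B
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A ** 2) - 1.0)

A, B = 0.165, 0.0117             # illustrative dry-oxidation-like constants

# Short times: reaction-limited linear regime, x ~ (B/A) * t.
t_short = 0.01
x_s = oxide_thickness(t_short, A, B)

# Long times: diffusion-limited parabolic regime, x ~ sqrt(B * t).
t_long = 1000.0
x_l = oxide_thickness(t_long, A, B)
```

The offset x0 is how DG-style fits absorb the anomalous initial growth; the abstract's ">23 nm" validity range is exactly the region where this patch is unnecessary.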
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
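A minimal illustration of the kind of conservation-law discretization such a settler model requires: the sketch below applies a monotone Lax-Friedrichs scheme to u_t + f(u)_z = 0 with a Vesilind-type hindered-settling flux in a closed batch column. The flux function, parameter values and grid are illustrative assumptions, not the constitutive relations or the reliable numerical method of the paper:

```python
import math

def vesilind_flux(u, v0=2.0, r=0.5):
    # Hindered-settling flux: concentration times an exponentially
    # decreasing (Vesilind-type) settling velocity.
    return v0 * u * math.exp(-r * u)

def step(u, dz, dt, alpha):
    """One explicit Lax-Friedrichs step for u_t + f(u)_z = 0 on a closed
    batch column (zero flux through top and bottom walls)."""
    n = len(u)
    F = [0.0] * (n + 1)          # fluxes at the n+1 cell interfaces
    for i in range(1, n):
        ul, ur = u[i - 1], u[i]
        F[i] = 0.5 * (vesilind_flux(ul) + vesilind_flux(ur)) \
            - 0.5 * alpha * (ur - ul)
    return [u[i] - dt / dz * (F[i + 1] - F[i]) for i in range(n)]

# Batch settling of an initially uniform suspension.
n, dz, dt = 50, 0.1, 0.01
alpha = 2.0          # >= max |f'(u)| for u >= 0 (monotonicity/CFL bound)
u = [3.0] * n
mass0 = sum(u) * dz
for _ in range(200):
    u = step(u, dz, dt, alpha)
# Interior fluxes telescope, so total mass is conserved to round-off.
assert abs(sum(u) * dz - mass0) < 1e-9
```

Here alpha = 2.0 bounds |f'(u)| = v0·e^(-ru)·|1 - ru| for these parameters, which makes the scheme monotone under the CFL condition alpha·dt/dz ≤ 1.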
Directory of Open Access Journals (Sweden)
J. G. Fyke
2013-04-01
Full Text Available A new technique for generating preindustrial (1850) ice sheet initial conditions for coupled ice-sheet/climate models is developed and demonstrated over the Greenland Ice Sheet using the Community Earth System Model (CESM). Paleoclimate end-member simulations and ice core data are used to derive continuous surface mass balance fields, which are used to force a long transient ice sheet model simulation. The procedure accounts for the evolution of climate through the last glacial period and converges to a simulated 1850 preindustrial ice sheet that is geometrically and thermodynamically consistent with the simulated 1850 preindustrial CESM state, yet contains a transient memory of past climate that compares well to observations and independent model studies. This allows future coupled ice-sheet/climate projections to integrate the effect of past climate conditions on the state of the Greenland Ice Sheet, while maintaining system-wide continuity between past and future climate simulations.
Amruth, B. R.; Patwardhan, Ajay
2006-01-01
Cosmological inflation models with modifications to include recent cosmological observations have been an active area of research after the WMAP 3 results, which gave us high-precision information about the composition of dark matter, normal matter and dark energy and about the anisotropy at the 300,000-year horizon. We work with the inflation models of Guth and Linde and modify them by introducing a doublet scalar field that gives normal matter particles and their supersymmetric partners, which result in the normal and dark matter of our universe. We include the cosmological constant term, as the vacuum expectation value of the stress-energy tensor, as the dark energy. We calibrate the parameters of our model using recent observations of density fluctuations and develop a model that fits consistently with the recent observations.
SALT Spectropolarimetry and Self-Consistent SED and Polarization Modeling of Blazars
Böttcher, Markus; van Soelen, Brian; Britto, Richard; Buckley, David; Marais, Johannes; Schutte, Hester
2017-09-01
We report on recent results from a target-of-opportunity program to obtain spectropolarimetry observations with the Southern African Large Telescope (SALT) on flaring gamma-ray blazars. SALT spectropolarimetry and contemporaneous multi-wavelength spectral energy distribution (SED) data are being modelled self-consistently with a leptonic single-zone model. Such modeling provides an accurate estimate of the degree of order of the magnetic field in the emission region and the thermal contributions (from the host galaxy and the accretion disk) to the SED, thus putting strong constraints on the physical parameters of the gamma-ray emitting region. For the specific case of the $\gamma$-ray blazar 4C+01.02, we demonstrate that the combined SED and spectropolarimetry modeling constrains the mass of the central black hole in this blazar to $M_{\rm BH} \sim 10^9 \, M_{\odot}$.
Lousse, V; Vigneron, J P
2001-02-01
The theory of photonic crystals is extended to include the optical Kerr effect taking place in weak third-order, nonlinear materials present in the unit cell. The influence on the dispersion relations of the illumination caused by a single Bloch mode transiting through the crystal structure is examined. Special attention is given to the modification of the photonic gap width and position. Assuming an instantaneous change of refractive index with illumination, the nonlinear band structure problem is solved as a sequence of ordinary, linear band structure calculations, carried out in a plane-wave field representation.
Consistent approach to edge detection using multiscale fuzzy modeling analysis in the human retina
Directory of Open Access Journals (Sweden)
Mehdi Salimian
2012-06-01
Full Text Available Today, many widely used image processing algorithms based on the human visual system have been developed. In this paper, a smart edge detector based on modeling the responses of simple and complex cells, as well as the multi-scale processing performed in the primary visual cortex, is presented. A method for adjusting the parameters of Gabor filters (mathematical models of simple cells) and a proposed nonlinear threshold response are presented in order to model simple and complex cells. Moreover, owing to the multi-scale analysis performed in the human retina, the proposed algorithm detects and localizes the edges of both small and large structures with high precision. Comparing the results of the proposed method on a reliable database with those of conventional methods shows the higher performance (about 4-13%) and reliability of the proposed method in edge detection and localization.
Giorgi, F.; Coppola, E.; Raffaele, F.
2014-10-01
We analyze trends of six daily precipitation-based and physically interconnected hydroclimatic indices in an ensemble of historical and 21st century climate projections under forcing from increasing greenhouse gas (GHG) concentrations (Representative Concentration Pathways (RCP)8.5), along with gridded (land only) observations for the late decades of the twentieth century. The indices include metrics of intensity (SDII) and extremes (R95) of precipitation, dry (DSL), and wet spell length, the hydroclimatic intensity index (HY-INT), and a newly introduced index of precipitation area (PA). All the indices in both the 21st century and historical simulations provide a consistent picture of a predominant shift toward a hydroclimatic regime of more intense, shorter, less frequent, and less widespread precipitation events in response to GHG-induced global warming. The trends are larger and more spatially consistent over tropical than extratropical regions, pointing to the importance of tropical convection in regulating this response, and show substantial regional spatial variability. Observed trends in the indices analyzed are qualitatively and consistently in line with the simulated ones, at least at the global and full tropical scale, further supporting the robustness of the identified prevailing hydroclimatic responses. The HY-INT, PA, and R95 indices show the most consistent response to global warming, and thus offer the most promising tools for formal hydroclimatic model validation and detection/attribution studies. The physical mechanism underlying this response and some of the applications of our results are also discussed.
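Two of the indices discussed can be computed directly from a daily precipitation series: SDII is conventionally the mean precipitation over wet days (days with at least 1 mm), and the dry spell length follows from runs of consecutive dry days. A sketch using common climate-index conventions; the data values are made up:

```python
def sdii(daily_precip, wet_threshold=1.0):
    """Simple daily intensity index: mean precipitation over wet days
    (days with precip >= wet_threshold, conventionally 1 mm)."""
    wet = [p for p in daily_precip if p >= wet_threshold]
    return sum(wet) / len(wet) if wet else 0.0

def max_dry_spell(daily_precip, wet_threshold=1.0):
    """Length of the longest run of consecutive dry days."""
    best = run = 0
    for p in daily_precip:
        run = run + 1 if p < wet_threshold else 0
        best = max(best, run)
    return best

precip = [0.0, 0.2, 5.0, 12.0, 0.0, 0.0, 0.0, 3.0]   # mm/day, illustrative
print(round(sdii(precip), 2))   # → 6.67  (= (5 + 12 + 3) / 3 wet days)
print(max_dry_spell(precip))    # → 3
```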
Fox-Rabinovitz, Michael S.; Lindzen, Richard S.
1993-01-01
Simple numerical experiments are performed in order to determine the effects of inconsistent combinations of horizontal and vertical resolution in both atmospheric models and observing systems. In both cases, we find that inconsistent spatial resolution is associated with enhanced noise generation. A rather fine horizontal resolution in a satellite-data observing system seems to be excessive when combined with the usually available, relatively coarse vertical resolution. Using horizontal filters of different strengths, adjusted so as to render the effective horizontal resolution more consistent with the vertical resolution of the observing system, may improve analysis accuracy. When vertical resolution is increased for a satellite-data observing system, i.e., with better vertically resolved data, the results differ: little or no horizontal filtering is needed to make the spatial resolution consistent for the system. The experimental estimates of consistent vertical and effective horizontal resolution obtained here are in general agreement with consistent-resolution estimates previously derived theoretically by the authors.
Rudzinski, Joseph F.; Kremer, Kurt; Bereau, Tristan
2016-02-01
Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically improves the time scale separation of the slowest processes. Additionally, constraining the forward and backward rates between metastable states leads to slight improvement of their relative stabilities and, thus, refined equilibrium properties of the resulting model. Finally, we find that difficulties in simultaneously describing both the simulated data and the provided constraints can help identify specific limitations of the underlying simulation approach.
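The Markov state modeling framework referred to here links microscopic transitions to long-time kinetics through a transition matrix estimated from trajectory data. A minimal two-state sketch, illustrating count-based estimation and the implied relaxation timescale rather than the authors' reweighting procedure:

```python
import math

def count_transitions(traj, n_states, lag=1):
    """Row-normalized transition matrix estimated from a discrete trajectory."""
    C = [[0] * n_states for _ in range(n_states)]
    for i, j in zip(traj[:-lag], traj[lag:]):
        C[i][j] += 1
    T = []
    for row in C:
        s = sum(row)
        T.append([c / s for c in row] if s else [1.0 / n_states] * n_states)
    return T

def implied_timescale_2state(T, lag=1):
    """For a 2-state chain the non-unit eigenvalue is 1 - T[0][1] - T[1][0];
    the implied relaxation timescale is -lag / ln(lambda2)."""
    lam2 = 1.0 - T[0][1] - T[1][0]
    return -lag / math.log(lam2)

# Toy trajectory hopping between two metastable states.
traj = [0] * 20 + [1] * 20 + [0] * 20 + [1] * 20
T = count_transitions(traj, 2)
print(implied_timescale_2state(T))   # slow relaxation, in units of the lag time
```

Constraining such implied timescales (or forward/backward rates) against reference kinetic observables is the kind of consistency the abstract describes.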
Ring current Atmosphere interactions Model with Self-Consistent Magnetic field
Energy Technology Data Exchange (ETDEWEB)
2016-09-09
The Ring current Atmosphere interactions Model with Self-Consistent magnetic field (B) is a unique code that combines a kinetic model of ring current plasma with a three-dimensional force-balanced model of the terrestrial magnetic field. The kinetic portion, RAM, solves the kinetic equation to yield the bounce-averaged distribution function as a function of azimuth, radial distance, energy and pitch angle for three ion species (H+, He+, and O+) and, optionally, electrons. The domain is a circle in the Solar-Magnetic (SM) equatorial plane with a radial span of 2 to 6.5 RE and an energy range of approximately 100 eV to 500 keV. The 3-D force-balanced magnetic field model, SCB, balances the JxB force with the divergence of the general pressure tensor to calculate the magnetic field configuration within its domain, which ranges from near the Earth's surface, where the field is assumed dipolar, to the shell created by field lines passing through the SM equatorial plane at a radial distance of 6.5 RE. The two codes work in tandem, with RAM providing anisotropic pressure to SCB and SCB returning the self-consistent magnetic field through which RAM plasma is advected.
Directory of Open Access Journals (Sweden)
G.Shanmugarathinam
2013-01-01
Full Text Available Caching is one of the important techniques in mobile computing: frequently accessed data are stored on mobile clients to avoid network traffic and improve performance. In a mobile computing environment, the number of mobile users grows and they request updates from the server, but the server is often busy and clients have to wait a long time; maintaining cache consistency is difficult for both client and server. This paper proposes a technique using a queuing system, consisting of one or more servers that provide services to arriving mobile hosts, together with agent-based technology. The service mechanism of the queuing system is specified by the number of servers, each with its own queue, and the agent-based technology maintains cache consistency between client and server. This model saves wireless bandwidth, reduces network traffic and reduces the workload on the server. The simulation results were compared with the previous technique, and the proposed model shows significantly better performance than the earlier approach.
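As a baseline for the queuing analysis described, the textbook single-server M/M/1 model already gives closed-form waiting metrics; the paper's agent-based multi-server scheme builds on this kind of analysis. A sketch with illustrative rates:

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue (arrival rate lam < service rate mu)."""
    assert lam < mu, "queue is unstable unless lam < mu"
    rho = lam / mu              # server utilization
    L = rho / (1 - rho)         # mean number of requests in the system
    W = 1 / (mu - lam)          # mean time a request spends in the system
    return rho, L, W

rho, L, W = mm1_metrics(lam=4.0, mu=5.0)
print(round(rho, 3), round(L, 3), round(W, 3))  # → 0.8 4.0 1.0
# Little's law: L = lam * W
assert abs(L - 4.0 * W) < 1e-9
```

In the multi-server setting of the paper, each server has its own queue, so each queue can be analyzed with its own arrival rate in the same fashion.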
Silvis, Maurits H; Verstappen, Roel
2016-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...
Gas cooling in semi-analytic models and SPH simulations: are results consistent?
Saro, A; Borgani, S; Dolag, K
2010-01-01
We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical SPH simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: a) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; b) while all stars associated with the BCG were formed in its progenitors i...
A Fully Nonlinear, Dynamically Consistent Numerical Model for Ship Maneuvering in a Seaway
Directory of Open Access Journals (Sweden)
Ray-Qing Lin
2011-01-01
Full Text Available This is the continuation of our research on the development of a fully nonlinear, dynamically consistent, numerical ship motion model (DiSSEL). In this paper we report our results on modeling ship maneuvering in an arbitrary seaway, one of the most challenging and important problems in seakeeping. In our modeling, we developed an adaptive algorithm to maintain dynamical balances numerically as the encounter frequencies (the wave frequencies as measured on the ship) vary with the ship's maneuvering state. The key to this new algorithm is to evaluate the encounter frequency variation differently in the physical domain and in the frequency domain, thus effectively eliminating possible numerical dynamical imbalances. We have tested this algorithm against several well-documented maneuvering experiments, and our results agree very well with experimental data. In particular, the numerical time series of roll and pitch motions and the numerical ship tracks (i.e., surge, sway, and yaw) are nearly identical to those of the experiments.
Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.; Waas, Anthony M.
2013-01-01
A mesh objective crack band model was implemented within the generalized method of cells micromechanics theory. This model was linked to a macroscale finite element model to predict post-peak strain softening in composite materials. Although a mesh objective theory was implemented at the microscale, it does not preclude pathological mesh dependence at the macroscale. To ensure mesh objectivity at both scales, the energy density and the energy release rate must be preserved identically across the two scales. This requires a consistent characteristic length or localization limiter. The effects of scaling (or not scaling) the dimensions of the microscale repeating unit cell (RUC), according to the macroscale element size, in a multiscale analysis was investigated using two examples. Additionally, the ramifications of the macroscale element shape, compared to the RUC, was studied.
A self-consistent model for a longitudinal discharge excited He-Sr recombination laser
Energy Technology Data Exchange (ETDEWEB)
Carman, R.J. (Centre for Lasers and Applications, Macquarie University, Sydney NSW 2109 (AU))
1990-09-01
A computer model has been developed to simulate the plasma kinetics in a high-repetition frequency, discharge excited He-Sr recombination laser. A detailed rate equation analysis, incorporating about 80 collisional and radiative processes, is used to determine the temporal and spatial (radial) behavior of the discharge parameters and the intracavity laser field during the current pulse, recombination phase, and afterglow periods. The set of coupled first-order ordinary differential equations used to describe the plasma and external electrical circuit are integrated over multiple discharge cycles to yield fully self-consistent results. The computer model has been used to simulate the behavior of the laser for a set of standard conditions corresponding to typical operating conditions. The species population densities predicted by the model are compared with radial and time-dependent Hook measurements determined experimentally for the same set of standard conditions.
A heterogeneous traffic flow model consisting of two types of vehicles with different sensitivities
Li, Zhipeng; Xu, Xun; Xu, Shangzhi; Qian, Yeqing
2017-01-01
A heterogeneous car-following model is constructed for traffic flow consisting of low- and high-sensitivity vehicles. The stability criterion of the new model is obtained by using linear stability theory. We derive the neutral stability diagram for the proposed model, with five distinct regions, and determine the effect of the percentage of low-sensitivity vehicles on traffic stability in each region. In addition, we consider the special case in which the number of low-sensitivity vehicles equals that of high-sensitivity ones, and explore the dependence of traffic stability on the average value and the standard deviation of the two sensitivities characterizing the two vehicle types. Direct numerical simulation results verify the conclusions of the theoretical analysis.
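For the homogeneous limit of such car-following models, the classical linear stability criterion of the optimal velocity (OV) model is that uniform flow is stable when the sensitivity a exceeds 2V'(h). The sketch below checks this baseline criterion for a common choice of OV function; it is not the paper's refined heterogeneous criterion:

```python
import math

def ov_function(h, vmax=2.0, hc=4.0):
    """A common optimal velocity function (Bando-type) of headway h."""
    return vmax * (math.tanh(h - hc) + math.tanh(hc)) / 2.0

def ov_slope(h, eps=1e-6):
    # Central-difference estimate of V'(h).
    return (ov_function(h + eps) - ov_function(h - eps)) / (2 * eps)

def is_stable(sensitivity, headway):
    """Homogeneous OV-model linear stability criterion: a > 2 V'(h)."""
    return sensitivity > 2.0 * ov_slope(headway)

h = 4.0                     # headway at the inflection point, where V'(h) peaks
print(is_stable(2.5, h))    # high-sensitivity vehicles → True
print(is_stable(1.5, h))    # low-sensitivity vehicles → False
```

At h = 4.0 the slope V'(h) equals 1.0 for these parameters, so the stability threshold on the sensitivity is exactly 2.0.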
nIFTy cosmology: the clustering consistency of galaxy formation models
Pujol, Arnau; Skibba, Ramin A.; Gaztañaga, Enrique; Benson, Andrew; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofia A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; De Lucia, Gabriella; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Garcia-Bellido, Juan; Gargiulo, Ignacio D.; Gonzalez-Perez, Violeta; Helly, John; Henriques, Bruno M. B.; Hirschmann, Michaela; Knebe, Alexander; Lee, Jaehyun; Mamon, Gary A.; Monaco, Pierluigi; Onions, Julian; Padilla, Nelson D.; Pearce, Frazer R.; Power, Chris; Somerville, Rachel S.; Srisawat, Chaichalit; Thomas, Peter A.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.
2017-07-01
We present a clustering comparison of 12 galaxy formation models [including semi-analytic models (SAMs) and halo occupation distribution (HOD) models] all run on halo catalogues and merger trees extracted from a single Λ cold dark matter N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the two-point correlation functions (2PCF). We also study the implications of the different treatments of orphan (galaxies not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions but they disagree significantly between HOD and SAMs for the orphan satellites. Although there is a very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 h-1 Mpc. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present a higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites have an important role on galaxy clustering and they are the main cause of the differences in the clustering between HOD models and SAMs.
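The two-point correlation function compared across the models can be illustrated with a toy pair-counting estimator. The sketch below uses the natural estimator (DD/RR with a pair-count normalization, minus 1) in a periodic 1-D box; real analyses work in 3-D and typically use the Landy-Szalay estimator:

```python
import random

def xi_natural(data, box, r_lo, r_hi, n_random=2000, seed=1):
    """Natural 2PCF estimator xi = (DD/RR) * (NR/ND)**2 - 1 in a periodic
    1-D box. A deliberately minimal illustration only."""
    rng = random.Random(seed)
    randoms = [rng.uniform(0, box) for _ in range(n_random)]

    def pairs(pts):
        n = 0
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = abs(pts[i] - pts[j])
                d = min(d, box - d)          # periodic separation
                if r_lo <= d < r_hi:
                    n += 1
        return n

    dd, rr = pairs(data), pairs(randoms)
    norm = (len(randoms) / len(data)) ** 2
    return dd / rr * norm - 1.0

# Strongly clustered sample: 10 tight clumps of 20 galaxies each.
rng = random.Random(0)
centers = [rng.uniform(0, 100) for _ in range(10)]
data = [(c + rng.uniform(-0.5, 0.5)) % 100 for c in centers for _ in range(20)]
xi = xi_natural(data, box=100.0, r_lo=0.0, r_hi=1.0)
print(xi > 1.0)  # → True (excess of close pairs relative to random)
```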
A consistent use of the Gurson-Tvergaard-Needleman damage model for the R-curve calculation
Directory of Open Access Journals (Sweden)
Gabriele Cricrì
2013-04-01
Full Text Available The scope of the present work is to set out a consistent simulation procedure for quasi-static fracture processes, starting from the micro-structural characteristics of the material. To this aim, a local nine-parameter Gurson-Tvergaard-Needleman (GTN) damage law is used. The damage parameters depend on the micro-structural characteristics and must be calculated, measured or suitably tuned. This can be done, as proposed by the author, by using a suitably tuned GTN model for the representative volume element simulations, in order to enrich the original damage model by also considering the defect size distribution. Once all the material parameters have been determined, an M(T) fracture test is simulated with an FE code to calculate the R-curve of an aeronautical Al-based alloy. The simulation procedure produced results in very good agreement with the experimental data.
Connolly, Mark; He, Xing; Gonzalez, Nestor; Vespa, Paul; DiStefano, Joe; Hu, Xiao
2014-03-01
Due to the inaccessibility of the cranial vault, it is difficult to study cerebral blood flow dynamics directly. A mathematical model can be useful to study these dynamics. The model presented here is a novel combination of a one-dimensional fluid flow model representing the major vessels of the circle of Willis (CoW), with six individually parameterized auto-regulatory models of the distal vascular beds. This model has the unique ability to simulate high temporal resolution flow and velocity waveforms, amenable to pulse-waveform analysis, as well as sophisticated phenomena such as auto-regulation. Previous work with human patients has shown that vasodilation induced by CO2 inhalation causes 12 consistent pulse-waveform changes as measured by the morphological clustering and analysis of intracranial pressure algorithm. To validate this model, we simulated vasodilation and successfully reproduced 9 out of the 12 pulse-waveform changes. A subsequent sensitivity analysis found that these 12 pulse-waveform changes were most affected by the parameters associated with the shape of the smooth muscle tension response and vessel elasticity, providing insight into the physiological mechanisms responsible for observed changes in the pulse-waveform shape.
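A heavily simplified surrogate for one of the distal vascular beds is a two-element Windkessel, a compliance in parallel with a resistance; the sketch below integrates it with forward Euler. It illustrates only the pressure-flow coupling and is not the six-bed autoregulatory circle-of-Willis model of the paper; all parameter values are invented:

```python
def windkessel(Q, R, C, dt, p0=0.0):
    """Two-element Windkessel: C dP/dt = Q(t) - P/R, forward-Euler in time.

    Q is the inflow sampled every dt; returns the pressure trace."""
    P = [p0]
    for q in Q:
        P.append(P[-1] + dt * (q - P[-1] / R) / C)
    return P

# Constant inflow: pressure relaxes toward the steady state P = Q * R
# with time constant R * C.
Q = [5.0] * 5000
P = windkessel(Q, R=1.2, C=0.8, dt=0.01)
print(round(P[-1], 3))  # → 6.0
```

Driving the same model with a pulsatile Q(t) yields the kind of high-temporal-resolution pressure waveforms that pulse-waveform analysis operates on.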
Energy Technology Data Exchange (ETDEWEB)
Guy, Aurélien, E-mail: aurelien.guy@onera.fr; Bourdon, Anne, E-mail: anne.bourdon@lpp.polytechnique.fr; Perrin, Marie-Yvonne, E-mail: marie-yvonne.perrin@ecp.fr [CNRS, UPR 288, Laboratoire d' Énergétique Moléculaire et Macroscopique, Combustion (EM2C), Grande Voie des Vignes, 92295 Châtenay-Malabry (France); Ecole Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry (France)
2015-04-15
In this work, a state-to-state vibrational and electronic collisional model is developed to investigate nonequilibrium phenomena behind a shock wave in an ionized nitrogen flow. In the ionization dynamics behind the shock wave, the electron energy budget is of key importance and it is found that the main depletion term corresponds to the electronic excitation of N atoms, and conversely the major creation terms are the electron-vibration term at the beginning, then replaced by the electron ions elastic exchange term. Based on these results, a macroscopic multi-internal-temperature model for the vibration of N{sub 2} and the electronic levels of N atoms is derived with several groups of vibrational levels of N{sub 2} and electronic levels of N with their own internal temperatures to model the shape of the vibrational distribution of N{sub 2} and of the electronic excitation of N, respectively. In this model, energy and chemistry source terms are calculated self-consistently from the rate coefficients of the state-to-state database. For the shock wave condition studied, a good agreement is observed on the ionization dynamics as well as on the atomic bound-bound radiation between the state-to-state model and the macroscopic multi-internal temperature model with only one group of vibrational levels of N{sub 2} and two groups of electronic levels of N.
Choi, Sung W; Gerencser, Akos A; Ng, Ryan; Flynn, James M; Melov, Simon; Danielson, Steven R; Gibson, Bradford W; Nicholls, David G; Bredesen, Dale E; Brand, Martin D
2012-11-21
Depressed cortical energy supply and impaired synaptic function are predominant associations of Alzheimer's disease (AD). To test the hypothesis that presynaptic bioenergetic deficits are associated with the progression of AD pathogenesis, we compared bioenergetic variables of cortical and hippocampal presynaptic nerve terminals (synaptosomes) from commonly used mouse models with AD-like phenotypes (J20 age 6 months, Tg2576 age 16 months, and APP/PS age 9 and 14 months) to age-matched controls. No consistent bioenergetic deficiencies were detected in synaptosomes from the three models; only APP/PS cortical synaptosomes from 14-month-old mice showed an increase in respiration associated with proton leak. J20 mice were chosen for a highly stringent investigation of mitochondrial function and content. There were no significant differences in the quality of the synaptosomal preparations or the mitochondrial volume fraction. Furthermore, respiratory variables, calcium handling, and membrane potentials of synaptosomes from symptomatic J20 mice under calcium-imposed stress were not consistently impaired. The recovery of marker proteins during synaptosome preparation was the same, ruling out the possibility that the lack of functional bioenergetic defects in synaptosomes from J20 mice was due to the selective loss of damaged synaptosomes during sample preparation. Our results support the conclusion that the intrinsic bioenergetic capacities of presynaptic nerve terminals are maintained in these symptomatic AD mouse models.
Buchanan, John J; Dean, Noah
2014-02-01
The experiment undertaken was designed to elucidate the impact of model skill level on observational learning processes. The task was bimanual circle tracing with a 90° relative phase lead of one hand over the other. Observer groups watched videos of either an instruction model, a discovery model, or a skilled model. The instruction and skilled models always performed the task with the same movement strategy: the right arm traced clockwise and the left arm counterclockwise around circle templates, with the right arm leading. The discovery model used several movement strategies (tracing direction/hand lead) during practice. Observation of the instruction and skilled models provided a significant benefit compared to the discovery model when performing the 90° relative phase pattern in a post-observation test. The observers of the discovery model had significant room for improvement and benefited from post-observation practice of the 90° pattern. The benefit of a model lies in the consistency with which that model uses the same movement strategy, not in the skill level of the model. It is the consistency in the strategy modeled that allows observers to develop an abstract perceptual representation of the task that can be implemented as a coordinated action. Theoretically, the results show that movement strategy information (relative motion direction, hand lead) and relative phase information can be detected through visual perception processes and successfully mapped to outgoing motor commands within an observational learning context.
Luzzati, Vittorio; Tardieu, Annette; Gulik-Krzywicki, Tadeusz
1981-01-01
The observed intensities of the reflections from the body-centered cubic phase of lipid systems are shown to be incompatible with a recently reported model consisting of straight, indefinitely long rods.
Boonstra, A.; van Offenbeek, M.A.G.
2010-01-01
Telecare is the use of information and communication systems to facilitate care delivery to individuals in their homes. Although the expectations of telecare are high, its implementation has proved complex. This case study demonstrates this complexity through a structurational analysis of a telecare
A Globally Consistent Methodology for an Exposure Model for Natural Catastrophe Risk Assessment
Gunasekera, Rashmin; Ishizawa, Oscar; Pandey, Bishwa; Saito, Keiko
2013-04-01
There is high demand for a globally consistent and robust exposure data model, developed with a top-down approach, to be used in national-level catastrophe risk profiling for public sector liability. To this effect, there are currently several initiatives, such as the UN-ISDR Global Assessment Report (GAR) and the Global Exposure Database for the Global Earthquake Model (GED4GEM). However, their consistency and granularity differ from region to region, a problem that the approach proposed here overcomes by using national datasets, for example in the Latin America and Caribbean Region (LCR). The methodology proposed in this paper aims to produce a global open exposure dataset based upon population, country-specific building type distribution and other global/economic indicators, such as World Bank indices, suitable for natural catastrophe risk modelling. The output would be a GIS raster grid at approximately 1 km spatial resolution, which would characterize urbanness (building typology distribution, occupancy and use) for each cell at the sub-national level, compatible with other global initiatives and datasets. It would make use of datasets on population, census, demographics, buildings and land use/land cover, which are largely available in the public domain. The resultant exposure dataset could be used in conjunction with hazard and vulnerability components to create views of risk for multiple hazards, including earthquake, flood and windstorm. We hope the model will also be a step towards future initiatives for open, interchangeable and compatible databases for catastrophe risk modelling. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
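The disaggregation step described, splitting gridded population into building-type exposure, can be sketched as a per-cell share-and-value calculation. All type names, shares and per-capita values below are invented for illustration:

```python
def disaggregate_exposure(cell_population, building_shares, value_per_person):
    """Split one grid cell's population into building-type exposure values.

    building_shares: fraction of population per building type (sums to 1);
    value_per_person: replacement value per capita by type (illustrative).
    """
    exposure = {}
    for btype, share in building_shares.items():
        exposure[btype] = cell_population * share * value_per_person[btype]
    return exposure

shares = {"masonry": 0.5, "concrete": 0.3, "timber": 0.2}
values = {"masonry": 8000.0, "concrete": 15000.0, "timber": 5000.0}
exp_cell = disaggregate_exposure(1000, shares, values)
print({k: round(v) for k, v in exp_cell.items()})
# → {'masonry': 4000000, 'concrete': 4500000, 'timber': 1000000}
```

Applied to every ~1 km cell with country-specific shares, this reproduces the kind of raster exposure layer the abstract proposes; total population is preserved because the shares sum to one.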
Self-consistent Spectral Functions in the $O(N)$ Model from the FRG
Strodthoff, Nils
2016-01-01
We present the first self-consistent direct calculation of a spectral function in the framework of the Functional Renormalization Group. The study is carried out in the relativistic $O(N)$ model, where the full momentum dependence of the propagators in the complex plane as well as momentum-dependent vertices are considered. The analysis is supplemented by a comparative study of the Euclidean momentum dependence and of the complex momentum dependence on the level of spectral functions. This work lays the groundwork for the computation of full spectral functions in more complex systems.
Premixed Combustion Simulations with a Self-Consistent Plasma Model for Initiation
Energy Technology Data Exchange (ETDEWEB)
Sitaraman, Hariswaran; Grout, Ray
2016-01-08
Combustion simulations of H2-O2 ignition are presented here, with a self-consistent plasma fluid model for ignition initiation. The plasma fluid equations for a nanosecond pulsed discharge are solved and coupled with the governing equations of combustion. The discharge operates through the propagation of a cathode-directed streamer, with radical species produced at the streamer heads. These radical species play an important role in the ignition process. The streamer propagation speeds and radical production rates were found to be sensitive to gas temperature and fuel-oxidizer equivalence ratio. The oxygen radical production rate strongly depends on equivalence ratio and consequently results in faster ignition of leaner mixtures.
Supporting Consistency in Linked Specialized Engineering Models Through Bindings and Updating
Institute of Scientific and Technical Information of China (English)
Albertus H. Olivier; Gert C. van Rooyen; Berthold Firmenich; Karl E. Beucke
2008-01-01
Currently, some commercial software applications allow users to work in an integrated environment. However, this is limited to the suite of models provided by the software vendor and consequently forces all parties to use the same software. In contrast, the research described in this paper investigates ways of using standard software applications, which may be specialized for different professional domains. These are linked for effective transfer of information, and a binding mechanism is provided to support consistency. The proposed solution was implemented using a CAD application and an independent finite element application in order to verify the theoretical aspects of this work.
A “Minsky crisis” in a Stock-Flow Consistent model
Mouakil, Tarik
2014-01-01
This study uses the Stock-Flow Consistent modelling approach to assess the relevance of Minsky’s demonstration of his financial instability hypothesis. We show that this demonstration, based on the assumption of a pro-cyclical leverage ratio, is incompatible with the Kaleckian analysis of profits endorsed by Minsky. Therefore we suggest replacing the assumption of a pro-cyclical leverage ratio with one of a pro-cyclical short-term borrowing, which also appears in Minsky’s work.
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard; Nakshatrala, Praveen B.; Tortorelli, Daniel A.
2014-01-01
Gradient-based topology optimization typically involves thousands or millions of design variables. This makes efficient sensitivity analysis essential, and for this the adjoint variable method (AVM) is indispensable. For transient problems it has been observed that the traditional AVM, based on a differentiate-then-discretize approach, may lead to inconsistent sensitivities. Herein this effect is explicitly demonstrated for a single-dof system and the source of inconsistency is identified. Additionally, we outline an alternative discretize-then-differentiate AVM that inherently produces consistent sensitivities.
A New Algorithm for Self-Consistent 3-D Modeling of Collisions in Dusty Debris Disks
Stark, Christopher C
2009-01-01
We present a new "collisional grooming" algorithm that enables us to model images of debris disks where the collision time is less than the Poynting-Robertson time for the dominant grain size. Our algorithm uses the output of a collisionless disk simulation to iteratively solve the mass flux equation for the density distribution of a collisional disk containing planets in 3 dimensions. The algorithm can be run on a single processor in ~1 hour. Our preliminary models of disks with resonant ring structures caused by terrestrial-mass planets show that the collision rate for background particles in a ring structure is enhanced by a factor of a few compared to the rest of the disk, and that dust grains in or near resonance have even higher collision rates. We show how collisions can alter the morphology of a resonant ring structure by reducing the sharpness of a resonant ring's inner edge and by smearing out azimuthal structure. We implement a simple prescription for particle fragmentation and show how Poynting-Ro...
Consistent quantization and symmetry structure of a non-Abelian chiral gauge theory
Shizuya, Ken-Ichi
1989-08-01
The SU(N) chiral Schwinger model with a Wess-Zumino term is studied by use of non-Abelian bosonization, the Becchi-Rouet-Stora formalism, and a dual transformation, and it is confirmed that this model is a sensible quantum theory in a certain range of the anomaly parameter a. The SU(N) gauge symmetry restored by the inclusion of the Wess-Zumino term gets spontaneously broken and the gauge field becomes massive. Left-handed fermions are found to be confined while right-handed fermions remain free and massless. For the specific value a=2, the symmetry of the model enlarges [to a U(N)×U(N) Kac-Moody symmetry]. It is shown by fermionization of the Wess-Zumino field that for a=2 this model is equivalent to massless two-dimensional QCD (QCD2) in the sense that they share the same gauge field and the same left-handed fermions. A dual transformation is used to cast the model into an equivalent nonlinear system of scalar fields only, which reveals the particle spectrum of the model.
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
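The SRD procedure itself is straightforward: rank the models by each performance merit, rank them by a reference column (often the row average), and sum the absolute rank differences; a smaller SRD means the merit ranks the models more like the reference does. A minimal sketch with invented scores:

```python
import numpy as np

def srd(values, reference):
    """Sum of ranking differences: distance between the ranking induced
    by one column of merits and the ranking induced by a reference."""
    rank = lambda v: np.argsort(np.argsort(v))  # 0-based ranks, ascending
    return int(np.abs(rank(values) - rank(reference)).sum())

# Toy example: three models scored by two merits, reference = row average.
scores = np.array([[0.91, 0.88],   # model A
                   [0.85, 0.90],   # model B
                   [0.70, 0.65]])  # model C
reference = scores.mean(axis=1)
for name, col in zip(["merit 1", "merit 2"], scores.T):
    print(name, srd(col, reference))
```

In the published method the raw SRD values are additionally validated by a permutation test and normalized, which is omitted here.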
A new self-consistent hybrid chemistry model for Mars and cometary environments
Wedlund, Cyril Simon; Kallio, Esa; Jarvinen, Riku; Dyadechkin, Sergey; Alho, Markku
2014-05-01
Over the last 15 years, a 3-D hybrid-PIC planetary plasma interaction modelling platform, named HYB, has been developed and applied to several planetary environments such as those of Mars, Venus, Mercury and, more recently, the Moon. We present here another evolution of HYB including a fully consistent ionospheric-chemistry package designed to reproduce the main ions at the lower boundary of the model. This evolution, also permitted by the increase in computing power and the switch to spherical coordinates for higher spatial resolution (Dyadechkin et al., 2013), is motivated by the imminent arrival of the Rosetta spacecraft in the vicinity of comet 67P/Churyumov-Gerasimenko. In this presentation we show the application of the new HYB-ionosphere model to 1D and 2D hybrid simulations at Mars above 100 km altitude and demonstrate that, with a limited number of chemical reactions, good agreement with 1D kinetic models may be found. This is a first validation step before applying the model to the 67P/CG comet environment, which, like Mars, is expected to be rich in carbon oxide compounds.
[A Model of the Neurovascular Unit In Vitro Consisting of Three Cell Types].
Khilazheva, E D; Boytsova, E B; Pozhilenkova, E A; Solonchuk, Yu R; Salmina, A B
2015-01-01
There are many ways to model the blood-brain barrier and the neurovascular unit in vitro. All existing models have their disadvantages, advantages and peculiarities of preparation and usage. We obtained a three-cell neurovascular unit model in vitro using progenitor cells isolated from rat embryo brains (Wistar, 14-16 d). After withdrawal of the progenitor cells, the neurospheres were cultured with subsequent differentiation into astrocytes and neurons. Endothelial cells were isolated from embryonic brain as well. During the differentiation of progenitor cells, the astrocyte monolayer forms after 7-9 d, the neuron monolayer after 10-14 d, and the endothelial cell monolayer after 7 d. Our protocol for simultaneous isolation and cultivation of neurons, astrocytes and endothelial cells reduces the time needed to obtain a neurovascular unit model in vitro consisting of three cell types and reduces the number of animals used. It is also important to note the cerebral origin of all cell types, which is a further advantage of our model in vitro.
Application of a Multigrid Method to a Mass-Consistent Diagnostic Wind Model.
Wang, Yansen; Williamson, Chatt; Garvey, Dennis; Chang, Sam; Cogan, James
2005-07-01
A multigrid numerical method has been applied to a three-dimensional, high-resolution diagnostic model for flow over complex terrain using a mass-consistent approach. The theoretical background for the model is based on a variational analysis using mass conservation as a constraint. The model was designed for diagnostic wind simulation at the microscale in complex terrain and in urban areas. The numerical implementation takes advantage of a multigrid method that greatly improves the computation speed. Three preliminary test cases for the model's numerical efficiency and its accuracy are given. The model results are compared with an analytical solution for flow over a hemisphere. Flow over a bell-shaped hill is computed to demonstrate that the numerical method is applicable in the case of parameterized lee vortices. A simulation of the mean wind field in an urban domain has also been carried out and compared with observational data. The comparison indicated that the multigrid method takes only 3%-5% of the time that is required by the traditional Gauss-Seidel method.
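The speedup reported above rests on the classic multigrid property: relaxation methods such as Gauss-Seidel damp only high-frequency error, while a coarse-grid correction removes the smooth remainder. A minimal 1-D two-grid sketch for a Poisson-type equation (an illustration of the principle, not the authors' 3-D mass-consistent solver) shows the ingredients:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    # Point-wise Gauss-Seidel relaxation for the 1-D Poisson problem u'' = f.
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
    return u

def two_grid_cycle(u, f, h):
    u = gauss_seidel(u, f, h, 3)                 # pre-smoothing
    r = np.zeros_like(u)                         # residual r = f - A u
    r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    nc = (len(u) + 1) // 2                       # coarse grid size
    rc = np.zeros(nc)                            # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    hc = 2 * h
    A = (np.diag(-2.0 * np.ones(nc - 2))
         + np.diag(np.ones(nc - 3), 1)
         + np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])      # coarse solve of e'' = r
    e = np.interp(np.arange(len(u)),             # linear prolongation
                  np.arange(0, len(u), 2), ec)
    return gauss_seidel(u + e, f, h, 3)          # post-smoothing

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)                # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err < 1e-3)
```

In a full multigrid method the coarse problem is itself solved recursively by the same cycle, giving the near-optimal complexity behind the 3%-5% runtime quoted in the abstract.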
Consistency in Regularizations of the Gauged NJL Model at One Loop Level
Battistel, O A
1999-01-01
In this work we revisit questions recently raised in the literature associated with relevant but divergent amplitudes in the gauged NJL model. The questions raised involve ambiguities and symmetry violations which concern the model's predictive power at the one-loop level. Our study shows, by means of an alternative prescription to handle divergent amplitudes, that it is possible to obtain unambiguous and symmetry-preserving amplitudes. The procedure adopted makes use solely of general properties of an eventual regulator, thus avoiding an explicit form. We find, after a thorough analysis of the problem, that there are well-established conditions to be fulfilled by any consistent regularization prescription in order to avoid the problems of concern at the one-loop level.
Massive neutrinos in nonlinear large scale structure: A consistent perturbation theory
Levi, Michele
2016-01-01
A consistent formulation to incorporate massive neutrinos in the perturbation theory of the effective CDM+baryons fluid is introduced. In this formulation all linear k dependence in the growth functions of CDM+baryon perturbations, as well as all consequent additional mode coupling at higher orders, is taken into account to any desirable accuracy. Our formulation regards the neutrino fraction, which is constant in time after the non-relativistic transition of neutrinos and much smaller than unity, as the coupling constant of the theory. Then the "bare" perturbations are those in the massless neutrino case when the neutrino fraction vanishes, and we consider the backreaction corrections due to the gravitational coupling of neutrinos. We derive the general equations for the "bare" perturbations and backreaction corrections. Then, by employing exact time evolution with the proper analytic Green's function, we explicitly derive the leading backreaction effect, and find precise agreement at the linear level. We...
Microwave air plasmas in capillaries at low pressure I. Self-consistent modeling
Coche, P.; Guerra, V.; Alves, L. L.
2016-06-01
This work presents the self-consistent modeling of micro-plasmas generated in dry air using microwaves (2.45 GHz excitation frequency) within capillaries. The model couples the system of rate balance equations for the most relevant neutral and charged species of the plasma to the homogeneous electron Boltzmann equation. The maintenance electric field is self-consistently calculated adopting a transport theory for low to intermediate pressures, taking into account the presence of O− ions in addition to several positive ions, the dominant species being O2+, NO+ and O+. The low-pressure, small-radius conditions considered yield very intense reduced electric fields (~600-1500 Td), coherent with species losses controlled by transport and wall recombination, and kinetic mechanisms strongly dependent on electron-impact collisions. The charged-particle transport losses are strongly influenced by the presence of the negative ion, despite its low density (~10% of the electron density). For electron densities in the range (1-4)×10^12 cm^−3, the system exhibits high dissociation degrees for O2 (~20-70%, depending on the working conditions, in contrast with the ~0.1% dissociation obtained for N2), a high concentration of O2(a) (~10^14 cm^−3) and NO(X) (~5×10^14 cm^−3), and low ozone production (<10^−3 %).
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n Xi(yi − μ(Xi′β)) = 0 for the univariate generalized linear model E(y|X) = μ(X′β). Given uncorrelated residuals {ei = yi − μ(Xi′β0), 1 ≤ i ≤ n} and other conditions, we prove that βn − β0 = Op(λn^(-1/2)) holds, where βn is a root of the above equation, β0 is the true value of the parameter β, and λn denotes the smallest eigenvalue of the matrix Sn = ∑_{i=1}^n XiXi′. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is Sn^(-1) → 0 as the sample size n → ∞.
Institute of Scientific and Technical Information of China (English)
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n Xi(yi − μ(Xi′β)) = 0 for the univariate generalized linear model E(y|X) = μ(X′β). Given uncorrelated residuals {ei = Yi − μ(Xi′β0), 1 ≤ i ≤ n} and other conditions, we prove that β̂n − β0 = Op(λn^(-1/2)) holds, where β̂n is a root of the above equation, β0 is the true value of the parameter β, and λn denotes the smallest eigenvalue of the matrix Sn = ∑_{i=1}^n XiXi′. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is Sn^(-1) → 0 as the sample size n → ∞.
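A root of the quasi-likelihood equation above can be found numerically by Fisher scoring. The sketch below uses a Poisson-type mean function μ(t) = exp(t) and noise-free synthetic data (all values invented), so that the root coincides exactly with the true β0:

```python
import numpy as np

def qmle(X, y, mu=np.exp, dmu=np.exp, iters=30):
    # Fisher scoring for the quasi-likelihood equation
    #   sum_i X_i (y_i - mu(X_i' beta)) = 0
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        score = X.T @ (y - mu(eta))              # left side of the equation
        info = X.T @ (dmu(eta)[:, None] * X)     # quasi-information matrix
        beta = beta + np.linalg.solve(info, score)
    return beta

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
beta0 = np.array([0.5, -0.3])
y = np.exp(X @ beta0)        # noise-free responses: the root is exactly beta0
beta_hat = qmle(X, y)
print(np.allclose(beta_hat, beta0))  # True
```

The matrix `info` here is the sample analogue of Sn weighted by μ′; the consistency condition of the abstract corresponds to its smallest eigenvalue growing without bound as n → ∞.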
SELF-CONSISTENT FIELD MODEL OF BRUSHES FORMED BY ROOT-TETHERED DENDRONS
Directory of Open Access Journals (Sweden)
E. B. Zhulina
2015-05-01
Full Text Available We present an analytical self-consistent field (SCF) theory that describes planar brushes formed by regularly branched root-tethered dendrons of the second and third generations. The developed approach enables calculation of the SCF molecular potential acting on monomers of the tethered chains. In the linear elasticity regime for stretched polymers, the molecular potential has a parabolic shape with a parameter k depending on the architectural parameters of the tethered macromolecules: polymerization degrees of spacers, branching functionalities, and number of generations. For dendrons of the second generation, we formulate a general equation for the parameter k and analyze how variations in the architectural parameters of these dendrons affect the molecular potential. For dendrons of the third generation, an analytical expression for the parameter k is available only for symmetric macromolecules with equal lengths of all spacers and equal branching functionalities in all generations. We analyze how the thickness of a dendron brush in a good solvent is affected by variations in the chain architecture. Results of the developed SCF theory are compared with predictions of a boxlike scaling model. We demonstrate that in the limit of high branching functionalities, the results of both approaches become consistent if the value of the exponent b in the boxlike model is set to unity. In conclusion, we briefly discuss the systems to which the developed SCF theory is applicable. These are: planar and concave spherical and cylindrical brushes under various solvent conditions (including solvent-free melted brushes) and brush-like layers of ionic (polyelectrolyte) dendrons.
Khajepor, Sorush; Chen, Baixin
2016-01-01
A method is developed to analytically and consistently implement cubic equations of state into the recently proposed multipseudopotential interaction (MPI) scheme in the class of two-phase lattice Boltzmann (LB) models [S. Khajepor, J. Wen, and B. Chen, Phys. Rev. E 91, 023301 (2015)]10.1103/PhysRevE.91.023301. An MPI forcing term is applied to reduce the constraints on the mathematical shape of the thermodynamically consistent pseudopotentials; this allows the parameters of the MPI forces to be determined analytically without the need of curve fitting or trial and error methods. Attraction and repulsion parts of equations of state (EOSs), representing underlying molecular interactions, are modeled by individual pseudopotentials. Four EOSs, van der Waals, Carnahan-Starling, Peng-Robinson, and Soave-Redlich-Kwong, are investigated and the results show that the developed MPI-LB system can satisfactorily recover the thermodynamic states of interest. The phase interface is predicted analytically and controlled via EOS parameters independently and its effect on the vapor-liquid equilibrium system is studied. The scheme is highly stable to very high density ratios and the accuracy of the results can be enhanced by increasing the interface resolution. The MPI drop is evaluated with regard to surface tension, spurious velocities, isotropy, dynamic behavior, and the stability dependence on the relaxation time.
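The role the cubic EOS plays in such schemes can be seen from the van der Waals loop: below the critical temperature Tc = 8a/(27bR) the pressure isotherm is non-monotonic in density, which is what drives liquid-vapor separation in the LB model. A standalone check follows; the parameter values a = 2/49, b = 2/21, R = 1 are a nondimensional choice common in the pseudopotential LB literature, used here only for illustration:

```python
import numpy as np

def vdw_pressure(rho, T, a=2.0 / 49.0, b=2.0 / 21.0, R=1.0):
    # van der Waals equation of state p(rho, T).
    return rho * R * T / (1.0 - b * rho) - a * rho**2

Tc = 8.0 * (2.0 / 49.0) / (27.0 * (2.0 / 21.0))   # critical temperature
rho = np.linspace(0.5, 8.0, 400)                  # densities below 1/b = 10.5

# Below Tc the isotherm has a loop (dp/drho changes sign);
# above Tc it is monotonically increasing.
loop_below = bool(np.any(np.diff(vdw_pressure(rho, 0.9 * Tc)) < 0))
loop_above = bool(np.any(np.diff(vdw_pressure(rho, 1.1 * Tc)) < 0))
print(loop_below, loop_above)
```

The coexisting liquid and vapor densities then follow from the Maxwell equal-area construction over the loop, which is the thermodynamic-consistency target the MPI forcing term is built to reproduce.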
Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models
Vignal, Philippe
2016-02-11
Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allow phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can many times lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed conserves, guarantees energy stability and is second order accurate in time. The second part of the thesis presents two numerical schemes that generalize literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
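The kind of guarantee sought here can be demonstrated on the simplest phase-field model. For the 1-D Allen-Cahn equation, a linearly stabilized semi-implicit step (a Shen-Yang-type scheme, with stabilization constant S chosen against the standard quartic potential) keeps the discrete free energy non-increasing at every step. This is an illustrative sketch, not the thesis's isogeometric implementation:

```python
import numpy as np

n, eps, dt, S = 64, 0.05, 0.02, 2.0
h = 1.0 / n
rng = np.random.default_rng(1)
u = 0.1 * rng.standard_normal(n)     # small random initial condition

# Periodic 1-D Laplacian matrix.
D = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
     + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / h**2

# Stabilized semi-implicit step for Allen-Cahn u_t = eps^2 u_xx - (u^3 - u):
#   (1/dt + S) u_new - eps^2 D u_new = u/dt + S u - (u^3 - u)
M = (1.0 / dt + S) * np.eye(n) - eps**2 * D

def energy(u):
    # Discrete Ginzburg-Landau free energy with periodic differences.
    grad = (np.roll(u, -1) - u) / h
    return h * np.sum(0.5 * eps**2 * grad**2 + 0.25 * (u**2 - 1.0)**2)

energies = [energy(u)]
for _ in range(150):
    rhs = u / dt + S * u - (u**3 - u)
    u = np.linalg.solve(M, rhs)
    energies.append(energy(u))

# The discrete free energy decreases monotonically, step after step.
print(np.all(np.diff(energies) <= 1e-10))
```

Only the Laplacian is treated implicitly, so each step costs one linear solve; the explicit nonlinearity is compensated by the stabilization term S(u_new − u), which is what delivers the energy monotonicity that naive discretizations lose.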
Directory of Open Access Journals (Sweden)
Jiateng Guo
2016-02-01
Full Text Available Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and is not necessarily robust. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data. This method could also be used in other fields of study, including mining geology and urban geotechnical investigations.
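Implicit-surface interpolation of this kind reduces, in its simplest form, to solving one linear system for radial-basis-function weights. The sketch below uses a plain Gaussian RBF on invented 2-D sample points; the paper's Hermite RBF additionally constrains gradients from attitude (strike/dip) data, which is omitted here:

```python
import numpy as np

def rbf_interpolant(points, values, eps=1.0):
    # Fit weights w so that sum_j w_j * phi(|p_i - p_j|) = values_i,
    # with a Gaussian kernel phi(d) = exp(-(eps*d)^2).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = np.linalg.solve(np.exp(-(eps * d) ** 2), values)
    def f(q):
        dq = np.linalg.norm(q[None, :] - points, axis=-1)
        return np.exp(-(eps * dq) ** 2) @ w
    return f

# Invented sample points with scalar field values; the geological
# interface would be extracted as the zero (or iso-) level set of f.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
f = rbf_interpolant(pts, vals)
print(abs(f(np.array([0.5, 0.5])) - 0.5) < 1e-8)  # True: exact at data points
```

The Gaussian kernel matrix is symmetric positive definite for distinct points, so the fit is a single well-posed solve; Hermite variants enlarge the same system with derivative rows so the surface also honors measured orientations.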
A consistent hamiltonian treatment of the Thirring-Wess and Schwinger model in the covariant gauge
Martinovič, L'ubomír
2014-06-01
We present a unified hamiltonian treatment of the massless Schwinger model in the Landau gauge and of its non-gauge counterpart, the Thirring-Wess (TW) model. The operator solution of the Dirac equation has the same structure in both models and identifies free fields as the true dynamical degrees of freedom. The coupled boson field equations (Maxwell and Proca, respectively) can also be solved exactly. The Hamiltonian in Fock representation is derived for the TW model and its diagonalization via a Bogoliubov transformation is suggested. The axial anomaly is derived in both models directly from the operator solution using a hermitian version of the point-splitting regularization. A subtlety of the residual gauge freedom in the covariant gauge is shown to modify the usual definition of the "gauge-invariant" currents. The consequence is that the axial anomaly and the boson mass generation are restricted to the zero-mode sector only. Finally, we discuss quantization of the unphysical gauge-field components in terms of ghost modes in an indefinite-metric space and sketch the next steps within the finite-volume treatment necessary to fully reveal the physical content of the model in our hamiltonian formulation.
Substrate specificity of papain dynamic structures for peptides consisting of 8-10 GLY residues
Nishiyama, Katsuhiko
2011-01-01
We investigated the substrate specificity of papain dynamic structures for peptides of 8-10 glycine residues (8-10GLY) via molecular dynamics and docking simulations. The substrate specificity of papain for 8-10GLY fluctuated considerably with time. There were several residues that were different among those that had a significant impact on binding (RESIDUES_IMPACT) with 10GLY, 9GLY, and 8GLY. Modification of these different residues should allow for control of substrate specificity, providing a framework for modifying substrate specificity in papain and other enzymes.
Building a Structural Model: Parameterization and Structurality
Directory of Open Access Journals (Sweden)
Michel Mouchart
2016-04-01
Full Text Available A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to also be causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls arising when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used for drawing some practical implications of the proposed analysis.
A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints
Directory of Open Access Journals (Sweden)
L. Kantha
2016-01-01
Full Text Available The prevailing constant Λ-G cosmological model agrees with observational evidence, including the observed redshift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^−52 m^−2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest redshift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^−2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
2004-01-01
Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model
Borges Sebastião, Israel; Alexeenko, Alina
2016-10-01
The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
Modeling Extreme Solar Energetic Particle Acceleration with Self-Consistent Wave Generation
Arthur, A. D.; le Roux, J. A.
2015-12-01
Observations of extreme solar energetic particle (SEP) events associated with coronal mass ejection driven shocks have detected particle energies up to a few GeV at 1 AU within the first ~10 minutes to 1 hour of shock acceleration. Whether or not acceleration by a single shock is sufficient in these events or if some combination of multiple shocks or solar flares is required is currently not well understood. Furthermore, the observed onset times of the extreme SEP events place the shock in the corona when the particles escape upstream. We have updated our focused transport theory model that has successfully been applied to the termination shock and traveling interplanetary shocks in the past to investigate extreme SEP acceleration in the solar corona. This model solves the time-dependent Focused Transport Equation including particle preheating due to the cross shock electric field and the divergence, adiabatic compression, and acceleration of the solar wind flow. Diffusive shock acceleration of SEPs is included via the first-order Fermi mechanism for parallel shocks. To investigate the effects of the solar corona on the acceleration of SEPs, we have included an empirical model for the plasma number density, temperature, and velocity. The shock acceleration process becomes highly time-dependent due to the rapid variation of these coronal properties with heliocentric distance. Additionally, particle interaction with MHD wave turbulence is modeled in terms of gyroresonant interactions with parallel propagating Alfven waves. However, previous modeling efforts suggest that the background amplitude of the solar wind turbulence is not sufficient to accelerate SEPs to extreme energies over the short time scales observed. To account for this, we have included the transport and self-consistent amplification of MHD waves by the SEPs through wave-particle gyroresonance. We will present the results of this extended model for a single fast quasi-parallel CME driven shock in the
Bibliographic Relationships in MARC and Consistent with FRBR Model According to RDA Rules
Directory of Open Access Journals (Sweden)
Mahsa Fardehoseiny
2013-03-01
Full Text Available This study investigates bibliographic relationships in MARC and their consistency with the FRBR model. Establishing the necessary relationships between bibliographic records allows users to retrieve the information they need faster and more easily, so it is important to express these relationships well in existing records. The purpose of this study was to identify the relationships between bibliographic records in the National Library of Iran's OPAC database, using a descriptive content-analysis method. The online catalog (OPAC) of the National Library of Iran was used to collect the data. All records meeting the criteria listed in the final report of the IFLA study on bibliographic relationships for the first-group entities of the FRBR model and the RDA rules were analysed. The analysis shows that if software were developed to transfer the MARC data already in the National Library's bibliographic database on the basis of the conceptual model, these relationships would not be automatically transferable: mapping the relationships between FRBR and MARC required intelligent human judgement that a machine cannot reproduce. The results show that about 47.70 percent of the MARC fields convey relationships that can be mapped from MARC to FRBR, whereas in the other direction, from FRBR to MARC, even with all intelligent effort in diagnosing MARC relationships, only 31.38 percent of the relationships can be covered through MARC. Based on real data and usable fields in the Boostan-e-Saadi records catalogued with the MARC format in the National Library of Iran, the coverage falls to 16.95 percent.
Formulation of a self-consistent model for quantum well pin solar cells
Ramey, S.; Khoie, R.
1997-04-01
A self-consistent numerical simulation model for a pin single-cell solar cell is formulated. The solar cell device consists of a p-AlGaAs region, an intrinsic i-AlGaAs/GaAs region with several quantum wells, and an n-AlGaAs region. Our simulator solves a field-dependent Schrödinger equation self-consistently with the Poisson and drift-diffusion equations. The emphasis is given to the study of the capture of electrons by the quantum wells, the escape of electrons from the quantum wells, and the absorption and recombination within the quantum wells. We believe this would be the first such comprehensive model ever reported. The field-dependent Schrödinger equation is solved using the transfer matrix method. The eigenfunctions and eigenenergies obtained are used to calculate the escape rate of electrons from the quantum wells, and the non-radiative recombination rates of electrons at the boundaries of the quantum wells. These rates, together with the capture rates of electrons by the quantum wells, are then used in a self-consistent numerical Poisson-drift-diffusion solver. The resulting field profiles are then used in the field-dependent Schrödinger solver, and the iteration process is repeated until convergence is reached. In a p-AlGaAs i-AlGaAs/GaAs n-AlGaAs cell with an aluminum mole fraction of 0.3 and one 100 Å-wide, 284 meV-deep quantum well, the eigenenergies with zero field are 36 meV, 136 meV, and 267 meV for the first, second, and third subbands, respectively. With an electric field of 50 kV/cm, the eigenenergies are shifted to 58 meV, 160 meV, and 282 meV, respectively. With these eigenenergies, the thermionic escape time of electrons from the GaAs Γ-valley varies from 220 ps to 90 ps for electric fields ranging from 10 to 50 kV/cm. These preliminary results are in good agreement with those reported by other researchers.
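The iteration described above — solve the Schrödinger equation in the current potential, recompute the charge, solve Poisson, and repeat to convergence — can be sketched in miniature. The following toy 1-D model (dimensionless units, hard-wall box, finite differences rather than the authors' transfer-matrix method; the coupling strength `n0` and mixing factor are assumed values) illustrates only the self-consistency cycle, not the AlGaAs/GaAs device physics:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Toy units: hbar^2/(2 m*) = 1 and permittivity = 1, so energies and
# lengths are dimensionless. n0 sets the (assumed) charge strength.
N, L, n0, mix = 200, 20.0, 0.01, 0.3
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

def ground_state(V):
    """Lowest eigenpair of -psi'' + V psi = E psi on a hard-wall box."""
    E, psi = eigh_tridiagonal(2.0 / dx**2 + V, -np.ones(N - 1) / dx**2,
                              select='i', select_range=(0, 0))
    p = psi[:, 0]
    return E[0], p / np.sqrt(np.sum(p**2) * dx)   # normalise

def hartree(rho):
    """Solve -phi'' = rho with phi(0) = phi(L) = 0 (interior nodes)."""
    n = N - 2
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / dx**2
    phi = np.zeros(N)
    phi[1:-1] = np.linalg.solve(A, rho[1:-1])
    return phi

phi = np.zeros(N)
for it in range(500):
    E0, psi = ground_state(phi)            # Schrodinger step
    phi_new = hartree(n0 * psi**2)         # Poisson step from |psi|^2
    if np.max(np.abs(phi_new - phi)) < 1e-10:
        break                              # self-consistency reached
    phi = (1.0 - mix) * phi + mix * phi_new  # under-relaxation

# The Hartree repulsion shifts the ground state above the bare-box value:
print(E0 > (np.pi / L)**2)   # True
```

Under-relaxation of the potential (the `mix` factor) is the standard trick for stabilising such fixed-point iterations; production device simulators use more sophisticated mixing schemes.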
Consistent parameter fixing in the quark-meson model with vacuum fluctuations
Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael
2016-08-01
We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.
Holistic and Consistent Design Process for Hollow Structures Based on Braided Textiles and RTM
Gnädinger, Florian; Karcher, Michael; Henning, Frank; Middendorf, Peter
2014-06-01
The present paper elaborates a holistic and consistent design process for 2D braided composites in conjunction with Resin Transfer Moulding (RTM). These technologies allow cost-effective production of composites due to their high degree of automation. Literature can be found that deals with specific tasks of the respective technologies, but no work is available that embraces the complete process chain. Therefore, an overall design process is developed within the present paper. It is based on the correlated execution of sub-design processes for the braided preform, RTM injection, mandrel plus mould, and manufacturing. For each sub-process, both the individual tasks and suitable methods to accomplish them are presented. The information flow within the design process is specified and interdependences are illustrated. Composite designers will be equipped with an efficient set of tools because the respective methods take the complexity of the part into account. The design process is applied to a demonstrator in a case study. The individual sub-design processes are carried out by way of example to judge the feasibility of the presented work. For validation, predicted braiding angles and fibre volume fractions are compared with measured ones, and a filling and curing simulation based on PAM-RTM is checked against mould filling studies. Tool concepts for an RTM mould and mandrels that realise undercuts are tested. The individual process parameters for manufacturing are derived from previous design steps. Furthermore, the compatibility of the chosen fibre and matrix system is investigated based on scanning electron microscope (SEM) images. The annual production volume of the demonstrator part is estimated based on these findings.
Directory of Open Access Journals (Sweden)
Damian M Cummings
2010-05-01
Full Text Available Since the identification of the gene responsible for HD (Huntington's disease), many genetic mouse models have been generated. Each employs a unique approach for delivery of the mutated gene and has a different CAG repeat length and background strain. The resultant diversity in the genetic context and phenotypes of these models has led to extensive debate regarding the relevance of each model to the human disorder. Here, we compare and contrast the striatal synaptic phenotypes of two models of HD, namely the YAC128 mouse, which carries the full-length huntingtin gene on a yeast artificial chromosome, and the CAG140 KI (knock-in) mouse, which carries a human/mouse chimaeric gene that is expressed in the context of the mouse genome, with our previously published data obtained from the R6/2 mouse, which is transgenic for exon 1 mutant huntingtin. We show that striatal MSNs (medium-sized spiny neurons) in YAC128 and CAG140 KI mice have similar electrophysiological phenotypes to that of the R6/2 mouse. These include a progressive increase in membrane input resistance, a reduction in membrane capacitance, a lower frequency of spontaneous excitatory postsynaptic currents and a greater frequency of spontaneous inhibitory postsynaptic currents in a subpopulation of striatal neurons. Thus, despite differences in the context of the inserted gene between these three models of HD, the primary electrophysiological changes observed in striatal MSNs are consistent. The outcomes suggest that the changes are due to the expression of mutant huntingtin and such alterations can be extended to the human condition.
Zimmermann, Eva; Seifert, Udo
2015-02-01
Many single-molecule experiments for molecular motors comprise not only the motor but also large probe particles coupled to it. The theoretical analysis of these assays, however, often takes into account only the degrees of freedom representing the motor. We present a coarse-graining method that maps a model comprising two coupled degrees of freedom, representing motor and probe particle, to an effective one-particle model by eliminating the dynamics of the probe particle in a thermodynamically and dynamically consistent way. The coarse-grained rates obey a local detailed balance condition and reproduce the net currents. Moreover, the average entropy production as well as the thermodynamic efficiency is invariant under this coarse-graining procedure. Our analysis reveals that only under the assumption of unrealistically fast probe particles do the coarse-grained transition rates coincide with those of the traditionally used one-particle motor models. Additionally, we find that for multicyclic motors the stall force can depend on the probe size. We apply this coarse-graining method to specific case studies of the F(1)-ATPase and the kinesin motor.
McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter
2016-07-01
Since the turn of the millennium, astronomical archives have begun providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA, to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.
A three-dimensional PEM fuel cell model with consistent treatment of water transport in MEA
Meng, Hua
In this paper, a three-dimensional PEM fuel cell model with a consistent water transport treatment in the membrane electrode assembly (MEA) has been developed. In this new PEM fuel cell model, the conservation equation of the water concentration is solved in the gas channels, gas diffusion layers, and catalyst layers, while a conservation equation of the water content is established in the membrane. These two equations are connected using a set of internal boundary conditions based on thermodynamic phase equilibrium and flux equality at the interface of the membrane and the catalyst layer. The existing fictitious water concentration treatment, which assumes thermodynamic phase equilibrium between the water content in the membrane phase and the water concentration, is applied in the two catalyst layers to consider water transport in the membrane phase. Since all the other conservation equations are still developed and solved in the single-domain framework without resort to interfacial boundary conditions, the present new PEM fuel cell model is termed a mixed-domain method. Results from this mixed-domain approach have been compared extensively with those from the single-domain method, showing good agreement not only in cell performance and current distribution but also in water content variations in the membrane.
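The internal boundary condition described above couples the water concentration in the catalyst layer to the membrane water content via thermodynamic phase equilibrium. A minimal sketch of one such equilibrium closure, using the widely cited Springer et al. (1991) sorption isotherm for Nafion (an assumption for illustration; the abstract does not name the specific correlation the model uses):

```python
def membrane_water_content(a):
    """Equilibrium membrane water content lambda (H2O molecules per
    sulfonic acid group) versus water vapour activity a in [0, 1].
    Springer et al. (1991) correlation for Nafion at 30 C."""
    if not 0.0 <= a <= 1.0:
        raise ValueError("activity must lie in [0, 1]")
    return 0.043 + 17.81 * a - 39.85 * a**2 + 36.0 * a**3

# A saturated interface (a = 1) gives the usual vapour-equilibrated
# upper value of lambda ~ 14:
print(round(membrane_water_content(1.0), 3))  # 14.003
```

In a mixed-domain solver, a relation of this kind (plus flux equality) would be evaluated at each membrane/catalyst-layer interface node to link the water concentration field to the membrane water content field.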
Consistency of non-flat $\\Lambda$CDM model with the new result from BOSS
Kumar, Suresh
2015-01-01
Using 137,562 quasars in the redshift range $2.1\leq z\leq3.5$ from the Data Release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of Sloan Digital Sky Survey (SDSS)-III, the BOSS-SDSS collaboration estimated the expansion rate $H(z=2.34)=222\pm7$ km/s/Mpc of the Universe, and reported that this value is in tension with the predictions of the flat $\Lambda$CDM model at around the 2.5$\sigma$ level. In this letter, we briefly describe some attempts made in the literature to relieve the tension, and show that the tension can naturally be alleviated in a non-flat $\Lambda$CDM model with positive curvature. However, this idea conflicts with the inflation paradigm, which predicts an almost spatially flat Universe. Nevertheless, the theoretical consistency of the non-flat $\Lambda$CDM model with the new result from BOSS deserves the attention of the community.
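The effect invoked here is easy to reproduce numerically: adding positive spatial curvature (a negative $\Omega_k$) lowers the predicted $H(z=2.34)$, moving it toward the BOSS value. The sketch below uses assumed Planck-like parameter values and an arbitrary illustrative curvature, not the letter's actual fits:

```python
import math

def hubble(z, H0, Om, Ok):
    """H(z) in LCDM with curvature:
    H(z) = H0 * sqrt(Om (1+z)^3 + Ok (1+z)^2 + OL), OL = 1 - Om - Ok.
    Positive spatial curvature corresponds to Ok < 0."""
    OL = 1.0 - Om - Ok
    return H0 * math.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2 + OL)

H0, Om = 67.3, 0.315                       # assumed Planck-like values
H_flat   = hubble(2.34, H0, Om, 0.0)       # flat model
H_closed = hubble(2.34, H0, Om, -0.05)     # assumed positive curvature

print(round(H_flat, 1), round(H_closed, 1))
# The flat prediction sits well above the BOSS value 222 +/- 7 km/s/Mpc;
# modest positive curvature pulls H(2.34) toward it.
```

Intuitively, with $H_0$ and $\Omega_m$ held fixed, a negative $\Omega_k$ removes positive terms from the Friedmann sum at high redshift faster than the compensating increase in $\Omega_\Lambda$ adds them back, so $H(z)$ drops at $z=2.34$.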
Baraffe, I; Méra, D; Chabrier, G; Beaulieu, J P
1998-01-01
We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables ($3
Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül
2017-05-01
A summary of the main concepts on global ionospheric map(s) [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter, and difference of slant TEC based on independent Global Positioning System data, dSTEC-GPS) is performed. It is based on 26 worldwide-distributed GPS receivers, mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessments are found. As expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, most evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and in general for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
Towards Self-Consistent Modelling of the Sgr A* Accretion Flow: Linking Theory and Observation
Roberts, Shawn R; Jiang, Yan-Fei; Ostriker, Jeremiah P
2016-01-01
The interplay between supermassive black holes (SMBHs) and their environments is believed to command an essential role in galaxy evolution. The majority of these SMBHs are in the radiatively inefficient accretion phase, where this interplay remains elusive, though likely important, owing to the scarcity of observational constraints. To remedy this, we directly fit 2-D hydrodynamic simulations to Chandra observations of Sgr A* with Markov Chain Monte Carlo sampling, self-consistently modelling the 2-D inflow-outflow solution for the first time. We find the temperature and density at flow onset are consistent with the origin of the gas in the stellar winds of massive stars in the vicinity of Sgr A*. We place the first observational constraints on the angular momentum of the gas and estimate the centrifugal radius, r$_c$ $\approx$ 0.056 r$_b$ $\approx8\times10^{-3}$ pc, where r$_b$ is the Bondi radius. Less than 1\% of the inflowing gas accretes onto the SMBH, the remainder being ejected in a polar outflow. For the first time...
A self-consistent linear-mode model of stellar convection
Macauslan, J.
1985-01-01
A normal-mode expansion of the linearized fluid equations in terms of a small subset of spherical harmonics can provide a foundation for a physically motivated, self-consistent description of a solar-type convection zone. In the absence of dissipation, a second-order differential equation governs the radial dependence of the modes, so that interpretation of the effects on convection quantities of the normal-form 'potential well' is straightforward. The philosophy is quite different from the more recent work of Narasimha and Antia (1982): all envelopes presented here differ substantially from MLT envelopes, and therefore from theirs, which are constructed to be consistent with MLT. The amplitude of all modes is set by a Kelvin-Helmholtz- ('shear'-) instability argument unrelated to solar observations, with the result that the convection description may be considered to arise from 'first heuristic principles'. The thermodynamics modelled vaguely resemble the Sun's, and more vigorously convective envelopes show some phenomena qualitatively like solar observations (e.g., atmospheric velocity spectra).
The self-consistent field model for Fermi systems with account of three-body interactions
Directory of Open Access Journals (Sweden)
Yu.M. Poluektov
2015-12-01
Full Text Available On the basis of a microscopic model of self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is built and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution into the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of contribution from three-body forces. The effective mass and pressure are numerically calculated for the potential of "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, the interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, and the attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered and the three-body repulsive interaction is shown to extend the region of stability of the system with the interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.
Self-consistent 2-phase AGN torus models: SED library for observers
Siebenmorgen, Ralf; Efstathiou, Andreas
2015-01-01
We assume that dust near active galactic nuclei (AGN) is distributed in a torus-like geometry, which may be described by a clumpy medium or a homogeneous disk or as a combination of the two (i.e. a 2-phase medium). The dust particles considered are fluffy and have higher submillimeter emissivities than grains in the diffuse ISM. The dust-photon interaction is treated in a fully self-consistent three dimensional radiative transfer code. We provide an AGN library of spectral energy distributions (SEDs). Its purpose is to quickly obtain estimates of the basic parameters of the AGN, such as the intrinsic luminosity of the central source, the viewing angle, the inner radius, the volume filling factor and optical depth of the clouds, and the optical depth of the disk midplane, and to predict the flux at yet unobserved wavelengths. The procedure is simple and consists of finding an element in the library that matches the observations. We discuss the general properties of the models and in particular the 10mic. silic...
Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech
Directory of Open Access Journals (Sweden)
Petráš Rudolf
2014-06-01
Full Text Available Height-diameter models define the general relationship between the tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and three-parameter Korf functions. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, and then the final growth model has only one or two parameters to be estimated. These "free" parameters are then expressed over the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5 to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. As many as 9 regression models were derived for each species. Although the "goodness of fit" of all models showed that they were generally well suited for the data, the best results were obtained for silver fir. The coefficient of determination ranged from 0.946 to 0.948, the RMSE (m) was in the interval 1.94-1.97, and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the regression parameter estimates obtained for beech were the least precise. The coefficient of determination for beech was 0.854-0.860, the RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf's formula produced slightly better estimations than Michailoff's, and it proved immaterial which estimated parameter was fixed and which parameters
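Fitting a height-diameter curve of the kind described above is a small nonlinear least-squares problem. The sketch below fits the two-parameter Michailoff function h = 1.3 + a·exp(−b/d) (the 1.3 m offset is breast height) to synthetic data; the "true" parameter values and noise level are assumptions for illustration, not the paper's estimates. The three-parameter Korf variant, h = 1.3 + a·exp(−b·d^−c), would be fitted the same way:

```python
import numpy as np
from scipy.optimize import curve_fit

def michailoff(d, a, b):
    """Two-parameter Michailoff height curve: h = 1.3 + a*exp(-b/d),
    with diameter d in cm and height h in m."""
    return 1.3 + a * np.exp(-b / d)

# Synthetic stand with assumed true parameters a = 30, b = 15:
rng = np.random.default_rng(1)
d = np.linspace(8.0, 60.0, 40)                       # diameters, cm
h = michailoff(d, 30.0, 15.0) + rng.normal(0.0, 0.2, d.size)

popt, _ = curve_fit(michailoff, d, h, p0=(25.0, 10.0))
a_hat, b_hat = popt
rmse = float(np.sqrt(np.mean((h - michailoff(d, *popt))**2)))
print(a_hat, b_hat, rmse)   # parameters near (30, 15), RMSE near 0.2
```

In the generalized form used in the paper, a and b would additionally be expressed as functions of the quadratic mean diameter, mean height, and stand age before fitting.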
Motion of the Philippine Sea plate consistent with the NUVEL-1A model
Zang, Shao Xian; Chen, Qi Yong; Ning, Jie Yuan; Shen, Zheng Kang; Liu, Yong Gang
2002-09-01
We determine Euler vectors for 12 plates, including the Philippine Sea plate (PH), relative to the fixed Pacific plate (PA) by inverting the earthquake slip vectors along the boundaries of the Philippine Sea plate, GPS observed velocities, and 1122 data from the NUVEL-1 and the NUVEL-1A global plate motion model, respectively. This analysis thus also yields Euler vectors for the Philippine Sea plate relative to adjacent plates. Our results are consistent with observed data and can satisfy the geological and geophysical constraints along the Caroline (CR)-PH and PA-CR boundaries. The results also give insight into internal deformation of the Philippine Sea plate. The area enclosed by the Ryukyu Trench-Nankai Trough, Izu-Bonin Trench and GPS stations S102, S063 and Okino Torishima moves uniformly as a rigid plate, but the areas near the Philippine Trench, Mariana Trough and Yap-Palau Trench have obvious deformation.
Plasma Processes : A self-consistent kinetic modeling of a 1-D, bounded, plasma in equilibrium
Indian Academy of Sciences (India)
Monojoy Goswami; H Ramachandran
2000-11-01
A self-consistent kinetic treatment is presented here, where the Boltzmann equation is solved for a particle-conserving Krook collision operator. The resulting equations have been implemented numerically. The treatment solves for the entire quasineutral column, making no assumptions about the ratio mfp/L, where mfp is the ion-neutral collision mean free path and L is the size of the device. Coulomb collisions are neglected in favour of collisions with neutrals, and the particle source is modeled as a uniform Maxwellian. Electrons are treated as an inertialess but collisional fluid. The ion distribution function for the trapped and the transiting orbits is obtained. Interesting findings include the anomalous heating of ions as they approach the presheath, the development of strongly non-Maxwellian features near the last mfp, and strong modifications of the sheath criterion.
Wen, Yan; Wang, Yi; Liu, Tian
2016-02-01
The inversion from the magnetic field to the magnetic susceptibility distribution is ill-posed because the dipole kernel, which relates the magnetic susceptibility to the magnetic field, has zeroes at a pair of cone surfaces in the k-space, leading to streaking artifacts on the reconstructed quantitative susceptibility maps (QSM). A method to impose consistency on the cone data (CCD) with structural priors is proposed to improve the solutions of k-space methods. The information in the cone region is recovered by enforcing structural consistency with structural prior, while information in the noncone trust region is enforced to be consistent with the magnetic field measurements in k-space. This CCD method was evaluated by comparing the initial results of existing QSM algorithms to the QSM results after CCD enhancement with respect to the COSMOS results in simulation, phantom, and in vivo human brain. The proposed method demonstrated suppression of streaking artifacts and the resulting QSM showed better agreement with reference standard QSM compared with other k-space based methods. By enforcing consistency with structural priors in the cone region, the missing data in the cone can be recovered and the streaking artifacts in QSM can be suppressed. © 2015 Wiley Periodicals, Inc.
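The cone surfaces referred to above are where the k-space dipole kernel vanishes, at the magic angle (~54.7°) from the main field axis; data there cannot be recovered by direct inversion, which is why the CCD method fills that region from structural priors instead. A minimal sketch in standard QSM notation (B0 along z):

```python
import numpy as np

def dipole_kernel(kx, ky, kz):
    """k-space dipole kernel D(k) = 1/3 - kz^2/|k|^2 (B0 along z)."""
    k2 = kx**2 + ky**2 + kz**2
    return 1.0 / 3.0 - kz**2 / k2

# D(k) vanishes where kz^2/|k|^2 = 1/3, i.e. on cones at the magic
# angle arccos(1/sqrt(3)) ~ 54.7 degrees from the field axis:
cos_t = 1.0 / np.sqrt(3.0)
sin_t = np.sqrt(1.0 - cos_t**2)
on_cone  = dipole_kernel(sin_t, 0.0, cos_t)   # direction on the cone
along_b0 = dipole_kernel(0.0, 0.0, 1.0)       # direction along B0
print(on_cone, along_b0)   # ~0 on the cone; -1/3 - ... = -2/3 along B0
```

Dividing the field by D(k) therefore amplifies noise without bound near these cones, producing the streaking artifacts the CCD method suppresses.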
Energy Technology Data Exchange (ETDEWEB)
BRANNON,REBECCA M.
2000-11-01
A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths. As will be discussed in a companion
Modeling of etch profile evolution including wafer charging effects using self consistent ion fluxes
Energy Technology Data Exchange (ETDEWEB)
Hoekstra, R.J.; Kushner, M.J. [Univ. of Illinois, Urbana, IL (United States). Dept. of Electrical and Computer Engineering
1996-12-31
As high density plasma reactors become more predominant in industry, the need has intensified for computer-aided design tools which address both equipment issues, such as ion flux uniformity onto the wafer, and process issues, such as etch feature profile evolution. A hierarchy of models has been developed to address these issues with the goal of producing a comprehensive plasma processing design capability. The Hybrid Plasma Equipment Model (HPEM) produces ion and neutral densities, and electric fields in the reactor. The Plasma Chemistry Monte Carlo Model (PCMC) determines the angular and energy distributions of ion and neutral fluxes to the wafer using species source functions, time dependent bulk electric fields, and sheath potentials from the HPEM. These fluxes are then used by the Monte Carlo Feature Profile Model (MCFP) to determine the time evolution of etch feature profiles. Using this hierarchy, the effects of physical modifications of the reactor, such as changing wafer clamps or electrode structures, on etch profiles can be evaluated. The effects of wafer charging on feature evolution are examined by calculating the fields produced by the charge deposited by ions and electrons within the features. The effect of radial variations and nonuniformity in angular and energy distribution of the reactive fluxes on feature profiles and feature charging will be discussed for p-Si etching in inductively-coupled plasmas (ICP) sustained in chlorine gas mixtures. The effects of over- and under-wafer topography on etch profiles will also be discussed.
Structural system identification: Structural dynamics model validation
Energy Technology Data Exchange (ETDEWEB)
Red-Horse, J.R.
1997-04-01
Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.
Churchill, Nathan W; Madsen, Kristoffer; Mørup, Morten
2016-10-01
The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize these functional relationships, although it is an ongoing challenge to develop robust, interpretable models for high-dimensional fMRI data. Gaussian mixture models (GMMs) are a powerful tool for parcellating the brain, based on the similarity of voxel time series. However, conventional GMMs have limited parametric flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension of the GMM framework simultaneously estimates spatial clustering and the most consistent group functional connectivity structure. It also explicitly models network variability, based on voxel- and subject-specific network scaling profiles. We compared the FSIM to standard GMM in a predictive cross-validation framework and examined the importance of different model parameters, using both simulated and experimental resting-state data. The reliability of parcellations is not significantly altered by the flexibility of the FSIM, whereas voxel- and subject-specific network scaling profiles significantly improve the ability to predict functional connectivity in independent test data. Moreover, the FSIM provides a set of interpretable parameters to characterize both consistent and variable aspects of functional connectivity structure. As an example of its utility, we use subject-specific network profiles to identify brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group
A Self-consistent and Spatially Dependent Model of the Multiband Emission of Pulsar Wind Nebulae
Lu, Fang-Wu; Gao, Quan-Gui; Zhang, Li
2017-01-01
A self-consistent and spatially dependent model is presented to investigate the multiband emission of pulsar wind nebulae (PWNe). In this model, a spherically symmetric system is assumed and the dynamical evolution of the PWN is included. The processes of convection, diffusion, adiabatic loss, radiative loss, and photon-photon pair production are taken into account in the electron evolution equation, and the processes of synchrotron radiation, inverse Compton scattering, synchrotron self-absorption, and pair production are included in the photon evolution equation. The two coupled equations are solved simultaneously. The model is applied to explain the observed results of the PWN in MSH 15–52. Our results show that the spectral energy distributions (SEDs) of both electrons and photons are functions of distance. The observed photon SED of MSH 15–52 can be well reproduced by this model. With the parameters obtained by fitting the observed SED, the spatial variations of photon index and surface brightness observed in the X-ray band can also be well reproduced. Moreover, it can be derived that the present-day diffusion coefficient of MSH 15–52 at the termination shock is κ₀ = 6.6 × 10²⁴ cm² s⁻¹, with a spatial average of κ̄ = 1.4 × 10²⁵ cm² s⁻¹; the present-day magnetic field at the termination shock has a value of B₀ = 26.6 μG and the spatially averaged magnetic field is B̄ = 14.9 μG. The spatial changes of the spectral index and surface brightness at different bands are predicted.
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally-constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
Structural dynamic modifications via models
Indian Academy of Sciences (India)
T K Kundra
2000-06-01
Structural dynamic modification techniques attempt to reduce dynamic design time and can be implemented beginning with spatial models of structures, dynamic test data or updated models. The models assumed in this discussion are mathematical models, namely mass, stiffness, and damping matrices of the equations of motion of a structure. These models are identified/extracted from dynamic test data viz. frequency response functions (FRFs). Alternatively these models could have been obtained by adjusting or updating the finite element model of the structure in the light of the test data. The methods of structural modification for getting desired dynamic characteristics by using modifiers namely mass, beams and tuned absorbers are discussed.
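As a toy illustration of how the mass, stiffness and damping models connect to FRFs, the receptance of a single-degree-of-freedom system is H(w) = 1/(k - m*w^2 + i*c*w), and a mass modification then shifts the natural frequency. The numerical values below are hypothetical, not from the article.

```python
# Illustrative sketch (not from the article): the receptance FRF of a
# single-degree-of-freedom system, and the effect of a simple
# structural modification (added mass) on its natural frequency.

def receptance(w, m, c, k):
    """Receptance FRF H(w) = displacement / force for one DOF."""
    return 1.0 / (k - m * w**2 + 1j * c * w)

m, c, k = 1.0, 0.5, 100.0           # kg, N*s/m, N/m (hypothetical values)
wn = (k / m) ** 0.5                 # undamped natural frequency, 10 rad/s
peak = abs(receptance(wn, m, c, k))  # = 1/(c*wn) at resonance

# Adding mass dm lowers the natural frequency: wn' = sqrt(k/(m+dm)).
wn_modified = (k / (m + 0.5)) ** 0.5
```

At resonance the stiffness and inertia terms cancel, so the peak magnitude is set entirely by the damping, which is why damping matrices are hard to identify accurately from FRF data.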
Self-consistent modeling of CFETR baseline scenarios for steady-state operation
Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team
2017-07-01
Integrated modeling for core plasma is performed to increase confidence in the proposed baseline scenario in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through the consistent iterative calculation of equilibrium, transport, auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain the scenarios with q_min > 2 and fusion power of ~70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in the 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of RF current drive for the RF-only scenario is also presented. The simulation workflow for core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.
A self-consistent impedance method for electromagnetic surface impedance modeling
Thiel, David V.; Mittra, Raj
2001-01-01
A two-dimensional, self-consistent impedance method has been derived and used to calculate the electromagnetic surface impedance above buried objects at very low frequencies. The earth half space is discretized using an array of impedance elements. Inhomogeneities in the complex permittivity of the earth are reflected in variations in these impedance elements. The magnetic field is calculated for each cell in the solution space using a difference equation derived from Faraday's and Ampere's laws. It is necessary to include an air layer above the earth's surface to allow the scattered magnetic field to be calculated at the surface. The source field is applied above the earth's surface as a Dirichlet boundary condition, whereas the Neumann condition is employed at all other boundaries in the solution space. This, in turn, enables users to use both finite and infinite magnetic field sources as excitations. The technique is shown to be computationally efficient and yields reasonably accurate results when applied to a number of one- and two-dimensional earth structures with a known surface impedance distribution.
Self-consistent physical parameters for 5 intermediate-age SMC stellar clusters from CMD modelling
Dias, Bruno; Barbuy, Beatriz; Santiago, Basilio; Ortolani, Sergio; Balbinot, Eduardo
2013-01-01
Context. Stellar clusters in the Small Magellanic Cloud (SMC) are useful probes to study the chemical and dynamical evolution of this neighbouring dwarf galaxy, enabling inspection of a large period covering over 10 Gyr. Aims. The main goals of this work are the derivation of age, metallicity, distance modulus, reddening, core radius and central density profile for six sample clusters, in order to place them in the context of the Small Cloud evolution. The studied clusters are: AM 3, HW 1, HW 34, HW 40, Lindsay 2, and Lindsay 3, where HW 1, HW 34, and Lindsay 2 are studied for the first time. Methods. Optical colour-magnitude diagrams (V, B-V CMDs) and radial density profiles were built from images obtained with the 4.1 m SOAR telescope, reaching V~23. The determination of structural parameters was carried out by applying King profile fitting. The other parameters were derived in a self-consistent way by means of isochrone fitting, which uses likelihood statistics to identify the synthetic CMDs that best rep...
Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars
Airapetian, V. S.; Leake, J. E.; Carpenter, Kenneth G.
2015-01-01
We present the first magnetohydrodynamic model of the stellar chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using α Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfvén wave dissipation and wave reflection. We have demonstrated that due to inclusion of the effects of ion-neutral collisions in magnetized weakly ionized chromospheric plasma on resistivity and the appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfvén waves can explain non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of α Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by upward propagating non-linear Alfvén waves, are consistent with observational constraints on the net radiative losses in UV lines and the continuum from α Tau. At the top of the chromosphere, Alfvén waves experience significant reflection, producing downward propagating transverse waves that interact with upward propagating waves and produce velocity shear in the chromosphere. Our simulations also suggest that momentum deposition by non-linear Alfvén waves becomes significant in the outer chromosphere at 1 stellar radius from the photosphere. The calculated terminal velocity and the mass loss rate are consistent with the observationally derived wind properties in α Tau.
Saro, A.; De Lucia, G.; Borgani, S.; Dolag, K.
2010-08-01
We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical smoothed particle hydrodynamics (SPH) simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. This simplified comparison is thus not meant to be compared with observational data, but is aimed at understanding the level of agreement, at the stripped-down level considered, between two techniques that are widely used to model galaxy formation in a cosmological framework and which present complementary advantages and disadvantages. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: (i) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; (ii) while all stars associated with the BCG were formed in its progenitors in the SAM used here, this holds true only for half of the final BCG stellar mass in the SPH simulation, the remaining half being contributed by tidal stripping of stars from the diffuse stellar component associated with galaxies accreted on the cluster halo; (iii) SPH satellites can lose up to 90 per cent of their stellar mass at the time of accretion, due to tidal stripping, a process not included in the SAM used in this paper; (iv) in the SPH simulation, significant cooling occurs on the most massive satellite galaxies and this lasts for up to 1 Gyr after accretion. This physical process is
Iffrig, Olivier; Hennebelle, Patrick
2017-08-01
Context. Galaxy evolution and star formation are two multi-scale problems tightly linked to each other. Aims: We aim to describe simultaneously the large-scale evolution largely induced by the feedback processes and the details of the gas dynamics that control the star formation process through gravitational collapse. This is a necessary step in understanding the interstellar cycle, which triggers galaxy evolution. Methods: We performed a set of three-dimensional high-resolution numerical simulations of a turbulent, self-gravitating and magnetized interstellar medium within a 1 kpc stratified box, with supernova feedback correlated with star-forming regions. In particular, we focussed on the roles played by the magnetic field and the feedback in the galactic vertical structure, the star formation rate (SFR) and the flow dynamics. For this purpose we varied their respective intensities. We extracted the properties of the dense clouds arising from the turbulent motions and computed power spectra of various quantities. Results: Using a distribution of supernovae sufficiently correlated with the dense gas, we find that supernova explosions can reproduce the observed SFR, particularly if the magnetic field is of the order of a few μG. The vertical structure, which results from a dynamical and an energy equilibrium, is well reproduced by a simple analytical model, which allows us to roughly estimate the efficiency of the supernovae in driving the turbulence in the disc to be rather low, of the order of 1.5%. Strong magnetic fields may help to increase this efficiency by a factor of between two and three. To characterize the flow we compute the power spectra of various quantities in 3D, but also in 2D in order to account for the stratification of the galactic disc. We find that within our setup, the compressive modes tend to dominate in the equatorial plane, while at about one scale height above it, solenoidal modes become dominant. We measured the angle between the magnetic
Self-consistent modeling of radio-frequency plasma generation in stellarators
Moiseenko, V. E.; Stadnik, Yu. S.; Lysoivan, A. I.; Korovin, V. B.
2013-11-01
A self-consistent model of radio-frequency (RF) plasma generation in stellarators in the ion cyclotron frequency range is described. The model includes equations for the particle and energy balance and boundary conditions for Maxwell's equations. The equation of charged particle balance takes into account the influx of particles due to ionization and their loss via diffusion and convection. The equation of electron energy balance takes into account the RF heating power source, as well as energy losses due to the excitation and electron-impact ionization of gas atoms, energy exchange via Coulomb collisions, and plasma heat conduction. The deposited RF power is calculated by solving the boundary problem for Maxwell's equations. When describing the dissipation of the energy of the RF field, collisional absorption and Landau damping are taken into account. At each time step, Maxwell's equations are solved for the current profiles of the plasma density and plasma temperature. The calculations are performed for a cylindrical plasma. The plasma is assumed to be axisymmetric and homogeneous along the plasma column. The system of balance equations is solved using the Crank-Nicolson scheme. Maxwell's equations are solved in a one-dimensional approximation by using the Fourier transformation along the azimuthal and longitudinal coordinates. Results of simulations of RF plasma generation in the Uragan-2M stellarator by using a frame antenna operating at frequencies lower than the ion cyclotron frequency are presented. The calculations show that the slow wave generated by the antenna is efficiently absorbed at the periphery of the plasma column, due to which only a small fraction of the input power reaches the confinement region. As a result, the temperature on the axis of the plasma column remains low, whereas at the periphery it is substantially higher. This leads to strong absorption of the RF field at the periphery via the Landau mechanism.
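As a minimal illustration of the time stepping named in the abstract, the sketch below applies one Crank-Nicolson step to a bare 1D diffusion equation with fixed ends, solved with the standard tridiagonal (Thomas) algorithm. It is a stand-in for, not a reproduction of, the paper's balance equations; all names and the test problem are ours.

```python
# One Crank-Nicolson step for du/dt = D * d2u/dx2 with Dirichlet ends,
# the kind of implicit scheme used for particle/energy balance equations.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(u, r):
    """One Crank-Nicolson step; r = D*dt/dx**2, u[0] and u[-1] held fixed."""
    n = len(u)
    a = [-r / 2] * n
    b = [1 + r] * n
    c = [-r / 2] * n
    d = [0.0] * n
    b[0] = b[-1] = 1.0           # boundary rows: identity
    c[0] = a[-1] = 0.0
    d[0], d[-1] = u[0], u[-1]
    for i in range(1, n - 1):    # explicit half of the scheme
        d[i] = u[i] + (r / 2) * (u[i + 1] - 2 * u[i] + u[i - 1])
    return thomas(a, b, c, d)
```

Averaging the explicit and implicit halves makes the scheme second-order in time and unconditionally stable, which is why it is a common choice for stiff balance equations coupled to a separately solved field problem.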
Self-consistent Keldysh approach to quenches in the weakly interacting Bose-Hubbard model
Lo Gullo, N.; Dell'Anna, L.
2016-11-01
We present a nonequilibrium Green's-function approach to study the dynamics following a quench in the weakly interacting Bose-Hubbard model (BHM). The technique is based on the self-consistent solution of a set of equations which represents a particular case of the most general set of Hedin's equations for the interacting single-particle Green's function. We use the ladder approximation as a skeleton diagram for the two-particle scattering amplitude, which enters the Dyson equation through the self-energy and yields the interacting single-particle Green's function. This scheme is then implemented numerically in a parallelized code. We exploit this approach to study the correlation propagation after a quench in the interaction parameter, in one and two dimensions. In particular, we show how our approach is able to recover the crossover from the ballistic to the diffusive regime with increasing boson-boson interaction. Finally, we also discuss the role of a thermal initial state on the dynamics for both one- and two-dimensional BHMs, finding that, surprisingly, at high temperature a ballistic evolution is restored.
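The self-consistency loop at the heart of such schemes can be illustrated with a scalar toy model: iterate the Dyson relation with a model self-energy until the Green's function stops changing. The quadratic self-energy below is purely illustrative and is not the ladder approximation of the paper.

```python
# Toy sketch of a self-consistent Dyson loop: iterate
# G = 1 / (z - eps - Sigma(G)) with a model self-energy
# Sigma(G) = U**2 * G (second-order-like; illustrative only).

def solve_dyson(z, eps, U, tol=1e-12, max_iter=1000):
    g = 1.0 / (z - eps)              # start from the non-interacting G
    for _ in range(max_iter):
        sigma = U**2 * g             # model self-energy, functional of G
        g_new = 1.0 / (z - eps - sigma)
        if abs(g_new - g) < tol:
            return g_new
        g = 0.5 * g + 0.5 * g_new    # damped update aids convergence
    raise RuntimeError("self-consistency loop did not converge")

g = solve_dyson(z=1.0 + 0.1j, eps=0.0, U=0.3)
```

In the full scheme the scalar becomes a two-time matrix on the Keldysh contour and the self-energy is built from ladder diagrams, but the fixed-point structure of the iteration is the same.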
Self-consistent model of a solid for the description of lattice and magnetic properties
Balcerzak, T.; Szałowski, K.; Jaščur, M.
2017-03-01
In the paper a self-consistent theoretical description of the lattice and magnetic properties of a model system with magnetoelastic interaction is presented. The dependence of magnetic exchange integrals on the distance between interacting spins is assumed, which couples the magnetic and the lattice subsystem. The framework is based on summation of the Gibbs free energies for the lattice subsystem and magnetic subsystem. On the basis of minimization principle for the Gibbs energy, a set of equations of state for the system is derived. These equations of state combine the parameters describing the elastic properties (relative volume deformation) and the magnetic properties (magnetization changes). The formalism is extensively illustrated with the numerical calculations performed for a system of ferromagnetically coupled spins S=1/2 localized at the sites of simple cubic lattice. In particular, the significant influence of the magnetic subsystem on the elastic properties is demonstrated. It manifests itself in significant modification of such quantities as the relative volume deformation, thermal expansion coefficient or isothermal compressibility, in particular, in the vicinity of the magnetic phase transition. On the other hand, the influence of lattice subsystem on the magnetic one is also evident. It takes, for example, the form of dependence of the critical (Curie) temperature and magnetization itself on the external pressure, which is thoroughly investigated.
How consistent is cloudiness over Canada from satellite observations and modeling data?
Trishchenko, A. P.; Khlopenkov, K.; Latifovic, R.
2004-05-01
Being one of the major modulators of the radiation budget and hydrological cycle, clouds are still a significant challenge for modeling and satellite retrievals. For example, our analysis shows that for Western Canada the systematic difference in total cloud amounts between NCAR/NCEP Reanalysis-2 and ISCCP reaches 20-30 per cent. Satellite retrievals are especially difficult for Northern climate regions over snow-covered surfaces and during night-time. To better understand these differences and their influence on the earth radiation budget in Northern latitudes, we are attempting to undertake a re-analysis of satellite AVHRR data over Canada using improved data processing and cloud detection algorithms. Details of the cloud detection algorithm for day-time and night-time conditions over snow-free and snow-covered surfaces are discussed. Selected results of satellite retrievals for typical summer and winter conditions over Canada are compared to previous analyses, such as the ISCCP and Pathfinder projects. Consistency between our cloud retrievals using AVHRR data and those available from MODIS will also be considered.
Directory of Open Access Journals (Sweden)
Falko Schmidt
2017-01-01
We perform a comprehensive theoretical study of the structural and electronic properties of potassium niobate (KNbO3) in the cubic, tetragonal, orthorhombic, monoclinic, and rhombohedral phases, based on density-functional theory. The influence of different parametrizations of the exchange-correlation functional on the investigated properties is analyzed in detail, and the results are compared to available experimental data. We argue that the PBEsol and AM05 generalized gradient approximations as well as the RTPSS meta-generalized gradient approximation yield consistently accurate structural data for both the external and internal degrees of freedom and are overall superior to the local-density approximation or other conventional generalized gradient approximations for the structural characterization of KNbO3. Band-structure calculations using a HSE-type hybrid functional further indicate significant near degeneracies of band-edge states in all phases, which are expected to be relevant for the optical response of the material.
Tunable fiber laser based on a cascaded structure consisting of in-line MZI and traditional MZI
Tong, Zheng-rong; Yang, He; Zhang, Wei-hua
2016-11-01
A tunable erbium-doped fiber (EDF) laser with a cascaded structure consisting of an in-line Mach-Zehnder interferometer (MZI) and a traditional MZI is proposed. The transmission spectrum of the in-line MZI is modulated by the traditional MZI, which determines the period of the cascaded structure, while the in-line MZI's transmission spectrum is the outer envelope curve of the cascaded structure's transmission spectrum. A stable single-wavelength lasing operation is obtained at 1527.14 nm. The linewidth is less than 0.07 nm with a side-mode suppression ratio (SMSR) over 48 dB. Fixing the in-line MZI structure on a furnace, when the temperature changes from 30 °C to 230 °C, the central wavelength can be tuned within the range of 12.4 nm.
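The cascade behaviour described above follows from multiplying two two-beam interference spectra: the interferometer with the larger optical path difference supplies the fine fringes, and the one with the smaller OPD supplies the outer envelope. A minimal sketch with hypothetical OPD values (not the fiber parameters of the paper):

```python
# Hedged sketch: cascaded transmission T(lam) = T_inline(lam) * T_mzi(lam)
# for two two-beam interferometers with different optical path
# differences (OPDs).  All numbers are illustrative.
import math

def two_beam(lam_nm, opd_nm):
    """Two-beam interference transmission, equal arm amplitudes."""
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * opd_nm / lam_nm))

def cascade(lam_nm, opd_inline_nm=4.0e4, opd_mzi_nm=4.0e5):
    # Smaller OPD (in-line MZI): slow envelope; larger OPD (traditional
    # MZI): fine comb, since fringe spacing ~ lam**2 / OPD.
    return two_beam(lam_nm, opd_inline_nm) * two_beam(lam_nm, opd_mzi_nm)
```

Because the product can never exceed either factor, the in-line MZI spectrum bounds the cascade from above, which is exactly the "outer envelope" behaviour described in the abstract.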
Leboissertier, Anthony; Okong'O, Nora; Bellan, Josette
2005-01-01
Large-eddy simulation (LES) is conducted of a three-dimensional temporal mixing layer whose lower stream is initially laden with liquid drops which may evaporate during the simulation. The gas-phase equations are written in an Eulerian frame for two perfect gas species (carrier gas and vapour emanating from the drops), while the liquid-phase equations are written in a Lagrangian frame. The effect of drop evaporation on the gas phase is considered through mass, species, momentum and energy source terms. The drop evolution is modelled using physical drops, or using computational drops to represent the physical drops. Simulations are performed using various LES models previously assessed on a database obtained from direct numerical simulations (DNS). These LES models are for: (i) the subgrid-scale (SGS) fluxes and (ii) the filtered source terms (FSTs) based on computational drops. The LES, which are compared to filtered-and-coarsened (FC) DNS results at the coarser LES grid, are conducted with 64 times fewer grid points than the DNS, and up to 64 times fewer computational than physical drops. It is found that both constant-coefficient and dynamic Smagorinsky SGS-flux models, though numerically stable, are overly dissipative and damp generated small-resolved-scale (SRS) turbulent structures. Although the global growth and mixing predictions of LES using Smagorinsky models are in good agreement with the FC-DNS, the spatial distributions of the drops differ significantly. In contrast, the constant-coefficient scale-similarity model and the dynamic gradient model perform well in predicting most flow features, with the latter model having the advantage of not requiring a priori calibration of the model coefficient. The ability of the dynamic models to determine the model coefficient during LES is found to be essential since the constant-coefficient gradient model, although more accurate than the Smagorinsky model, is not consistently numerically stable despite using DNS
An internally consistent inverse model to calculate ridge-axis hydrothermal fluxes
Coogan, L. A.; Dosso, S.
2010-12-01
Fluid and chemical fluxes from high-temperature, on-axis, hydrothermal systems at mid-ocean ridges have been estimated in a number of ways. These generally use simple mass balances based on either vent fluid compositions or the compositions of altered sheeted dikes. Here we combine these approaches in an internally consistent model. Seawater is assumed to enter the crust and react with the sheeted dike complex at high temperatures. Major element fluxes for both the rock and fluid are calculated from balanced stoichiometric reactions. These reactions include end-member components of the minerals plagioclase, pyroxene, amphibole, chlorite and epidote along with pure anhydrite, quartz, pyrite, pyrrhotite, titanite, magnetite, ilmenite and ulvospinel and the fluid species H2O, Mg2+, Ca2+, Fe2+, Na+, Si4+, H2S, H+ and H2. Trace element abundances (Li, B, K, Rb, Cs, Sr, Ba, U, Tl, Mn, Cu, Zn, Co, Ni, Pb and Os) and isotopic ratios (Li, B, O, Sr, Tl, Os) are calculated from simple mass balance of a fluid-rock reaction. A fraction of the Cu, Zn, Pb, Co, Ni, Os and Mn in the fluid after fluid-rock reaction is allowed to precipitate during discharge before the fluid reaches the seafloor. S-isotopes are tied to mineralogical reactions involving S-bearing phases. The free parameters in the model are the amounts of each mineralogical reaction that occurs, the amounts of the metals precipitated during discharge, and the water-to-rock ratio. These model parameters, and their uncertainties, are constrained by: (i) mineral abundances and mineral major element compositions in altered dikes from ODP Hole 504B and the Pito and Hess Deep tectonic windows (EPR crust); (ii) changes in dike bulk-rock trace element and isotopic compositions from these locations relative to fresh MORB glass compositions; and (iii) published vent fluid compositions from basalt-hosted high-temperature ridge axis hydrothermal systems. Using a numerical inversion algorithm, the probability density of different
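The trace element part of such a model reduces to a simple statement: what the rock loses, the fluid gains, scaled by the water-to-rock ratio. A hedged sketch with hypothetical Li concentrations (not the paper's inversion, which constrains many reactions jointly):

```python
# Illustrative sketch of the simple fluid-rock mass balance underlying
# the trace element calculation: W*c_fluid_in + c_rock_in =
# W*c_fluid_out + c_rock_out, per unit rock mass.  Values hypothetical.

def fluid_out(c_fluid_in, c_rock_in, c_rock_out, w_r):
    """Fluid concentration after reaction; w_r = water/rock mass ratio,
    all concentrations in consistent units (e.g. ppm)."""
    return c_fluid_in + (c_rock_in - c_rock_out) / w_r

# Hypothetical example: altered dikes lose Li (10 -> 4 ppm) into
# seawater carrying 0.18 ppm Li at a water/rock ratio of 2.
li_vent = fluid_out(c_fluid_in=0.18, c_rock_in=10.0, c_rock_out=4.0, w_r=2.0)
```

Running the balance in this direction is what lets altered-dike compositions and vent fluid compositions constrain the same parameter, the water-to-rock ratio, from two independent data sets.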
Hu, Ziqiao; Chen, Guangming
2014-09-10
A novel type of polymer nanocomposite (NC) hydrogel with extraordinary mechanical properties at low inorganic content is prepared and investigated. The NC hydrogels consist of isethionate-loaded layered double hydroxide/polyacrylamide (LDH-Ise/PAM) - with LDH-Ise being used because of its swelling properties - and no conventional organic crosslinker. The NC hydrogels exhibit an unusual hierarchical porous structure at the micro- and nanometer scales, and their elongation at break can exceed 4000%.
DEFF Research Database (Denmark)
Sogachev, Andrey; Kelly, Mark C.; Leclerc, Monique Y.
2012-01-01
A self-consistent two-equation closure treating buoyancy and plant drag effects has been developed, through consideration of the behaviour of the supplementary equation for the length-scale-determining variable in homogeneous turbulent flow. Being consistent with the canonical flow regimes of gri...
Models of large scale structure
Energy Technology Data Exchange (ETDEWEB)
Frenk, C.S. (Physics Dept., Univ. of Durham (UK))
1991-01-01
The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).
Sarofim, M. C.; Martinich, J.; Waldhoff, S.; DeAngelo, B. J.; McFarland, J.; Jantarasami, L.; Shouse, K.; Crimmins, A.; Li, J.
2014-12-01
The Climate Change Impacts and Risk Analysis (CIRA) project establishes a new multi-model framework to systematically assess the physical impacts, economic damages, and risks from climate change. The primary goal of this framework is to estimate the degree to which climate change impacts and damages in the United States are avoided or reduced in the 21st century under multiple greenhouse gas (GHG) emissions mitigation scenarios. The first phase of the CIRA project is a modeling exercise that included two integrated assessment models and 15 sectoral models encompassing five broad impacts sectors: water resources, electric power, infrastructure, human health, and ecosystems. Three consistent socioeconomic and climate scenarios are used to analyze the benefits of global GHG mitigation targets: a reference scenario and two policy scenarios with total radiative forcing targets in 2100 of 4.5 W/m2 and 3.7 W/m2. In this exercise, the implications of key uncertainties are explored, including climate sensitivity, climate model, natural variability, and model structures and parameters. This presentation describes the motivations and goals of the CIRA project; the design and academic contribution of the first CIRA modeling exercise; and briefly summarizes several papers published in a special issue of Climatic Change. The results across impact sectors show that GHG mitigation provides benefits to the United States that increase over time, the effects of climate change can be strongly influenced by near-term policy choices, adaptation can reduce net damages, and impacts exhibit spatial and temporal patterns that may inform mitigation and adaptation policy discussions.
PRODUCT STRUCTURE DIGITAL MODEL
Directory of Open Access Journals (Sweden)
V.M. Sineglazov
2005-02-01
Research results on the representation of product structure by means of the CADDS5 computer-aided design (CAD) system, the Optegra Product Data Management (PDM) system, and the Windchill Product Life Cycle Management (PLM) system are examined in this work. An analysis of structure component development and its storage in the various systems is carried out. Algorithms of structure transformation required for correct representation of the structure are considered. A management analysis of the electronic mock-up presentation of the product structure is carried out for the Windchill system.
Self consistent model of core formation and the effective metal-silicate partitioning
Ichikawa, H.; Labrosse, S.; Kameyama, M.
2010-12-01
It has long been known that the formation of the core transforms gravitational energy into heat and is able to heat up the whole Earth by about 2000 K. However, the distribution of this energy within the Earth is still debated and depends on the core formation process considered. Iron rain in the surface magma ocean is thought to be the first separation mechanism for large planets; the iron then coalesces to form a pond at the base of the magma ocean [Stevenson 1990]. The time scale of the separation can be estimated from the falling velocity of the iron phase, estimated by numerical simulation [Ichikawa et al., 2010] as ~10 cm/s for centimetre-scale iron droplets. A simple estimate of the metal-silicate partitioning from the P-T condition at the base of the magma ocean, which in a single-stage model must lie between the peridotite liquidus and solidus, is inconsistent with the Earth's core-mantle partitioning: the P-T conditions at which silicate equilibrated with metal exceed the liquidus or solidus temperature by about 700 K. For example, estimated P-T conditions are 40 GPa at 3750 K [Wade and Wood, 2005], T ≥ 3600 K [Chabot and Agee, 2003], and 35 GPa at T ≥ 3300 K [Gessmann and Rubie, 2000]. Meanwhile, Rubie et al., 2003 showed that metal could not equilibrate with silicate at the base of the magma ocean before crystallization of the silicate. On the other hand, metal-silicate equilibration is achieved in only ~5 s in the iron-rain state. Therefore metal and silicate separate and equilibrate with each other simultaneously at the P-T conditions encountered en route to the iron pond. Taking into account the release of gravitational energy, the temperature of the middle of the magma ocean would be higher than the liquidus. Estimation of the thermal structure during iron-silicate separation requires the development of a planetary-sized calculation model. However, because of the huge disparity of scales between the cm-sized drops and the magma ocean, a direct
Wan, Li; Xu, Shixin; Liao, Maijia; Liu, Chun; Sheng, Ping
2014-01-01
In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so that the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the "charge regulation" behavior, together with predictions of force variation as a function of the nanoscale separation between two charged surfaces that are in good agreement with experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied electric field, a
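The screening regime discussed above (Debye layers that begin to overlap when the channel width approaches the Debye length) can be illustrated with the linearized Debye-Hückel limit of the Poisson-Boltzmann equation. The sketch below is illustrative only; it is not the paper's CCPB formulation, and the surface potential and concentration are assumed values:

```python
import math

def debye_length(n0, T=298.15, eps_r=80.0):
    """Debye screening length (m) for a symmetric 1:1 electrolyte
    with bulk ion number density n0 (ions per m^3)."""
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    kB = 1.381e-23     # Boltzmann constant, J/K
    e = 1.602e-19      # elementary charge, C
    return math.sqrt(eps_r * eps0 * kB * T / (2.0 * n0 * e**2))

def potential_profile(x, L, phi_s, lam):
    """Linearized (Debye-Hueckel) potential at position x in a slit
    channel of width L, with both walls held at surface potential phi_s.
    When L >> lam the two wall layers decouple; when L ~ lam they overlap."""
    return phi_s * math.cosh((x - L / 2.0) / lam) / math.cosh(L / (2.0 * lam))

# a 1 mM monovalent electrolyte gives a Debye length of roughly 10 nm
lam = debye_length(6.022e23)
```

For a 100 nm channel the midpoint potential is strongly screened; shrinking the channel toward the Debye length raises the midpoint potential, which is the overlap effect described in the abstract.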
Gubbiotti, G.; Silvani, R.; Tacchi, S.; Madami, M.; Carlotti, G.; Yang, Z.; Adeyeye, A. O.; Kostylev, M.
2017-03-01
We have investigated both experimentally and numerically the magnonic band structure of arrays of closely spaced Fe/permalloy nanowires (NWs) with an L-shape cross-section using the Brillouin light scattering technique and GPU-based micromagnetic simulations. The NWs consist of a 340 nm wide and 10 nm thick permalloy layer covered by a 170 nm wide Fe overlayer. The thickness of the latter was varied in the range from 0 to 10 nm in order to analyze its influence on the magnonic band structure. We found that both the frequency and the spatial profile of the most intense and dispersive mode can be efficiently tuned by the presence of the thin Fe NW overlayer. In particular, by increasing the Fe thickness, one observes a substantial frequency increase, while the spatial profile of the mode narrows and moves to the permalloy NW portion not covered by Fe. In addition, the presence of the Fe overlayer causes a significant increase of the number of detected modes and a change of their intensity in the Brillouin spectra as a function of the Bloch wave number. These results show that it is possible to engineer the band structure of magnonic crystals consisting of bi-layered, L-shaped NWs by a careful control of the overlayer thickness.
Pérez-Pardal, L; Grizelj, J; Traoré, A; Cubric-Curik, V; Arsenos, G; Dovenski, T; Marković, B; Fernández, I; Cuervo, M; Alvarez, I; Beja-Pereira, A; Curik, I; Goyache, F
2014-02-01
A total of 132 mtDNA sequences from 10 Balkan donkey populations were analysed to ascertain their regional genetic structure and to contribute to the knowledge of the spread of the species after domestication. The Balkan donkey sequences were compared with those from 40 Burkina Faso donkeys, used as an African outgroup, to account for possible local Balkan scenarios. The 172 sequences gave 62 different haplotypes (55 in the Balkan donkey). Virtually all the analysed populations had haplotypes assigned to either Clade 1 or Clade 2, even though the relative proportion of Clade 1 or 2 haplotypes differed across populations. Geographical maps constructed using factors computed via principal component analysis showed that the Balkan donkey populations are not spatially structured, and AMOVA confirmed a lack of genetic structure in Balkan donkey mtDNA. Balkan populations were poorly differentiated (ΦST = 0.071), and differentiation between the Balkan donkey and the African outgroup also was low. The lack of correspondence between geographical areas and maternal genetic structure is consistent with the hypothesis of a very quick spread of the species after domestication. The current research illustrates the difficulty of tracing routes of expansion in the donkey, as the species shows no geographical structure.
Minomo, Kosho
2016-01-01
We analyze the $\alpha$-$^{12}$C inelastic scattering to the $0^+_2$ state of $^{12}$C, the Hoyle state, in a fully microscopic framework. With no free adjustable parameter, the inelastic cross sections at forward angles are well reproduced by the microscopic reaction calculation using the transition density of $^{12}$C obtained by the resonating group method and the nucleon-nucleon $g$-matrix interaction developed by the Melbourne group. It is thus shown that the monopole transition strength obtained by the structure calculation is consistent with that extracted from the reaction observable, suggesting no missing monopole strength of the Hoyle state.
Energy Technology Data Exchange (ETDEWEB)
Ming, Y; Ramaswamy, V; Donner, L J; Phillips, V T; Klein, S A; Ginoux, P A; Horowitz, L H
2005-05-02
This paper describes a self-consistent prognostic cloud scheme that is able to predict cloud liquid water, cloud amount, and droplet number (N{sub d}) from the same updraft velocity field, and that is suitable for modeling aerosol-cloud interactions in general circulation models (GCMs). In the scheme, the evolution of droplets fully interacts with the model meteorology. An explicit treatment of cloud condensation nuclei (CCN) activation allows the scheme to take into account the contributions to N{sub d} of multiple types of aerosol (i.e., sulfate, organic, and sea-salt aerosols) and the kinetic limitations of the activation process. An implementation of the prognostic scheme in the Geophysical Fluid Dynamics Laboratory (GFDL) AM2 GCM yields a vertical distribution of N{sub d} characterized by maxima in the lower troposphere, differing from that obtained by diagnosing N{sub d} empirically from sulfate mass concentrations. As a result, the agreement of model-predicted present-day cloud parameters with satellite measurements is improved compared to using diagnosed N{sub d}. The simulations with pre-industrial and present-day aerosols show that the combined first and second indirect effects of anthropogenic sulfate and organic aerosols give rise to a global annual mean flux change of -1.8 W m{sup -2}, consisting of -2.0 W m{sup -2} in shortwave and 0.2 W m{sup -2} in longwave, as the model response alters the cloud field and, subsequently, the longwave radiation. Liquid water path (LWP) and total cloud amount increase by 19% and 0.6%, respectively. Largely owing to high sulfate concentrations from fossil fuel burning, the Northern Hemisphere mid-latitude land and oceans experience strong cooling, as does the tropical land dominated by biomass-burning organic aerosol. The Northern/Southern Hemisphere and land/ocean ratios are 3.1 and 1.4, respectively. The calculated annual zonal mean flux changes are determined to be statistically significant, exceeding the model's natural
Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.
2003-01-01
In this paper we analyze requirements for a tool that supports the integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...
Using open sidewalls for modelling self-consistent lithosphere subduction dynamics
Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.
2012-01-01
Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free
Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.
2009-01-01
This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…
Self-consistent tight-binding model of B and N doping in graphene
DEFF Research Database (Denmark)
Pedersen, Thomas Garm; Pedersen, Jesper Goor
2013-01-01
Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization...
Directory of Open Access Journals (Sweden)
Susana De La Ossa
2009-03-01
Background: The Zung’s rating instrument for anxiety disorders has been used in several Colombian studies, but its internal consistency and factor structure have not been reported among university students. Objective: To calculate the internal consistency and explore the factor structure of three versions of the Zung’s rating instrument for anxiety disorders among university students. Method: Two hundred and twenty-one medicine and psychology students of a private university in Cartagena, Colombia, completed the 20-item version of the Zung’s rating instrument for anxiety disorders. The mean age of the students was 20.5 years (SD = 2.6); 64.4% were women, and 54.3% studied medicine. Cronbach's alpha was computed and exploratory factor analysis was performed for the three versions. Results: The 20-item version of the Zung’s rating instrument for anxiety disorders showed a Cronbach's alpha of 0.77, with three principal factors accounting for 40.1% of the total variance. The 10-item version showed a Cronbach's alpha of 0.83 and a two-dimensional structure responsible for 54.0% of the total variance. The 5-item version showed a Cronbach's alpha of 0.74 and a one-dimensional structure accounting for 49.5% of the total variance. Conclusions: The 10- and 5-item versions of the Zung’s rating instrument for anxiety disorders present better psychometric properties than the original 20-item version. It is necessary to evaluate the properties of these versions against a gold standard.
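Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from item-level scores. A minimal sketch, with a hypothetical data layout (one score list per item, not the study's dataset):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: list of k lists, each holding the scores of one item
    across the same n respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(var(col) for col in items)
    return k / (k - 1) * (1.0 - item_var_sum / var(totals))
```

Identical item columns yield alpha = 1; weakly correlated items push alpha down, which is why shortened versions of a scale can show a higher alpha than the full version when the dropped items correlate poorly with the rest.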
Oscillating water column structural model
Energy Technology Data Exchange (ETDEWEB)
Copeland, Guild [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jepsen, Richard Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Margaret Ellen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-09-01
An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB) are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.
Directory of Open Access Journals (Sweden)
Roy E Barnewall
2012-06-01
Repeated low-level exposures to Bacillus anthracis could occur before or after the remediation of an environmental release. This is especially true for persistent agents such as Bacillus anthracis spores, the causative agent of anthrax. Studies were conducted to examine the aerosol methods needed to deliver consistent daily low aerosol concentrations and a low dose (less than 10^6 colony-forming units, CFU) of B. anthracis spores; they included a pilot feasibility characterization study, an acute exposure study, and a multiple fifteen-day exposure study. This manuscript focuses on the state-of-the-science aerosol methodologies used to generate consistent daily low aerosol concentrations and the resultant low inhalation doses. The pilot feasibility characterization study determined that the aerosol system was consistent and capable of producing very low aerosol concentrations. In the acute, single-day exposure experiment, targeted inhaled doses of 1 × 10^2, 1 × 10^3, 1 × 10^4, and 1 × 10^5 CFU were used. In the multiple daily exposure experiment, rabbits were exposed over multiple days to targeted inhaled doses of 1 × 10^2, 1 × 10^3, and 1 × 10^4 CFU. In all studies, targeted inhaled doses remained fairly consistent from rabbit to rabbit and day to day. The aerosol system produced aerosolized spores within the optimal mass median aerodynamic diameter particle size range to reach the deep lung alveoli. Consistency of the inhaled dose was aided by monitoring and recording respiratory parameters during the exposure with real-time plethysmography. Overall, the presented results show that the animal aerosol system was stable and highly reproducible between different studies and across multiple exposure days.
A Delay Model of Multiple-Valued Logic Circuits Consisting of Min, Max, and Literal Operations
Takagi, Noboru
Various delay models for binary logic circuits have been proposed and their mathematical properties clarified. Kleene's ternary logic is one of the simplest delay models able to express the transient behavior of binary logic circuits; Goto first applied it to hazard detection in binary logic circuits in 1948. Besides Kleene's ternary logic, there are many other delay models for binary logic circuits, such as Lewis's 5-valued logic. On the other hand, multiple-valued logic circuits have recently come to play an important role in realizing digital circuits, because, for example, they can dramatically reduce the size of a chip. Although multiple-valued logic circuits are becoming more important, there has been little discussion of delay models for them. In this paper, we introduce a delay model of multiple-valued logic circuits constructed from Min, Max, and Literal operations, and we show some of the mathematical properties of our delay model.
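The Min, Max, and Literal operations named above form the standard functionally complete algebra of multiple-valued logic. A minimal sketch over r = 4 truth values follows; it illustrates the static operations only, not the paper's delay semantics:

```python
R = 4  # truth values are {0, 1, 2, 3}; R - 1 is "true", 0 is "false"

def MIN(x, y):
    """Multiple-valued AND: the smaller of the two truth values."""
    return min(x, y)

def MAX(x, y):
    """Multiple-valued OR: the larger of the two truth values."""
    return max(x, y)

def literal(x, a, b):
    """Window literal x^a_b: outputs the top value R-1 when a <= x <= b,
    and the bottom value 0 otherwise."""
    return R - 1 if a <= x <= b else 0

# a sum-of-products style expression: f(x, y) = MAX(MIN(lit, y), ...)
def f(x, y):
    return MAX(MIN(literal(x, 1, 2), y), MIN(literal(x, 3, 3), R - 1 - y))
```

Circuits built from these three primitives can realize any r-valued function, which is why they are a natural setting for studying hazards and transient behavior in multiple-valued circuits.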
Physically-consistent wall boundary conditions for the k-ω turbulence model
DEFF Research Database (Denmark)
Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl
2010-01-01
A model solving the Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used for model validation, with k measured directly from all three components of the fluc...
Probabilistic Modeling of Timber Structures
DEFF Research Database (Denmark)
Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro
2005-01-01
The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present probabilistic model for these basic properties is presented, and possible refinements are given relating to the updating of the probabilistic model given new information, the modeling of the spatial variation of strength properties, and the duration of load effects.
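Probabilistic models of the JCSS type describe a timber strength property as a random variable and derive characteristic values from its distribution. As an illustration of the kind of computation involved, the sketch below derives a 5th-percentile characteristic value from an assumed lognormal distribution; the mean and coefficient of variation are hypothetical, not values from the PMC:

```python
import math

def lognormal_5th_percentile(mean, cov):
    """5th-percentile characteristic value of a lognormally distributed
    material property, given its mean and coefficient of variation.
    Uses the standard moment relations for the lognormal distribution."""
    sigma_ln = math.sqrt(math.log(1.0 + cov**2))      # std dev of ln(X)
    mu_ln = math.log(mean) - 0.5 * sigma_ln**2        # mean of ln(X)
    z05 = -1.6449                                      # standard normal 5% quantile
    return math.exp(mu_ln + z05 * sigma_ln)

# hypothetical bending strength: mean 30 MPa, CoV 25%
f_k = lognormal_5th_percentile(30.0, 0.25)
```

The characteristic value always falls below the mean; a larger coefficient of variation (i.e., more scatter in the material) drives it lower, which is the mechanism by which probabilistic material models penalize variability.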
Woitke, P.; Min, M.; Pinte, C.; Thi, W. -F; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.
2016-01-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on
O. Fovet; L. Ruiz; M. Hrachowitz; M. Faucheux; C. Gascuel-Odoux
2015-01-01
While most hydrological models reproduce the general flow dynamics, they frequently fail to adequately mimic system-internal processes. In particular, the relationship between storage and discharge, which often follows annual hysteretic patterns in shallow hard-rock aquifers, is rarely considered in modelling studies. One main reason is that catchment storage is...
Vertical Equating: An Empirical Study of the Consistency of Thurstone and Rasch Model Approaches.
Schratz, Mary K.
To explore the appropriateness of the Rasch model for the vertical equating of a multi-level, multi-form achievement test series, both the Rasch model and the traditional Thurstone procedures were applied to the Listening Comprehension subtest scores of the Stanford Achievement Test. Two adjacent levels of these tests were administered in 1981 to…
CONSISTENT USE OF THE KALMAN FILTER IN CHEMICAL TRANSPORT MODELS (CTMS) FOR DEDUCING EMISSIONS
Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...
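The inverse-modeling idea above, estimating an emission rate from observed concentrations through a model-derived sensitivity, can be sketched with a scalar Kalman filter. The sensitivity h and the synthetic observations below are assumptions for illustration, not values from the study:

```python
def kalman_update(x, P, z, h, R):
    """One scalar Kalman update for an emission estimate x with
    variance P, given an observation z = h * x + v, v ~ N(0, R).
    h plays the role of the CTM-derived sensitivity of the observed
    concentration to the emission rate."""
    K = P * h / (h * h * P + R)       # Kalman gain
    x_new = x + K * (z - h * x)       # correct estimate toward observation
    P_new = (1.0 - K * h) * P         # shrink uncertainty
    return x_new, P_new

# assimilate synthetic concentration observations of a true emission rate
true_E, h, R = 5.0, 2.0, 0.1
x, P = 0.0, 100.0                     # vague prior on the emission rate
for z in [9.9, 10.1, 10.05, 9.95]:    # observations scattered about h * true_E
    x, P = kalman_update(x, P, z, h, R)
```

After a few updates the estimate converges toward the true emission rate while the posterior variance shrinks, which is the mechanism by which the filter "deduces" emissions from concentration data.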
DEFF Research Database (Denmark)
Keck, Rolf-Erik; Veldkamp, Dick; Wedel-Heinen, Jens Jakob
This thesis describes the further development and validation of the dynamic wake meandering (DWM) model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling capability...... intensity. This power drop is comparable to measurements from the North Hoyle and OWEZ wind farms.
Structured population models in biology and epidemiology
Ruan, Shigui
2008-01-01
This book consists of six chapters written by leading researchers in mathematical biology. These chapters present recent and important developments in the study of structured population models in biology and epidemiology. Topics include population models structured by age, size, and spatial position; size-structured models for metapopulations, macroparasitic diseases, and prion proliferation; models for transmission of microparasites between host populations living on non-coincident spatial domains; spatiotemporal patterns of disease spread; the method of aggregation of variables in population dynamics; and biofilm models. It is suitable as a textbook for a mathematical biology course or a summer school at the advanced undergraduate and graduate level. It can also serve as a reference book for researchers looking for either interesting and specific problems to work on or useful techniques and discussions of some particular problems.
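Age-structured models of the kind covered in the book are often introduced via the Leslie matrix projection, in which each age class contributes births and survives into the next class. A minimal sketch with hypothetical fecundity and survival rates:

```python
def project(ages, fecundity, survival, steps=1):
    """Project an age-structured population forward with a Leslie model.
    ages: current counts per age class.
    fecundity[i]: offspring produced per individual of class i.
    survival[i]: fraction of class i surviving into class i + 1."""
    n = list(ages)
    for _ in range(steps):
        births = sum(f * x for f, x in zip(fecundity, n))
        # newborns enter class 0; the rest age one class, the oldest die
        n = [births] + [s * x for s, x in zip(survival, n[:-1])]
    return n

# three age classes, only the older two reproduce (hypothetical rates)
next_gen = project([10, 10, 10], fecundity=[0, 1, 2], survival=[0.5, 0.5])
```

Iterating the projection reveals the long-run growth rate and stable age distribution (the dominant eigenvalue and eigenvector of the Leslie matrix), which is the starting point for the more general size- and space-structured models the book develops.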
Jacques, Kevin; Sabariego, Ruth,; Geuzaine, Christophe; GYSELINCK Johan
2015-01-01
This paper deals with the implementation of an energy-consistent ferromagnetic hysteresis model in 2D finite element computations. This vector hysteresis model relies on a strong thermodynamic foundation and ensures the closure of minor hysteresis loops. The model accuracy can be increased by controlling the number of intrinsic cell components, while the parameters can be easily fitted to common material measurements. Here, the native h-based material model is inverted using the Newton-Raphson met...
SPAR Model Structural Efficiencies
Energy Technology Data Exchange (ETDEWEB)
John Schroeder; Dan Henry
2013-04-01
The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC’s Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of a standard approach for emergency core cooling following containment failure Some of the related issues were not fully resolved, and this project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other, higher-priority initiatives to support. This project has therefore addressed SPAR modeling issues: • SPAR model transparency • Common cause failure modeling deficiencies and approaches • AC and DC modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches
Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency
Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.
2013-09-01
A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is now mandatory in the production chain. Automated domain-specific tools are widely used for the validation of business-critical data, but common standards defining correct geometric modelling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established, from data acquisition to processing, analysis and visualization, quality management is not yet a standard part of this workflow. Processing data sets with unclear specification leads to erroneous results and application defects; we show that this problem persists even if the data are standard compliant. Validation results for real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.
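Geometric validation of the kind described typically starts from simple per-ring checks on the polygons that bound a building (closedness, minimal vertex count, no degenerate points). The sketch below is illustrative only; it is not a CityGML-schema validator and the check set is an assumption:

```python
def check_ring(ring, tol=1e-9):
    """Basic geometric sanity checks for a polygon ring given as a list
    of (x, y, z) vertices: closedness, minimal vertex count, and no
    consecutive duplicate points. Returns a list of error strings."""
    errors = []
    if len(ring) < 4:
        errors.append("ring has fewer than 4 points (3 distinct + closure)")
    if ring and max(abs(a - b) for a, b in zip(ring[0], ring[-1])) > tol:
        errors.append("ring is not closed (first point != last point)")
    for p, q in zip(ring, ring[1:]):
        if max(abs(a - b) for a, b in zip(p, q)) <= tol:
            errors.append(f"duplicate consecutive point at {p}")
    return errors

# a valid triangular ring: three distinct points plus the closing point
ok = check_ring([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 0, 0)])
```

A real validator would add planarity, self-intersection, and orientation checks per the relevant geometry standard; automated healing then amounts to applying the inverse fix for each detected error class (e.g., appending the closing point).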
Energy Technology Data Exchange (ETDEWEB)
Keck, R.-E.
2013-07-15
This thesis describes the further development and validation of the dynamic wake meandering model for simulating the flow field and power production of wind farms operating in the atmospheric boundary layer (ABL). The overall objective of the conducted research is to improve the modelling capability of the dynamic wake meandering model to a level where it is sufficiently mature to be applied in industrial applications and for an augmentation of the IEC standard for wind turbine wake modelling. Based on a comparison of the capabilities of the dynamic wake meandering model with the requirements of the wind industry, four areas were identified as high priorities for further research: 1. the turbulence distribution in a single wake; 2. multiple wake deficits and the build-up of turbulence over a row of turbines; 3. the effect of the atmospheric boundary layer on wake turbulence and wake deficit evolution; 4. atmospheric stability effects on wake deficit evolution and meandering. The conducted research is to a large extent based on detailed wake investigations and reference data generated through computational fluid dynamics simulations, in which the wind turbine rotor has been represented by an actuator line model. As a consequence, part of the research also targets the performance of the actuator line model when generating wind turbine wakes in the atmospheric boundary layer. Highlights of the conducted research: 1. A description is given of using the dynamic wake meandering model as a standalone flow solver for the velocity and turbulence distribution and the power production in a wind farm. The performance of the standalone implementation is validated against field data, higher-order computational fluid dynamics models, and the most common engineering wake models in the wind industry. 2. The EllipSys3D actuator line model, including the synthetic methods used to model atmospheric boundary layer shear and turbulence, is verified for modelling the evolution of wind
Energy Technology Data Exchange (ETDEWEB)
Batista, Enrique R [Los Alamos National Laboratory; Sproviero, Eduardo M [YALE UNIV; Newcomer, Michael [YALE UNIV; Gascon, Jose A [YALE UNIV; Batista, Victor S [YALE UNIV
2008-01-01
The combination of quantum mechanics and molecular mechanics (QM/MM) is one of the most promising approaches to study the structure, function, and properties of proteins and nucleic acids. However, there are instances in which the limitations of either the MM method (lack of a proper electronic description) or the QM method (limited to a small number of atoms) prevent a proper description of the system. To address this issue, we review here our approach to fine-tune the structure of biological systems using post-QM/MM refinements. These protocols are based on spectroscopic data and/or a partitioning of the system that extends the QM description to a larger region of a protein. We illustrate these methodologies through applications to several biomolecules, which were pre-optimized at the QM/MM level and then further refined using post-QM/MM refinement methodologies: mod(QM/MM), which refines the atomic charges of the residues included in the MM region to account for polarization effects; mod(QM/MM)-opt, which partitions the MM region into smaller parts and optimizes each part in an iterative, self-consistent way; and the Polarized Extended X-Ray Absorption Fine Structure (P-EXAFS) fitting procedure, which fine-tunes the atomic coordinates to reproduce experimental polarized EXAFS spectra. The first two techniques were applied to the guanine quadruplex, while the P-EXAFS refinement was applied to the oxygen-evolving complex of photosystem II.
The Twente lower extremity model : consistent dynamic simulation of the human locomotor apparatus
Klein Horsman, Martijn Dirk
2007-01-01
Orthopedic interventions such as tendon transfers have been shown to be successful in the treatment of gait disorders. Still, in many cases dysfunctions remained or worsened. To assist clinicians, an interactive tool would be useful that allows evaluation of if-then scenarios with respect to treatment methods. Comprehensive musculoskeletal models have shown a high potential to serve as such a tool. By varying anatomical model parameters, alterations in anatomy due to surgery can be implemented. Inv...
Toward a self-consistent, high-resolution absolute plate motion model for the Pacific
Wessel, Paul; Harada, Yasushi; Kroenke, Loren W.
2006-03-01
The hot spot hypothesis postulates that linear volcanic trails form as lithospheric plates move relative to stationary or slowly moving plumes. Given geometry and ages from several trails, one can reconstruct absolute plate motions (APM) that provide valuable information about past and present tectonism, paleogeography, and volcanism. Most APM models have been designed by fitting small circles to coeval volcanic chain segments and determining stage rotation poles, opening angles, and time intervals. Unlike relative plate motion (RPM) models, such APM models suffer from oversimplification, self-inconsistencies, inadequate fits to data, and a lack of rigorous uncertainty estimates; in addition, they work only for fixed hot spots. Newer methods are now available that overcome many of these limitations. We present a technique that provides high-resolution APM models derived from stationary or moving hot spots (given prescribed paths). The simplest model assumes stationary hot spots, and an example of such a model is presented. Observations of geometry and chronology on the Pacific plate appear well explained by this type of model. Because it is a one-plate model, it does not discriminate between hot spot drift and true polar wander as explanations for inferred paleolatitudes from the Emperor chain. Whether there was significant relative motion among the hot spots under the Pacific plate during the last ˜70 m.y. is difficult to quantify, given the paucity and geological uncertainty of age determinations. Evidence in support of plume drift appears limited to the period before the 47 Ma Hawaii-Emperor Bend and, apart from the direct paleolatitude determinations, may have been somewhat exaggerated.
2012-06-13
Daily low-dose Bacillus anthracis spore inhalation exposures in the rabbit model. Roy E. Barnewall, Jason E. Comer, Brian D. Miller, Bradford W... multiple exposure days. Keywords: Bacillus anthracis, inhalation exposures, low-dose, subchronic exposures, spores, anthrax, aerosol system
Hachem, Walid; Mestre, Xavier; Najim, Jamal; Vallet, Pascal
2011-01-01
In array processing, a common problem is to estimate the angles of arrival of $K$ deterministic sources impinging on an array of $M$ antennas, from $N$ observations of the source signal corrupted by Gaussian noise. The problem reduces to estimating a quadratic form (called the "localization function") of a certain projection matrix related to the source signal empirical covariance matrix. Recently, a new subspace estimation method (called "G-MUSIC") has been proposed for the context where the number of available samples $N$ is of the same order of magnitude as the number of sensors $M$. In this context, the traditional subspace methods tend to fail because the empirical covariance matrix of the observations is a poor estimate of the source signal covariance matrix. The G-MUSIC method is based on a new consistent estimator of the localization function in the regime where $M$ and $N$ tend to $+\\infty$ at the same rate. However, the consistency of the angle estimator was not addressed. The purpose of this paper is ...
Self-consistent modeling of entangled network strands and dangling ends
DEFF Research Database (Denmark)
Jensen, Mette Krog; Schieber, Jay D.; Khaliullin, Renat N.;
2009-01-01
of dangling ends and soluble structures. Energy dissipation is increased by adding a fraction of dangling ends, wDE, to the ensemble. We find that when wDE=0.6, G0 is about 75% lower than GN0, which suggests that the fraction of network strands, wNS=1-wDE, largely influences the plateau value at low...... frequencies. Soluble strands can also be added to the theory, which is expected to increase energy dissipation further....
Woitke, P; Pinte, C; Thi, W -F; Kamp, I; Rab, C; Anthonioz, F; Antonellini, S; Baldovin-Saavedra, C; Carmona, A; Dominik, C; Dionatos, O; Greaves, J; Güdel, M; Ilee, J D; Liebhart, A; Ménard, F; Rigon, L; Waters, L B F M; Aresu, G; Meijerink, R; Spaans, M
2015-01-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. We propose new standard dust opacities for disk models, we present a simplified treatment of PAHs sufficient to reproduce the PAH emission features, and we suggest using a simple treatment of dust settling. We roughly adjust parameters to obtain a model that predicts typical Class II T Tauri star continuum and line observations. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63um, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties (large grains...
Shell Effect of Superheavy Nuclei in Self-consistent Mean-Field Models
Institute of Scientific and Technical Information of China (English)
REN Zhong-Zhou; TAI Fei; XU Chang; CHEN Ding-Han; ZHANG Hu-Yong; CAI Xiang-Zhou; SHEN Wen-Qing
2004-01-01
We analyze in detail the numerical results of superheavy nuclei in the deformed relativistic mean-field model and the deformed Skyrme-Hartree-Fock model. The common points and differences of both models are systematically compared and discussed. Their consequences for the stability of superheavy nuclei are explored and explained. The theoretical results are compared with new data on superheavy nuclei from GSI and from Dubna, and reasonable agreement is reached. The nuclear shell effect in the superheavy region is analyzed and discussed. The spherical shell effect disappears in some cases due to the appearance of deformation or superdeformation in the ground states of nuclei, where valence nucleons significantly occupy the intruder levels of nuclei. It is shown for the first time that the significant occupation of valence nucleons on the intruder states plays an important role for the ground-state properties of superheavy nuclei. Nuclei are stable in the deformed or superdeformed configurations. We further point out that one cannot obtain the octupole deformation of even-even nuclei in the present relativistic mean-field model with the σ, ω and ρ mesons, because there is no parity-violating interaction and the conservation of parity of even-even nuclei is a basic assumption of the present relativistic mean-field model.
Consistent treatment of viscoelastic effects at junctions in one-dimensional blood flow models
Müller, Lucas O.; Leugering, Günter; Blanco, Pablo J.
2016-06-01
While the numerical discretization of one-dimensional blood flow models for vessels with viscoelastic wall properties is widely established, there is still no clear approach on how to couple one-dimensional segments that compose a network of viscoelastic vessels. In particular for Voigt-type viscoelastic models, assumptions with regard to boundary conditions have to be made, which normally result in neglecting the viscoelastic effect at the edge of vessels. Here we propose a coupling strategy that takes advantage of a hyperbolic reformulation of the original model and the inherent information of the resulting system. We show that applying proper coupling conditions is fundamental for preserving the physical coherence and numerical accuracy of the solution in both academic and physiologically relevant cases.
A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.
Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N
2014-01-01
Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.
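The hybrid scheme described above can be illustrated with a minimal sketch (all table and column names here are hypothetical, not taken from the CURe project): stable, frequently queried fields live in a conventional relational table, while volatile study-specific fields go into an Entity-Attribute-Value table, so adding a new attribute needs no schema change.

```python
import sqlite3

# Minimal sketch of a hybrid EAV-relational schema (hypothetical names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (               -- stable relational core
    patient_id INTEGER PRIMARY KEY,
    birth_year INTEGER,
    sex        TEXT
);
CREATE TABLE observation (           -- EAV part: one row per attribute value
    patient_id INTEGER REFERENCES patient(patient_id),
    attribute  TEXT,
    value      TEXT
);
""")

conn.execute("INSERT INTO patient VALUES (1, 1970, 'F')")
# Adding a new study-specific attribute requires no ALTER TABLE:
conn.executemany("INSERT INTO observation VALUES (?, ?, ?)",
                 [(1, "hba1c", "6.1"), (1, "smoking_status", "never")])

# Pivot the EAV rows back into a wide result for analysis.
row = conn.execute("""
    SELECT p.patient_id,
           MAX(CASE WHEN o.attribute = 'hba1c' THEN o.value END) AS hba1c
    FROM patient p JOIN observation o USING (patient_id)
    GROUP BY p.patient_id
""").fetchone()
print(row)  # → (1, '6.1')
```

The trade-off visible even in this toy is the one the abstract addresses: the EAV table keeps the back-end schema static, while pivot queries restore the relational view needed for consistent analysis.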
Institute of Scientific and Technical Information of China (English)
Mohamed BALAH; Hamdan Naser AL-GHAMEDY
2004-01-01
The paper presents an approach for the formulation of general laminated shells based on a third-order shear deformation theory. These shells undergo finite (unlimited in size) rotations and large overall motions, but with small strains. A singularity-free parametrization of the rotation field is adopted. The constitutive equations, derived with respect to laminate curvilinear coordinates, are applicable to shell elements with an arbitrary number of orthotropic layers, where the material principal axes can vary from layer to layer. A careful consideration of the consistent linearization procedure pertinent to the proposed parametrization of finite rotations leads to symmetric tangent stiffness matrices. The matrix formulation adopted here makes it possible to implement the present formulation within the framework of the finite element method as a straightforward task.
Self-consistent theory for the built-in voltage in metal-organic semiconductor-metal structures
Energy Technology Data Exchange (ETDEWEB)
Peng Yingquan, E-mail: yqpeng@lzu.edu.cn [Laboratory of Semiconductor Devices and Engineering, Lanzhou University, Tian-Shui Road, Lanzhou 730000 (China); Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China); Meng Weimin [Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China); Wang Runsheng; Ma Chaozhu; Li Xunshuan; Xie Hongwei; Li Ronghua; Zhao Ming; Yuan Jianting; Wang Ying [Laboratory of Semiconductor Devices and Engineering, Lanzhou University, Tian-Shui Road, Lanzhou 730000 (China)
2009-06-30
A self-consistent theory for the calculation of the built-in voltage (U{sub bi}) of metal-organic semiconductor-metal (MOSM) structures is developed, based on a Gaussian energy distribution of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO). It is shown that the built-in voltage depends not only on the work function difference of the two electrodes, but also on the mean energy levels of the HOMO and LUMO, as well as on the Gaussian width of the energy distribution. The theory predicts that the spreading of the HOMO and LUMO levels results in an increase of U{sub bi}, and that U{sub bi} decreases with increasing temperature.
Ayadim, A; Amokrane, S
2010-01-27
The accuracy of the structural data obtained from the recently proposed generalization to non-additive hard-spheres (Schmidt 2004 J. Phys.: Condens. Matter 16 L351) of Rosenfeld's functional is investigated. The radial distribution functions computed from the direct correlation functions generated by the functional, through the Ornstein-Zernike equations, are compared with those obtained from the density profile equations in the test-particle limit, without and with test-particle consistency. The differences between these routes and the role of the optimization of the parameters of the reference system when the functional is used to obtain the reference bridge functional are discussed in the case of symmetric binary mixtures of non-additive hard-spheres. The case of highly asymmetric mixtures is finally briefly discussed.
Energy Technology Data Exchange (ETDEWEB)
Batista, Enrique R [Los Alamos National Laboratory; Newcomer, Micharel B [YALE UNIV; Raggin, Christina M [YALE UNIV; Gascon, Jose A [YALE UNIV; Loria, J Patrick [YALE UNIV; Batista, Victor S [YALE UNIV
2008-01-01
This paper generalizes the MoD-QM/MM hybrid method, developed for ab initio computations of protein electrostatic potentials [Gascón, J.A.; Leung, S.S.F.; Batista, E.R.; Batista, V.S. J. Chem. Theory Comput. 2006, 2, 175-186], as a practical algorithm for structural refinement of extended systems. The computational protocol involves a space-domain decomposition scheme for the formal fragmentation of extended systems into smaller, partially overlapping molecular domains, and the iterative self-consistent energy minimization of the constituent domains by relaxation of their geometry and electronic structure. The method accounts for mutual polarization of the molecular domains, modeled as quantum-mechanical (QM) layers embedded in the otherwise classical molecular-mechanics (MM) environment according to QM/MM hybrid methods. The method is applied to the description of benchmark model systems that allow for direct comparisons with full QM calculations, and subsequently applied to the structural characterization of the DNA Oxytricha nova guanine quadruplex (G4). The resulting MoD-QM/MM structural model of the DNA G4 is compared to recently reported high-resolution X-ray diffraction and NMR models, and partially validated by direct comparisons between {sup 1}H NMR chemical shifts, which are highly sensitive to hydrogen-bonding and stacking interactions, and the corresponding theoretical values obtained at the density functional theory (DFT) QM/MM (BH&H/6-31G*:Amber) level in conjunction with the gauge-independent atomic orbital (GIAO) method for the ab initio self-consistent field (SCF) calculation of NMR chemical shifts.
Application of a Mass-Consistent Wind Model to Chinook Windstorms
1988-06-01
Meteor., 6, 837-344. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1980: A practical method for estimating wind characteristics at... Project 8349, Menlo Park, CA, 94025. Endlich, R. M., F. L. Ludwig, C. M. Bhumralkar, and M. A. Estoque, 1982: A diagnostic model for estimating winds
Baraffe, I.; Alibert, Y.; Mera, D.; Chabrier, G.; Beaulieu, J.P.
1998-01-01
We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables (3
Hesse, Michael; Birn, Joachim; Schindler, Karl
1990-01-01
A self-consistent two-fluid theory that includes the magnetic field and shear patterns is developed to model stationary electrostatic structures with field-aligned potential drops. Shear flow is also included in the theory since this seems to be a prominent feature of the structures of interest. In addition, Ohmic dissipation, a Hall term, and pressure gradients in a generalized Ohm's law, modified for cases without quasi-neutrality, are included. In the analytic theory, the electrostatic force is balanced by field-aligned pressure gradients (i.e., thermal effects in the direction of the magnetic field) and by pressure gradients and magnetic stresses in the perpendicular direction. Within this theory, simple examples of applications are presented to demonstrate the kind of solutions resulting from the model. The results show how the effects of charge separation and shear in the magnetic field and the velocity can be combined to form self-consistent structures such as are found to exist above the aurora, suggested also in association with solar flares.
Models as coherent sign structures
Gazendam, H.W.M.; Jorna, R.J.J.M.; Gazendam, H.W.M.; Cijsouw, R.S.
2003-01-01
This chapter explains how models function as the glue that keeps organizations together. In an analysis of models from a semiotic and cognitive point of view, assumptions about evolutionary dynamics and bounded rationality are used. It is concluded that a model is a coherent sign structure,
Woitke, P.; Min, M.; Pinte, C.; Thi, W.-F.; Kamp, I.; Rab, C.; Anthonioz, F.; Antonellini, S.; Baldovin-Saavedra, C.; Carmona, A.; Dominik, C.; Dionatos, O.; Greaves, J.; Güdel, M.; Ilee, J. D.; Liebhart, A.; Ménard, F.; Rigon, L.; Waters, L. B. F. M.; Aresu, G.; Meijerink, R.; Spaans, M.
2016-02-01
We propose a set of standard assumptions for the modelling of Class II and III protoplanetary disks, which includes detailed continuum radiative transfer, thermo-chemical modelling of gas and ice, and line radiative transfer from optical to cm wavelengths. The first paper of this series focuses on the assumptions about the shape of the disk, the dust opacities, dust settling, and polycyclic aromatic hydrocarbons (PAHs). In particular, we propose new standard dust opacities for disk models, we present a simplified treatment of PAHs in radiative equilibrium which is sufficient to reproduce the PAH emission features, and we suggest using a simple yet physically justified treatment of dust settling. We roughly adjust parameters to obtain a model that predicts continuum and line observations that resemble typical multi-wavelength continuum and line observations of Class II T Tauri stars. We systematically study the impact of each model parameter (disk mass, disk extension and shape, dust settling, dust size and opacity, gas/dust ratio, etc.) on all mainstream continuum and line observables, in particular on the SED, mm-slope, continuum visibilities, and emission lines including [OI] 63 μm, high-J CO lines, (sub-)mm CO isotopologue lines, and CO fundamental ro-vibrational lines. We find that evolved dust properties, i.e. large grains, often needed to fit the SED, have important consequences for disk chemistry and heating/cooling balance, leading to stronger near- to far-IR emission lines in general. Strong dust settling and missing disk flaring have similar effects on continuum observations, but opposite effects on far-IR gas emission lines. PAH molecules can efficiently shield the gas from stellar UV radiation because of their strong absorption and negligible scattering opacities in comparison to evolved dust. The observable millimetre-slope of the SED can become significantly more gentle in the case of cold disk midplanes, which we find regularly in our T Tauri models
Genome scale models of yeast: towards standardized evaluation and consistent omic integration
DEFF Research Database (Denmark)
Sanchez, Benjamin J.; Nielsen, Jens
2015-01-01
Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published...... and are currently used for metabolic engineering and elucidating biological interactions. Here we review the history of yeast's GEMs, focusing on recent developments. We study how these models are typically evaluated, using both descriptive and predictive metrics. Additionally, we analyze the different ways...... in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted....
Advancing Nucleosynthesis in Self-consistent, Multidimensional Models of Core-Collapse Supernovae
Harris, J Austin; Chertkow, Merek A; Bruenn, Stephen W; Lentz, Eric J; Messer, O E Bronson; Mezzacappa, Anthony; Blondin, John M; Marronetti, Pedro; Yakunin, Konstantin N
2014-01-01
We investigate core-collapse supernova (CCSN) nucleosynthesis in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species $\\alpha$-network. Such a simplified network limits the ability to accurately evolve detailed composition, neutronization and the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks in post-processing nucleosynthesis calculations. Limitations such as poor spatial resolution of the tracer particles, estimation of the expansion timescales, and determination of the "mass-cut" at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of these uncertainties on post-processing nucleosynthesis calculations and implications for future models.
Thermodynamically consistent modeling for dissolution/growth of bubbles in an incompressible solvent
Bothe, Dieter
2014-01-01
We derive mathematical models of the elementary process of dissolution/growth of bubbles in a liquid under pressure control. The modeling starts with a fully compressible version, both for the liquid and the gas phase so that the entropy principle can be easily evaluated. This yields a full PDE system for a compressible two-phase fluid with mass transfer of the gaseous species. Then the passage to an incompressible solvent in the liquid phase is discussed, where a carefully chosen equation of state for the liquid mixture pressure allows for a limit in which the solvent density is constant. We finally provide a simplification of the PDE system in case of a dilute solution.
Directory of Open Access Journals (Sweden)
Sam Walcott
2015-11-01
Muscle contracts due to ATP-dependent interactions of myosin motors with thin filaments composed of the proteins actin, troponin, and tropomyosin. Contraction is initiated when calcium binds to troponin, which changes conformation and displaces tropomyosin, a filamentous protein that wraps around the actin filament, thereby exposing myosin binding sites on actin. Myosin motors interact with each other indirectly via tropomyosin, since myosin binding to actin locally displaces tropomyosin and thereby facilitates binding of nearby myosin. Defining and modeling this local coupling between myosin motors is an open problem in muscle modeling and, more broadly, a requirement for understanding the connection between muscle contraction at the molecular and macro scales. It is challenging to directly observe this coupling, and such measurements have only recently been made. Analysis of these data suggests that two myosin heads are required to activate the thin filament. This result contrasts with a theoretical model, which reproduces several indirect measurements of coupling between myosin heads, that assumes a single myosin head can activate the thin filament. To understand this apparent discrepancy, we incorporated the model into stochastic simulations of the experiments, which generated simulated data that were then analyzed identically to the experimental measurements. By varying a single parameter, good agreement between simulation and experiment was established. The conclusion that two myosin molecules are required to activate the thin filament arises from an assumption, made during data analysis, that the intensity of the fluorescent tags attached to myosin varies depending on experimental condition. We provide an alternative explanation that reconciles theory and experiment without assuming that the intensity of the fluorescent tags varies.
Self-Consistent, Axisymmetric Two-Integral Models of Elliptical Galaxies with Embedded Nuclear Discs
van den Bosch, P.P.J. (Paul); de Zeeuw, W.
1996-01-01
Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study th...
Energy regeneration model of self-consistent field of electron beams into electric power*
Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.
2016-04-01
We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation into the users' electric power grid (onboard spacecraft network). We perform computer simulations validating the high energy efficiency of the studied processes for application in electric power technology, as well as in electric power plants and propulsion installations onboard spacecraft.
Flood damage: a model for consistent, complete and multipurpose scenarios
Directory of Open Access Journals (Sweden)
S. Menoni
2016-12-01
implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
A consistent model for leptogenesis, dark matter and the IceCube signal
Energy Technology Data Exchange (ETDEWEB)
Fiorentin, M. Re [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Niro, V. [Departamento de Física Teórica, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Instituto de Física Teórica UAM/CSIC,Calle Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Fornengo, N. [Dipartimento di Fisica, Università di Torino,via P. Giuria, 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Torino,via P. Giuria, 1, 10125 Torino (Italy)
2016-11-04
We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance, and the ultra-energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino N{sub 1}, thus fixing its mass and lifetime, while the production of N{sub 1} in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional SU(2){sub R} interactions. The constraints imposed by IceCube and the dark matter abundance nonetheless allow the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the B−L asymmetry dominantly produced by the next-to-lightest neutrino N{sub 2}. Further consequences and predictions of the model are that the N{sub 1} production implies a specific power-law relation between the reheating temperature of the Universe and the vacuum expectation value of the SU(2){sub R} triplet, and that leptogenesis imposes a lower bound on the reheating temperature of the Universe of 7×10{sup 9} GeV. Additionally, the model requires a vanishing absolute neutrino mass scale, m{sub 1}≃0.
Consistent negative response of US crops to high temperatures in observations and crop models
Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja
2017-04-01
High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.
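The reported response can be caricatured numerically. The sketch below is a toy illustration under assumed numbers (the penalty value, threshold, and temperature series are hypothetical), not the formulation of any of the nine process-based crop models: it counts growing-season days above 30°C and applies the reported upper-bound penalty of 6% per hot day to rainfed yield.

```python
# Toy illustration of a hot-day yield penalty (assumed numbers, not any
# crop model's actual formulation): each day with Tmax > 30 degC reduces
# the remaining rainfed yield by a fixed fraction.
PENALTY_PER_HOT_DAY = 0.06   # reported upper bound: up to 6% per hot day
THRESHOLD_C = 30.0

def relative_yield(daily_tmax):
    """Multiplicative yield fraction remaining after hot-day penalties."""
    hot_days = sum(1 for t in daily_tmax if t > THRESHOLD_C)
    return (1.0 - PENALTY_PER_HOT_DAY) ** hot_days

season = [28.5, 31.0, 33.2, 29.9, 30.1, 27.0]   # hypothetical Tmax series
print(round(relative_yield(season), 3))  # 3 hot days: 0.94**3 ≈ 0.831
```

Under full irrigation the abstract reports only weak declines, which in this caricature would correspond to a penalty near zero.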
Jha, Sanjeev Kumar
2013-01-01
A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential to not only bridge issues of spatial resolution in regional and global climate model simulations but also in feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables with available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.
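The pattern-transfer idea at the heart of the MPS approach — learn coarse-to-fine pattern pairs from a training period, then transplant fine-scale patterns onto new coarse data — can be illustrated with a deliberately simplified sketch. This is a nearest-pattern lookup, not a full multiple-point geostatistical simulation, and the array sizes and 5× scale factor are illustrative rather than the paper's 50 km → 10 km setup:

```python
import numpy as np

def build_library(coarse, fine, patch=3, factor=5):
    """Collect (coarse neighbourhood pattern -> fine-scale block) pairs
    from a training image pair, mimicking an MPS training stage."""
    r = patch // 2
    h, w = coarse.shape
    pairs = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            key = coarse[i - r:i + r + 1, j - r:j + r + 1].ravel()
            block = fine[i * factor:(i + 1) * factor, j * factor:(j + 1) * factor]
            pairs.append((key, block))
    return pairs

def downscale(coarse_new, library, patch=3, factor=5):
    """For each coarse cell, paste the fine block whose training pattern
    best matches the local coarse neighbourhood (nearest pattern, L2)."""
    r = patch // 2
    h, w = coarse_new.shape
    keys = np.array([k for k, _ in library])
    out = np.zeros((h * factor, w * factor))
    for i in range(r, h - r):
        for j in range(r, w - r):
            key = coarse_new[i - r:i + r + 1, j - r:j + r + 1].ravel()
            best = int(np.argmin(((keys - key) ** 2).sum(axis=1)))
            out[i * factor:(i + 1) * factor, j * factor:(j + 1) * factor] = library[best][1]
    return out
```

A real MPS downscaler samples matching patterns stochastically, so repeated runs yield multiple realizations and hence the uncertainty estimates mentioned above; this deterministic lookup keeps only the pattern-transfer mechanism.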
Fioc, M; Fioc, Michel; Rocca-Volmerange, Brigitte
1999-01-01
We provide here the documentation of the new version of the spectral evolution model PEGASE. PEGASE computes synthetic spectra of galaxies in the UV to near-IR range from 0 to 20 Gyr, for a given stellar IMF and evolutionary scenario (star formation law, infall, galactic winds). The radiation emitted by stars from the main sequence to the pre-supernova or white dwarf stage is calculated, as well as the extinction by dust. A simple modeling of the nebular emission (continuum and lines) is also proposed. PEGASE may be used to model starbursts as well as old galaxies. The main improvements of PEGASE.2 relative to PEGASE.1 (Fioc & Rocca-Volmerange 1997) are the following: (1) The stellar evolutionary tracks of the Padova group for metallicities between 0.0001 and 0.1 have been included; (2) The evolution of the metallicity of the interstellar medium (ISM) due to SNII, SNIa and AGB stars is followed. Stars are formed with the same metallicity as the ISM (instead of a solar metallicity in PEGASE.1), providing thu...
The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture
Directory of Open Access Journals (Sweden)
Parna Kazemian
2014-07-01
Full Text Available The identification of the climate of a particular place and the analysis of the climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of the climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general examination of the obtained information, on the one hand, the range of changes is identified, and, on the other hand, their practical uses in the future are selected. In the present paper, the bioclimatic conditions of Bahar city, according to the 29-year statistics of the synoptic station between 1976 and 2005, were examined using the Olgyay and Mahoney indexes. It should be added that, because of the short distance between Bahar and Hamedan, they share a single synoptic station. The results indicate that Bahar city has dominantly cold weather during most months. Therefore, based on the implications of each method, the principles of the suggested architectural design can be integrated and improved in order to achieve sustainable development.
Caprarelli, G.; de Pablo Hernandez, M. A.
2014-12-01
The Martian region located immediately north of the dichotomy scarp, between latitudes 120°E and 135°E, is covered by fretted terrains, characterised by the presence of knobs and mesas formed by eroded and reworked material of highlands provenance, and the smoother terrains between them [1]. Topographic depressions of oblong shape, generally parallel to the scarp, of rough and chaotic appearance, are also observed. The high resolution (~ 6 m/pixel, [2]) Context Camera (CTX) on board Mars Reconnaissance Orbiter (MRO) makes it possible to examine the morphologies of these topographic depressions in great detail, unveiling their complex geological histories. Here we expand on our earlier work in the adjacent Nepenthes Mensae region [3] and present the results of our observations of morphologies of likely igneous origin. We identified a variety of shapes consistent with magmatic structures and constructs: dikes, collapsed lava tubes, and lava flows are observable in the smoother terrains. Most of the elevated structures in the areas are strongly eroded knobs and mesas covered by dust and debris. In some cases however, the morphological characteristics of 2-10 km-size structures are clear and sharp, which allowed us to identify features consistent with sub-ice volcanic constructs, such as tuyas and tindars [4]. Geological reconstructions involving magma-ice interaction are supported by the presence of lobate aprons around knobs and mesas, and of scalloped ejecta surrounding complex impact craters, suggesting the existence of ice both underground and on the surface of these low elevation areas at the time of formation of these constructs. [1] Tanaka et al. (2005) Geologic Map of the Northern Plains of Mars. USGS SIM 2888. [2] Malin et al. (2007) Context Camera investigation on board the Mars Reconnaissance Orbiter. JGR 112, E05S04, 10.1029/2006JE002808. [3] dePablo and Caprarelli (2010) Possible subglacial volcanoes in Nepenthes Mensae, eastern hemisphere, Mars. LPSC
Energy Technology Data Exchange (ETDEWEB)
Nekrasov, I. A., E-mail: nekrasov@iep.uran.ru; Pavlov, N. S.; Sadovskii, M. V. [Russian Academy of Sciences, Institute for Electrophysics, Ural Branch (Russian Federation)
2013-04-15
We discuss the recently proposed LDA' + DMFT approach providing a consistent parameter-free treatment of the so-called double counting problem arising within the LDA + DMFT hybrid computational method for realistic strongly correlated materials. In this approach, the local exchange-correlation portion of the electron-electron interaction is excluded from self-consistent LDA calculations for strongly correlated electronic shells, e.g., d-states of transition metal compounds. Then, the corresponding double-counting term in the LDA' + DMFT Hamiltonian is consistently set in the local Hartree (fully localized limit, FLL) form of the Hubbard model interaction term. We present the results of extensive LDA' + DMFT calculations of densities of states, spectral densities, and optical conductivity for most typical representatives of two wide classes of strongly correlated systems in the paramagnetic phase: charge transfer insulators (MnO, CoO, and NiO) and strongly correlated metals (SrVO3 and Sr2RuO4). It is shown that for NiO and CoO systems, the LDA' + DMFT approach qualitatively improves the conventional LDA + DMFT results with the FLL type of double counting, where CoO and NiO were obtained to be metals. Our calculations also include transition-metal 4s-states located near the Fermi level, missed in previous LDA + DMFT studies of these monoxides. General agreement with optical and X-ray experiments is obtained. For strongly correlated metals, the LDA' + DMFT results agree well with the earlier LDA + DMFT calculations and existing experiments. However, in general, LDA' + DMFT results give better quantitative agreement with experimental data for band gap sizes and oxygen-state positions compared to the conventional LDA + DMFT method.
Nekrasov, I. A.; Pavlov, N. S.; Sadovskii, M. V.
2013-04-01
We discuss the recently proposed LDA' + DMFT approach providing a consistent parameter-free treatment of the so-called double counting problem arising within the LDA + DMFT hybrid computational method for realistic strongly correlated materials. In this approach, the local exchange-correlation portion of the electron-electron interaction is excluded from self-consistent LDA calculations for strongly correlated electronic shells, e.g., d-states of transition metal compounds. Then, the corresponding double-counting term in the LDA' + DMFT Hamiltonian is consistently set in the local Hartree (fully localized limit, FLL) form of the Hubbard model interaction term. We present the results of extensive LDA' + DMFT calculations of densities of states, spectral densities, and optical conductivity for most typical representatives of two wide classes of strongly correlated systems in the paramagnetic phase: charge transfer insulators (MnO, CoO, and NiO) and strongly correlated metals (SrVO3 and Sr2RuO4). It is shown that for NiO and CoO systems, the LDA' + DMFT approach qualitatively improves the conventional LDA + DMFT results with the FLL type of double counting, where CoO and NiO were obtained to be metals. Our calculations also include transition-metal 4s-states located near the Fermi level, missed in previous LDA + DMFT studies of these monoxides. General agreement with optical and X-ray experiments is obtained. For strongly correlated metals, the LDA' + DMFT results agree well with the earlier LDA + DMFT calculations and existing experiments. However, in general, LDA' + DMFT results give better quantitative agreement with experimental data for band gap sizes and oxygen-state positions compared to the conventional LDA + DMFT method.
A self-consistent 3D model of fluctuations in the helium-ionizing background
Davies, Frederick B.; Furlanetto, Steven R.; Dixon, Keri L.
2017-03-01
Large variations in the effective optical depth of the He II Lyα forest have been observed at z ≳ 2.7, but the physical nature of these variations is uncertain: either the Universe is still undergoing the process of He II reionization, or the Universe is highly ionized but the He II-ionizing background fluctuates significantly on large scales. In an effort to build upon our understanding of the latter scenario, we present a novel model for the evolution of ionizing background fluctuations. Previous models have assumed the mean free path of ionizing photons to be spatially uniform, ignoring the dependence of that scale on the local ionization state of the intergalactic medium (IGM). This assumption is reasonable when the mean free path is large compared to the average distance between the primary sources of He II-ionizing photons, ≳ L⋆ quasars. However, when this is no longer the case, the background fluctuations become more severe, and an accurate description of the average propagation of ionizing photons through the IGM requires additionally accounting for the fluctuations in opacity. We demonstrate the importance of this effect by constructing 3D semi-analytic models of the helium-ionizing background from z = 2.5-3.5 that explicitly include a spatially varying mean free path of ionizing photons. The resulting distribution of effective optical depths at large scales in the He II Lyα forest is very similar to the latest observations with HST/COS at 2.5 ≲ z ≲ 3.5.
Consistency of different tropospheric models and mapping functions for precise GNSS processing
Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio
2017-04-01
The TOmographic Model of the IONospheric electron content (TOMION) software implements simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In a first scenario, only forward processing was activated and the coordinates of the Wide Area GNSS network were loosely constrained, without fixing the carrier phase ambiguities, for both reference and rover receivers. In a second one, precise point positioning (PPP) was implemented, iterating for a fixed coordinate set for the second day. Comparisons between TOMION, IGS and GIPSY estimates have been performed; for the first, IGS clocks and orbits were considered. The agreement with GIPSY results seems to be 10 times better than with the IGS final ZTD product, despite having considered IGS products for the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP, in the worst cases. Moreover, the Vienna mapping function in general showed better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm for all studied situations, with slightly better performance for the Niell one. Further improvement could be achieved by computing the Vienna mapping function coefficients from ray-tracing, as well as by integrating comparative meteorological parameters.
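The bias and RMS comparisons quoted above reduce to simple statistics of paired ZTD series. A minimal sketch, with entirely hypothetical numbers chosen only to mimic the centimetre-level scale of the comparison:

```python
import numpy as np

def ztd_agreement(ztd_test, ztd_ref):
    """Mean bias and RMS of the differences between two zenith tropospheric
    delay series given in metres (e.g. a real-time estimate vs. a reference)."""
    d = np.asarray(ztd_test) - np.asarray(ztd_ref)
    return d.mean(), np.sqrt((d ** 2).mean())

# Hypothetical 5-minute ZTD series over two days: a smooth reference and an
# estimate with a 7 mm offset plus 1 cm noise (values are invented).
rng = np.random.default_rng(1)
ref = 2.4 + 0.01 * np.sin(np.linspace(0.0, 4.0 * np.pi, 576))
est = ref + 0.007 + rng.normal(0.0, 0.01, ref.size)
bias, rms = ztd_agreement(est, ref)
```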
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, and some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the LIL for iid partial sums and thus cannot be improved.
Strong consistency of maximum quasi-likelihood estimates in generalized linear models
Institute of Scientific and Technical Information of China (English)
YIN Changming; ZHAO Lincheng
2005-01-01
In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors Zi and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^n ZiZ'i, a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution βn for all large sample sizes n, which converges to the true regression parameter β0. This result is an essential improvement over the relevant results in the literature.
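The quasi-likelihood equation both records refer to has the form ∑i Zi(yi − h(Zi'β)) = 0 for link function h. A Newton (Fisher-scoring) solver for a concrete scalar-response instance can be sketched as follows; the Poisson log-link choice (h = exp) is an illustrative special case, not the papers' general setting:

```python
import numpy as np

def solve_quasi_likelihood(Z, y, beta0, h=np.exp, dh=np.exp, iters=25):
    """Newton/Fisher-scoring iteration for the quasi-likelihood equation
    sum_i Z_i (y_i - h(Z_i' beta)) = 0, here with log link (h = dh = exp)."""
    beta = np.array(beta0, dtype=float)
    for _ in range(iters):
        eta = Z @ beta
        score = Z.T @ (y - h(eta))        # quasi-score vector
        W = dh(eta)                       # working weights h'(eta)
        info = Z.T @ (W[:, None] * Z)     # quasi-information matrix
        beta += np.linalg.solve(info, score)
    return beta
```

With simulated Poisson data the solution recovers the true regression parameter as the sample size grows, which is the consistency property the abstracts establish rigorously.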
A consistent model for πN transition distribution amplitudes and backward pion electroproduction
Lansberg, J P; Semenov-Tian-Shansky, K; Szymanowski, L
2011-01-01
The extension of the concept of generalized parton distributions leads to the introduction of baryon to meson transition distribution amplitudes (TDAs), non-diagonal matrix elements of the nonlocal three-quark operator between a nucleon and a meson state. We present a general framework for modelling nucleon to pion (πN) TDAs. Our main tool is the spectral representation for πN TDAs in terms of quadruple distributions. We propose a factorized Ansatz for quadruple distributions with input from the soft-pion theorem for πN TDAs. The spectral representation is complemented with a D-term like contribution from the nucleon exchange in the cross channel. We then study backward pion electroproduction in the QCD collinear factorization approach in which the non-perturbative part of the amplitude involves πN TDAs. Within our two-component model for πN TDAs we update previous leading-twist estimates of the unpolarized cross section. Finally, we compute the transverse target single spin asymmetry as a fu...
A consistent model for leptogenesis, dark matter and the IceCube signal
Fiorentin, M Re; Fornengo, N
2016-01-01
We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino $N_1$, thus fixing its mass and lifetime, while the production of $N_1$ in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional $SU(2)_R$ interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the $B-L$ asymmetry dominantly produced by the next-to-lightest neutrino $N_2$. Further consequences and predictions of the model are that: the $N_1$ production implies a specific power-law relation between the reheating temperature of the Universe and the va...
Thermal X-ray emission from a baryonic jet: a self-consistent multicolour spectral model
Khabibullin, Ildar; Sazonov, Sergey
2015-01-01
We present a publicly available spectral model for thermal X-ray emission from a baryonic jet in an X-ray binary system, inspired by the microquasar SS 433. The jet is assumed to be strongly collimated (half-opening angle Θ ~ 1°) and mildly relativistic (bulk velocity β = V_b/c ~ 0.03-0.3). Its X-ray spectrum is found by integrating over thin slices of constant temperature, radiating in the optically thin coronal regime. The temperature profile along the jet and the corresponding differential emission measure distribution are calculated with full account for gas cooling due to expansion and radiative losses. Since the model predicts both the spectral shape and luminosity of the jet's emission, its normalisation is not a free parameter if the source distance is known. We also explore the possibility of using simple X-ray observables (such as flux ratios in different energy bands) to constrain physical parameters of the jet (e.g. gas temperature and density at its base) without broad-band fitting of...
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
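One common way to write such a thermodynamically revised Monod rate law is to multiply the kinetic (substrate-saturation) factor by a thermodynamic potential factor that vanishes as the catabolic reaction approaches equilibrium. The sketch below uses that general functional form with entirely illustrative parameter values — the constants are assumptions for demonstration, not those of the Methanosarcina study:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def revised_monod(S, k=1e-4, K=1e-3, X=1.0,
                  dG_r=-25e3, dG_ATP=45e3, m=0.5, chi=2, T=298.15):
    """Respiration rate (illustrative units) as kinetic factor times
    thermodynamic factor F_T = 1 - exp((dG_r + m*dG_ATP)/(chi*R*T)).
    F_T -> 0 near equilibrium, throttling substrate uptake."""
    F_K = S / (K + S)                                     # Monod kinetic factor
    F_T = 1.0 - np.exp((dG_r + m * dG_ATP) / (chi * R * T))  # thermodynamic factor
    return k * X * F_K * max(F_T, 0.0)
```

The computed rate would then serve as the substrate-uptake bound handed to the FBA problem — e.g. setting the lower bound of the acetate exchange reaction in a COBRA-style model (reaction names would depend on the particular genome-scale reconstruction).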
Béghin, Christian
2015-02-01
This model is worked out in the frame of physical mechanisms proposed in previous studies accounting for the generation and the observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe in Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with minimum amplitude of about 0.3 nT throughout the ramside hemisphere from the ionopause down to the moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be allowed by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, there is a first cause of diffusion of the ELF magnetic components, probably due to the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with the measurements of the Huygens wave-field strength. Pending future in-situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.
Greco, Cristina; Jiang, Ying; Chen, Jeff Z. Y.; Kremer, Kurt; Daoulas, Kostas Ch.
2016-11-01
Self Consistent Field (SCF) theory serves as an efficient tool for studying mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase of coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects which have not been discussed in earlier implementations of the TI to LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscape in MC and SCF is directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate the missing correlations. Increasing σ reduces correlations and SCF reproduces well the free energy in MC simulations.
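Once the ensemble averages ⟨∂U/∂λ⟩ have been sampled along the coupling path (here, the external-field path used to bypass the first-order isotropic-nematic transition), the thermodynamic-integration step reduces to a one-dimensional quadrature. A minimal sketch with the trapezoidal rule; the λ-grid and integrand below are synthetic:

```python
import numpy as np

def ti_free_energy(lambdas, dUdl_means):
    """Free-energy difference via thermodynamic integration,
    Delta F = int_0^1 <dU/dlambda>_lambda dlambda  (trapezoidal rule).
    `dUdl_means` are the per-lambda ensemble averages sampled in simulation."""
    lam = np.asarray(lambdas, dtype=float)
    val = np.asarray(dUdl_means, dtype=float)
    return float(np.sum(0.5 * (val[1:] + val[:-1]) * np.diff(lam)))
```

For a linear integrand ⟨∂U/∂λ⟩ = 2λ the trapezoidal rule is exact and returns ΔF = 1; in practice the integrand comes from independent equilibrium runs at each λ, and its smoothness (no hidden phase transition along the path) is what the external-field construction guarantees.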
DEFF Research Database (Denmark)
Staunstrup, Jørgen
1998-01-01
This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications is possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
Ataman, Meric; Hernandez Gardiol, Daniel F; Fengos, Georgios; Hatzimanikatis, Vassily
2017-07-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise the transferability of findings and the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently reduced" models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extended to genome-scale models.
On the (in)consistency of a multi-model ensemble of the past 30 years land surface state.
Dutra, Emanuel; Schellekens, Jaap; Beck, Hylke; Balsamo, Gianpaolo
2016-04-01
Global land-surface and hydrological models are a fundamental tool in understanding the land-surface state and evolution, either coupled to atmospheric models for climate and weather predictions or in stand-alone mode. In this study we take a recently developed dataset consisting of stand-alone simulations by 10 global hydrological and land surface models sharing the same atmospheric forcing for the period 1979-2012 (the eartH2Observe dataset). This multi-model ensemble provides the first freely available dataset at such a spatial/temporal scale that allows for a characterization of multi-model properties such as inter-model consistency and the error-spread relationship. We will present a metric for ensemble consistency using the concept of potential predictability, which can be interpreted as a proxy for multi-model agreement. Initial results point to low inter-model agreement in the polar and tropical regions, the latter also present when comparing globally available precipitation datasets. In addition, the discharge ensemble spread around the ensemble mean was compared to the error of the ensemble mean for several large-scale and small-scale basins. This showed a general under-estimation of the ensemble spread, particularly in tropical basins, suggesting that the current dataset lacks a representation of the precipitation uncertainty in the input meteorological data.
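The spread-error diagnostic described above can be made concrete by comparing the ensemble standard deviation with the RMSE of the ensemble mean against a reference series. A sketch with synthetic discharge data (all numbers invented; an under-dispersive ensemble gives a ratio well below 1):

```python
import numpy as np

def spread_error_ratio(ensemble, obs):
    """ensemble: (n_members, n_times) array; obs: (n_times,) reference.
    A ratio near 1 indicates a well-calibrated ensemble; a ratio < 1 means
    the spread under-estimates the actual error (under-dispersion)."""
    mean = ensemble.mean(axis=0)
    spread = ensemble.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(((mean - obs) ** 2).mean())
    return spread / rmse

# Synthetic daily discharge: members cluster tightly around a shared signal
# while the reference deviates more strongly -> under-dispersive ensemble.
rng = np.random.default_rng(3)
truth = 100.0 + 50.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 365))
tight = truth + rng.normal(0.0, 5.0, (10, 365))
obs = truth + rng.normal(0.0, 20.0, 365)
ratio = spread_error_ratio(tight, obs)
```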
Directory of Open Access Journals (Sweden)
Meric Ataman
2017-07-01
Full Text Available Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks, as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise the transferability of findings and the integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability, and gene essentiality. The development of these "consistently reduced" models will help to clarify and facilitate the integration of different experimental data to draw new understanding that can be directly extended to genome-scale models.
Gauge propagator and physical consistency of the CPT-even part of the standard model extension
Casana, Rodolfo; Ferreira, Manoel M., Jr.; Gomes, Adalto R.; Pinheiro, Paulo R. D.
2009-12-01
In this work, we explicitly evaluate the gauge propagator of the Maxwell theory supplemented by the CPT-even term of the standard model extension. First, we specialize our evaluation to the parity-odd sector of the tensor Wμνρσ, using a parametrization that retains only the three nonbirefringent coefficients. From the poles of the propagator, it is shown that the physical modes of this electrodynamics are stable, noncausal and unitary. In the sequel, we evaluate the parity-even gauge propagator using a parametrization that allows us to work with only the isotropic nonbirefringent element. In this case, we show that the physical modes of the parity-even sector of the tensor W are causal, stable and unitary for a limited range of the isotropic coefficient.
Consistency and normality of Huber-Dutter estimators for partial linear model
Institute of Scientific and Technical Information of China (English)
2008-01-01
For the partial linear model Y = Xτβ0 + g0(T) + ε with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, the scale σ of the errors, and the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with the rate of convergence n^{-1/2}, and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without a scale parameter and the ordinary least squares estimator.
Consistency and normality of Huber-Dutter estimators for partial linear model
Institute of Scientific and Technical Information of China (English)
TONG XingWei; CUI HengJian; YU Peng
2008-01-01
For the partial linear model Y = Xτβ0 + g0(T) + ε with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, the scale σ for the errors, and the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with the rate of convergence n^{-1/2}, and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without a scale parameter and the ordinary least squares estimator.
Quesada, José Manuel; Capote, Roberto; Soukhovitski, Efrem S.; Chiba, Satoshi
2016-03-01
An extension to odd-A actinides of a previously derived dispersive coupled-channel optical model potential (OMP) for the 238U and 232Th nuclei is presented. It is used to fit simultaneously all the available experimental databases, including neutron strength functions, for nucleon scattering on 232Th, 233,235,238U and 239Pu nuclei. Quasi-elastic (p,n) scattering data on 232Th and 238U to the isobaric analogue states of the target nucleus are also used to constrain the isovector part of the optical potential. For even-even (odd-A) actinides, almost all low-lying collective levels below 1 MeV (0.5 MeV) of excitation energy are coupled. The OMP parameters show a smooth energy dependence and an energy-independent geometry.
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, one can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
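The Poisson Simulation technique described above can be sketched for a pure death process (the rate, step size and population size below are arbitrary illustration values): each macro time step draws the number of events from a Poisson distribution whose parameter varies dynamically with the current state, so the aggregated state stays integer-valued and stochastic while its mean tracks the deterministic ODE.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_macro_decay(x0=1000, mu=0.5, dt=0.01, t_end=5.0, runs=200):
    """Poisson Simulation of a pure death process: at each step the
    number of deaths is drawn from Po(mu * x * dt), with the Poisson
    parameter updated dynamically from the current state x."""
    steps = round(t_end / dt)
    finals = np.empty(runs)
    for j in range(runs):
        x = x0
        for _ in range(steps):
            deaths = rng.poisson(mu * x * dt)
            x = max(x - deaths, 0)  # population cannot go negative
        finals[j] = x
    return finals

finals = poisson_macro_decay()
# deterministic ODE limit of the same macro-model: x(t) = x0 * exp(-mu t)
expected = 1000 * np.exp(-0.5 * 5.0)
print(finals.mean(), expected)
```

Averaged over runs, the stochastic macro-model agrees with the deterministic solution, while each individual run retains the integer-valued fluctuations a micro-model would show.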
Directory of Open Access Journals (Sweden)
Marco Del Giudice
Full Text Available BACKGROUND: Schizophrenia is a mental disorder marked by an evolutionarily puzzling combination of high heritability, reduced reproductive success, and a remarkably stable prevalence. Recently, it has been proposed that sexual selection may be crucially involved in the evolution of schizophrenia. In the sexual selection model (SSM of schizophrenia and schizotypy, schizophrenia represents the negative extreme of a sexually selected indicator of genetic fitness and condition. Schizotypal personality traits are hypothesized to increase the sensitivity of the fitness indicator, thus conferring mating advantages on high-fitness individuals but increasing the risk of schizophrenia in low-fitness individuals; the advantages of successful schizotypy would be mediated by enhanced courtship-related traits such as verbal creativity. Thus, schizotypy-increasing alleles would be maintained by sexual selection, and could be selectively neutral or even beneficial, at least in some populations. However, most empirical studies find that the reduction in fertility experienced by schizophrenic patients is not compensated for by increased fertility in their unaffected relatives. This finding has been interpreted as indicating strong negative selection on schizotypy-increasing alleles, and providing evidence against sexual selection on schizotypy. METHODOLOGY: A simple mathematical model is presented, showing that reduced fertility in the families of schizophrenic patients can coexist with selective neutrality of schizotypy-increasing alleles, or even with positive selection on schizotypy in the general population. If the SSM is correct, studies of patients' families can be expected to underestimate the true fertility associated with schizotypy. SIGNIFICANCE: This paper formally demonstrates that reduced fertility in the families of schizophrenic patients does not constitute evidence against sexual selection on schizotypy-increasing alleles. Furthermore, it suggests
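The core of the argument can be seen in a toy calculation (all numbers below are hypothetical, not the paper's model): if a schizotypy-increasing allele lowers fitness only in the fraction of carriers who develop schizophrenia, a modest fertility advantage in unaffected carriers is enough to make the allele selectively neutral overall, even though affected families show a fertility deficit.

```python
p_ill = 0.05   # probability a carrier develops schizophrenia (hypothetical)
w_ill = 0.3    # relative fertility of affected carriers (hypothetical)

# Solve p_ill*w_ill + (1 - p_ill)*w_well = 1 for the unaffected-carrier
# fertility w_well that makes the allele selectively neutral on average.
w_well = (1 - p_ill * w_ill) / (1 - p_ill)
mean_w = p_ill * w_ill + (1 - p_ill) * w_well
print(w_well, mean_w)  # a ~4% advantage offsets the patients' deficit
```

A study sampling only patients' families would observe the deficit (w_ill and the relatives' fertility) without seeing the compensating advantage spread across the general carrier population.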
Temporal structures in shell models
DEFF Research Database (Denmark)
Okkels, F.
2001-01-01
The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...
Zhao, Hua; Meng, Wei-Feng
2017-10-01
In this paper, a five-layer organic electronic device with alternately stacked ferromagnetic metals and organic polymers (ferromagnetic metal/organic layer/ferromagnetic metal/organic layer/ferromagnetic metal), into which a spin-polarized electron is injected from outside, is studied theoretically using a one-dimensional tight-binding model Hamiltonian. We calculate the equilibrium state after an electron with spin is injected into an organic layer of this structure, together with the charge-density and spin-polarization-density distributions of the injected electron, and mainly study its possible transport behaviour in this multilayer structure under different external electric fields. We analyze the physical process undergone by the injected electron in this multilayer system. Our calculations show that the injected spin-polarized electron exists as a spin-polarized electron-polaron state in the organic layer and can pass through the middle ferromagnetic layer from the right-hand organic layer to the left-hand organic layer under increasing external electric fields. This indicates that the structure may serve as a possible spin-polarized charge electronic device and may provide a theoretical basis for organic electronic devices. We also find that induced local dipoles appear at the interfaces between the ferromagnetic and organic layers due to the external electric fields.
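The paper's full Hamiltonian couples electronic, lattice and spin degrees of freedom across five layers; as a minimal sketch of the bare tight-binding ingredient only, the following builds and diagonalizes a one-dimensional nearest-neighbour chain (the hopping amplitude and chain length are arbitrary illustration values, not the paper's parameters).

```python
import numpy as np

def tight_binding_chain(n_sites=100, t_hop=1.0, onsite=0.0):
    """Nearest-neighbour 1D tight-binding Hamiltonian with open ends:
    on-site energies on the diagonal, hopping -t on the off-diagonals."""
    H = np.diag(np.full(n_sites, onsite))
    for i in range(n_sites - 1):
        H[i, i + 1] = H[i + 1, i] = -t_hop
    return H

E = np.linalg.eigvalsh(tight_binding_chain())
# open-chain spectrum: E_k = -2 t cos(k*pi/(N+1)), so the band spans (-2t, 2t)
print(E.min(), E.max())
```

A polaron calculation of the kind described in the abstract would add a site-dependent lattice distortion to the diagonal and hopping terms and solve self-consistently; here only the rigid-lattice band structure is shown.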
Toward A Self Consistent MHD Model of Chromospheres and Winds From Late Type Evolved Stars
Airapetian, V S; Carpenter, K G
2014-01-01
We present the first magnetohydrodynamic model of stellar chromospheric heating and acceleration of the outer atmospheres of cool evolved stars, using alpha Tau as a case study. We used a 1.5D MHD code with a generalized Ohm's law that accounts for the effects of partial ionization in the stellar atmosphere to study Alfven wave dissipation and wave reflection. We have demonstrated that, with the effects of ion-neutral collisions in the magnetized, weakly ionized chromospheric plasma included in the resistivity and with appropriate grid resolution, the numerical resistivity becomes 1-2 orders of magnitude smaller than the physical resistivity. The motions introduced by non-linear transverse Alfven waves can explain the non-thermally broadened and non-Gaussian profiles of optically thin UV lines forming in the stellar chromosphere of alpha Tau and other late-type giant and supergiant stars. The calculated heating rates in the stellar chromosphere due to resistive (Joule) dissipation of electric currents, induced by ...
Complementarity of DM searches in a consistent simplified model: the case of Z{sup ′}
Energy Technology Data Exchange (ETDEWEB)
Jacques, Thomas [SISSA and INFN,via Bonomea 265, 34136 Trieste (Italy); Katz, Andrey [Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Morgante, Enrico; Racco, Davide [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Rameez, Mohamed [Département de Physique Nucléaire et Corpusculaire,Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Riotto, Antonio [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland)
2016-10-14
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z{sup ′} mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while the heavy DM annihilation is completely dominated by Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.
Complementarity of DM Searches in a Consistent Simplified Model: the Case of Z'
Jacques, Thomas; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio
2016-01-01
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC and direct detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy $Z'$ mediates the interactions between the SM and the DM. We find that in most cases IceCube provides the strongest bounds on this scenario, while the LHC constraints are only meaningful for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun are either $b \\bar b$ or $t \\bar t$, while the heavy DM annihilation is completely dominated by $Zh$ channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast Ice...
Self-consistent, axisymmetric two-integral models of elliptical galaxies with embedded nuclear discs
Van den Bosch, F C; van den Bosch, Frank C; de Zeeuw, P Tim
1996-01-01
Recently, observations with the Hubble Space Telescope have revealed small stellar discs embedded in the nuclei of a number of ellipticals and S0s. In this paper we construct two-integral axisymmetric models for such systems. We calculate the even part of the phase-space distribution function, and specify the odd part by means of a simple parameterization. We investigate the photometric as well as the kinematic signatures of nuclear discs, including their velocity profiles (VPs), and study the influence of seeing convolution. The rotation curve of a nuclear disc gives an excellent measure of the central mass-to-light ratio whenever the VPs clearly reveal the narrow, rapidly rotating component associated with the nuclear disc. Steep cusps and seeing convolution both result in central VPs that are dominated by the bulge light, and these VPs barely show the presence of the nuclear disc, impeding measurements of the central rotation velocities of the disc stars. However, if a massive BH is present, the disc compo...
Probabilistic modeling of timber structures
DEFF Research Database (Denmark)
Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro
2007-01-01
The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publ.] and of comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties is presented, and possible refinements are given related to updating of the probabilistic model given new information, modeling of the spatial variation of strength properties, and the duration of load effects.
Pisnichenko, I A
2007-01-01
A regional climate model prepared from the Eta WS (workstation) forecast model has been integrated over South America with a horizontal resolution of 40 km for the period 1961-1977. The model was forced at its lateral boundaries by the outputs of HadAMP, which represent a simulation of the modern climate with a resolution of about 150 km. To prepare the climate regional model from the Eta forecast model, new blocks were added and multiple modifications and corrections were made to the original model. The climate Eta model was run on the SX-6 supercomputer. The detailed analysis of the results of this dynamical downscaling experiment includes an investigation of the consistency between the regional and AGCM models, as well as of the ability of the regional model to resolve important features of climate fields on a finer scale than that resolved by the AGCM. In this work we show the results of our investigation of the consistency of the output fields of the Eta model and HadAMP. We have analysed geo...
Bordin, Lorenzo; Creminelli, Paolo; Mirbabayi, Mehrdad; Noreña, Jorge
2017-03-01
We argue that isotropic scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by ζ and ℛ, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed-limit consistency relation holds after angular average over the long mode. The correlation functions of a long-wavelength spherical scalar mode with several short scalar or tensor modes are fixed by the scaling behavior of the correlators of the short modes, independently of the solid inflation action or the dynamics of reheating.
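For orientation, the squeezed-limit relation referred to above reads, in its standard single-field form for three scalar modes (the paper establishes the conditions under which it holds in solid inflation after angular averaging):

```latex
% Angular-averaged squeezed-limit consistency relation; the overline
% denotes the angular average over the direction of the long mode q,
% and the prime means the momentum delta function is stripped off.
\lim_{q \to 0}\,
\overline{\langle \zeta_{\mathbf q}\, \zeta_{\mathbf k_1}\, \zeta_{\mathbf k_2} \rangle'}
  = -\,(n_s - 1)\, P_\zeta(q)\, P_\zeta(k),
\qquad k \equiv |\mathbf k_1| \simeq |\mathbf k_2|,
```

where P_ζ is the curvature power spectrum and n_s the scalar spectral index, so the right-hand side is fixed entirely by the scaling of the short-mode correlator, as stated in the abstract.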
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Gianluca Nardone
2009-04-01
Full Text Available To foster the development of biomasses for solid fuel it is fundamental to build up a strategy at the local level in which farms co-exist with industrial plants. To this aim, it is necessary to implement an effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behaviour and guarantees the industrial investments constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to be fixed in an eventual contract so as to maintain the loyalty of the farmers. The latter have greater flexibility, since they can choose the most convenient crop; therefore, their loyalty can be obtained by tying the contractual payments to the price of the main crop that is an alternative to the energetic one. The results of the study seem to indicate the opportunity to fix a purchase price of the raw materials linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energetic crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for farmers to grow sorghum. When the
Models of vertical coordination consistent with the development of bio-energetics
Directory of Open Access Journals (Sweden)
Rosaria Viscecchia
2011-02-01
Full Text Available To foster the development of biomasses for solid fuel it is fundamental to build up a strategy at the local level in which farms co-exist with industrial plants. To this aim, it is necessary to implement an effective vertical coordination between the stakeholders, with the definition of a contract that prevents opportunistic behaviour and guarantees the industrial investments constant supplies over time. Starting from a project that foresees a biomass power plant in the south of Italy, this study reflects on the payments to be fixed in an eventual contract so as to maintain the loyalty of the farmers. The latter have greater flexibility, since they can choose the most convenient crop; therefore, their loyalty can be obtained by tying the contractual payments to the price of the main crop that is an alternative to the energetic one. The results of the study seem to indicate the opportunity to fix a purchase price of the raw materials linked to that of durum wheat, which is the most widespread crop in the territory and the one that depends most on a volatile market. Using the data of District 12 of the province of Foggia Water Consortium, with an area of 11,300 hectares (instead of the 20,000 demanded in the proposal), it has been possible to organize approximately 600 enterprises into five clusters, each identified by a representative farm. With a linear programming model, we have run different simulations taking into account the possibility of growing sorghum in different ways. Through an aggregation process, it has been calculated that farmers may find it convenient to supply the energetic crop at a price of 50 €/t when the price of durum wheat is 150 €/t. However, this price is lower than the one offered by the firm that is planning to build the power plant. Moreover, a strong correlation has been identified between the price of durum wheat and the price that makes it convenient for farmers to grow sorghum. When the
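The kind of farm-level linear program used in the study can be sketched as follows. This is an illustrative toy, not the study's model: the yields, per-hectare costs and land endowment are hypothetical values, and the point is only the mechanism by which the relative crop prices tip the optimal acreage.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_acreage(p_wheat, p_sorghum, y_wheat=3.5, y_sorghum=50.0,
                    cost_wheat=300.0, cost_sorghum=900.0, land=100.0):
    """Choose wheat/sorghum acreage maximizing gross margin on a farm
    with `land` hectares. Yields (t/ha), prices (EUR/t) and costs
    (EUR/ha) are hypothetical illustration values."""
    margin = np.array([p_wheat * y_wheat - cost_wheat,
                       p_sorghum * y_sorghum - cost_sorghum])
    # linprog minimizes, so negate the margins; single land constraint
    res = linprog(c=-margin, A_ub=[[1.0, 1.0]], b_ub=[land],
                  bounds=[(0.0, None), (0.0, None)])
    return res.x  # hectares of (wheat, sorghum)

# with durum wheat at 150 EUR/t, a biomass price of 50 EUR/t
# makes the energetic crop take the land in this toy setting
print(optimal_acreage(150.0, 50.0))
```

Running the same program over a grid of wheat prices reproduces the qualitative finding of the abstract: the biomass price that keeps farmers supplying the energetic crop moves with the durum wheat price.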
Aaij, Roel; Adinolfi, Marco; Ajaltouni, Ziad; Akar, Simon; Albrecht, Johannes; Alessio, Federico; Alexander, Michael; Ali, Suvayu; Alkhazov, Georgy; Alvarez Cartelle, Paula; Alves Jr, Antonio Augusto; Amato, Sandra; Amerio, Silvia; Amhis, Yasmine; An, Liupan; Anderlini, Lucio; Andreassi, Guido; Andreotti, Mirco; Andrews, Jason; Appleby, Robert; Aquines Gutierrez, Osvaldo; Archilli, Flavio; d'Argent, Philippe; Arnau Romeu, Joan; Artamonov, Alexander; Artuso, Marina; Aslanides, Elie; Auriemma, Giulio; Baalouch, Marouen; Babuschkin, Igor; Bachmann, Sebastian; Back, John; Badalov, Alexey; Baesso, Clarissa; Baldini, Wander; Barlow, Roger; Barschel, Colin; Barsuk, Sergey; Barter, William; Batozskaya, Varvara; Batsukh, Baasansuren; Battista, Vincenzo; Bay, Aurelio; Beaucourt, Leo; Beddow, John; Bedeschi, Franco; Bediaga, Ignacio; Bel, Lennaert; Bellee, Violaine; Belloli, Nicoletta; Belous, Konstantin; Belyaev, Ivan; Ben-Haim, Eli; Bencivenni, Giovanni; Benson, Sean; Benton, Jack; Berezhnoy, Alexander; Bernet, Roland; Bertolin, Alessandro; Betti, Federico; Bettler, Marc-Olivier; van Beuzekom, Martinus; Bezshyiko, Iaroslava; Bifani, Simone; Billoir, Pierre; Bird, Thomas; Birnkraut, Alex; Bitadze, Alexander; Bizzeti, Andrea; Blake, Thomas; Blanc, Frederic; Blouw, Johan; Blusk, Steven; Bocci, Valerio; Boettcher, Thomas; Bondar, Alexander; Bondar, Nikolay; Bonivento, Walter; Borgheresi, Alessio; Borghi, Silvia; Borisyak, Maxim; Borsato, Martino; Bossu, Francesco; Boubdir, Meriem; Bowcock, Themistocles; Bowen, Espen Eie; Bozzi, Concezio; Braun, Svende; Britsch, Markward; Britton, Thomas; Brodzicka, Jolanta; Buchanan, Emma; Burr, Christopher; Bursche, Albert; Buytaert, Jan; Cadeddu, Sandro; Calabrese, Roberto; Calvi, Marta; Calvo Gomez, Miriam; Campana, Pierluigi; Campora Perez, Daniel; Capriotti, Lorenzo; Carbone, Angelo; Carboni, Giovanni; Cardinale, Roberta; Cardini, Alessandro; Carniti, Paolo; Carson, Laurence; Carvalho Akiba, Kazuyoshi; Casse, Gianluigi; Cassina, Lorenzo; 
Castillo Garcia, Lucia; Cattaneo, Marco; Cauet, Christophe; Cavallero, Giovanni; Cenci, Riccardo; Charles, Matthew; Charpentier, Philippe; Chatzikonstantinidis, Georgios; Chefdeville, Maximilien; Chen, Shanzhen; Cheung, Shu-Faye; Chobanova, Veronika; Chrzaszcz, Marcin; Cid Vidal, Xabier; Ciezarek, Gregory; Clarke, Peter; Clemencic, Marco; Cliff, Harry; Closier, Joel; Coco, Victor; Cogan, Julien; Cogneras, Eric; Cogoni, Violetta; Cojocariu, Lucian; Collazuol, Gianmaria; Collins, Paula; Comerma-Montells, Albert; Contu, Andrea; Cook, Andrew; Coquereau, Samuel; Corti, Gloria; Corvo, Marco; Costa Sobral, Cayo Mar; Couturier, Benjamin; Cowan, Greig; Craik, Daniel Charles; Crocombe, Andrew; Cruz Torres, Melissa Maria; Cunliffe, Samuel; Currie, Robert; D'Ambrosio, Carmelo; Dall'Occo, Elena; Dalseno, Jeremy; David, Pieter; Davis, Adam; De Aguiar Francisco, Oscar; De Bruyn, Kristof; De Capua, Stefano; De Cian, Michel; De Miranda, Jussara; De Paula, Leandro; De Serio, Marilisa; De Simone, Patrizia; Dean, Cameron Thomas; Decamp, Daniel; Deckenhoff, Mirko; Del Buono, Luigi; Demmer, Moritz; Derkach, Denis; Deschamps, Olivier; Dettori, Francesco; Dey, Biplab; Di Canto, Angelo; Dijkstra, Hans; Dordei, Francesca; Dorigo, Mirco; Dosil Suárez, Alvaro; Dovbnya, Anatoliy; Dreimanis, Karlis; Dufour, Laurent; Dujany, Giulio; Dungs, Kevin; Durante, Paolo; Dzhelyadin, Rustem; Dziurda, Agnieszka; Dzyuba, Alexey; Déléage, Nicolas; Easo, Sajan; Egede, Ulrik; Egorychev, Victor; Eidelman, Semen; Eisenhardt, Stephan; Eitschberger, Ulrich; Ekelhof, Robert; Eklund, Lars; Elsasser, Christian; Ely, Scott; Esen, Sevda; Evans, Hannah Mary; Evans, Timothy; Falabella, Antonio; Farley, Nathanael; Farry, Stephen; Fay, Robert; Fazzini, Davide; Ferguson, Dianne; Fernandez Albor, Victor; Ferrari, Fabio; Ferreira Rodrigues, Fernando; Ferro-Luzzi, Massimiliano; Filippov, Sergey; Fini, Rosa Anna; Fiore, Marco; Fiorini, Massimiliano; Firlej, Miroslaw; Fitzpatrick, Conor; Fiutowski, Tomasz; Fleuret, Frederic; 
Fohl, Klaus; Fontana, Marianna; Fontanelli, Flavio; Forshaw, Dean Charles; Forty, Roger; Frank, Markus; Frei, Christoph; Fu, Jinlin; Furfaro, Emiliano; Färber, Christian; Gallas Torreira, Abraham; Galli, Domenico; Gallorini, Stefano; Gambetta, Silvia; Gandelman, Miriam; Gandini, Paolo; Gao, Yuanning; García Pardiñas, Julián; Garra Tico, Jordi; Garrido, Lluis; Garsed, Philip John; Gascon, David; Gaspar, Clara; Gavardi, Laura; Gazzoni, Giulio; Gerick, David; Gersabeck, Evelina; Gersabeck, Marco; Gershon, Timothy; Ghez, Philippe; Gianì, Sebastiana; Gibson, Valerie; Girard, Olivier Göran; Giubega, Lavinia-Helena; Gizdov, Konstantin; Gligorov, V.V.; Golubkov, Dmitry; Golutvin, Andrey; Gomes, Alvaro; Gorelov, Igor Vladimirovich; Gotti, Claudio; Grabalosa Gándara, Marc; Graciani Diaz, Ricardo; Granado Cardoso, Luis Alberto; Graugés, Eugeni; Graverini, Elena; Graziani, Giacomo; Grecu, Alexandru; Griffith, Peter; Grillo, Lucia; Gruberg Cazon, Barak Raimond; Grünberg, Oliver; Gushchin, Evgeny; Guz, Yury; Gys, Thierry; Göbel, Carla; Hadavizadeh, Thomas; Hadjivasiliou, Christos; Haefeli, Guido; Haen, Christophe; Haines, Susan; Hall, Samuel; Hamilton, Brian; Han, Xiaoxue; Hansmann-Menzemer, Stephanie; Harnew, Neville; Harnew, Samuel; Harrison, Jonathan; Hatch, Mark; He, Jibo; Head, Timothy; Heister, Arno; Hennessy, Karol; Henrard, Pierre; Henry, Louis; Hernando Morata, Jose Angel; van Herwijnen, Eric; Heß, Miriam; Hicheur, Adlène; Hill, Donal; Hombach, Christoph; Hulsbergen, Wouter; Humair, Thibaud; Hushchyn, Mikhail; Hussain, Nazim; Hutchcroft, David; Idzik, Marek; Ilten, Philip; Jacobsson, Richard; Jaeger, Andreas; Jalocha, Pawel; Jans, Eddy; Jawahery, Abolhassan; John, Malcolm; Johnson, Daniel; Jones, Christopher; Joram, Christian; Jost, Beat; Jurik, Nathan; Kandybei, Sergii; Kanso, Walaa; Karacson, Matthias; Kariuki, James Mwangi; Karodia, Sarah; Kecke, Matthieu; Kelsey, Matthew; Kenyon, Ian; Kenzie, Matthew; Ketel, Tjeerd; Khairullin, Egor; Khanji, Basem; Khurewathanakul, 
Chitsanu; Kirn, Thomas; Klaver, Suzanne; Klimaszewski, Konrad; Koliiev, Serhii; Kolpin, Michael; Komarov, Ilya; Koopman, Rose; Koppenburg, Patrick; Kozachuk, Anastasiia; Kozeiha, Mohamad; Kravchuk, Leonid; Kreplin, Katharina; Kreps, Michal; Krokovny, Pavel; Kruse, Florian; Krzemien, Wojciech; Kucewicz, Wojciech; Kucharczyk, Marcin; Kudryavtsev, Vasily; Kuonen, Axel Kevin; Kurek, Krzysztof; Kvaratskheliya, Tengiz; Lacarrere, Daniel; Lafferty, George; Lai, Adriano; Lambert, Dean; Lanfranchi, Gaia; Langenbruch, Christoph; Langhans, Benedikt; Latham, Thomas; Lazzeroni, Cristina; Le Gac, Renaud; van Leerdam, Jeroen; Lees, Jean-Pierre; Leflat, Alexander; Lefrançois, Jacques; Lefèvre, Regis; Lemaitre, Florian; Lemos Cid, Edgar; Leroy, Olivier; Lesiak, Tadeusz; Leverington, Blake; Li, Yiming; Likhomanenko, Tatiana; Lindner, Rolf; Linn, Christian; Lionetto, Federica; Liu, Bo; Liu, Xuesong; Loh, David; Longstaff, Iain; Lopes, Jose; Lucchesi, Donatella; Lucio Martinez, Miriam; Luo, Haofei; Lupato, Anna; Luppi, Eleonora; Lupton, Oliver; Lusiani, Alberto; Lyu, Xiao-Rui; Machefert, Frederic; Maciuc, Florin; Maev, Oleg; Maguire, Kevin; Malde, Sneha; Malinin, Alexander; Maltsev, Timofei; Manca, Giulia; Mancinelli, Giampiero; Manning, Peter Michael; Maratas, Jan; Marchand, Jean François; Marconi, Umberto; Marin Benito, Carla; Marino, Pietro; Marks, Jörg; Martellotti, Giuseppe; Martin, Morgan; Martinelli, Maurizio; Martinez Santos, Diego; Martinez Vidal, Fernando; Martins Tostes, Danielle; Massacrier, Laure Marie; Massafferri, André; Matev, Rosen; Mathad, Abhijit; Mathe, Zoltan; Matteuzzi, Clara; Mauri, Andrea; Maurin, Brice; Mazurov, Alexander; McCann, Michael; McCarthy, James; McNab, Andrew; McNulty, Ronan; Meadows, Brian; Meier, Frank; Meissner, Marco; Melnychuk, Dmytro; Merk, Marcel; Merli, Andrea; Michielin, Emanuele; Milanes, Diego Alejandro; Minard, Marie-Noelle; Mitzel, Dominik Stefan; Molina Rodriguez, Josue; Monroy, Ignacio Alberto; Monteil, Stephane; Morandin, Mauro; 
Morawski, Piotr; Mordà, Alessandro; Morello, Michael Joseph; Moron, Jakub; Morris, Adam Benjamin; Mountain, Raymond; Muheim, Franz; Mulder, Mick; Mussini, Manuel; Müller, Dominik; Müller, Janine; Müller, Katharina; Müller, Vanessa; Naik, Paras; Nakada, Tatsuya; Nandakumar, Raja; Nandi, Anita; Nasteva, Irina; Needham, Matthew; Neri, Nicola; Neubert, Sebastian; Neufeld, Niko; Neuner, Max; Nguyen, Anh Duc; Nguyen-Mau, Chung; Nieswand, Simon; Niet, Ramon; Nikitin, Nikolay; Nikodem, Thomas; Novoselov, Alexey; O'Hanlon, Daniel Patrick; Oblakowska-Mucha, Agnieszka; Obraztsov, Vladimir; Ogilvy, Stephen; Oldeman, Rudolf; Onderwater, Gerco; Otalora Goicochea, Juan Martin; Otto, Adam; Owen, Patrick; Oyanguren, Maria Aranzazu; Pais, Preema Rennee; Palano, Antimo; Palombo, Fernando; Palutan, Matteo; Panman, Jacob; Papanestis, Antonios; Pappagallo, Marco; Pappalardo, Luciano; Pappenheimer, Cheryl; Parker, William; Parkes, Christopher; Passaleva, Giovanni; Pastore, Alessandra; Patel, Girish; Patel, Mitesh; Patrignani, Claudia; Pearce, Alex; Pellegrino, Antonio; Penso, Gianni; Pepe Altarelli, Monica; Perazzini, Stefano; Perret, Pascal; Pescatore, Luca; Petridis, Konstantinos; Petrolini, Alessandro; Petrov, Aleksandr; Petruzzo, Marco; Picatoste Olloqui, Eduardo; Pietrzyk, Boleslaw; Pikies, Malgorzata; Pinci, Davide; Pistone, Alessandro; Piucci, Alessio; Playfer, Stephen; Plo Casasus, Maximo; Poikela, Tuomas; Polci, Francesco; Poluektov, Anton; Polyakov, Ivan; Polycarpo, Erica; Pomery, Gabriela Johanna; Popov, Alexander; Popov, Dmitry; Popovici, Bogdan; Potterat, Cédric; Price, Eugenia; Price, Joseph David; Prisciandaro, Jessica; Pritchard, Adrian; Prouve, Claire; Pugatch, Valery; Puig Navarro, Albert; Punzi, Giovanni; Qian, Wenbin; Quagliani, Renato; Rachwal, Bartolomiej; Rademacker, Jonas; Rama, Matteo; Ramos Pernas, Miguel; Rangel, Murilo; Raniuk, Iurii; Raven, Gerhard; Redi, Federico; Reichert, Stefanie; dos Reis, Alberto; Remon Alepuz, Clara; Renaudin, Victor; Ricciardi, 
Stefania; Richards, Sophie; Rihl, Mariana; Rinnert, Kurt; Rives Molina, Vicente; Robbe, Patrick; Rodrigues, Ana Barbara; Rodrigues, Eduardo; Rodriguez Lopez, Jairo Alexis; Rodriguez Perez, Pablo; Rogozhnikov, Alexey; Roiser, Stefan; Romanovskiy, Vladimir; Romero Vidal, Antonio; Ronayne, John William; Rotondo, Marcello; Ruf, Thomas; Ruiz Valls, Pablo; Saborido Silva, Juan Jose; Sadykhov, Elnur; Sagidova, Naylya; Saitta, Biagio; Salustino Guimaraes, Valdir; Sanchez Mayordomo, Carlos; Sanmartin Sedes, Brais; Santacesaria, Roberta; Santamarina Rios, Cibran; Santimaria, Marco; Santovetti, Emanuele; Sarti, Alessio; Satriano, Celestina; Satta, Alessia; Saunders, Daniel Martin; Savrina, Darya; Schael, Stefan; Schellenberg, Margarete; Schiller, Manuel; Schindler, Heinrich; Schlupp, Maximilian; Schmelling, Michael; Schmelzer, Timon; Schmidt, Burkhard; Schneider, Olivier; Schopper, Andreas; Schubert, Konstantin; Schubiger, Maxime; Schune, Marie Helene; Schwemmer, Rainer; Sciascia, Barbara; Sciubba, Adalberto; Semennikov, Alexander; Sergi, Antonino; Serra, Nicola; Serrano, Justine; Sestini, Lorenzo; Seyfert, Paul; Shapkin, Mikhail; Shapoval, Illya; Shcheglov, Yury; Shears, Tara; Shekhtman, Lev; Shevchenko, Vladimir; Shires, Alexander; Siddi, Benedetto Gianluca; Silva Coutinho, Rafael; Silva de Oliveira, Luiz Gustavo; Simi, Gabriele; Simone, Saverio; Sirendi, Marek; Skidmore, Nicola; Skwarnicki, Tomasz; Smith, Eluned; Smith, Iwan Thomas; Smith, Jackson; Smith, Mark; Snoek, Hella; Sokoloff, Michael; Soler, Paul; Souza, Daniel; Souza De Paula, Bruno; Spaan, Bernhard; Spradlin, Patrick; Sridharan, Srikanth; Stagni, Federico; Stahl, Marian; Stahl, Sascha; Stefko, Pavol; Stefkova, Slavorima; Steinkamp, Olaf; Stenyakin, Oleg; Stevenson, Scott; Stoica, Sabin; Stone, Sheldon; Storaci, Barbara; Stracka, Simone; Straticiuc, Mihai; Straumann, Ulrich; Sun, Liang; Sutcliffe, William; Swientek, Krzysztof; Syropoulos, Vasileios; Szczekowski, Marek; Szumlak, Tomasz; T'Jampens, Stephane; 
Tayduganov, Andrey; Tekampe, Tobias; Tellarini, Giulia; Teubert, Frederic; Thomas, Christopher; Thomas, Eric; van Tilburg, Jeroen; Tisserand, Vincent; Tobin, Mark; Tolk, Siim; Tomassetti, Luca; Tonelli, Diego; Topp-Joergensen, Stig; Toriello, Francis; Tournefier, Edwige; Tourneur, Stephane; Trabelsi, Karim; Traill, Murdo; Tran, Minh Tâm; Tresch, Marco; Trisovic, Ana; Tsaregorodtsev, Andrei; Tsopelas, Panagiotis; Tully, Alison; Tuning, Niels; Ukleja, Artur; Ustyuzhanin, Andrey; Uwer, Ulrich; Vacca, Claudia; Vagnoni, Vincenzo; Valat, Sebastien; Valenti, Giovanni; Vallier, Alexis; Vazquez Gomez, Ricardo; Vazquez Regueiro, Pablo; Vecchi, Stefania; van Veghel, Maarten; Velthuis, Jaap; Veltri, Michele; Veneziano, Giovanni; Venkateswaran, Aravindhan; Vernet, Maxime; Vesterinen, Mika; Viaud, Benoit; Vieira, Daniel; Vieites Diaz, Maria; Vilasis-Cardona, Xavier; Volkov, Vladimir; Vollhardt, Achim; Voneki, Balazs; Voong, David; Vorobyev, Alexey; Vorobyev, Vitaly; Voß, Christian; de Vries, Jacco; Vázquez Sierra, Carlos; Waldi, Roland; Wallace, Charlotte; Wallace, Ronan; Walsh, John; Wang, Jianchun; Ward, David; Wark, Heather Mckenzie; Watson, Nigel; Websdale, David; Weiden, Andreas; Whitehead, Mark; Wicht, Jean; Wilkinson, Guy; Wilkinson, Michael; Williams, Mark Richard James; Williams, Matthew; Williams, Mike; Williams, Timothy; Wilson, Fergus; Wimberley, Jack; Wishahi, Julian; Wislicki, Wojciech; Witek, Mariusz; Wormser, Guy; Wotton, Stephen; Wraight, Kenneth; Wright, Simon; Wyllie, Kenneth; Xie, Yuehong; Xing, Zhou; Xu, Zhirui; Yang, Zhenwei; Yin, Hang; Yu, Jiesheng; Yuan, Xuhao; Yushchenko, Oleg; Zangoli, Maria; Zarebski, Kristian Alexander; Zavertyaev, Mikhail; Zhang, Liming; Zhang, Yanxi; Zhang, Yu; Zhelezov, Alexey; Zheng, Yangheng; Zhokhov, Anatoly; Zhukov, Valery; Zucchelli, Stefano
2016-01-01
The first full amplitude analysis of $B^+\to J/\psi \phi K^+$ with $J/\psi\to\mu^+\mu^-$, $\phi\to K^+K^-$ decays is performed with a data sample of 3 fb$^{-1}$ of $pp$ collision data collected at $\sqrt{s}=7$ and $8$ TeV with the LHCb detector. The data cannot be described by a model that contains only excited kaon states decaying into $\phi K^+$, and four $J/\psi\phi$ structures are observed, each with significance over $5$ standard deviations. The quantum numbers of these structures are determined with significance of at least $4$ standard deviations. The lightest is best described as a $D_s^{\pm}D_s^{*\mp}$ cusp, but a resonant interpretation is also possible with mass consistent with, but width much larger than, previous measurements of the claimed $X(4140)$ state.
Schnell, D J; Galavotti, C; Fishbein, M; Chan, D K
1996-01-01
The stages of behavior change model has been used to understand a variety of health behaviors. Since consistent condom use has been promoted as a risk-reduction behavior for prevention of human immunodeficiency virus (HIV) infection, an algorithm for staging the adoption of consistent condom use during vaginal sex was empirically developed using three considerations: HIV prevention efficacy, analogy with work on staging other health-related behaviors, and condom use data from groups at high risk for HIV infection. This algorithm suggests that the adoption of consistent condom use among persons at high risk can be meaningfully measured with the model. However, variations in the algorithm details affect both the interpretation of stages and apportionment of persons across stages.
Institute of Scientific and Technical Information of China (English)
None (no author listed)
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimation (MQLE) is obtained in QLNM. In an important case, this rate is $O(n^{-1/2}(\log\log n)^{1/2})$, which is exactly the law-of-the-iterated-logarithm rate for partial sums of i.i.d. variables, and thus cannot be improved.
Electronic structure of PrBa2Cu3O7 within LSDA+U: Different self-consistent solutions
Directory of Open Access Journals (Sweden)
M R Mohammadizadeh
2009-08-01
Full Text Available Based on density functional theory and using the full-potential linearized augmented-plane-wave method, the electronic structure of the PrBa2Cu3O7 (Pr123) system was calculated. The rotationally invariant local spin density approximation plus Hubbard parameter U was employed for the Pr(4f) orbitals. A self-consistent solution more stable than the previous solution proposed by Liechtenstein and Mazin (LM) was found. In contrast to the LM solution, it can explain the results of the 17O NMR spectroscopy study of nonsuperconducting Pr123 samples. This new solution favors the suggestion that pure Pr123 samples should be intrinsically superconducting and metallic, similar to the other RBa2Cu3O7 (R = Y or a rare-earth element) samples. Imperfections cause the superconducting holes to be transferred to the nonsuperconducting hole states around the high-symmetry (π/a, π/b, kz) line in the Brillouin zone, and so superconductivity is suppressed in the conventional samples. The solution predicts that the superconducting 2pσ holes at the O2 sites of nonsuperconducting Pr123 samples should be depleted and those at the O3 sites should be almost unchanged.
Gracceva, Giulia; Koolhaas, Jaap M; Groothuis, Ton G G
2011-09-01
Animal personality has been extensively studied from a functional and evolutionary point of view. Less attention has been paid to the development of personality, its phenotypic plasticity, and the influence of manipulation of early environmental factors. Here we describe the effects of manipulating the sex ratio of the litter, at postnatal day (pnd) 3, in wild-type rats, on personality traits in adulthood. We measured the treatment effects on aggression, defensive burying, and open field behavior at pnd 90 and 120, as well as on their contextual generality and stability over time (differential and structural consistency). Main effects of litter composition were found on open field behavior at pnd 120 but not on the other behaviors. Since correlations between behaviors changed over time irrespective of the specific treatment, whereas in previous studies on unmanipulated litters this was not the case, we suggest that early handling may disrupt adult personality traits. Overall, the data indicate that personality is less stable over time than often assumed, which has both proximate and ultimate implications.
Yeaman, Andrew R. J.
The Fishbein and Ajzen model of attitude-behavior consistency was applied to 56 undergraduates learning to use a microcomputer. Two levels of context for this act were compared: the students' beliefs about themselves, and their beliefs about people in general. The results indicated that students' beliefs were good predictors of their behavioral…
DEFF Research Database (Denmark)
Zahid, F.; Paulsson, Magnus; Polizzi, E.;
2005-01-01
We present a transport model for molecular conduction involving an extended Huckel theoretical treatment of the molecular chemistry combined with a nonequilibrium Green's function treatment of quantum transport. The self-consistent potential is approximated by CNDO (complete neglect of differential...
Postmus, B.R.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2008-01-01
We have constructed a model to predict the properties of non-ionic (alkyl-ethylene oxide) (C(n)E(m)) surfactants, both in aqueous solutions and near a silica surface, based upon the self-consistent field theory using the Scheutjens-Fleer discretisation scheme. The system has the pH and the ionic
FERRINI, Silvia; Fezzi, Carlo; Day, Brett H.; BATEMAN, Ian J.
2008-01-01
We argue that the literature concerning the valuation of non-market, spatially defined goods (such as those provided by the natural environment) is crucially deficient in two respects. First, it fails to employ a theoretically consistent structural model of utility for the separate and hence correct definition of use and non-use values. Second, applications (particularly those using stated preference methods) typically fail to capture the spatially complex distribution of resources and their s...
Wang, Bo; Bauer, Sebastian
2016-04-01
Geological models are a prerequisite for exploring possible uses of the subsurface and evaluating induced impacts. Subsurface geological models often show strong complexity in geometry and hydraulic connectivity because of their heterogeneous nature. To capture that complexity, geologists have applied the corner point grid approach for decades. A corner point grid utilizes a set of hexahedral blocks to represent geological formations. Where geological layers have been eroded, some edges of those blocks may collapse and the blocks thus degenerate. Such degenerate blocks are inconsistent and make the corner point grid unusable directly in a finite-element-based simulator. Therefore, in this study, we introduce a workflow for transferring heterogeneous geological models to consistent finite element models. Hexahedral blocks of the corner point grid without collapsed edges are converted directly to hexahedral elements; if a block degenerates, it is divided into prism, pyramid and tetrahedral elements according to its individual degeneration pattern. This approach converts any degenerate corner point grid to a consistent hybrid finite element mesh. Along with this conversion scheme, the corresponding heterogeneous geological data, e.g. permeability and porosity, can be transferred as well. Moreover, well trajectories designed in the corner point grid can be resampled to nodes of the finite element mesh, which represent the locations of source terms along the well path. As a proof of concept, we implement the workflow for transferring models from Petrel to the finite element OpenGeoSys simulator. As application scenario we choose a deep geothermal reservoir operation in the North German Basin. A well doublet is defined in a saline aquifer in the Rhaetian formation, which has a depth of roughly 4000 m. The geometric model shows all kinds of degenerated blocks due to eroded layers and the
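The block-classification step described above can be sketched in a few lines of Python. This is a hypothetical helper, not the authors' Petrel/OpenGeoSys code: collapsed edges of an eroded corner-point block make corner nodes coincide, and the number of distinct corners selects the finite element type the block is converted into.

```python
# Illustrative sketch (assumed names and mapping, not the paper's code):
# a corner-point block has 8 corner coordinates; degeneration reduces the
# count of distinct corners, which picks the target element type.

def element_type(corners, tol=1e-9):
    """corners: sequence of 8 (x, y, z) corner coordinates of one block."""
    unique = {tuple(round(c / tol) for c in p) for p in corners}
    return {8: "hexahedron", 6: "prism",
            5: "pyramid", 4: "tetrahedron"}.get(len(unique), "unsupported")
```

A full converter would then split each degenerate block into the corresponding sub-elements and copy the block's permeability and porosity onto them.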
Handbook of structural equation modeling
Hoyle, Rick H
2012-01-01
The first comprehensive structural equation modeling (SEM) handbook, this accessible volume presents both the mechanics of SEM and specific SEM strategies and applications. The editor, contributors, and editorial advisory board are leading methodologists who have organized the book to move from simpler material to more statistically complex modeling approaches. Sections cover the foundations of SEM; statistical underpinnings, from assumptions to model modifications; steps in implementation, from data preparation through writing the SEM report; and basic and advanced applications, inclu
Energy Technology Data Exchange (ETDEWEB)
Sidhu, D.P.
1980-09-01
I discuss a left-right-symmetric model of weak and electromagnetic interactions which is consistent with the results of all weak-interaction experiments, including the observed parity violation in eN interactions. The model is essentially indistinguishable from the Weinberg-Salam (WS) model at low energies and differs from it significantly at high $q^2$. Of the two neutral bosons $(Z_1, Z_2)$ of the model, $M_{Z_1} \approx M_Z$ of the WS model and $M_{Z_2} \approx 2.5\,M_{Z_1} \approx 230$ GeV. The prospects of distinguishing the two classes of models in $e^+e^-$ experiments at LEP and in $pp$ and $p\bar{p}$ colliding-beam experiments at ISABELLE are also discussed.
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
Edrisi, Siroos; Bidhendi, Norollah Kasiri; Haghighi, Maryam
2017-01-01
Effective thermal conductivity of porous media was modeled based on a self-consistent method. The model estimates the heat transfer between the insulator surface and air cavities accurately. In this method, the pore size and shape, the temperature gradient, and other thermodynamic properties of the fluid were taken into consideration. The results are validated against experimental data for fire bricks used in cracking furnaces at the olefin plant of the Maroon petrochemical complex, as well as against published data for polyurethane foam (synthetic polymers) IPTM and IPM. The model predictions show good agreement with experimental data, with thermal conductivity deviating by <1%.
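The self-consistent idea can be illustrated with the classical Bruggeman effective-medium relation for a two-phase medium with spherical inclusions. This is a generic textbook sketch, not the paper's model (which additionally accounts for pore size, pore shape and the temperature gradient), and the function name is our own.

```python
# Self-consistent (Bruggeman) effective conductivity of a two-phase medium:
# each phase i, with volume fraction f_i and conductivity k_i, is embedded
# in the yet-unknown effective medium k_eff, and k_eff is fixed by requiring
#   sum_i f_i * (k_i - k_eff) / (k_i + 2*k_eff) = 0   (spherical inclusions).

def bruggeman_keff(k1, k2, f1):
    """Solve the two-phase Bruggeman relation for k_eff by bisection."""
    f2 = 1.0 - f1
    def g(k):
        return f1 * (k1 - k) / (k1 + 2.0 * k) + f2 * (k2 - k) / (k2 + 2.0 * k)
    lo, hi = min(k1, k2), max(k1, k2)   # k_eff is bracketed by the phase values
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For k1 = 1, k2 = 10 and equal volume fractions, the self-consistent root is k_eff = 4, between the series and parallel bounds as expected.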
A Magnetic Consistency Relation
Jain, Rajeev Kumar
2012-01-01
If cosmic magnetic fields are indeed produced during inflation, they are likely to be correlated with the scalar metric perturbations that are responsible for the Cosmic Microwave Background anisotropies and Large Scale Structure. Within an archetypical model of inflationary magnetogenesis, we show that there exists a new simple consistency relation for the non-Gaussian cross correlation function of the scalar metric perturbation with two powers of the magnetic field in the squeezed limit where the momentum of the metric perturbation vanishes. We emphasize that such a consistency relation turns out to be extremely useful to test some recent calculations in the literature. Apart from primordial non-Gaussianity induced by the curvature perturbations, such a cross correlation might provide a new observational probe of inflation and can in principle reveal the primordial nature of cosmic magnetic fields.
Yeo, MyungGu; Lee, Ji-Seon; Chun, Wook; Kim, Geun Hyung
2016-04-11
Three-dimensional (3D) cell printing processes have been used widely in various tissue engineering applications due to the efficient embedding of living cells in appropriately designed micro- or macro-structures. However, there are several issues to overcome, such as the limited choice of bioinks and tailor-made fabricating strategies. Here, we suggest a new, innovative cell-printing process, supplemented with a core-sheath nozzle and an aerosol cross-linking method, to obtain multilayered cell-laden mesh structure and a newly considered collagen-based cell-laden bioink. To obtain a mechanically and biologically enhanced cell-laden structure, we used collagen-bioink in the core region, and also used pure alginate in the sheath region to protect the cells in the collagen during the printing and cross-linking process and support the 3D cell-laden mesh structure. To achieve the most appropriate conditions for fabricating cell-embedded cylindrical core-sheath struts, various processing conditions, including weight fractions of the cross-linking agent and pneumatic pressure in the core region, were tested. The fabricated 3D MG63-laden mesh structure showed significantly higher cell viability (92 ± 3%) compared with that (83 ± 4%) of the control, obtained using a general alginate-based cell-printing process. To expand the feasibility to stem cell-embedded structures, we fabricated a cell-laden mesh structure consisting of core (cell-laden collagen)/sheath (pure alginate) using human adipose stem cells (hASCs). Using the selected processing conditions, we could achieve a stable 3D hASC-laden mesh structure. The fabricated cell-laden 3D core-sheath structure exhibited outstanding cell viability (91%) compared to that (83%) of an alginate-based hASC-laden mesh structure (control), and more efficient hepatogenic differentiations (albumin: ∼ 1.7-fold, TDO-2: ∼ 7.6-fold) were observed versus the control. The selection of collagen-bioink and the new printing strategy
Yuan, Yao-Ming; Jiang, Rui; Hu, Mao-Bin; Wu, Qing-Song; Wang, Ruili
2009-06-01
In this paper, we have investigated traffic flow characteristics in a traffic system consisting of a mixture of adaptive cruise control (ACC) vehicles and manually controlled (manual) vehicles, using a hybrid modelling approach. In the hybrid approach, (i) the manual vehicles are described by a cellular automaton (CA) model, which can reproduce different traffic states (i.e., free flow, synchronised flow, and jam) as well as probabilistic traffic breakdown phenomena; (ii) the ACC vehicles are simulated by a car-following model, which removes the artificial velocity fluctuations due to intrinsic randomisation in the CA model. We have studied the traffic breakdown probability from free flow to congested flow and the phase transition probability from synchronised flow to jam in the mixed traffic system. The results are compared with those of a system in which both ACC vehicles and manual vehicles are simulated by CA models. The qualitative and quantitative differences are indicated.
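The CA component for manual vehicles can be sketched with a standard Nagel-Schreckenberg update; this is a generic textbook CA on a ring road (the paper's CA is more elaborate, reproducing synchronised flow, and all names below are our own illustrative choices).

```python
import random

# Minimal Nagel-Schreckenberg cellular automaton step (illustrative sketch).

def nasch_step(pos, vel, L, vmax=5, p=0.3, rng=None):
    """One parallel update on a ring of L cells: accelerate, brake, randomise, move."""
    rng = rng or random.Random()
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                    # vehicle ahead on the ring
        gap = (pos[j] - pos[i] - 1) % L           # empty cells in between
        v = min(vel[i] + 1, vmax, gap)            # accelerate but keep safe gap
        if v > 0 and rng.random() < p:            # intrinsic random slowdown
            v -= 1
        new_vel[i], new_pos[i] = v, (pos[i] + v) % L
    return new_pos, new_vel
```

The random slowdown with probability p is exactly the "intrinsic randomisation" the abstract mentions; it is what the car-following model for ACC vehicles removes.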
Institute of Scientific and Technical Information of China (English)
TANG NianSheng; CHEN XueDong; WANG XueRen
2009-01-01
Semiparametric reproductive dispersion nonlinear models (SRDNM) extend nonlinear reproductive dispersion models and semiparametric nonlinear regression models, and include the semiparametric nonlinear model and the semiparametric generalized linear model as special cases. Based on the local kernel estimate of the nonparametric component, profile-kernel and backfitting estimators of the parameters of interest are proposed in SRDNM, and a theoretical comparison of both estimators is also investigated in this paper. Under some regularity conditions, strong consistency and asymptotic normality of the two estimators are proved. It is shown that the backfitting method produces a larger asymptotic variance than the profile-kernel method. A simulation study and a real example are used to illustrate the proposed methodologies.
Modelling structured data with Probabilistic Graphical Models
Forbes, F.
2016-05-01
Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption not realistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph not necessarily regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and Hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given associated with some practical work.
Dreizler, S.; Wolff, B.
1999-08-01
We present a multi-wavelength spectral analysis of the DA white dwarf G 191-B2B. The employed atmospheric models account for gravitational settling and radiative levitation, which are, for the first time, calculated self-consistently with the atmospheric structure. The resulting spectra can reproduce the complete EUVE spectrum and the ultraviolet lines of iron. Some restrictions regarding the UV lines of other elements (C, N, O, Ni), however, still remain. In contrast to homogeneous models, it is not necessary to introduce additional photospheric or interstellar absorbers to account for the high opacity at lambda
Cafiso, Salvatore; Di Graziano, Alessandro; Di Silvestro, Giacomo; La Cava, Grazia; Persaud, Bhagwant
2010-07-01
In Europe, approximately 60% of road accident fatalities occur on two-lane rural roads. Thus, research to develop and enhance explanatory and predictive models for this road type continues to be of interest in mitigating these accidents. To this end, this paper describes a novel and extensive data collection and modeling effort to define accident models for two-lane road sections based on a unique combination of exposure, geometry, consistency and context variables directly related to the safety performance. The first part of the paper documents how these were identified for the segmentation of highways into homogeneous sections. Next is a description of the extensive data collection effort that utilized differential cinematic GPS surveys to define the horizontal alignment variables, and road safety inspections (RSIs) to quantify the other road characteristics related to safety. The final part of the paper focuses on the calibration of models for estimating the expected number of accidents on homogeneous sections that can be characterized by constant values of the explanatory variables. Several candidate models were considered for calibration using the Generalized Linear Modeling (GLM) approach. After considering the statistical significance of the parameters related to exposure, geometry, consistency and context factors, and goodness-of-fit statistics, 19 models were ranked and three were selected as the recommended models. The first of the three is a base model, with length and traffic as the only predictor variables; since these variables are the only ones likely to be available network-wide, this base model can be used in an empirical Bayesian calculation to conduct network screening for ranking "sites with promise" of safety improvement. The other two models represent the best statistical fits with different combinations of significant variables related to exposure, geometry, consistency and context factors. These multiple variable models can be used, with
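The empirical Bayesian network-screening step mentioned above combines the model prediction with a site's observed count. The sketch below shows the standard EB form for a negative-binomial safety performance function; the variable names are ours, not the paper's.

```python
# Empirical Bayes (EB) estimate of a site's long-run expected accident count
# (standard EB shrinkage; illustrative, not the paper's calibrated models).

def eb_expected(mu, observed, k):
    """mu: SPF-predicted accidents for the site over the study period;
    observed: accident count recorded at the site;
    k: negative-binomial overdispersion parameter of the SPF."""
    w = 1.0 / (1.0 + mu / k)              # weight given to the SPF prediction
    return w * mu + (1.0 - w) * observed  # shrink the count toward the SPF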
Lin, M. C.; Verboncoeur, J.
2016-10-01
The maximum electron current transmitted through a planar diode gap is limited by the space charge of electrons dwelling in the gap region, the so-called space charge limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL current can be raised dramatically, enhancing electron current transmission. In this work, we have developed a relativistic self-consistent model for studying the enhancement of the maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in the non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a benchmark for verification of simulation codes, as well as for extension to higher dimensions.
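For orientation, the baseline that this counter-streaming ion flow enhances is the classical Child-Langmuir law for a planar gap; the snippet evaluates that non-relativistic textbook limit (it is not the paper's relativistic bi-directional model).

```python
import math

# Classical (non-relativistic) Child-Langmuir space-charge-limited current
# density across a planar vacuum gap:  J = (4*eps0/9) * sqrt(2e/m) * V^{3/2} / d^2.

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    """SCL current density in A/m^2 for gap voltage in V and spacing in m."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2
```

The characteristic scalings, J proportional to V^{3/2} and to 1/d^2, are what the ion neutralization lifts: with the electron charge partially cancelled, more current fits through the same gap at the same voltage.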
Institute of Scientific and Technical Information of China (English)
Lan Chao-Hui; Lan Chao-Zhen; Hu Xi-Wei; Chen Zhao-Quan; Liu Ming-Hai
2009-01-01
A self-consistent and three-dimensional (3D) model of argon discharge in a large-scale rectangular surface-wave plasma (SWP) source is presented in this paper, based on the finite-difference time-domain (FDTD) approximation to Maxwell's equations self-consistently coupled with a fluid model for plasma evolution. The discharge characteristics at an input microwave power of 1200 W and a filling gas pressure of 50 Pa in the SWP source are analyzed. The simulation shows the time evolution of the deposited power density at different stages, and the 3D distributions of electron density and temperature in the chamber at steady state. In addition, the results show that the plasma density peaks at a vertical distance of approximately 3 cm from the quartz window.
Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.
2012-01-01
The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-est...
Directory of Open Access Journals (Sweden)
A. Mairesse
2013-12-01
Full Text Available The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern but the models underestimate the magnitude of some observed changes, and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation using a data-assimilation method based on a particle filter. In one simulation, all the 50 proxy-based records are used while in the other two only the continental or oceanic proxy-based records constrain the model results. As expected, data assimilation leads to improving the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at midlatitude that warms up northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is either incompatible with the signal recorded by some other proxy-based records or with model physics.
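The particle-filter data assimilation used above can be sketched on a scalar toy state; this is a generic bootstrap filter (the study assimilates 50 proxy records into LOVECLIM, and all names here are our own).

```python
import math
import random

# One bootstrap particle-filter step: propagate the ensemble, weight each
# particle by the likelihood of the observation, then resample. A random walk
# stands in for the climate model dynamics in this toy sketch.

def particle_filter_step(particles, obs, sigma_obs, sigma_dyn, rng):
    prop = [x + rng.gauss(0.0, sigma_dyn) for x in particles]        # propagate
    w = [math.exp(-0.5 * ((obs - x) / sigma_obs) ** 2) for x in prop]  # weight
    total = sum(w)
    # multinomial resampling proportional to the normalised weights
    return rng.choices(prop, weights=[wi / total for wi in w], k=len(prop))
```

Iterating this step pulls the ensemble toward the observed value while the dynamics keep it spread, which is how the filter nudges LOVECLIM toward the proxy signal without abandoning model physics.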
Energy Technology Data Exchange (ETDEWEB)
Andrade, Maria Celia Ramos; Ludwig, Gerson Otto [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Plasma]. E-mail: mcr@plasma.inpe.br
2004-07-01
Different bootstrap current formulations are implemented in a self-consistent equilibrium calculation obtained from a direct variational technique in fixed boundary tokamak plasmas. The total plasma current profile is supposed to have contributions of the diamagnetic, Pfirsch-Schlueter, and the neoclassical Ohmic and bootstrap currents. The Ohmic component is calculated in terms of the neoclassical conductivity, compared here among different expressions, and the loop voltage determined consistently in order to give the prescribed value of the total plasma current. A comparison among several bootstrap current models for different viscosity coefficient calculations and distinct forms for the Coulomb collision operator is performed for a variety of plasma parameters of the small aspect ratio tokamak ETE (Experimento Tokamak Esferico) at the Associated Plasma Laboratory of INPE, in Brazil. We have performed this comparison for the ETE tokamak so that the differences among all the models reported here, mainly regarding plasma collisionality, can be better illustrated. The dependence of the bootstrap current ratio upon some plasma parameters in the frame of the self-consistent calculation is also analysed. We emphasize in this paper what we call the Hirshman-Sigmar/Shaing model, valid for all collisionality regimes and aspect ratios, and a fitted formulation proposed by Sauter, which has the same range of validity but is faster to compute than the previous one. The advantages or possible limitations of all these different formulations for the bootstrap current estimate are analysed throughout this work. (author)
Directory of Open Access Journals (Sweden)
A. Mairesse
2013-07-01
Full Text Available The mid-Holocene (6 thousand years before present) is a key period to study the consistency between model results and proxy data, as it corresponds to a standard test for models and a reasonable number of proxy records are available. Taking advantage of this relatively large amount of information, we have first compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern but underestimate the magnitude of some observed changes, and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation using a data assimilation method based on a particle filter. In one simulation, all the 50 proxies are used while in the other two, only the continental or oceanic proxies constrain the model results. This assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at mid-latitude that warms up northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxies whose reconstructed signal is either incompatible with the one recorded by some other proxies or with model physics.
Young, Hsu-Wen Vincent; Hsu, Ke-Hsin; Pham, Van-Truong; Tran, Thi-Thao; Lo, Men-Tzung
2017-09-01
A new method for signal decomposition is proposed and tested. Based on self-consistent nonlinear wave equations with self-sustaining physical mechanisms in mind, the new method is adaptive and particularly effective for dealing with synthetic signals consisting of components of multiple time scales. By formulating the method into an optimization problem and developing the corresponding algorithm and tool, we have proved its usefulness not only for analyzing simulated signals, but, more importantly, also for real clinical data.
Berg, Matthew; Hartley, Brian; Richters, Oliver
2015-01-01
By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
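The simplest stock-flow consistent economy in the same spirit is Godley and Lavoie's "SIM" model (government spending, a flat tax, and household money). The sketch below is that standard textbook toy, not the paper's multi-sector input-output model with an energy industry; it illustrates how stock-flow consistency forces every flow to accumulate into a stock and how the system settles into a stationary state.

```python
# Godley-Lavoie "SIM" model (illustrative textbook sketch).
# Within each period: Y = C + G,  T = theta*Y,  YD = Y - T,
# C = a1*YD + a2*H(-1); the household money stock H absorbs saving.

def simulate_sim(periods=200, G=20.0, theta=0.2, a1=0.6, a2=0.4):
    """Return final output Y and household money stock H."""
    H = 0.0
    Y = 0.0
    for _ in range(periods):
        Y = (G + a2 * H) / (1.0 - a1 * (1.0 - theta))  # goods-market solution
        YD = (1.0 - theta) * Y    # disposable income after the flat tax
        C = a1 * YD + a2 * H      # consumption out of income and wealth
        H += YD - C               # stock-flow consistency: saving becomes money
    return Y, H
```

With these parameters the economy converges to the stationary state Y* = G/theta = 100 and H* = 80: a stationary equilibrium of exactly the kind the paper subjects to its stability analysis, reached here without any interest payments at all.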
A self-consistent model for the evolution of the gas produced in the debris disc of β Pictoris
Kral, Q.; Wyatt, M.; Carswell, R. F.; Pringle, J. E.; Matrà, L.; Juhász, A.
2016-09-01
This paper presents a self-consistent model for the evolution of gas produced in the debris disc of β Pictoris. Our model proposes that atomic carbon and oxygen are created from the photodissociation of CO, which is itself released from volatile-rich bodies in the debris disc due to grain-grain collisions or photodesorption. While the CO lasts less than one orbit, the atomic gas evolves by viscous spreading resulting in an accretion disc inside the parent belt and a decretion disc outside. The temperature, ionization fraction and population levels of carbon and oxygen are followed with the photodissociation region model CLOUDY, which is coupled to a dynamical viscous α model. We present new gas observations of β Pic, of C I observed with Atacama Pathfinder EXperiment and O I observed with Herschel, and show that these along with published C II and CO observations can all be explained with this new model. Our model requires a viscosity α > 0.1, similar to that found in sufficiently ionized discs of other astronomical objects; we propose that the magnetorotational instability is at play in this highly ionized and dilute medium. This new model can be tested from its predictions for high-resolution ALMA observations of C I. We also constrain the water content of the planetesimals in β Pic. The scenario proposed here might be at play in all debris discs and this model could be used more generally on all discs with C, O or CO detections.
Institute of Scientific and Technical Information of China (English)
刘建宾; 郝克刚; 龚世生
2001-01-01
Abstract Concept Structure Diagram, an abstract diagrammatized representation of program process logic, is a conceptual algorithm-description tool independent of the program implementation language. In this paper, a formal model of the Abstract Concept Structure Diagram and its graphical notations are presented, together with a smooth transition method from Abstract Concept Structure Diagram to JAVA Process Blueprint and the corresponding mapping rules. The validity and consistency of the concept program and the logical program are defined, and related theorems and proof procedures are also presented.
Queiroz, G.; Goulart, C.; Gaspar, J. L.; Gomes, A.; Resendes, J. P.; Marques, R.; Gonçalves, P.; Silveira, D.; Valadão, P.
2003-04-01
Geographic Information Systems (GIS) are becoming a major tool in the domain of geological hazard assessment and risk mitigation. When available, hazard and vulnerability data can easily be represented in a GIS, and a great diversity of risk maps can be produced following the implementation of specific predictive models. A major difficulty for those who work with GIS is obtaining high-quality, well geo-referenced and validated data. This situation is particularly evident in the scope of risk analysis due to the diversity of data that need to be considered. In order to develop a coherent database for the geological risk analysis of the Azores archipelago, it was decided to use the digital maps edited in 2001 by the Instituto Geográfico do Exército de Portugal (scale 1:25000), comprising altimetry, urban areas, roads and the stream network. For the particular case of S. Miguel Island, the information contained in these layers was revised and rectified whenever needed. Moreover, basic additional layers were added to the system, including county and parish administrative limits and agricultural and forested areas. For detailed studies, all edifices (e.g. houses, public buildings, monuments) are being individualized and characterized taking into account several parameters that can become crucial to assess their direct vulnerability to geological hazards (e.g. type of construction, number of floors, roof stability). Geological data obtained (1) through the interpretation of historical documents, (2) during recent fieldwork campaigns (e.g. mapping of volcanic centres and associated deposits, faults, dikes, soil degassing anomalies, landslides) and (3) by the existing monitoring networks (e.g. seismic, geodetic, fluid geochemistry) are also being digitised. The acquisition, storage and maintenance of all this information following the same criteria of quality are critical to guarantee the accuracy and consistency of the GIS database through time. In this
Cappelluti, Federica; Ma, Shuai; Pugliese, Diego; Sacco, Adriano; Lamberti, Andrea; Ghione, Giovanni; Tresso, Elena
2013-09-21
A numerical device-level model of dye-sensitized solar cells (DSCs) is presented, which self-consistently couples a physics-based description of the photoactive layer with a compact circuit-level description of the passive parts of the cell. The opto-electronic model of the nanoporous dyed film includes a detailed description of photogeneration and trap-limited kinetics, and a phenomenological description of nonlinear recombination. Numerical simulations of the dynamic small-signal behavior of DSCs, accounting for trapping and nonlinear recombination mechanisms, are reported for the first time and validated against experiments. The model is applied to build a consistent picture of the static and dynamic small-signal performance of nanocrystalline TiO2-based DSCs under different incident illumination intensity and direction, analyzed in terms of current-voltage characteristic, Incident Photon to Current Efficiency, and Electrochemical Impedance Spectroscopy. This is achieved with a reliable extraction and validation of a unique set of model parameters against a large enough set of experimental data. Such a complete and validated description allows us to gain a detailed view of the cell collection efficiency dependence on different operating conditions. In particular, based on dynamic numerical simulations, we provide for the first time a sound support to the interpretation of the diffusion length, in the presence of nonlinear recombination and non-uniform electron density distribution, as derived from small-signal characterization techniques and clarify its correlation with different estimation methods based on spectral measurements.
Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.
2007-01-01
We present the new version of the ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows simultaneous, self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and of the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting for the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows reevaluation of the thermalization efficiency of the absorbed solar ultraviolet energy and of the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.
Stability patterns for a size-structured population model and its stage-structured counterpart
DEFF Research Database (Denmark)
Zhang, Lai; Pedersen, Michael; Lin, Zhigui
2015-01-01
In this paper we compare a general size-structured population model, where a size-structured consumer feeds upon an unstructured resource, to its simplified stage-structured counterpart in terms of equilibrium stability. Stability of the size-structured model is understood in terms of an equivalent...... delayed system consisting of a renewal equation for the consumer population birth rate and a delayed differential equation for the resource. Results show that the size- and stage-structured models differ considerably with respect to equilibrium stability, although the two models have completely identical...
Linden, Tim; Anderson, Brandon
2010-01-01
A generic prediction in the paradigm of weakly interacting dark matter is the production of relativistic particles from dark matter pair-annihilation in regions of high dark matter density. Ultra-relativistic electrons and positrons produced in the center of the Galaxy by dark matter annihilation should produce a diffuse synchrotron emission. While the spectral shape of the synchrotron dark matter haze depends on the particle model (and secondarily on the galactic magnetic fields), the morphology of the haze depends primarily on (1) the dark matter density distribution, (2) the galactic magnetic field morphology, and (3) the diffusion model for high-energy cosmic-ray leptons. Interestingly, an unidentified excess of microwave radiation with characteristics similar to those predicted by dark matter models has been claimed to exist near the galactic center region in the data reported by the WMAP satellite, and dubbed the "WMAP haze". In this study, we carry out a self-consistent treatment of the variables enume...
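For morphology ingredient (1), the annihilation signal roughly traces the line-of-sight integral of the squared dark matter density. A minimal sketch, assuming an NFW profile with illustrative (not fitted) parameters:

```python
import numpy as np

def rho_nfw(r, rho_s=1.0, r_s=20.0):
    """NFW density profile (code units; rho_s and r_s are illustrative)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def annihilation_column(psi_deg, d_sun=8.5, s_max=60.0, n=4000):
    """Line-of-sight integral of rho^2 at angle psi from the Galactic centre.

    Pair-annihilation emissivity scales as density squared, so this integral
    sets the sky morphology (before diffusion and magnetic-field effects).
    """
    psi = np.radians(psi_deg)
    s = np.linspace(1e-3, s_max, n)                        # kpc along sightline
    r = np.sqrt(d_sun**2 + s**2 - 2.0 * d_sun * s * np.cos(psi))
    f = rho_nfw(r) ** 2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))     # trapezoid rule
```

The steep decline of this column with angle from the Galactic centre is why the predicted haze is centrally concentrated, before lepton diffusion smooths it.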
Energy Technology Data Exchange (ETDEWEB)
Filanovich, A.N., E-mail: a.n.filanovich@urfu.ru; Povzner, A.A., E-mail: a.a.povzner@urfu.ru
2016-06-15
A self-consistent thermodynamic model of PuCoGa{sub 5} is developed, which for the first time takes into account the anharmonicity of both acoustic phonons, described within a Debye model, and optical phonons, considered in an Einstein approximation. Within the framework of this model, we have calculated the temperature dependencies of lattice contributions to heat capacity, bulk modulus, volumetric coefficient of thermal expansion, Debye and Einstein temperatures and their Grüneisen parameters. The electronic heat capacity of PuCoGa{sub 5} is obtained, which demonstrates an unusual temperature dependence with two maxima. In addition, it is shown that an abnormal low temperature behavior of the bulk modulus of PuCoGa{sub 5} is not caused by the effects of lattice anharmonicity and is most likely due to the valence fluctuations, which is in agreement with previous studies.
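The lattice part of such a model is the sum of a Debye term for the acoustic branches and an Einstein term for the optical branches. A minimal sketch; the characteristic temperatures and the 1 + 6 atom split per PuCoGa5 formula unit are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def c_debye(T, theta_D, n_atoms=1.0):
    """Debye heat capacity for n_atoms acoustic-like atoms per formula unit."""
    xD = theta_D / T
    x = np.linspace(1e-6, xD, 2000)
    f = x**4 * np.exp(x) / np.expm1(x) ** 2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid rule
    return 9.0 * n_atoms * R * (T / theta_D) ** 3 * integral

def c_einstein(T, theta_E, n_atoms=1.0):
    """Einstein heat capacity for n_atoms optical-like atoms per formula unit."""
    x = theta_E / T
    return 3.0 * n_atoms * R * x**2 * np.exp(x) / np.expm1(x) ** 2

def c_lattice(T, theta_D=250.0, theta_E=120.0):
    # Illustrative split: 1 Debye-like atom (3 acoustic branches) plus
    # 6 Einstein-like atoms of the 7-atom PuCoGa5 formula unit.
    return c_debye(T, theta_D, 1.0) + c_einstein(T, theta_E, 6.0)
```

Both terms recover the Dulong-Petit limit 3nR at high temperature, and the Debye term gives the expected T³ behavior at low temperature.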
A Structured Population Model of Cell Differentiation
Doumic, Marie; Perthame, Benoit; Zubelli, Jorge P
2010-01-01
We introduce and analyze several aspects of a new model for cell differentiation. It assumes that differentiation of progenitor cells is a continuous process. From the mathematical point of view, it is based on partial differential equations of transport type. Specifically, it consists of a structured population equation with a nonlinear feedback loop. This models the signaling process due to cytokines, which regulate the differentiation and proliferation process. We compare the continuous model to its discrete counterpart, a multi-compartmental model of a discrete collection of cell subpopulations recently proposed by Marciniak-Czochra et al. in 2009 to investigate the dynamics of the hematopoietic system. We obtain uniform bounds for the solutions, characterize steady state solutions, and analyze their linearized stability. We show how persistence or extinction might occur according to values of parameters that characterize the stem cells self-renewal. We also perform numerical simulations and discuss the q...
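The transport-type structure of such equations can be illustrated with a minimal upwind discretization: constant maturation speed g, mortality d, and a birth influx at the smallest size, for which the analytic steady state is n(x) = (B/g)·e^{-dx/g}. All parameters are illustrative:

```python
import numpy as np

def steady_density(m=200, g=1.0, d=1.0, B=1.0, dt=0.002, n_steps=4000):
    """Relax dn/dt + g*dn/dx = -d*n on [0, 1] to steady state.

    First-order upwind in x (valid for maturation speed g > 0), with the
    renewal condition n(0) = B/g feeding newborns in at the smallest size.
    """
    x = np.linspace(0.0, 1.0, m)
    dx = x[1] - x[0]
    n = np.zeros(m)
    for _ in range(n_steps):
        n[0] = B / g                                         # renewal boundary
        n[1:] = n[1:] - dt * (g * (n[1:] - n[:-1]) / dx + d * n[1:])
    return x, n
```

The timestep respects the CFL condition dt < dx/g; in the full model the boundary birth rate would itself be determined by a nonlinear renewal equation rather than held fixed.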
Donnellan, M. Brent; Kenny, David A.; Trzesniewski, Kali H.; Lucas, Richard E.; Conger, Rand D.
2012-01-01
The present research used a latent variable trait-state model to evaluate the longitudinal consistency of self-esteem during the transition from adolescence to adulthood. Analyses were based on ten administrations of the Rosenberg Self-Esteem scale (Rosenberg, 1965) spanning the ages of approximately 13 to 32 for a sample of 451 participants. Results indicated that a completely stable trait factor and an autoregressive trait factor accounted for the majority of the variance in latent self-esteem assessments, whereas state factors accounted for about 16% of the variance in repeated assessments of latent self-esteem. The stability of individual differences in self-esteem increased with age consistent with the cumulative continuity principle of personality development. PMID:23180899
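The variance decomposition behind such a latent trait-state model can be illustrated by simulation: a completely stable trait, an autoregressive trait, and occasion-specific state shocks. The variance shares below are chosen to mirror the reported ~16% state variance but are otherwise illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trait_state(n=20000, waves=10, var_trait=0.50,
                         var_ar=0.34, beta=0.8, var_state=0.16):
    """y_t = T + a_t + s_t: stable trait T, AR(1) trait a_t, state s_t."""
    T = rng.normal(0.0, np.sqrt(var_trait), n)
    a = rng.normal(0.0, np.sqrt(var_ar), n)        # stationary AR(1) start
    innov_sd = np.sqrt(var_ar * (1.0 - beta**2))   # keeps Var(a_t) = var_ar
    Y = np.empty((waves, n))
    for t in range(waves):
        s = rng.normal(0.0, np.sqrt(var_state), n)
        Y[t] = T + a + s
        a = beta * a + rng.normal(0.0, innov_sd, n)
    return Y
```

With these shares the lag-1 consistency is (var_trait + beta·var_ar)/total ≈ 0.77, decaying toward the stable-trait floor of 0.50 at longer lags, which is the qualitative pattern such models are designed to separate.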
Budkov, Yu. A.; Nogovitsyn, E. A.; Kiselev, M. G.
2013-04-01
A theoretical approach to calculating the thermodynamic and structural functions of polyelectrolyte solutions, based on the Gaussian equivalent representation for the calculation of functional integrals, is proposed. A new analytical result of this work is the explicit treatment of counterions, together with an equation for the gyration radius of a polymer chain as a function of the concentrations of monomers and of added low-molecular-weight salt. An equation of state is obtained within the proposed model. Our theoretical results are used to describe the thermodynamic and structural properties of an aqueous solution of sodium polystyrene sulfonate with additions of NaCl.
Track structure in biological models.
Curtis, S B
1986-01-01
High-energy heavy ions in the galactic cosmic radiation (HZE particles) may pose a special risk during long term manned space flights outside the sheltering confines of the earth's geomagnetic field. These particles are highly ionizing, and they and their nuclear secondaries can penetrate many centimeters of body tissue. The three dimensional patterns of ionizations they create as they lose energy are referred to as their track structure. Several models of biological action on mammalian cells attempt to treat track structure or related quantities in their formulation. The methods by which they do this are reviewed. The proximity function is introduced in connection with the theory of Dual Radiation Action (DRA). The ion-gamma kill (IGK) model introduces the radial energy-density distribution, which is a smooth function characterizing both the magnitude and extension of a charged particle track. The lethal, potentially lethal (LPL) model introduces lambda, the mean distance between relevant ion clusters or biochemical species along the track. Since very localized energy depositions (within approximately 10 nm) are emphasized, the proximity function as defined in the DRA model is not of utility in characterizing track structure in the LPL formulation.
Bast, Radovan; Thorvaldsen, Andreas J.; Ringholm, Magnus; Ruud, Kenneth
2009-02-01
We present the first analytic calculations of the second hyperpolarizability in a relativistic framework. The calculations are made possible by our recent developments of a response theory built on a quasienergy formalism, in which the basis set may be both time and perturbation dependent. The approach is formulated for an arbitrary self-consistent field state in the atomic orbital basis. The implementation consists of a stand-alone code that only requires the unperturbed density in the atomic orbital basis as input, as well as a linear response solver by which we can determine the perturbed density matrices to different orders, at each new order solving equations that have the same structure as the linear response equation. Using these features of our formalism, we extend in this paper our approach to the relativistic domain, utilizing both two- and four-component relativistic wave functions. We apply the formalism to the calculation of the electronic and pure vibrational contributions to the second hyperpolarizability tensor for the hydrogen halides. Our results demonstrate that relativistic effects can be substantial for frequency-dependent second hyperpolarizabilities. Due to changes in the pole structure when going to the relativistic domain, the relativistic corrections to the hyperpolarizabilities are not transferable between different optical processes, except for very low frequencies.
Energy Technology Data Exchange (ETDEWEB)
Bast, Radovan; Thorvaldsen, Andreas J.; Ringholm, Magnus [Centre for Theoretical and Computational Chemistry (CTCC), Department of Chemistry, University of Tromso, N-9037 Tromso (Norway); Ruud, Kenneth [Centre for Theoretical and Computational Chemistry (CTCC), Department of Chemistry, University of Tromso, N-9037 Tromso (Norway)], E-mail: kenneth.ruud@chem.uit.no
2009-02-17
Greczynski, G.; Hultman, L.
2016-11-01
We present the first self-consistent modelling of x-ray photoelectron spectroscopy (XPS) Ti 2p, N 1s, O 1s, and C 1s core level spectra with cross-peak quantitative agreement for a series of TiN thin films grown by dc magnetron sputtering and oxidized to different extents by varying the venting temperature Tv of the vacuum chamber before removing the deposited samples. The film series so obtained constitutes a model case for XPS application studies, where a certain degree of atmosphere exposure during sample transfer to the XPS instrument is unavoidable. The challenge is to extract information about surface chemistry without invoking destructive pre-cleaning with noble gas ions. All TiN surfaces are thus analyzed in the as-received state by XPS using monochromatic Al Kα radiation (hν = 1486.6 eV). Details of line shapes and relative peak areas obtained from deconvolution of the reference Ti 2p and N 1s spectra representative of a native TiN surface serve as input to model the complex core level signals from air-exposed surfaces, where contributions from oxides and oxynitrides make the task very challenging considering the influence of the whole deposition process at hand. The essential part of the presented approach is that the deconvolution process is not only guided by comparison to reference binding energy values, which often show a large spread; to increase the reliability of the extracted chemical information, the requirement of both qualitative and quantitative self-consistency between component peaks belonging to the same chemical species is imposed across all core-level spectra (including the often neglected O 1s and C 1s signals). The relative ratios between contributions from different chemical species vary as a function of Tv, presenting a self-consistency check for our model. We propose that cross-peak self-consistency should be a prerequisite for reliable XPS peak modelling, as it enhances the credibility of the obtained chemical information, while relying
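The quantitative side of such cross-peak checks rests on standard XPS quantification: atomic fractions are proportional to component peak areas divided by relative sensitivity factors, and components assigned to the same chemical species must yield consistent stoichiometry across core levels. A minimal sketch with hypothetical fitted areas and illustrative sensitivity factors (not values from the paper):

```python
def atomic_fractions(areas, rsf):
    """Atomic fractions n_i proportional to A_i / RSF_i
    (homogeneous-sample approximation)."""
    weighted = {k: areas[k] / rsf[k] for k in areas}
    total = sum(weighted.values())
    return {k: v / total for k, v in weighted.items()}

# Hypothetical component areas (arbitrary units) and illustrative sensitivity
# factors. A cross-peak self-consistency check requires, e.g., the Ti-N
# component of Ti 2p and the N-Ti component of N 1s to give Ti:N near 1
# for stoichiometric TiN.
areas = {"Ti2p_TiN": 4.34, "N1s_TiN": 1.00, "O1s_oxide": 1.50}
rsf = {"Ti2p_TiN": 7.81, "N1s_TiN": 1.80, "O1s_oxide": 2.93}
frac = atomic_fractions(areas, rsf)
ti_to_n = frac["Ti2p_TiN"] / frac["N1s_TiN"]
```

In the approach described above, this kind of ratio constraint is imposed simultaneously across all four core levels rather than checked peak by peak.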
Energy Technology Data Exchange (ETDEWEB)
NONE
2000-03-01
Development and marketing began of a microwave consistency meter with the world's smallest bore of 50 mm (non-sanitary type) for general industrial fields, with the food industry as the main target. The commercialization of this model made inline consistency measurement possible in flow processes such as foodstuff material processing. In addition, since the maximum fluid conductivity specification is set to 15 mS/cm, the applicable range of consistency measurements is expanded and high-consistency measurement has become possible, which could not be realized with conventional consistency meters. Application to diverse processes, such as chemical plants, is therefore possible. (translated by NEDO)
Distributed Prognostics based on Structural Model Decomposition
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition
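The structural decomposition idea can be sketched in toy form: measured variables become local inputs, and the remaining equations are grouped into independent submodels by the unknowns they share. This is an illustrative reconstruction of the idea, not the authors' algorithm:

```python
from collections import defaultdict

def decompose(equations, measured):
    """Group equations into independent submodels.

    equations: dict mapping equation name -> set of variables it involves.
    measured:  variables observed by sensors; they act as local inputs and
               therefore do not couple equations together.
    """
    var_to_eqs = defaultdict(list)
    for name, variables in equations.items():
        for v in variables:
            if v not in measured:
                var_to_eqs[v].append(name)
    adj = defaultdict(set)                 # equations sharing an unknown
    for eqs in var_to_eqs.values():
        for a in eqs:
            for b in eqs:
                if a != b:
                    adj[a].add(b)
    seen, submodels = set(), []
    for name in equations:                 # connected components = submodels
        if name in seen:
            continue
        stack, comp = [name], set()
        while stack:
            e = stack.pop()
            if e not in seen:
                seen.add(e)
                comp.add(e)
                stack.extend(adj[e] - seen)
        submodels.append(comp)
    return submodels

# Toy model: measuring x2 decouples {e1, e2} from {e3, e4}, so two local
# estimation/prediction problems can be solved in parallel.
submodels = decompose(
    {"e1": {"x1", "u"}, "e2": {"x1", "x2"},
     "e3": {"x2", "x3"}, "e4": {"x3", "x4"}},
    measured={"u", "x2"},
)
```

Each resulting submodel depends only on its own unknowns plus measured inputs, which is what makes the local prognostics subproblems independent.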