Running of radiative neutrino masses: the scotogenic model — revisited
Merle, Alexander; Platscher, Moritz [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany)
2015-11-23
A few years ago, it was shown that effects stemming from renormalisation group running can be quite large in the scotogenic model, where neutrinos obtain their mass only via a 1-loop diagram (or, more generally, in many models in which the light neutrino mass is generated via quantum corrections at loop level). We present a new computation of the renormalisation group equations (RGEs) for the scotogenic model, thereby updating previous results. We discuss the matching in detail, in particular with regard to the different mass spectra possible for the new particles involved. We furthermore develop approximate analytical solutions to the RGEs for an extensive list of illustrative cases, covering all general tendencies that can appear in the model. Comparing them with fully numerical solutions, we give a comprehensive discussion of the running in the scotogenic model. Our approach is mainly top-down, but we also discuss an attempt to obtain information on the values of the fundamental parameters by inputting the low-energy measured quantities in a bottom-up manner. This work serves as the basis for a full parameter scan of the model, relating its low- and high-energy phenomenology in order to fully exploit the available information.
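As an illustration of the kind of top-down running discussed above, here is a minimal sketch of integrating a generic one-loop gauge-coupling RGE, dg/d ln μ = b g³/(16π²), and checking it against the closed-form solution. The coefficient b = -19/6 (the one-loop SM SU(2)_L value) and the boundary value g = 0.65 are illustrative stand-ins, not the scotogenic model's actual RGEs:

```python
import math

def run_coupling_numeric(g0, b, t0, t1, steps=10000):
    """Integrate dg/dt = b*g^3/(16*pi^2), with t = ln(mu), using RK4."""
    h = (t1 - t0) / steps
    g = g0
    def f(g):
        return b * g**3 / (16 * math.pi**2)
    for _ in range(steps):
        k1 = f(g)
        k2 = f(g + 0.5 * h * k1)
        k3 = f(g + 0.5 * h * k2)
        k4 = f(g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return g

def run_coupling_analytic(g0, b, t0, t1):
    """Closed form: 1/g^2(t) = 1/g0^2 - b*(t - t0)/(8*pi^2)."""
    inv = 1.0 / g0**2 - b * (t1 - t0) / (8 * math.pi**2)
    return 1.0 / math.sqrt(inv)

# run from mu = 1 up to mu = 1e13 (in arbitrary units of the input scale)
g_num = run_coupling_numeric(0.65, -19/6, 0.0, math.log(1e13))
g_ana = run_coupling_analytic(0.65, -19/6, 0.0, math.log(1e13))
```

The numeric-versus-analytic comparison mirrors the strategy of checking approximate analytical solutions against fully numerical ones.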
High precision mass measurements in Ψ and Υ families revisited
Artamonov, A.S.; Baru, S.E.; Blinov, A.E.
2000-01-01
High precision mass measurements in Ψ and Υ families performed in 1980-1984 at the VEPP-4 collider with the OLYA and MD-1 detectors are revisited. The corrections for the new value of the electron mass are presented. The effect of the updated radiative corrections has been calculated for the J/Ψ(1S) and Ψ(2S) mass measurements.
The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling
van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.
2017-12-01
The mass distribution of stellar-mass black holes can provide important clues to supernova modelling, but observationally it is still ill constrained. Therefore, it is of importance to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.
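The inclination sensitivity quoted above can be made explicit: for a fixed binary mass function and mass ratio, the derived black hole mass scales as 1/sin³ i. A minimal sketch (this rescaling ignores the simultaneous refitting of other parameters, so it is only an approximate consistency check):

```python
import math

def rescale_bh_mass(m_old, i_old_deg, i_new_deg):
    """At fixed mass function f(M) and mass ratio q,
    M * sin^3(i) is constant, so M scales as 1/sin^3(i)."""
    return m_old * (math.sin(math.radians(i_old_deg)) /
                    math.sin(math.radians(i_new_deg)))**3

# Cantrell et al.: M = 6.61 Msun at i = 51.0 deg; revised fit gives i = 54.1 deg
m_new = rescale_bh_mass(6.61, 51.0, 54.1)  # ~5.84 Msun, consistent with 5.86 +/- 0.24
```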
Neretnieks, Ivars; Liu Longcheng; Moreno, Luis
2010-03-01
Models are presented for solute transport between seeping water in fractured rock and a copper canister embedded in a clay buffer. The migration through an undamaged buffer is by molecular diffusion only, as the clay has such low hydraulic conductivity that water flow can be neglected. In the fractures and in any damaged zone, seeping water carries the solutes to or from the vicinity of the buffer in the deposition hole. During the time the water passes the deposition hole, molecular diffusion aids in the mass transfer of solutes between the water/buffer interface and the water at some distance from the interface. The residence time of the water and the contact area between the water and the buffer determine the rate of mass transfer between water and buffer. Simple analytical solutions are presented for the mass transfer in the seeping water. For complex migration geometries, simplifying assumptions are made that allow analytical solutions to be obtained. The influence of variable apertures on the mass transfer is discussed and is shown to be moderate. The impact of damage to the rock around the deposition hole by spalling and by the presence of a cemented and fractured buffer is also explored. These phenomena lead to an increase of mass transfer between water and buffer. The overall rate of mass transfer between the bulk of the water and the canister is proportional to the overall concentration difference and inversely proportional to the sum of the mass transfer resistances. For visualization purposes the concept of equivalent flowrate is introduced. This entity can be thought of as the flowrate of water that will be depleted of its solute during the water passage past the deposition hole. The equivalent flowrate is also used to assess the release rate of radionuclides from a damaged canister. Examples are presented to illustrate how various factors influence the rate of mass transfer.
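The series-of-resistances statement above works exactly like resistors in series: the transfer rate is the overall concentration difference divided by the sum of the resistances, and the equivalent flowrate is that rate per unit concentration difference. A minimal sketch with purely illustrative resistance values:

```python
def mass_transfer_rate(delta_c, resistances):
    """Overall rate N = dC / sum(R_i) for transport steps in series."""
    return delta_c / sum(resistances)

def equivalent_flowrate(resistances):
    """Q_eq: the flowrate of water that would be fully depleted of solute;
    numerically, the transfer rate per unit concentration difference."""
    return 1.0 / sum(resistances)

# Illustrative resistances only (s/m^3): water boundary layer, buffer, damaged zone
R = [2.0e8, 5.0e8, 1.0e8]
N = mass_transfer_rate(1.0e-3, R)   # mol/s for dC = 1e-3 mol/m^3
Q_eq = equivalent_flowrate(R)       # m^3/s
```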
Asymmetric Gepner models (revisited)
Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands)] [Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N., E-mail: t58@nikhef.n [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands)] [Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain)] [IMAPP, Radboud Universiteit, Nijmegen (Netherlands)
2010-12-11
We reconsider a class of heterotic string theories studied in 1989, based on tensor products of N=2 minimal models with asymmetric simple current invariants. We extend this analysis from (2,2) and (1,2) spectra to (0,2) spectra with SO(10) broken to the Standard Model. In the latter case the spectrum must contain fractionally charged particles. We find that in nearly all cases at least some of them are massless. However, we identify a large subclass where the fractional charges are at worst half-integer, and often vector-like. The number of families is very often reduced in comparison to the 1989 results, but there are no new tensor combinations yielding three families. All tensor combinations turn out to fall into two classes: those where the number of families is always divisible by three, and those where it is never divisible by three. We find an empirical rule to determine the class, which appears to extend beyond minimal N=2 tensor products. We observe that distributions of physical quantities such as the number of families, singlets and mirrors have an interesting tendency towards smaller values as the gauge group approaches the Standard Model. We compare our results with an analogous class of free fermionic models. This displays similar features, but with less resolution. Finally we present a complete scan of the three family models based on the triply-exceptional combination (1,16*,16*,16*) identified originally by Gepner. We find 1220 distinct three family spectra in this case, forming 610 mirror pairs. About half of them have the gauge group SU(3)×SU(2)_L×SU(2)_R×U(1)^5, the theoretical minimum, and many others are trinification models.
Fawaz, S.; Khan, Zulfiquar A.; Mossa, Samir Y.
2006-01-01
A new definition is proposed for analyzing the consultation in primary health care, one that integrates other models of consultation and provides a framework by which general practitioners can apply the principles of consultation, using communication skills to reconcile the respective agendas and autonomy of both doctor and patient into a negotiated, agreed plan that includes both management of health problems and health promotion. The success of consultations depends on time and on mutual cooperation between patient and doctor, as shown by the doctor-patient relationship. (author)
Kern, J.
1996-01-01
The problem of identification of multiphonon states for vibrational nuclei is discussed. It is shown that an examination of the excitation patterns provides an adequate filter to select good or potentially good vibrational nuclei, since the global nuclear properties (such as the level energies) are less strongly perturbed by the presence of additional structures than the local properties (like the wave functions and the transition probabilities). The energies of the first 2+ states are systematically low by about 15% with respect to the values expected from the global nuclear properties. This appears to contradict the general belief that these states have a high purity. The experimental results are compared with the predictions of the Brink model. It is concluded that the predictions are quite good, but that it is necessary to renormalize the 1-phonon energy, i.e. to increase it by about 15%. Since the modified Brink method involves only the use of a virtual 2_1^+ energy and no level fit, a problem of weights cannot be invoked. The calculations confirm the existence of multiphonon states at high excitation energies and the persistence of the symmetry properties well inside regions where one would expect the appearance of disorder.
The light gluino mass window revisited
Janot, Patrick
2003-01-01
The precise measurements of the "electroweak observables" performed at LEP and SLC are well consistent with the standard model predictions. Deviations from the standard model arising from vacuum polarization diagrams (also called "weak loop corrections") have been constrained in a model-independent manner with the epsilon formalism. Within the same formalism, additional deviations from new physics production processes can also be constrained, still in a model-independent way. For instance, a 95% C.L. limit on Delta Gamma_had can be derived; when applied to the q qbar gluino gluino production process, it allows an absolute lower limit to be set on the gluino mass, m_gluino > 6.3 GeV/c2 at 95% C.L., which definitely closes the so-called light gluino mass window.
Mass inflation inside black holes revisited
Dokuchaev, Vyacheslav I
2014-01-01
The mass inflation phenomenon implies that black hole interiors are unstable due to a back-reaction divergence of the perturbed black hole mass function at the Cauchy horizon. Mass inflation was initially derived by using the generalized Dray–'t Hooft–Redmount (DTR) relation in the linear approximation of the Einstein equations near the perturbed Cauchy horizon of the Reissner–Nordström black hole. However, this linear approximation for the DTR relation is improper for the highly nonlinear behavior of back-reaction perturbations at the black hole horizons. An additional weak point in the standard mass inflation calculations is the fallacious use of the global Cauchy horizon, instead of the local inner apparent horizon, as the place of maximal growth of the back-reaction perturbations. A new spherically symmetric back-reaction solution is derived for two counter-streaming light-like fluxes near the inner apparent horizon of the charged black hole, taking into account its separation from the Cauchy horizon. In this solution the back-reaction perturbations of the background metric are indeed largest at the inner apparent horizon but nevertheless remain small. The back reaction additionally removes the infinite blue-shift singularity at the inner apparent horizon and at the Cauchy horizon. (paper)
A revisited standard solar model
Casse, M.; Cahen, S.; Doom, C.
1985-09-01
Recent models of the Sun, including our own, based on canonical physics and featuring modern reaction rates and radiative opacities are presented. They lead to a presolar helium abundance of approximately 0.28 by mass, at variance with the value of 0.25 proposed by Bahcall et al. (1982, 1985), but in better agreement with the value found in the Orion nebula. Most models predict a neutrino counting rate greater than 6 SNU in the chlorine-argon detector, which is at least 3 times higher than the observed rate. The primordial helium abundance derived from the solar one, on the basis of recent models of helium production from the birth of the Galaxy to the birth of the Sun, Y_P approximately 0.26, is significantly higher than the value inferred from observations of extragalactic metal-poor nebulae (Y approximately 0.23). This indicates that the stellar production of helium is probably underestimated by the models considered.
Revisiting fifth forces in the Galileon model
Burrage, Clare [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Seery, David [Sussex Univ., Brighton (United Kingdom). Dept. of Physics and Astronomy
2010-05-15
A Galileon field is one which obeys a spacetime generalization of the non-relativistic Galilean invariance. Such a field may possess non-canonical kinetic terms, but ghost-free theories with a well-defined Cauchy problem exist, constructed using a finite number of relevant operators. The interactions of this scalar with matter are hidden by the Vainshtein effect, causing the Galileon to become weakly coupled near heavy sources. We revisit estimates of the fifth force mediated by a Galileon field, and show that the parameters of the model are less constrained by experiment than previously supposed. (orig.)
Simple Tidal Prism Models Revisited
Luketina, D.
1998-01-01
Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most textbooks on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically sound simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which cannot.
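For orientation, here is the classical textbook estimate that such prism models refine: a well-mixed estuary of volume V exchanging a tidal prism P each tidal period T has flushing time T_f ≈ VT/P, often corrected by a return-flow factor b for ebb water re-entering on the flood. A sketch with illustrative numbers (this is the simple model under critique, not the corrected one derived in the paper):

```python
def flushing_time(volume, prism, period_s, return_factor=0.0):
    """Classical tidal prism flushing time T_f = V*T / ((1 - b) * P).
    return_factor b in [0, 1) is the fraction of ebbed water that returns."""
    return volume * period_s / ((1.0 - return_factor) * prism)

# Illustrative estuary: V = 1e8 m^3, P = 2e7 m^3, semidiurnal tide T = 44700 s
tf_ideal = flushing_time(1e8, 2e7, 44700)        # no return flow
tf_return = flushing_time(1e8, 2e7, 44700, 0.5)  # 50% return flow doubles T_f
```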
Pilotto, F.; Vasconcellos, C.A.Z.; Coelho, H.T.
2001-01-01
In this work we develop a new version of the fuzzy bag model. The main idea is to include the conservation of energy and momentum in the model. This feature is not included in the original formulation of the fuzzy bag model, but it is of paramount importance for interpreting the model as a bag model - that is, a model in which the outward pressure of the quarks inside the bag is balanced by the inward pressure of the non-perturbative vacuum outside the bag - as opposed to a relativistic potential model, in which there is no energy-momentum conservation. In the MIT bag model, as well as in the original version of the fuzzy bag model, the non-perturbative QCD vacuum is parametrized by a constant B in the Lagrangian density. One immediate consequence of including energy-momentum conservation in the fuzzy bag model is that the bag constant B acquires a radial dependence, B = B(r). (author)
Pilotto, F.; Vasconcellos, C.A.Z. [Rio Grande do Sul Univ., Porto Alegre, RS (Brazil). Inst. de Fisica; Coelho, H.T. [Pernambuco Univ., Recife, PE (Brazil). Inst. de Fisica
2001-07-01
In this work we develop a new version of the fuzzy bag model. The main idea is to include the conservation of energy and momentum in the model. This feature is not included in the original formulation of the fuzzy bag model, but it is of paramount importance for interpreting the model as a bag model - that is, a model in which the outward pressure of the quarks inside the bag is balanced by the inward pressure of the non-perturbative vacuum outside the bag - as opposed to a relativistic potential model, in which there is no energy-momentum conservation. In the MIT bag model, as well as in the original version of the fuzzy bag model, the non-perturbative QCD vacuum is parametrized by a constant B in the Lagrangian density. One immediate consequence of including energy-momentum conservation in the fuzzy bag model is that the bag constant B acquires a radial dependence, B = B(r). (author)
A revisited standard solar model
Casse, M.; Cahen, S.; Doom, C.
1987-01-01
Recent models of the Sun, including our own, based on canonical physics and featuring modern reaction rates and radiative opacities are presented. They lead to a presolar helium abundance in better agreement with the value found in the Orion nebula. Most models predict a neutrino counting rate greater than 6 SNU in the chlorine-argon detector, which is at least 3 times higher than the observed rate. The primordial helium abundance derived from the solar one, on the basis of recent models of helium production from the birth of the Galaxy to the birth of the Sun, is significantly higher than the value inferred from observations of extragalactic metal-poor nebulae. This indicates that the stellar production of helium is probably underestimated by the models considered.
Revisiting the Lund Fragmentation Model
Andersson, B.; Nilsson, A.
1992-10-01
We present a new method to implement the Lund Model fragmentation distributions for multi-gluon situations. The method of Sjoestrand, implemented in the well-known Monte Carlo simulation program JETSET, is robust and direct and according to his findings there are no observable differences between different ways to implement his scheme. His method can be described as a space-time method because the breakup proper time plays a major role. The method described in this paper is built on energy-momentum space methods. We make use of the χ-curve, which is defined directly from the energy momentum vectors of the partons. We have shown that the χ-curve describes the breakup properties and the final state energy momentum distributions in the mean. We present a method to find the variations around the χ-curve, which also implements the basic Lund Model fragmentation distributions (the area-law and the corresponding iterative cascade). We find differences when comparing the corresponding Monte Carlo implementation REVJET to the JETSET distributions inside the gluon jets. (au)
Sub-Chandrasekhar-mass White Dwarf Detonations Revisited
Shen, Ken J.; Kasen, Daniel; Miles, Broxton J.; Townsley, Dean M.
2018-02-01
The detonation of a sub-Chandrasekhar-mass white dwarf (WD) has emerged as one of the most promising Type Ia supernova (SN Ia) progenitor scenarios. Recent studies have suggested that the rapid transfer of a very small amount of helium from one WD to another is sufficient to ignite a helium shell detonation that subsequently triggers a carbon core detonation, yielding a “dynamically driven double-degenerate double-detonation” SN Ia. Because the helium shell that surrounds the core explosion is so minimal, this scenario approaches the limiting case of a bare C/O WD detonation. Motivated by discrepancies in previous literature and by a recent need for detailed nucleosynthetic data, we revisit simulations of naked C/O WD detonations in this paper. We disagree to some extent with the nucleosynthetic results of previous work on sub-Chandrasekhar-mass bare C/O WD detonations; for example, we find that a median-brightness SN Ia is produced by the detonation of a 1.0 M⊙ WD instead of a more massive and rarer 1.1 M⊙ WD. The neutron-rich nucleosynthesis in our simulations agrees broadly with some observational constraints, although tensions remain with others. There are also discrepancies related to the velocities of the outer ejecta and light curve shapes, but overall our synthetic light curves and spectra are roughly consistent with observations. We are hopeful that future multidimensional simulations will resolve these issues and further bolster the dynamically driven double-degenerate double-detonation scenario’s potential to explain most SNe Ia.
Tegtmeier, Silke; Meyer, Verena; Pakura, Stefanie
2017-01-01
Purpose: Entrepreneurship is shaped by a male norm, which has been widely demonstrated in qualitative studies. The authors strive to complement these methods by a quantitative approach. First, gender role stereotypes were measured in entrepreneurship. Second, the explicit notions of participants were captured when they described entrepreneurs. Therefore, this paper aims to revisit gender role stereotypes among young adults. Design/methodology/approach: To measure stereotyping, participants were asked to describe entrepreneurs in general and either women or men in general. The Schein... Findings: The images of men and entrepreneurs show a high and significant congruence (r = 0.803), mostly in those adjectives that are untypical for men and entrepreneurs. The congruence of women and entrepreneurs was low (r = 0.152) and insignificant. Contrary to the participants’ beliefs, their explicit notions did...
A Multi-Level Model of Moral Functioning Revisited
Reed, Don Collins
2009-01-01
The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…
Constraints on constituent quark masses from potential models
Silvestre-Brac, B.
1998-01-01
Starting from reasonable hypotheses, the magnetic moments of the baryons are revisited in the light of general space wave functions. They allow very severe bounds to be put on the quark masses as derived from usual potential models. The experimental situation cannot be explained in the framework of such models. (author)
Masses of scalar and axial-vector B mesons revisited
Cheng, Hai-Yang [Academia Sinica, Institute of Physics, Taipei (China); Yu, Fu-Sheng [Lanzhou University, School of Nuclear Science and Technology, Lanzhou (China)
2017-10-15
The SU(3) quark model encounters a great challenge in describing even-parity mesons. Specifically, the q anti-q quark model has difficulties in understanding the light scalar mesons below 1 GeV, the scalar and axial-vector charmed mesons, and the 1^+ charmonium-like state X(3872). A common wisdom for the resolution of these difficulties lies in the coupled channel effects which distort the quark model calculations. In this work, we focus on the near mass degeneracy of the scalar charmed mesons D_{s0}^* and D_0^{*0}, and its implications. Within the framework of heavy meson chiral perturbation theory, we show that the near degeneracy can be qualitatively understood as a consequence of self-energy effects due to strong coupled channels. Quantitatively, the closeness of the D_{s0}^* and D_0^{*0} masses can be implemented by adjusting two relevant strong couplings and the renormalization scale appearing in the loop diagram. This in turn implies the mass similarity of the B_{s0}^* and B_0^{*0} mesons. The P_0^* P_1' interaction with the Goldstone boson is crucial for understanding the phenomenon of near degeneracy. Based on heavy quark symmetry in conjunction with corrections from QCD and 1/m_Q effects, we obtain the masses of the B_{(s)0}^* and B_{(s)1}' mesons, for example M_{B_{s0}^*} = (5715 ± 1) MeV + δΔ_S and M_{B_{s1}'} = (5763 ± 1) MeV + δΔ_S, with δΔ_S being the 1/m_Q corrections. We find that the predicted mass difference of 48 MeV between B_{s1}' and B_{s0}^* is larger than the 20-30 MeV inferred from the relativistic quark models, whereas the difference of 15 MeV between the central values of M_{B_{s1}'} and M_{B_1'} is much smaller than the quark model expectation of 60-100 MeV. Experimentally, it is important to have a precise
The hard-core model on random graphs revisited
Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka
2013-01-01
We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbour hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degree are quite well understood, they yield qualitatively different results, and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.
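As a concrete reminder of the combinatorial object involved: allowed hard-core configurations are exactly the independent sets of the graph, and the densest packing is a maximum independent set. A brute-force sketch on a tiny graph (illustrative only; the cavity-method results above concern large random regular graphs):

```python
from itertools import combinations

def max_independent_set_size(n, edges):
    """Largest set of vertices containing no edge (exhaustive search)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return k
    return 0

# 5-cycle: hard-core particles on C_5 pack at density 2/5
cycle5 = [(i, (i + 1) % 5) for i in range(5)]
size = max_independent_set_size(5, cycle5)  # -> 2
```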
Quark matter revisited with non-extensive MIT bag model
Cardoso, Pedro H.G.; Nunes da Silva, Tiago; Menezes, Debora P. [Universidade Federal de Santa Catarina, Departamento de Fisica, CFM, Florianopolis (Brazil); Deppman, Airton [Instituto de Fisica da Universidade de Sao Paulo, Sao Paulo (Brazil)
2017-10-15
In this work we revisit the MIT bag model to describe quark matter within both the usual Fermi-Dirac and the Tsallis statistics. We verify the effects of the non-additivity of the latter by analysing two different pictures: the first order phase transition of the QCD phase diagram and stellar matter properties. While the QCD phase diagram is visually affected by the Tsallis statistics, the resulting effects on quark star macroscopic properties are barely noticed. (orig.)
A Structural Equation Model of Risk Perception of Rockfall for Revisit Intention
Ya-Fen Lee; Yun-Yao Chi
2014-01-01
The study aims to explore the relationship between the risk perception of rockfall and revisit intention using Structural Equation Modeling (SEM). A total of 573 valid questionnaires were collected from travelers to Taroko National Park, Taiwan. The findings show that the majority of travelers have a medium perception of rockfall risk and are willing to revisit Taroko National Park. The revisit intention is influenced by hazardous preferences, willingness-to-pa...
The isobaric multiplet mass equation for A≤71 revisited
Lam, Yi Hua, E-mail: lamyihua@gmail.com [CENBG (UMR 5797 — Université Bordeaux 1 — CNRS/IN2P3), Chemin du Solarium, Le Haut Vigneau, BP 120, 33175 Gradignan Cedex (France); Blank, Bertram, E-mail: blank@cenbg.in2p3.fr [CENBG (UMR 5797 — Université Bordeaux 1 — CNRS/IN2P3), Chemin du Solarium, Le Haut Vigneau, BP 120, 33175 Gradignan Cedex (France); Smirnova, Nadezda A. [CENBG (UMR 5797 — Université Bordeaux 1 — CNRS/IN2P3), Chemin du Solarium, Le Haut Vigneau, BP 120, 33175 Gradignan Cedex (France); Bueb, Jean Bernard; Antony, Maria Susai [IPHC, Université de Strasbourg, CNRS/UMR7178, 23 Rue du Loess, 67037 Strasbourg Cedex (France)
2013-11-15
Accurate mass determination of short-lived nuclides by Penning-trap spectrometers and progress in the spectroscopy of proton-rich nuclei have triggered renewed interest in the isobaric multiplet mass equation (IMME). The energy levels of the members of T=1/2, 1, 3/2, and 2 multiplets and the coefficients of the IMME are tabulated for A≤71. The new compilation is based on the most recent mass evaluation (AME2011) and includes the experimental results on energies of the states evaluated up to the end of 2011. Taking into account the error bars, a significant deviation from the quadratic form of the IMME is observed for the A=9, 35 quartets and the A=32 quintet.
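For reference, the quadratic form of the IMME from which the reported deviations are measured expresses the mass excess of the multiplet members as a quadratic polynomial in the isospin projection T_z:

```latex
% Quadratic isobaric multiplet mass equation (IMME)
ME(\alpha, T, T_z) = a(\alpha, T) + b(\alpha, T)\,T_z + c(\alpha, T)\,T_z^{2}
% Deviations are quantified by fitting additional terms,
% e.g. d\,T_z^{3} for quartets and d\,T_z^{3} + e\,T_z^{4} for quintets.
```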
Geostationary secular dynamics revisited: application to high area-to-mass ratio objects
Gachet, Fabien; Celletti, Alessandra; Pucacco, Giuseppe; Efthymiopoulos, Christos
2017-06-01
The long-term dynamics of the geostationary Earth orbits (GEO) is revisited through the application of canonical perturbation theory. We consider a Hamiltonian model accounting for all major perturbations: geopotential at order and degree two, lunisolar perturbations with a realistic model for the Sun and Moon orbits, and solar radiation pressure. The long-term dynamics of the GEO region has been studied both numerically and analytically, in view of the relevance of such studies to the issue of space debris or to the disposal of GEO satellites. Past studies focused on the orbital evolution of objects around a nominal solution, hereafter called the forced equilibrium solution, which shows a particularly strong dependence on the area-to-mass ratio. Here, we (i) give theoretical estimates for the long-term behavior of such orbits, and (ii) we examine the nature of the forced equilibrium itself. In the lowest approximation, the forced equilibrium implies motion with a constant non-zero average `forced eccentricity', as well as a constant non-zero average inclination, otherwise known in satellite dynamics as the inclination of the invariant `Laplace plane'. Using a higher order normal form, we demonstrate that this equilibrium actually represents not a point in phase space, but a trajectory taking place on a lower-dimensional torus. We give analytical expressions for this special trajectory, and we compare our results to those found by numerical orbit propagation. We finally discuss the use of proper elements, i.e., approximate integrals of motion for the GEO orbits.
The mass-action-law theory of micellization revisited.
Rusanov, Anatoly I
2014-12-09
Among numerous definitions of the critical micelle concentration (CMC), there is one related to the constant K of the mass action law, CMC = K^(1/(1-n)) (n is the aggregation number). In this paper, this definition is generalized to multicomponent micelles, and the mass-action-law theory of micellization is developed on the basis of this definition and of the analysis of a multiple-equilibrium polydisperse micellar system. This variant of the theory of micellization is more consistent than the earlier one. In addition, two thermodynamic findings are reported: the stability conditions for micellar systems and the dependence of aggregation numbers on the surfactant concentrations. The growth of the monomer concentration with the total surfactant concentration is shown to be a thermodynamic rule only in the case of a single sort of aggregative particles, or when a single surfactant is added to a mixture. The stability condition takes a more complex form when a mixture of aggregative particles is added. For the aggregation number of a micelle, a thermodynamic rule is deduced according to which it increases with the total surfactant concentration. However, if the monomer concentration increases slowly, the aggregation number increases much more slowly, and the more slowly, the more pronounced is the maximum corresponding to a micelle on the distribution hypersurface (a curve in the one-component case). This provides the grounding for the quasi-chemical approximation in the mass-action-law theory (the constancy of aggregation numbers).
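The definition above can be checked directly in the idealized single-equilibrium picture n·A ⇌ A_n with mass-action constant K = c_n/c_1^n: at c_1 = K^(1/(1-n)) the micelle and monomer concentrations coincide, which is one operational reading of this CMC definition. The values of K and n below are purely illustrative:

```python
def micelle_concentration(c1, K, n):
    """Mass action law for n A <-> A_n: c_n = K * c1**n."""
    return K * c1**n

def cmc(K, n):
    """CMC defined via the mass-action constant: CMC = K**(1/(1-n))."""
    return K**(1.0 / (1.0 - n))

K, n = 1e120, 60  # illustrative values for a strongly aggregating system
c_star = cmc(K, n)
# at c1 = CMC the micelle concentration equals the monomer concentration
ratio = micelle_concentration(c_star, K, n) / c_star
```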
Reconsideration of mass-distribution models
Ninković S.
2014-01-01
The mass-distribution model proposed by Kuzmin and Veltmann (1973) is revisited. It is subdivided into two models which have a common case. Only one of them is the subject of the present study. The study is focused on the relation between the density ratio (the central density to that at the core radius) and the total-mass fraction within the core radius. The latter is an increasing function of the former, but it cannot exceed one quarter, which occurs when the density ratio tends to infinity. Therefore, the model is extended by representing the density as a sum of two components. The extension makes it possible to have a correspondence between an infinite density ratio and a 100% total-mass fraction. The number of parameters in the extended model exceeds that of the original model. Due to this, in the extended model, the correspondence between the density ratio and the total-mass fraction is no longer one-to-one; several values of the total-mass fraction can correspond to the same value of the density ratio. In this way, the extended model could explain the possibility of having two or more groups of real stellar systems (subsystems) in the diagram of total-mass fraction versus density ratio. [Projekat Ministarstva nauke Republike Srbije, br. 176011: Dynamics and Kinematics of Celestial Bodies and Systems]
Entropy of measurement and erasure: Szilard's membrane model revisited
Leff, Harvey S.; Rex, Andrew F.
1994-11-01
It is widely believed that measurement is accompanied by irreversible entropy increase. This conventional wisdom is based in part on Szilard's 1929 study of entropy decrease in a thermodynamic system by intelligent intervention (i.e., a Maxwell's demon) and Brillouin's association of entropy with information. Bennett subsequently argued that information acquisition is not necessarily irreversible, but information erasure must be dissipative (Landauer's principle). Inspired by the ensuing debate, we revisit the membrane model introduced by Szilard and find that it can illustrate and clarify (1) reversible measurement, (2) information storage, (3) decoupling of the memory from the system being measured, and (4) entropy increase associated with memory erasure and resetting.
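Landauer's principle, invoked in the abstract, bounds the heat dissipated when one bit of memory is erased by k_B T ln 2. A minimal numerical illustration (the choice of 300 K is arbitrary and not from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_bound(temperature_kelvin: float) -> float:
    """Minimum heat (J) dissipated when erasing one bit at temperature T."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature the bound is a few zeptojoules per bit.
e_min = landauer_bound(300.0)
```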
Revisiting hyper- and hypo-androgenism by tandem mass spectrometry.
Fanelli, Flaminia; Gambineri, Alessandra; Mezzullo, Marco; Vicennati, Valentina; Pelusi, Carla; Pasquali, Renato; Pagotto, Uberto
2013-06-01
Modern endocrinology is undergoing a critical transition as far as laboratory testing and biochemical diagnosis are concerned. Novel liquid chromatography-tandem mass spectrometry (LC-MS/MS) assays for steroid measurement in biological fluids have abundantly demonstrated their analytical superiority over the immunometric platforms that have until now dominated steroid hormone determination in clinical laboratories. One of the most useful applications of LC-MS/MS is in the field of hypogonadism and hyperandrogenism: LC-MS/MS has proved particularly suitable for the detection of the low levels of testosterone typical of women and children, and in general more reliable in accurately determining hypogonadal male levels. This technique also offers increased informative power by allowing multi-analyte profiles that give a more comprehensive picture of the overall hormonal status. Several LC-MS/MS methods for testosterone have been published in the last decade, some of which include other androgens or more comprehensive steroid profiles. LC-MS/MS offers the concrete possibility of achieving a definitive standardization of testosterone measurements and the generation of widely accepted reference intervals, which will set the basis for a consensus on the diagnostic value of biochemical testing. The present review is aimed at summarizing technological advancements in androgen measurement in serum and saliva. We also provide a picture of the state of advancement of the standardization of testosterone assays, of the redefinition of androgen reference intervals by novel assays, and of studies using LC-MS/MS for the characterization and diagnosis of female hyperandrogenism and male hypogonadism.
The random field Blume-Capel model revisited
Santos, P. V.; da Costa, F. A.; de Araújo, J. M.
2018-04-01
We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field, as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works, we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H - T and D - T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.
Darwin model in plasma physics revisited
Xie, Huasheng; Zhu, Jia; Ma, Zhiwei
2014-01-01
Dispersion relations from the Darwin (a.k.a. magnetoinductive or magnetostatic) model are given and compared with those of the full electromagnetic model. Analytical and numerical solutions show that the errors from the Darwin approximation can be large even if the phase velocity of a low-frequency wave is close to or larger than the speed of light. Besides missing two wave branches associated mainly with the electron dynamics, the coupling branch of the electrons and ions in the Darwin model is modified into a new artificial branch that incorrectly represents the coupling dynamics of the electrons and ions. (paper)
Single toxin dose-response models revisited
Demidenko, Eugene, E-mail: eugened@dartmouth.edu [Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH03756 (United States); Glaholt, SP, E-mail: sglaholt@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States); Kyker-Snowman, E, E-mail: ek2002@wildcats.unh.edu [Department of Natural Resources and the Environment, University of New Hampshire, Durham, NH03824 (United States); Shaw, JR, E-mail: joeshaw@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Chen, CY, E-mail: Celia.Y.Chen@dartmouth.edu [Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States)
2017-01-01
The goal of this paper is to offer a rigorous analysis of the sigmoid-shaped single toxin dose-response relationship. The toxin efficacy function is introduced, and four special points on the dose-response curve, including the point of maximum toxin efficacy and the inflection points, are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and second inflection points imply a high mortality rate. A probabilistic interpretation and mathematical analysis are provided for each of the four models: Hill, logit, probit, and Weibull. Two general model extensions are introduced: (1) a multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model, with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO₄ toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
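As an illustration of the "special points" idea, a sketch locating the concentration of maximum toxin efficacy (steepest mortality increase) on a Hill curve. The Hill exponent and EC50 values are arbitrary, and the brute-force numerical search is ours, not the authors' estimation procedure:

```python
import numpy as np

def hill_mortality(c, ec50=1.0, h=2.0):
    """Hill-type dose-response: mortality fraction at concentration c."""
    return c**h / (ec50**h + c**h)

def max_efficacy_point(ec50=1.0, h=2.0):
    """Concentration of steepest response (numerical estimate).

    For the Hill curve this point is analytically
    ec50 * ((h-1)/(h+1))**(1/h) for h > 1.
    """
    grid = np.linspace(1e-6, 5.0 * ec50, 200_001)
    slope = np.gradient(hill_mortality(grid, ec50, h), grid)
    return grid[np.argmax(slope)]

c_star = max_efficacy_point(ec50=1.0, h=2.0)  # near (1/3)**0.5 for h=2
```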
Whole body acid-base modeling revisited.
Ring, Troels; Nielsen, Søren
2017-04-01
The textbook account of whole-body acid-base balance in terms of endogenous acid production, renal net acid excretion, and gastrointestinal alkali absorption, which is the only comprehensive model available, has never been applied in clinical practice or been formally validated. To improve understanding of acid-base modeling, we expressed this conventional model solely in terms of urine chemistry. Renal net acid excretion and endogenous acid production were already formulated in terms of urine chemistry, and from the literature gastrointestinal alkali absorption could likewise be expressed in terms of urinary excretions. With a few assumptions it was possible to show that this expression of net acid balance was arithmetically identical to minus the urine charge, whereby, under the development of acidosis, urine was predicted to acquire a net negative charge. The literature already mentions unexplained negative urine charges, so we scrutinized a series of seminal papers and confirmed empirically the theoretical prediction that observed urine charge did become negative as acidosis developed. Hence, we conclude that the conventional model is problematic, since it predicts what is physiologically impossible. Therefore, we need a new model for whole-body acid-base balance that does not have impossible implications. Furthermore, new experimental studies are needed to account for the charge imbalance in urine under development of acidosis. Copyright © 2017 the American Physiological Society.
The sine-Gordon model revisited I
Niccoli, G.; Teschner, J.
2009-10-15
We study integrable lattice regularizations of the Sine-Gordon model with the help of the Separation of Variables method of Sklyanin and the Baxter Q-operators. This allows us to characterize the spectrum (eigenvalues and eigenstates) completely in terms of polynomial solutions of the Baxter equation with certain properties. This result is analogous to the completeness of the Bethe ansatz. (orig.)
Packet models revisited: tandem and priority systems
M.R.H. Mandjes (Michel)
2004-01-01
textabstractWe examine two extensions of traditional single-node packet-scale queueing models: tandem networks and (strict) priority systems. Two generic input processes are considered: periodic and Poisson arrivals. For the two-node tandem, an exact expression is derived for the joint distribution
The Motive--Strategy Congruence Model Revisited.
Watkins, David; Hattie, John
1992-01-01
Research with 1,266 Australian secondary school students supports 2 propositions critical to the motive-strategy congruence model of J. B. Biggs (1985). Students tend to use learning strategies congruent with motivation for learning, and congruent motive-strategy combinations are associated with higher average school grades. (SLD)
Diffusion approximation of neuronal models revisited
Čupera, Jakub
2014-01-01
Roč. 11, č. 1 (2014), s. 11-25 ISSN 1547-1063. [International Workshop on Neural Coding (NC) /10./. Praha, 02.09.2012-07.09.2012] R&D Projects: GA ČR(CZ) GAP103/11/0282 Institutional support: RVO:67985823 Keywords : stochastic model * neuronal activity * first-passage time Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.840, year: 2014
Minimalistic Neutrino Mass Model
De Gouvêa, A; Gouvea, Andre de
2001-01-01
We consider the simplest model which solves the solar and atmospheric neutrino puzzles, in the sense that it contains the smallest amount of beyond the Standard Model ingredients. The solar neutrino data is accounted for by Planck-mass effects while the atmospheric neutrino anomaly is due to the existence of a single right-handed neutrino at an intermediate mass scale between 10^9 GeV and 10^14 GeV. Even though the neutrino mixing angles are not exactly predicted, they can be naturally large, which agrees well with the current experimental situation. Furthermore, the amount of lepton asymmetry produced in the early universe by the decay of the right-handed neutrino is very predictive and may be enough to explain the current baryon-to-photon ratio if the right-handed neutrinos are produced out of thermal equilibrium. One definitive test for the model is the search for anomalous seasonal effects at Borexino.
Revisited global drift fluid model for linear devices
Reiser, Dirk
2012-01-01
The problem of energy-conserving global drift fluid simulations is revisited. It is found that for the case of cylindrical plasmas in a homogeneous magnetic field, a straightforward reformulation is possible, avoiding simplifications that lead to energetic inconsistencies. The particular new feature is the rigorous treatment of the polarisation drift by a generalisation of the vorticity equation. The resulting set of model equations contains previous formulations as limiting cases and is suitable for efficient numerical techniques. Examples of applications to studies of plasma blobs and their impact on plasma-target interaction are presented. The numerical studies focus on the appearance of plasma blobs and intermittent transport and their consequences for the release of sputtered target materials into the plasma. Intermittent expulsion of particles in the radial direction can be observed, and it is found that although the neutrals released from the target show strong fluctuations in their propagation into the plasma column, the overall effect on time-averaged profiles is negligible for the conditions considered. In addition, the numerical simulations are utilised to perform an a posteriori assessment of the magnitude of energetic inconsistencies in previously used simplified models. It is found that certain popular approximations, in particular the use of simplified vorticity equations, do not significantly affect energetics. However, popular model simplifications with respect to parallel advection are found to significantly deteriorate the model consistency.
Effective-Medium Models for Marine Gas Hydrates, Mallik Revisited
Terry, D. A.; Knapp, C. C.; Knapp, J. H.
2011-12-01
Hertz-Mindlin type effective-medium dry-rock elastic models have been commonly used for more than three decades in rock physics analysis, and recently have been applied to assessment of marine gas hydrate resources. Comparisons of several effective-medium models with derivative well-log data from the Mackenzie River Valley, Northwest Territories, Canada (i.e. Mallik 2L-38 and 5L-38) were made several years ago as part of a marine gas hydrate joint industry project in the Gulf of Mexico. The matrix/grain supporting model (one of the five models compared) was clearly a better representation of the Mallik data than the other four models (2 cemented sand models; a pore-filling model; and an inclusion model). Even though the matrix/grain supporting model was clearly better, reservations were noted that the compressional velocity of the model was higher than the compressional velocity measured via the sonic logs, and that the shear velocities showed an even greater discrepancy. Over more than thirty years, variations of Hertz-Mindlin type effective medium models have evolved for unconsolidated sediments and here, we briefly review their development. In the past few years, the perfectly smooth grain version of the Hertz-Mindlin type effective-medium model has been favored over the infinitely rough grain version compared in the Gulf of Mexico study. We revisit the data from the Mallik wells to review assertions that effective-medium models with perfectly smooth grains are a better predictor than models with infinitely rough grains. We briefly review three Hertz-Mindlin type effective-medium models, and standardize nomenclature and notation. To calibrate the extended effective-medium model in gas hydrates, we use a well accepted framework for unconsolidated sediments through Hashin-Shtrikman bounds. We implement the previously discussed effective-medium models for saturated sediments with gas hydrates and compute theoretical curves of seismic velocities versus gas hydrate
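The Hertz-Mindlin dry-frame moduli discussed above have a standard closed form (as given in rock-physics textbooks); a sketch comparing the infinitely-rough and perfectly-smooth grain limits. The coordination number, porosity, quartz-like grain moduli and effective pressure are illustrative assumptions, not values from the Mallik logs:

```python
import math

def hertz_mindlin(n, phi, mu_g, nu_g, p_eff, rough=True):
    """Dry-frame bulk and shear moduli (Pa) of a random sphere pack.

    Standard Hertz-Mindlin forms: the bulk modulus K is identical for
    rough and smooth grains; infinitely rough grains add tangential
    contact stiffness to the shear modulus, while perfectly smooth
    (frictionless) grains give mu = (3/5) K.
    n: coordination number, phi: porosity, mu_g/nu_g: grain shear
    modulus (Pa) and Poisson ratio, p_eff: effective pressure (Pa).
    """
    common = n**2 * (1.0 - phi) ** 2 * mu_g**2 * p_eff
    k_hm = (common / (18.0 * math.pi**2 * (1.0 - nu_g) ** 2)) ** (1.0 / 3.0)
    if rough:
        mu_hm = (5.0 - 4.0 * nu_g) / (5.0 * (2.0 - nu_g)) * (
            3.0 * common / (2.0 * math.pi**2 * (1.0 - nu_g) ** 2)
        ) ** (1.0 / 3.0)
    else:
        mu_hm = 0.6 * k_hm
    return k_hm, mu_hm

# Illustrative quartz-like grains at 10 MPa effective pressure
k_r, mu_r = hertz_mindlin(8.5, 0.36, 44e9, 0.06, 10e6, rough=True)
k_s, mu_s = hertz_mindlin(8.5, 0.36, 44e9, 0.06, 10e6, rough=False)
```

The smooth-grain variant predicts a markedly lower shear modulus at the same bulk modulus, which is the lever behind the rough-versus-smooth comparison revisited in this study.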
Revisiting the Microlensing Event OGLE 2012-BLG-0026: A Solar Mass Star with Two Cold Giant Planets
Beaulieu, J.-P.; Bennett, D. P.; Batista, V.; Fukui, A.; Marquette, J.-B.; Brillant, S.; Cole, A. A.; Rogers, L. A.; Sumi, T.; Abe, F.
2016-01-01
Two cold gas giant planets orbiting a G-type main-sequence star in the galactic disk were previously discovered in the high-magnification microlensing event OGLE-2012-BLG-0026. Here, we present revised host star flux measurements and a refined model for the two-planet system using additional light curve data. We performed high angular resolution adaptive optics imaging with the Keck and Subaru telescopes at two epochs while the source star was still amplified. We detected the lens flux, H = 16.39 +/- 0.08. The lens, a disk star, is brighter than predicted from the modeling in the original study. We revisited the light curve modeling using additional photometric data from the B and C telescope in New Zealand and the CTIO 1.3 m H-band light curve. We then include the Keck and Subaru adaptive optics observational constraints. The system is composed of an approximately 4-9 Gyr lens star of M_lens = 1.06 +/- 0.05 solar mass at a distance of D_lens = 4.0 +/- 0.3 kpc, orbited by two giant planets of 0.145 +/- 0.008 M_Jup and 0.86 +/- 0.06 M_Jup, with projected separations of 4.0 +/- 0.5 au and 4.8 +/- 0.7 au, respectively. Because the lens is brighter than the source star by 16 +/- 8% in H, with no other blend within one arcsec, it will be possible to estimate its metallicity using subsequent IR spectroscopy with 8-10 m class telescopes. By adding a constraint on the metallicity it will be possible to refine the age of the system.
Revisiting the quasi-particle model of the quark-gluon plasma
Bannur, V.M.
2007-01-01
The quasi-particle model of the quark-gluon plasma (QGP) is revisited here with a new method which, unlike earlier studies, requires neither a temperature-dependent bag constant nor other effects such as confinement or modified effective degrees of freedom. Our model has only one system-dependent parameter and shows a surprisingly good fit to the lattice results for the gluon plasma and for 2-flavor, 3-flavor and (2+1)-flavor QGP. The basic idea is first to evaluate the energy density ε from the grand partition function of the quasi-particle QGP, and then derive all other thermodynamic functions from ε. Quasi-particles are assumed to have a temperature-dependent mass equal to the plasma frequency. Energy density, pressure and speed of sound at zero chemical potential are evaluated and compared with the available lattice data. We further extend the model to finite chemical potential, without any new parameters, to obtain the quark density, quark susceptibility etc., and the model fits very well with the lattice results on 2-flavor QGP. (orig.)
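The first step the abstract describes, evaluating ε for quasi-particles of mass m, can be sketched as a straightforward Bose-gas integral. This is a generic illustration in natural units, not the paper's fit: the gluon degeneracy g = 16 and a fixed mass are assumptions, whereas the model would set m = m(T) equal to the plasma frequency:

```python
import numpy as np

def energy_density(T, m, g_dof=16.0):
    """Energy density of an ideal Bose gas of quasi-particles of mass m.

    eps = g/(2 pi^2) * Int_0^inf dk k^2 E(k) / (exp(E/T) - 1),
    with E(k) = sqrt(k^2 + m^2), in natural units (hbar = c = k_B = 1).
    """
    k = np.linspace(1e-8, 50.0 * T, 200_001)
    E = np.sqrt(k * k + m * m)
    f = k * k * E / np.expm1(E / T)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k))  # trapezoid rule
    return g_dof / (2.0 * np.pi**2) * integral

# Massless limit recovers the Stefan-Boltzmann result g * pi^2/30 * T^4
eps0 = energy_density(T=1.0, m=0.0)
```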
Revisiting the advection-dispersion model - Testing an alternative
Neretnieks, I.
2001-01-01
Some of the basic assumptions of the Advection-Dispersion model, AD-model, are revisited. That model assumes a continuous mixing along the flowpath similar to Fickian diffusion. This implies that there is a constant dispersion length irrespective of observation distance, which is contrary to most field observations. The properties of an alternative model, based on the assumption that individual water packages can retain their identity over long distances, are investigated. The latter model is called the Multi-Channel model, MChM. Inherent in the latter model is that if the waters in the different pathways are collected and mixed, the 'dispersion length' is proportional to observation distance. Using diffusion theory, it is investigated over which distances or contact times adjacent water packages will keep their identity. It is found that for a contact time of 10 hours, two streams, each wider than 6 mm, that flow side by side will not have lost their identity; for 1000 hours contact time the minimum width is 6 cm. The MChM and AD-models were found to have very similar Residence Time Distributions, RTD, for Peclet numbers larger than 3. A generalised relation between flowrate and residence time is developed, including the so-called cubic law and constant aperture assumptions. Using the generalised relation, it is surprisingly found that for a system with the same average flow volume and average flowrate, the form of the RTD curves is the same irrespective of the form of the relation. Both models are also compared for a system with strong interaction of the solute with the rock matrix. In this case it is assumed that the solute can diffuse into and out of the fracture walls and also sorb on the micro-fractures of the matrix. The so-called Flow Wetted Surface, FWS, between the flowing water in the fracture and the rock is a key entity in such systems. It is found that the AD-model predicts much later arrivals and lower concentrations than does the MCh-model.
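The AD-model residence time distribution referred to here has a standard closed form, the inverse-Gaussian first-passage solution for a pulse input. A sketch verifying its unit area and unit mean in dimensionless time; this is a generic illustration, not the authors' MChM comparison, and the Peclet number is an arbitrary choice:

```python
import numpy as np

def ad_rtd(theta, Pe):
    """Residence-time distribution of the advection-dispersion model.

    Inverse-Gaussian (first-passage) form for a pulse input, in
    dimensionless time theta = t / t_mean:
        E(theta) = sqrt(Pe/(4 pi theta^3)) * exp(-Pe (1-theta)^2 / (4 theta))
    """
    return np.sqrt(Pe / (4.0 * np.pi * theta**3)) * np.exp(
        -Pe * (1.0 - theta) ** 2 / (4.0 * theta)
    )

theta = np.linspace(1e-4, 20.0, 400_001)
E = ad_rtd(theta, Pe=10.0)
dtheta = theta[1] - theta[0]
norm = np.sum(E) * dtheta          # unit area: E is a probability density
mean = np.sum(theta * E) * dtheta  # unit mean residence time
```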
Standing and travelling waves in a spherical brain model: The Nunez model revisited
Visser, S.; Nicks, R.; Faugeras, O.; Coombes, S.
2017-06-01
The Nunez model for the generation of electroencephalogram (EEG) signals is naturally described as a neural field model on a sphere with space-dependent delays. For simplicity, dynamical realisations of this model either as a damped wave equation or an integro-differential equation, have typically been studied in idealised one dimensional or planar settings. Here we revisit the original Nunez model to specifically address the role of spherical topology on spatio-temporal pattern generation. We do this using a mixture of Turing instability analysis, symmetric bifurcation theory, centre manifold reduction and direct simulations with a bespoke numerical scheme. In particular we examine standing and travelling wave solutions using normal form computation of primary and secondary bifurcations from a steady state. Interestingly, we observe spatio-temporal patterns which have counterparts seen in the EEG patterns of both epileptic and schizophrenic brain conditions.
Running-mass inflation model and primordial black holes
Drees, Manuel; Erfani, Encieh
2011-01-01
We revisit the question whether the running-mass inflation model allows the formation of Primordial Black Holes (PBHs) that are sufficiently long-lived to serve as candidates for Dark Matter. We incorporate recent cosmological data, including the WMAP 7-year results. Moreover, we include ''the running of the running'' of the spectral index of the power spectrum, as well as the renormalization group ''running of the running'' of the inflaton mass term. Our analysis indicates that formation of sufficiently heavy, and hence long-lived, PBHs still remains possible in this scenario. As a by-product, we show that the additional term in the inflaton potential still does not allow significant negative running of the spectral index
Neutrino Mass and Flavour Models
King, Stephen F.
2010-01-01
We survey some of the recent promising developments in the search for the theory behind neutrino mass and tri-bimaximal mixing, and indeed all fermion masses and mixing. We focus in particular on models with discrete family symmetry and unification, and show how such models can also solve the SUSY flavour and CP problems. We also discuss the theoretical implications of the measurement of a non-zero reactor angle, as hinted at by recent experimental measurements.
Critical rotation of general-relativistic polytropic models revisited
Geroyannis, V.; Karageorgopoulos, V.
2013-09-01
We develop a perturbation method for computing the critical rotational parameter as a function of the equatorial radius of a rigidly rotating polytropic model in the "post-Newtonian approximation" (PNA). We treat our models as "initial value problems" (IVP) of ordinary differential equations in the complex plane. The computations are carried out by the code dcrkf54.f95 (Geroyannis and Valvi 2012 [P1]; a modified Runge-Kutta-Fehlberg code of fourth and fifth order for solving initial value problems in the complex plane). Such a complex-plane treatment removes the syndromes appearing in this particular family of IVPs (see e.g. P1, Sec. 3) and allows continuation of the numerical integrations beyond the surface of the star. Thus all the required values of the Lane-Emden function(s) in the post-Newtonian approximation are calculated by interpolation (so avoiding any extrapolation). An interesting point is that, in our computations, we take into account the complete correction due to the gravitational term, and this issue is a remarkable difference compared to the classical PNA. We solve for the generalized density as a function of the equatorial radius and find the critical rotational parameter. Our computations are extended to certain other physical characteristics (like mass, angular momentum, rotational kinetic energy, etc.). We find that our method yields results comparable with those of other reliable methods. REFERENCE: V.S. Geroyannis and F.N. Valvi 2012, International Journal of Modern Physics C, 23, No 5, 1250038:1-15.
Schwinger Model Mass Anomalous Dimension
Keegan, Liam
2016-06-20
The mass anomalous dimension for several gauge theories with an infrared fixed point has recently been determined using the mode number of the Dirac operator. In order to better understand the sources of systematic error in this method, we apply it to a simpler model, the massive Schwinger model with two flavours of fermions, where analytical results are available for comparison with the lattice data.
Mass generation in composite models
Peccei, R.D.
1985-10-01
I discuss aspects of composite models of quarks and leptons connected with the dynamics of how these fermions acquire mass. Several issues related to the protection mechanisms necessary to keep quarks and leptons light are illustrated by means of concrete examples and a critical overview of suggestions for family replications is given. Some old and new ideas of how one may actually be able to generate small quark and lepton masses are examined, along with some of the difficulties they encounter in practice. (orig.)
Revisiting the ADT mass of the five-dimensional rotating black holes with squashed horizons
Peng, Jun-Jin [Guizhou Normal University, Guizhou Provincial Key Laboratory of Radio Astronomy and Data Processing, Guiyang (China)
2017-10-15
We evaluate the Abbott-Deser-Tekin (ADT) mass of the five-dimensional rotating black holes with squashed horizons on two different on-shell reference backgrounds, which are the flat background and the boundary matched Kaluza-Klein (KK) monopole. The mass on the former, identified with the one on the background of the asymptotic geometry, differs from the mass on the latter by that of the KK monopole. However, each mass satisfies the first law of black hole thermodynamics. To test the results in five dimensions, we compute the mass in the context of the dimensionally reduced theory. Finally, in contrast with the original ADT formulation, its off-shell generalisation is applied to calculate the mass as well. (orig.)
The upper bound on the lightest Higgs mass in the NMSSM revisited
Ellwanger, Ulrich; Hugonie, Cyril
2007-04-01
We update the upper bound on the lightest CP-even Higgs mass in the NMSSM, which is given as a function of tan β and λ. We include the available one- and two-loop corrections to the NMSSM Higgs masses, and constraints from the absence of Landau singularities below the GUT scale as well as from the stability of the NMSSM Higgs potential. For m_top varying between 171.4 and 178 GeV, squark masses of 1 TeV and maximal mixing, the upper bound is attained near tan β ∼ 2 and varies between 139.9 and 141.4 GeV.
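The λ and tan β dependence of the bound originates in the well-known tree-level expression, to which the loop corrections mentioned in the abstract are added; schematically, with v ≈ 174 GeV:

```latex
% Tree-level upper bound on the lightest CP-even Higgs mass in the NMSSM:
\[
  m_h^2 \;\le\; M_Z^2 \cos^2 2\beta \;+\; \lambda^2 v^2 \sin^2 2\beta
  \;+\; \Delta m_{\mathrm{loop}}^2 .
\]
% The second term, absent in the MSSM, is maximal at small tan(beta),
% which is why the bound peaks near tan(beta) ~ 2.
```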
Star-Triangle Relation of the Chiral Potts Model Revisited
Horibe, M.; Shigemoto, K.
2001-01-01
We give a simple proof of the star-triangle relation of the chiral Potts model. We also give a constructive way to understand the star-triangle relation of the chiral Potts model, which may provide hints toward new integrable models.
The Candy model revisited: Markov properties and inference
M.N.M. van Lieshout (Marie-Colette); R.S. Stoica
2001-01-01
textabstractThis paper studies the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present some
Critical assessment of nuclear mass models
Moeller, P.; Nix, J.R.
1992-01-01
Some of the physical assumptions underlying various nuclear mass models are discussed. The ability of different mass models to predict new masses that were not taken into account when the models were formulated and their parameters determined is analyzed. The models are also compared with respect to their ability to describe nuclear-structure properties in general. The analysis suggests future directions for mass-model development
Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter
Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J
2009-01-01
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...
Kawase & McDermott revisited with a proper ocean model.
Jochum, Markus; Poulsen, Mads; Nuterman, Roman
2017-04-01
A suite of experiments with global ocean models is used to test the hypothesis that Southern Ocean (SO) winds can modify the strength of the Atlantic Meridional Overturning Circulation (AMOC). It is found that for 3 and 1 degree resolution models the results are consistent with Toggweiler & Samuels (1995): stronger SO winds lead to a slight increase of the AMOC. In the simulations with 1/10 degree resolution, however, stronger SO winds weaken the AMOC. We show that these different outcomes are determined by the models' representation of topographic Rossby and Kelvin waves. Consistent with previous literature based on theory and idealized models, first baroclinic waves are slower in the coarse resolution models, but still manage to establish a pattern of global response that is similar to the one in the eddy-permitting model. Because of its different stratification, however, the Atlantic signal is transmitted by higher baroclinic modes. In the coarse resolution model these higher modes are dissipated before they reach 30N, whereas in the eddy-permitting model they reach the subpolar gyre undiminished. This inability of non-eddy-permitting ocean models to represent planetary waves with higher baroclinic modes casts doubt on the ability of climate models to represent non-local effects of climate change. Ideas on how to overcome these difficulties will be discussed.
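For uniform stratification, the gravity-wave speed of baroclinic mode n is c_n = NH/(nπ), and long baroclinic Rossby waves propagate westward at βc_n²/f², which is why the higher modes invoked above are so much slower and more easily dissipated. A sketch with illustrative values of N, H and latitude (not taken from these simulations):

```python
import math

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
R_EARTH = 6.371e6   # Earth's radius, m

def mode_speed(n_mode, N=2e-3, H=4000.0):
    """Gravity-wave speed (m/s) of baroclinic mode n for constant
    buoyancy frequency N (1/s) and depth H (m): c_n = N*H/(n*pi)."""
    return N * H / (n_mode * math.pi)

def long_rossby_speed(n_mode, lat_deg, N=2e-3, H=4000.0):
    """Westward phase speed (m/s) of long baroclinic Rossby waves,
    beta * c_n**2 / f**2 on the beta-plane."""
    lat = math.radians(lat_deg)
    f = 2.0 * OMEGA * math.sin(lat)
    beta = 2.0 * OMEGA * math.cos(lat) / R_EARTH
    return beta * mode_speed(n_mode, N, H) ** 2 / f**2

c1, c2 = mode_speed(1), mode_speed(2)                       # mode 2 is half as fast
r1, r2 = long_rossby_speed(1, 30.0), long_rossby_speed(2, 30.0)  # mode 2 is 4x slower
```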
Revisiting the direct detection of dark matter in simplified models
Li, Tong
2018-01-01
In this work we numerically re-examine the loop-induced WIMP-nucleon scattering cross section for the simplified dark matter models and the constraint set by the latest direct detection experiment. We consider a fermion, scalar or vector dark matter component from five simplified models with leptophobic spin-0 mediators coupled only to Standard Model quarks and dark matter particles. The tree-level WIMP-nucleon cross sections in these models are all momentum-suppressed. We calculate the non-s...
Business modelling revisited: The configuration of control and value
Ballon, P.J.P.
2007-01-01
Purpose - This paper aims to provide a theoretically grounded framework for designing and analysing business models for (mobile) information communication technology (ICT) services and systems. Design/methodology/approach - The paper reviews the most topical literature on business modelling, as well
Terrestrial nitrogen cycling in Earth system models revisited
Stocker, Benjamin D; Prentice, I. Colin; Cornell, Sarah; Davies-Barnard, T; Finzi, Adrien; Franklin, Oskar; Janssens, Ivan; Larmola, Tuula; Manzoni, Stefano; Näsholm, Torgny; Raven, John; Rebel, Karin; Reed, Sasha C.; Vicca, Sara; Wiltshire, Andy; Zaehle, Sönke
2016-01-01
Understanding the degree to which nitrogen (N) availability limits land carbon (C) uptake under global environmental change represents an unresolved challenge. First-generation ‘C-only’ vegetation models, lacking explicit representations of N cycling, projected a substantial and increasing land C sink under rising atmospheric CO2 concentrations. This prediction was questioned for not taking into account the potentially limiting effect of N availability, which is necessary for plant growth (Hungate et al., 2003). More recent global models include coupled C and N cycles in land ecosystems (C–N models) and are widely assumed to be more realistic. However, inclusion of more processes has not consistently improved their performance in capturing observed responses of the global C cycle (e.g. Wenzel et al., 2014). With the advent of a new generation of global models, including coupled C, N, and phosphorus (P) cycling, model complexity is sure to increase; but model reliability may not, unless greater attention is paid to the correspondence of model process representations and empirical evidence. It was in this context that the ‘Nitrogen Cycle Workshop’ at Dartington Hall, Devon, UK was held on 1–5 February 2016. Organized by I. Colin Prentice and Benjamin D. Stocker (Imperial College London, UK), the workshop was funded by the European Research Council, project ‘Earth system Model Bias Reduction and assessing Abrupt Climate change’ (EMBRACE). We gathered empirical ecologists and ecosystem modellers to identify key uncertainties in terrestrial C–N cycling, and to discuss processes that are missing or poorly represented in current models.
Schedulability of Herschel revisited using statistical model checking
David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel
2015-01-01
This paper is a continuation of previous work of the authors in applying extended timed automata model checking (using the tool UPPAAL) to obtain more exact schedulability analysis, here in the presence of non-deterministic computation times of tasks given by intervals [BCET, WCET]. Classical analysis over-approximates the computation and blocking times of tasks; consequently, the method may falsely declare deadline violations that will never occur during execution. Computation intervals with preemptive schedulers make the schedulability analysis of the resulting task model undecidable. Our contribution is to propose a combination of model checking techniques with an over-approximation technique. We can safely conclude that the system is schedulable for varying values of BCET. For the cases where deadlines are violated, we use polyhedra to try to confirm the witnesses. Our alternative method to confirm non-schedulability uses statistical model checking (SMC) to generate counterexamples.
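The classical fixed-point response-time analysis that such WCET-based methods refine can be sketched in a few lines. The task set below is hypothetical, and the recurrence R = C_i + Σ_j ⌈R/T_j⌉·C_j is the textbook analysis, not the paper's UPPAAL/SMC approach:

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i via the standard fixed-point
    iteration R = C_i + sum_j ceil(R/T_j)*C_j over higher-priority tasks.
    tasks is sorted by priority, highest first; each entry is (WCET, period)."""
    C, T = tasks[i]
    R = C
    while True:
        R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_next == R:
            return R            # converged: response time found
        if R_next > T:
            return None         # deadline (= period) exceeded: declared unschedulable
        R = R_next

# Hypothetical task set (WCET, period) under rate-monotonic priorities
tasks = [(1, 4), (2, 6), (3, 10)]
rts = [response_time(tasks, i) for i in range(len(tasks))]
```

Because the recurrence uses WCETs only, it can declare violations that never occur with interval [BCET, WCET] execution times, which is exactly the pessimism the paper addresses.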
Reactor kinetics revisited: a coefficient based model (CBM)
Ratemi, W.M.
2011-01-01
In this paper, a nuclear reactor kinetics model based on Guelph expansion coefficient calculation (Coefficients Based Model, CBM) for n groups of delayed neutrons is developed. The accompanying characteristic equation is a polynomial form of the Inhour equation with the same coefficients as the CBM kinetics model. Those coefficients depend on universal abc-values, which are determined by the type of fuel fuelling the reactor; furthermore, such coefficients are linearly dependent on the inserted reactivity. In this paper, the universal abc-values are presented symbolically, for the first time, as well as numerically for U-235 fueled reactors for one, two, three, and six groups of delayed neutrons. Simulation studies for constant and variable reactivity insertions are made with the CBM kinetics model, and a comparison with numerical solutions of classical kinetics models for one, two, three, and six groups of delayed neutrons is presented. The results show good agreement, especially for single-step insertions of reactivity, with the advantage that the CBM solution does not encounter the stiffness problem that accompanies numerical solutions of the classical kinetics models. (author)
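For a single group of delayed neutrons, the Inhour equation mentioned above reduces to a quadratic in the inverse period ω, and its widely separated roots are exactly the source of the stiffness the abstract refers to. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import math

# Hypothetical one-group kinetics parameters (U-235-like), illustrative only
Lam  = 1.0e-4   # prompt neutron generation time [s]
beta = 0.0065   # delayed neutron fraction
lam  = 0.08     # delayed precursor decay constant [1/s]
rho  = 0.001    # step reactivity insertion (rho < beta)

# One-group Inhour equation  rho = Lam*w + beta*w/(w + lam),
# cleared of denominators:   Lam*w^2 + (Lam*lam + beta - rho)*w - rho*lam = 0
a, b, c = Lam, Lam * lam + beta - rho, -rho * lam
disc = math.sqrt(b * b - 4 * a * c)
w_pos, w_neg = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
period = 1.0 / w_pos    # asymptotic (stable) reactor period [s]
```

The slow positive root sets the asymptotic period, while the fast negative root (orders of magnitude larger in modulus) is the prompt transient that makes direct numerical integration stiff.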
Akkerman, M; Rauh, V M; Christensen, M; Johansen, L B; Hammershøj, M; Larsen, L B
2016-01-01
Previous standards in the area of the effect of heat treatment processes on milk protein denaturation were based primarily on laboratory-scale analysis and determination of denaturation degrees by, for example, electrophoresis. In this study, whey protein denaturation was revisited by pilot-scale heating strategies and liquid chromatography quadrupole time-of-flight mass spectrometry (LC/MS Q-TOF) analysis. Skim milk was heat treated using 3 heating strategies, namely plate heat exchanger (PHE), tubular heat exchanger (THE), and direct steam injection (DSI), under various heating temperatures (T) and holding times. The effect of heating strategy on the degree of denaturation of β-lactoglobulin and α-lactalbumin was determined using LC/MS Q-TOF analysis of pH 4.5-soluble whey proteins. Furthermore, the effect of heating strategy on the rennet-induced coagulation properties was studied by oscillatory rheometry. In addition, rennet-induced coagulation of heat-treated micellar casein concentrate subjected to PHE was studied. For skim milk, the whey protein denaturation increased significantly as T and holding time increased, regardless of heating method. High denaturation degrees were obtained for T >100°C using PHE and THE, whereas DSI resulted in significantly lower denaturation degrees compared with PHE and THE. Rennet coagulation properties were impaired by increased T and holding time regardless of heating method, although DSI resulted in less impairment compared with PHE and THE. No significant difference was found between THE and PHE in their effect on rennet coagulation time, whereas the curd firming rate was significantly larger for THE compared with PHE. Micellar casein concentrate possessed improved rennet coagulation properties compared with skim milk receiving equal heat treatment. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Measurement of the Mass of an Object Hanging from a Spring--Revisited
Serafin, Kamil; Oracz, Joanna; Grzybowski, Marcin; Koperski, Maciej; Sznajder, Pawel; Zinkiewicz, Lukasz; Wasylczyk, Piotr
2012-01-01
In an open competition, students were to determine the mass of a metal cylinder hanging on a spring inside a transparent enclosure. With the time for experiments limited to 24 h due to the unexpectedly large number of participants, a few surprisingly accurate results were submitted, the best of them differing by no more than 0.5% from the true value with a relative uncertainty of less than 1%.
REVISITING ρ1 CANCRI e: A NEW MASS DETERMINATION OF THE TRANSITING SUPER-EARTH
Endl, Michael; Cochran, William D.; MacQueen, Phillip J.; Barnes, Stuart I.; Robertson, Paul; Brugamyer, Erik J.; Caldwell, Caroline; Gullikson, Kevin; Wittenmyer, Robert A.
2012-01-01
We present a mass determination for the transiting super-Earth ρ1 Cancri e based on nearly 700 precise radial velocity (RV) measurements. This extensive RV data set consists of data collected by the McDonald Observatory planet search and published data from Lick and Keck observatories. We obtained 212 RV measurements with the Tull Coudé Spectrograph at the Harlan J. Smith 2.7 m Telescope and combined them with a new Doppler reduction of the 131 spectra that we had taken in 2003-2004 with the High-Resolution Spectrograph (HRS) at the Hobby-Eberly Telescope for the original discovery of ρ1 Cancri e. Using this large data set we obtain a five-planet Keplerian orbital solution for the system and measure an RV semi-amplitude of K = 6.29 ± 0.21 m s⁻¹ for ρ1 Cnc e and determine a mass of 8.37 ± 0.38 M⊕. The uncertainty in mass is thus less than 5%. This planet was previously found to transit its parent star, which allowed its radius to be estimated. Combined with the latest radius estimate from Gillon et al., we obtain a mean density of ρ = 4.50 ± 0.20 g cm⁻³. The location of ρ1 Cnc e in the mass-radius diagram suggests that the planet contains a significant amount of volatiles, possibly a water-rich envelope surrounding a rocky core.
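The quoted mean density follows directly from the measured mass and the planetary radius. A quick consistency check, taking the radius to be about 2.17 Earth radii (an assumed round value, chosen here to be consistent with the Gillon et al. estimate the abstract cites):

```python
import math

M_EARTH = 5.972e24   # Earth mass [kg]
R_EARTH = 6.371e6    # Earth radius [m]

m = 8.37 * M_EARTH   # planet mass quoted in the abstract
r = 2.17 * R_EARTH   # assumed radius, roughly the Gillon et al. value

# Mean density rho = M / (4/3 pi R^3), converted to g/cm^3
rho = m / ((4.0 / 3.0) * math.pi * r**3)   # kg/m^3
rho_cgs = rho / 1000.0                     # g/cm^3
```

With these inputs the density lands close to the quoted 4.50 ± 0.20 g cm⁻³, which is the sense in which the mass and radius determinations are mutually consistent.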
Revisiting the mouse model of oxygen-induced retinopathy
Kim CB
2016-05-01
Clifford B Kim,1,2 Patricia A D’Amore,2–4 Kip M Connor1,2 1Angiogenesis Laboratory, Massachusetts Eye and Ear, 2Department of Ophthalmology, Harvard Medical School, 3Schepens Eye Research Institute, Massachusetts Eye and Ear, 4Department of Pathology, Harvard Medical School, Boston, MA, USA Abstract: Abnormal blood vessel growth in the retina is a hallmark of many retinal diseases, such as retinopathy of prematurity (ROP), proliferative diabetic retinopathy, and the wet form of age-related macular degeneration. In particular, ROP has been an important health concern for physicians since the advent of routine supplemental oxygen therapy for premature neonates more than 70 years ago. Since then, researchers have explored several animal models to better understand ROP and retinal vascular development. Of these models, the mouse model of oxygen-induced retinopathy (OIR) has become the most widely used, and has played a pivotal role in our understanding of retinal angiogenesis and ocular immunology, as well as in the development of groundbreaking therapeutics such as anti-vascular endothelial growth factor injections for wet age-related macular degeneration. Numerous refinements to the model have been made since its inception in the 1950s, and technological advancements have expanded the use of the model across multiple scientific fields. In this review, we explore the historical developments that have led to the mouse OIR model utilized today, essential concepts of OIR, limitations of the model, and a representative selection of key findings from OIR, with particular emphasis on current research progress. Keywords: ROP, OIR, angiogenesis
Energy-economy interactions revisited within a comprehensive sectoral model
Hanson, D. A.; Laitner, J. A.
2000-07-24
This paper describes a computable general equilibrium (CGE) model with considerable sector and technology detail, the ``All Modular Industry Growth Assessment'' model (AMIGA). It is argued that a detailed model is important to capture and understand the several roles that energy plays within the economy. Fundamental consumer and industrial demands are for the services from energy; hence, energy demand is a derived demand based on the need for heating, cooling, mechanical, electrical, and transportation services. Technologies that provide energy services more efficiently (on a life-cycle basis), when adopted, result in increased future output of the economy and higher paths of household consumption. The AMIGA model can examine the effects on energy use and economic output of increases in energy prices (e.g., a carbon charge) and of other incentive-based policies or energy-efficiency programs. Energy sectors and sub-sector activities included in the model involve energy extraction, conversion, and transportation. There are business opportunities to produce energy-efficient goods (i.e., appliances, control systems, buildings, automobiles, clean electricity). These activities are represented in the model by characterizing their likely production processes (e.g., lighter-weight motor vehicles). Also, multiple industrial processes can produce the same output but with different technologies and inputs. Secondary recovery, i.e., recycling processes, is an example of such multiple processes. Combined heat and power (CHP) is also represented for energy-intensive industries. Other modules represent residential and commercial building technologies to supply energy services. All sectors of the economy command real resources (capital services and labor).
Massive (p,q)-supersymmetric sigma models revisited
Papadopoulos, G.
1994-06-01
We recently obtained the conditions on the couplings of the general two-dimensional massive sigma model required by (p,q)-supersymmetry. Here we compute the Poisson bracket algebra of the supersymmetry and central Noether charges, and show that the action is invariant under the automorphism group of this algebra. Surprisingly, for the (4,4) case the automorphism group is always a subgroup of SO(3), rather than SO(4). We also re-analyse the conditions for (2,2) and (4,4) supersymmetry of the zero-torsion models without assumptions about the central charge matrix. (orig.)
A Simple Singlet Fermionic Dark-Matter Model Revisited
Qin Hong-Yi; Wang Wen-Yu; Xiong Zhao-Hua
2011-01-01
We evaluate the spin-independent elastic dark matter-nucleon scattering cross section in the framework of the simple singlet fermionic dark matter extension of the standard model and constrain the model parameter space with the following considerations: (i) new dark matter measurements, in which, apart from WMAP and CDMS, the results from the XENON experiment are also used to constrain the model; (ii) a new fitted value of the quark fractions in nucleons, in which the updated value of f_Ts from a recent lattice simulation is much smaller than the previous one and may reduce the scattering rate significantly; (iii) new dark matter annihilation channels, in which the scenario where top quark and Higgs pairs are produced by dark matter annihilation was not included in previous works. We find that, unlike in the minimal supersymmetric standard model, the cross section is only reduced by a factor of about 1/4, and dark matter lighter than 100 GeV is not favored by the WMAP, CDMS and XENON experiments. (the physics of elementary particles and fields)
What drives health care expenditure?--Baumol's model of 'unbalanced growth' revisited.
Hartwig, Jochen
2008-05-01
The share of health care expenditure in GDP rises rapidly in virtually all OECD countries, causing increasing concern among politicians and the general public. Yet, economists have to date failed to reach an agreement on what the main determinants of this development are. This paper revisits Baumol's [Baumol, W.J., 1967. Macroeconomics of unbalanced growth: the anatomy of urban crisis. American Economic Review 57 (3), 415-426] model of 'unbalanced growth', showing that the latter offers a ready explanation for the observed inexorable rise in health care expenditure. The main implication of Baumol's model in this context is that health care expenditure is driven by wage increases in excess of productivity growth. This hypothesis is tested empirically using data from a panel of 19 OECD countries. Our tests yield robust evidence in favor of Baumol's theory.
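Baumol's mechanism is easy to illustrate: hold productivity flat in one sector, let wages everywhere track productivity growth in the other, and the stagnant sector's expenditure share rises. A stylized two-sector sketch (the growth rate and horizon are illustrative choices, not the paper's estimates):

```python
# Minimal two-sector sketch of Baumol's 'unbalanced growth' (cost disease).
years = 40
g = 0.02                       # annual productivity growth in the progressive sector
wage = prod_prog = prod_stag = 1.0
shares = []
for _ in range(years):
    wage *= 1 + g              # wages rise economy-wide with progressive productivity
    prod_prog *= 1 + g         # progressive sector keeps pace with wages
    # stagnant sector (here standing in for health care) productivity stays flat
    cost_prog = wage / prod_prog   # unit cost: constant
    cost_stag = wage / prod_stag   # unit cost: rises at rate g
    # with equal real output in both sectors, the stagnant sector's expenditure share is
    shares.append(cost_stag / (cost_stag + cost_prog))
```

The expenditure share of the stagnant sector rises monotonically even though its real output never changes, which is the paper's core hypothesis about health care spending.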
Fowler, Keirnan J. A.; Peel, Murray C.; Western, Andrew W.; Zhang, Lu; Peterson, Tim J.
2016-03-01
Hydrologic models have potential to be useful tools in planning for future climate variability. However, recent literature suggests that the current generation of conceptual rainfall runoff models tend to underestimate the sensitivity of runoff to a given change in rainfall, leading to poor performance when evaluated over multiyear droughts. This research revisited this conclusion, investigating whether the observed poor performance could be due to insufficient model calibration and evaluation techniques. We applied an approach based on Pareto optimality to explore trade-offs between model performance in different climatic conditions. Five conceptual rainfall runoff model structures were tested in 86 catchments in Australia, for a total of 430 Pareto analyses. The Pareto results were then compared with results from a commonly used model calibration and evaluation method, the Differential Split Sample Test. We found that the latter often missed potentially promising parameter sets within a given model structure, giving a false negative impression of the capabilities of the model. This suggests that models may be more capable under changing climatic conditions than previously thought. Of the 282[347] cases of apparent model failure under the split sample test using the lower [higher] of two model performance criteria trialed, 155[120] were false negatives. We discuss potential causes of remaining model failures, including the role of data errors. Although the Pareto approach proved useful, our aim was not to suggest an alternative calibration strategy, but to critically assess existing methods of model calibration and evaluation. We recommend caution when interpreting split sample results.
Revisiting the coupled-mass system and analogy with a simple band gap structure
Levesque, L
2006-01-01
A great deal of insight can be gained from the analysis of coupled masses connected by springs in order to better understand the origin of band gaps in physical systems. The approach is based on the application of the superposition principle for finding the general solution in simple mechanical systems involving functions which vary periodically with time. Graphs show that sums of periodic functions oscillating at different frequencies lead to an exchange of energy from one oscillator to another in a simple mechanical system of three objects connected by identical springs. A system of a large number of masses connected by springs having the same spring constant K is then considered and compared with a system in which the spring constants alternate from K to another value G when connecting one mass to another. Using the results found from the mechanical systems, an analogy of charge oscillations excited on both uniform and corrugated surfaces is presented. The results obtained help expand our understanding of the origin of the band gaps occurring in systems involving periodic motion.
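The energy exchange between the three oscillators can be checked numerically. A minimal sketch for one common configuration (three unit masses between fixed walls, coupled by four identical springs; all parameter choices are illustrative), integrated with velocity Verlet:

```python
# Energy exchange in a chain of three unit masses between fixed walls,
# coupled by identical springs -- a numerical sketch using velocity Verlet.
k, dt, steps = 1.0, 0.001, 60000
x = [1.0, 0.0, 0.0]      # displace only the first mass initially
v = [0.0, 0.0, 0.0]

def forces(x):
    xp = [0.0] + x + [0.0]                 # fixed walls at both ends
    return [-k * (2 * xp[i] - xp[i - 1] - xp[i + 1]) for i in range(1, 4)]

def energy(x, v):
    xp = [0.0] + x + [0.0]
    spring = 0.5 * k * sum((xp[i + 1] - xp[i]) ** 2 for i in range(4))
    return spring + 0.5 * sum(vi * vi for vi in v)

E0 = energy(x, v)
f = forces(x)
e3_max = 0.0                 # peak of a local-energy proxy for the third mass
for _ in range(steps):
    v = [vi + 0.5 * dt * fi for vi, fi in zip(v, f)]
    x = [xi + dt * vi for xi, vi in zip(x, v)]
    f = forces(x)
    v = [vi + 0.5 * dt * fi for vi, fi in zip(v, f)]
    e3_max = max(e3_max, 0.5 * v[2] ** 2 + 0.5 * k * x[2] ** 2)
drift = abs(energy(x, v) - E0) / E0
```

Although only the first mass is excited, a substantial fraction of the total energy later appears at the third mass, while the symplectic integrator keeps the total energy essentially conserved.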
Time-independent models of asset returns revisited
Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.
2000-07-01
In this study we investigate various well-known time-independent models of asset returns, namely the simple normal distribution, Student's t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion-jump, and compound normal distribution. For this we use Standard & Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look visually more Lévy-type distributed than it is. This phenomenon is especially evident in the artificial data that has been generated by an inflated random walk process.
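The last observation is easy to reproduce on artificial data: on an inflated (drifting) geometric random walk with exactly Gaussian log returns, raw price differences mix many price scales and therefore look far heavier-tailed. The drift and volatility below are arbitrary illustrative values:

```python
import math, random

# Artificial "inflated random walk": Gaussian log returns on a drifting price.
random.seed(1)
n, mu, sigma = 20000, 0.0005, 0.01
log_price = [0.0]
for _ in range(n):
    log_price.append(log_price[-1] + random.gauss(mu, sigma))
price = [math.exp(lp) for lp in log_price]

def excess_kurtosis(xs):
    """Sample excess kurtosis (0 for a normal distribution)."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    m4 = sum((x - m) ** 4 for x in xs) / len(xs)
    return m4 / var ** 2 - 3.0

log_rets = [log_price[i + 1] - log_price[i] for i in range(n)]
diffs = [price[i + 1] - price[i] for i in range(n)]
```

The log returns show essentially zero excess kurtosis, while the differences show large excess kurtosis purely as an artifact of the growing price level, not of any non-Gaussian return process.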
Goodwin accelerator model revisited with fixed time delays
Matsumoto, Akio; Merlone, Ugo; Szidarovszky, Ferenc
2018-05-01
The dynamics of Goodwin's accelerator business cycle model is reconsidered. The model is characterized by a nonlinear accelerator and an investment time delay. The role of the nonlinearity in the birth of persistent oscillations is fully discussed in the existing literature. On the other hand, the role of the delay has not yet received much attention. The purpose of this paper is to show that the delay really matters. In the original framework of Goodwin [6], it is first demonstrated that there is a threshold value of the delay: limit cycles arise for values smaller than the threshold, and sawtooth oscillations for larger values. In the extended framework, in which a consumption or saving delay is introduced in addition to the investment delay, three main results are demonstrated under the assumption that the investment and consumption delays are of identical length. First, the dynamics with a consumption delay is basically the same as that of the single-delay model. Second, in the case of a saving delay, the steady state can coexist with the stable and unstable limit cycles in the stable case. Third, in the unstable case, there is an interval of delay in which the limit cycle or the sawtooth oscillation emerges, depending on the choice of the constant initial function.
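The threshold role of a fixed delay can be illustrated with the simplest linear delay equation x′(t) = −a·x(t−τ), which loses stability exactly at aτ = π/2. This is a stand-in illustration of the delay-threshold phenomenon, not Goodwin's nonlinear model:

```python
# Threshold behaviour of the linear delay equation x'(t) = -a*x(t - tau):
# solutions decay for a*tau < pi/2 and oscillate with growing amplitude above.
def peak_amplitude(a, tau, T=200.0, dt=0.01):
    """Largest |x| over the second half of [0, T], forward-Euler with a
    constant initial function x(t) = 1 on [-tau, 0]."""
    n_delay = int(round(tau / dt))
    hist = [1.0] * (n_delay + 1)       # stored trajectory, including history
    steps = int(T / dt)
    peak_late = 0.0
    for step in range(steps):
        x_new = hist[-1] + dt * (-a * hist[-1 - n_delay])
        hist.append(x_new)
        if step > steps // 2:
            peak_late = max(peak_late, abs(x_new))
    return peak_late
```

With a = 1, the delay τ = 1 (aτ < π/2) produces a decaying oscillation, while τ = 2 (aτ > π/2) produces a growing one: the delay alone flips the qualitative behaviour, as in the paper's threshold result.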
REVISITING THE MICROLENSING EVENT OGLE 2012-BLG-0026: A SOLAR MASS STAR WITH TWO COLD GIANT PLANETS
Beaulieu, J.-P.; Batista, V.; Marquette, J.-B.
2016-01-01
Two cold gas giant planets orbiting a G-type main-sequence star in the galactic disk were previously discovered in the high-magnification microlensing event OGLE-2012-BLG-0026. Here, we present revised host star flux measurements and a refined model for the two-planet system using additional light curve data. We performed high angular resolution adaptive optics imaging with the Keck and Subaru telescopes at two epochs while the source star was still amplified. We detected the lens flux, H = 16.39 ± 0.08. The lens, a disk star, is brighter than predicted from the modeling in the original study. We revisited the light curve modeling using additional photometric data from the B&C telescope in New Zealand and the CTIO 1.3 m H-band light curve. We then include the Keck and Subaru adaptive optics observational constraints. The system is composed of a ∼4–9 Gyr lens star of M_lens = 1.06 ± 0.05 M_⊙ at a distance of D_lens = 4.0 ± 0.3 kpc, orbited by two giant planets of 0.145 ± 0.008 M_Jup and 0.86 ± 0.06 M_Jup, with projected separations of 4.0 ± 0.5 au and 4.8 ± 0.7 au, respectively. Because the lens is brighter than the source star by 16 ± 8% in H, with no other blend within one arcsec, it will be possible to estimate its metallicity using subsequent IR spectroscopy with 8–10 m class telescopes. By adding a constraint on the metallicity it will be possible to refine the age of the system.
Knowing Our Neighbors: 2MASS 2306-0502 (TRAPPIST-1) Revisited
Bartlett, Jennifer Lynn; Lurie, John; Jao, W.-C.; Ianna, P. A.; Riedel, A.; Finch, C.; Winters, J.; Subasavage, J.; Henry, T.
2018-01-01
Obtaining a well-understood, volume-limited (and ultimately volume-complete) sample of stellar systems within 25 pc is essential for determining the stellar luminosity function, the mass-luminosity relationship, the stellar velocity distribution, and the stellar multiplicity fraction. Such a sample also provides insight into the local star formation history. Towards that end, the Research Consortium On Nearby Stars (RECONS) measures trigonometric parallaxes to establish which systems truly lie within the 25-pc radius of the Solar Neighborhood. Recent observations with the CTIO/SMARTS 0.9-m telescope allow us to update the astrometry and VRI photometry for 2MASS J23062928-0502285 (TRAPPIST-1). Extrasolar planet searches by others detected 7 Earth-sized planets transiting this cool dwarf. Based on our 2004–2016 observations, we measure a parallax of 78.76 ± 1.04 mas with a proper motion of 1034.8 ± 0.3 mas/yr at position angle 118.5° ± 0.03° for 2MASS 2306-0502. During this 12.2-year period, we did not detect any perturbations in the astrometric residuals. Because this parallax is independent of the earlier CTIOPI/SMARTS 1.5-m result, we calculate its weighted mean parallax to be 79.29 ± 0.96 mas (12.6 ± 0.2 pc), which is ~4% farther than the original distance. Our improved parallax implies its radius and those of its planets would be ~4% larger than previously reported. During our astrometric observations, 2MASS 2306-0502 demonstrated an overall photometric variability of 11.6 mmag in I-band, which is less than the 20-mmag limit for significant variability. Removing a July 2009 flaring event drops the mean variability to 8.2 mmag. Our VRI photometry indicates the brightness of 2MASS 2306-0502 is 18.75, 16.54, 14.10 mag, respectively, based on 3 nights. Even as we continue to look for new neighbors, we should also keep an eye on old friends. NSF grants AST 05-07711 and AST 09-08402, NASA-SIM, Georgia State University, the University of Virginia, Hampden-Sydney College, and the
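The weighted mean parallax quoted above is the standard inverse-variance combination of independent measurements, and the distance follows from d[pc] = 1000/p[mas]. A generic sketch (the earlier 1.5-m parallax value is not quoted in the abstract, so only the combination rule and the quoted mean are used here):

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    ws = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(ws, values)) / sum(ws)
    return mean, 1.0 / math.sqrt(sum(ws))

# Distance from the quoted weighted-mean parallax: d [pc] = 1000 / p [mas]
p_mas = 79.29
d_pc = 1000.0 / p_mas
```

Note that combining two equally precise parallaxes shrinks the uncertainty by a factor of √2, which is why adding the independent 0.9-m series tightens the quoted result.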
Hydraulic modeling support for conflict analysis: The Manayunk canal revisited
Chadderton, R.A.; Traver, R.G.; Rao, J.N.
1992-01-01
This paper presents a study which used a standard, hydraulic computer model to generate detailed design information to support conflict analysis of a water resource use issue. As an extension of previous studies, the conflict analysis in this case included several scenarios for stability analysis - all of which reached the conclusion that compromising, shared access to the water resources available would result in the most benefits to society. This expected equilibrium outcome was found to maximize benefit-cost estimates. 17 refs., 1 fig., 2 tabs
Foam Assisted WAG, Snorre Revisit with New Foam Screening Model
Spirov, Pavel; Rudyk, Svetlana Nikolayevna; Khan, Arif
2012-01-01
This study deals with a simulation model of the Foam Assisted Water Alternating Gas (FAWAG) method that had been implemented in two Norwegian reservoirs. Studied in a number of pilot projects, the method proved successful, but field-scale simulation was never understood properly. New phenomenological … of the simulation contributes to more precise planning of the schedule of water and gas injection, prediction of the injection results, and evaluation of the method's efficiency. The testing of the surfactant properties allows making a grounded choice of which surfactant to use. The analysis of the history match gives insight…
Revisiting a model-independent dark energy reconstruction method
Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)
2012-09-15
In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features as drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow us to regard them on the same quality basis as SNeIa. We find there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. Then, we try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. But, on the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)
The Zipf Law revisited: An evolutionary model of emerging classification
Levitin, L.B. [Boston Univ., MA (United States); Schapiro, B. [TINA, Brandenburg (Germany); Perlovsky, L. [NRC, Wakefield, MA (United States)
1996-12-31
Zipf's Law is a remarkable rank-frequency relationship observed in linguistics (the frequencies of the use of words are approximately inversely proportional to their ranks in the decreasing frequency order) as well as in the behavior of many complex systems of surprisingly different nature. We suggest an evolutionary model of emerging classification of objects into classes corresponding to concepts and denoted by words. The evolution of the system is derived from two basic assumptions: first, the probability to recognize an object as belonging to a known class is proportional to the number of objects in this class already recognized, and, second, there exists a small probability to observe an object that requires creation of a new class ("mutation" that gives birth to a new "species"). It is shown that the populations of classes in such a system obey the Zipf Law provided that the rate of emergence of new classes is small. The model leads also to the emergence of a second-tier structure of "super-classes" - groups of classes with almost equal populations.
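The two assumptions define a Simon-style preferential attachment process that is straightforward to simulate; the mutation rate and population size below are arbitrary illustrative choices:

```python
import random

# Evolutionary classification: each new object either founds a new class with
# small probability alpha ("mutation"), or joins an existing class with
# probability proportional to that class's current population.
random.seed(7)
alpha, n_objects = 0.01, 10000

owners = [0]    # class index of each recognized object so far
pops = [1]      # current population of each class
for _ in range(n_objects - 1):
    if random.random() < alpha:
        pops.append(1)                 # a new class ("species") is born
        owners.append(len(pops) - 1)
    else:
        # picking a recognized object uniformly implements proportional attachment
        c = owners[random.randrange(len(owners))]
        pops[c] += 1
        owners.append(c)

ranked = sorted(pops, reverse=True)    # rank-ordered class populations
```

With a small alpha the ranked populations span several orders of magnitude, the heavy-tailed, Zipf-like regime the abstract describes; raising alpha flattens the distribution toward equal populations.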
The hierarchy problem of the electroweak standard model revisited
Jegerlehner, Fred
2013-05-01
A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase a light Higgs turns out to be natural, as it is self-protected and self-tuned by the Higgs mechanism. This means that the scalar Higgs need not be protected by any extra symmetry, specifically supersymmetry, in order not to be much heavier than the other SM particles, which are protected by gauge or chiral symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for supersymmetric extensions of the SM; on the contrary, these effects play an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation, as supported by observation.
Hubbert's Oil Peak Revisited by a Simulation Model
Giraud, P.N.; Sutter, A.; Denis, T.; Leonard, C.
2010-01-01
As conventional oil reserves are declining, the debate on the oil production peak has become a burning issue. An increasing number of papers refer to Hubbert's peak oil theory to forecast the date of the production peak, both at regional and world levels. However, in our view, this theory lacks microeconomic foundations. Notably, it does not assume that exploration and production decisions in the oil industry depend on market prices. In an attempt to overcome these shortcomings, we have built an adaptive model accounting for the behavior of one agent, standing for the competitive exploration-production industry, subject to incomplete but improving information on the remaining reserves. Our work yields challenging results on the reasons for a Hubbert-type oil peak, lying mainly 'above the ground', both at regional and world levels, and on the shape of the production and marginal cost trajectories.
The hierarchy problem of the electroweak standard model revisited
Jegerlehner, Fred [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2013-05-15
A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase, a light Higgs turns out to be natural, as it is self-protected and self-tuned by the Higgs mechanism. This means that the scalar Higgs need not be protected by any extra symmetry, specifically supersymmetry, in order not to be much heavier than the other SM particles, which are protected by gauge or chiral symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for supersymmetric extensions of the SM, but in contrast plays an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation, as supported by observation.
Temperature Effect on Micelle Formation: Molecular Thermodynamic Model Revisited.
Khoshnood, Atefeh; Lukanov, Boris; Firoozabadi, Abbas
2016-03-08
Temperature affects the aggregation of macromolecules such as surfactants, polymers, and proteins in aqueous solutions. The effect on the critical micelle concentration (CMC) is often nonmonotonic. In this work, the effect of temperature on the micellization of ionic and nonionic surfactants in aqueous solutions is studied using a molecular thermodynamic model. Previous studies based on this technique have predicted monotonic behavior for ionic surfactants. Our investigation shows that the choice of tail transfer energy to describe the hydrophobic effect between the surfactant tails and the polar solvent molecules plays a key role in the predicted CMC. We modify the tail transfer energy by taking into account the effect of the surfactant head on the neighboring methylene group. The modification improves the description of the CMC and the predicted micellar size for aqueous solutions of sodium n-alkyl sulfate, dodecyl trimethylammonium bromide (DTAB), and n-alkyl polyoxyethylene. The new tail transfer energy describes the nonmonotonic behavior of the CMC versus temperature. In the DTAB-water system, we redefine the head size by including the methylene group next to the nitrogen in the head. The change in the head size, along with our modified tail transfer energy, improves the CMC and aggregation size prediction significantly. Tail transfer is a dominant energy contribution in micellar and microemulsion systems. It also promotes the adsorption of surfactants at fluid-fluid interfaces and affects the formation of adsorbed layers at fluid-solid interfaces. Our proposed modifications have direct applications in the thermodynamic modeling of the effect of temperature on molecular aggregation, both in the bulk and at interfaces.
Revisiting non-Gaussianity from non-attractor inflation models
Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei
2018-05-01
Non-attractor inflation is known as the only single-field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instantaneous and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition, such as in the case of smooth transition or some sharp transition scenarios, the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extreme cases of sharp transition, where the super-horizon modes freeze immediately after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.
Revisiting R-invariant direct gauge mediation
Chiang, Cheng-Wei [Center for Mathematics and Theoretical Physics andDepartment of Physics, National Central University,Taoyuan, Taiwan 32001, R.O.C. (China); Institute of Physics, Academia Sinica,Taipei, Taiwan 11529, R.O.C. (China); Physics Division, National Center for Theoretical Sciences,Hsinchu, Taiwan 30013, R.O.C. (China); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan); Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States); ICRR, University of Tokyo,Kashiwa, Chiba 277-8582 (Japan); Ibe, Masahiro [Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan); ICRR, University of Tokyo,Kashiwa, Chiba 277-8582 (Japan); Yanagida, Tsutomu T. [Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan)
2016-03-21
We revisit a special model of gauge mediated supersymmetry breaking, the “R-invariant direct gauge mediation.” We pay particular attention to whether the model is consistent with the minimal model of the μ-term, i.e., a simple mass term of the Higgs doublets in the superpotential. Although the incompatibility is highlighted in view of the current experimental constraints on the superparticle masses and the observed Higgs boson mass, the minimal μ-term can be consistent with the R-invariant gauge mediation model via a careful choice of model parameters. We derive an upper limit on the gluino mass from the observed Higgs boson mass. We also discuss whether the model can explain the 3σ excess of the Z+jets+E_T^miss events reported by the ATLAS collaboration.
Revisiting of Stommel's model for the understanding of the abrupt climate change
Scatamacchia, R.; Purini, R.; Rafanelli, C.
2010-01-01
Despite the enormous number of papers devoted to modelling climate change, the pioneering Stommel paper (1961) remains a valid tool for understanding the basic mechanism that governs abrupt climate change, i.e. the existence of multiple equilibria in the governing non-linear equations. Using non-dimensional quantities, Stommel did not provide any explicit information about the temporal scale affecting the process under examination when the control parameters are varied. On the basis of this consideration, the present paper revisits the Stommel theory, putting some emphasis on a quantitative estimate of how variations of the control parameters modify the fundamental motor of climate change, i.e. the thermohaline circulation.
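The multiple-equilibria mechanism in question can be illustrated with a schematic one-variable caricature of the two-box model (this reduction and its notation are illustrative, not Stommel's own nondimensionalization): with the temperature difference relaxed to a fixed value, the nondimensional salinity difference S evolves as dS/dt = F - |1 - S|·S, where F is a freshwater forcing. Steady states can be found numerically:

```python
def equilibria(F, lo=0.0, hi=3.0, n=3000):
    """Steady states of the schematic reduced box model
        dS/dt = F - |1 - S| * S
    located by scanning for sign changes of the right-hand side on a
    grid over [lo, hi] and refining each bracket by bisection."""
    g = lambda s: F - abs(1.0 - s) * s
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if g(a) * g(b) < 0.0:
            while b - a > 1e-12:
                m = 0.5 * (a + b)
                a, b = (m, b) if g(a) * g(m) > 0.0 else (a, m)
            roots.append(0.5 * (a + b))
    return roots

# Weak forcing: three coexisting equilibria (the abrupt-change regime);
# strong forcing: only one equilibrium survives.
print(len(equilibria(0.20)))  # → 3
print(len(equilibria(0.30)))  # → 1
```

The coexistence window (here 0 < F < 1/4 on the |1 - S| < 1 branch) is the non-linear structure that allows the circulation to jump abruptly between states as the control parameter F is varied.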
Revisiting directed flow in relativistic heavy-ion collisions from a multiphase transport model
Guo, Chong-Qiang; Zhang, Chun-Jian; Xu, Jun
2017-12-01
We have revisited several interesting questions on how the rapidity-odd directed flow is developed in relativistic 197Au+197Au collisions at √s_NN = 200 and 39 GeV based on a multiphase transport model. As the partonic phase evolves with time, the slope of the parton directed flow in the midrapidity region changes from negative to positive as a result of the later dynamics at 200 GeV, while it remains negative at 39 GeV due to the shorter lifetime of the partonic phase. The directed flow splitting for various quark species due to their different initial eccentricities is observed at 39 GeV, while the splitting is very small at 200 GeV. From a dynamical coalescence algorithm with Wigner functions, we found that the directed flow of hadrons is a result of competition between the coalescence in momentum and coordinate space as well as further modifications by the hadronic rescatterings.
Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim
2015-06-21
The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands, because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation by thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation by thin biological films, fringes and chemical absorbance can to a good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated on simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
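Because fringes and chemical absorbance are additive to a good approximation, an EMSC-style correction amounts to a linear least-squares fit of the measured spectrum against a reference spectrum plus baseline and fringe basis functions. A toy sketch of that idea (a single known fringe frequency and this particular design matrix are simplifying assumptions, not the published method):

```python
import numpy as np

def emsc_defringe(spectrum, reference, wavenumbers, fringe_freq):
    """Toy EMSC-style correction: model the measured spectrum as
        b*reference + c0 + c1*nu + a1*sin(2*pi*f*nu) + a2*cos(2*pi*f*nu),
    fit the coefficients by least squares, subtract the baseline and
    fringe terms, and rescale the chemical part by 1/b."""
    nu = np.asarray(wavenumbers, dtype=float)
    D = np.column_stack([
        reference,                             # chemical absorbance shape
        np.ones_like(nu),                      # constant baseline
        nu,                                    # linear baseline
        np.sin(2 * np.pi * fringe_freq * nu),  # fringe components at an
        np.cos(2 * np.pi * fringe_freq * nu),  # assumed known frequency
    ])
    coef, *_ = np.linalg.lstsq(D, spectrum, rcond=None)
    physical = D[:, 1:] @ coef[1:]             # baseline + fringe part
    return (spectrum - physical) / coef[0]

# Synthetic check: a Lorentzian band plus fringes and an offset
nu = np.linspace(1000.0, 1800.0, 800)
band = 1.0 / (1.0 + ((nu - 1450.0) / 30.0) ** 2)
measured = 0.8 * band + 0.05 * np.sin(2 * np.pi * 0.01 * nu) + 0.1
corrected = emsc_defringe(measured, band, nu, 0.01)
```

On this synthetic example the recovered spectrum matches the pure band shape, since the design matrix spans the simulated signal exactly; real EMSC implementations estimate the fringe model from the data rather than assuming its frequency.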
The measurement of the W mass at the LHC: shortcuts revisited
Dydak, F; Voss, R; CERN. Geneva. The LHC experiments Committee; LHCC
2009-01-01
The claim that the W mass will be measured at the LHC with a precision of O(10) MeV is critically reviewed. It is argued that in order to achieve such precision, a considerably better knowledge of the u_v, d_v, s, c, and b structure functions of the proton than is available today is needed. This will make it possible to assess with adequate precision the production characteristics of the W and Z bosons in proton-proton collisions at the LHC, and their effect on the p_T spectra of charged leptons from W and Z decays. An experimental programme is suggested that will deliver the missing information. The core of this programme is a dedicated muon scattering experiment at the CERN SPS, with simultaneous measurements on hydrogen and deuterium targets.
The measurement of the W mass at the LHC: shortcuts revisited
Dydak, F; Voss, RF; CERN. Geneva. SPS and PS Experiments Committee; SPSC
2009-01-01
The claim that the W mass will be measured at the LHC with a precision of O(10) MeV is critically reviewed. It is argued that in order to achieve such precision, a considerably better knowledge of the u_v, d_v, s, c, and b structure functions of the proton than is available today is needed. This will make it possible to assess with adequate precision the production characteristics of the W and Z bosons in proton-proton collisions at the LHC, and their effect on the p_T spectra of charged leptons from W and Z decays. An experimental programme is suggested that will deliver the missing information. The core of this programme is a dedicated muon scattering experiment at the CERN SPS, with simultaneous measurements on hydrogen and deuterium targets.
Tauber, Sean; Navarro, Daniel J; Perfors, Amy; Steyvers, Mark
2017-07-01
Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.
Hömberg, D.; Patacchini, F. S.; Sakamoto, K.; Zimmer, J.
2016-01-01
The classical Johnson-Mehl-Avrami-Kolmogorov approach for nucleation and growth models of diffusive phase transitions is revisited and applied to model the growth of ferrite in multiphase steels. For the prediction of mechanical properties of such steels, a deeper knowledge of the grain structure is essential. To this end, a Fokker-Planck evolution law for the volume distribution of ferrite grains is developed and shown to exhibit a log-normally distributed solution. Numerical parameter studies...
Revisiting the mesoscopic Termonia and Smith model for deformation of polymers
Krishna Reddy, B; Basu, Sumit; Estevez, Rafael
2008-01-01
Mesoscopic models for polymers have the potential to link macromolecular properties with mechanical behaviour without being too expensive computationally. An interesting, popular and rather simple model to this end was proposed by Termonia and Smith (1987 Macromolecules 20 835–8). In this model the macromolecular ensemble is viewed as a collection of two-dimensional self-avoiding random walks on a regular lattice whose lattice points represent entanglements. The load is borne by members representing van der Waals bonds as well as macromolecular strands between two entanglement points. Polymers simulated via this model exhibited remarkable qualitative similarity with real polymers with respect to their molecular weight, entanglement spacing, strain rate and temperature dependence. In this work, we revisit this model and present a detailed reformulation within the framework of a finite deformation finite element scheme. The physical origins of each of the parameters in the model are investigated, and the inherent assumptions in the model which contribute to its success are critically probed.
A Hydrostatic Paradox Revisited
Ganci, Salvatore
2012-01-01
This paper revisits a well-known hydrostatic paradox, observed when turning upside down a glass partially filled with water and covered with a sheet of light material. The phenomenon is studied in its most general form by including the mass of the cover. A historical survey of this experiment shows that a common misunderstanding of the phenomenon…
MASS CUSTOMIZATION and PRODUCT MODELS
Svensson, Carsten; Malis, Martin
2003-01-01
... to the product. Through the application of a mass customization strategy, companies have a unique opportunity to create increased customer satisfaction. In a customized production, knowledge and information have to be easily accessible, since every product is a unique combination of information. If the dream of a customized alternative instead of a uniform mass-produced product is to become a reality, then cross-organizational efficiency must be kept at a competitive level. This is the real challenge for mass customization. A radical restructuring of both the internal and the external knowledge management systems ...
An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited
Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P
2017-01-01
An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly using a scheme that involves only a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example the ground state. Some low-lying level energies of the model for several sets of parameters are calculated, one set of which is compared to results obtained from Braak's recently proposed exact solution. It is shown that the derivative of the entanglement measure, defined in terms of the reduced von Neumann entropy, with respect to the coupling parameter does reach a maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as the quantum Rabi model.
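As a baseline against which such schemes are usually checked, the Rabi spectrum can also be obtained by brute-force diagonalization in a truncated Fock basis. A minimal sketch (this is the standard truncation approach, not the progressive scheme of the paper; the Hamiltonian convention and parameter values are illustrative):

```python
import numpy as np

def rabi_ground_energy(omega, Delta, g, n_max=60):
    """Ground-state energy of the quantum Rabi model, taken here as
        H = omega * a†a + (Delta/2) * sigma_z + g * sigma_x * (a + a†),
    by exact diagonalization in a Fock basis truncated at n_max photons."""
    dim = 2 * (n_max + 1)
    H = np.zeros((dim, dim))
    # basis ordering: index = 2*n + s with s = 0 (spin up), 1 (spin down)
    for n in range(n_max + 1):
        for s in (0, 1):
            i = 2 * n + s
            H[i, i] = omega * n + (Delta / 2) * (1 - 2 * s)
            # sigma_x flips the spin; (a + a†) raises/lowers the photon
            # number, so the coupling links |n, s> to |n+1, 1-s>
            if n + 1 <= n_max:
                j = 2 * (n + 1) + (1 - s)
                H[i, j] = H[j, i] = g * np.sqrt(n + 1)
    return float(np.linalg.eigh(H)[0][0])

# decoupled limit g = 0: the ground energy is exactly -Delta/2
print(round(rabi_ground_energy(1.0, 1.0, 0.0), 6))  # → -0.5
print(rabi_ground_energy(1.0, 1.0, 0.5))
```

Convergence in `n_max` should be checked for strong coupling; the progressive scheme of the paper avoids diagonalizing the full truncated matrix, which is its efficiency advantage for a single targeted eigenstate.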
Revisiting the EC/CMB model for extragalactic large scale jets
Lucchini, M.; Tavecchio, F.; Ghisellini, G.
2017-04-01
One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of flat-spectrum radio quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the cosmic microwave background (EC/CMB) as the mechanism responsible for the high-energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work, we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high-energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts and would have been missed in all previous X-ray surveys due to selection effects.
Masses in the Weinberg-Salam model
Flores, F.A.
1984-01-01
This thesis is a detailed discussion of the currently existing limits on the masses of Higgs scalars and fermions in the Weinberg-Salam model. The spontaneous breaking of the gauge symmetry of the model generates arbitrary masses for Higgs scalars and fermions, which for the known fermions have to be set to their experimentally known values. In this thesis, the author discusses in detail both the theoretical and experimental constraints on these otherwise arbitrary masses.
A goodness-of-fit test for occupancy models with correlated within-season revisits
Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.
2016-01-01
Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys of sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
A classical model wind turbine wake “blind test” revisited by remote sensing lidars
Sjöholm, Mikael; Angelou, Nikolas; Nielsen, Morten Busk
2017-01-01
One of the classical model wind turbine wake “blind test” experiments conducted in the boundary-layer wind tunnel at NTNU in Trondheim, and used for benchmarking of numerical flow models, has been revisited by remote sensing lidars in a joint experiment called “Lidars For Wind Tunnels” (L4WT) under ... was D = 0.894 m and it was designed for a tip speed ratio (TSR) of 6. However, the TSRs used were 3, 6, and 10 at a free-stream velocity of 10 m/s. Due to geometrical constraints imposed by, for instance, the locations of the wind tunnel windows, all measurements were performed in the very same vertical ... cross-section of the tunnel, and the various downstream distances of the wake, i.e. 1D, 3D, and 5D, were achieved by re-positioning the turbine. The approach used allows for unique studies of the influence of the inherent lidar spatial filtering on previously both experimentally and numerically well ...
Galindo-Nava, E.I., E-mail: eg375@cam.ac.uk; Rae, C.M.F.
2016-01-10
A new approach for modelling dislocation creep during primary and secondary creep in FCC metals is proposed. The Orowan equation and dislocation behaviour at the grain scale are revisited to include the effects of different microstructures, such as the grain size and solute atoms. Dislocation activity is proposed to follow a jog-diffusion law. It is shown that the activation energy for cross-slip E_cs controls dislocation mobility and the strain increments during secondary creep. This is confirmed by successfully comparing E_cs with the experimentally determined activation energy during secondary creep in 5 FCC metals. It is shown that the inverse relationship between the grain size and dislocation creep is attributed to the higher number of strain increments at the grain level dominating their magnitude as the grain size decreases. An alternative approach describing solid solution strengthening effects in nickel alloys is presented, where the dislocation mobility is reduced by dislocation pinning around solute atoms. An analysis of the solid solution strengthening effects of typical elements employed in Ni-base superalloys is also discussed. The model results are validated against measurements of Cu, Ni, Ti and 4 Ni-base alloys over a wide range of deformation conditions and grain sizes.
Double beta decay and neutrino mass models
Helo, J.C. [Universidad Técnica Federico Santa María, Centro-Científico-Tecnológico de Valparaíso, Casilla 110-V, Valparaíso (Chile); Hirsch, M. [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Ota, T. [Department of Physics, Saitama University, Shimo-Okubo 255, 338-8570 Saitama-Sakura (Japan); Santos, F.A. Pereira dos [Departamento de Física, Pontifícia Universidade Católica do Rio de Janeiro,Rua Marquês de São Vicente 225, 22451-900 Gávea, Rio de Janeiro (Brazil)
2015-05-19
Neutrinoless double beta decay makes it possible to constrain lepton number violating extensions of the standard model. If neutrinos are Majorana particles, the mass mechanism will always contribute to the decay rate; however, it is not a priori guaranteed to be the dominant contribution in all models. Here, we discuss from the theory point of view whether the mass mechanism dominates or not. We classify all possible (scalar-mediated) short-range contributions to the decay rate according to the loop level at which the corresponding models will generate Majorana neutrino masses, and discuss the expected relative size of the different contributions to the decay rate in each class. Our discussion is general for models based on the SM gauge group but does not cover models with an extended gauge sector. We also work out in some detail the phenomenology of one concrete 2-loop model in which both the mass mechanism and the short-range diagram might lead to competitive contributions.
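The mass-mechanism contribution referred to above is governed by the effective Majorana mass m_ee = |Σ_i U_ei² m_i|. A small sketch of this standard textbook formula in the usual PMNS parametrization (the function and the oscillation parameter values below are illustrative approximations, not taken from this paper):

```python
import cmath
import math

def m_betabeta(m1, dm2_sol, dm2_atm, th12, th13, alpha21=0.0, alpha31=0.0):
    """Effective Majorana mass |m_ee| = |sum_i U_ei^2 m_i| controlling the
    mass-mechanism amplitude of neutrinoless double beta decay, assuming
    normal mass ordering. Masses in eV, mass-squared splittings in eV^2,
    mixing angles in radians, alpha21/alpha31 the Majorana phases."""
    m2 = math.sqrt(m1 ** 2 + dm2_sol)   # solar splitting fixes m2
    m3 = math.sqrt(m1 ** 2 + dm2_atm)   # atmospheric splitting fixes m3
    c12, s12 = math.cos(th12), math.sin(th12)
    c13, s13 = math.cos(th13), math.sin(th13)
    amp = (c12 ** 2 * c13 ** 2 * m1
           + s12 ** 2 * c13 ** 2 * m2 * cmath.exp(1j * alpha21)
           + s13 ** 2 * m3 * cmath.exp(1j * alpha31))
    return abs(amp)

# illustrative, approximate global-fit oscillation values
print(m_betabeta(0.0, 7.4e-5, 2.5e-3, 0.59, 0.15))
```

For a vanishing lightest mass and normal ordering this gives a few meV, which is why the question of whether short-range contributions can dominate over the mass mechanism matters phenomenologically.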
Experimental tests for the Babu-Zee two-loop model of Majorana neutrino masses
Sierra, Diego Aristizabal; Hirsch, Martin
2006-01-01
The smallness of the observed neutrino masses might have a radiative origin. Here we revisit a specific two-loop model of neutrino mass, independently proposed by Babu and Zee. We point out that current constraints from neutrino data can be used to derive strict lower limits on the branching ratio of flavour changing charged lepton decays, such as μ→eγ. Non-observation of Br(μ→eγ) at the level of 10^-13 would rule out singly charged scalar masses smaller than 590 GeV (5.04 TeV) in the case of a normal (inverted) neutrino mass hierarchy. Conversely, decay branching ratios of the non-standard scalars of the model can be fixed by the measured neutrino angles (and mass scale). Thus, if the scalars of the model are light enough to be produced at the LHC or ILC, measuring their decay properties would serve as a direct test of the model as the origin of neutrino masses.
Experimental tests for the Babu-Zee two-loop model of Majorana neutrino masses
Aristizabal, D.
2006-01-01
The smallness of the observed neutrino masses might have a radiative origin. Here we revisit a specific two-loop model of neutrino mass, independently proposed by Babu and Zee. We point out that current constraints from neutrino data can be used to derive strict lower limits on the branching ratio of flavour changing charged lepton decays, such as μ→eγ. Non-observation of Br(μ→eγ) at the level of 10^-13 would rule out singly charged scalar masses smaller than 590 GeV (5.04 TeV) in the case of a normal (inverted) neutrino mass hierarchy. Conversely, decay branching ratios of the non-standard scalars of the model can be fixed by the measured neutrino angles (and mass scale). Thus, if the scalars of the model are light enough to be produced at the LHC or ILC, measuring their decay properties would serve as a direct test of the model as the origin of neutrino masses.
Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond
Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2008-11-15
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [113,168] and [180,225] GeV, respectively.
Relativistic mean-field mass models
Pena-Arteaga, D.; Goriely, S.; Chamel, N. [Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium)
2016-10-15
We present a new effort to develop viable mass models within the relativistic mean-field approach with density-dependent meson couplings, separable pairing and microscopic estimations for the translational and rotational correction energies. Two interactions, DD-MEB1 and DD-MEB2, are fitted to essentially all experimental masses, and also to charge radii and infinite nuclear matter properties as determined by microscopic models using realistic interactions. While DD-MEB1 includes the σ, ω and ρ meson fields, DD-MEB2 also considers the δ meson. Both mass models describe the 2353 experimental masses with a root mean square deviation of about 1.1 MeV and the 882 measured charge radii with a root mean square deviation of 0.029 fm. In addition, we show that the Pb isotopic shifts and moments of inertia are rather well reproduced, and the equation of state in pure neutron matter as well as symmetric nuclear matter are in relatively good agreement with existing realistic calculations. Both models predict a maximum neutron-star mass of more than 2.6 solar masses, and thus are able to accommodate the heaviest neutron stars observed so far. However, the new Lagrangians, like all previously determined RMF models, present the drawback of being characterized by a low effective mass, which leads to strong shell effects due to the strong coupling between the spin-orbit splitting and the effective mass. Complete mass tables have been generated and a comparison with other mass models is presented. (orig.)
Revisiting directed flow in relativistic heavy-ion collisions from a multiphase transport model
Guo, Chong-Qiang; Zhang, Chun-Jian [Chinese Academy of Sciences, Shanghai Institute of Applied Physics, Shanghai (China); University of Chinese Academy of Sciences, Beijing (China); Xu, Jun [Chinese Academy of Sciences, Shanghai Institute of Applied Physics, Shanghai (China)
2017-12-15
We have revisited several interesting questions on how the rapidity-odd directed flow develops in relativistic {sup 197}Au + {sup 197}Au collisions at √(s{sub NN}) = 200 and 39 GeV based on a multiphase transport model. As the partonic phase evolves with time, the slope of the parton directed flow at midrapidity changes from negative to positive as a result of the later dynamics at 200 GeV, while it remains negative at 39 GeV due to the shorter lifetime of the partonic phase. The directed flow splitting for various quark species due to their different initial eccentricities is observed at 39 GeV, while the splitting is very small at 200 GeV. From a dynamical coalescence algorithm with Wigner functions, we found that the directed flow of hadrons is a result of competition between the coalescence in momentum and coordinate space as well as further modifications by the hadronic rescatterings. (orig.)
Cerezo, Javier; Santoro, Fabrizio
2016-10-11
Vertical models for the simulation of spectroscopic line shapes expand the potential energy surface (PES) of the final state around the equilibrium geometry of the initial state. These models provide, in principle, a better approximation of the region of the band maximum. By contrast, adiabatic models expand each PES around its own minimum. In the harmonic approximation, when the minimum energy structures of the two electronic states are connected by large structural displacements, adiabatic models can break down and are outperformed by vertical models. However, the practical application of vertical models faces issues related to the necessity of performing a frequency analysis at a nonstationary point. In this contribution we revisit vertical models in the harmonic approximation adopting both Cartesian (x) and valence internal curvilinear coordinates (s). We show that when x coordinates are used, the vibrational analysis at nonstationary points leads to a deficient description of low-frequency modes, for which spurious imaginary frequencies may even appear. This issue is solved when s coordinates are adopted. It is however necessary to account for the second derivative of s with respect to x, which here we compute analytically. We compare the performance of the vertical model in the s-frame with respect to adiabatic models and previously proposed vertical models in the x- or Q1-frame, where Q1 are the normal coordinates of the initial state computed as combinations of Cartesian coordinates. We show that for rigid molecules the vertical approach in the s-frame provides a description of the final state very close to the adiabatic picture. For sizable displacements it is a solid alternative to adiabatic models, and it is not affected by the issues of vertical models in the x- and Q1-frames, which mainly arise when temperature effects are included. In principle the G matrix depends on s, and this creates nonorthogonality problems of the Duschinsky matrix connecting the normal
Baryons electromagnetic mass splittings in potential models
Genovese, M.; Richard, J.-M.; Silvestre-Brac, B.; Varga, K.
1998-01-01
We study electromagnetic mass splittings of charmed baryons. We point out discrepancies among theoretical predictions in non-relativistic potential models; none of these predictions seems supported by experimental data. A new calculation is presented.
Testing substellar models with dynamical mass measurements
Liu M.C.
2011-07-01
We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.
Relating masses and mixing angles. A model-independent model
Hollik, Wolfgang Gregor [DESY, Hamburg (Germany); Saldana-Salazar, Ulises Jesus [CINVESTAV (Mexico)
2016-07-01
In general, mixing angles and fermion masses are seen to be independent parameters of the Standard Model. However, exploiting the observed hierarchy in the masses, it is viable to construct the mixing matrices for both quarks and leptons in terms of the corresponding mass ratios only. A closer view on the symmetry properties leads to potential realizations of that approach in extensions of the Standard Model. We discuss the application in the context of flavored multi-Higgs models.
Modeling of alpha mass-efficiency curve
Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.
2005-01-01
We present a model for the efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass, for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve, and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: {sup 238}U, {sup 230}Th, {sup 239}Pu, {sup 241}Am, and {sup 244}Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface.
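The linear-then-power-law shape described above can be sketched as a simple piecewise curve: linear decline at low sample mass, power-law decline at high mass, joined continuously at a crossover mass. All coefficient values below are hypothetical placeholders for illustration, not the paper's fitted LPL parameters.

```python
import numpy as np

def lpl_efficiency(m, eps0=0.40, a=0.02, b=0.8, m_c=5.0):
    """Illustrative LPL-style mass-efficiency curve.

    eps0, a, b and the crossover mass m_c (mg) are assumed values:
    below m_c the efficiency falls linearly with sample mass; above it,
    self-absorption drives a power-law decline. The power-law prefactor
    is fixed so the curve is continuous at m = m_c.
    """
    m = np.asarray(m, dtype=float)
    linear = eps0 * (1.0 - a * m)
    C = eps0 * (1.0 - a * m_c) * m_c**b   # continuity at the crossover
    power = C * m**(-b)
    return np.where(m <= m_c, linear, power)

print(lpl_efficiency([1.0, 5.0, 50.0]))
```

Fitting such a curve to measured (mass, efficiency) pairs would then amount to estimating the four coefficients, e.g. by least squares.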
Casas, Laura; Szűcs, Réka; Vij, Shubha; Goh, Chin Heng; Kathiresan, Purushothaman; Németh, Sándor; Jeney, Zsigmond; Bercsényi, Miklós; Orbán, László
2013-01-01
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. © 2013 Casas et al.
Tseng, W. L.; Johnson, R. E.; Tucker, O. J.; Perry, M. E.; Ip, W. H.
2017-12-01
During the Cassini Grand Finale mission, the spacecraft performed, for the first time, in-situ measurements of Saturn's upper atmosphere and its rings, providing critical information for understanding the coupling dynamics between the main rings and the Saturnian system. The ring atmosphere is the source of neutrals (i.e., O2, H2, H; Tseng et al., 2010; 2013a), which are primarily generated by photolytic decomposition of water ice (Johnson et al., 2006), and of plasma (i.e., O2+ and H2+; Tseng et al., 2011) in the Saturnian magnetosphere. In addition, the main rings interact strongly with Saturn's atmosphere and ionosphere (i.e., as a source of oxygen into Saturn's upper atmosphere and/or the "ring rain" of O'Donoghue et al., 2013). Furthermore, the near-ring plasma environment is complicated by neutrals from both the seasonally dependent ring atmosphere and the Enceladus torus (Tseng et al., 2013b), and, possibly, by small grains from the main and tenuous F and G rings (Johnson et al. 2017). The data now coming from the Cassini Grand Finale mission already shed light on the dominant physics and chemistry in this region of Saturn's magnetosphere, for example, the presence of carbonaceous material from meteorite impacts in the main rings and the similar distributions of the gas species in the ring atmosphere. We will revisit our ring atmosphere/ionosphere model to study details such as the source mechanism for the organic material and the neutral-grain-plasma interaction processes.
Running-mass inflation model and WMAP
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.
2004-01-01
We consider the observational constraints on the running-mass inflationary model and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain any significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa) we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable.
Model for the generation of leptonic mass
Fryberger, D.
1979-01-01
A self-consistent model for the generation of leptonic mass is developed. In this model it is assumed that bare masses are zero, all of the (charged) leptonic masses being generated by the QED self-interaction. A perturbation expansion for the QED self-mass is formulated, and contact is made between this expansion and the work of Landau and his collaborators. In order to achieve a finite result using this expansion, it is assumed that there is a cutoff at the Landau singularity and that the functional form of the (self-mass) integrand is the same beyond that singularity as it is below. Physical interpretations of these assumptions are discussed. Self-consistency equations are obtained which show that the Landau singularity is in the neighborhood of the Planck mass. This result implies that, as originally suggested by Landau, gravitation may play a role in an ultraviolet cutoff for QED. These equations also yield estimates for the (effective) number of additional pointlike particles that electromagnetically couple to the photon. This latter quantity is consistent with present data from e{sup +}e{sup -} storage rings.
Pseudoscalar meson masses in the quark model
Karl, G.
1976-10-01
Pseudoscalar meson masses and sum rules are compared in two different limits of a quark model with 4 quarks. The conventional limit corresponds to a heavy c anti-c state and generalizes ideal mixing in a nonet. The second limit corresponds to a missing SU(4) unitary singlet and appears more relevant to the masses of π, K, η, η'. If SU(3) is broken only by the mass difference between the strange and nonstrange quarks, the physical masses imply that the u anti-u, d anti-d and s anti-s pairs account for only 33% of the composition of the η'(960), while for the η(548) this fraction is 86%. If some of the remaining matter is in the form of the constituents of J/ψ, the relative rate of the decays J/ψ → η'γ vs. J/ψ → ηγ is accounted for in satisfactory agreement with experiment. (author)
Gerry, Christopher J
2012-07-01
Cross-national statistical analyses based on country-level panel data are increasingly popular in social epidemiology. To provide reliable results on the societal determinants of health, analysts must give very careful consideration to conceptual and methodological issues: aggregate (historical) data are typically compatible with multiple alternative stories of the data-generating process. Studies in this field which fail to relate their empirical approach to the true underlying data-generating process are likely to produce misleading results if, for example, they misspecify their models by failing to explore the statistical properties of the longitudinal aspect of their data or by ignoring endogeneity issues. We illustrate the importance of this extra need for care with reference to a recent debate on whether rapid mass privatisation can explain post-communist mortality fluctuations. We demonstrate that the finding that rapid mass privatisation was a "crucial determinant" of male mortality fluctuations in the post-communist world is rejected once better consideration is given to the way in which the data are generated. Copyright © 2012 Elsevier Ltd. All rights reserved.
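The methodological point here, that a pooled cross-country regression can manufacture an association which a fixed-effects (within) estimator removes, can be illustrated with a small simulation. The data-generating process below is entirely invented for the sketch (it makes no claim about the actual privatisation debate): the true policy effect is zero, but country-level baseline mortality is correlated with policy exposure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 30, 15
true_beta = 0.0   # in this simulated world the policy has NO mortality effect

# Country fixed effects correlated with policy exposure: countries with
# worse baseline mortality also adopted the policy more intensively.
alpha = rng.normal(0.0, 1.0, n_countries)
policy = 0.8 * alpha[:, None] + rng.normal(0.0, 1.0, (n_countries, n_years))
mortality = alpha[:, None] + true_beta * policy \
            + rng.normal(0.0, 0.5, (n_countries, n_years))

def ols_slope(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / (x * x).sum()

# Pooled OLS ignores the country effects and finds a spurious association.
beta_pooled = ols_slope(policy.ravel(), mortality.ravel())

# Within (fixed-effects) estimator: demean by country first.
p_w = policy - policy.mean(axis=1, keepdims=True)
m_w = mortality - mortality.mean(axis=1, keepdims=True)
beta_fe = ols_slope(p_w.ravel(), m_w.ravel())

print(beta_pooled, beta_fe)   # pooled is biased away from zero; FE is near zero
```

The contrast between the two estimates is exactly the kind of specification sensitivity the abstract warns about.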
3D MODEL ATMOSPHERES FOR EXTREMELY LOW-MASS WHITE DWARFS
Tremblay, P.-E. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD, 21218 (United States); Gianninas, A.; Kilic, M. [Department of Physics and Astronomy, University of Oklahoma, 440 W. Brooks St., Norman, OK, 73019 (United States); Ludwig, H.-G. [Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, D-69117 Heidelberg (Germany); Steffen, M. [Leibniz-Institut für Astrophysik Potsdam, An der Sternwarte 16, D-14482 Potsdam (Germany); Freytag, B. [Department of Physics and Astronomy at Uppsala University, Regementsvägen 1, Box 516, SE-75120 Uppsala (Sweden); Hermes, J. J., E-mail: tremblay@stsci.edu [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom)
2015-08-20
We present an extended grid of mean three-dimensional (3D) spectra for low-mass, pure-hydrogen atmosphere DA white dwarfs (WDs). We use CO5BOLD radiation-hydrodynamics 3D simulations covering T{sub eff} = 6000–11,500 K and log g = 5–6.5 (g in cm s{sup −2}) to derive analytical functions to convert spectroscopically determined 1D temperatures and surface gravities to 3D atmospheric parameters. Along with the previously published 3D models, the 1D to 3D corrections are now available for essentially all known convective DA WDs (i.e., log g = 5–9). For low-mass WDs, the correction in temperature is relatively small (a few percent at the most), but the surface gravities measured from the 3D models are lower by as much as 0.35 dex. We revisit the spectroscopic analysis of the extremely low-mass (ELM) WDs, and demonstrate that the 3D models largely resolve the discrepancies seen in the radius and mass measurements for relatively cool ELM WDs in eclipsing double WD and WD + millisecond pulsar binary systems. We also use the 3D corrections to revise the boundaries of the ZZ Ceti instability strip, including the recently found ELM pulsators.
Seidl, Roman; Barthel, Roland
2016-04-01
Interdisciplinary scientific and societal knowledge plays an increasingly important role in global change research. In the field of water resources, too, interdisciplinarity and cooperation with stakeholders from outside academia have been recognized as important. In this contribution, we revisit an integrated regional modelling system (DANUBIA), which was developed by an interdisciplinary team of researchers and relied on stakeholder participation in the framework of the GLOWA-Danube project from 2001 to 2011 (Mauser and Prasch 2016). As the model was developed before the current increase in literature on participatory modelling and interdisciplinarity, we ask how a socio-hydrology approach would have helped and in what way it would have made the work different. The present contribution firstly presents the interdisciplinary concept of DANUBIA, mainly with focus on the integration of human behaviour in a spatially explicit, process-based numerical modelling system (Barthel, Janisch, Schwarz, Trifkovic, Nickel, Schulz, and Mauser 2008; Barthel, Nickel, Meleg, Trifkovic, and Braun 2005). Secondly, we compare the approaches to interdisciplinarity in GLOWA-Danube with concepts and ideas presented by socio-hydrology. Thirdly, we frame DANUBIA and a review of key literature on socio-hydrology in the context of a survey among hydrologists (N = 184). This discussion is used to highlight gaps and opportunities of the socio-hydrology approach. We show that the interdisciplinary aspect of the project and the participatory process of stakeholder integration in DANUBIA were not entirely successful. However, important insights were gained and important lessons were learnt. Against the background of these experiences we feel that, in its current state, socio-hydrology is still lacking a plan for knowledge integration. Moreover, we consider it necessary that socio-hydrology takes into account the lessons learnt from these earlier examples of knowledge integration.
Lorentz violation naturalness revisited
Belenchia, Alessio; Gambassi, Andrea; Liberati, Stefano [SISSA - International School for Advanced Studies, via Bonomea 265, 34136 Trieste (Italy); INFN, Sezione di Trieste, via Valerio 2, 34127 Trieste (Italy)
2016-06-08
We revisit here the naturalness problem of Lorentz invariance violations on a simple toy model of a scalar field coupled to a fermion field via a Yukawa interaction. We first review some well-known results concerning the low-energy percolation of Lorentz violation from high energies, presenting some details of the analysis not explicitly discussed in the literature and discussing some previously unnoticed subtleties. We then show how a separation between the scale of validity of the effective field theory and that of Lorentz invariance violations can hinder this low-energy percolation. While such a protection mechanism was previously considered in the literature, we provide here a simple illustration of how it works and of its general features. Finally, we consider a case in which dissipation is present, showing that the dissipative behaviour does not generically percolate to lower mass dimension operators, albeit dispersion does. Moreover, we show that a scale separation can protect from unsuppressed low-energy percolation also in this case.
Mass renormalization in sine-Gordon model
Xu Bowei; Zhang Yumei
1991-09-01
With a general gaussian wave functional, we investigate the mass renormalization in the sine-Gordon model. At the phase transition point, the sine-Gordon system tends to a system of massless free bosons which possesses conformal symmetry. (author). 8 refs, 1 fig
Temperature- and density-dependent quark mass model
Since a fair proportion of such dense protostars are likely to be ... the temperature- and density-dependent quark mass (TDDQM) model which we had employed in ... instead of Tc ~170 MeV which is a favoured value for the ud matter [26].
Minihalo model for the low-redshift Lyα absorbers revisited
Lalović A.
2008-01-01
We reconsider the basic properties of the classical minihalo model of Rees and Milgrom in light of new work, both observational (on 'dark galaxies' and masses of baryonic haloes) and theoretical (on the cosmological mass function and the history of star formation). In particular, we show that more detailed models of ionized gas in haloes of dark matter following isothermal and Navarro-Frenk-White density profiles can effectively reproduce particular aspects of the observed column density distribution function in a heterogeneous sample of low- and intermediate-redshift Lyα forest absorption lines.
Standard Model mass spectrum in inflationary universe
Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)
2017-04-11
We work out the Standard Model (SM) mass spectrum during inflation with quantum corrections, and explore its observable consequences in the squeezed limit of non-Gaussianity. Both non-Higgs and Higgs inflation models are studied in detail. We also illustrate how some inflationary loop diagrams can be computed neatly by Wick-rotating the inflation background to Euclidean signature and by dimensional regularization.
Falling chains as variable-mass systems: theoretical model and experimental analysis
De Sousa, Célia A; Costa, Pedro; Gordo, Paulo M
2012-01-01
In this paper, we revisit, theoretically and experimentally, the fall of a folded U-chain and of a pile-chain. The model calculation implies the division of the whole system into two subsystems of variable mass, allowing us to explore the role of tensional contact forces at the boundary of the subsystems. This explains, for instance, why the folded U-chain falls with an acceleration greater than that due to gravity. This result, which matches quite well with the experimental data independently of the type of chain, implies that the falling chain is well described by energy conservation. We verify that these conclusions do not hold for the pile-chain motion. (paper)
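The energy-conservation description of the folded U-chain can be checked numerically. For a uniform, perfectly flexible chain of total length L folded in half, conservation of energy gives the tip speed v²(x) = g·x·(2L − x)/(L − x) after the free end has fallen a distance x; integrating dt = dx/v then shows the chain tip beats free fall, as the abstract states. This is a generic textbook sketch, not the authors' own computation.

```python
import numpy as np

g, L = 9.81, 1.0          # chain of total length L (m), folded in half

def v_energy(x):
    """Tip speed from energy conservation for the folded U-chain,
    after the free end has fallen a distance x (0 <= x < L)."""
    return np.sqrt(g * x * (2 * L - x) / (L - x))

# Fall time of the free end over the full length L, by the midpoint rule;
# the integrand 1/v has integrable singularities at both endpoints.
x = np.linspace(0.0, L, 200_001)
xm = 0.5 * (x[:-1] + x[1:])
t_chain = np.sum(np.diff(x) / v_energy(xm))

t_free = np.sqrt(2 * L / g)   # free fall over the same distance
print(t_chain, t_free)        # the chain tip arrives earlier than free fall
```

At any intermediate x the tip is also moving faster than a freely falling body, i.e. v(x) > √(2gx), which is the "faster than g" effect measured in the paper.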
Mass generation in perturbed massless integrable models
Controzzi, D.; Mussardo, G.
2005-01-01
We extend form-factor perturbation theory to non-integrable deformations of massless integrable models, in order to address the problem of mass generation in such systems. With respect to the standard renormalisation group analysis this approach is more suitable for studying the particle content of the perturbed theory. Analogously to the massive case, interesting information can be obtained already at first order, such as the identification of the operators which create a mass gap and those which induce the confinement of the massless particles in the perturbed theory
From the trees to the forest: a review of radiative neutrino mass models
Cai, Yi; Herrero García, Juan; Schmidt, Michael A.; Vicente, Avelino; Volkas, Raymond R.
2017-12-01
A plausible explanation for the lightness of neutrino masses is that neutrinos are massless at tree level, with their mass (typically Majorana) being generated radiatively at one or more loops. The new couplings, together with the suppression coming from the loop factors, imply that the new degrees of freedom cannot be too heavy (they are typically at the TeV scale). Therefore, in these models there are no large mass hierarchies and they can be tested using different searches, making their detailed phenomenological study very appealing. In particular, the new particles can be searched for at colliders and generically induce signals in lepton-flavor and lepton-number violating processes (in the case of Majorana neutrinos), which are not independent of correctly reproducing the neutrino masses and mixings. The main focus of the review is on Majorana neutrinos. We order the allowed theory space from three different perspectives: (i) using an effective operator approach to lepton number violation, (ii) by the number of loops at which the Weinberg operator is generated, (iii) within a given loop order, by the possible irreducible topologies. We also discuss in more detail some popular radiative models which involve qualitatively different features, revisiting their most important phenomenological implications. Finally, we list some promising avenues to pursue.
Court, Deborah
1999-01-01
Revisits and reviews Imre Lakatos' ideas on "Falsification and the Methodology of Scientific Research Programmes." Suggests that Lakatos' framework offers an insightful way of looking at the relationship between theory and research that is relevant not only for evaluating research programs in theoretical physics, but in the social…
Mass functions from the excursion set model
Hiotelis, Nicos; Del Popolo, Antonino
2017-11-01
Aims: We aim to study the stochastic evolution of the smoothed overdensity δ at scale S of the form δ(S) = ∫_0^S K(S,u) dW(u), where K is a kernel and dW is the usual Wiener process. Methods: For a Gaussian density field, smoothed by the top-hat filter in real space, we used a simple kernel that gives the correct correlation between scales. A Monte Carlo procedure was used to construct random walks and to calculate first-crossing distributions and consequently mass functions for a constant barrier. Results: We show that the evolution considered here improves the agreement with the results of N-body simulations relative to analytical approximations which have been proposed for the same problem by other authors. In fact, we show that an evolution which is fully consistent with the ideas of the excursion set model describes accurately the mass function of dark matter haloes for values of ν ≤ 1 and underestimates the number of larger haloes. Finally, we show that a constant threshold of collapse, lower than is usually used, is able to produce a mass function which approximates the results of N-body simulations for a variety of redshifts and for a wide range of masses. Conclusions: A mass function in good agreement with N-body simulations can be obtained analytically using a lower than usual constant collapse threshold.
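The Monte Carlo construction of random walks and first-crossing fractions can be sketched for the simplest, Markovian special case (sharp-k filtering, kernel K ≡ 1), where δ(S) is a pure Brownian motion in S and the cumulative crossing probability of a constant barrier δ_c is known analytically, P(<S) = erfc(δ_c/√(2S)). The step counts and the barrier value δ_c = 1.686 below are illustrative; the paper's actual kernel induces correlations between scales that this sketch deliberately omits.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n_walks, n_steps = 10_000, 400
S_max, delta_c = 4.0, 1.686          # barrier: spherical-collapse threshold

# Markovian (sharp-k) walks: independent Gaussian increments in S,
# so that Var[delta(S)] = S as required.
dS = S_max / n_steps
steps = rng.normal(0.0, sqrt(dS), size=(n_walks, n_steps))
delta = np.cumsum(steps, axis=1)

# Fraction of walks that have crossed the barrier by S_max; the
# first-crossing *distribution* f(S) would be the histogram of the
# first step index at which each walk exceeds delta_c.
crossed = (delta >= delta_c).any(axis=1)
frac_mc = crossed.mean()
frac_analytic = erfc(delta_c / sqrt(2.0 * S_max))
print(frac_mc, frac_analytic)
```

The small residual difference between the two numbers comes from monitoring the walk only at discrete steps, which misses some barrier crossings between steps; refining dS shrinks it.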
Electric solar wind sail mass budget model
P. Janhunen
2013-02-01
The electric solar wind sail (E-sail) is a new type of propellantless propulsion system for Solar System transportation, which uses the natural solar wind to produce spacecraft propulsion. The E-sail consists of thin centrifugally stretched tethers that are kept charged by an onboard electron gun and, as such, experience Coulomb drag through the high-speed solar wind plasma stream. This paper discusses a mass breakdown and a performance model for an E-sail spacecraft that hosts a mission-specific payload of prescribed mass. In particular, the model is able to estimate the total spacecraft mass and its propulsive acceleration as a function of various design parameters such as the number of tethers and their length. A number of subsystem masses are calculated assuming existing or near-term E-sail technology. In light of the obtained performance estimates, an E-sail represents a promising propulsion system for a variety of transportation needs in the Solar System.
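A toy version of such a mass/thrust budget might look like the following. Every numerical value is an assumption for illustration, not the paper's budget, though the linear scaling of tether mass and thrust with total tether length reflects the structure of this kind of model.

```python
def esail_budget(n_tethers, tether_length_km, payload_kg=100.0):
    """Illustrative E-sail mass/thrust budget; all constants are assumed."""
    tether_kg_per_km = 0.011   # assumed linear density of a thin metal tether
    thrust_N_per_km = 5e-4     # assumed Coulomb-drag thrust per tether-km at 1 au
    bus_kg = 50.0              # assumed bus, electron gun, reels, margins

    tether_kg = n_tethers * tether_length_km * tether_kg_per_km
    total_kg = payload_kg + bus_kg + tether_kg
    thrust_N = n_tethers * tether_length_km * thrust_N_per_km
    accel = thrust_N / total_kg          # characteristic acceleration, m/s^2
    return total_kg, thrust_N, accel

# e.g. 100 tethers of 20 km each
print(esail_budget(n_tethers=100, tether_length_km=20.0))
```

Because tether mass is a small fraction of the total, adding tethers in this regime raises thrust almost linearly while the total mass grows slowly, which is the qualitative reason the model favours many long tethers.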
Hertzum, Morten
1994-01-01
: (1) the text model, also known as the inverted file approach, (2) the hypertext model, and (3) the relational model. In the design of the relational model changeability was a key consideration, but more often it is sacrificed to save development resources or improve performance. As it is not uncommon...... to see successful TSARS exist for 15-20 years and be subject to manifold changes during their lifetime, it is the relational model which is considered for use in the unified toolkit. It seems as if the relational model can be enhanced to incorporate the text model and the hypertext model...
Models of neutrino mass and mixing
Ma, Ernest
2000-01-01
There are two basic theoretical approaches to obtaining neutrino mass and mixing. In the minimalist approach, one adds just enough new stuff to the Minimal Standard Model to get m_ν ≠ 0 and U_αi ≠ 1. In the holistic approach, one uses a general framework or principle to enlarge the Minimal Standard Model such that, among other things, m_ν ≠ 0 and U_αi ≠ 1. In both cases, there are important side effects besides neutrino oscillations. I discuss a number of examples, including the possibility of leptogenesis from R-parity nonconservation in supersymmetry.
Salaris, Maurizio; Serenelli, Aldo; Weiss, Achim; Miller Bertolami, Marcelo
2009-01-01
Using the most recent results on white dwarfs (WDs) in ten open clusters, we revisit semiempirical estimates of the initial-final mass relation (IFMR) in star clusters, with emphasis on the use of stellar evolution models. We discuss the influence of these models on each step of the derivation. One aim of our work is to use consistent sets of calculations for both the isochrones and the WD cooling tracks. The second is to derive the range of systematic errors arising from stellar evolution theory. This is achieved by using different sources for the stellar models and by varying physical assumptions and input data. We find that systematic errors, including the determination of the cluster age, dominate the initial mass values, while observational uncertainties primarily affect the final mass. Having determined the systematic errors, the initial-final mass relation finally allows us to draw conclusions about the physics of the stellar models, in particular about convective overshooting.
Basic, Ivan; Nadramija, Damir; Flajslik, Mario; Amic, Dragan; Lucic, Bono
2007-01-01
Several quantitative structure-activity studies of this data set of 107 HEPT derivatives have been performed since 1997, using the same set of molecules with (more or less) different classes of molecular descriptors. Multivariate Regression (MR) and Artificial Neural Network (ANN) models were developed, and in each study the authors concluded that ANN models are superior to MR ones. We re-calculated multivariate regression models for this set of molecules using the same set of descriptors and compared our results with the previous ones. The two main reasons for overestimating the quality of the ANN models relative to MR models in previous studies are: (1) incorrect calculation of the leave-one-out (LOO) cross-validated (CV) correlation coefficient for MR models in Luco et al., J. Chem. Inf. Comput. Sci. 37, 392-401 (1997), and (2) incorrect estimation/interpretation of the LOO cross-validated and predictive performance and power of ANN models. A more precise and fairer comparison of fit and LOO CV statistical parameters shows that MR models are more stable. In addition, MR models are much simpler than ANN ones. To truly test the predictive performance of both classes of models we need more HEPT derivatives, because all ANN models that presented results for an external set of molecules used experimental values in optimizing the modeling procedure and model parameters.
Borza, Liana Rada; Gavrilovici, Cristina; Stockman, René
2015-01-01
The present paper revisits the ethical models of the patient-physician relationship from the perspective of patient autonomy and values. It seems that the four traditional models of the physician-patient relationship proposed by Emanuel & Emanuel in 1992 closely link patient values and patient autonomy. On the other hand, their reinterpretation provided by Agarwal & Murinson twenty years later emphasizes the independent expression of values and autonomy in individual patients. Additionally, patient education has been assumed to join patient values and patient autonomy. Moreover, several authors have noted that, over the past few decades, patient autonomy has gradually replaced the paternalistic approach based on the premise that the physician knows what is best for the patient. Neither the paternalistic model of the physician-patient relationship, nor the informative model is considered to be satisfactory, as the paternalistic model excludes patient values from decision making, while the informative model excludes physician values from decision making. However, the deliberative model of patient-physician interaction represents an adequate alternative to the two unsatisfactory approaches by promoting shared decision making between the physician and the patient. It has also been suggested that the deliberative model would be ideal for exercising patient autonomy in chronic care and that the ethical role of patient education would be to make the deliberative model applicable to chronic care. In this regard, studies have indicated that the use of decision support interventions might increase the deliberative capacity of chronic patients.
Holt, Robin; Cornelissen, Joep
2014-01-01
We critique and extend theory on organizational sensemaking around three themes. First, we investigate sense arising non-productively and so beyond any instrumental relationship with things; second, we consider how sense is experienced through mood as well as our cognitive skills of manipulation ...... research by revisiting Weick’s seminal reading of Norman Maclean’s book surrounding the tragic events of a 1949 forest fire at Mann Gulch, USA....
Mass and power modeling of communication satellites
Price, Kent M.; Pidgeon, David; Tsao, Alex
1991-01-01
Analytic estimating relationships for the mass and power requirements of major satellite subsystems are described. The model for each subsystem is keyed to the performance drivers and system requirements that influence its selection and use. Guidelines are also given for choosing among alternative technologies, accounting for other significant variables such as cost, risk, schedule, operations, heritage, and life requirements. These models are intended for first-order systems analyses, where resources do not warrant detailed development of a communications system scenario. Given this ground rule, the models are simplified, 'smoothed' representations of reality. The user is therefore cautioned that cost, schedule, and risk may be significantly impacted where interpolations differ sufficiently from existing hardware as to warrant development of new devices.
Introduction to models of neutrino masses and mixings
Joshipura, Anjan S.
2004-01-01
This review contains an introduction to models of neutrino masses for non-experts. Topics discussed are: i) different types of neutrino masses; ii) the structure of neutrino masses and mixing needed to understand neutrino oscillation results; iii) mechanisms to generate neutrino masses in gauge theories; and iv) a discussion of generic scenarios proposed to realize the required neutrino mass structures. (author)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions for temperature/concentration in, and heat/mass transfer through the boundaries of, regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
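For the simplest geometry, a 1-D slab with a fixed boundary concentration, the normal-accuracy combination described above can be sketched with the leading term of each series and a switchover time. The switchover value used below is an illustrative choice, not the optimized value from the paper:

```python
import math

def slab_uptake(td, td0=0.1):
    """Fractional mass uptake of a 1-D slab (Dirichlet boundary) at
    dimensionless time td = D*t/l^2, with l the half-thickness.
    Early times: leading term of the error-function (sqrt) series.
    Late times:  leading term of the exponential series.
    td0 is an assumed switchover time, not the paper's optimized one."""
    if td <= td0:
        return 2.0 * math.sqrt(td / math.pi)
    return 1.0 - (8.0 / math.pi**2) * math.exp(-math.pi**2 * td / 4.0)
```

The leading terms already join within about 1% at td0 = 0.1; the paper's two-term variants and optimized switchover times push the mismatch far lower.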
Pekka Sulkunen
2012-07-01
Representative democracy has been based on the idea that interest groups form parliaments through competitive elections, and legislate in favour of their supporters. Declining electoral participation, the rise of populist right-wing parties, contingent coalitions, personalized electoral success and scandal-driven politics indicate a crisis in representative democracy. Mass society theories after the Second World War predicted a decline of democracy on the basis of the homogenisation of mass consumption societies. The threat was seen to involve totalitarian rule, combined with bureaucracy serving the interests of elites. This paper examines the underlying presuppositions of mass society theory, and argues that the homogeneity argument is insufficient to fit the realities. Following David Riesman, it is argued that the other-directed character grows from unstable interest group identities, but its determinant is not sameness but agency and therefore difference. To have agency is to orient oneself to others as a self, as a unique, separate and autonomous subject. This is vindicated by trends in public administration since the 1980s, which stress citizens' self-control, autonomy and partnership rather than conformity. Political disputes arise around contradictions between difference and autonomy in societies where agency is a principle of justification. Universal autonomy requires homogeneity, but agency stresses difference and uniqueness.
Revisiting simplified dark matter models in terms of AMS-02 and Fermi-LAT
Li, Tong
2018-01-01
We perform an analysis of simplified dark matter models in the light of cosmic ray observables from AMS-02 and Fermi-LAT. We assume a fermion, scalar or vector dark matter particle with a leptophobic spin-0 mediator that couples only to Standard Model quarks and dark matter via scalar and/or pseudo-scalar bilinears. The propagation and injection parameters of cosmic rays are determined by the observed fluxes of nuclei from AMS-02. We find that the AMS-02 observations are consistent with the dark matter framework within the uncertainties. The AMS-02 antiproton data prefer a 30 (50) GeV - 5 TeV dark matter mass and require an effective annihilation cross section in the region of 4 × 10^-27 (7 × 10^-27) - 4 × 10^-24 cm^3/s for the simplified fermion (scalar and vector) dark matter models. Cross sections below 2 × 10^-26 cm^3/s can evade the constraint from Fermi-LAT dwarf galaxies for a dark matter mass of about 100 GeV.
Neutrino mass models and CP violation
Joshipura, Anjan S.
2011-01-01
Theoretical ideas on the origin of (a) neutrino masses, (b) neutrino mass hierarchies and (c) leptonic mixing angles are reviewed. Topics discussed include (1) symmetries of the neutrino mass matrix and their origin, (2) ways to understand the observed patterns of leptonic mixing angles and (3) a unified description of neutrino masses and mixing angles in grand unified theories.
Elliott, E. A.; Rodriguez, A. B.; McKee, B. A.
2017-12-01
Traditional models of estuarine systems show deposition occurs primarily within the central basin. There, accommodation space is high within the deep central valley, which is below regional wave base and where current energy is presumed to reach a relative minimum, promoting direct deposition of cohesive sediment and minimizing erosion. However, these models often reflect long-term (decadal-millennial) timescales, where accumulation rates are in relative equilibrium with the rate of relative sea-level rise, and lack the resolution to capture shorter term changes in sediment deposition and erosion within the central estuary. This work presents a conceptual model for estuarine sedimentation during non-equilibrium conditions, where high-energy inputs to the system reach a relative maximum in the central basin, resulting in temporary deposition and/or remobilization over sub-annual to annual timescales. As an example, we present a case study of Core Sound, NC, a lagoonal estuarine system where the regional base-level has been reached, and sediment deposition, resuspension and bypassing is largely a result of non-equilibrium, high-energy events. Utilizing a 465 cm-long sediment core from a mini-basin located between Core Sound and the continental shelf, a 40-year sub-annual chronology was developed for the system, with sediment accumulation rates (SAR) interpolated to a monthly basis over the 40-year record. This study links erosional processes in the estuary directly with sediment flux to the continental shelf, taking advantage of the highly efficient sediment trapping capability of the mini-basin. The SAR record indicates high variation in the estuarine sediment supply, with peaks in the SAR record at a recurrence interval of 1 year (+/- 0.25). This record has been compared to historical storm influence for the area. Through this multi-decadal record, sediment flushing events occur at a much more frequent interval than previously thought (i.e. annual rather than
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between the production of new biomass and the maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates, both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
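The OGM's growth equation, dm/dt = a m^(3/4) [1 - (m/M)^(1/4)], can be integrated numerically to reproduce the sigmoidal growth curve; the parameter values in the example are arbitrary illustrative choices, not fits to any species:

```python
import numpy as np

def ogm_growth(m0, M, a, t_end, dt=0.01):
    """Euler integration of the West et al. ontogenetic growth model
    dm/dt = a * m^(3/4) * [1 - (m/M)^(1/4)],
    where M is the asymptotic (adult) mass and a sets the growth rate."""
    ts = np.arange(0.0, t_end, dt)
    m = np.empty_like(ts)
    m[0] = m0
    for i in range(1, len(ts)):
        m[i] = m[i-1] + dt * a * m[i-1]**0.75 * (1.0 - (m[i-1] / M)**0.25)
    return ts, m
```

Starting from any m0 < M, the mass grows monotonically and saturates at M, which is the sigmoidal behaviour the abstract refers to.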
Neutrino dark energy. Revisiting the stability issue
Eggers Bjaelde, O.; Hannestad, S. [Aarhus Univ. (Denmark). Dept. of Physics and Astronomy; Brookfield, A.W. [Sheffield Univ. (United Kingdom). Dept. of Applied Mathematics and Dept. of Physics, Astro-Particle Theory and Cosmology Group; Van de Bruck, C. [Sheffield Univ. (United Kingdom). Dept. of Applied Mathematics, Astro-Particle Theory and Cosmology Group; Mota, D.F. [Heidelberg Univ. (Germany). Inst. fuer Theoretische Physik; Institute of Theoretical Astrophysics, Oslo (Norway); Schrempp, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Tocchini-Valentini, D. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Physics and Astronomy
2007-05-15
A coupling between a light scalar field and neutrinos has been widely discussed as a mechanism for linking (time varying) neutrino masses and the present energy density and equation of state of dark energy. However, it has been pointed out that the viability of this scenario in the non-relativistic neutrino regime is threatened by the strong growth of hydrodynamic perturbations associated with a negative adiabatic sound speed squared. In this paper we revisit the stability issue in the framework of linear perturbation theory in a model independent way. The criterion for the stability of a model is translated into a constraint on the scalar-neutrino coupling, which depends on the ratio of the energy densities in neutrinos and cold dark matter. We illustrate our results by providing meaningful examples both for stable and unstable models. (orig.)
Bottomonium spectrum revisited
Segovia, Jorge; Entem, David R.; Fernández, Francisco
2016-01-01
We revisit the bottomonium spectrum motivated by recent exciting experimental progress in the observation of new bottomonium states, both conventional and unconventional. Our framework is a nonrelativistic constituent quark model which has been applied to a wide range of hadronic observables from the light to the heavy quark sector, and thus the model parameters are completely constrained. Beyond the spectrum, we provide a large number of electromagnetic, strong and hadronic decays in order to discuss the quark content of the bottomonium states and give more insight into the best way to determine their properties experimentally.
Meille, Christophe; Barbolosi, Dominique; Ciccolini, Joseph; Freyer, Gilles; Iliadis, Athanassios
2016-08-01
Controlling the effects of drugs administered in combination is particularly challenging with a densified regimen because of life-threatening hematological toxicities. We have developed a mathematical model to optimize drug dosing regimens and to redesign the dose intensification-dose escalation process, using densified cycles of combined anticancer drugs. A generic mathematical model was developed to describe the main components of the real process, including pharmacokinetics, safety and efficacy pharmacodynamics, and non-hematological toxicity risk. This model allowed for computing the distribution of the total drug amount of each drug in combination, for each escalation dose level, in order to minimize the average tumor mass for each cycle. This was achieved while complying with absolute neutrophil count clinical constraints and without exceeding a fixed risk of non-hematological dose-limiting toxicity. The innovative part of this work was the development of densifying and intensifying designs in a unified procedure. This model enabled us to determine the appropriate regimen in a pilot phase I/II study of a 2-week-cycle treatment with the docetaxel plus epirubicin doublet in patients with metastatic breast cancer, and to propose a new dose-ranging process. In addition to the present application, this method can be further used to achieve optimization of any combination therapy, thus improving the efficacy versus toxicity balance of such a regimen.
Coastal Water Quality Modeling in Tidal Lake: Revisited with Groundwater Intrusion
Kim, C.
2016-12-01
A new method for predicting the temporal and spatial variation of water quality, accounting for a groundwater effect, has been proposed and applied to a water body partially connected to macro-tidal coastal waters in Korea. The method consists of direct measurement of environmental parameters, and it indirectly incorporates a nutrient budget analysis to estimate the submarine groundwater fluxes. Three-dimensional numerical modeling of water quality has been used with the directly collected data and the indirectly estimated groundwater fluxes. The applied area is Saemangeum tidal lake, which is enclosed by a 33 km-long sea dyke with tidal openings at two water gates. Many investigations of groundwater impact reveal that 10-50% of nutrient loading in coastal waters comes from submarine groundwater, particularly in macro-tidal flats, as on the west coast of Korea. Long-term monitoring of coastal water quality signals the possibility of groundwater influence on salinity reversal and on the excess mass outbalancing the normal budget in Saemangeum tidal lake. In the present study, we analyze the observed data to examine the influence of submarine groundwater, and then a box model is demonstrated for quantifying the influx and efflux. A three-dimensional numerical model has been applied to reproduce the process of groundwater dispersal and its effect on the water quality of Saemangeum tidal lake. The results show that groundwater influx during the summer monsoon contributes significantly to water quality in the tidal lake, 20% more than during the dry season.
Revisiting large break LOCA with the CATHARE-3 three-field model
Valette, Michel; Pouvreau, Jerome; Bestion, Dominique; Emonot, Philippe
2009-01-01
Some aspects of large break LOCA analysis (steam binding, oscillatory reflooding, top-down reflooding) are expected to be improved in advanced system codes by a more detailed description of flows through the addition of a third field for droplets. The future system code CATHARE-3 is under development by CEA and supported by EDF, AREVA-NP and IRSN in the frame of the NEPTUNE project, and this paper shows some preliminary results obtained in reflooding conditions. A three-field model has been implemented, including vapor, continuous liquid and liquid droplet fields. This model features a set of nine equations of mass, momentum and energy balance. Such a model allows a more detailed description of droplet transport from the core to the steam generator, while countercurrent flow of continuous liquid is allowed. Code assessment against reflooding experiments in an isolated rod bundle mockup is presented, using a 1D meshing of the bundle. Comparisons of CATHARE-3 simulations against data series from the PERICLES and RBHT full scale experiments show satisfactory results. Quench front motions are well predicted, as well as clad temperatures in most of the tested runs. The BETHSY 6.7C Integral Effect Test simulating the gravity-driven reflooding process in a scaled PWR circuit is then compared to a CATHARE-3 simulation. The three-field model is applied in several parts of the circuit: core, upper plenum, hot leg and steam generator, represented by either 1D or 3D modules, while the classic 6-equation model is used in the other parts of the loop. A short analysis of the results is presented. (author)
Revisiting large break LOCA with the CATHARE-3 three-field model
Valette, Michel; Pouvreau, Jérôme; Bestion, Dominique; Emonot, Philippe
2011-01-01
Highlights: ► CATHARE 3 enables a three-field analysis of a LB LOCA. ► Reflooding experiments in isolated rod bundles are satisfactorily predicted. ► A BETHSY integral test simulation supports the CATHARE 3 3-field assessment. - Abstract: Some aspects of large break LOCA analysis (steam binding, oscillatory reflooding, top-down reflooding) are expected to be improved in advanced system codes by a more detailed description of flows through the addition of a third field for droplets. The future system code CATHARE-3 is under development by CEA and supported by EDF, AREVA-NP and IRSN in the frame of the NEPTUNE project, and this paper shows some preliminary results obtained in reflooding conditions. A three-field model has been implemented, including vapor, continuous liquid and liquid droplet fields. This model features a set of nine equations of mass, momentum and energy balance. Such a model allows a more detailed description of droplet transport from the core to the steam generator, while countercurrent flow of continuous liquid is allowed. Code assessment against reflooding experiments in a rod bundle is presented, using a 1D meshing of the bundle. Comparisons of CATHARE-3 simulations against data series from the PERICLES and RBHT full scale experiments show satisfactory results. Quench front motions are well predicted, as well as clad temperatures in most of the tested runs. The BETHSY 6.7C Integral Effect Test simulating the gravity-driven reflooding process in a scaled PWR circuit is then compared to a CATHARE-3 simulation. The three-field model is applied in several parts of the circuit: core, upper plenum, hot leg and steam generator, represented by either 1D or 3D modules, while the classic six-equation model is used in the other parts of the loop. An analysis of these first results is presented and future work is defined for improving the droplet behavior simulation in both the upper plenum and the hot legs.
Limit on mass differences in the Weinberg model
Veltman, M.J.G.
1977-01-01
Within the Weinberg model mass differences between members of a multiplet generate further mass differences between the neutral and charged vector bosons. The experimental situation on the Weinberg model leads to an upper limit of about 800 GeV on mass differences within a multiplet. No limit on the
Abraham Wall-Medrano
2016-03-01
Background: A body mass index (BMI) ≥30 kg/m² and a waist circumference (WC) ≥80 cm in women (WCF) or ≥90 cm in men (WCM) are reference cardiometabolic risk markers (CMM) for Mexican adults. However, their reliability to predict other CMM (index tests) in young Mexicans has not been studied in depth. Methods: A cross-sectional descriptive study evaluating several anthropometric, physiological and biochemical CMM from 295 young Mexicans was performed. Sensitivity (Se), specificity (Sp) and Youden's index (J) of reference BMI/WC cutoffs toward other CMM (n = 14) were obtained and their most reliable cutoffs were further calculated at Jmax. Results: Prevalence, incidence and magnitude of most CMM increased along the BMI range (p < 0.01). BMI explained 81% of WC's variance [Se (97%), Sp (71%), J (68%), Jmax (86%), BMI = 30 kg/m²] and 4-50% of other CMM. The five most prevalent (≥71%) CMM in obese subjects were high WC, low HDL-C, and three insulin-related CMM [fasting insulin, HOMA-IR, and QUICKI]. For a BMI = 30 kg/m², J ranged from 16% (HDL-C/LDL-C) to 68% (WC), being moderately reliable (Jmax = 61-67) to predict high uric acid (UA), metabolic syndrome (MetS) and the hypertriglyceridemic-waist phenotype (HTGW). Corrected WCM/WCF were moderately to highly reliable (Jmax = 66-90) to predict HTGW, MetS, fasting glucose and UA. Most CMM were moderately to highly predicted at 27 ± 3 kg/m² (CI 95%, 25-28), 85 ± 5 cm (CI 95%, 82-88) and 81 ± 6 cm (CI 95%, 75-87), for BMI, WCM and WCF, respectively. Conclusion: BMI and WC are good predictors of several CMM in the studied population, although at different cutoffs than current reference values.
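The cutoff selection at Jmax described above can be illustrated with a small routine that scans candidate cutoffs for the maximum of Youden's J = Se + Sp - 1; the marker values and cutoffs in the test are synthetic, not data from the study:

```python
import numpy as np

def best_cutoff(values, condition, candidates):
    """Return (cutoff, J, Se, Sp) maximizing Youden's J = Se + Sp - 1
    for a continuous marker (e.g. BMI) against a binary condition
    (e.g. presence of another cardiometabolic risk marker)."""
    best = None
    for c in candidates:
        pred = values >= c                 # test positive at this cutoff
        se = np.mean(pred[condition])      # sensitivity: positives detected
        sp = np.mean(~pred[~condition])    # specificity: negatives rejected
        j = se + sp - 1.0
        if best is None or j > best[1]:
            best = (c, j, se, sp)
    return best
```

Scanning J over a grid of cutoffs is exactly how a "most reliable cutoff at Jmax" is located in practice.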
Testing the predictive power of nuclear mass models
Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.
2008-01-01
A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections, and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool.
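The spirit of such extrapolation tests can be sketched generically: fit a model that is linear in its parameters (as the liquid drop terms are) by least squares on one region of nuclei, then compare the rms deviation on fitted versus held-out ("predicted") nuclei. The linear setup below is a generic illustration, not any of the three models analyzed in the paper:

```python
import numpy as np

def rms_fit_vs_prediction(X_fit, y_fit, X_pred, y_pred):
    """Fit coefficients of a parameter-linear mass formula on the 'fitted'
    nuclei (rows of X are the formula's terms evaluated per nucleus),
    then return (rms on fitted set, rms on held-out 'predicted' set)."""
    coef, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    rms = lambda X, y: float(np.sqrt(np.mean((X @ coef - y) ** 2)))
    return rms(X_fit, y_fit), rms(X_pred, y_pred)
```

Comparable rms values on the two sets indicate good extrapolation; a much larger held-out rms, as the paper finds near shell-stabilized regions, signals the model breaking down.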
Third generation masses from a two Higgs model fixed point
Froggatt, C.D.; Knowles, I.G.; Moorhouse, R.G.
1990-01-01
The large mass ratio between the top and bottom quarks may be attributed to a hierarchy in the vacuum expectation values of scalar doublets. We consider an effective renormalisation group fixed point determination of the quartic scalar and third generation Yukawa couplings in such a two doublet model. This predicts a mass m_t = 220 GeV and a mass ratio m_b/m_τ = 2.6. In its simplest form the model also predicts the scalar masses, including a light scalar with a mass of order the b quark mass. Experimental implications are discussed. (orig.)
Neutrino assisted GUT baryogenesis revisited
Huang, Wei-Chih; Päs, Heinrich; Zeißner, Sinan
2018-03-01
Many grand unified theory (GUT) models conserve the difference between baryon and lepton number, B-L. These models can create baryon and lepton asymmetries from heavy Higgs or gauge boson decays with B+L ≠ 0 but with B-L = 0. Since the sphaleron processes violate B+L, such GUT-generated asymmetries will finally be washed out completely, making GUT baryogenesis scenarios incapable of reproducing the observed baryon asymmetry of the Universe. In this work, we revisit the idea to revive GUT baryogenesis, proposed by Fukugita and Yanagida, in which right-handed neutrinos erase the lepton asymmetry before the sphaleron processes can significantly wash out the original B+L asymmetry, and in this way one can prevent a total washout of the initial baryon asymmetry. By solving the Boltzmann equations numerically for baryon and lepton asymmetries in a simplified 1+1 flavor scenario, we can confirm the results of the original work. We further generalize the analysis to a more realistic scenario of three active and two right-handed neutrinos to highlight flavor effects of the right-handed neutrinos. Large regions in the parameter space of the Yukawa coupling and the right-handed neutrino mass featuring successful baryogenesis are identified.
The Halo Occupation Distribution of obscured quasars: revisiting the unification model
Mitra, Kaustav; Chatterjee, Suchetana; DiPompeo, Michael A.; Myers, Adam D.; Zheng, Zheng
2018-06-01
We model the projected angular two-point correlation function (2PCF) of obscured and unobscured quasars selected using the Wide-field Infrared Survey Explorer (WISE), at a median redshift of z ∼ 1, using a five-parameter Halo Occupation Distribution (HOD) parametrization derived from a cosmological hydrodynamic simulation by Chatterjee et al. The HOD parametrization was previously used to model the 2PCF of optically selected quasars and X-ray bright active galactic nuclei (AGNs) at z ∼ 1. The current work shows that a single HOD parametrization can be used to model the population of different kinds of AGN in dark matter haloes, suggesting the universality of the relationship between AGN and their host dark matter haloes. Our results show that the median halo mass of central quasar hosts increases from optically selected (4.1^{+0.3}_{-0.4} × 10^{12} h^{-1} M_⊙) and infrared (IR) bright unobscured populations (6.3^{+6.2}_{-2.3} × 10^{12} h^{-1} M_⊙) to obscured quasars (10.0^{+2.6}_{-3.7} × 10^{12} h^{-1} M_⊙), signifying an increase in the degree of clustering. The projected satellite fractions also increase from optically bright to obscured quasars and tend to disfavour a simple 'orientation only' theory of active galactic nuclei unification. Our results also show that future measurements of the small-scale clustering of obscured quasars can constrain current theories of galaxy evolution in which quasars evolve from an IR-bright obscured phase to the optically bright unobscured phase.
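For reference, a commonly used five-parameter HOD (the Zheng et al. 2005 form) can be written compactly; this is an illustration of the general idea, not necessarily the exact parametrization used in this work, and all parameter values below are placeholders:

```python
from math import erf, log10

def hod_mean_occupation(M, logMmin=12.6, sigma=0.5,
                        logM0=12.0, logM1=13.5, alpha=1.0):
    """Mean central and satellite occupation of a halo of mass M
    (in h^-1 Msun) in the standard five-parameter HOD form:
      <N_cen> = (1/2) [1 + erf((log M - log Mmin) / sigma)]
      <N_sat> = <N_cen> ((M - M0)/M1)^alpha   for M > M0.
    All parameter values here are placeholder illustrations."""
    n_cen = 0.5 * (1.0 + erf((log10(M) - logMmin) / sigma))
    M0, M1 = 10.0**logM0, 10.0**logM1
    n_sat = n_cen * ((M - M0) / M1)**alpha if M > M0 else 0.0
    return n_cen, n_sat
```

In a clustering fit, these occupation functions are convolved with the halo mass function and halo bias to predict the 2PCF, and quantities like the median central host mass and satellite fraction quoted above follow from the best-fit parameters.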
Zhang, Baocheng; Yuan, Yunbin
2017-04-01
A synthesis of two prevailing Global Navigation Satellite System (GNSS) positioning technologies, namely the precise point positioning (PPP) and the network-based real-time kinematic (NRTK), results in the emergence of the PPP-RTK. This new concept preferably integrates the typical advantage of PPP (e.g. flexibility) and that of NRTK (e.g. efficiency), such that it enables single-receiver users to achieve high positioning accuracy with reasonable timeliness through integer ambiguity resolution (IAR). The realization of PPP-RTK needs to accomplish two sequential tasks. The first task is to determine a class of corrections including, necessarily, the satellite orbits, the satellite clocks and the satellite phase (and code, in case of more than two frequencies) biases at the network level. With these corrections, the second task, then, is capable of solving for the ambiguity-fixed, absolute position(s) at the user level. In this contribution, we revisit three variants (geometry-free, geometry-fixed, and geometry- and satellite-clock-fixed) of undifferenced, uncombined PPP-RTK network model and discuss their implications for practical use. We carry out a case study using multi-day, dual-frequency GPS data from the Crustal Movement Observation Network of China (CMONOC), aiming to assess the (static and kinematic) positioning performance (in terms of time-to-first-fix and accuracy) that is achievable by PPP-RTK users across China.
Modelling baryonic effects on galaxy cluster mass profiles
Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke
2018-06-01
Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.
Bogolubov, N.N. Jr.; Prykarpatsky, A.K.; Ufuk Taneri
2008-07-01
The main fundamental principles characterizing the vacuum field structure are formulated, and the modelling of the related vacuum medium and charged point-particle dynamics by means of devised field-theoretic tools is analyzed. The Maxwell electrodynamic theory is revisited and newly derived from the suggested vacuum field structure principles, and the classical special relativity relationship between the energy and the mass of a point particle is revisited and newly obtained. The Lorentz force expression with respect to arbitrary non-inertial reference frames is revisited and discussed in detail, and some new interpretations of the relations between special relativity theory and quantum mechanics are presented. The famous quantum-mechanical Schroedinger-type equations for a relativistic point particle in external potential and magnetic fields are obtained within the quasiclassical approximation, as the Planck constant (h/2π) → 0 and the light velocity c → ∞. (author)
The 1992 FRDM mass model and unstable nuclei
Moeller, P.
1994-01-01
We discuss the reliability of a recent global nuclear-structure calculation in regions far from β stability. We focus on the results for nuclear masses, but also mention other results obtained in the nuclear-structure calculation, for example ground-state spins. We discuss what should be some minimal requirements of a nuclear mass model and study how the macroscopic-microscopic method and other nuclear mass models fulfil such basic requirements. We study in particular the reliability of nuclear mass models in regions of nuclei that were not considered in the determination of the model parameters.
Peculiarities of constructing the models of mass religious communication
Petrushkevych Maria Stefanivna
2017-07-01
Religious communication is a full-fledged, effective part of the mass information field. It uses new media to fulfil its needs, and it also functions in the field of mass culture and the information society. To describe the features of mass religious communication, the author of this article constructs a graphic model of its functioning.
eWOM, Revisit Intention, Destination Trust and Gender
Abubakar, Abubakar Mohammed; Ilkan, Mustafa; Al-Tal, Raad Meshall; Eluwole, Kayode
2017-01-01
This article investigates the impact of eWOM on intention to revisit and destination trust, and the moderating role of gender, in the medical tourism industry. Results from structural equation modeling (n=240) suggest the following: (1) that eWOM influences intention to revisit and destination trust; (2) that destination trust influences intention to revisit; (3) that the impact of eWOM on intention to revisit is about 1.3 times higher in men; (4) that the impact of eWOM on destination trust is ab...
Fermion masses in potential models of chiral symmetry breaking
Jaroszewicz, T.
1983-01-01
A class of models of spontaneous chiral symmetry breaking is considered, based on a Hamiltonian with an instantaneous potential interaction of fermions. An explicit mass term m Ψ̄Ψ is included and the physical meaning of the mass parameter is discussed. It is shown that if the Hamiltonian is normal-ordered (i.e. the self-energy is omitted), then the mass m introduced in the Hamiltonian is not the current mass appearing in the current-algebra relations. (author)
Ups and downs of Viagra: revisiting ototoxicity in the mouse model.
Au, Adrian; Stuyt, John Gerka; Chen, Daniel; Alagramam, Kumar
2013-01-01
Sildenafil citrate (Viagra), a phosphodiesterase 5 inhibitor (PDE5i), is a commonly prescribed drug for erectile dysfunction. Since the introduction of Viagra in 1997, several case reports have linked Viagra to sudden sensorineural hearing loss. However, these studies are not well controlled for confounding factors, such as age and noise-induced hearing loss and none of these reports are based on prospective double-blind studies. Further, animal studies report contradictory data. For example, one study (2008) reported hearing loss in rats after long-term and high-dose exposure to sildenafil citrate. The other study (2012) showed vardenafil, another formulation of PDE5i, to be protective against noise-induced hearing loss in mice and rats. Whether or not clinically relevant doses of sildenafil citrate cause hearing loss in normal subjects (animals or humans) is controversial. One possibility is that PDE5i exacerbates age-related susceptibility to hearing loss in adults. Therefore, we tested sildenafil citrate in C57BL/6J, a strain of mice that displays increased susceptibility to age-related hearing loss, and compared the results to those obtained from the FVB/N, a strain of mice with no predisposition to hearing loss. Six-week-old mice were injected with the maximum tolerated dose of sildenafil citrate (10 mg/kg/day) or saline for 30 days. Auditory brainstem responses (ABRs) were recorded pre- and post injection time points to assess hearing loss. Entry of sildenafil citrate in the mouse cochlea was confirmed by qRT-PCR analysis of a downstream target of the cGMP-PKG cascade. ABR data indicated no statistically significant difference in hearing between treated and untreated mice in both backgrounds. Results show that the maximum tolerated dose of sildenafil citrate administered daily for 4 weeks does not affect hearing in the mouse. Our study gives no indication that Viagra will negatively impact hearing and it emphasizes the need to revisit the issue of Viagra
Brome McCreary
2009-12-01
Amphibian declines have been reported in mountainous areas around the western USA. Few data quantify the extent of population losses in the Pacific Northwest, a region in which amphibian declines have received much attention. From 2001-2004, we resurveyed historical breeding sites of two species of conservation concern, the Western Toad (Bufo [=Anaxyrus] boreas) and the Cascades Frog (Rana cascadae). We detected B. boreas breeding at 75.9% and R. cascadae breeding at 66.6% of historical sites. When we analyzed the data using occupancy models that accounted for detection probability, we estimated the current use of historically occupied sites in our study area was 84.9% (SE = 4.9) for B. boreas and 72.4% (SE = 6.6) for R. cascadae. Our ability to detect B. boreas at sites where they were present was lower in the first year of surveys (a low snowpack year) and higher at sites with introduced fish. Our ability to detect R. cascadae was lower at sites with fish. The probability that B. boreas still uses a historical site for breeding was related to the easting of the site (+) and the age of the record (−). None of the variables we analyzed was strongly related to R. cascadae occupancy. Both species had increased odds of occupancy at higher latitudes, but model support for this variable was modest. Our analysis suggests that while local losses are possible, these two amphibians have not experienced recent, broad population losses in the Oregon Cascades. Historical site revisitation studies such as ours cannot distinguish between population losses and site switching, and do not account for colonization of new habitats, so our analysis may overestimate declines in occupancy within our study area.
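The detection-corrected occupancy estimates above come from single-season occupancy modelling. A minimal sketch of that idea follows, assuming invented detection histories and a simple grid-search maximum-likelihood fit (the study itself used dedicated occupancy-modelling software, not this toy):

```python
import math

# Single-season occupancy model (MacKenzie-style): jointly estimate the
# probability a site is occupied (psi) and the per-survey detection
# probability (p), so that all-zero histories at occupied-but-missed
# sites do not bias occupancy downward. Data and fit are illustrative.

def log_lik(psi, p, histories):
    ll = 0.0
    for h in histories:
        k, d = len(h), sum(h)
        if d > 0:  # at least one detection: site certainly occupied
            ll += math.log(psi) + d * math.log(p) + (k - d) * math.log(1 - p)
        else:      # never detected: occupied-but-missed OR truly empty
            ll += math.log(psi * (1 - p) ** k + (1 - psi))
    return ll

def fit(histories):
    grid = [i / 100 for i in range(1, 100)]
    return max(((log_lik(s, q, histories), s, q) for s in grid for q in grid))[1:]

# Five sites, three surveys each; naive occupancy would be 3/5 = 0.6.
histories = [(1, 0, 1), (0, 0, 0), (1, 1, 1), (0, 1, 0), (0, 0, 0)]
psi_hat, p_hat = fit(histories)
print(psi_hat, p_hat)
```

Because an occupied site can still produce an all-zero history, the maximum-likelihood occupancy comes out slightly above the naive fraction of sites with detections, which is exactly why the paper's corrected estimates (84.9%, 72.4%) exceed the raw detection rates (75.9%, 66.6%).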
Unification of gauge couplings in radiative neutrino mass models
Hagedorn, Claudia; Ohlsson, Tommy; Riad, Stella
2016-01-01
We investigate the possibility of gauge coupling unification in various radiative neutrino mass models, which generate neutrino masses at one- and/or two-loop level. Renormalization group running of gauge couplings is performed analytically and numerically at one- and two-loop order, respectively. We study three representative classes of radiative neutrino mass models: (I) minimal ultraviolet completions of the dimension-7 ΔL = 2 operators which generate neutrino masses at one- and/or two-loop level without and with dark matter candidates, (II) models with dark matter which lead to neutrino masses at one-loop level and (III) models with particles in the adjoint representation of SU(3). In class (I), gauge couplings unify in a few models and adding dark matter amplifies the chances for unification. In class (II), about a quarter of the models admits gauge coupling unification. In class (III) ...
Models of neutrino masses and baryogenesis
Majorana masses for the neutrinos imply lepton number violation and are intimately related to the lepton asymmetry of the universe, which gets related to the baryon asymmetry of the universe in the presence of the sphalerons during the electroweak phase transition. Assuming that the baryon asymmetry of the universe is ...
Masses of particles in the SO(18) grand unified model
Asatryan, G.M.
1984-01-01
The grand unified model based on the orthogonal group SO(18) is treated. The model involves four familiar and four mirror families of fermions. The generation of masses for the familiar and mirror particles is studied. The mass of the right-handed W{sub R} boson, which interacts via right-handed currents, is estimated.
A Coupled Chemical and Mass Transport Model for Concrete Durability
Jensen, Mads Mønster; Johannesson, Björn; Geiker, Mette Rica
2012-01-01
In this paper a general continuum theory is used to evaluate the service life of cement based materials, in terms of mass transport processes and chemical degradation of the solid matrix. The model established is a reactive mass transport model, based on an extended version of the Poisson-Nernst-...
The simultaneous mass and energy evaporation (SM2E) model.
Choudhary, Rehan; Klauda, Jeffery B
2016-01-01
In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. The theoretical models systematically under- or over-predicted at various flow conditions: laminar, transition, and turbulent. These models were harmonized with experimental measurements to eliminate the systematic under- or over-predictions; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to the limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass transfer-only model.
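The "mass transfer-only" limit mentioned at the end can be sketched with a generic film-model flux expression; the coefficient and conditions below are illustrative textbook placeholders, not the SM2E correlations themselves:

```python
# Film-model sketch of isothermal evaporation: with the liquid
# temperature known, the mass flux follows from a mass-transfer
# coefficient k_m and the vapour-pressure driving force.

R = 8.314  # universal gas constant, J/(mol K)

def evaporation_flux(k_m, p_sat, p_inf, temp_k, molar_mass):
    """Mass flux (kg/m^2/s): k_m * (p_sat - p_inf) / (R*T) * M."""
    return k_m * (p_sat - p_inf) / (R * temp_k) * molar_mass

# Illustrative numbers: water at roughly 25 C, dry far-field air,
# and an assumed k_m of 0.01 m/s.
flux = evaporation_flux(0.01, 3169.0, 0.0, 298.0, 0.018)
print(flux)
```

The harmonization step described in the abstract would then scale such a theoretical flux against measured evaporation rates per flow regime.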
Models of neutrino masses: Anarchy versus hierarchy
Altarelli, Guido; Feruglio, Ferruccio; Masina, Isabella
2003-01-01
We present a quantitative study of the ability of models with different levels of hierarchy to reproduce the solar neutrino solutions, in particular the LA solution. As a flexible testing ground we consider models based on SU(5)xU(1) F . In this context, we have made statistical simulations of models with different patterns from anarchy to various types of hierarchy: normal hierarchical models with and without automatic suppression of the 23 (sub)determinant and inverse hierarchy models. We find that, not only for the LOW or VO solutions, but even in the LA case, the hierarchical models have a significantly better success rate than those based on anarchy. The normal hierarchy and the inverse hierarchy models have comparable performances in models with see-saw dominance, while the inverse hierarchy models are particularly good in the no see-saw versions. As a possible distinction between these categories of models, the inverse hierarchy models favour a maximal solar mixing angle and their rate of success drops dramatically as the mixing angle decreases, while normal hierarchy models are far more stable in this respect. (author)
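The "statistical simulations" strategy described above can be caricatured with a toy two-flavour version: draw symmetric mass matrices with either anarchic O(1) entries or a hierarchical texture, and compare how often each yields a large mixing angle. The texture, suppression parameter and acceptance cut below are invented for illustration; the paper's success rates also fold in the mass-squared splittings and use the full SU(5)xU(1){sub F} structures, so this sketch does not reproduce its conclusions.

```python
import math
import random

def mixing_angle(m11, m12, m22):
    # Diagonalization angle of a real symmetric 2x2 matrix.
    return 0.5 * math.atan2(2 * m12, m22 - m11)

def large_angle_rate(hierarchical, trials=20000, eps=0.2, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if hierarchical:
            # dominant 22 entry, off-diagonal suppressed by eps
            m11 = eps ** 2 * rng.uniform(-1, 1)
            m12 = eps * rng.uniform(-1, 1)
            m22 = rng.uniform(-1, 1)
        else:
            # anarchy: all entries of the same order
            m11, m12, m22 = (rng.uniform(-1, 1) for _ in range(3))
        if abs(math.sin(2 * mixing_angle(m11, m12, m22))) > 0.8:
            hits += 1
    return hits / trials

anarchy_rate = large_angle_rate(hierarchical=False)
hier_rate = large_angle_rate(hierarchical=True)
print(anarchy_rate, hier_rate)
```

As expected, anarchic entries produce large mixing far more often, while the hierarchical texture suppresses it; scoring against a full set of observables is what changes the comparison in the paper.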
Dumusque, Xavier; Buchhave, Lars A.; Latham, David W.; Charbonneau, David; Dressing, Courtney D.; Gettel, Sara; Lopez-Morales, Mercedes [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Bonomo, Aldo S. [INAF - Osservatorio Astrofisico di Torino, via Osservatorio 20, I-10025 Pino Torinese (Italy); Haywood, Raphaëlle D.; Cameron, Andrew Collier; Horne, Keith [SUPA, School of Physics and Astronomy, University of St. Andrews, North Haugh, St. Andrews Fife KY16 9SS (United Kingdom); Malavolta, Luca [Dipartimento di Fisica e Astronomia "Galileo Galilei", Università di Padova, Vicolo dell'Osservatorio 3, I-35122 Padova (Italy); Ségransan, Damien; Pepe, Francesco; Udry, Stéphane [Observatoire Astronomique de l'Université de Genève, 51 ch. des Maillettes, CH-1290 Versoix (Switzerland); Molinari, Emilio; Cosentino, Rosario; Fiorenzano, Aldo F. M.; Harutyunyan, Avet [INAF - Fundación Galileo Galilei, Rambla José Ana Fernández Pérez 7, E-38712 Breña Baja (Spain); Figueira, Pedro, E-mail: xdumusque@cfa.harvard.edu [Centro de Astrofísica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); and others
2014-07-10
Kepler-10b was the first rocky planet detected by the Kepler satellite and confirmed with radial velocity follow-up observations from Keck-HIRES. The mass of the planet was measured with a precision of around 30%, which was insufficient to constrain models of its internal structure and composition in detail. In addition to Kepler-10b, a second planet transiting the same star with a period of 45 days was statistically validated, but the radial velocities were only good enough to set an upper limit of 20 M{sub ⊕} for the mass of Kepler-10c. To improve the precision on the mass for planet b, the HARPS-N Collaboration decided to observe Kepler-10 intensively with the HARPS-N spectrograph on the Telescopio Nazionale Galileo on La Palma. In total, 148 high-quality radial-velocity measurements were obtained over two observing seasons. These new data allow us to improve the precision of the mass determination for Kepler-10b to 15%. With a mass of 3.33 ± 0.49 M{sub ⊕} and an updated radius of 1.47{sub −0.02}{sup +0.03} R{sub ⊕}, Kepler-10b has a density of 5.8 ± 0.8 g cm{sup –3}, very close to the value predicted by models with the same internal structure and composition as the Earth. We were also able to determine a mass for the 45-day period planet Kepler-10c, with an even better precision of 11%. With a mass of 17.2 ± 1.9 M{sub ⊕} and radius of 2.35{sub −0.04}{sup +0.09} R{sub ⊕}, Kepler-10c has a density of 7.1 ± 1.0 g cm{sup –3}. Kepler-10c appears to be the first strong evidence of a class of more massive solid planets with longer orbital periods.
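The densities quoted above follow directly from the measured masses and radii; a quick arithmetic check, assuming only Earth's mean density of about 5.51 g cm{sup -3}:

```python
# A planet's mean density scales as rho_earth * (M/M_earth) / (R/R_earth)^3.

EARTH_DENSITY = 5.51  # g/cm^3, mean density of the Earth

def planet_density(mass_earths, radius_earths):
    return EARTH_DENSITY * mass_earths / radius_earths ** 3

print(round(planet_density(3.33, 1.47), 1))   # Kepler-10b -> 5.8
print(round(planet_density(17.2, 2.35), 1))   # Kepler-10c -> 7.3, within the quoted 7.1 +/- 1.0
```

Both values are consistent with the densities reported in the abstract within the stated uncertainties.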
RSMASS: A simple model for estimating reactor and shield masses
Marshall, A.C.; Aragon, J.; Gallup, D.
1987-01-01
A simple mathematical model (RSMASS) has been developed to provide rapid estimates of reactor and shield masses for space-based reactor power systems. Approximations are used rather than correlations or detailed calculations to estimate the reactor fuel mass and the masses of the moderator, structure, reflector, pressure vessel, miscellaneous components, and the reactor shield. The fuel mass is determined either by neutronics limits, thermal/hydraulic limits, or fuel damage limits, whichever yields the largest mass. RSMASS requires the reactor power and energy, 24 reactor parameters, and 20 shield parameters to be specified. This parametric approach should be applicable to a very broad range of reactor types. Reactor and shield masses calculated by RSMASS were found to be in good agreement with the masses obtained from detailed calculations
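The fuel-mass rule described above (the governing limit is whichever yields the largest mass) is simple enough to state in code; the numbers below are invented placeholders, not the actual RSMASS correlations:

```python
# RSMASS-style fuel-mass selection: the fuel mass is set by the
# neutronics limit, the thermal/hydraulic limit, or the fuel-damage
# limit, whichever demands the largest mass.

def fuel_mass_kg(m_neutronics: float, m_thermal: float, m_damage: float) -> float:
    """Return the governing fuel mass: the largest of the three limits."""
    return max(m_neutronics, m_thermal, m_damage)

# Hypothetical design where the thermal/hydraulic limit governs.
print(fuel_mass_kg(120.0, 185.0, 150.0))  # -> 185.0
```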
Anatomy of Higgs mass in supersymmetric inverse seesaw models
Chun, Eung Jin, E-mail: ejchun@kias.re.kr [Korea Institute for Advanced Study, Seoul 130-722 (Korea, Republic of); Mummidi, V. Suryanarayana, E-mail: soori9@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India); Vempati, Sudhir K., E-mail: vempati@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India)
2014-09-07
We compute the one-loop corrections to the CP-even Higgs mass matrix in the supersymmetric inverse seesaw model to single out the different cases where the radiative corrections from the neutrino sector could become important. It is found that there could be a significant enhancement in the Higgs mass even for Dirac neutrino masses of O(30) GeV if the left-handed sneutrino soft mass is comparable to or larger than the right-handed neutrino mass. In the case where the right-handed neutrino masses are significantly larger than the supersymmetry breaking scale, the corrections can at most account for an upward shift of 3 GeV. For very heavy multi-TeV sneutrinos, the corrections replicate the stop corrections at one loop. We further show that general gauge mediation with the inverse seesaw model naturally accommodates a 125 GeV Higgs with TeV-scale stops.
Effects of confinement on rock mass modulus: A synthetic rock mass modelling (SRM) study
I. Vazaios
2018-06-01
The main objective of this paper is to examine the influence of the applied confining stress on the rock mass modulus of moderately jointed rocks (well interlocked, undisturbed rock masses with blocks formed by three or fewer intersecting joints). A synthetic rock mass modelling (SRM) approach is employed to determine the mechanical properties of the rock mass. In this approach, the intact body of rock is represented by discrete element method (DEM) Voronoi grains with the ability to simulate the initiation and propagation of microcracks within the intact part of the model. The geometry of the pre-existing joints is generated by employing discrete fracture network (DFN) modelling based on field joint data collected from the Brockville Tunnel using LiDAR scanning. The geometrical characteristics of the simulated joints at a representative sample size are first validated against the field data, and then used to measure the rock quality designation (RQD), joint spacing, areal fracture intensity (P21), and block volumes. These geometrical quantities are used to quantitatively determine a representative range of the geological strength index (GSI). The results show that estimating the GSI using the RQD tends to make a closer estimate of the degree of blockiness, leading to GSI values corresponding to those obtained from direct visual observations of the rock mass conditions in the field. The use of joint spacing and block volume to quantify the GSI value range for the studied rock mass suggests a lower range compared to that evaluated in situ. Based on numerical modelling results and laboratory data of rock testing reported in the literature, a semi-empirical equation is proposed that relates the rock mass modulus to confinement as a function of the areal fracture intensity and joint stiffness. Keywords: Synthetic rock mass modelling (SRM), Discrete fracture network (DFN), Rock mass modulus, Geological strength index (GSI), Confinement
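For the RQD quantity used above, a standard theoretical relation (Priest and Hudson's negative-exponential result, assumed here purely as an illustration; the paper measures RQD directly on simulated scanlines) links RQD to the mean joint frequency:

```python
import math

# Priest-Hudson theoretical RQD for a negative-exponential joint-spacing
# distribution: RQD(%) = 100 * exp(-l*t) * (1 + l*t), where l is the
# joint frequency (joints per metre) and t the 0.1 m core threshold.

def rqd_priest_hudson(joints_per_metre: float, threshold_m: float = 0.1) -> float:
    lt = joints_per_metre * threshold_m
    return 100.0 * math.exp(-lt) * (1.0 + lt)

print(rqd_priest_hudson(0.0))   # intact rock -> 100.0
print(rqd_priest_hudson(10.0))  # 10 joints/m -> about 73.6
```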
Ellis, John
2016-01-01
We revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, $M_{in}$, above the supersymmetric gauge coupling unification scale, $M_{GUT}$. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, $m_0$ and $m_{1/2}$ respectively, at $M_{in}$, as do the trilinear soft supersymmetry-breaking parameters $A_0$. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and the LHC measurement of the Higgs mass, $m_h$. We find regions of $m_0$, $m_{1/2}$, $A_0$ and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for $m_0$ and $m_{1/...
Behaviour of turbulence models near a turbulent/non-turbulent interface revisited
Ferrey, P.; Aupoix, B.
2006-01-01
The behaviour of turbulence models near a turbulent/non-turbulent interface is investigated. The analysis holds for two-equation as well as for Reynolds stress turbulence models using the Daly and Harlow diffusion model. The behaviour near the interface is shown not to be a power law, as usually considered, but a more complex parametric solution. Why previous works seemed to numerically confirm the power-law solution is explained. Constraints for turbulence modelling are drawn, i.e., conditions ensuring that models behave well near a turbulent/non-turbulent interface so that the solution is not sensitive to the small turbulence levels imposed in the irrotational flow.
Mass corrections to Green functions in instanton vacuum model
Esaibegyan, S.V.; Tamaryan, S.N.
1987-01-01
The first non-vanishing mass corrections to the effective Green functions are calculated in a model of the instanton-based vacuum consisting of a superposition of instanton-anti-instanton fluctuations. The meson current correlators are calculated taking these corrections into account; the mass spectrum of the pseudoscalar octet as well as the value of the kaon axial constant are found. 7 refs
Systematics of quark mass matrices in the standard electroweak model
Frampton, P.H.; Jarlskog, C.; Stockholm Univ.
1985-01-01
It is shown that the quark mass matrices in the standard electroweak model satisfy the empirical relation M = M' + O(λ²), where M (M') refers to the mass matrix of the charge 2/3 (−1/3) quarks normalized to the largest eigenvalue, m{sub t} (m{sub b}), and λ = V{sub us} ≈ 0.22. (orig.)
Vlček, Lukáš; Nezbeda, Ivo
2004-01-01
Vol. 102, No. 5 (2004), pp. 485-497. ISSN 0026-8976. R&D Projects: GA ČR GA203/02/0764; GA AV ČR IAA4072303; GA AV ČR IAA4072309. Institutional research plan: CEZ:AV0Z4072921. Keywords: primitive model; association fluids; ethanol. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 1.406, year: 2004
Revisiting Temporal Markov Chains for Continuum modeling of Transport in Porous Media
Delgoshaie, A. H.; Jenny, P.; Tchelepi, H.
2017-12-01
The transport of fluids in porous media is dominated by flow-field heterogeneity resulting from the underlying permeability field. Due to the high uncertainty in the permeability field, many realizations of the reference geological model are used to describe the statistics of the transport phenomena in a Monte Carlo (MC) framework. There has been strong interest in working with stochastic formulations of the transport that are different from the standard MC approach. Several stochastic models based on a velocity process for tracer particle trajectories have been proposed. Previous studies have shown that for high variances of the log-conductivity, the stochastic models need to account for correlations between consecutive velocity transitions to predict dispersion accurately. The correlated velocity models proposed in the literature can be divided into two general classes of temporal and spatial Markov models. Temporal Markov models have been applied successfully to tracer transport in both the longitudinal and transverse directions. These temporal models are Stochastic Differential Equations (SDEs) with very specific drift and diffusion terms tailored for a specific permeability correlation structure. The drift and diffusion functions devised for a certain setup would not necessarily be suitable for a different scenario, (e.g., a different permeability correlation structure). The spatial Markov models are simple discrete Markov chains that do not require case specific assumptions. However, transverse spreading of contaminant plumes has not been successfully modeled with the available correlated spatial models. Here, we propose a temporal discrete Markov chain to model both the longitudinal and transverse dispersion in a two-dimensional domain. We demonstrate that these temporal Markov models are valid for different correlation structures without modification. Similar to the temporal SDEs, the proposed model respects the limited asymptotic transverse spreading of
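A minimal sketch of a temporal discrete Markov chain for particle velocities of the kind proposed above: at each time step the velocity either persists (temporal correlation) or is redrawn from a stationary state set. The states, persistence probability and transition rule are invented placeholders, not the model calibrated to a permeability field in the work itself:

```python
import random

def simulate(n_steps, p_stay=0.8, states=(-1.0, 0.5, 2.0), seed=3):
    # Velocity persists with probability p_stay; otherwise it jumps to a
    # state drawn uniformly from the stationary set. Position advances
    # by v per unit time step.
    rng = random.Random(seed)
    v = rng.choice(states)
    x, path = 0.0, []
    for _ in range(n_steps):
        if rng.random() > p_stay:
            v = rng.choice(states)
        x += v
        path.append(x)
    return path

trajectory = simulate(100)
print(trajectory[-1])
```

Averaging many such trajectories (with different seeds) gives the plume statistics that the stochastic formulation is meant to reproduce without running full Monte Carlo flow simulations.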
Distinct and yet not Separate: Revisiting the Welfare Models in the EU New Member States
Helena Tendera-Właszczuk
2017-03-01
Objective: The objective of this paper is to evaluate the welfare state models in the EU countries and to open the discussion of whether the new member states (NMS), i.e. those EU member states that joined the EU in 2004/2007, fit the Sapir typology (Nordic model, Continental model, Anglo-Saxon model, Mediterranean model). The second objective is to examine the labour market situation and the reduction of poverty and social inequalities in the EU countries. The third is to address the issue of whether public spending can be managed both justly and effectively. Research Design & Methods: The linear regression function and correlation have been used to present the effectiveness of social expenditures in reducing poverty, as well as evidence that public spending can be managed both justly and effectively. Findings: This paper demonstrates that more similarities can be drawn across the NMS and the EU-15 than within the NMS and EU-15, respectively. The typology of welfare state models is applied to the NMS and their effectiveness is tested. Accordingly, we classify the Czech Republic, Slovenia and Cyprus as countries of the Nordic model; Hungary, Slovakia and Malta as the Continental model; Lithuania, Latvia and Estonia as the Anglo-Saxon model and, finally, Poland, Croatia, Romania and Bulgaria as the Mediterranean model. Implications & Recommendations: Recent data suggest that the global crisis has caused an increase in the level of poverty and social spending in the EU countries. However, this is just a temporary situation and it does reflect the solutions of the models. Contribution & Value Added: The NMS tend to be examined as a separate group of countries that, as the literature suggests, depict different qualities of the welfare models than those pursued in the EU-15.
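The regression-and-correlation test of spending effectiveness mentioned above can be sketched with an ordinary least-squares fit; the expenditure/poverty-reduction pairs below are invented for illustration, not Eurostat data:

```python
def linreg(xs, ys):
    # Ordinary least squares: intercept a, slope b, Pearson correlation r.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx
    return my - b * mx, b, sxy / (sxx * syy) ** 0.5

# Hypothetical: social spending (% of GDP) vs. poverty reduction (p.p.).
spend = [15.0, 18.0, 21.0, 25.0, 28.0]
reduction = [4.0, 6.5, 7.0, 9.5, 11.0]
a, b, r = linreg(spend, reduction)
print(b, r)
```

A positive slope with a high correlation coefficient is what the paper reads as spending being effective; comparing residuals across countries is one way to argue that similar outlays can be managed more or less justly and effectively.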
Thermal modelling of Advanced LIGO test masses
Wang, H; Dovale Álvarez, M; Mow-Lowry, C M; Freise, A; Blair, C; Brooks, A; Kasprzack, M F; Ramette, J; Meyers, P M; Kaufer, S; O’Reilly, B
2017-01-01
High-reflectivity fused silica mirrors are at the epicentre of today’s advanced gravitational wave detectors. In these detectors, the mirrors interact with high power laser beams. As a result of finite absorption in the high reflectivity coatings the mirrors suffer from a variety of thermal effects that impact on the detectors’ performance. We propose a model of the Advanced LIGO mirrors that introduces an empirical term to account for the radiative heat transfer between the mirror and its surroundings. The mechanical mode frequency is used as a probe for the overall temperature of the mirror. The thermal transient after power build-up in the optical cavities is used to refine and test the model. The model provides a coating absorption estimate of 1.5–2.0 ppm and estimates that 0.3 to 1.3 ppm of the circulating light is scattered onto the ring heater. (paper)
Schmidt, Alexandre G M; Paiva, Milena M
2012-01-01
We revisit the quantum two-person duel. In this problem, Alice and Bob each possess a spin-1/2 particle whose states model the dead and alive states of each player. We review the Abbott and Flitney result, now considering non-zero α{sub 1} and α{sub 2} in order to decide whether it is better for Alice to shoot or not to shoot the second time, and we also consider a duel where the players do not necessarily start alive. This simple assumption allows us to explore several interesting special cases, namely how a dead player can win the duel by shooting just once, how Bob can revive Alice after one shot, and the better strategy for Alice, being either alive or in a superposition of alive and dead states, when fighting a dead opponent. (paper)
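The two-level dynamics can be caricatured by a rotation acting on the opponent's (alive, dead) amplitudes. The accuracy angle below is an arbitrary illustrative stand-in for the paper's α{sub 1}, α{sub 2} parameters, and the rotation is only a toy version of the full duel unitary:

```python
import math

def shoot(target, theta):
    # A shot acts unitarily, rotating (amp_alive, amp_dead) by theta.
    a, d = target
    return (math.cos(theta) * a - math.sin(theta) * d,
            math.sin(theta) * a + math.cos(theta) * d)

def p_alive(state):
    return abs(state[0]) ** 2

alice = shoot((1.0, 0.0), math.pi / 3)    # Bob shoots a living Alice
print(round(p_alive(alice), 2))           # -> 0.25
revived = shoot((0.0, 1.0), math.pi / 3)  # shooting a "dead" player
print(round(p_alive(revived), 2))         # -> 0.75
```

Because the shot is unitary, firing at a dead player rotates amplitude back into the alive state, which is the mechanism behind the revival and dead-player-wins scenarios discussed in the abstract.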
Logistics Innovation Process Revisited
Gammelgaard, Britta; Su, Shong-Iee Ivan; Yang, Su-Lan
2011-01-01
Purpose – The purpose of this paper is to learn more about logistics innovation processes and their implications for the focal organization as well as the supply chain, especially suppliers. Design/methodology/approach – The empirical basis of the study is a longitudinal action research project that was triggered by the practical needs of new ways of handling material flows of a hospital. This approach made it possible to revisit theory on the logistics innovation process. Findings – Apart from the tangible benefits reported to the case hospital, five findings can be extracted from this study: the logistics innovation process model may include not just customers but also suppliers; logistics innovation in buyer-supplier relations may serve as an alternative to outsourcing; logistics innovation processes are dynamic and may improve supplier partnerships; logistics innovations in the supply chain are as dependent...
Radiative neutrino mass model with degenerate right-handed neutrinos
Kashiwase, Shoichi; Suematsu, Daijiro
2016-01-01
The radiative neutrino mass model can relate neutrino masses and dark matter at the TeV scale. If we apply this model to thermal leptogenesis, we need to consider resonant leptogenesis at that scale, which requires both finely degenerate masses for the right-handed neutrinos and a tiny neutrino Yukawa coupling. We propose an extension of the model with a U(1) gauge symmetry, in which these conditions are shown to be simultaneously realized through TeV-scale symmetry breaking. Moreover, this extension can bring about a small quartic scalar coupling between the Higgs doublet scalar and an inert doublet scalar, which characterizes the radiative neutrino mass generation. It is also the origin of the Z₂ symmetry that guarantees the stability of dark matter. Several assumptions that are imposed independently in the original model are closely connected through this extension. (orig.)
Hubble induced mass after inflation in spectator field models
Fujita, Tomohiro [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Harigaya, Keisuke, E-mail: tomofuji@stanford.edu, E-mail: keisukeh@icrr.u-tokyo.ac.jp [Department of Physics, University of California, Berkeley, CA 94720 (United States)
2016-12-01
Spectator field models such as the curvaton scenario and modulated reheating are attractive scenarios for the generation of the cosmic curvature perturbation, as the constraints on inflation models are relaxed. In this paper, we discuss the effect of Hubble induced masses on the dynamics of spectator fields after inflation. We pay particular attention to the Hubble induced mass generated by the kinetic energy of an oscillating inflaton, which is generically unsuppressed but often overlooked. In the curvaton scenario, the Hubble induced mass relaxes the constraints on the properties of the inflaton and the curvaton, such as the reheating temperature and the inflation scale. We comment on the implications of our discussion for baryogenesis in the curvaton scenario. In modulated reheating, the predictions of models, e.g. the non-Gaussianity, can be considerably altered. Furthermore, we propose a new model of modulated reheating utilizing the Hubble induced mass, which realizes a wide range of the local non-Gaussianity parameter.
Myong, R. S.; Nagdewe, S. P.
2011-01-01
Grad's closure for the high-order moment equation is revisited and, by extending his theory, a physically motivated closure is developed for one-dimensional velocity-shear gas flow. The closure is based on a physical argument about the relative importance of the various terms appearing in the moment equation. It is also derived such that the resulting theory includes the well-established linear theory (Navier-Stokes-Fourier) as a limiting case near local thermal equilibrium.
Optimal management of ecosystem services with pollution traps : The lake model revisited
de Zeeuw, Aart; Grass, Dieter; Xepapadeas, Anastasios
2017-01-01
In this paper, optimal management of the lake model and common-property outcomes are reconsidered when the lake model is extended with a slowly changing variable. New optimal trajectories are found that were hidden in the simplified analysis. Furthermore, it is shown that two Nash equilibria may
The Two-Capacitor Problem Revisited: A Mechanical Harmonic Oscillator Model Approach
Lee, Keeyung
2009-01-01
The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that "exactly half" the work done by a constant applied…
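The energy bookkeeping behind the abstract above is short enough to verify directly. A minimal sketch (the capacitance and voltage values are arbitrary): charge conservation fixes the final common voltage at V0/2, so exactly half the initial energy remains, whatever the dissipation mechanism.

```python
def two_capacitor_energies(C: float, V0: float):
    """Return (initial, final) stored energy for two identical capacitors."""
    Q = C * V0                     # charge on the first capacitor before connection
    E_initial = 0.5 * C * V0 ** 2
    V_final = Q / (2 * C)          # charge conserved over total capacitance 2C
    E_final = 0.5 * (2 * C) * V_final ** 2
    return E_initial, E_final

E0, E1 = two_capacitor_energies(C=1e-6, V0=10.0)
print(E1 / E0)  # 0.5: exactly half the energy survives, for any C and V0
```

The result is independent of how the energy is dissipated (resistance, radiation, oscillation), which is the point the harmonic oscillator analogy in the paper addresses.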
Revisiting the Model of Creative Destruction: St. Jacobs, Ontario, a Decade Later
Mitchell, Clare J. A.; de Waal, Sarah B.
2009-01-01
Ten years ago, the model of creative destruction was developed to predict the fate of communities that base their development on the commodification of rural heritage (Mitchell, C.J.A., 1998. Entrepreneurialism, commodification and creative destruction: a model of post-modern community development. Journal of Rural Studies 14, 273-286). Its…
Fellnhofer, Katharina
2017-01-01
Relying on Bandura's (1986) social learning theory, Ajzen's (1988) theory of planned behaviour (TPB), and Dyer's (1994) model of entrepreneurial careers, this study aims to highlight the potential of entrepreneurial role models to entrepreneurship education. The results suggest that entrepreneurial courses would greatly benefit from real-life…
Remembered Experiences and Revisit Intentions
Barnes, Stuart; Mattsson, Jan; Sørensen, Flemming
2016-01-01
Tourism is an experience-intensive sector in which customers seek and pay for experiences above everything else. Remembering past tourism experiences is also crucial for an understanding of the present, including the predicted behaviours of visitors to tourist destinations. We adopt a longitudinal approach to memory data collection from psychological science, which has the potential to contribute to our understanding of tourist behaviour. In this study, we examine the impact of remembered tourist experiences in a safari park. In particular, using matched survey data collected longitudinally and PLS path modelling, we examine the impact of positive affect tourist experiences on the development of revisit intentions. We find that longer-term remembered experiences have the strongest impact on revisit intentions, more so than predicted or immediate memory after an event. We also find that remembered…
The Gogny-Hartree-Fock-Bogoliubov nuclear-mass model
Goriely, S. [Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium); Hilaire, S.; Girod, M.; Peru, S. [CEA, DAM, DIF, Arpajon (France)
2016-07-15
We present the Gogny-Hartree-Fock-Bogoliubov model, which reproduces nuclear masses with an accuracy comparable to that of the best mass formulas. In contrast to the Skyrme-HFB nuclear-mass models, an explicit and self-consistent account of all the quadrupole correlation energies is included within the 5D collective Hamiltonian approach. The final rms deviation with respect to the 2353 masses measured in the 2012 atomic mass evaluation is 789 keV. In addition, the D1M Gogny force is shown to predict nuclear and neutron matter properties in agreement with microscopic calculations based on realistic two- and three-body forces. The D1M properties and its predictions of various observables are compared with those of D1S and D1N. (orig.)
BrainSignals Revisited: Simplifying a Computational Model of Cerebral Physiology.
Matthew Caldwell
Multimodal monitoring of brain state is important both for the investigation of healthy cerebral physiology and to inform clinical decision making in conditions of injury and disease. Near-infrared spectroscopy is an instrument modality that allows non-invasive measurement of several physiological variables of clinical interest, notably haemoglobin oxygenation and the redox state of the metabolic enzyme cytochrome c oxidase. Interpreting such measurements requires the integration of multiple signals from different sources to try to understand the physiological states giving rise to them. We have previously published several computational models to assist with such interpretation. Like many models in the realm of Systems Biology, these are complex and dependent on many parameters that can be difficult or impossible to measure precisely. Taking one such model, BrainSignals, as a starting point, we have developed several variant models in which specific regions of complexity are substituted with much simpler linear approximations. We demonstrate that model behaviour can be maintained whilst achieving a significant reduction in complexity, provided that the linearity assumptions hold. The simplified models have been tested for applicability with simulated data and experimental data from healthy adults undergoing a hypercapnia challenge, but relevance to different physiological and pathophysiological conditions will require specific testing. In conditions where the simplified models are applicable, their greater efficiency has potential to allow their use at the bedside to help interpret clinical data in near real-time.
Modeling rapidly disseminating infectious disease during mass gatherings
Chowell Gerardo
2012-12-01
We discuss models for rapidly disseminating infectious diseases during mass gatherings (MGs), using influenza as a case study. Recent innovations in modeling and forecasting influenza transmission dynamics at local, regional, and global scales have made influenza a particularly attractive model scenario for MGs. We discuss the behavioral, medical, and population factors for modeling MG disease transmission, review existing model formulations, and highlight key data and modeling gaps related to modeling MG disease transmission. We argue that the proposed improvements will help integrate infectious-disease models into MG health contingency plans in the near future, echoing modeling efforts that have helped shape influenza pandemic preparedness plans in recent years.
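A minimal sketch of the kind of transmission model the abstract surveys: a deterministic SIR system integrated with Euler steps. The population size, contact rate beta, and recovery rate gamma below are illustrative assumptions, not values from the paper.

```python
def sir_epidemic(N=50_000, I0=10, beta=0.6, gamma=0.25, days=60, dt=0.1):
    """Integrate dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    S, I, R = N - I0, float(I0), 0.0
    peak_I = I
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N * dt   # new infections this step
        new_rec = gamma * I * dt          # new recoveries this step
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        peak_I = max(peak_I, I)
    return S, I, R, peak_I

S, I, R, peak = sir_epidemic()
print(f"final susceptibles: {S:.0f}, peak infected: {peak:.0f}")
```

With beta/gamma = 2.4 (a plausible influenza-like reproduction number), the outbreak burns through most of the crowd within the simulated window; MG-specific models layer behavioral and mixing detail on top of this skeleton.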
Revisiting the T2K data using different models for the neutrino-nucleus cross sections
Meloni, D., E-mail: meloni@fis.uniroma3.it [Dipartimento di Fisica 'E. Amaldi', Università degli Studi Roma Tre, Via della Vasca Navale 84, 00146 Roma (Italy); Martini, M., E-mail: mmartini@ulb.ac.be [Institut d'Astronomie et d'Astrophysique, CP-226, Université Libre de Bruxelles, 1050 Brussels (Belgium)
2012-09-17
We present a three-flavor fit to the recent ν_μ → ν_e and ν_μ → ν_μ T2K oscillation data with different models for the neutrino-nucleus cross section. We show that, even for a limited statistics, the allowed regions and best fit points in the (θ₁₃, δ_CP) and (θ₂₃, Δm²_atm) planes are affected if, instead of using the Fermi gas model to describe the quasielastic cross section, we employ a model including the multinucleon emission channel.
An algebraic model for quark mass matrices with heavy top
Krolikowski, W.; Warsaw Univ.
1991-01-01
In terms of an intergeneration U(3) algebra, a numerical model is constructed for quark mass matrices, predicting a top-quark mass around 170 GeV and a CP-violating phase around 75°. The CKM matrix is nonsymmetric in moduli, with |V_ub| being very small. All moduli are consistent with their experimental limits. The model is motivated by the author's previous work on three replicas of the Dirac particle, presumably resulting in three generations of leptons and quarks. The paper may also be viewed as an introduction to a new method of intrinsic dynamical description of lepton and quark mass matrices. (author)
Bayesian modeling of the mass and density of asteroids
Dotson, Jessie L.; Mathias, Donovan
2017-10-01
Mass and density are two of the fundamental properties of any object. In the case of near earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to (potential) impact and planning possible mitigation options. The density of an asteroid can illuminate the structure of the asteroid. A low density can be indicative of a rubble pile structure whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter to understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass/density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass-radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the masses and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probabilistically forecasting masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties and intrinsic scatter.
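The probabilistic flavour of the abstract above can be illustrated with a stdlib-only Monte Carlo propagation of measurement uncertainty into density. The mass and radius figures are invented for illustration, not from the study, and the full hierarchical Bayesian treatment is far richer than this sketch.

```python
import math
import random

def density_samples(mass_kg, mass_sigma, radius_m, radius_sigma,
                    n=100_000, seed=1):
    """Draw (mass, radius) from independent Gaussians; return density samples."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        m = rng.gauss(mass_kg, mass_sigma)
        r = rng.gauss(radius_m, radius_sigma)
        out.append(m / ((4.0 / 3.0) * math.pi * r ** 3))  # rho = m / V
    return out

# Invented figures: a ~1 km-scale body with ~20% mass and ~5% radius errors
rho = density_samples(1.4e12, 0.3e12, 500.0, 25.0)
rho.sort()
median = rho[len(rho) // 2]
print(f"median density ~ {median:.0f} kg/m^3")
```

Even this crude propagation shows how large mass uncertainties translate into a broad, skewed density distribution, which is why probabilistic forecasts are preferred over point estimates for structure assessment.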
Fellnhofer, Katharina
2017-01-01
Relying on Bandura’s (1986) social learning theory, Ajzen’s (1988) theory of planned behaviour (TPB), and Dyer’s (1994) model of entrepreneurial careers, this study aims to highlight the potential of entrepreneurial role models to entrepreneurship education. The results suggest that entrepreneurial courses would greatly benefit from real-life experiences, either positive or negative. The results of regression analysis based on 426 individuals, primarily from Austria, Finland, and Greece, show that role models increase learners’ entrepreneurial perceived behaviour control (PBC) by increasing their self-efficacy. This study can inform the research and business communities and governments about the importance of integrating entrepreneurs into education to stimulate entrepreneurial PBC. This study is the first of its kind using its approach, and its results warrant more in-depth studies of storytelling by entrepreneurial role models in the context of multimedia entrepreneurship education. PMID:29104604
The consensus in the two-feature two-state one-dimensional Axelrod model revisited
Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.
2015-04-01
The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
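A minimal Monte Carlo sketch of the one-dimensional Axelrod dynamics described above, with F = 2 features and q = 2 states per feature. The chain length, seed, and sweep cap are arbitrary choices; no claim is made about reproducing the paper's finite-size analysis.

```python
import random

def axelrod_1d(L=60, F=2, q=2, seed=0, max_sweeps=10_000):
    """Run 1D Axelrod dynamics until an absorbing state (or a sweep cap)."""
    rng = random.Random(seed)
    culture = [[rng.randrange(q) for _ in range(F)] for _ in range(L)]

    def absorbing():
        # frozen when every neighbouring pair is identical or shares no feature
        return all(
            sum(x == y for x, y in zip(culture[j], culture[j + 1])) in (0, F)
            for j in range(L - 1)
        )

    for step in range(max_sweeps * L):
        i = rng.randrange(L - 1)                  # pick the bond (i, i+1)
        a, b = culture[i], culture[i + 1]
        shared = sum(x == y for x, y in zip(a, b))
        if 0 < shared < F and rng.random() < shared / F:
            f = rng.choice([k for k in range(F) if a[k] != b[k]])
            a[f] = b[f]                           # site i copies one differing feature
        if step % L == 0 and absorbing():
            break
    domains = 1 + sum(culture[j] != culture[j + 1] for j in range(L - 1))
    return culture, domains

final_state, n_domains = axelrod_1d()
print(f"frozen (or capped) configuration with {n_domains} cultural domain(s)")
```

Counting domains in the frozen state across many seeds and chain lengths is exactly the kind of finite-chain experiment the paper uses to reconcile the simulation and analytical pictures.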
Undergraduate Groupwork Revisited: the Use of the Scrum Model to Create Agile Learning Environments
Jurado-Navas, Antonio; Munoz-Luna, Rosa
2016-01-01
The present paper aims to analyse the impact of an innovative teaching model on the learning outcomes of a group of undergraduate students at the University of Malaga (Spain). Based on agile scrum models adopted in the engineering industry, the authors have transposed the scrum methodology to pedagogical contexts at university level. This paper describes the impact of the innovative scrum model on groupwork management in undergraduate education. The communication problems that already exist when working in groups lead to slow cooperation among group members and, therefore, poorer learning outcomes. Such communication deficiencies can be alleviated by the introduction of short and frequent meetings in each group of 4-5 members, so that learning objectives are short-term and attainable. The scrum model offers the procedural framework in which to insert those frequent meetings and in which all…
Revisiting low-fidelity two-fluid models for gas–solids transport
Adeleke, Najeem, E-mail: najm@psu.edu; Adewumi, Michael, E-mail: m2a@psu.edu; Ityokumbul, Thaddeus
2016-08-15
Two-phase gas–solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors and food processing units. Systems automation and real time process optimization stand to benefit a great deal from availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas–solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for modulus of elasticity. A new set of Roe–Pike averages are presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux-limiter. The model is evaluated by the prediction of experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately even under flow regimes involving counter-current flow.
Mass models for disk and halo components in spiral galaxies
Athanassoula, E.; Bosma, A.
1987-01-01
The mass distribution in spiral galaxies is investigated by means of numerical simulations, summarizing the results reported by Athanassoula et al. (1986). Details of the modeling technique employed are given, including bulge-disk decomposition; computation of bulge and disk rotation curves (assuming constant mass/light ratios for each); and determination (for spherical symmetry) of the total halo mass out to the optical radius, the concentration indices, the halo-density power law, the core radius, the central density, and the velocity dispersion. Also discussed are the procedures for incorporating galactic gas and checking the spiral structure extent. It is found that structural constraints limit disk mass/light ratios to a range of 0.3 dex, and that the most likely models are maximum-disk models with m = 1 disturbances inhibited. 19 references
Growth and energy nexus in Europe revisited: Evidence from a fixed effects political economy model
Menegaki, Angeliki N.; Ozturk, Ilhan
2013-01-01
This is an empirical study on the causal relationship between economic growth and energy for 26 European countries in a multivariate panel framework over the period 1975–2009 using a two-way fixed effects model and including greenhouse gas emissions, capital, fossil energy consumption, Herfindahl index (political competition) and number of years the government chief executive stays in office (political stability) as independent variables in the model. Empirical results confirm bidirectional causality between growth and political stability, capital and political stability, capital and fossil energy consumption. Whether political stability favors the implementation of growth or leads to corruption demands further research. - Highlights: • Economic growth and energy for 26 European countries is examined. • Two-way fixed effects model with political economy variables is employed. • Bidirectional causality is observed between growth and political stability
Revisiting the Cape Cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection - dispersion/attachment - detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (ln K) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured
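The coupling of colloid filtration theory to transport parameters mentioned above rests on the standard first-order attachment rate. The sketch below uses common CFT notation (porosity n, grain diameter d_c, single-collector efficiency eta0, sticking efficiency alpha); the numerical values are assumed sand-aquifer figures, not parameters from the Cape Cod test.

```python
def attachment_rate(porosity, grain_diameter_m, eta0, alpha, velocity_m_s):
    """First-order attachment rate k_att [1/s] from colloid filtration theory:
    k_att = 3 (1 - n) / (2 d_c) * eta0 * alpha * v
    """
    return (3.0 * (1.0 - porosity) / (2.0 * grain_diameter_m)
            * eta0 * alpha * velocity_m_s)

# Illustrative, assumed values for a sandy aquifer
k = attachment_rate(porosity=0.39, grain_diameter_m=5.9e-4,
                    eta0=5e-3, alpha=0.01, velocity_m_s=4.6e-6)
print(f"k_att ~ {k:.2e} 1/s")
```

Because eta0 and v both depend on the local hydraulic conductivity, evaluating this rate field-point by field-point over geostatistical K realizations is what links the filtration theory to the stochastic particle-tracking model.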
Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor
2013-04-01
A sensitivity study of the numerical model, as well as an inverse modelling approach applied to the atmospheric dispersion issues after the Chernobyl disaster, are both presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field which otherwise is known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground of iodine-131, caesium-137 and caesium-134 were considered. The impact of the implemented parameterizations of the physical processes (dry and wet depositions, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) were investigated for the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations (sensitivity study), the statistics comparing the model results to the field measurements of concentrations in air are clearly improved when using a reconstructed source term. As regards the ground-deposited concentrations, an improvement can only be seen in the case of a satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also reinforce the use of a reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process also seems to be
Reconstructing building mass models from UAV images
Li, Minglei
2015-07-26
We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.
Phase transitions in relativistic models: revisiting the Nolen-Schiffer anomaly
Menezes, D.P.; Providencia, C.
2003-01-01
We use the non-linear Walecka model in a Thomas-Fermi approximation to investigate the effects of the ρ-ω mixing term in infinite nuclear matter and in finite nuclei. For finite nuclei, the contribution of the isospin mixing term is very large compared with the value expected to solve the Nolen-Schiffer anomaly. (author)
What Time Is Sunrise? Revisiting the Refraction Component of Sunrise/set Prediction Models
Wilson, Teresa; Bartlett, Jennifer L.; Hilton, James Lindsay
2017-01-01
Algorithms that predict sunrise and sunset times currently have an error of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, even including difficulties determining when the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compare these predictions with data sets of observed rise/set times to create a better model. Sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem. While there are a few data sets available, we will also begin collecting this data using smartphones as part of a citizen science project. The mobile application for this project will be available in the Google Play store. Data analysis will lead to more complete models that will provide more accurate rise/set times for the benefit of astronomers, navigators, and outdoorsmen everywhere.
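One widely used refraction component that such a calculator could interchange is Bennett's (1982) empirical formula, sketched below; the pressure and temperature scaling assumes the standard atmosphere, and this is not the model the authors propose.

```python
import math

def bennett_refraction_arcmin(apparent_alt_deg: float,
                              pressure_hpa: float = 1010.0,
                              temp_c: float = 10.0) -> float:
    """Atmospheric refraction in arcminutes for an apparent altitude in degrees,
    per Bennett's empirical formula: R = cot(h + 7.31 / (h + 4.4))."""
    h = apparent_alt_deg
    r = 1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))
    # scale for non-standard pressure and temperature
    return r * (pressure_hpa / 1010.0) * (283.0 / (273.0 + temp_c))

# Near the horizon the standard refraction is about 34 arcminutes, which is
# why the Sun appears to rise before it is geometrically above the horizon.
print(f"{bennett_refraction_arcmin(0.0):.1f} arcmin at the horizon")
```

Swapping this term for temperature-profile- or aerosol-aware alternatives, and comparing against observed rise/set times, is exactly the interchange experiment the abstract describes.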
School leadership effects revisited: a review of empirical studies guided by indirect-effect models
Hendriks, Maria A.; Scheerens, Jaap
2013-01-01
Fourteen leadership effect studies that used indirect-effect models were quantitatively analysed to explore the most promising mediating variables. The results indicate that total effect sizes based on indirect-effect studies appear to be low, quite comparable to the results of some meta-analyses of…
Holland in Iceland Revisited: An Emic Approach to Evaluating U.S. Vocational Interest Models
Einarsdottir, Sif; Rounds, James; Su, Rong
2010-01-01
An emic approach was used to test the structural validity and applicability of Holland's (1997) RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, Conventional) model in Iceland. Archival data from the development of the Icelandic Interest Inventory (Einarsdottir & Rounds, 2007) were used in the present investigation. The data…
Revisiting Precede-Proceed: A Leading Model for Ecological and Ethical Health Promotion
Porter, Christine M.
2016-01-01
Background: The Precede-Proceed model has provided moral and practical guidance for the fields of health education and health promotion since Lawrence Green first developed Precede in 1974 and Green and Kreuter added Proceed in 1991. Precede-Proceed today remains the most comprehensive and one of the most used approaches to promoting health.…
Engsted, Tom
1994-01-01
Earlier studies of the classic European hyperinflations assume that shocks to money demand are non-stationary. Using cointegration tests, this article shows that this assumption is erroneous. Based on a cointegrated VAR model, it is found that, during the European hype…
Packaging the News: Propaganda Model Revisited and the Implications for Foreign Affairs Coverage.
Hsu, Mei-Ling
This research review explores the propaganda model proposed by E. S. Herman and N. Chomsky (1988) as an alternative way of looking at the American news media. The study begins with a review of the theoretical assumptions and the supporting empirical findings highlighting the propaganda framework, following which is a synthesis of research…
Revisiting the concept of level of detail in 3D city modelling
Biljecki, F.; Zhao, J.; Stoter, J.E.; Ledoux, H.
2013-01-01
This review paper discusses the concept of level of detail in 3D city modelling, and is a first step towards a foundation for a standardised definition. As an introduction, a few level of detail specifications, outlooks and approaches are given from the industry. The paper analyses the general…
Revisiting Kappa to account for change in the accuracy assessment of land-use models
Vliet, van J.; Bregt, A.K.; Hagen-Zanker, A.
2011-01-01
Land-use change models are typically calibrated to reproduce known historic changes. Calibration results can then be assessed by comparing two datasets: the simulated land-use map and the actual land-use map at the same time. A common method for this is the Kappa statistic, which expresses the…
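For reference, the Kappa statistic compares the observed agreement between two maps with the agreement expected by chance from the marginal class frequencies. A minimal sketch (the confusion matrix is invented, not from the paper):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: simulated land-use classes, columns: actual classes)."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    po = np.trace(c) / total                                  # observed agreement
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / total**2     # chance agreement
    return (po - pe) / (1.0 - pe)

# Two hypothetical land-use maps cross-tabulated over 100 cells
m = [[40, 10],
     [10, 40]]
print(round(kappa(m), 3))   # po = 0.8, pe = 0.5 -> prints 0.6
```

A kappa of 1 means perfect agreement; 0 means agreement no better than chance, which is exactly the baseline the paper revisits when the reference is historic change rather than a static map.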
Kleijnen, J.P.C.
2006-01-01
Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these…
How inequality hurts growth: Revisiting the Galor-Zeira model through a Korean case
Jun, Bogang; Kaltenberg, Mary; Hwang, Won-sik
2017-01-01
This paper aims to show that the level of inequality increases via the human capital channel with credit market imperfections generating negative effects on economic growth. We expand the model presented by Galor and Zeira (1993) to represent the fact that the economy benefits from endogenous…
Mass of decaying wino from AMS-02 2014
Ibe, Masahiro [Tokyo Univ. (Japan). Inst. for Cosmic Ray Research; Univ. Tokyo (Japan). Kavli Inst. for the Physics and Mathematics of the Universe; Matsumoto, Shigeki; Yanagida, Tsutomu T. [Univ. Tokyo (Japan). Kavli Inst. for the Physics and Mathematics of the Universe; Shirai, Satoshi [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2014-09-15
We revisit the decaying wino dark matter scenario in light of the updated positron fraction and the electron and positron fluxes in cosmic rays recently reported by the AMS-02 collaboration. We show that the AMS-02 results favor a wino dark matter mass of around a few TeV, which is consistent with the prediction for the wino mass in the pure gravity mediation model.
Mass of decaying wino from AMS-02 2014
Ibe, Masahiro, E-mail: ibe@icrr.u-tokyo.ac.jp [Institute for Cosmic Ray Research (ICRR), Theory Group, University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (IPMU), University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Matsumoto, Shigeki [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Shirai, Satoshi [Deutsches Elektronen-Synchrotron (DESY), 22607 Hamburg (Germany); Yanagida, Tsutomu T. [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), University of Tokyo, Kashiwa, Chiba 277-8568 (Japan)
2015-02-04
We revisit the decaying wino dark matter scenario in light of the updated positron fraction and the electron and positron fluxes in cosmic rays recently reported by the AMS-02 collaboration. We show that the AMS-02 results favor a wino dark matter mass of around a few TeV, which is consistent with the prediction for the wino mass in the pure gravity mediation model.
Mass of decaying wino from AMS-02 2014
Ibe, Masahiro
2014-09-01
We revisit the decaying wino dark matter scenario in light of the updated positron fraction and the electron and positron fluxes in cosmic rays recently reported by the AMS-02 collaboration. We show that the AMS-02 results favor a wino dark matter mass of around a few TeV, which is consistent with the prediction for the wino mass in the pure gravity mediation model.
Revisiting the O(3) non-linear sigma model and its Pohlmeyer reduction
Pastras, Georgios [NCSR ' ' Demokritos' ' , Institute of Nuclear and Particle Physics, Attiki (Greece)
2018-01-15
It is well known that sigma models in symmetric spaces admit equivalent descriptions in terms of integrable systems, such as the sine-Gordon equation, through Pohlmeyer reduction. In this paper, we study the mapping between known solutions of the Euclidean O(3) non-linear sigma model, such as instantons, merons and elliptic solutions that interpolate between the latter, and solutions of the Pohlmeyer-reduced theory, namely the sinh-Gordon equation. It turns out that instantons do not have a counterpart, merons correspond to the ground state, while the class of elliptic solutions is characterized by a two-to-one correspondence between solutions in the two descriptions. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
The general dynamic model of island biogeography revisited on the level of major plant families
Lenzner, Bernd; Beierkuhnlein, Carl; Patrick, Weigelt
2017-01-01
Aim: The general dynamic model (GDM) proposed by Whittaker et al. (2008) is a widely accepted theoretical framework in island biogeography. In this study, we explore whether GDM predictions hold when overall plant diversity is deconstructed into major plant families. Location: 101 islands from 14 oceanic archipelagos worldwide. Methods: Occurrence data for all species of nine large, cosmopolitan flowering plant families were used to test predictions derived from the GDM. We analyzed the effects of island area and age on species richness as well as number and percentage of single-island endemic species per family using mixed-effect models. Results: Total species and endemic richness as well as the percentage of endemic species showed a hump-shaped relationship with island age. The overall pattern was mainly driven by a few species-rich plant families. Varying patterns were found for individual…
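The hump-shaped richness-age prediction of the GDM is commonly tested with a regression that includes a quadratic age term (the "ATT²" form: area + time + time²); a negative coefficient on the squared term indicates the hump. A toy least-squares sketch on synthetic data (all numbers invented; the paper itself uses mixed-effect models on real occurrence data):

```python
import numpy as np

rng = np.random.default_rng(0)
log_area = rng.uniform(0.0, 4.0, 101)      # 101 hypothetical islands
age = rng.uniform(0.5, 10.0, 101)          # island age, e.g. in Myr
# synthetic richness following the ATT2 form: peaks at intermediate age
richness = 20 + 8 * log_area + 12 * age - 1.1 * age**2 + rng.normal(0, 3, 101)

# fit richness ~ 1 + log(area) + age + age^2 by ordinary least squares
X = np.column_stack([np.ones_like(age), log_area, age, age**2])
coef, *_ = np.linalg.lstsq(X, richness, rcond=None)
# coef[3] (the age^2 term) comes out negative -> hump-shaped curve
```

Deconstructing diversity by family, as the study does, amounts to running this kind of fit per family (with island-level random effects) and asking whether the sign pattern survives.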
The signal-to-noise analysis of the Little-Hopfield model revisited
Bolle, D; Blanco, J Busquets; Verbeiren, T
2004-01-01
Using the generating functional analysis an exact recursion relation is derived for the time evolution of the effective local field of the fully connected Little-Hopfield model. It is shown that, by leaving out the feedback correlations arising from earlier times in this effective dynamics, one precisely finds the recursion relations usually employed in the signal-to-noise approach. The consequences of this approximation as well as the physics behind it are discussed. In particular, it is pointed out why it is hard to notice the effects, especially for model parameters corresponding to retrieval. Numerical simulations confirm these findings. The signal-to-noise analysis is then extended to include all correlations, making it a full theory for dynamics at the level of the generating functional analysis. The results are applied to the frequently employed extremely diluted (a)symmetric architectures and to sequence processing networks
Ponzano-Regge model revisited: I. Gauge fixing, observables and interacting spinning particles
Freidel, Laurent; Louapre, David
2004-01-01
We show how to properly gauge fix all the symmetries of the Ponzano-Regge model for 3D quantum gravity. This amounts to doing explicit finite computations for transition amplitudes. We give the construction of the transition amplitudes in the presence of interacting quantum spinning particles. We introduce a notion of operators whose expectation value gives rise to either gauge fixing, introduction of time, or insertion of particles, according to the choice. We give the link between the spin foam quantization and the Hamiltonian quantization. We finally show the link between the Ponzano-Regge model and the quantization of Chern-Simons theory based on the double quantum group of SU(2)
Bachschmid-Romano, Ludovica; Opper, Manfred
2015-01-01
We study analytically the performance of a recently proposed algorithm for learning the couplings of a random asymmetric kinetic Ising model from finite-length trajectories of the spin dynamics. Our analysis shows the importance of the nontrivial equal-time correlations between spins induced by the dynamics for the speed of learning. These correlations become more important as the spin's stochasticity is decreased. We also analyse the deviation of the estimation error…
Elementary isovector spin and orbital magnetic dipole modes revisited in the shell model
Richter, A.
1988-08-01
A review is given on the status of mainly spin magnetic dipole modes in some sd- and fp-shell nuclei studied with inelastic electron and proton scattering, and by β⁺ decay. Particular emphasis is also placed on a fairly new, mainly orbital magnetic dipole mode investigated by high-resolution (e,e') and (p,p') scattering experiments on a series of fp-shell nuclei. Both modes are discussed in terms of the shell model with various effective interactions. (orig.)
The Educational Model of Private Colleges of Osteopathic Medicine: Revisited for 2003-2013.
Cummings, Mark
2015-12-01
Trends in the development of new private colleges of osteopathic medicine (COMs) described by the author in 2003 have accelerated in the ensuing decade. During 2003 to 2013, 10 new COMs as well as 2 remote teaching sites and 4 new branch campuses at private institutions were accredited, leading to a 98% increase in the number of students enrolled in private COMs. The key features of the private COM educational model during this period were a reliance on student tuition, the establishment of health professions education programs around the medical school, the expansion of class size, the creation of branch campuses and remote teaching sites, an environment that emphasizes teaching over research, and limited involvement in facilities providing clinical services to patients. There is institutional ownership of preclinical instruction, but clinical instruction occurs in affiliated hospitals and medical institutions where students are typically taught by volunteer and/or adjunct faculty. Between 2003 and 2013, this model attracted smaller universities and organizations, which implemented the strategies of established private COMs in initiating new private COMs, branch campuses, and remote teaching sites. The new COMs have introduced changes to the osteopathic profession and private COM model by expanding to new parts of the country and establishing the first for-profit medical school accredited in the United States in modern times. They have also increased pressure on the system of osteopathic graduate medical education, as the number of funded GME positions available to their graduates falls short of the need.
The two-capacitor problem revisited: a mechanical harmonic oscillator model approach
Lee, Keeyung
2009-01-01
The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that exactly half the work done by a constant applied force is dissipated irrespective of the form of dissipation mechanism when the system comes to a new equilibrium after a constant force is abruptly applied. This model is then applied to the energy loss mechanism in the capacitor charging problem or the two-capacitor problem. This approach allows a simple explanation of the energy dissipation mechanism in these problems and shows that the dissipated energy should always be exactly half the supplied energy whether that is caused by the Joule heat or by the radiation. This paper, which provides a simple treatment of the energy dissipation mechanism in the two-capacitor problem, is suitable for all undergraduate levels
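The central energy claim above is easy to verify by direct bookkeeping. A short sketch with illustrative component values: charge conservation forces q0/2 onto each capacitor, and the final stored energy is exactly half the initial, independent of the dissipation mechanism.

```python
# Two-capacitor problem: a capacitor C charged to q0 is connected to an
# identical uncharged capacitor. Values below are illustrative.
C, q0 = 1e-6, 2e-6          # 1 uF, 2 uC

E_initial = q0**2 / (2 * C)           # energy stored before connection
q_final = q0 / 2                      # charge conservation: q0/2 on each
E_final = 2 * (q_final**2 / (2 * C))  # total energy on the two capacitors

dissipated = E_initial - E_final
print(dissipated / E_initial)   # -> 0.5: exactly half the energy is lost
```

The arithmetic mirrors the mechanical analogy in the paper: where the lost half goes (Joule heating or radiation) depends on the circuit, but the fraction does not.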
Thermodynamic modeling of the U–Zr system – A revisit
Xiong, Wei; Xie, Wei; Shen, Chao; Morgan, Dane
2013-01-01
A new thermodynamic description of the U–Zr system is developed using the CALPHAD (CALculation of PHAse Diagrams) method with the aid of ab initio calculations. Thermodynamic properties, such as heat capacity, activities, and enthalpy of mixing, are well predicted using the improved thermodynamic description in this work. The model-predicted enthalpies of formation for the bcc and δ phases are in good agreement with the results from DFT + U ab initio calculations. The calculations in this work show better agreement with experimental data than the previous assessments. Using the integrated method of ab initio and CALPHAD modeling, an unexpected relation between the enthalpy of formation of the δ phase and the energy of Zr with hexagonal structure is revealed, and the model is improved by fitting these energies together. The present work demonstrates that ab initio calculations can help support a successful thermodynamic assessment of actinide systems, for which thermodynamic properties are often difficult to measure.
Dynamical mass generation in the continuum Thirring model
Girardello, L.; Immirzi, G.; Rossi, P.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge
1982-01-01
We study the renormalization of the Thirring model in the neighbourhood of μ = 0, g = -π/2, and find that on the trajectory which tends to this point when the scale goes to infinity, the behaviour of the model reproduces what one obtains by decomposing the N = 2 Gross-Neveu model. The existence of this trajectory is consistent with the dynamical mass generation found by McCoy and Wu in the discrete version of the massless model. (orig.)
The S-wave model for electron-hydrogen scattering revisited
Bartschat, K.; Bray, I.
1996-03-01
The R-matrix with pseudo-states (RMPS) and convergent close-coupling (CCC) methods are applied to the calculation of elastic, excitation, and total as well as single-differential ionization cross sections for the simplified S-wave model of electron-hydrogen scattering. Excellent agreement is obtained for the total cross section results obtained at electron energies between 0 and 100 eV. The two calculations also agree on the single-differential ionization cross section at 54.4 eV for the triplet spin channel, while discrepancies are evident in the singlet channel which shows remarkable structure.
Statistical Texture Model for mass Detection in Mammography
Nicolás Gallego-Ortiz
2013-12-01
In the context of image processing algorithms for mass detection in mammography, texture is a key feature for distinguishing abnormal tissue from normal tissue. Recently, a texture model based on a multivariate Gaussian mixture was proposed, of which the parameters are learned in an unsupervised way from the pixel intensities of images. The model produces images that are probabilistic maps of texture normality, and it was proposed as a visualization aid for diagnosis by clinical experts. In this paper, the usability of the model is studied for automatic mass detection. A segmentation strategy is proposed and evaluated using 79 mammography cases.
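The underlying idea, pixel intensities modelled as an unsupervised Gaussian mixture, can be sketched in one dimension with a hand-rolled EM loop. The actual model is multivariate and trained on mammograms; everything below (data, component count, parameters) is illustrative only.

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture, fit without labels."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each intensity
        p = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# synthetic "normal" vs "suspicious" intensity populations
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(50, 5, 2000), rng.normal(120, 10, 1000)])
w, mu, var = em_gmm_1d(x)
# component means recovered near 50 and 120; per-pixel responsibilities r
# are what a probabilistic "texture normality" map would be built from
```

A segmentation strategy like the one evaluated in the paper would then threshold the resulting per-pixel probability map rather than the raw intensities.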
A review of Higgs mass calculations in supersymmetric models
Draper, P.; Rzehak, H.
2016-01-01
The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass…
Study on the constitutive model for jointed rock mass.
Qiang Xu
A new elasto-plastic constitutive model for jointed rock mass, which can account for the persistence ratio at different visual angles and the anisotropic increase of plastic strain, is proposed. The proposed yield strength criterion is anisotropic: it is related not only to the friction angle and cohesion of the jointed rock mass at the visual angle but also to the intersection angle between the visual angle and the directions of the principal stresses. Some numerical examples are given to analyze and verify the proposed constitutive model. The results show the proposed constitutive model has high precision in calculating displacement, stress and plastic strain and can be applied in engineering analysis.
Neutrino mass in flavor dependent gauged lepton model
Nomura, Takaaki; Okada, Hiroshi
2018-03-01
We study a neutrino model introducing an additional nontrivial gauged lepton symmetry in which the neutrino masses are induced at two-loop level, while the masses of the first- and second-generation charged leptons of the standard model are induced at one-loop level. As a result of the model structure, we can predict one massless active neutrino, and there is a dark matter candidate. We then discuss the neutrino mass matrix, muon anomalous magnetic moment, lepton flavor violations, oblique parameters, and relic density of dark matter, taking into account the experimental constraints.
Urban Morphology Influence on Urban Albedo: A Revisit with the Solene Model
Groleau, Dominique; Mestayer, Patrice G.
2013-05-01
This heuristic study of the urban morphology influence on urban albedo is based on some 3,500 simulations with the Solene model. The studied configurations include square blocks in regular and staggered rows, rectangular blocks with different street widths, cross-shaped blocks, infinite street canyons and several actual districts in Marseilles, Toulouse and Nantes, France. The scanned variables are plan density, facade density, building height, layout orientation, latitude, date and time of the day. The sky-view factors of the ground and canopy surfaces are also considered. This study demonstrates the significance of the facade density, in addition to the built plan density, as the explanatory geometrical factor to characterize the urban morphology, rather than building height. On the basis of these albedo calculations the puzzling results of Kondo et al. (Boundary-Layer Meteorol 100:225-242, 2001) for the influence of building height are explained, and the plan density influence is quantitatively assessed. It is shown that the albedo relationship with plan and facade densities obtained with the regular square plot configuration may be considered as a reference for all other configurations, with the exception of the infinite street canyon that shows systematic differences for the lower plan densities. The curves representing this empirical relationship may be used as a sort of abacus for all other geometries while an approximate simple mathematical model is proposed, as well as relationships between the albedo and sky-view factors.
Revisiting the constant growth angle: Estimation and verification via rigorous thermal modeling
Virozub, Alexander; Rasin, Igal G.; Brandon, Simon
2008-12-01
Methods for estimating growth angle (θgr) values, based on the a posteriori analysis of directionally solidified material (e.g. drops), often involve assumptions of negligible gravitational effects as well as a planar solid/liquid interface during solidification. We relax both of these assumptions when using experimental drop shapes from the literature to estimate the relevant growth angles at the initial stages of solidification. Assuming these values to be constant, we use them as input into a rigorous heat transfer and solidification model of the growth process. This model, which is shown to reproduce the experimental shape of a solidified sessile water drop using the literature value of θgr = 0°, yields excellent agreement with experimental profiles using our estimated values for silicon (θgr = 10°) and germanium (θgr = 14.3°) solidifying on an isotropic crystalline surface. The effect of gravity on the solidified drop shape is found to be significant in the case of germanium, suggesting that gravity should either be included in the analysis or that care should be taken that the relevant Bond number is truly small enough in each measurement. The planar solidification interface assumption is found to be unjustified. Although this issue is important when simulating the inflection point in the profile of the solidified water drop, there are indications that solidified drop shapes (at least in the case of silicon) may be fairly insensitive to the shape of this interface.
Ota, Kazutaka; Kohda, Masanori; Hori, Michio; Sato, Tetsu
2011-10-01
Alternative reproductive tactics are widespread in males and may cause intraspecific differences in testes investment. Parker's sneak-guard model predicts that sneaker males, who mate under sperm competition risk, invest in testes relatively more than bourgeois conspecifics that have lower risk. Given that sneakers are much smaller than bourgeois males, sneakers may increase testes investment to overcome their limited sperm productivity because of their small body sizes. In this study, we examined the mechanism that mediates differential testes investment across tactics in the Lake Tanganyika cichlid fish Lamprologus callipterus. In the Rumonge population of Burundi, bourgeois males are small compared with those in other populations and have a body size close to sneaky dwarf males. Therefore, if differences in relative testis investment depend on sperm competition, the rank order of relative testis investment should be dwarf males > bourgeois males in Rumonge = bourgeois males in the other populations. If differences in relative testis investment depend on body size, the rank order of relative testes investment should be dwarf males > bourgeois males in Rumonge > bourgeois males in the other populations. Comparisons of relative testis investment among the three male groups supported the role of sperm competition, as predicted by the sneak-guard model. Nevertheless, the effects of absolute body size on testes investment should be considered to understand the mechanisms underlying intraspecific variation in testes investment caused by alternative reproductive tactics.
Attention capture by abrupt onsets: re-visiting the priority tag model
Meera Mary Sunny
2013-12-01
Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3), participants started to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.
Attention capture by abrupt onsets: re-visiting the priority tag model.
Sunny, Meera M; von Mühlenen, Adrian
2013-01-01
Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3), participants started to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.
Model-Based Systems Engineering Approach to Managing Mass Margin
Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris
2012-01-01
When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single source of truth. In this paper we describe the modeling patterns used to capture the single source of truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).
Towards dynamic reference information models: Readiness for ICT mass customisation
Verdouw, C.N.; Beulens, A.J.M.; Trienekens, J.H.; Verwaart, D.
2010-01-01
Current dynamic demand-driven networks make great demands on, in particular, the interoperability and agility of information systems. This paper investigates how reference information models can be used to meet these demands by enhancing ICT mass customisation. It was found that reference models for…
Neutrino Mass Models: impact of non-zero reactor angle
King, Stephen F.
2011-01-01
In this talk, neutrino mass models are reviewed and the impact of a non-zero reactor angle and other deviations from tri-bimaximal mixing is discussed. We propose some benchmark models, where the only way to discriminate between them is by high-precision neutrino oscillation experiments.
Test of a chromomagnetic model for hadron mass differences
Lichtenberg, D. B.; Roncaglia, R.
1993-05-01
An oversimplified model consisting of the QCD color-magnetic interaction has been used previously by Silvestre-Brac and others to compare the masses of exotic and normal hadrons. We show that the model can give qualitatively wrong answers when applied to systems of normal hadrons.
Test of a chromomagnetic model for hadron mass differences
Lichtenberg, D.B.; Roncaglia, R.
1993-01-01
An oversimplified model consisting of the QCD color-magnetic interaction has been used previously by Silvestre-Brac and others to compare the masses of exotic and normal hadrons. We show that the model can give qualitatively wrong answers when applied to systems of normal hadrons
Ranger, N.; Millner, A.; Niehoerster, F.
2010-12-01
Traditionally, climate change risk assessments have taken a roughly four-stage linear 'chain' of moving from socioeconomic projections, to climate projections, to primary impacts and then finally onto economic and social impact assessment. Adaptation decisions are then made on the basis of these outputs. The escalation of uncertainty through this chain is well known, resulting in an 'explosion' of uncertainties in the final risk and adaptation assessment. The space of plausible future risk scenarios is growing ever wider with the application of new techniques which aim to explore uncertainty ever more deeply, such as those used in the recent 'probabilistic' UK Climate Projections 2009, and the stochastic integrated assessment models, for example PAGE2002. This explosion of uncertainty can make decision-making problematic, particularly given that the uncertainty information communicated cannot be treated as strictly probabilistic and therefore is not an easy fit with standard decision-making-under-uncertainty approaches. Additional problems can arise from the fact that the uncertainty estimated for different components of the 'chain' is rarely directly comparable or combinable. Here, we explore the challenges and limitations of using current projections for adaptation decision-making. We report the findings of a recent report completed for the UK Adaptation Sub-Committee on approaches to deal with these challenges and make robust adaptation decisions today. To illustrate these approaches, we take a number of illustrative case studies, including a case of adaptation to hurricane risk on the US Gulf Coast. This is a particularly interesting case as it involves urgent adaptation of long-lived infrastructure but requires interpreting highly uncertain climate change science and modelling, i.e. projections of Atlantic basin hurricane activity. An approach we outline is reversing the linear chain of assessments to put the economics and decision…
Pola, Marco; Cacace, Mauro; Fabbri, Paolo; Piccinini, Leonardo; Zampieri, Dario; Dalla Libera, Nico
2017-04-01
As one of the largest and most extensively utilized geothermal systems in northern Italy, the Euganean Geothermal System (EGS, Veneto region, NE Italy) has long been the subject of ongoing study. Hydrothermal waters feeding the system are of meteoric origin and infiltrate in the Veneto Prealps, to the north of the main geothermal area. The waters circulate for approximately 100 km in the subsurface of the central Veneto, outflowing with temperatures from 65°C to 86°C to the southwest near the cities of Abano Terme and Montegrotto Terme. The naturally emerging waters are mainly used for balneotherapeutic purposes, forming the famous Euganean spa district. This preferential outflow is thought to have a relevant structural component producing high secondary permeability localized within an area of limited extent (approx. 25 km2). This peculiar structure is associated with a local network of fractures resulting from transtensional tectonics of the regional Schio-Vicenza fault system (SVFS) bounding the Euganean Geothermal Field (EGF). In the present study, a revised conceptual hydrothermal model for the EGS, based on the regional hydrogeology and structural geology, is proposed. In particular, this work aims to quantify (1) the role of the regional SVFS and (2) the impact of the high-density local fracture mesh beneath the EGF on the regional-to-local groundwater circulation at depth and its thermal configuration. 3D coupled flow and heat transport numerical simulations based on the newly developed conceptual model are carried out to quantify the results of these interactions. Consistent with the observations, the obtained results indicate that temperatures in the EGF reservoir are higher than in the surrounding areas, despite a uniform basal regional crustal heat inflow. In addition, they point to the presence of a structural causative process for the localized outflow, in which deep-seated groundwater is preferentially
Validating neural-network refinements of nuclear mass models
Utama, R.; Piekarewicz, J.
2018-01-01
Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms ≃ 400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
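The validation metric quoted above, the root-mean-square deviation between model and experiment, can be sketched as follows. The nuclide values below are invented placeholders, not AME2016 data, and `rms_deviation` is an illustrative helper, not code from the study.

```python
import numpy as np

def rms_deviation(predicted_kev, experimental_kev):
    """Root-mean-square deviation (keV) between model and experimental mass excesses."""
    diff = np.asarray(predicted_kev) - np.asarray(experimental_kev)
    return float(np.sqrt(np.mean(diff**2)))

# Toy mass excesses in keV (placeholders, not real AME2016 entries):
pred = [-47300.0, -52100.0, -61500.0]
expt = [-47650.0, -51800.0, -61900.0]
print(rms_deviation(pred, expt))
```

A refined model is judged better when this single number shrinks over the set of newly measured masses.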
A scan for models with realistic fermion mass patterns
Bijnens, J.; Wetterich, C.
1986-03-01
We consider models which have no small Yukawa couplings unrelated to symmetry. This situation is generic in higher-dimensional unification, where Yukawa couplings are predicted to have strength similar to the gauge couplings. Generations then have to be differentiated by symmetry properties, and the structure of the fermion mass matrices is given in terms of quantum numbers alone. We scan possible symmetries leading to realistic mass matrices. (orig.)
NianSong Zhang
2015-01-01
A study of the dynamic response of a projectile penetrating concrete is conducted. The evolution of projectile mass loss and the effect of mass loss on penetration resistance are investigated using theoretical methods. A projectile penetration model accounting for projectile mass loss is established in three stages, namely, a cratering phase, a mass-loss penetration phase, and a remaining rigid-projectile penetration phase.
Hartono, A. D.; Hakiki, Farizal; Syihab, Z.; Ambia, F.; Yasutra, A.; Sutopo, S.; Efendi, M.; Sitompul, V.; Primasari, I.; Apriandi, R.
2017-01-01
A preliminary EOR analysis performed at an early stage of assessment is pivotal for elucidating EOR feasibility. This study proposes an in-depth analysis toolkit for preliminary EOR evaluation. The toolkit incorporates EOR screening, predictive, economic, risk analysis, and optimisation modules. The screening module introduces algorithms that take both statistical and engineering notions into consideration. The United States Department of Energy (U.S. DOE) predictive models were implemented in the predictive module. The economic module is available to assess project attractiveness, while Monte Carlo Simulation is applied to quantify the risk and uncertainty of the evaluated project. Optimisation scenarios of EOR practice can be evaluated using the optimisation module, in which the stochastic methods of Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Evolutionary Strategy (ES) were applied. The modules were combined into an integrated package for preliminary EOR assessment. Finally, we utilised the toolkit to evaluate several Indonesian oil fields for EOR evaluation (past projects) and feasibility (future projects). The attempt was able to update previous assessments of EOR attractiveness and open new opportunities for EOR implementation in Indonesia.
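The Monte Carlo risk module described above can be sketched in a few lines: sample uncertain inputs, compute a project metric, and summarise the resulting distribution. Everything below (the distributions, the toy NPV economics, the function names) is an invented placeholder, not the toolkit's actual model.

```python
import random
import statistics

def project_npv(oil_price, recovery_mmbbl, capex=50.0, discount=0.10, years=5):
    """Toy NPV: spread revenue evenly over the project life, discount, subtract capex."""
    revenue_per_year = oil_price * recovery_mmbbl / years
    pv = sum(revenue_per_year / (1 + discount) ** t for t in range(1, years + 1))
    return pv - capex

def monte_carlo_npv(n=10_000, seed=42):
    """Sample uncertain price and recovery, return mean and spread of the NPV."""
    rng = random.Random(seed)
    samples = [
        project_npv(rng.uniform(40.0, 80.0), rng.uniform(1.0, 3.0))
        for _ in range(n)
    ]
    return statistics.mean(samples), statistics.pstdev(samples)

mean_npv, std_npv = monte_carlo_npv()
print(mean_npv, std_npv)
```

The spread of the sampled NPVs is what turns a single "attractive/unattractive" verdict into a quantified risk statement.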
Structural plasticity in the dentate gyrus- revisiting a classic injury model.
Julia V. Perederiy
2013-02-01
The adult brain is in a continuous state of remodeling. This is nowhere more true than in the dentate gyrus, where competing forces such as neurodegeneration and neurogenesis dynamically modify neuronal connectivity, and can occur simultaneously. This plasticity of the adult nervous system is particularly important in the context of traumatic brain injury or deafferentation. In this review, we summarize a classic injury model, lesioning of the perforant path, which removes the main extrahippocampal input to the dentate gyrus. Early studies revealed that in response to deafferentation, axons of remaining fiber systems and dendrites of mature granule cells undergo lamina-specific changes, providing one of the first examples of structural plasticity in the adult brain. Given the increasing role of adult-generated new neurons in the function of the dentate gyrus, we also compare the response of newborn and mature granule cells following lesioning of the perforant path. These studies provide insights not only to plasticity in the dentate gyrus, but also to the response of neural circuits to brain injury.
Revisiting source identification, weathering models, and phase discrimination for Exxon Valdez oil
Driskell, W.B.; Payne, J.R.; Shigenaka, G.
2005-01-01
A large chemistry data set for polycyclic aromatic hydrocarbon (PAH) and saturated hydrocarbon (SHC) contamination in sediment, water and tissue samples has emerged in the aftermath of the 1989 Exxon Valdez oil spill in Prince William Sound, Alaska. When the oil was fresh, source identification was a primary objective and fairly reliable. However, source identification became problematic as the oil weathered and its signatures changed. In response to concerns regarding when the impacted area will be clean again, this study focused on developing appropriate tools to confirm hydrocarbon source identifications and assess weathering in various matrices. Previous efforts that focused only on the whole or particulate-phase oil are not adequate to track the dissolved-phase signal with low total PAH values. For that reason, a particulate signature index (PSI) and dissolved signature index (DSI) screening tool was developed in this study to discriminate between these 2 phases. The screening tool was used to measure the dissolved or water-soluble fraction of crude oil, which occurs at much lower levels than the particulate phase but is more widely circulated and equally important. The discrimination methods can also identify normally discarded, low total PAH samples, which can increase the amount of usable data needed to model other effects of oil spills. 37 refs., 3 tabs., 10 figs
Chatterjee, Ankita; Kundu, Sudip
2015-01-01
Chlorophyll is one of the most important pigments present in green plants, and rice is one of the major food crops consumed worldwide. We curated the existing genome-scale metabolic model (GSM) of rice leaf by incorporating a new compartment, reactions, and transporters. We used this modified GSM to elucidate how chlorophyll is synthesized in a leaf through a series of biochemical reactions spanning different organelles, using inorganic macronutrients and light energy. We predicted the essential reactions and the associated genes of chlorophyll synthesis and validated them against existing experimental evidence. Further, ammonia is known to be the preferred source of nitrogen in rice paddy fields. The ammonia entering the plant is assimilated in the root and leaf. The focus of the present work is centered on rice leaf metabolism. We studied the relative importance of ammonia transporters through the chloroplast and the cytosol and their interlinks with other intracellular transporters. Ammonia assimilation in the leaves takes place via the enzyme glutamine synthetase (GS), which is present in the cytosol (GS1) and chloroplast (GS2). Our results provide a possible explanation of why GS2 mutants show normal growth under minimal photorespiration and appear chlorotic when exposed to air. PMID:26443104
Cerebellar supervised learning revisited: biophysical modeling and degrees-of-freedom control.
Kawato, Mitsuo; Kuroda, Shinya; Schweighofer, Nicolas
2011-10-01
The biophysical models of spike-timing-dependent plasticity have explored dynamics with a molecular basis for such computational concepts as coincidence detection, synaptic eligibility trace, and Hebbian learning. Overall, they support different learning algorithms in different brain areas, especially supervised learning in the cerebellum. Because a single spine is physically very small, chemical reactions at it are essentially stochastic, and thus a sensitivity-longevity dilemma exists in synaptic memory. Here, a cascade of excitable and bistable dynamics is proposed to overcome this difficulty. Learning algorithms in different brain regions all confront difficult generalization problems. To resolve this issue, control of the degrees of freedom can be realized by changing the synchronicity of neural firing. In particular, for cerebellar supervised learning, the triangular closed-loop circuit consisting of Purkinje cells, the inferior olive nucleus, and the cerebellar nucleus is proposed as a circuit to optimally control synchronous firing and degrees of freedom in learning. Copyright © 2011 Elsevier Ltd. All rights reserved.
Dependence of X-Ray Burst Models on Nuclear Masses
Schatz, H.; Ong, W.-J. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States)
2017-08-01
X-ray burst model predictions of light curves and the final composition of the nuclear ashes are affected by uncertain nuclear masses. However, not all of these masses are determined experimentally with sufficient accuracy. Here we identify the remaining nuclear mass uncertainties in X-ray burst models using a one-zone model that takes into account the changes in temperature and density evolution caused by changes in the nuclear physics. Two types of bursts are investigated—a typical mixed H/He burst with a limited rapid proton capture process (rp-process) and an extreme mixed H/He burst with an extended rp-process. When allowing for a 3 σ variation, only three remaining nuclear mass uncertainties affect the light-curve predictions of a typical H/He burst ({sup 27}P, {sup 61}Ga, and {sup 65}As), and only three additional masses affect the composition strongly ({sup 80}Zr, {sup 81}Zr, and {sup 82}Nb). A larger number of mass uncertainties remain to be addressed for the extreme H/He burst, with the most important being {sup 58}Zn, {sup 61}Ga, {sup 62}Ge, {sup 65}As, {sup 66}Se, {sup 78}Y, {sup 79}Y, {sup 79}Zr, {sup 80}Zr, {sup 81}Zr, {sup 82}Zr, {sup 82}Nb, {sup 83}Nb, {sup 86}Tc, {sup 91}Rh, {sup 95}Ag, {sup 98}Cd, {sup 99}In, {sup 100}In, and {sup 101}In. The smallest mass uncertainty that still impacts composition significantly when varied by 3 σ is {sup 85}Mo with 16 keV uncertainty. For one of the identified masses, {sup 27}P, we use the isobaric mass multiplet equation to improve the mass uncertainty, obtaining an atomic mass excess of −716(7) keV. The results provide a roadmap for future experiments at advanced rare isotope beam facilities, where all the identified nuclides are expected to be within reach for precision mass measurements.
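The isobaric mass multiplet equation (IMME) used above to improve the {sup 27}P mass relates the mass excesses of an isospin multiplet as ME(A, T, Tz) = a + b·Tz + c·Tz². A minimal sketch: fit the coefficients through measured members and predict an unmeasured one. The multiplet values below are invented placeholders, not the evaluated data behind the −716(7) keV result.

```python
import numpy as np

def imme_coefficients(tz, mass_excess_kev):
    """Fit ME(Tz) = a + b*Tz + c*Tz**2 through the given multiplet members."""
    c, b, a = np.polyfit(tz, mass_excess_kev, 2)  # polyfit returns highest power first
    return a, b, c

def imme_predict(a, b, c, tz):
    """Evaluate the IMME quadratic at a given isospin projection Tz."""
    return a + b * tz + c * tz**2

# Toy T = 3/2 quartet: three "measured" members predict the fourth (placeholder keV values).
tz_known = [-1.5, -0.5, 0.5]
me_known = [-700.0, -5600.0, -9000.0]
a, b, c = imme_coefficients(tz_known, me_known)
print(imme_predict(a, b, c, 1.5))
```

With exactly three members the quadratic is determined uniquely; with four or more, the fit residuals test the IMME itself.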
Piecing together the maternal death puzzle through narratives: the three delays model revisited.
Viva Combs Thorsen
BACKGROUND: In Malawi, maternal mortality continues to be a major public health challenge. Going beyond the numbers to form a more complete view of why women die is critical to improving access to and quality of emergency obstetric care. The objective of the current study was to identify the socio-cultural and facility-based factors that contributed to maternal deaths in the district of Lilongwe, Malawi. METHODS: Retrospectively, 32 maternal death cases that occurred between January 1, 2011 and June 30, 2011 were reviewed independently by two gynecologists/obstetricians. Interviews were conducted with healthcare staff, family members, neighbors, and traditional birth attendants. Guided by the grounded theory approach, interview transcripts were analyzed manually and continuously. Emerging, recurring themes were identified and excerpts from the transcripts were categorized according to the Three Delays Model (3Ds). RESULTS: Sixteen deaths were due to direct obstetric complications, sepsis and hemorrhage being the most common. Sixteen deaths were due to indirect causes, the main cause being anemia, followed by HIV and heart disease. Failure to recognize signs, symptoms, and the severity of the situation; use of traditional birth attendant services; low female literacy; delayed access to transport; the hardship of long distances and physical terrain; delays in receiving prompt, quality emergency obstetric care; and delayed care while at the hospital due to patient refusal or concealment were observed. According to the 3Ds, the most common delay was in receiving treatment upon reaching the facility, due to referral delays, missed diagnoses, lack of blood, lack of drugs, inadequate care, and severe mismanagement.
Quigg, Chris
2007-01-01
In the classical physics we inherited from Isaac Newton, mass does not arise; it simply is. The mass of a classical object is the sum of the masses of its parts. Albert Einstein showed that the mass of a body is a measure of its energy content, inviting us to consider the origins of mass. The protons we accelerate at Fermilab are prime examples of Einsteinian matter: nearly all of their mass arises from stored energy. Missing mass led to the discovery of the noble gases, and a new form of missing mass leads us to the notion of dark matter. Starting with a brief guided tour of the meanings of mass, the colloquium will explore the multiple origins of mass. We will see how far we have come toward understanding mass, and survey the issues that guide our research today.
Impact of mass generation for spin-1 mediator simplified models
Bell, Nicole F.; Cai, Yi; Leane, Rebecca K.
2017-01-01
In the simplified dark matter models commonly studied, the mass generation mechanism for the dark fields is not typically specified. We demonstrate that the dark matter interaction types, and hence the annihilation processes relevant for relic density and indirect detection, are strongly dictated by the mass generation mechanism chosen for the dark sector particles, and the requirement of gauge invariance. We focus on the class of models in which fermionic dark matter couples to a spin-1 vector or axial-vector mediator. However, in order to generate dark sector mass terms, it is necessary in most cases to introduce a dark Higgs field and thus a spin-0 scalar mediator will also be present. In the case that all the dark sector fields gain masses via coupling to a single dark sector Higgs field, it is mandatory that the axial-vector coupling of the spin-1 mediator to the dark matter is non-zero; the vector coupling may also be present depending on the charge assignments. For all other mass generation options, only pure vector couplings between the spin-1 mediator and the dark matter are allowed. If these coupling restrictions are not obeyed, unphysical results may be obtained such as a violation of unitarity at high energies. These two-mediator scenarios lead to important phenomenology that does not arise in single mediator models. We survey two-mediator dark matter models which contain both vector and scalar mediators, and explore their relic density and indirect detection phenomenology.
The analogic model ''RIC'' of thermal behaviour of mass concrete
Gonzalez Redondo, M.; Gonzalez de Posada, F.; Plana Claver, J.
1997-01-01
In order to study the thermal field and calorific flows in heat sources (i.e. mass concrete during setting), we have conceived, built, and experimented with an analogical electric model. This model, named RIC, consists of resistors (R) and capacitors (C), into whose nodes an electric current (I) is injected. Several analogical constants were used for the mathematical approximation. Thus, this paper describes the analogical RIC model, simulating heat generation, boundary and initial conditions, and concreting. (Author) 4 refs
Hagedorn, Claudia; King, Stephen F.; Luhn, Christoph
2012-01-01
Following the recent results from Daya Bay and RENO, which measure the lepton mixing angle θ13^l ≈ 0.15, we revisit a supersymmetric (SUSY) S4 × SU(5) model, which predicts tri-bimaximal (TB) mixing in the neutrino sector, with θ13^l being too small in its original version. We show that introducing one additional S4 singlet flavon into the model gives rise to a sizable θ13^l via an operator which leads to the breaking of one of the two Z2 symmetries preserved in the neutrino sector at leading order (LO). The results of the original model for fermion masses, quark mixing and the solar mixing angle are maintained to good precision. The atmospheric and solar mixing angle deviations from TB mixing are subject to simple sum rule bounds.
Clifford Algebra Implying Three Fermion Generations Revisited
Krolikowski, W.
2002-01-01
The author's idea of the algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ''centre-of-mass'' and (N - 1) × four ''relative'' Dirac-type matrices. These define one ''centre-of-mass'' and N - 1 ''relative'' Dirac bispinor indices. Secundo, the ''centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the N - 1 ''relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to the ''relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results implies unavoidably the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of the Dirac 3×3 mass matrices for the four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is m_τ = 1776.80 MeV, when the input of the experimental m_e and m_μ is used. (author)
Clifford Algebra Implying Three Fermion Generations Revisited
Krolikowski, Wojciech
2002-09-01
The author's idea of algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ``centre-of-mass'' and (N - 1) × four ``relative'' Dirac-type matrices. These define one ``centre-of-mass'' and (N - 1) ``relative'' Dirac bispinor indices. Secundo, the ``centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the (N - 1) ``relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to ``relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results implies unavoidably the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of the Dirac 3×3 mass matrices for the four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is mτ = 1776.80 MeV, when the input of the experimental me and mμ is used.
Unified model of nuclear mass and level density formulas
Nakamura, Hisashi
2001-01-01
The objective of the present work is to obtain a unified description of nuclear shell, pairing, and deformation effects for both ground-state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)
Oxidative phosphorylation revisited
Nath, Sunil; Villadsen, John
2015-01-01
The fundamentals of oxidative phosphorylation and photophosphorylation are revisited. New experimental data on the involvement of succinate and malate anions respectively in oxidative phosphorylation and photophosphorylation are presented. These new data offer a novel molecular mechanistic...
Development and validation of a mass casualty conceptual model.
Culley, Joan M; Effken, Judith A
2010-03-01
To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.
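The consensus criteria described above (interquartile range of no more than 1 scale point, a shift in the response distribution of less than 15% between rounds, and at least 70% agreement) can be sketched as a simple check on 7-point Likert ratings. Function names, the agreement threshold of 6, and the sample ratings are all illustrative assumptions, not the study instrument.

```python
import statistics

def interquartile_range(ratings):
    """Q3 - Q1 of a set of Likert ratings."""
    q = statistics.quantiles(sorted(ratings), n=4)
    return q[2] - q[0]

def consensus_reached(ratings, prev_round_ratings, agree_threshold=6):
    """Apply the three criteria: IQR <= 1, <15% distribution shift, >=70% agreement."""
    iqr_ok = interquartile_range(ratings) <= 1
    share = lambda r: sum(x >= agree_threshold for x in r) / len(r)
    stable = abs(share(ratings) - share(prev_round_ratings)) < 0.15
    agreement = share(ratings) >= 0.70
    return iqr_ok and stable and agreement

round1 = [6, 7, 6, 5, 7, 6, 6, 7, 6, 6]   # hypothetical panel of 10 experts
round2 = [6, 7, 6, 6, 7, 6, 6, 7, 6, 6]
print(consensus_reached(round2, round1))
```

Rounds continue until every retained construct, relationship, and indicator passes a check of this kind.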
The B - L scotogenic models for Dirac neutrino masses
Wang, Weijian [North China Electric Power University, Department of Physics, Baoding (China); Wang, Ruihong [Hebei Agricultural University, College of Information Science and Technology, Baoding (China); Han, Zhi-Long [University of Jinan, School of Physics and Technology, Jinan, Shandong (China); Han, Jin-Zhong [Zhoukou Normal University, School of Physics and Telecommunications Engineering, Zhoukou, Henan (China)
2017-12-15
We construct the one-loop and two-loop scotogenic models for Dirac neutrino mass generation in the context of U(1){sub B-L} extensions of the standard model. It is indicated that the total number of intermediate fermion singlets is uniquely fixed by the anomaly-free condition and the new particles may have exotic B - L charges so that the direct SM Yukawa mass term anti ν{sub L}ν{sub R}φ{sup 0} and the Majorana mass term (m{sub N}/2)ν{sub R}{sup C}ν{sub R} are naturally forbidden. After the spontaneous breaking of the U(1){sub B-L} symmetry, the discrete Z{sub 2} or Z{sub 3} symmetry appears as the residual symmetry and gives rise to the stability of the intermediate fields as DM candidates. Phenomenological aspects of lepton flavor violation, DM, leptogenesis and LHC signatures are discussed. (orig.)
The B-L scotogenic models for Dirac neutrino masses
Wang, Weijian; Wang, Ruihong; Han, Zhi-Long; Han, Jin-Zhong
2017-12-01
We construct the one-loop and two-loop scotogenic models for Dirac neutrino mass generation in the context of U(1)_{B-L} extensions of the standard model. It is indicated that the total number of intermediate fermion singlets is uniquely fixed by the anomaly-free condition and the new particles may have exotic B-L charges, so that the direct SM Yukawa mass term ν̄_L ν_R φ̄^0 and the Majorana mass term (m_N/2) ν̄_R^C ν_R are naturally forbidden. After the spontaneous breaking of the U(1)_{B-L} symmetry, the discrete Z2 or Z3 symmetry appears as the residual symmetry and gives rise to the stability of the intermediate fields as DM candidates. Phenomenological aspects of lepton flavor violation, DM, leptogenesis, and LHC signatures are discussed.
Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data
Ditmar, P.; Hashemi Farahani, H.; Klees, R.
2011-12-01
Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass redistribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites occasionally enter a phase in which their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate the limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of the covariance matrices of noise and signal. In practice, however, the compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, those associated with the atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed. In addition, both the signal and noise covariance matrices depend on the type of mass transport processes under
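The Wiener filtering step described above can be sketched as follows: given a signal covariance S and a noise covariance N, the filtered estimate of an unconstrained model x is S(S + N)⁻¹x, so coefficients with poor signal-to-noise are damped most. The matrices here are small invented placeholders; in practice both are built from level-1B noise models and background-model error budgets.

```python
import numpy as np

def wiener_filter(x, signal_cov, noise_cov):
    """Apply the Wiener gain S (S + N)^{-1} to an unconstrained model vector x."""
    gain = signal_cov @ np.linalg.inv(signal_cov + noise_cov)
    return gain @ x

S = np.diag([4.0, 1.0, 0.25])   # strong, medium, weak signal variances (toy values)
N = np.eye(3)                   # unit noise variance per coefficient
x = np.array([1.0, 1.0, 1.0])   # raw (noisy) model coefficients
print(wiener_filter(x, S, N))   # the weak-signal coefficient is damped the most
```

With diagonal covariances the filter reduces to per-coefficient shrinkage factors S_ii/(S_ii + N_ii), which makes the anisotropic damping of GRACE solutions easy to see.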
Revisiting the formal foundation of Probabilistic Databases
Wanders, B.; van Keulen, Maurice
2015-01-01
One of the core problems in soft computing is dealing with uncertainty in data. In this paper, we revisit the formal foundation of a class of probabilistic databases with the purpose to (1) obtain data model independence, (2) separate metadata on uncertainty and probabilities from the raw data, (3)
Mass Transfer Model for a Breached Waste Package
Hsu, C.; McClure, J.
2004-01-01
The degradation of waste packages used for the disposal of spent nuclear fuel in the repository can result in configurations that may increase the probability of criticality. A mass transfer model is developed for a breached waste package to account for the entrainment of insoluble particles. In combination with radionuclide decay, soluble advection, and colloidal transport, a complete mass balance of nuclides in the waste package becomes available. The entrainment equations are derived from dimensionless parameters such as the drag coefficient and Reynolds number, based on the assumption that insoluble particles are subject to buoyant, gravitational, and drag forces only. Particle size distributions are used to calculate the entrainment concentration, along with a geochemistry model abstraction to calculate the soluble concentration and a colloid model abstraction to calculate the colloid concentration and radionuclide sorption. Results are compared with the base-case geochemistry model, which considers only soluble advection loss.
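The force balance described above (gravity down, buoyancy up, drag opposing motion) can be sketched for a small sphere: at terminal velocity the forces cancel, and in the Stokes regime (Re << 1, Cd = 24/Re) this gives v = g·d²·(ρ_p − ρ_f)/(18μ). The property values below are illustrative placeholders, not parameters from the model abstraction.

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in the Stokes regime."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

def reynolds_number(v, d, rho_f, mu):
    """Particle Reynolds number; the Stokes formula is valid only for Re << 1."""
    return rho_f * v * d / mu

d = 10e-6   # particle diameter, m (toy value)
v = stokes_settling_velocity(d, rho_p=4000.0, rho_f=1000.0, mu=1.0e-3)
print(v, reynolds_number(v, d, 1000.0, 1.0e-3))
```

Checking Re after computing v is the standard self-consistency step: if Re is not small, a drag-coefficient correlation for higher Reynolds numbers must replace Cd = 24/Re.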
Old star clusters: Bench tests of low mass stellar models
Salaris M.
2013-03-01
Old star clusters in the Milky Way and external galaxies have been (and still are) traditionally used to constrain the age of the universe and the timescales of galaxy formation. A parallel avenue of old star cluster research considers these objects as bench tests of low-mass stellar models. This short review highlights some recent tests of stellar evolution models that make use of photometric and spectroscopic observations of resolved old star clusters. In some cases these tests have pointed to additional physical processes efficient in low-mass stars that are not routinely included in model computations. Moreover, recent results from the Kepler mission on the old open cluster NGC 6791 are adding new tight constraints to the models.
Modelling toluene oxidation : Incorporation of mass transfer phenomena
Hoorn, J.A.A.; van Soolingen, J.; Versteeg, G. F.
The kinetics of the oxidation of toluene have been studied in close interaction with the gas-liquid mass transfer occurring in the reactor. Kinetic parameters for a simple model have been estimated on basis of experimental observations performed under industrial conditions. The conclusions for the
Hadronic mass-relations from topological expansion and string model
Kaidalov, A.B.
1980-01-01
Hadronic mass-relations from the topological expansion and the string model are derived. For this purpose, the space-time picture of hadron interactions at high energies corresponding to planar diagrams of the topological expansion is considered. Simple relations between intercepts and slopes of Regge trajectories, based on the topological expansion and the q anti-q string picture of hadrons, are obtained
Renormalization of seesaw neutrino masses in the standard model ...
the neutrino-mass operator in the standard model with two Higgs doublets, and also the QCD-QED ... data of atmospheric muon deficits, thereby suggesting a large mixing angle with ... One method consists of running the gauge.
Quark potential model of baryon spin-orbit mass splittings
Wang Fan; Wong Chunwa
1987-01-01
We show that it is possible to make the P-wave spin-orbit mass splittings in Λ baryons consistent with those of nonstrange baryons in a naive quark model, but only by introducing additional terms in the quark-quark effective interaction. These terms might be related to contributions due to pomeron exchange and sea excitations. The implications of our model in meson spectroscopy and nuclear forces are discussed. (orig.)
Finite element model for heat conduction in jointed rock masses
Gartling, D.K.; Thomas, R.K.
1981-01-01
A computational procedure for simulating heat conduction in a fractured rock mass is proposed and illustrated in the present paper. The method makes use of a simple local model for conduction in the vicinity of a single open fracture. The distributions of fractures and fracture properties within the finite element model are based on a statistical representation of geologic field data. Fracture behavior is included in the finite element computation by locating local, discrete fractures at the element integration points
Modeling of nanofabricated paddle bridges for resonant mass sensing
Lobontiu, N.; Ilic, B.; Garcia, E.; Reissman, T.; Craighead, H. G.
2006-01-01
The modeling of nanopaddle bridges is studied in this article by proposing a lumped-parameter mathematical model which enables structural characterization in the resonant domain. The distributed compliance and inertia of all three segments composing a paddle bridge are taken into consideration in order to determine the equivalent lumped-parameter stiffness and inertia fractions, and further on the bending and torsion resonant frequencies. The approximate model produces results which are confirmed by finite element analysis and experimental measurements. The model is subsequently utilized to quantify the amount of mass which attaches to the bridge by predicting the modified resonant frequencies in either bending or torsion
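The lumped-parameter idea can be illustrated with a minimal single-degree-of-freedom sketch, f = (1/2π)√(k/m), with the attached mass inferred from the measured frequency shift; the stiffness and mass values below are hypothetical, not the paper's bridge parameters:

```python
import math

def resonant_freq(k, m):
    """Lumped-parameter natural frequency f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def attached_mass(k, f0, f1):
    """Mass added to the resonator, inferred from the downward shift
    f0 -> f1 using the lumped model m_eff = k / (2*pi*f)^2."""
    return k / (2.0 * math.pi * f1) ** 2 - k / (2.0 * math.pi * f0) ** 2

k = 0.5                              # N/m, assumed equivalent stiffness
m = 1.0e-15                          # kg, assumed equivalent mass
f0 = resonant_freq(k, m)             # bare resonance
f1 = resonant_freq(k, m + 1.0e-18)   # after 1 attogram attaches
dm = attached_mass(k, f0, f1)        # recovers the added mass
```

The same bookkeeping applies per mode (bending or torsion), each with its own equivalent stiffness and inertia fraction.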
Mass and Heat Transfer Analysis of Membrane Humidifier with a Simple Lumped Mass Model
Lee, Young Duk; Bae, Ho June; Ahn, Kook Young; Yu, Sang Seok; Hwang, Joon Young
2009-01-01
The performance of a proton exchange membrane fuel cell (PEMFC) is strongly affected by the humidification condition, which is an intrinsic characteristic of the PEMFC. Typically, the humidification of the fuel cell is carried out with an internal or external humidifier. A membrane humidifier is applied to the external humidification of residential power generation fuel cells due to its convenience and high performance. In this study, a simple static model is constructed to understand the physical phenomena of the membrane humidifier in terms of geometric and operating parameters. The model utilizes the concept of a shell and tube heat exchanger, but it is also able to estimate the mass transport through the membrane. The model is implemented in FORTRAN under the Matlab/Simulink environment to keep consistency with other component models we have already developed. Results show that the humidity of the wet gas and the membrane thickness are critical parameters for improving the performance of the humidifier
Maximum Mass of Hybrid Stars in the Quark Bag Model
Alaverdyan, G. B.; Vartanyan, Yu. L.
2017-12-01
The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.
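For orientation, the simplest version of the MIT bag equation of state mentioned above (massless quarks, αs = 0, i.e. without the one-gluon-exchange corrections the paper includes) reduces to p = (ε − 4B)/3; a minimal sketch with an assumed bag constant:

```python
def bag_model_pressure(energy_density, B):
    """MIT bag model EoS for massless, non-interacting quarks:
    p = (epsilon - 4B) / 3, both in MeV/fm^3. The alpha_s
    (one-gluon-exchange) corrections used in the paper are omitted."""
    return (energy_density - 4.0 * B) / 3.0

B = 60.0  # MeV/fm^3, assumed bag constant

# Quark pressure vanishes at epsilon = 4B (the bag constant sets
# where deconfined matter becomes energetically possible).
assert bag_model_pressure(4.0 * B, B) == 0.0

p = bag_model_pressure(500.0, B)  # positive pressure at higher density
```

A larger B shifts the zero-pressure point to higher density, which is why the maximum hybrid-star mass in the abstract decreases as B grows.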
Improved Nuclear Reactor and Shield Mass Model for Space Applications
Robb, Kevin
2004-01-01
New technologies are being developed to explore the distant reaches of the solar system. Beyond Mars, solar energy is inadequate to power advanced scientific instruments. One technology that can meet the energy requirements is the space nuclear reactor. The nuclear reactor is used as a heat source for which a heat-to-electricity conversion system is needed. Examples of such conversion systems are the Brayton, Rankine, and Stirling cycles. Since launch cost is proportional to the amount of mass to lift, mass is always a concern in designing spacecraft. Estimations of system masses are an important part of determining the feasibility of a design. I worked under Michael Barrett in the Thermal Energy Conversion Branch of the Power & Electric Propulsion Division. An in-house Closed Cycle Engine Program (CCEP) is used for the design and performance analysis of closed-Brayton-cycle energy conversion systems for space applications. This program also calculates the system mass including the heat source. CCEP uses the subroutine RSMASS, which has been updated to RSMASS-D, to estimate the mass of the reactor. RSMASS was developed in 1986 at Sandia National Laboratories to quickly estimate the mass of multi-megawatt nuclear reactors for space applications. In response to an emphasis for lower power reactors, RSMASS-D was developed in 1997 and is based on the SP-100 liquid metal cooled reactor. The subroutine calculates the mass of reactor components such as the safety systems, instrumentation and control, radiation shield, structure, reflector, and core. The major improvements in RSMASS-D are that it uses higher fidelity calculations, is easier to use, and automatically optimizes the system's mass. RSMASS-D is accurate within 15% of actual data while RSMASS is only accurate within 50%. My goal this summer was to learn the FORTRAN 77 programming language and update the CCEP program with the RSMASS-D model.
Hansen, Morten Balle; Lindholst, Andrej Christian
2016-01-01
Purpose: The purpose of this introduction article to the IJPSM special issue on marketization is to clarify the conceptual foundations of marketization as a phenomenon within the public sector and to gauge current marketization trends on the basis of the seven articles in the special issue. Design/methodology/approach: Conceptual clarification and cross-cutting review of seven articles analysing marketization in six countries in three policy areas at the level of local government. Findings: Four ideal-type models are deduced: Quasi-markets, involving both provider competition and free choice for users; Classic contracting out; Benchmarking and yardstick competition; and Public-Private collaboration. On the basis of the review of the seven articles, it is found that all elements in all marketization models are firmly embedded but also under dynamic change within public service delivery systems.
High Mass Standard Model Higgs searches at the Tevatron
Petridis Konstantinos A.
2012-06-01
We present the results of searches for the Standard Model Higgs boson decaying predominantly to W+W− pairs, at a center-of-mass energy of √s = 1.96 TeV, using up to 8.2 fb−1 of data collected with the CDF and D0 detectors at the Fermilab Tevatron collider. The analysis techniques and the various channels considered are discussed. These searches result in exclusions across the Higgs mass range of 156.5 < mH < 173.7 GeV for CDF and 161 < mH < 170 GeV for D0.
The exact mass-gaps of the principal chiral models
Hollowood, Timothy J
1994-01-01
An exact expression for the mass-gap, the ratio of the physical particle mass to the Λ-parameter, is found for the principal chiral sigma models associated to all the classical Lie algebras. The calculation is based on a comparison of the free-energy in the presence of a source coupling to a conserved charge of the theory computed in two ways: via the thermodynamic Bethe Ansatz from the exact scattering matrix and directly in perturbation theory. The calculation provides a non-trivial test of the form of the exact scattering matrix.
The quark mass spectrum in the Universal Seesaw model
Ranfone, S.
1993-03-01
In the context of a Universal Seesaw model implemented in a left-right symmetric theory, we show that, by allowing the two left-handed doublet Higgs fields to develop different vacuum-expectation-values (VEV's), it is possible to account for the observed structure of the quark mass spectrum without the need of any hierarchy among the Yukawa couplings. In this framework the top-quark mass is expected to be of the order of its present experimental lower bound, m_t ≅ 90 to 100 GeV. Moreover, we find that, while one of the Higgs doublets gets essentially the standard model VEV of approximately 250 GeV, the second doublet is expected to have a much smaller VEV, of order 10 GeV. The identification of the large mass scale of the model with the Peccei-Quinn scale fixes the mass of the right-handed gauge bosons in the range 10^7 to 10^10 GeV, far beyond the reach of present collider experiments. (author)
Critical boundary sine-Gordon revisited
Hasselfield, M.; Lee, Taejin; Semenoff, G.W.; Stamp, P.C.E.
2006-01-01
We revisit the exact solution of the two space-time dimensional quantum field theory of a free massless boson with a periodic boundary interaction and self-dual period. We analyze the model by using a mapping to free fermions with a boundary mass term originally suggested in Ref. [J. Polchinski, L. Thorlacius, Phys. Rev. D 50 (1994) 622]. We find that the entire SL (2, C) family of boundary states of a single boson are boundary sine-Gordon states and we derive a simple explicit expression for the boundary state in fermion variables and as a function of sine-Gordon coupling constants. We use this expression to compute the partition function. We observe that the solution of the model has a strong-weak coupling generalization of T-duality. We then examine a class of recently discovered conformal boundary states for compact bosons with radii which are rational numbers times the self-dual radius. These have simple expression in fermion variables. We postulate sine-Gordon-like field theories with discrete gauge symmetries for which they are the appropriate boundary states
The critical catastrophe revisited
De Mulatier, Clélia; Rosso, Alberto; Dumonteil, Eric; Zoia, Andrea
2015-01-01
The neutron population in a prototype model of nuclear reactor can be described in terms of a collection of particles confined in a box and undergoing three key random mechanisms: diffusion, reproduction due to fissions, and death due to absorption events. When the reactor is operated at the critical point, and fissions are exactly compensated by absorptions, the whole neutron population might in principle go to extinction because of the wild fluctuations induced by births and deaths. This phenomenon, which has been named critical catastrophe, is nonetheless never observed in practice: feedback mechanisms acting on the total population, such as human intervention, have a stabilizing effect. In this work, we revisit the critical catastrophe by investigating the spatial behaviour of the fluctuations in a confined geometry. When the system is free to evolve, the neutrons may display a wild patchiness (clustering). On the contrary, imposing a population control on the total population acts also against the local fluctuations, and may thus inhibit the spatial clustering. The effectiveness of population control in quenching spatial fluctuations will be shown to depend on the competition between the mixing time of the neutrons (i.e. the average time taken for a particle to explore the finite viable space) and the extinction time
Towner, I.S.; Khanna, F.C.
1984-01-01
Consideration of core polarization, isobar currents and meson-exchange processes gives a satisfactory understanding of the ground-state magnetic moments in closed-shell-plus (or minus)-one nuclei, A = 3, 15, 17, 39 and 41. Ever since the earliest days of the nuclear shell model, the understanding of magnetic moments of nuclear states of supposedly simple configurations, such as doubly closed LS shells ±1 nucleon, has been a challenge for theorists. The experimental moments, which in most cases are known with extraordinary precision, show a small yet significant departure from the single-particle Schmidt values. The departure, however, is difficult to evaluate precisely since, as will be seen, it results from a sensitive cancellation between several competing corrections, each of which can be as large as the observed discrepancy. This, then, is the continuing fascination of magnetic moments. In this contribution, we revisit the subject principally to identify the role played by isobar currents, which are of much concern at this conference. But in so doing we warn quite strongly of the dangers of considering just isobar currents in isolation; equal consideration must be given to competing processes, which in this context are the mundane nuclear structure effects, such as core polarization, and the more popular meson-exchange currents
Mass transfer models analysis for the structured packings
Suastegui R, A.O.
1997-01-01
The models that have been developed to understand the mechanism of mass transfer through structured packings present limitations in their application, leading to uncertainty about their use in industrial chemical processes. In this study the main parameters used in mass transfer are: the hydrodynamics of the column bed, the geometry of the bed, the physical-chemical properties of the mixture, and the liquid-gas flow regime of the operation. The sensitivity of each of these parameters makes it an arduous task to develop sound proposals and a good interpretation of the phenomenon. With the purpose of showing the importance of the parameters mentioned, this work analyzes the absorption process for the water-air system, using models for structured packings in packed columns. The models selected were developed by Bravo and collaborators in 1985 and 1992, in order to determine the previously mentioned parameters for the water-air system, using a structured packing built at the National Institute of Nuclear Research. The results of applying the models and their discussion are presented. (Author)
Infinite nuclear matter model and mass formulae for nuclei
Satpathy, L.
2016-01-01
The matter composing the nucleus is a quantum-mechanical, interacting many-fermion system. However, the shell structure and the classical liquid drop have been taken as the two main features of nuclear dynamics, and they have guided the evolution of nuclear physics. These two features can be considered as the macroscopic manifestation of the microscopic dynamics of the nucleons at a fundamental level. Various mass formulae have been developed based on either of these features over the years, resulting in many ambiguities and uncertainties that pose many challenges in this field. Keeping this in view, the Infinite Nuclear Matter (INM) model has been developed during the last couple of decades on a many-body theoretical foundation employing the celebrated Hugenholtz-Van Hove theorem, quite appropriate for the interacting quantum-mechanical nuclear system. A mass formula called the INM mass formula based on this model yields an rms deviation of 342 keV, the lowest in the literature. Highlights of its results include the determination of the INM density in agreement with electron scattering data, leading to the resolution of the long-standing 'r_0-paradox'; it predicts new magic numbers, giving rise to new islands of stability in the drip-line regions. This is the manifestation of a new phenomenon in which the shell effect overcomes the repulsive component of the nucleon-nucleon force, resulting in the broadening of the stability peninsula. Shell quenching in the N = 82 and N = 126 shells, and several islands of inversion, have been predicted. The model determines the empirical value of the nuclear compression modulus using 4500 high-precision data comprising nuclear masses and neutron and proton separation energies. The talk will give a critical review of the field of mass formulae and our understanding of nuclear dynamics as a whole
Models of mass segregation at the Galactic Centre
Freitag, Marc; Amaro-Seoane, Pau; Kalogera, Vassiliki
2006-01-01
We study the process of mass segregation through 2-body relaxation in galactic nuclei with a central massive black hole (MBH). This study has bearing on a variety of astrophysical questions, from the distribution of X-ray binaries at the Galactic centre, to tidal disruptions of main-sequence and giant stars, to inspirals of compact objects into the MBH, an important category of events for the future space-borne gravitational wave interferometer LISA. In relatively small galactic nuclei, typical hosts of MBHs with masses in the range 10^4-10^7 M⊙, the relaxation induces the formation of a steep density cusp around the MBH and strong mass segregation. Using a spherical stellar dynamical Monte-Carlo code, we simulate the long-term relaxational evolution of galactic nucleus models with a spectrum of stellar masses. Our focus is the concentration of stellar black holes to the immediate vicinity of the MBH. Special attention is given to models developed to match the conditions in the Milky Way nucleus
Modeling and Simulation of Variable Mass, Flexible Structures
Tobbe, Patrick A.; Matras, Alex L.; Wilson, Heath E.
2009-01-01
The advent of the new Ares I launch vehicle has highlighted the need for advanced dynamic analysis tools for variable mass, flexible structures. This system is composed of interconnected flexible stages or components undergoing rapid mass depletion through the consumption of solid or liquid propellant. In addition to large rigid body configuration changes, the system simultaneously experiences elastic deformations. In most applications, the elastic deformations are compatible with linear strain-displacement relationships and are typically modeled using the assumed modes technique. The deformation of the system is approximated through the linear combination of the products of spatial shape functions and generalized time coordinates. Spatial shape functions are traditionally composed of normal mode shapes of the system or even constraint modes and static deformations derived from finite element models of the system. Equations of motion for systems undergoing coupled large rigid body motion and elastic deformation have previously been derived through a number of techniques [1]. However, in these derivations, the mode shapes or spatial shape functions of the system components were considered constant. But with the Ares I vehicle, the structural characteristics of the system are changing with the mass of the system. Previous approaches to solving this problem involve periodic updates to the spatial shape functions or interpolation between shape functions based on system mass or elapsed mission time. These solutions often introduce misleading or even unstable numerical transients into the system. Plus, interpolation on a shape function is not intuitive. This paper presents an approach in which the shape functions are held constant and operate on the changing mass and stiffness matrices of the vehicle components. Each vehicle stage or component finite element model is broken into dry structure and propellant models. A library of propellant models is used to describe the
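The assumed modes expansion described above, u(x,t) = Σ_i φ_i(x) q_i(t), can be sketched in a few lines; the shape functions and coordinate values here are illustrative polynomials for a clamped-free component, not Ares I data:

```python
def deformation(x, shape_funcs, q):
    """Assumed-modes approximation: u(x, t) = sum_i phi_i(x) * q_i(t),
    evaluated at position x for one instant's generalized coordinates q."""
    return sum(qi * phi(x) for phi, qi in zip(shape_funcs, q))

L = 10.0  # m, assumed component length

# Illustrative spatial shape functions satisfying the clamped-end
# conditions phi(0) = phi'(0) = 0 (stand-ins for normal modes or
# constraint modes from a finite element model).
phis = [lambda x: (x / L) ** 2,
        lambda x: (x / L) ** 3]

q = [0.05, -0.01]              # generalized coordinates at one instant
tip = deformation(L, phis, q)  # elastic deflection at the free end
```

With variable mass, the paper's approach keeps these φ_i fixed and lets the mass and stiffness matrices they multiply change instead.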
Modelling of heat and mass transfer processes in neonatology
Ginalski, Maciej K [FLUENT Europe, Sheffield Business Park, Europa Link, Sheffield S9 1XU (United Kingdom); Nowak, Andrzej J [Institute of Thermal Technology, Silesian University of Technology, Konarskiego 22, 44-100 Gliwice (Poland); Wrobel, Luiz C [School of Engineering and Design, Brunel University, Uxbridge UB8 3PH (United Kingdom)], E-mail: maciej.ginalski@ansys.com, E-mail: Andrzej.J.Nowak@polsl.pl, E-mail: luiz.wrobel@brunel.ac.uk
2008-09-01
This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass transfer mechanisms taking place in medical devices such as incubators and oxygen hoods. This includes novel mathematical developments giving rise to a supplementary model, entitled infant heat balance module, which has been fully integrated with the CFD solver and its graphical interface. The numerical simulations are validated through comparison tests with experimental results from the medical literature. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and the improved design of medical devices.
Generalized one-loop neutrino mass model with charged particles
Cheung, Kingman; Okada, Hiroshi
2018-04-01
We propose a radiative neutrino-mass model by introducing 3 generations of fermion pairs E^{-(N+1)/2} E^{+(N+1)/2} and a couple of multicharged bosonic doublet fields Φ_{N/2}, Φ_{N/2+1}, where N = 1, 3, 5, 7, 9. We show that the models can satisfy the neutrino masses and oscillation data, and are consistent with lepton-flavor violations, the muon anomalous magnetic moment, the oblique parameters, and the beta function of the U(1)_Y hypercharge gauge coupling. We also discuss the collider signals for various N, namely, multicharged leptons in the final state from the Drell-Yan production of E^{-(N+1)/2} E^{+(N+1)/2}. In general, the larger the N, the more charged leptons will appear in the final state.
Energy, mass, model-based displays, and memory recall
Beltracchi, L.
1989-01-01
The operation of a pressurized water reactor in the context of the conservation laws for energy and mass is discussed. These conservation laws are the basis of the Rankine heat engine cycle. Computer graphic implementation of the heat engine cycle, in terms of temperature-entropy coordinates for water, serves as a model-based display of the plant process. A human user of this display, trained in first principles of the process, may exercise a monitoring strategy based on the conservation laws
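A first-law bookkeeping of the Rankine cycle underlying such a display can be sketched as follows; the enthalpy values are illustrative round numbers of the right magnitude for a PWR secondary side, not steam-table data:

```python
def rankine_efficiency(h_turbine_in, h_turbine_out, h_pump_in, h_pump_out):
    """First-law thermal efficiency of an ideal Rankine cycle,
    eta = w_net / q_in, from specific enthalpies in kJ/kg."""
    w_turbine = h_turbine_in - h_turbine_out  # work extracted by turbine
    w_pump = h_pump_out - h_pump_in           # work consumed by feed pump
    q_in = h_turbine_in - h_pump_out          # heat added in steam generator
    return (w_turbine - w_pump) / q_in

# Illustrative state-point enthalpies (kJ/kg), assumed values only:
# turbine inlet, turbine exhaust, pump inlet, pump outlet.
eta = rankine_efficiency(2800.0, 2000.0, 190.0, 200.0)
```

An energy-conservation monitoring strategy amounts to checking that these enthalpy differences balance around the cycle (q_in = w_net + q_rejected).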
GUT and flavor models for neutrino masses and mixing
Meloni, Davide
2017-10-01
In recent years, experiments have established the existence of neutrino oscillations, and most of the oscillation parameters have been measured with good accuracy. However, in spite of many interesting ideas, little real light has been shed on the problem of flavor in the lepton sector. In this review, we discuss the state of the art of models for neutrino masses and mixings formulated in the context of flavor symmetries, with particular emphasis on the role played by grand unified gauge groups.
New Constraints on the running-mass inflation model
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro
2002-01-01
We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman-α forest data, and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonabl...
Black hole constraints on the running-mass inflation model
Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R
2000-01-01
The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...
The running-mass inflation model and WMAP
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.
2004-01-01
We consider the observational constraints on the running-mass inflationary model, and in particular on the scale-dependence of the spectral index, from the new Cosmic Microwave Background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale-dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into sp...
Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar
2017-02-01
A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place both in primary settling tanks (PSTs) and secondary settling tanks (SSTs) for a more detailed operation and control. First, experimental evidence is provided, pointing out distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.
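As one concrete ingredient of such flux-based settling models, the hindered-settling regime is commonly closed with a Vesilind-type velocity function; a minimal sketch with illustrative parameters (not the paper's calibrated unified model, which adds discrete and compression regimes on top):

```python
import math

def vesilind_velocity(X, v0=6.0, r_h=0.5):
    """Vesilind hindered-settling law v(X) = v0 * exp(-r_h * X),
    with X the solids concentration (kg/m^3), v0 the maximum settling
    velocity (m/h) and r_h the hindering parameter (m^3/kg).
    Parameter values here are illustrative."""
    return v0 * math.exp(-r_h * X)

def settling_flux(X):
    """Batch settling flux J = X * v(X), the nonlinear flux term
    appearing in 1-D settler PDE models."""
    return X * vesilind_velocity(X)

# The flux rises at dilute concentrations, peaks, then falls as
# hindering dominates -- the shape that drives SST behaviour.
flux_dilute, flux_peakish, flux_dense = (settling_flux(X) for X in (1.0, 2.0, 5.0))
```

The non-monotone flux is what makes the settler PDE hyperbolic-degenerate and numerically delicate.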
Material constitutive model for jointed rock mass behavior
Thomas, R.K.
1980-11-01
A material constitutive model is presented for jointed rock masses which exhibit preferred planes of weakness. This model is intended for use in finite element computations. The immediate application is the thermomechanical modelling of a nuclear waste repository in hard rock, but the model seems appropriate for a variety of other static and dynamic geotechnical problems as well. Starting with the finite element representations of a two-dimensional elastic body, joint planes are introduced in an explicit manner by direct modification of the material stiffness matrix. A novel feature of this approach is that joint set orientations, lengths and spacings are readily assigned through the sampling of a population distribution statistically determined from field measurement data. The result is that the fracture characteristics of the formations have the same statistical distribution in the model as is observed in the field. As a demonstration of the jointed rock mass model, numerical results are presented for the example problem of stress concentration at an underground opening
Variational approach to thermal masses in compactified models
Dominici, Daniele [Dipartimento di Fisica e Astronomia Università di Firenze and INFN - Sezione di Firenze,Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Roditi, Itzhak [Centro Brasileiro de Pesquisas Físicas - CBPF/MCT,Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ (Brazil)
2015-08-20
We investigate by means of a variational approach the effective potential of a 5D U(1) scalar model at finite temperature, compactified on S^1 and S^1/Z_2, as well as the corresponding 4D model obtained through a trivial dimensional reduction. We are particularly interested in the behavior of the thermal masses of the scalar field with respect to the Wilson line phase, and the results obtained are compared with those coming from a one-loop effective potential calculation. We also explore the nature of the phase transition.
A Physical Model of Mass Ejection in Failed Supernovae
Coughlin, Eric R.; Quataert, Eliot; Fernández, Rodrigo; Kasen, Daniel
2018-03-01
During the core collapse of massive stars, the formation of the protoneutron star is accompanied by the emission of a significant amount of mass-energy (˜0.3 M⊙) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova. We analytically investigate this mass-loss-induced wave generation and propagation. Heuristic arguments provide an accurate estimate of the amount of energy contained in the outgoing sound pulse. We then develop a general formalism for analyzing the response of the star to centrally concentrated mass loss in linear perturbation theory. To build intuition, we apply this formalism to polytropic stellar models, finding qualitative and quantitative agreement with simulations and heuristic arguments. We also apply our results to realistic pre-collapse massive star progenitors (both giants and compact stars). Our analytic results for the sound pulse energy, excitation radius, and steepening in the stellar envelope are in good agreement with full time-dependent hydrodynamic simulations. We show that prior to the sound pulse's arrival at the stellar photosphere, the photosphere has already reached velocities ˜20-100% of the local sound speed, thus likely modestly decreasing the stellar effective temperature prior to the star disappearing. Our results provide important constraints on the physical properties and observational appearance of failed supernovae.
Modeling the chemistry of plasma polymerization using mass spectrometry.
Ihrig, D F; Stockhaus, J; Scheide, F; Winkelhake, Oliver; Streuber, Oliver
2003-04-01
The goal of the project is a solvent-free painting shop. The environmental technologies laboratory is developing processes of plasma etching and polymerization. Polymerized thin films serve as first-order corrosion protection and as a primer for painting. Using pure acetylene we obtain thin films of very good quality which, however, are not well bonded. By using air as bulk gas it is possible to polymerize, in an acetylene plasma, well-bonded thin films which are stable first-order corrosion protections and good primers. UV/Vis spectroscopy shows nitrogen oxide radicals in the emission spectra of pure nitrogen and air, but nitrogen oxide is fully suppressed in the presence of acetylene. IR spectroscopy shows only C=O, CH(2) and CH(3) groups but no nitrogen species. With the aid of the UV/Vis spectra and the chemistry of ozone formation it is possible to define reactive traps and steps, molecule depletion, and processes of proton scavenging and proton loss. Using a numerical model it is possible to evaluate these processes and to calculate theoretical mass spectra. Adjusting the theoretical mass spectra to real measurements leads to specific channels of polymerization, driven by radicals, especially the acetyl radical. The estimated theoretical mass spectra show the specific channels of these chemical processes, and it is possible to quantify them; this quantification represents the mass flow through the chemical system. Insight into these chemical processes also gives an indication of the pollutant production processes.
Modelling Mass Casualty Decontamination Systems Informed by Field Exercise Data
Richard Amlôt
2012-10-01
In the event of a large-scale chemical release in the UK, decontamination of ambulant casualties would be undertaken by the Fire and Rescue Service (FRS). The aim of this study was to track the movement of volunteer casualties at two mass decontamination field exercises using passive Radio Frequency Identification tags and detection mats that were placed at pre-defined locations. The exercise data were then used to inform a computer model of the FRS component of the mass decontamination process. Having removed all clothing and having showered, the re-dressing (termed re-robing) of casualties was found to be a bottleneck in the mass decontamination process during both exercises. Computer simulations showed that increasing the capacity of each lane of the re-robe section to accommodate 10 rather than five casualties would be optimal in general, but that a capacity of 15 might be required to accommodate vulnerable individuals. If the duration of the shower was decreased from three minutes to one minute then a per-lane re-robe capacity of 20 might be necessary to maximise the throughput of casualties. In conclusion, one practical enhancement to the FRS response may be to provide at least one additional re-robe section per mass decontamination unit.
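The two-stage flow described above (shower, then re-robe) can be sketched as a small queueing simulation. This is a minimal illustration rather than the study's calibrated model: the single serial shower per lane, the unlimited waiting space between stages, and the 10-minute re-robe duration in the usage example are assumptions.

```python
import heapq

def completion_time(n, shower_min, rerobe_min, rerobe_capacity):
    """Minutes until the last of n casualties finishes re-robing.

    Assumes one serial shower feeding a re-robe area with
    `rerobe_capacity` parallel slots (illustrative simplification).
    """
    busy = []   # heap of completion times of occupied re-robe slots
    last = 0.0
    for i in range(n):
        shower_done = (i + 1) * shower_min       # single shower, one at a time
        if len(busy) == rerobe_capacity:         # wait for a free re-robe slot
            freed = heapq.heappop(busy)
            enter = max(shower_done, freed)
        else:
            enter = shower_done
        done = enter + rerobe_min
        heapq.heappush(busy, done)
        last = done
    return last
```

For example, with a 3-minute shower and a 10-minute re-robe time, `completion_time(5, 3, 10, 5)` gives 25 minutes, while a capacity of 1 stretches the same five casualties to 53 minutes: the re-robe stage becomes the bottleneck, as observed in the exercises.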
A physical model of mass ejection in failed supernovae
Coughlin, Eric R.; Quataert, Eliot; Fernández, Rodrigo; Kasen, Daniel
2018-06-01
During the core collapse of massive stars, the formation of the proto-neutron star is accompanied by the emission of a significant amount of mass-energy (˜0.3 M⊙) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova. We analytically investigate this mass-loss-induced wave generation and propagation. Heuristic arguments provide an accurate estimate of the amount of energy contained in the outgoing sound pulse. We then develop a general formalism for analysing the response of the star to centrally concentrated mass loss in linear perturbation theory. To build intuition, we apply this formalism to polytropic stellar models, finding qualitative and quantitative agreement with simulations and heuristic arguments. We also apply our results to realistic pre-collapse massive star progenitors (both giants and compact stars). Our analytic results for the sound pulse energy, excitation radius, and steepening in the stellar envelope are in good agreement with full time-dependent hydrodynamic simulations. We show that prior to the sound pulse's arrival at the stellar photosphere, the photosphere has already reached velocities ˜ 20-100 per cent of the local sound speed, thus likely modestly decreasing the stellar effective temperature prior to the star disappearing. Our results provide important constraints on the physical properties and observational appearance of failed supernovae.
Neutrino mass and physics beyond the Standard Model
Hosteins, P.
2007-09-01
The purpose of this thesis is to study, in the neutrino sector, the flavour structures at high energy. The work is divided into two main parts. The first part is dedicated to the well known mechanism to produce small neutrino masses: the seesaw mechanism, which implies the existence of massive particles whose decays violate lepton number. Therefore this mechanism can also be used to generate a net baryon number in the early universe and explain the cosmological observation of the asymmetry between matter and antimatter. However, it is often non-trivial to fulfill the constraints coming at the same time from neutrino oscillations and cosmological experiments, at least in frameworks where the couplings can be somehow constrained, like some Grand Unification models. Therefore we devoted the first part to the study of a certain class of seesaw mechanism which can be found in the context of SO(10) theories for example. We introduce a method to extract the mass matrix of the heavy right-handed neutrinos and explore the phenomenological consequences of this quantity, mainly concerning the production of a sufficient baryon asymmetry. When trying to identify the underlying symmetry governing the mixings between the different generations, we see that there is a puzzling difference between the quark and the lepton sectors. However, the quark and lepton parameters have to be compared at the scale of the flavour symmetry breaking, therefore we have to make them run to the appropriate scale. Thus, it is worthwhile investigating models where quantum corrections allow an approximate unification of quark and lepton mixings. This is why the other part of the thesis investigates the running of the effective neutrino mass operator in models with an extra compact dimension, where quantum corrections to the neutrino masses and mixings can be potentially large due to the multiplicity of states
Mass balance model parameter transferability on a tropical glacier
Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg
2013-04-01
The mass balance and melt water production of glaciers is of particular interest in the Peruvian Andes where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease with appreciable negative impacts on the local society within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim we applied a process-based surface mass balance model in order to calculate observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round causing large temporal and spatial variations of glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, short wave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08 while stake readings are available at intervals of between 14 and 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the two stake locations in the lower ablation zone (SH11, 4760 m) and in the upper ablation zone (SH22, 4816 m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other, model errors increase significantly. The same happens if we optimize the model parameters for each year individually and transfer
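The random-search calibration described above (1000 simulations with randomly varied parameters, scored against observed surface-height change) can be sketched as follows. The melt model here is a toy degree-day scheme with an assumed factor range, not the study's process-based surface energy-balance model.

```python
import random

def melt(temps, ddf):
    # Toy degree-day model: melt (mm w.e.) = factor * sum of positive temps.
    return ddf * sum(max(t, 0.0) for t in temps)

def calibrate(temps, observed, n_iter=1000, seed=0):
    """Random search: draw the sensitive parameter within a plausible
    range and keep the draw that best matches the observation."""
    rng = random.Random(seed)
    best_ddf, best_err = None, float("inf")
    for _ in range(n_iter):
        ddf = rng.uniform(2.0, 12.0)   # mm w.e. per degree-day (assumed range)
        err = abs(melt(temps, ddf) - observed)
        if err < best_err:
            best_ddf, best_err = ddf, err
    return best_ddf, best_err
```

Transferability can then be probed exactly as in the abstract: calibrate on one stake's observations and evaluate the resulting parameter set against the other stake.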
Analytic Models of Brown Dwarfs and the Substellar Mass Limit
Sayantan Auddy
2016-01-01
We present the analytic theory of brown dwarf evolution and the lower mass limit of the hydrogen burning main-sequence stars and introduce some modifications to the existing models. We give an exact expression for the pressure of an ideal nonrelativistic Fermi gas at a finite temperature, therefore allowing for nonzero values of the degeneracy parameter. We review the derivation of surface luminosity using an entropy matching condition and the first-order phase transition between the molecular hydrogen in the outer envelope and the partially ionized hydrogen in the inner region. We also discuss the results of modern simulations of the plasma phase transition, which illustrate the uncertainties in determining its critical temperature. Based on the existing models and with some simple modifications, we find the maximum mass for a brown dwarf to be in the range 0.064 M⊙–0.087 M⊙. An analytic formula for the luminosity evolution allows us to estimate the time period of the nonsteady-state (i.e., non-main-sequence) nuclear burning for substellar objects. We also calculate the evolution of very low mass stars. We estimate that ≃11% of stars take longer than 10^7 yr to reach the main sequence, and ≃5% of stars take longer than 10^8 yr.
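For orientation, the conventional finite-temperature expression for an ideal nonrelativistic Fermi gas, on which the exact treatment mentioned above builds, can be written in terms of Fermi-Dirac integrals (this is the standard statistical-mechanics form, not necessarily the authors' final expression):

```latex
P = \frac{g\,k_B T}{\lambda_T^{3}}\, f_{5/2}(z), \qquad
n = \frac{g}{\lambda_T^{3}}\, f_{3/2}(z), \qquad
\lambda_T = \frac{h}{\sqrt{2\pi m k_B T}},
```

where z = e^{μ/k_B T} is the fugacity, g the spin degeneracy, and f_ν(z) = (1/Γ(ν)) ∫₀^∞ x^{ν−1}/(z^{−1}e^x + 1) dx. Nonzero values of the degeneracy parameter correspond to z of order unity or larger, where neither the classical nor the fully degenerate limit applies.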
Predicting chick body mass by artificial intelligence-based models
Patricia Ferreira Ponciano Ferraz
2014-07-01
The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets (from 2- to 21-day-old chicks), with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and the daily body mass of chicks, was used for network training, validation, and tests of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were most accurate in predicting the body mass of chicks from 2 to 21 days of age given the input variables, showing an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist in managerial decision-making, and they can be embedded in heating control systems.
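A minimal sketch of the kind of network involved: a single hidden layer mapping the three inputs (air temperature, stress duration, age) to a predicted mass, trained by gradient descent. The architecture, hyperparameters and the synthetic data-generating function are all illustrative assumptions, not any of the paper's 190 models.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: temperature (deg C), stress duration (d), age (d) -> mass (g).
X = rng.uniform([20.0, 0.0, 2.0], [36.0, 7.0, 21.0], size=(200, 3))
y = 30.0 + 12.0 * X[:, 2] - 0.8 * np.maximum(X[:, 0] - 30.0, 0.0) * X[:, 1]
# Standardise so a single learning rate works for all weights.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)   # 3 inputs -> 8 hidden units
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # 8 hidden -> 1 output
lr = 0.05
for _ in range(2000):
    h = np.tanh(Xs @ W1 + b1)            # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - ys
    loss = (err ** 2).mean()
    g_pred = 2 * err[:, None] / len(ys)  # backpropagation
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = Xs.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

Trained this way, the network's mean-squared error on the standardised targets drops well below the variance of the data, which is the same criterion (small residual error) the paper reports via R² and standard error.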
Heat and Mass Transfer Model in Freeze-Dried Medium
Alfat, Sayahdin; Purqon, Acep
2017-07-01
There are big problems in the agriculture sector every year. One of the major problems is the abundance of agricultural products during the peak of the harvest season, which is not matched by an increase in consumer demand, so that agricultural products are wasted. An alternative is food preservation by the freeze-drying method. This method uses heat transfer through conduction and convection to reduce the water content of the food. The main objective of this research was to design a model of heat and mass transfer in a freeze-dried medium. The research had two steps: the first was the design of the medium as the heat injection site, and the second was the simulation of heat and mass transfer in the product. During the simulation process, we use the physical properties of some agricultural products. The results show the temperature and moisture distribution at every second. The research uses the finite element method (FEM) and is illustrated in three dimensions.
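The simulation step can be illustrated with a one-dimensional transient heat-conduction sketch. Explicit finite differences are used here as a simple stand-in for the finite element computation, and the material properties and boundary temperatures are generic assumptions, not those of a specific product.

```python
import numpy as np

alpha = 1.4e-7             # thermal diffusivity, m^2/s (assumed, food-like)
L, nx = 0.02, 41           # slab thickness 2 cm, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # within the explicit stability limit dt <= dx^2/(2*alpha)

T = np.full(nx, 20.0)      # initial product temperature, deg C
T[0] = T[-1] = -30.0       # assumed freeze-dryer shelf temperature at both faces

for _ in range(2000):
    # explicit update of the 1-D heat equation dT/dt = alpha * d2T/dx2
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = T[-1] = -30.0   # hold the boundary temperatures fixed
```

Recording `T` at each step gives exactly the per-second temperature distribution the abstract describes; a moisture field with an effective diffusivity can be propagated with the same update.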
Dynamical gluon mass in the instanton vacuum model
Musakhanov, M.; Egamberdiev, O.
2018-04-01
We consider the modifications of gluon properties in the instanton liquid model (ILM) for the QCD vacuum. Rescattering of gluons on instantons generates a dynamical momentum-dependent gluon mass Mg(q). First, we consider the case of a scalar gluon, where no zero-mode problem occurs and its dynamical mass Ms(q) can be found. Using the typical phenomenological values of the average instanton size ρ = 1/3 fm and average inter-instanton distance R = 1 fm we get Ms(0) = 256 MeV. We then extend this approach to the real vector gluon with zero modes carefully considered. We obtain the expression Mg^2(q) = 2 Ms^2(q). This modification of the gluon in the instanton medium will shed light on nonperturbative aspects of heavy quarkonium physics.
Cellular automaton model of coupled mass transport and chemical reactions
Karapiperis, T.
1994-01-01
Mass transport, coupled with chemical reactions, is modelled as a cellular automaton in which solute molecules perform a random walk on a lattice and react according to a local probabilistic rule. Assuming molecular chaos and a smooth density function, we obtain the standard reaction-transport equations in the continuum limit. The model is applied to the reactions a + b ↔ c and a + b → c, where we observe interesting macroscopic effects resulting from microscopic fluctuations and spatial correlations between molecules. We also simulate autocatalytic reaction schemes displaying spontaneous formation of spatial concentration patterns. Finally, we propose and discuss the limitations of a simple model for mineral-solute interaction.
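The transport-plus-reaction rule can be sketched directly: particles random-walk on a lattice and react probabilistically when they meet. The sketch below implements the irreversible channel a + b → c on a one-dimensional ring; the ring geometry and the bookkeeping are illustrative choices, not the paper's lattice.

```python
import random

def step(positions, species, size, p_react, rng):
    """One automaton update: random walk, then local probabilistic reaction."""
    # Transport: each surviving particle hops one site left or right.
    for i in range(len(positions)):
        if species[i] is not None:
            positions[i] = (positions[i] + rng.choice((-1, 1))) % size
    # Reaction: an 'a' and a 'b' sharing a site become one 'c' with prob p_react.
    occupants = {}
    for i in range(len(positions)):
        occupants.setdefault(positions[i], []).append(i)
    for idx in occupants.values():
        a_ids = [i for i in idx if species[i] == "a"]
        b_ids = [i for i in idx if species[i] == "b"]
        for ia, ib in zip(a_ids, b_ids):
            if rng.random() < p_react:
                species[ia] = "c"
                species[ib] = None   # the pair merges into a single 'c'
```

Tracking site occupation numbers over many realisations recovers the mean-field reaction-transport behaviour, while single runs exhibit the fluctuation and correlation effects the abstract refers to.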
Neutrino masses from SUSY breaking in radiative seesaw models
Figueiredo, Antonio J.R.
2015-01-01
Radiatively generated neutrino masses (m_ν) are proportional to supersymmetry (SUSY) breaking, as a result of the SUSY non-renormalisation theorem. In this work, we investigate the space of SUSY radiative seesaw models with regard to their dependence on SUSY breaking. In addition to contributions from sources of SUSY breaking that are involved in electroweak symmetry breaking (SUSY_EWSB contributions), which are manifest from ⟨F_H†⟩ = μ⟨H̄⟩ ≠ 0 and ⟨D⟩ = g Σ_H ⟨H† x_H H⟩ ≠ 0, radiatively generated m_ν can also receive contributions from SUSY-breaking sources that are unrelated to EWSB (SUSY_EWS contributions). We point out that recent literature overlooks pure-SUSY_EWSB contributions (∝ μ/M) that can arise at the same order of perturbation theory as the leading-order contribution from SUSY_EWS. We show that there exist realistic radiative seesaw models in which the leading-order contribution to m_ν is proportional to SUSY_EWS. To our knowledge no model with such a feature exists in the literature. We give a complete description of the simplest model topologies and their leading dependence on SUSY breaking. We show that in one-loop realisations LLHH operators are suppressed by at least μ m_soft/M^3 or m_soft^2/M^3. We construct a model example based on a one-loop type-II seesaw. An interesting aspect of these models lies in the fact that the scale of soft-SUSY effects generating the leading-order m_ν can be quite small without conflicting with lower limits on the mass of new particles.
Revisiting Okun's Relationship
Dixon, R.; Lim, G.C.; van Ours, Jan
2016-01-01
Our paper revisits Okun's relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985-2013. We find that the
Revisiting the Okun relationship
Dixon, R. (Robert); Lim, G.C.; J.C. van Ours (Jan)
2017-01-01
Our article revisits the Okun relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985–2013. We
Bounded Intention Planning Revisited
Sievers Silvan; Wehrle Martin; Helmert Malte
2014-01-01
Bounded intention planning provides a pruning technique for optimal planning that was proposed several years ago. In addition, partial-order reduction techniques based on stubborn sets have recently been investigated for this purpose. In this paper we revisit bounded intention planning in view of stubborn sets.
Cornean, Horia; Nenciu, Gheorghe
2009-01-01
This paper is the second in a series revisiting the (effect of) Faraday rotation. We formulate and prove the thermodynamic limit for the transverse electric conductivity of Bloch electrons, as well as for the Verdet constant. The main mathematical tool is a regularized magnetic and geometric...
On the single-mass model of the vocal folds
Howe, M S; McGowan, R S
2010-01-01
An analysis is made of the fluid-structure interactions necessary to support self-sustained oscillations of a single-mass mechanical model of the vocal folds subject to a nominally steady subglottal overpressure. The single-mass model of Fant and Flanagan is re-examined and an analytical representation of vortex shedding during 'voiced speech' is proposed that promotes cooperative, periodic excitation of the folds by the glottal flow. Positive feedback that sustains glottal oscillations is shown to occur during glottal contraction, when the flow separates from the 'trailing edge' of the glottis producing a low-pressure 'suction' force that tends to pull the folds together. Details are worked out for flow that can be regarded as locally two-dimensional in the glottal region. Predictions of free-streamline theory are used to model the effects of quasi-static variations in the separation point on the glottal wall. Numerical predictions are presented to illustrate the waveform of the sound radiated towards the mouth from the glottis. The theory is easily modified to include feedback on the glottal flow of standing acoustic waves, both in the vocal tract beyond the glottis and in the subglottal region. (invited paper)
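A crude numerical sketch of the single-mass paradigm described above: the fold is a damped oscillator, and the aerodynamic load is reduced here to a velocity-proportional feedback term (a stand-in for the separation-point "suction" force), with a hard displacement limit as a crude collision/saturation. All parameter values are illustrative assumptions, not fitted vocal-fold data.

```python
m, k, r = 1e-4, 30.0, 0.02   # mass (kg), stiffness (N/m), damping (kg/s)
c_aero = 0.05                # aerodynamic feedback gain (kg/s); > r feeds energy in
x, v = 1e-4, 0.0             # initial displacement (m) and velocity (m/s)
dt = 1e-5                    # time step (s)
xs = []
for _ in range(50000):
    force = (c_aero - r) * v - k * x   # feedback minus damping minus spring
    v += dt * force / m                # semi-implicit Euler update
    x += dt * v
    if abs(x) > 2e-3:                  # fold displacement limited to 2 mm
        x = 2e-3 if x > 0 else -2e-3
        v = 0.0
    xs.append(x)
# xs settles into a sustained, bounded oscillation: positive feedback during
# part of the cycle outweighs the structural damping, as in the text above.
```

When the feedback gain exceeds the damping (c_aero > r), the rest state is unstable and self-sustained oscillation results; this mirrors the energy-balance argument for glottal self-oscillation, though the real model's vortex-shedding force is of course far richer.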
Asymmetric mass models of disk galaxies. I. Messier 99
Chemin, Laurent; Huré, Jean-Marc; Soubiran, Caroline; Zibetti, Stefano; Charlot, Stéphane; Kawata, Daisuke
2016-04-01
Mass models of galactic disks traditionally rely on axisymmetric density and rotation curves, paradoxically acting as if their most remarkable asymmetric features, such as lopsidedness or spiral arms, were not important. In this article, we relax the axisymmetry approximation and introduce a methodology that derives 3D gravitational potentials of disk-like objects and robustly estimates the impacts of asymmetries on circular velocities in the disk midplane. Mass distribution models can then be directly fitted to asymmetric line-of-sight velocity fields. Applied to the grand-design spiral M 99, the new strategy shows that circular velocities are highly nonuniform, particularly in the inner disk of the galaxy, as a natural response to the perturbed gravitational potential of luminous matter. A cuspy inner density profile of dark matter is found in M 99, in the usual case where luminous and dark matter share the same center. The impact of the velocity nonuniformity is to make the inner profile less steep, although the density remains cuspy. On the other hand, a model where the halo is core dominated and shifted by 2.2-2.5 kpc from the luminous mass center is more appropriate to explain most of the kinematical lopsidedness evidenced in the velocity field of M 99. However, the gravitational potential of luminous baryons is not asymmetric enough to explain the kinematical lopsidedness of the innermost regions, irrespective of the density shape of dark matter. This discrepancy points out the necessity of an additional dynamical process in these regions: possibly a lopsided distribution of dark matter.
Radiatively induced neutrino mass model with flavor dependent gauge symmetry
Lee, SangJong; Nomura, Takaaki; Okada, Hiroshi
2018-06-01
We study a radiative seesaw model at one-loop level with a flavor-dependent gauge symmetry U(1)_{μ-τ}, in which we consider bosonic dark matter. We also analyze the constraints from lepton flavor violations, muon g - 2, the relic density of dark matter, and collider physics, and carry out a numerical analysis to search for the allowed parameter region that satisfies all the constraints and to investigate some predictions. Furthermore we find that a simple but ad hoc hypothesis induces a specific two-zero texture in the inverse mass matrix, which provides us with several predictions such as a specific pattern of the Dirac CP phase.
Leptogenesis in a neutrino mass model coupled with inflaton
Daijiro Suematsu
2016-09-01
We propose a scenario for the generation of baryon number asymmetry based on inflaton decay in a radiative neutrino mass model extended with singlet scalars. In this scenario, lepton number asymmetry is produced through the decay of non-thermal right-handed neutrinos originating from the inflaton decay. Since the amount of non-thermal right-handed neutrinos could be much larger than the thermal ones, the scenario could work without any resonance effect even for rather low reheating temperatures. Sufficient baryon number asymmetry can be generated for much lighter right-handed neutrinos compared with the Davidson–Ibarra bound.
New constraints on the running-mass inflation model
Covi, L.; Lyth, D.H.; Melchiorri, A.
2002-10-01
We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest cosmic microwave background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space.
On the origin of mass in the standard model
Sundman, S.
2013-01-01
A model is proposed in which the presently existing elementary particles are the result of an evolution proceeding from the simplest possible particle state to successively more complex states via a series of symmetry-breaking transitions. The properties of two fossil particles — the tauon and muon — together with the observed photon–baryon number ratio provide information that makes it possible to track the early development of particles. A computer simulation of the evolution reveals details about the purpose and history of all presently known elementary particles. In particular, it is concluded that the heavy Higgs particle that generates the bulk of the mass of the Z and W bosons also comes in a light version, which generates small mass contributions to the charged leptons. The predicted mass of this 'flyweight' Higgs boson is 0.505 MeV/c^2, 106.086 eV/c^2 or 12.0007 μeV/c^2 (corresponding to a photon of frequency 2.9018 GHz) depending on whether it is associated with the tauon, muon or electron. Support for the conclusion comes from the Brookhaven muon g-2 experiment, which indicates the existence of a Higgs particle lighter than the muon.
Improved metastability bounds on the standard model Higgs mass
Espinosa, J R; Espinosa, J R; Quiros, M
1995-01-01
Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model at finite (and zero) temperature can have a deep and unphysical stable minimum \\langle \\phi(T)\\rangle at values of the field much larger than G_F^{-1/2}. We have computed absolute lower bounds on M_H, as a function of M_t, imposing the condition of no decay by thermal fluctuations, or quantum tunnelling, to the stable minimum. Our effective potential at zero temperature includes all next-to-leading logarithmic corrections (making it extremely scale-independent), and we have used pole masses for the Higgs-boson and top-quark. Thermal corrections to the effective potential include plasma effects by one-loop ring resummation of Debye masses. All calculations, including the effective potential and the bubble nucleation rate, are performed numerically and so the results do not rely on any kind of analytical approximation. Easy-to-use fits are provided for the benefit of the reader. Conclusions on the possi...
A Generative Computer Model for Preliminary Design of Mass Housing
Ahmet Emre DİNÇER
2014-05-01
Today, we live in what we call the “Information Age”, an age in which information technologies are constantly being renewed and developed. Out of this has emerged a new approach called “Computational Design” or “Digital Design”. In addition to significantly influencing all fields of engineering, this approach has come to play a similar role in all stages of the design process in the architectural field. In providing solutions for analytical problems in design such as cost estimation, circulation-system evaluation and environmental effects, which are similar to engineering problems, this approach is also used in the evaluation, representation and presentation of traditionally designed buildings. With developments in software and hardware technology, it has evolved through studies on the design of architectural products and production implementations with digital tools used in the preliminary design stages. This paper presents a digital model which may be used in the preliminary stage of mass housing design with Cellular Automata, one of the generative design systems based on computational design approaches. This computational model, developed with scripts in 3ds Max software, has been implemented for the site plan design of mass housing, floor plan organization based on user preferences, and facade design. By using the developed computer model, many alternative housing types can be rapidly produced. The interactive design tool of this computational model allows the user to transfer dimensional and functional housing preferences by means of the interface prepared for the model. The results of the study are discussed in the light of innovative architectural approaches.
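A cellular-automaton generator of this kind can be sketched in a few lines: an elementary CA evolves a row of cells, and each generation can be read as one floor of a facade pattern. The choice of rule 90 and the binary window/no-window reading are arbitrary illustrations, not the paper's 3ds Max scripts.

```python
def ca_step(row, rule=90):
    """One elementary-CA generation on a circular row of 0/1 cells."""
    n = len(row)
    out = []
    for i in range(n):
        left, centre, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right   # 3-cell neighbourhood code
        out.append((rule >> idx) & 1)               # look up the rule table bit
    return out

def facade(width=16, floors=8, seed_col=8, rule=90):
    """Generate a floors x width 0/1 grid, one CA generation per floor."""
    row = [0] * width
    row[seed_col] = 1          # a single seed module on the ground floor
    pattern = [row]
    for _ in range(floors - 1):
        row = ca_step(row, rule)
        pattern.append(row)
    return pattern
```

Changing the rule number or the seed row yields a different alternative in seconds, which is the rapid-variant generation the abstract describes; in practice each 1 would be mapped to a parametric facade or plan module.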
Mass transport measurements and modeling for chemical vapor infiltration
Starr, T.L.; Chiang, D.Y.; Fiadzo, O.G.; Hablutzel, N. [Georgia Inst. of Tech., Atlanta, GA (United States). School of Materials Science and Engineering
1997-12-01
This project involves experimental and modeling investigation of densification behavior and mass transport in fiber preforms and partially densified composites, and application of these results to chemical vapor infiltration (CVI) process modeling. This supports work on-going at ORNL in process development for fabrication of ceramic matrix composite (CMC) tubes. Tube-shaped composite preforms are fabricated at ORNL with Nextel™ 312 fiber (3M Corporation, St. Paul, MN) by placing and compressing several layers of braided sleeve on a tubular mandrel. In terms of fiber architecture these preforms are significantly different than those made previously with Nicalon™ fiber (Nippon Carbon Corp., Tokyo, Japan) square weave cloth. The authors have made microstructure and permeability measurements on several of these preforms and a few partially densified composites so as to better understand their densification behavior during CVI.
An ice-cream cone model for coronal mass ejections
Xue, X. H.; Wang, C. B.; Dou, X. K.
2005-08-01
In this study, we use an ice-cream cone model to analyze the geometrical and kinematical properties of coronal mass ejections (CMEs). Assuming that in the early phase CMEs propagate with near-constant speed and angular width, some useful properties of CMEs, namely the radial speed (v), the angular width (α), and the location in the heliosphere, can be obtained by considering the geometrical shape of a CME as an ice-cream cone. This model is improved by (1) using an ice-cream cone to represent the near-real configuration of a CME, (2) determining the radial speed by fitting the projected speeds calculated from the height-time relation at different azimuthal angles, and (3) applying it not only to halo CMEs but also to non-halo CMEs.
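Step (2) can be sketched as two least-squares fits: a linear height-time fit per azimuth to obtain projected speeds, then a one-parameter fit for the radial speed assuming v_proj_i = g_i · v. The geometric factors g_i below are made-up placeholders standing in for the cone-projection expressions of the model.

```python
import numpy as np

def projected_speed(times, heights):
    """Slope of the best-fit line height = v_proj * t + h0 for one azimuth."""
    t, h = np.asarray(times), np.asarray(heights)
    return np.polyfit(t, h, 1)[0]

def radial_speed(v_proj, g):
    """Least-squares v for v_proj = g * v:  v = (g . v_proj) / (g . g)."""
    v_proj, g = np.asarray(v_proj), np.asarray(g)
    return float(v_proj @ g / (g @ g))
```

With projected speeds measured at several position angles and the model's geometric factors for the assumed cone axis and half-width α, a single radial speed fits all azimuths simultaneously, for halo and non-halo CMEs alike.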
Induced Monoculture in Axelrod Model with Clever Mass Media
Rodríguez, Arezky H.; Del Castillo-Mussot, M.; Vázquez, G. J.
A new model is proposed, in the context of Axelrod's model for the study of cultural dissemination, to include an external vector field (VF) that describes the effects of mass media on social systems. The VF acts over the whole system and is characterized by two parameters: a non-null overlap with each agent in the society and a confidence value of its information. Beyond a threshold value of the confidence, induced monocultural globalization of the system occurs, aligned with the VF. Below this value, the multicultural states are unstable and a certain homogenization of the system is obtained with the opposite alignment, an outcome we call the negative publicity effect. Three regimes of behavior for the spreading of the VF information as a function of time are reported.
Sibiriakova Olena Oleksandrivna
2015-01-01
In this research the author examines changes in approaches to the observation of mass communication. Through a systematization of the key theoretical models of communication, the author concludes that ideas about measuring the process of mass communication have evolved from linear models to multisided and multiple ones.
Revisiting top-bottom-tau Yukawa unification in supersymmetric grand unified theories
Tobe, Kazuhiro; Wells, James D.
2003-01-01
Third family Yukawa unification, as suggested by minimal SO(10) unification, is revisited in light of recent experimental measurements and theoretical progress. We characterize unification in a semi-model-independent fashion, and conclude that finite b quark mass corrections from superpartners must be non-zero, but much smaller than would naively be expected. We show that a solution that does not require cancellations of dangerously large tanβ effects in observables implies that scalar superpartner masses should be substantially heavier than the Z scale, and perhaps inaccessible to all currently approved colliders. On the other hand, gauginos must be significantly lighter than the scalars. We demonstrate that a spectrum of anomaly-mediated gaugino masses and heavy scalars works well as a theory compatible with third family Yukawa unification and dark matter observations.
Double neutron stars: merger rates revisited
Chruslinska, Martyna; Belczynski, Krzysztof; Klencki, Jakub; Benacquista, Matthew
2018-03-01
We revisit double neutron star (DNS) formation in the classical binary evolution scenario in light of the recent Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo DNS detection (GW170817). The observationally estimated Galactic DNS merger rate of R_MW = 21^{+28}_{-14} Myr^{-1}, based on three Galactic DNS systems, fully supports our standard input physics model with R_MW = 24 Myr^{-1}. This estimate for the Galaxy translates in a non-trivial way (due to the cosmological evolution of progenitor stars in a chemically evolving Universe) into a local (z ≈ 0) DNS merger rate density of R_local = 48 Gpc^{-3} yr^{-1}, which is not consistent with the current LIGO/Virgo DNS merger rate estimate (1540^{+3200}_{-1220} Gpc^{-3} yr^{-1}). Within our study of the parameter space, we find solutions that allow for DNS merger rates as high as R_local ≈ 600^{+600}_{-300} Gpc^{-3} yr^{-1}, which are thus consistent with the LIGO/Virgo estimate. However, our corresponding BH-BH merger rates for the models with high DNS merger rates exceed the current LIGO/Virgo estimate of the local BH-BH merger rate (12-213 Gpc^{-3} yr^{-1}). Apart from being particularly sensitive to the common envelope treatment, DNS merger rates are rather robust against variations of several of the key factors probed in our study (e.g. mass transfer, angular momentum loss, and natal kicks). This might suggest that either common envelope development/survival works differently for DNS (~10-20 M⊙ stars) than for BH-BH (~40-100 M⊙ stars) progenitors, or that high black hole (BH) natal kicks are needed to meet observational constraints for both types of binaries. Our conclusion is based on a limited number (21) of evolutionary models and is valid within this particular DNS and BH-BH isolated binary formation scenario.
Ponten, S.C.; Daffertshofer, A.; Hillebrand, A.; Stam, C.J.
2010-01-01
We investigated the relationship between structural network properties and both synchronization strength and functional characteristics in a combined neural mass and graph theoretical model of the electroencephalogram (EEG). Thirty-two neural mass models (NMMs), each representing the lump activity
Masses and fission barriers of nuclei in the LSD model
Pomorski, Krzysztof
2009-07-01
The recently developed Lublin-Strasbourg Drop (LSD) model, together with microscopic corrections, is very successful in describing many features of nuclei. In addition to the classical liquid drop model terms, the LSD contains a curvature term proportional to A^{1/3}. The r.m.s. deviation of the LSD binding energies of 2766 isotopes with Z, N > 7 from the experimental ones is only 0.698 MeV. It turns out that the LSD model also gives a satisfactory prediction of fission barrier heights. In addition, it was found that taking into account the deformation dependence of the congruence energy proposed by Myers and Swiatecki brings the LSD-model barrier heights significantly closer to the experimental data for light isotopes, while the fission barriers for heavy nuclei remain nearly unchanged and agree well with experiment. It was also shown that the saddle point masses of transactinides from ^{232}Th to ^{250}Cf evaluated using the LSD differ by less than 0.67 MeV from the experimental data.
Subgrid models for mass and thermal diffusion in turbulent mixing
Lim, H; Yu, Y; Glimm, J; Li, X-L; Sharp, D H
2010-01-01
We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.
Dynamics of Symmetric Conserved Mass Aggregation Model on Complex Networks
HUA Da-Yin
2009-01-01
We investigate the dynamical behaviour of the aggregation process in the symmetric conserved mass aggregation model under three different topological structures. The dispersion σ(t, L) = (Σ_i (m_i − ρ_0)² / L)^{1/2} is defined to describe the dynamical behaviour, where ρ_0 is the particle density and m_i is the particle number on a site. It is found numerically that for a regular lattice and a scale-free network, σ(t, L) follows a power-law scaling σ(t, L) ∼ t^{δ1} and σ(t, L) ∼ t^{δ4}, respectively, from a random initial condition to the stationary state. However, for a small-world network, there are two power-law scaling regimes, σ(t, L) ∼ t^{δ2} when t < T and σ(t, L) ∼ t^{δ3} when t > T. Moreover, it is found numerically that δ2 is close to δ1 for small rewiring probability q, and that δ3 hardly changes with varying q and is almost the same as δ4. We speculate that the aggregation of the connection degree accelerates the mass aggregation in the initial relaxation stage, and that the existence of long-distance interactions in the complex networks results in the acceleration of the mass aggregation when t > T for the small-world networks. We also show that the relaxation time T follows a power-law scaling T ∼ L^z and that σ(t, L) in the stationary state follows a power-law σ_s(L) ∼ L^σ for the three different structures.
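A minimal simulation sketch of such a conserved-mass aggregation process on a 1-D ring, tracking the dispersion σ(t, L) defined above, might look as follows; the specific move rates (whole-mass diffusion versus single-unit chipping) are illustrative assumptions, not the paper's calibrated dynamics. Total mass is conserved throughout.

```python
import random

def sigma(masses, rho0):
    """Dispersion sigma(t, L) = (sum_i (m_i - rho0)^2 / L)^{1/2}."""
    L = len(masses)
    return (sum((m - rho0) ** 2 for m in masses) / L) ** 0.5

def step(masses, p_diffuse=0.5, rng=random):
    """One Monte Carlo move: pick a site; either its whole mass hops to a
    random neighbour (aggregating on contact) or a single unit chips off."""
    L = len(masses)
    i = rng.randrange(L)
    if masses[i] == 0:
        return
    j = (i + rng.choice((-1, 1))) % L        # symmetric: left or right neighbour
    if rng.random() < p_diffuse:
        masses[j] += masses[i]               # whole-mass diffusion
        masses[i] = 0
    else:
        masses[i] -= 1                       # chipping of a single unit
        masses[j] += 1

random.seed(0)
L, rho0 = 64, 2
masses = [rho0] * L                          # uniform (random-like) initial condition
history = []
for t in range(2000):
    step(masses)
    if t % 500 == 0:
        history.append(sigma(masses, rho0))  # growth of sigma(t, L) toward stationarity
```

Running the same loop over a small-world or scale-free adjacency list instead of the ring (replacing the neighbour choice) is what distinguishes the three topologies studied in the abstract.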
IRAS 17423-1755 (HEN 3-1475) REVISITED: AN O-RICH HIGH-MASS POST-ASYMPTOTIC GIANT BRANCH STAR
Manteiga, M.; GarcIa-Hernandez, D. A.; Manchado, A.; Ulla, A.; GarcIa-Lario, P.
2011-01-01
The high-resolution (R ∼ 600) Spitzer/IRS spectrum of the bipolar protoplanetary nebula (PN) IRAS 17423-1755 is presented in order to clarify the dominant chemistry (C-rich versus O-rich) of its circumstellar envelope as well as to constrain its evolutionary stage. The high-quality Spitzer/IRS spectrum shows weak 9.7 μm absorption from amorphous silicates. This confirms for the first time the O-rich nature of IRAS 17423-1755, in contradiction to a previous C-rich classification, which was based on the wrong identification of the strong 3.1 μm absorption feature seen in the Infrared Space Observatory spectrum as due to acetylene (C2H2). The high-resolution Spitzer/IRS spectrum displays a complete lack of C-rich mid-IR features such as molecular absorption features (e.g., 13.7 μm C2H2, 14.0 μm HCN, etc.) or the classical polycyclic aromatic hydrocarbon infrared emission bands. Thus, the strong 3.1 μm absorption band toward IRAS 17423-1755 has to be identified as water ice. In addition, an [Ne II] nebular emission line at 12.8 μm is clearly detected, indicating that the ionization of its central region may have already started. The spectral energy distribution in the infrared (∼2-200 μm) and other observational properties of IRAS 17423-1755 are discussed in comparison with the similar post-asymptotic giant branch (AGB) objects IRAS 19343+2926 and IRAS 17393-2727. We conclude that IRAS 17423-1755 is an O-rich high-mass post-AGB object that represents a link between OH/IR stars with extreme outflows and highly bipolar PN.
MODELS OF NEPTUNE-MASS EXOPLANETS: EMERGENT FLUXES AND ALBEDOS
Spiegel, David S.; Burrows, Adam; Ibgui, Laurent; Hubeny, Ivan; Milsom, John A.
2010-01-01
There are now many known exoplanets with M sin i within a factor of 2 of Neptune's, including the transiting planets GJ 436b and HAT-P-11b. Planets in this mass range are different from their more massive cousins in several ways that are relevant to their radiative properties and thermal structures. By analogy with Neptune and Uranus, they are likely to have metal abundances that are an order of magnitude or more greater than those of larger, more massive planets. This increases their opacity, decreases Rayleigh scattering, and changes their equation of state. Furthermore, their smaller radii mean that fluxes from these planets are roughly an order of magnitude lower than those of otherwise identical gas giant planets. Here, we compute a range of plausible radiative equilibrium models of GJ 436b and HAT-P-11b. In addition, we explore the dependence of generic Neptune-mass planets on a range of physical properties, including their distance from their host stars, their metallicity, the spectral type of their stars, the redistribution of heat in their atmospheres, and the possible presence of additional optical opacity in their upper atmospheres.
Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods
Lee Sael
2013-10-01
With the accumulation of next-generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of a protein is being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high, provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure predictions alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and the structural data obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. A short review of current efforts to integrate experimental data into structural modeling is also provided.
Force Limited Random Vibration Test of TESS Camera Mass Model
Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.
2015-01-01
The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force-limited vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process of how the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the root mean square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
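The semi-empirical force-limit step described above can be sketched as follows, assuming the common form (NASA-HDBK-7004-style practice) in which the force spectral density is C² M0² S_AA(f) below a break frequency f0 and rolls off above it; the C², f0, roll-off exponent, and spectrum values here are placeholders, not the TESS test values.

```python
def force_limit_psd(freqs_hz, accel_psd_g2hz, m0_lb, c2=2.0, f0_hz=80.0, rolloff_exp=2.0):
    """Semi-empirical force limit: S_FF(f) = C^2 * M0^2 * S_AA(f) for f <= f0,
    rolling off as (f0/f)**rolloff_exp above f0. Returns lbf^2/Hz if the
    acceleration spec is in g^2/Hz and M0 is the test-article weight in lb."""
    out = []
    for f, saa in zip(freqs_hz, accel_psd_g2hz):
        sff = c2 * m0_lb**2 * saa
        if f > f0_hz:
            sff *= (f0_hz / f) ** rolloff_exp   # high-frequency roll-off
        out.append(sff)
    return out

# Placeholder random-vibration acceleration spec, g^2/Hz, at a few frequencies.
freqs = [20.0, 80.0, 160.0, 320.0]
accel_spec = [0.01, 0.04, 0.04, 0.02]
sff = force_limit_psd(freqs, accel_spec, m0_lb=50.0)
```

In practice C² is estimated from the mass ratio between the supporting structure and the test article, then validated against measured interface forces during the low-level runs, as the abstract describes.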
The Levy sections theorem revisited
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Silva, Sergio Da
2007-01-01
This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets
On Two-Scale Modelling of Heat and Mass Transfer
Vala, J.; Stastnik, S.
2008-01-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
A Coupled Chemical and Mass Transport Model for Concrete Durability
Jensen, Mads Mønster; Johannesson, Björn; Geiker, Mette Rica
2012-01-01
A Newton-Raphson iteration scheme handles the non-linearity. The overall model is a transient problem, solved using a single-parameter formulation. Sorption hysteresis and chemical equilibrium are included as source or sink terms. The advantage of this formulation is that each node in the discrete system has its individual sorption hysteresis isotherm, which is of great importance when describing systems that are not fully water saturated, e.g. because of time-dependent boundary conditions. Chemical equilibrium is also established in each node of the discrete system, where the rate of chemical degradation is determined. Excluding constraints, e.g. charge balance, from the mass transport calculation could cause the above-mentioned numerical problems. Two different test cases are studied: sorption hysteresis at different depths of the sample, caused by a time-dependent boundary condition, and chemical degradation of the solid matrix over a ten-year period.
Mass transfer inside oblate spheroidal solids: modelling and simulation
J. E. F. Carmo
2008-03-01
A numerical solution of the unsteady diffusion equation describing mass transfer inside oblate spheroids, considering a constant diffusion coefficient and a convective boundary condition, is presented. The diffusion equation written in the oblate spheroidal coordinate system was used for a two-dimensional case. The finite-volume method was employed to discretize the basic equation, and the resulting linear equation set was solved iteratively using the Gauss-Seidel method. As applications, the effects of the Fourier number, the Biot number, and the aspect ratio of the body on the drying rate and moisture content during the process are presented. To validate the methodology, results obtained in this work are compared with analytical results for the moisture content found in the literature, and good agreement was obtained. The results show that the model is consistent and may be used to solve cases such as those involving disks and spheres and/or variable properties, with small modifications.
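As a hedged sketch of the numerical approach (an implicit finite-volume discretization solved with Gauss-Seidel iteration under a convective boundary condition), here is a simplified one-dimensional analogue; the oblate-spheroidal metric terms of the actual two-dimensional model are omitted, and all grid and property values are illustrative.

```python
def diffusion_step_gauss_seidel(m, dt, dx, D, h, m_inf, n_sweeps=200):
    """Advance the moisture field m one backward-Euler time step.

    Left boundary: symmetry (zero flux). Right boundary: convective (Robin),
    flux proportional to h * (m_surface - m_inf). The implicit linear system
    is solved by Gauss-Seidel sweeps. Returns the updated field.
    """
    n = len(m)
    r = D * dt / dx**2
    new = m[:]                      # initial guess for the implicit solve
    for _ in range(n_sweeps):       # Gauss-Seidel sweeps
        for i in range(n):
            left = new[i + 1] if i == 0 else new[i - 1]   # mirror at centre
            if i == n - 1:
                # Convective condition folded into the last control volume
                # via a ghost value.
                right = new[i] - (h * dx / D) * (new[i] - m_inf)
            else:
                right = new[i + 1]
            new[i] = (m[i] + r * (left + right)) / (1.0 + 2.0 * r)
    return new

m = [1.0] * 20                      # uniform initial moisture content
for _ in range(50):                 # drying toward the ambient value m_inf
    m = diffusion_step_gauss_seidel(m, dt=0.1, dx=0.05, D=1e-3, h=0.02, m_inf=0.0)
```

The surface node dries fastest, giving the moisture profile whose dependence on the Biot and Fourier numbers the paper examines.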
Roth, A. C.; Hock, R.; Schuler, T.; Bieniek, P.; Aschwanden, A.
2017-12-01
Mass loss from glaciers in Southeast Alaska is expected to alter downstream ecological systems as runoff patterns change. To investigate these potential changes under future climate scenarios, distributed glacier mass balance modeling is required. However, the spatial resolution gap between global or regional climate models and the requirements of glacier mass balance modeling studies must be addressed first. We have used a linear theory (LT) of orographic precipitation model to downscale precipitation from both the Weather Research and Forecasting (WRF) model and ERA-Interim to the Juneau Icefield region over the period 1979-2013. This implementation of the LT model is a unique parameterization that relies on the specification of snow fall speed and rain fall speed as tuning parameters to calculate the cloud time delay, τ. We assessed the LT model results by considering winter precipitation, so that the effect of melt was minimized. The downscaled precipitation pattern produced by the LT model captures the orographic precipitation pattern absent from the coarse-resolution WRF and ERA-Interim precipitation fields. Observational data constraints limited our ability to determine a unique parameter combination and calibrate the LT model to glaciological observations. We established a reference run of parameter values based on the literature and performed a sensitivity analysis of the LT model parameters, horizontal resolution, and climate input data on the average winter precipitation. The results of the reference run showed reasonable agreement with the available glaciological measurements. The precipitation pattern produced by the LT model was consistent regardless of parameter combination, horizontal resolution, and climate input data, but the precipitation amount varied strongly with these factors. Due to the consistency of the winter precipitation pattern and the uncertainty in precipitation amount, we suggest a precipitation index map approach to be used in combination with
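For illustration, a one-dimensional sketch of an LT-type transfer function (after the Smith-Barstad linear model that this class of downscaling follows) is given below; the two delay parameters stand in for the cloud time delays set via the fall-speed tuning, and all values are illustrative rather than the study's calibration.

```python
import numpy as np

def lt_precip_1d(h, dx, U=10.0, Cw=0.002, tau_c=1000.0, tau_f=1000.0):
    """1-D linear-theory orographic precipitation over terrain h(x).

    Spectral transfer function (simplified 1-D form of the Smith-Barstad
    model): P_hat(k) = Cw * i*U*k * h_hat(k) / ((1 + i*U*k*tau_c)(1 + i*U*k*tau_f)),
    where U*k is the intrinsic frequency and tau_c, tau_f are the cloud
    conversion and fallout delays (set via fall speeds in the LT model).
    """
    k = 2.0 * np.pi * np.fft.fftfreq(len(h), d=dx)   # wavenumbers, rad/m
    sigma = U * k                                     # intrinsic frequency
    h_hat = np.fft.fft(h)
    p_hat = Cw * 1j * sigma * h_hat / ((1 + 1j * sigma * tau_c)
                                       * (1 + 1j * sigma * tau_f))
    p = np.fft.ifft(p_hat).real
    return np.maximum(p, 0.0)        # truncate negative rates, as is standard

x = np.arange(0.0, 100e3, 500.0)                      # 100 km domain, 500 m grid
terrain = 1500.0 * np.exp(-((x - 50e3) / 8e3) ** 2)   # idealized Gaussian ridge
precip = lt_precip_1d(terrain, dx=500.0)
```

The Fourier-space form is what makes this downscaling cheap enough to run over a multi-decade forcing record, as done for the icefield.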
Linnell, A.P.; Kallrath, J.
1986-08-01
New analysis tools and additional unanalyzed observations justify a reanalysis of MR Cygni. The reanalysis applied successively more restrictive physical models, each with an optimization program. The final model assigned separate first- and second-order limb darkening coefficients, from model atmospheres, to individual grid points. Proper operation of the optimization procedure was tested on simulated observational data, produced by light synthesis with assigned system parameters and modulated by simulated observational error. The iterative solution converged to a weakly determined mass ratio of 0.75. Assuming the B3 primary component is on the main sequence, the HR diagram location of the secondary was calculated from the light ratio (ordinate) and adjusted T_eff (abscissa). The derived mass ratio, together with a main-sequence mass for the B3 component, implies a main-sequence secondary spectral type of B4. The photometrically determined secondary radii agree with this spectral type, in marginal disagreement with the B7 type from the HR diagram analysis. The individual masses, derived from the radial velocity curve of the primary component, the photometrically determined inclination i, and alternative values of the derived mass ratio, are seriously discrepant with main-sequence objects. The imputed physical status of the system is in disagreement with representations that have appeared in the literature.
George, Phiji P.; Irodi, Aparna; Keshava, Shyamkumar N.; Lamont, Anthony C.
2014-01-01
In this article we revisit, with the help of images, those classic signs in chest radiography described by Dr Benjamin Felson himself, or other illustrious radiologists of his time, cited and discussed in 'Chest Roentgenology'. We briefly describe the causes of the signs, their utility and the differential diagnosis to be considered when each sign is seen. Wherever possible, we use CT images to illustrate the basis of some of these classic radiographic signs.
Fathi, Albert
2015-07-01
In this paper we revisit our joint work with Antonio Siconolfi on time functions. We will give a brief introduction to the subject. We will then show how to construct a Lipschitz time function in a simplified setting. We will end with a new result showing that the Aubry set is not an artifact of our proof of existence of time functions for stably causal manifolds.
Whitehead, Jim; De Bra, Paul; Grønbæk, Kaj; Larsen, Deena; Legget, John; schraefel, monica m.c.
2002-01-01
It has been 15 years since the original presentation by Frank Halasz at Hypertext'87 on seven issues for the next generation of hypertext systems. These issues are: search and query; composites; virtual structures; computation in/over the hypertext network; versioning; collaborative work; and extensibility and tailorability. Since that time, these issues have formed the nucleus of multiple research agendas within the Hypertext community. Befitting this direction-setting role, the issues have been revisited ...
Deterministic Graphical Games Revisited
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
Modeling the Motion of an Increasing Mass System
Kunkel, William; Harrington, Randal
2010-01-01
Problems on the dynamics of changing-mass systems often call for the more general form of Newton's second law, F_net = dp/dt. These problems usually involve situations where the mass of the system decreases, such as in rocket propulsion. In contrast, this experiment examines a system where the mass "increases" at a constant rate and the net force…
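The increasing-mass case can be worked through with F_net = dp/dt directly. A short sketch with illustrative values: mass is added at rest at a constant rate k while a constant force F acts, so the momentum grows as p = F t and the velocity is v(t) = F t / (m0 + k t), which the numerical integration below reproduces.

```python
def simulate(F=2.0, m0=1.0, k=0.5, t_end=4.0, dt=1e-4):
    """Euler-integrate dp/dt = F_net for a cart gaining mass at rest at rate k.

    Because the added mass carries no momentum, the momentum form of
    Newton's second law integrates trivially: p(t) = F * t.
    Returns the final velocity v = p / m(t_end).
    """
    p, t = 0.0, 0.0
    while t < t_end - 1e-12:
        p += F * dt              # dp/dt = F_net, momentum form
        t += dt
    m = m0 + k * t               # total mass at the end
    return p / m

v_numeric = simulate()
v_analytic = 2.0 * 4.0 / (1.0 + 0.5 * 4.0)   # v = F t / (m0 + k t)
```

Note that naively using F = m a with the instantaneous mass would overestimate the final speed, which is exactly the distinction the experiment is designed to bring out.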
Dynamic Topography Revisited
Moresi, Louis
2015-04-01
Dynamic topography is usually considered to be one of the trinity of contributing causes to the Earth's non-hydrostatic topography, along with the long-term elastic strength of the lithosphere and isostatic responses to density anomalies within the lithosphere. Dynamic topography, thought of this way, is what is left over when other sources of support have been eliminated. An alternate and explicit definition of dynamic topography is that deflection of the surface which is attributable to creeping viscous flow. The problem with the first definition of dynamic topography is (1) that the lithosphere is almost certainly a visco-elastic/brittle layer with no absolute boundary between flowing and static regions, and (2) that the lithosphere is a thermal/compositional boundary layer in which some buoyancy is attributable to immutable, intrinsic density variations and some is due to thermal anomalies which are coupled to the flow. In each case, it is difficult to draw a sharp line between each contribution to the overall topography. The second definition of dynamic topography does seem cleaner/more precise, but it suffers from the problem that it is not measurable in practice. On the other hand, this approach has resulted in a rich literature concerning the analysis of large-scale geoid and topography and the relation to buoyancy and mechanical properties of the Earth [e.g. refs 1,2,3]. In convection models with viscous, elastic, brittle rheology and compositional buoyancy, however, it is possible to examine how the surface topography (and geoid) are supported and how different ways of interpreting the "observable" fields introduce different biases. This is what we will do. References (a.k.a. homework) [1] Hager, B. H., R. W. Clayton, M. A. Richards, R. P. Comer, and A. M. Dziewonski (1985), Lower mantle heterogeneity, dynamic topography and the geoid, Nature, 313(6003), 541-545, doi:10.1038/313541a0. [2] Parsons, B., and S. Daly (1983), The
Subgrid models for mass and thermal diffusion in turbulent mixing
Sharp, David H [Los Alamos National Laboratory; Lim, Hyunkyung [STONY BROOK UNIV; Li, Xiao-Lin [STONY BROOK UNIV; Glimm, James G [STONY BROOK UNIV
2008-01-01
We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, and 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of Front Tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without
Advanced Change Theory Revisited: An Article Critique
R. Scott Pochron
2008-12-01
The complexity of life in 21st-century society requires new models for leading and managing change. With that in mind, this paper revisits the model of Advanced Change Theory (ACT) as presented by Quinn, Spreitzer, and Brown in their article, “Changing Others Through Changing Ourselves: The Transformation of Human Systems” (2000). The authors present ACT as a potential model for facilitating change in complex organizations. This paper presents a critique of the article and summarizes opportunities for further exploring the model in the light of current trends in developmental and integral theory.
Vehicle Lightweighting: Mass Reduction Spectrum Analysis and Process Cost Modeling
Mascarin, Anthony [IBIS Associates, Inc., Waltham, MA (United States); Hannibal, Ted [IBIS Associates, Inc., Waltham, MA (United States); Raghunathan, Anand [Energetics Inc., Columbia, MD (United States); Ivanic, Ziga [Energetics Inc., Columbia, MD (United States); Clark, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-03-01
The U.S. Department of Energy’s Vehicle Technologies Office, Materials area commissioned a study to model and assess manufacturing economics of alternative design and production strategies for a series of lightweight vehicle concepts. The first two phases of this effort examined combinations of strategies aimed at achieving targets of 40% and 45% mass reduction relative to a standard North American midsize passenger sedan at an effective cost of $3.42 per pound (lb) saved. These results have been reported in the Idaho National Laboratory report INL/EXT-14-33863, entitled Vehicle Lightweighting: 40% and 45% Weight Savings Analysis: Technical Cost Modeling for Vehicle Lightweighting, published in March 2015. The data for these strategies were drawn from many sources, including Lotus Engineering Limited and FEV, Inc. lightweighting studies, the U.S. Department of Energy-funded Vehma International of America, Inc./Ford Motor Company Multi-Material Lightweight Prototype Vehicle Demonstration Project, the Aluminum Association Transportation Group, many United States Council for Automotive Research/United States Automotive Materials Partnership LLC lightweight materials programs, and IBIS Associates, Inc.’s decades of experience in automotive lightweighting and materials substitution analyses.
Device-Level Models Using Multi-Valley Effective Mass
Baczewski, Andrew D.; Frees, Adam; Gamble, John King; Gao, Xujiao; Jacobson, N. Tobias; Mitchell, John A.; Montaño, Inès; Muller, Richard P.; Nielsen, Erik
2015-03-01
Continued progress in quantum electronics depends critically on the availability of robust device-level modeling tools that capture a wide range of physics, and effective mass theory (EMT) is one means of building such models. Recent developments in multi-valley EMT show quantitative agreement with more detailed atomistic tight-binding calculations of phosphorus donors in silicon (Gamble et al., arXiv:1408.3159). Leveraging existing PDE solvers, we are developing a framework in which this multi-valley EMT is coupled to an integrated device-level description of several experimentally active qubit technologies. Device-level simulations of quantum operations will be discussed, as well as the extraction of process matrices at this level of theory. The authors gratefully acknowledge support from the Sandia National Laboratories Truman Fellowship Program, which is funded by the Laboratory Directed Research and Development (LDRD) Program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Giveon, A.; Sarid, U.; Hall, L.J.; California Univ., Berkeley, CA
1991-01-01
Model-independent criteria for unification in the SU(5) framework are studied. These are applied to the minimal supersymmetric standard model and to the standard model with a split 45 Higgs representation. Although the former is consistent with SU(5) unification, the superpartner masses can vary over a wide range, and may even all lie well beyond the reach of planned colliders. Adding a split 45 to the standard model can also satisfy the unification criteria, so supersymmetric SU(5) is far from unique. Furthermore, we learn that separate Higgs doublets must couple to the top and bottom quarks in order to give a correct m_b/m_τ prediction. (orig.)
Mass transfer model for two-layer TBP oxidation reactions
Laurinat, J.E.
1994-01-01
To prove that two-layer, TBP-nitric acid mixtures can be safely stored in the canyon evaporators, it must be demonstrated that a runaway reaction between TBP and nitric acid will not occur. Previous bench-scale experiments showed that, at typical evaporator temperatures, this reaction is endothermic and therefore cannot run away, due to the loss of heat from evaporation of water in the organic layer. However, the reaction would be exothermic and could run away if the small amount of water in the organic layer evaporates before the nitric acid in this layer is consumed by the reaction. Provided that there is enough water in the aqueous layer, this would occur if the organic layer is sufficiently thick so that the rate of loss of water by evaporation exceeds the rate of replenishment due to mixing with the aqueous layer. This report presents measurements of mass transfer rates for the mixing of water and butanol in two-layer, TBP-aqueous mixtures, where the top layer is primarily TBP and the bottom layer is comprised of water or aqueous salt solution. Mass transfer coefficients are derived for use in the modeling of two-layer TBP-nitric acid oxidation experiments. Three cases were investigated: (1) transfer of water into the TBP layer with sparging of both the aqueous and TBP layers, (2) transfer of water into the TBP layer with sparging of just the TBP layer, and (3) transfer of butanol into the aqueous layer with sparging of both layers. The TBP layer was comprised of 99% pure TBP (spiked with butanol for the butanol transfer experiments), and the aqueous layer was comprised of either water or an aluminum nitrate solution. The liquid layers were air sparged to simulate the mixing due to the evolution of gases generated by oxidation reactions. A plastic tube and a glass frit sparger were used to provide different size bubbles. Rates of mass transfer were measured using infrared spectrophotometers provided by SRTC/Analytical Development
Higgs boson masses in a non-minimal supersymmetric model
Tiesi, Alessandro
2002-01-01
A study of the neutral Higgs spectrum in a general Z_3-breaking Next-to-Minimal Supersymmetric Standard Model (NMSSM) is reported in several significant contexts. Particular attention has been devoted to the upper bound on the lightest Higgs boson. In the CP-conserving case we show that the extra terms involved in the general Z_3-breaking superpotential do not affect the upper bound, which remains unchanged: it is ∼ 136 GeV when tan β = 2.7. The Spontaneous CP Violation scenario in the Z_3-breaking NMSSM can occur at tree-level. When the phases of the fields are small the spectrum shows the lightest Higgs particle to be an almost singlet CP-odd. The second lightest particle, a doublet almost-CP-even state, still manifests the upper bound of the CP-conserving case. When the CP-violating phases are large the lightest particle is a doublet with no definite CP parity and its mass shows the usual upper bound at ∼ 136 GeV. The large number of parameters involved in the effective potential can be significantly reduced in the Infrared Quasi Fixed Point (IRQFP) regime obtained by solving the Renormalization Group (RG) equations assuming universality for the soft SUSY breaking masses. In the Z_3-breaking NMSSM, unlike the Z_3-conserving NMSSM, it is possible to find a Higgs spectrum which is still compatible with both experiment and universality at the unification scale. Because in the IRQFP regime tan β ∼ 1.8 and the stop mixing parameter is reduced, the upper bound on the lightest Higgs boson turns out to be ∼ 121 GeV. This result is compatible with experimental data coming from LEPII and might be one of the next predictions to be tested at hadron collider experiments. (author)
Calibration of a surface mass balance model for global-scale applications
Giesen, R. H.; Oerlemans, J.
2012-01-01
Global applications of surface mass balance models have large uncertainties, as a result of poor climate input data and limited availability of mass balance measurements. This study addresses several possible consequences of these limitations for the modelled mass balance. This is done by applying a
THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION
Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker
2010-01-01
We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M_*|M_h), which specify the average number of galaxies of stellar mass M_* that reside in a halo of mass M_h. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.
Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel
2014-05-01
Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology changes and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. Volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
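As a rough sketch of the slope-threshold failure rule described above, the following reduces it to one dimension: whenever the slope between adjacent bed cells exceeds the internal friction angle, material is exchanged symmetrically between them so the critical slope is restored and total sediment mass is conserved. This is a hypothetical, much-simplified analogue, not the Telemac-Mascaret implementation (which rotates 2-D bed elements around an axis):

```python
import math

def relax_bank(z, dx, phi_deg, max_iter=10000):
    """Relax a 1-D bed elevation profile until no inter-cell slope exceeds
    tan(phi); each correction moves material symmetrically between the two
    cells involved, so the sum of elevations (a proxy for mass) is conserved."""
    tan_phi = math.tan(math.radians(phi_deg))
    z = list(z)
    for _ in range(max_iter):
        moved = False
        for i in range(len(z) - 1):
            slope = (z[i] - z[i + 1]) / dx
            if abs(slope) > tan_phi:
                # transfer exactly enough to bring this pair back to the
                # critical slope; sign follows the downslope direction
                excess = (abs(slope) - tan_phi) * dx / 2.0
                if slope > 0:
                    z[i] -= excess
                    z[i + 1] += excess
                else:
                    z[i] += excess
                    z[i + 1] -= excess
                moved = True
        if not moved:
            break
    return z

# A steep bank toe collapses into a stable slope at the friction angle.
profile = relax_bank([2.0, 1.9, 0.2, 0.1], dx=0.5, phi_deg=30.0)
```

In the 2-D model the analogous check would run once per sediment-transport iteration, after the bed-load evolution has been computed and before the global mass balance is verified.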
Neutrino masses and their ordering: global data, priors and models
Gariazzo, S.; Archidiacono, M.; de Salas, P. F.; Mena, O.; Ternes, C. A.; Tórtola, M.
2018-03-01
We present a full Bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and Cosmic Microwave Background observations. Our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, namely Normal Ordering (NO) or Inverted Ordering (IO), with current data. Two possible parametrizations (three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings) and priors (linear versus logarithmic) are exhaustively examined. We find that the preference for NO is only driven by neutrino oscillation data. Moreover, the values of the Bayes factor indicate that the evidence for NO is strong only when the scan is performed over the three neutrino masses with logarithmic priors; for every other combination of parametrization and prior, the preference for NO is only weak. As a by-product of our Bayesian analyses, we are able to (a) compare the Bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement; (b) determine that the lightest neutrino mass plus the two mass splittings parametrization, motivated by the physical observables, is strongly preferred over the three neutrino mass eigenstates scan and (c) find that logarithmic priors guarantee a weakly-to-moderately more efficient sampling of the parameter space. These results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass, when combining neutrino oscillation data with cosmology and neutrinoless double beta decay. We also show that the limits on the total neutrino mass ∑m_ν can change dramatically when moving from one prior to the other. These results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self-consistent analyses which explore the
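The prior dependence discussed in this abstract can be illustrated with a toy prior-predictive computation: fix the two oscillation mass splittings at representative values and compare the total mass ∑m_ν implied in Normal Ordering by a linear versus a logarithmic prior on the lightest mass. All numbers below are illustrative placeholders, not the paper's fitted values:

```python
import math
import random

# representative oscillation splittings (eV^2); illustrative inputs only
DM21, DM31 = 7.5e-5, 2.5e-3

def total_mass(m1):
    """Sum of the three masses in Normal Ordering, given the lightest mass m1 (eV)."""
    m2 = math.sqrt(m1 ** 2 + DM21)
    m3 = math.sqrt(m1 ** 2 + DM31)
    return m1 + m2 + m3

random.seed(0)
N = 20000
# linear prior on m1 over [1e-4, 1] eV vs logarithmic prior over the same range
lin = sorted(total_mass(random.uniform(1e-4, 1.0)) for _ in range(N))
log = sorted(total_mass(10 ** random.uniform(-4, 0)) for _ in range(N))
median_lin, median_log = lin[N // 2], log[N // 2]
```

With these splittings the smallest attainable total is about 0.06 eV; the logarithmic prior concentrates weight near that floor, while the linear prior favours quasi-degenerate masses, so the two prior-predictive medians of ∑m_ν differ by over an order of magnitude, mirroring the prior sensitivity reported above.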
Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin
2018-04-01
This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be well used to predict the water contents at different soil depths and temperatures.
Multiscale modeling of fluid flow and mass transport
Masuoka, K.; Yamamoto, H.; Bijeljic, B.; Lin, Q.; Blunt, M. J.
2017-12-01
In recent years, there have been some reports on simulations of fluid flow in the pore spaces of rocks using the Navier-Stokes equations. These studies mostly adopt X-ray CT to create 3-D numerical grids of the pores at the micro-scale. However, results may be of low accuracy when the rock has a large pore size distribution, because pores smaller than the resolution of the X-ray CT may be neglected. We recently found in laboratory tracer tests, in which fresh water was injected into brine-saturated Ryukyu limestone, that the chloride concentration decreased over a longer time than expected. This phenomenon can be explained by the weak connectivity of the porous networks. Therefore, it is important to simulate all pore spaces, even the very small ones in which diffusion is dominant. We have developed a new methodology for multi-level modeling of pore-scale fluid flow in porous media. The approach is to combine pore-scale analysis with Darcy-flow analysis using two types of X-ray CT images at different resolutions. Results of the numerical simulations showed a close match with the experimental results. The proposed methodology is an enhancement for analyzing mass transport and flow phenomena in rocks with complicated pore structure.
Modelling of convective heat and mass transfer in rotating flows
Shevchuk, Igor V
2016-01-01
This monograph presents results of the analytical and numerical modeling of convective heat and mass transfer in different rotating flows caused by (i) system rotation, (ii) swirl flows due to swirl generators, and (iii) surface curvature in turns and bends. Volume forces (i.e. centrifugal and Coriolis forces), which influence the flow pattern, emerge in all of these rotating flows. The main part of this work deals with rotating flows caused by system rotation, which includes several rotating-disk configurations and straight pipes rotating about a parallel axis. Swirl flows are studied in some of the configurations mentioned above. Curvilinear flows are investigated in different geometries of two-pass ribbed and smooth channels with 180° bends. The author demonstrates that the complex phenomena of fluid flow and convective heat transfer in rotating flows can be successfully simulated using not only the universal CFD methodology, but in certain cases by means of the integral methods, self-similar and analyt...
Top and Higgs masses in a composite boson model
Kahana, D.E.
1993-01-01
Recently Nambu, as well as Bardeen, Hill and Lindner, have suggested replacing the Higgs mechanism with a dynamical symmetry breaking generated by four-fermion interactions of the top quark. In fact the model replacing the scalar sector is that of Nambu and Jona-Lasinio (NJL), and one recovers the Higgs as a tt-bar composite. Earlier authors have also treated vector mesons as composites within the NJL framework, with perhaps the earliest suggestion being that of Bjorken for a composite photon. Here we attempt to generate the entire electroweak interaction from a specific current-current, baryon number conserving form of the four-fermion interaction. The W, Z and Higgs bosons appear as coherent composites of all fermions, quarks and leptons, and not just of the top quark. The four-fermion interaction is assumed to be valid at some high mass scale μ, perhaps the low energy limit resulting from the elimination of non-fermionic degrees of freedom from a more basic theory. The cutoff Λ, necessary in the non-renormalizable NJL model, may then be viewed as the proper scale for this more basic theory
Higgs mass bounds from a chirally invariant lattice Higgs-Yukawa model with overlap fermions
Gerhold, Philipp; Kallarackal, Jim
2008-10-01
We study the parameter dependence of the Higgs mass in a chirally invariant lattice Higgs-Yukawa model emulating the same Higgs-fermion coupling structure as in the Higgs sector of the electroweak Standard Model. Eventually, the aim is to establish upper and lower Higgs mass bounds. Here we present our preliminary results on the lower Higgs mass bound at several selected values for the cutoff and give a brief outlook towards the upper Higgs mass bound. (orig.)
Kuhl, Thorsten
2014-01-01
The origin of fundamental particle mass remains one of the key topics for particle physics at the LHC, even after the discovery of the Higgs. Because of the relatively low Higgs boson mass, uncertainty remains as to whether the Standard Model (SM) can actually describe all Higgs-related observations, or whether a theory beyond the SM will be required. The largest deviations from the SM can be expected to be observed in couplings of the most massive Standard Model particle, the top quark, to t...
Greenland Ice Sheet Mass Loss from GRACE Monthly Models
Sørensen, Louise Sandberg; Forsberg, René
2010-01-01
The Greenland ice sheet is currently experiencing a net mass loss. There are, however, large discrepancies between the published quantitative mass loss estimates, based on different data sets and methods. There are even large differences between the results based on the same data sources, as is the ...
Mass modelling from stellar streams in the Milky Way
Helmi, Amina; Sanderson, Robyn E.
2015-01-01
Arguably two of the most important questions in Astrophysics today are: what is the Universe made of? and, how do galaxies form and evolve? Quite astonishingly we know only the properties of <5% of the mass in the Universe (the atoms we are made of), while the nature of the dominant mass component
Orbifold matrix models and fuzzy extra dimensions
Chatzistavrakidis, Athanasios; Zoupanos, George
2011-01-01
We revisit an orbifold matrix model obtained as a restriction of the type IIB matrix model on a Z_3-invariant sector. An investigation of its moduli space of vacua is performed and issues related to chiral gauge theory and gravity are discussed. Modifications of the orbifolded model triggered by Chern-Simons or mass deformations are also analyzed. Certain vacua of the modified models exhibit higher-dimensional behaviour with internal geometries related to fuzzy spheres.
Dynamic modeling of fixed-bed adsorption of flue gas using a variable mass transfer model
Park, Jehun; Lee, Jae W.
2016-01-01
This study introduces a dynamic mass transfer model for the fixed-bed adsorption of a flue gas. The derivation of the variable mass transfer coefficient is based on pore diffusion theory, and it is a function of effective porosity, temperature, and pressure as well as the adsorbate composition. Adsorption experiments were done at four different pressures (1.8, 5, 10 and 20 bar) and three different temperatures (30, 50 and 70 °C) with zeolite 13X as the adsorbent. To explain the equilibrium adsorption capacity, the Langmuir-Freundlich isotherm model was adopted, and the parameters of the isotherm equation were fitted to the experimental data for a wide range of pressures and temperatures. Then, dynamic simulations were performed using the system equations for material and energy balance with the equilibrium adsorption isotherm data. The optimal mass transfer and heat transfer coefficients were determined after iterative calculations. As a result, the dynamic variable mass transfer model can estimate the adsorption rate for a wide range of concentrations and precisely simulate the fixed-bed adsorption process of a flue gas mixture of carbon dioxide and nitrogen.
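A minimal sketch of the two ingredients named above: a Langmuir-Freundlich (Sips) equilibrium isotherm, and an explicit linear-driving-force uptake step whose mass-transfer coefficient k is passed in from outside (in the variable-coefficient model it would itself depend on porosity, temperature and pressure). Parameter values are hypothetical, not the fitted ones from this study:

```python
def sips_loading(p, q_max, b, n):
    """Langmuir-Freundlich (Sips) isotherm: equilibrium loading q* at
    pressure p; reduces to Langmuir for n = 1 and saturates at q_max."""
    x = (b * p) ** n
    return q_max * x / (1.0 + x)

def ldf_step(q, p, dt, k, q_max, b, n):
    """One explicit linear-driving-force update of dq/dt = k (q* - q)."""
    return q + dt * k * (sips_loading(p, q_max, b, n) - q)

# Illustrative uptake of CO2 on a 13X-like sorbent at fixed pressure:
# the loading relaxes toward the isotherm value with time constant 1/k.
q = 0.0
for _ in range(2000):
    q = ldf_step(q, p=1.8, dt=0.05, k=0.5, q_max=5.0, b=1.2, n=0.8)
```

In a full fixed-bed simulation this uptake term would be coupled to the material and energy balances along the column; here it only shows how the isotherm and the driving-force coefficient interact.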
Deterministic Graphical Games Revisited
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm…
Metamorphosis in Craniiformea revisited
Altenburger, Andreas; Wanninger, Andreas; Holmer, Lars E.
2013-01-01
We revisited the brachiopod fold hypothesis and investigated metamorphosis in the craniiform brachiopod Novocrania anomala. Larval development is lecithotrophic and the dorsal (brachial) valve is secreted by dorsal epithelia. We found that the juvenile ventral valve, which consists only of a thin … brachiopods during metamorphosis to cement their pedicle to the substrate. N. anomala is therefore not initially attached by a valve but by material corresponding to pedicle cuticle. This is different to previous descriptions, which had led to speculations about a folding event in the evolution of Brachiopoda…
Yang, Lurong; Wang, Xinyu; Mendoza-Sanchez, Itza; Abriola, Linda M
2018-04-01
Sequestered mass in low permeability zones has been increasingly recognized as an important source of organic chemical contamination that acts to sustain downgradient plume concentrations above regulated levels. However, few modeling studies have investigated the influence of this sequestered mass and associated (coupled) mass transfer processes on plume persistence in complex dense nonaqueous phase liquid (DNAPL) source zones. This paper employs a multiphase flow and transport simulator (a modified version of the modular transport simulator MT3DMS) to explore the two- and three-dimensional evolution of source zone mass distribution and near-source plume persistence for two ensembles of highly heterogeneous DNAPL source zone realizations. Simulations reveal the strong influence of subsurface heterogeneity on the complexity of DNAPL and sequestered (immobile/sorbed) mass distribution. Small zones of entrapped DNAPL are shown to serve as a persistent source of low concentration plumes, difficult to distinguish from other (sorbed and immobile dissolved) sequestered mass sources. Results suggest that the presence of DNAPL tends to control plume longevity in the near-source area; for the examined scenarios, a substantial fraction (43.3-99.2%) of plume life was sustained by DNAPL dissolution processes. The presence of sorptive media and the extent of sorption non-ideality are shown to greatly affect predictions of near-source plume persistence following DNAPL depletion, with plume persistence varying one to two orders of magnitude with the selected sorption model. Results demonstrate the importance of sorption-controlled back diffusion from low permeability zones and reveal the importance of selecting the appropriate sorption model for accurate prediction of plume longevity. Large discrepancies for both DNAPL depletion time and plume longevity were observed between 2-D and 3-D model simulations. Differences between 2- and 3-D predictions increased in the presence of
Lechner, U.; Schmid, Beat
2001-01-01
Information and communication technology opens up an unprecedented space of design options for the creation of economic value. The business model "community" and the role "community organizer" are poised to become pivotal in the digital economy. We argue that any online business model needs to take communities and community organizing into account in the design of communication and the system architecture. Our discussion is guided by the media model (Schmid, 1997). We characterize the rel...
A cosmological model with compact space sections and low mass density
Fagundes, H.V.
1982-01-01
A general relativistic cosmological model is presented, which has closed space sections and mass density below a critical density similar to that of Friedmann's models. The model may predict double images of cosmic sources. (Author) [pt
Charmed, beauty hadrons revisited
Chabab, M.
1998-01-01
Applying two different versions of QCD sum rules, we rigorously reanalyze the rich spectroscopy of mesons and baryons built from charm and beauty quarks. An improved determination of the masses and the leptonic decay constants of B_c(bc-bar), B_c*(bc-bar), and Λ(bcu) is presented. Our optimal results, constrained by stability criteria, are consistent in both versions and support the general pattern common to potential-model predictions
A statistical model for horizontal mass flux of erodible soil
Babiker, A.G.A.G.; Eltayeb, I.A.; Hassan, M.H.A.
1986-11-01
It is shown that the mass flux of erodible soil transported horizontally by a statistically distributed wind flow has a statistical distribution. An explicit expression for the probability density function (p.d.f.) of the flux is derived for the case in which the wind speed has a Weibull distribution. The statistical distribution of a mass flux characterized by a generalized Bagnold formula is found to be Weibull for the case of zero threshold speed. Analytic and numerical values for the average horizontal mass flux of soil are obtained for various values of wind parameters by evaluating the first moment of the flux density function. (author)
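Under the assumptions stated in this abstract (Weibull wind speeds with scale c and shape k, zero threshold speed, and a Bagnold-type flux q = C·u³), the average flux is the first moment ⟨q⟩ = C·E[u³] = C·c³·Γ(1 + 3/k). The sketch below, with illustrative parameter values, checks that closed form against direct numerical quadrature:

```python
import math

def mean_bagnold_flux(C, c, k, n=200_000, t_max=30.0):
    """<q> = C * E[u^3] for Weibull(scale c, shape k) wind speed, computed by
    midpoint quadrature after substituting t = (u/c)^k, so f(u) du = exp(-t) dt."""
    dt = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        u = c * t ** (1.0 / k)  # invert the substitution to recover u
        total += C * u ** 3 * math.exp(-t) * dt
    return total

def closed_form(C, c, k):
    """Analytic first moment of the Bagnold flux: C * c^3 * Gamma(1 + 3/k)."""
    return C * c ** 3 * math.gamma(1.0 + 3.0 / k)
```

For a nonzero threshold speed the integral would start at the threshold instead of zero and no longer reduces to a single Gamma function, consistent with the abstract's restriction of the Weibull result to the zero-threshold case.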
3-D modelling and analysis of Dst C-responses in the North Pacific Ocean region, revisited
Kuvshinov, A.; Utada, H.; Avdeev, D.
2005-01-01
models that are as realistic and detailed as possible. In order to perform the simulations using realistic 3-D models on a routine basis a novel 3-D 'spherical' forward solution has been elaborated in this paper. The solution combines the modified iterative-dissipative method with a conjugate gradient...
Bobowik, Magdalena; Martinovic, Borja; Basabe, Nekane; Barsties, Lisa S.; Wachter, Gusta
2017-01-01
Rejection-identification and rejection-disidentification models propose that low-status groups identify with their in-group and disidentify with a high-status out-group in response to rejection by the latter. Our research tests these two models simultaneously among multiple groups of foreign-born
Color-flavor locked strange quark matter in a mass density-dependent model
Chen Yuede; Wen Xinjian
2007-01-01
Properties of color-flavor locked (CFL) strange quark matter have been studied in a mass-density-dependent model, and compared with the results in the conventional bag model. In both models, the CFL phase is more stable than the normal nuclear matter for reasonable parameters. However, the lower density behavior of the sound velocity in this model is completely opposite to that in the bag model, which makes the maximum mass of CFL quark stars in the mass-density-dependent model larger than that in the bag model. (authors)
Revisiting Nursing Research in Nigeria
2016-08-18
… health care research, it is therefore pertinent to revisit the state of nursing research in the country. … platforms, updated libraries with electronic resources … benchmarks for developing countries of 26% [17], the amount is still …
Reichert, B.K.; Bengtsson, L.; Oerlemans, J.
2001-01-01
A process-oriented modeling approach is applied in order to simulate glacier mass balance for individual glaciers using statistically downscaled general circulation models (GCMs). Glacier-specific seasonal sensitivity characteristics based on a mass balance model of intermediate complexity are used
CO2 Mass transfer model for carbonic anhydrase-enhanced aqueous MDEA solutions
Gladis, Arne Berthold; Deslauriers, Maria Gundersen; Neerup, Randi
2018-01-01
In this study a CO2 mass transfer model was developed for carbonic anhydrase-enhanced MDEA solutions based on a mechanistic kinetic enzyme model. Four different enzyme models were compared in their ability to predict the liquid side mass transfer coefficient at temperatures in the range of 298...
Modelling of a micro Coriolis mass flow sensor for sensitivity improvement
Groenesteijn, Jarno; van de Ridder, Bert; Lötters, Joost Conrad; Wiegerink, Remco J.
2014-01-01
We have developed a multi-axis flexible body model with which we can investigate the behavior of (micro) Coriolis mass flow sensors with arbitrary channel geometry. The model has been verified by measurements on five different designs of micro Coriolis mass flow sensors. The model predicts the Eigen
Dynamic Models of Instruments Using Rotating Unbalanced Masses
Hung, John Y.; Gallaspy, Jason M.; Bishop, Carlee A.
1998-01-01
The motion of telescopes, satellites, and other flight bodies has been controlled by various means in the past. For example, gimbal-mounted devices can use electric motors to produce pointing and scanning motions. Reaction wheels, control moment gyros, and propellant-charged reaction jets are other technologies that have also been used. Each of these methods has its advantages, but all actuator systems used in a flight environment face the challenges of minimizing weight, reducing energy consumption, and maximizing reliability. Recently, Polites invented and patented the Rotating Unbalanced Mass (RUM) device as a means for generating scanning motion on flight experiments. RUM devices together with traditional servomechanisms have been successfully used to generate various scanning motions: linear, raster, and circular. The basic principle can be described as follows: a RUM rotating at constant angular velocity exerts a cyclic centrifugal force on the instrument or main body, thus producing a periodic scanning motion. A system of RUM devices exerts no reaction forces on the main body, requires very little energy to rotate the RUMs, and is simple to construct. These are significant advantages over electric motors, reaction wheels, and control moment gyroscopes. Although the RUM device very easily produces scanning motion, an auxiliary control system has been required to maintain the proper orientation, or pointing, of the main body. It has been suggested that RUM devices can be used to control pointing dynamics as well as generate the desired periodic scanning motion: the RUM velocity is not kept constant but varies over the period of one rotation, so that the changing angular velocity produces a centrifugal force of time-varying magnitude and direction. The scope of this ongoing research project is to study the pointing control concept and recommend a direction of study for advanced pointing control using only RUM devices. This
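The basic RUM principle described in this abstract, a constant-rate rotating unbalanced mass exerting a cyclic centrifugal force on the main body, can be sketched as follows. This is a generic physics illustration; the mass, offset, and rotation-rate values are hypothetical, not taken from the study.

```python
import math

def rum_force(m, r, omega, t):
    """Centrifugal force vector (N) exerted by a Rotating Unbalanced Mass
    of mass m (kg) at offset r (m), spinning at angular velocity omega
    (rad/s): constant magnitude m*r*omega**2, direction rotating with the
    RUM. All parameter values are generic illustrative assumptions."""
    mag = m * r * omega ** 2
    return (mag * math.cos(omega * t), mag * math.sin(omega * t))
```

At a constant rate the force magnitude is constant and only its direction rotates, which is what produces the periodic scanning motion; modulating omega over one rotation would make both magnitude and direction time-varying, as the pointing-control idea above suggests.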
Elias
2011-03-01
The case study was conducted in the area of an Acacia mangium plantation at BKPH Parung Panjang, KPH Bogor. The objective of the study was to formulate equation models of tree root carbon mass and the root-to-shoot carbon mass ratio of the plantation. It was found that the carbon content in the parts of tree biomass (stems, branches, twigs, leaves, and roots) was different, in which the highest and the lowest carbon content was in the main stem of the tree and in the leaves, respectively. The main stem and leaves of the tree accounted for 70% of tree biomass. The root-shoot ratio of root biomass to tree biomass above the ground and the root-shoot ratio of root biomass to main stem biomass were 0.1443 and 0.25771, respectively, in which 75% of tree carbon mass was in the main stem and roots of the tree. It was also found that the root-shoot ratio of root carbon mass to tree carbon mass above the ground and the root-shoot ratio of root carbon mass to tree main stem carbon mass were 0.1442 and 0.2034, respectively. All allometric equation models of tree root carbon mass of A. mangium have a high goodness-of-fit as indicated by their high adjusted R². Keywords: Acacia mangium, allometric, root-shoot ratio, biomass, carbon mass
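As a small worked illustration of how such ratios and allometric models are applied: the root-to-shoot carbon ratio (0.1442) is taken from the abstract, while the power-law form M = a·D^b and its coefficients are hypothetical placeholders, not the fitted values from the study.

```python
def root_carbon_from_aboveground(aboveground_c, ratio=0.1442):
    """Estimate tree root carbon mass from above-ground tree carbon mass,
    using the root-to-shoot carbon ratio reported for A. mangium (0.1442)."""
    return ratio * aboveground_c

def allometric_mass(dbh, a=0.05, b=2.4):
    """Generic allometric model M = a * D**b relating diameter at breast
    height D (cm) to mass M. Coefficients a and b here are hypothetical
    placeholders for illustration, not the study's fitted values."""
    return a * dbh ** b
```

For example, a tree with 100 kg of above-ground carbon would be assigned roughly 14.4 kg of root carbon under the reported ratio.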
Gauge boson mass without a Higgs field: a simple model
Nicholson, A.F.; Kennedy, D.C.
1997-02-01
A simple, anomaly-free chiral gauge theory can be perturbatively quantized and renormalized in such a way as to generate fermion and gauge boson masses. This development exploits certain freedoms inherent in choosing the unperturbed Lagrangian and in the renormalization procedure. Apart from its intrinsic interest, such a mechanism might be employed in electroweak gauge theory to generate fermion and gauge boson masses without a Higgs sector. 38 refs
Implications of improved Higgs mass calculations for supersymmetric models
Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Dolan, M.J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Theory Group; Ellis, J. [King' s College, London (United Kingdom). Theoretical Particle Physics and Cosmology Group; and others
2014-03-15
We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M_h, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyze the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B_s → μ+ μ-) and ATLAS searches for missing-E_T events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β ≲ 10, though not in the NUHM1 or NUHM2.
Circular revisit orbits design for responsive mission over a single target
Li, Taibo; Xiang, Junhua; Wang, Zhaokui; Zhang, Yulin
2016-10-01
Responsive orbits play a key role in addressing the mission of Operationally Responsive Space (ORS) because of their capabilities. These capabilities are usually focused on supporting specific targets, as opposed to providing global coverage. One subtype of responsive orbit is the repeat coverage orbit, which is nearly circular in most remote sensing applications. This paper deals with a special kind of repeating ground track orbit, referred to as a circular revisit orbit. Different from traditional repeat coverage orbits, a satellite on a circular revisit orbit can visit a target site at both the ascending and descending stages in one revisit cycle. This type of trajectory allows a halving of the traditional revisit time and helps obtain useful information for responsive applications. However, previously reported numerical methods are often computationally expensive or fail to obtain such orbits. To overcome this difficulty, an analytical method to determine the existence conditions of the solutions to revisit orbits is presented in this paper. To this end, the mathematical model of the circular revisit orbit is established under the central gravity model and the J2 perturbation. A constraint function of the circular revisit orbit is introduced, and the monotonicity of that function has been studied. The existence conditions and the number of such orbits are naturally worked out. Taking the launch cost into consideration, an optimal design model of the circular revisit orbit is established to achieve a best orbit which visits a target twice a day, in the morning and in the afternoon respectively, for several days. The result shows that it is effective to apply circular revisit orbits in responsive applications such as reconnaissance of natural disasters.
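The repeat-ground-track condition underlying such revisit orbits can be sketched as follows: with secular J2 rates included, a circular orbit repeats its ground track when the number of nodal revolutions per nodal day is a rational number N/D. This is a generic textbook sketch under standard secular-rate formulas, not the analytical method of the paper; the constants and the sample orbit are illustrative assumptions.

```python
import math

MU = 3.986004418e14   # Earth gravitational parameter, m^3/s^2
RE = 6378137.0        # Earth equatorial radius, m
J2 = 1.08262668e-3    # Earth oblateness coefficient
W_E = 7.2921159e-5    # Earth rotation rate, rad/s

def revs_per_nodal_day(a, inc):
    """Revolutions per nodal day for a circular orbit of semi-major axis a (m)
    and inclination inc (rad), using secular J2 rates. A repeat (revisit)
    ground track requires this ratio to be rational, N/D."""
    n = math.sqrt(MU / a ** 3)                      # mean motion
    k = 1.5 * J2 * (RE / a) ** 2 * n                # common J2 factor (e = 0)
    raan_dot = -k * math.cos(inc)                   # nodal regression
    argp_dot = 0.5 * k * (4.0 - 5.0 * math.sin(inc) ** 2)
    m_dot = n + 0.5 * k * (2.0 - 3.0 * math.sin(inc) ** 2)
    return (argp_dot + m_dot) / (W_E - raan_dot)
```

For example, a circular sun-synchronous-type orbit near 705 km altitude and 98.2° inclination yields roughly 14.5 revolutions per nodal day, close to the familiar 233/16 repeat pattern.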
Deviations from mass transfer equilibrium and mathematical modeling of mixer-settler contactors
Beyerlein, A.L.; Geldard, J.F.; Chung, H.F.; Bennett, J.E.
1980-01-01
This paper presents the mathematical basis for the computer model PUBG of mixer-settler contactors which accounts for deviations from mass transfer equilibrium. This is accomplished by formulating the mass balance equations for the mixers such that the mass transfer rate of nuclear materials between the aqueous and organic phases is accounted for. 19 refs
Should the coupling constants be mass dependent in the relativistic mean field models
Levai, P.; Lukacs, B.
1986-05-01
Mass-dependent coupling constants are proposed for baryonic resonances in the relativistic mean field model, according to the mass splitting of the SU(6) multiplet. With this choice the negative effective masses are avoided and the system remains nucleon dominated with moderate antidelta abundance. (author)
Martinson, Liisa; Lamersdorf, Norbert; Warfvinge, Per
2005-01-01
Soil chemistry under the Solling clean-rain roof was simulated using the dynamic multi-layer soil chemistry model SAFE, including sulfate adsorption. Soil was sampled in order to parameterize the pH- and sulfate-concentration-dependent sulfate adsorption isotherm used in SAFE. Modeled soil solution chemistry was compared to the 14-year-long time series of monthly measurements of soil solution data at 10 and 100 cm depth. The deposition of N and S under the roof has been reduced by 68% and 53%, respectively, compared to the surrounding area. Despite this, the soil solution concentrations of sulfate are still high (a median of 420 μmol_c/L at 100 cm depth between 2000 and 2002) and the soil base saturation low (approximately 3% in the whole profile in 1998). Sulfate adsorption is an important process in Solling. The soil capacity to adsorb sulfate is large, the modeled adsorbed pool in 2003 down to 100 cm was 1030 kg S/ha, and the measured sulfate concentration is high, due to release of adsorbed sulfate. The addition of sulfate adsorption improved the modeled sulfate dynamics, although the model still slightly underestimated the sulfate concentration at 100 cm. Model predictions show no recovery, based on the criterion of a Bc/Al ratio above 1 in the rooting zone, before the year 2050, independent of future deposition cuts. - Desorption of sulfate still influences soil chemistry
Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay
2010-01-01
In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to satisfy the long-remaining question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the Multi-stage Model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.
Algebraic formulation of collective models. I. The mass quadrupole collective model
Rosensteel, G.; Rowe, D.J.
1979-01-01
This paper is the first in a series of three which together present a microscopic formulation of the Bohr--Mottelson (BM) collective model of the nucleus. In this article the mass quadrupole collective (MQC) model is defined and shown to be a generalization of the BM model. The MQC model eliminates the small-oscillation assumption of BM and also yields the rotational and CM(3) submodels by holonomic constraints on the MQC configuration space. In addition, the MQC model is demonstrated to be an algebraic model, so that the state space of the MQC model carries an irrep of a Lie algebra of microscopic observables, the MQC algebra. An infinite class of new collective models is then given by the various inequivalent irreps of this algebra. A microscopic embedding of the BM model is achieved by decomposing the representation of the MQC algebra on many-particle state space into its irreducible components. In the second paper this decomposition is studied in detail. The third paper presents the symplectic model, which provides the realization of the collective model in the harmonic oscillator shell model
The Uses and Dependency Model of Mass Communication.
Rubin, Alan M.; Windahl, Sven
1986-01-01
Responds to criticism of the uses and gratification model by proposing a modified model integrating the dependency perspective. Suggests that this integrated model broadens the heuristic application of the earlier model. (MS)
Three-dimensional two-phase mass transport model for direct methanol fuel cells
Yang, W.W.; Zhao, T.S.; Xu, C.
2007-01-01
A three-dimensional (3D) steady-state model for liquid-feed direct methanol fuel cells (DMFC) is presented in this paper. This 3D mass transport model is formed by integrating five sub-models, including a modified drift-flux model for the anode flow field, a two-phase mass transport model for the porous anode, a single-phase model for the polymer electrolyte membrane, a two-phase mass transport model for the porous cathode, and a homogeneous mist-flow model for the cathode flow field. The two-phase mass transport models take into account the effect of non-equilibrium evaporation/condensation at the gas-liquid interface. A 3D computer code is then developed based on the integrated model. After being validated against the experimental data reported in the literature, the code was used to numerically investigate transport behaviors at the DMFC anode and their effects on cell performance
Sippel, Judith; Meeßen, Christian; Cacace, Mauro; Mechie, James; Fishwick, Stewart; Heine, Christian; Scheck-Wenderoth, Magdalena; Strecker, Manfred R.
2017-01-01
We present three-dimensional (3-D) models that describe the present-day thermal and rheological state of the lithosphere of the greater Kenya rift region aiming at a better understanding of the rift evolution, with a particular focus on plume-lithosphere interactions. The key methodology applied is the 3-D integration of diverse geological and geophysical observations using gravity modelling. Accordingly, the resulting lithospheric-scale 3-D density model is consistent with (i) reviewed descriptions of lithological variations in the sedimentary and volcanic cover, (ii) known trends in crust and mantle seismic velocities as revealed by seismic and seismological data and (iii) the observed gravity field. This data-based model is the first to image a 3-D density configuration of the crystalline crust for the entire region of Kenya and northern Tanzania. An upper and a basal crustal layer are differentiated, each composed of several domains of different average densities. We interpret these domains to trace back to the Precambrian terrane amalgamation associated with the East African Orogeny and to magmatic processes during Mesozoic and Cenozoic rifting phases. In combination with seismic velocities, the densities of these crustal domains indicate compositional differences. The derived lithological trends have been used to parameterise steady-state thermal and rheological models. These models indicate that crustal and mantle temperatures decrease from the Kenya rift in the west to eastern Kenya, while the integrated strength of the lithosphere increases. Thereby, the detailed strength configuration appears strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localisation and propagation of rifting.
A practical method of predicting client revisit intention in a hospital setting.
Lee, Kyun Jick
2005-01-01
Data mining (DM) models are an alternative to traditional statistical methods for examining whether higher customer satisfaction leads to higher revisit intention. This study used satisfaction data from 906 outpatients, collected in a 1998 nationwide survey in South Korea conducted face-to-face by professional interviewers. Analyses showed that the relationship between overall satisfaction with hospital services and outpatients' revisit intention, with word-of-mouth recommendation as an intermediate variable, developed into a nonlinear relationship. The five strongest predictors of revisit intention were overall satisfaction, intention to recommend to others, awareness of hospital promotion, satisfaction with the physician's kindness, and satisfaction with treatment level.
Dong, Jianping
2011-01-01
The many-body space fractional quantum system is studied using the density matrix method. We give new results for the Thomas-Fermi model and obtain the quantum pressure of the free electron gas. We also show the validity of the Hohenberg-Kohn theorems in space fractional quantum mechanics and generalize density functional theory to fractional quantum mechanics. Highlights: The Thomas-Fermi model is studied within the framework of fractional quantum mechanics. The validity of the HK theorems in space fractional quantum mechanics is shown. Density functional theory is generalized to fractional quantum mechanics.
Revisiting Constructivist Teaching Methods in Ontario Colleges Preparing for Accreditation
Schultz, Rachel A.
2015-01-01
At the time of writing, the first community colleges in Ontario were preparing for transition to an accreditation model from an audit system. This paper revisits constructivist literature, arguing that a more pragmatic definition of constructivism effectively blends positivist and interactionist philosophies to achieve both student centred…
Higgs-boson contributions to gauge-boson mass shifts in extended electroweak models
Moore, S.R.
1985-10-01
In the minimal standard model, the differences between the tree-level and one-loop-corrected predictions for the gauge-boson masses, known as the mass shifts, are of the order of 4%. The dominant contribution is from light-fermion loops. The Higgs-dependent terms are small, even if the Higgs boson is heavy. We have analyzed the mass shifts for models with a more complicated Higgs sector. We use the on-shell renormalization scheme, in which the parameters of the theory are the physical masses and couplings. We have considered the 2-doublet, n-doublet, triplet and doublet-triplet models. We have found that the Z-boson mass prediction has a strong dependence on the charged-Higgs mass. In the limit that the charged Higgs is much heavier than the gauge bosons, the Higgs-dependent terms become significant, and may even cancel the light-fermion terms. In the models with a Higgs triplet, there is also a strong dependence on the neutral-Higgs masses, although this contribution tends to be suppressed in realistic models. The W-boson mass shift does not have a strong Higgs dependence. If we use the Z mass as input in determining the parameters of the theory, a scenario which will become attractive as the mass of the Z is accurately measured in the next few years, we find that the W-boson mass shift exhibits the same sort of behavior, differing from the minimal model for the case of the charged Higgs being heavy. We have found that when radiative corrections are taken into account, models with extended Higgs sectors may differ significantly from the minimal standard model in their predictions for the gauge-boson masses. Thus, an accurate measurement of the masses will help shed light on the structure of the Higgs sector. 68 refs
Raymond C K Chan
BACKGROUND: Neurological soft signs and neurocognitive impairments have long been considered important features of schizophrenia. Previous correlational studies have suggested that there is a significant relationship between neurological soft signs and neurocognitive functions. The purpose of the current study was to examine the underlying relationships between these two distinct constructs with structural equation modeling (SEM). METHODS: 118 patients with schizophrenia and 160 healthy controls were recruited for the current study. The abridged version of the Cambridge Neurological Inventory (CNI) and a set of neurocognitive function tests were administered to all participants. SEM was then conducted independently in these two samples to examine the relationships between neurological soft signs and neurocognitive functions. RESULTS: Both the measurement and structural models showed that the models fit well to the data in both patients and healthy controls. The structural equations also showed that there were modest to moderate associations among neurological soft signs, executive attention, verbal memory, and visual memory, while the healthy controls showed more limited associations. CONCLUSIONS: The current findings indicate that motor coordination, sensory integration, and disinhibition contribute to the latent construct of neurological soft signs, whereas the subset of neurocognitive function tests contributes to the latent constructs of executive attention, verbal memory, and visual memory in the present sample. Greater evidence of neurological soft signs is associated with more severe impairment of executive attention and memory functions. Clinical and theoretical implications of the model findings are discussed.
Haedt-Matt, Alissa A.; Keel, Pamela K.
2011-01-01
The affect regulation model of binge eating, which posits that patients binge eat to reduce negative affect (NA), has received support from cross-sectional and laboratory-based studies. Ecological momentary assessment (EMA) involves momentary ratings and repeated assessments over time and is ideally suited to identify temporal antecedents and…
Sørup, Christian Michel; Jacobsen, Peter
2014-01-01
are entitled safety and satisfaction, waiting time, information delivery, and infrastructure accordingly. As an empirical foundation, a recently published comprehensive survey in 11 Danish EDs is analysed in depth using structural equation modeling (SEM). Consulting the proposed framework, ED decision makers...
Nazem, Mohsen; Trépanier, Martin; Morency, Catherine
2015-01-01
An Enhanced Intervening Opportunities Model (EIOM) is developed for Public Transit (PT). This is a supply-dependent distribution model, singly constrained on trip production, for work trips during morning peak hours (6:00 a.m.-9:00 a.m.) within the Island of Montreal, Canada. Different data sets, including the 2008 Origin-Destination (OD) survey of the Greater Montreal Area, the 2006 Census of Canada, GTFS network data, and the geographical data of the study area, are used. EIOM is a nonlinear model composed of socio-demographics, PT supply data and work location attributes. An enhanced destination ranking procedure is used to calculate the number of spatially cumulative opportunities, the basic variable of EIOM. For comparison, a Basic Intervening Opportunities Model (BIOM) is developed using the basic destination ranking procedure. The main difference between EIOM and BIOM is in the destination ranking procedure: EIOM considers the maximization of a utility function composed of PT level of service and the number of opportunities at the destination, along with the OD trip duration, whereas BIOM is based on a destination ranking derived only from OD trip durations. Analysis confirmed that EIOM is more accurate than BIOM. This study presents a new tool for PT analysts, planners and policy makers to study potential changes in PT trip patterns due to changes in socio-demographic characteristics, PT supply, and other factors. It also opens new opportunities for the development of more accurate PT demand models with new emergent data such as smart card validations.
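The classic intervening-opportunities distribution that such models build on can be sketched as follows: trips from an origin are allocated to destinations in rank order, each opportunity being accepted with constant probability L. This is a generic formulation of the basic model only; the paper's enhanced utility-based destination ranking is not reproduced here, and the parameter values are illustrative assumptions.

```python
import math

def intervening_opportunities_trips(origin_trips, opportunities, L):
    """Basic intervening-opportunities trip distribution. Destinations are
    given in increasing rank order (e.g. by travel time); opportunities[j]
    is the number of opportunities at destination j, and L is the
    probability that a single opportunity is accepted. Returns the trips
    allocated to each destination."""
    trips = []
    cum = 0.0  # cumulative opportunities closer than the current destination
    for v in opportunities:
        p = math.exp(-L * cum) - math.exp(-L * (cum + v))
        trips.append(origin_trips * p)
        cum += v
    return trips
```

Because the terms telescope, total allocated trips equal origin_trips × (1 − exp(−L × total opportunities)), so some trips remain unallocated when opportunities are finite.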
Hiatt, JR; Rivard, MJ
2014-01-01
Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings was used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r = 1 cm and 0.08% at r = 10 cm. Results: The dose rate distribution in the transverse plane did not change beyond 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r > 1.5 cm. Differences exceeding a factor of two were observed near θ = 150° and in contact with the source, but diminished to within 20% at r = 10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm from the source, dose differences exceeding 2% were observed only for θ > 110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source
Revisiting Biomarkers of Total-Body and Partial-Body Exposure in a Baboon Model of Irradiation.
Marco Valente
In case of a mass-casualty radiation event, there is a need to distinguish total-body irradiation (TBI) from partial-body irradiation (PBI) in order to concentrate overwhelmed medical resources on the individuals that would develop an acute radiation syndrome (ARS) and need hematologic support (i.e., mostly TBI victims). To improve the identification and medical care of TBI versus PBI individuals, reliable biomarkers of exposure could be very useful. To investigate this issue, pairs of baboons (n = 18) were exposed to different situations of TBI and PBI corresponding to an equivalent of either 5 Gy 60Co gamma irradiation (5 Gy TBI; 7.5 Gy left hemibody/2.5 Gy right hemibody TBI; 5.55 Gy 90% PBI; 6.25 Gy 80% PBI; 10 Gy 50% PBI; 15 Gy 30% PBI) or 2.5 Gy (2.5 Gy TBI; 5 Gy 50% PBI). More than fifty parameters were evaluated before and after irradiation at several time points up to 200 days. A partial least squares discriminant analysis showed a good distinction of TBI from PBI situations that were equivalent to 5 Gy. Furthermore, all the animals were pooled in two groups, TBI (n = 6) and PBI (n = 12), for comparison using a logistic regression and a nonparametric statistical test. Nine plasmatic biochemical markers and most of the hematological parameters turned out to discriminate between TBI and PBI animals during the prodromal phase and the manifest illness phase. The most significant biomarkers were aspartate aminotransferase, creatine kinase, lactate dehydrogenase, urea, Flt3-ligand, iron, C-reactive protein, absolute neutrophil count and neutrophil-to-lymphocyte ratio for the early period, and Flt3-ligand, iron, platelet count, hemoglobin, monocyte count, absolute neutrophil count and neutrophil-to-lymphocyte ratio for the ARS phase. These results suggest that heterogeneity could be distinguished within a range of 2.5 to 5 Gy TBI.
Radiative corrections to neutrino deep inelastic scattering revisited
Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V.
2005-01-01
Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are re-calculated within the automatic SANC system. Terms with mass singularities are treated including higher-order leading logarithmic corrections. The scheme dependence of corrections due to weak interactions is investigated. The results are implemented into the data analysis of the NOMAD experiment. The present theoretical accuracy in the description of the process is discussed.
Hiemstra, Tjisse; Van Riemsdijk, Willem H.
2009-08-01
A multisite surface complexation (MUSIC) model for ferrihydrite (Fh) has been developed. The surface structure and composition of Fh nanoparticles are described in relation to ion binding and surface charge development. The site densities of the various reactive surface groups, the molar mass, the mass density, the specific surface area, and the particle size are quantified. As derived theoretically, the molar mass and mass density of nanoparticles will depend on the types of surface groups and the corresponding site densities, and will vary with particle size and surface area because of the relatively large contribution of the surface groups in comparison to the mineral core of nanoparticles. The nano-sized (˜2.6 nm) particles of freshly prepared 2-line Fh as a whole have an increased molar mass of M ˜ 101 ± 2 g/mol Fe and a reduced mass density of ˜3.5 ± 0.1 g/cm³, both relative to the mineral core. The specific surface area is ˜650 m²/g. Six-line Fh (5-6 nm) has a molar mass of M ˜ 94 ± 2 g/mol, a mass density of ˜3.9 ± 0.1 g/cm³, and a surface area of ˜280 ± 30 m²/g. Data analysis shows that the mineral core of Fh has an average chemical composition very close to FeOOH with M ˜ 89 g/mol. The mineral core has a mass density of around ˜4.15 ± 0.1 g/cm³, which is between that of feroxyhyte, goethite, and lepidocrocite. These results can be used to constrain structural models for Fh. Singly-coordinated surface groups dominate the surface of ferrihydrite (˜6.0 ± 0.5 nm⁻²). These groups can be present in two structural configurations. In pairs, the groups either form the edge of a single Fe octahedron (˜2.5 nm⁻²) or are present at a single corner (˜3.5 nm⁻²) of two adjacent Fe octahedra. These configurations can form bidentate surface complexes by edge- and double-corner sharing, respectively, and may therefore respond differently to the binding of ions such as uranyl, carbonate, arsenite, phosphate, and others. The relatively low PZC of
Pereyra, Marcelo
2016-01-01
Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...
Models for predicting the mass of lime fruits by some engineering properties.
Miraei Ashtiani, Seyed-Hassan; Baradaran Motie, Jalal; Emadi, Bagher; Aghkhani, Mohammad-Hosein
2014-11-01
Grading fruits based on mass is important in packaging; it reduces waste and increases the marketing value of agricultural produce. The aim of this study was mass modeling of two major cultivars of Iranian limes based on engineering attributes. Models were classified into three groups: (1) single and multiple variable regressions of lime mass and dimensional characteristics; (2) single and multiple variable regressions of lime mass and projected areas; (3) single regression of lime mass based on its actual volume and on volumes calculated by assuming ellipsoid and prolate spheroid shapes. All properties considered in the current study were found to be statistically significant. Models of lime mass based on minor diameter and on first projected area are the most appropriate models in the first and the second classifications, respectively. In the third classification, the best model was obtained on the basis of the prolate spheroid volume. It was finally concluded that a suitable grading system for lime mass is based on prolate spheroid volume.
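The third-classification model above can be sketched directly: the prolate spheroid volume follows from the fruit's two principal diameters, and mass is a single-variable regression on that volume. The regression coefficient k below is a hypothetical placeholder, not the paper's fitted value, and the diameters are illustrative:

```python
import math

def prolate_spheroid_volume(major_d, minor_d):
    """Volume (cm^3) of a prolate spheroid from its two principal
    diameters (cm): V = (pi/6) * a * b^2."""
    return (math.pi / 6.0) * major_d * minor_d ** 2

def predict_lime_mass(major_d, minor_d, k=0.95):
    """Single-variable regression m = k * V; k (g/cm^3) is an
    assumed coefficient for illustration only."""
    return k * prolate_spheroid_volume(major_d, minor_d)

# A lime 6 cm long with a 4 cm minor diameter:
volume = prolate_spheroid_volume(6.0, 4.0)   # ~50.3 cm^3
```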
Engel, Benjamin D; Ludington, William B; Marshall, Wallace F
2009-10-05
The assembly and maintenance of eukaryotic flagella are regulated by intraflagellar transport (IFT), the bidirectional traffic of IFT particles (recently renamed IFT trains) within the flagellum. We previously proposed the balance-point length control model, which predicted that the frequency of train transport should decrease as a function of flagellar length, thus modulating the length-dependent flagellar assembly rate. However, this model was challenged by the differential interference contrast microscopy observation that IFT frequency is length independent. Using total internal reflection fluorescence microscopy to quantify protein traffic during the regeneration of Chlamydomonas reinhardtii flagella, we determined that anterograde IFT trains in short flagella are composed of more kinesin-associated protein and IFT27 proteins than trains in long flagella. This length-dependent remodeling of train size is consistent with the kinetics of flagellar regeneration and supports a revised balance-point model of flagellar length control in which the size of anterograde IFT trains tunes the rate of flagellar assembly.
Model-independent X-ray Mass Determinations for Clusters of Galaxies
Nulsen, Paul
2005-09-01
We propose to use high quality X-ray data from the Chandra archive to determine the mass distributions of about 60 clusters of galaxies over the largest possible range of radii. By avoiding unwarranted assumptions, model-independent methods make best use of high quality data. We will employ two model-independent methods: that used by Nulsen & Boehringer (1995) to determine the mass of the Virgo Cluster, and a new method that will be developed as part of the project. The new method will fit a general mass model directly to the X-ray spectra, making the best possible use of the fitting errors to constrain mass profiles.
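For context, X-ray cluster mass determinations ultimately rest on the hydrostatic-equilibrium relation M(<r) = -(k_B T r)/(G μ m_p) (dln ρ/dln r + dln T/dln r). The sketch below shows that standard relation, not the proposal's new spectral-fitting method; the profile slopes, temperature, and μ = 0.6 are assumed illustrative values:

```python
# CGS constants
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
k_B = 1.381e-16    # Boltzmann constant, erg/K
m_p = 1.673e-24    # proton mass, g
MU = 0.6           # assumed mean molecular weight of the intracluster gas

def hydrostatic_mass(r_cm, T_K, dlnrho_dlnr, dlnT_dlnr):
    """Enclosed mass M(<r) in grams for a spherical cluster in
    hydrostatic equilibrium, given the local logarithmic slopes of
    the gas density and temperature profiles."""
    return -(k_B * T_K * r_cm) / (G * MU * m_p) * (dlnrho_dlnr + dlnT_dlnr)

# Isothermal gas with density falling as r^-2, at r = 1 Mpc, T = 5e7 K:
M = hydrostatic_mass(3.086e24, 5e7, -2.0, 0.0)  # a few 1e14 solar masses
```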
Fermion flavor mixing in models with dynamical mass generation
Beneš, Petr
2010-01-01
Vol. 81, No. 6 (2010), 065029/1-065029/13. ISSN 1550-7998. Institutional research plan: CEZ:AV0Z10480505. Keywords: weak interactions; particle physics; neutrino masses. Subject RIV: BE - Theoretical Physics. Impact factor: 4.964, year: 2010
Modification of the FEM3 model to ensure mass conservation
Gresho, P.M.
1987-01-01
The problem of global mass conservation (lack thereof) in the current anelastic equations solved by FEM3 is described and its cause explained. The additional equations necessary to solve the problem are presented and methods for their incorporation into the current code are suggested. 14 refs
Heterogeneous studies in pulping of wood: Modelling mass transfer of alkali
Simão, João P. F.; Egas, Ana P. V.; Carvalho, M. Graça; Baptista, Cristina M. S. G.; Castro, José Almiro A. M.
2008-01-01
In this paper a heterogeneous lumped parameter model is proposed to describe the mass transfer of effective alkali during the kraft pulping of wood. This model, based on the spatial mean of the concentration profile of effective alkali along the chip thickness, enables the estimation of the effective diffusion coefficient that characterizes the internal resistance to mass transfer and the contribution of the external resistance to mass transfer which has often been neglected. http://www.sc...
Resolution of Reflection Seismic Data Revisited
Hansen, Thomas Mejer; Mosegaard, Klaus; Zunino, Andrea
The Rayleigh Principle states that the minimum separation between two reflectors that allows them to be visually separated is the separation where the wavelet maxima from the two superimposed reflections combine into one maximum. This happens around Δt_res = λ_b/8, where λ_b is the predominant...... lower vertical resolution of reflection seismic data. In the following we will revisit the thin layer model and demonstrate that there is in practice no limit to the vertical resolution using the parameterization of Widess (1973), and that the vertical resolution is limited by the noise in the data...
Putilov, A. V.; Bugaenko, M. V.; Timokhin, D. V.
2017-01-01
In the article, approaches to the modernization of the national education system with the use of IT technologies are offered, a review of the problems and obstacles to such modernization is presented, and concrete steps for adapting the educational process to labor market requirements are described. On the basis of the previously proposed "economic cross" model, strategic directions for the informatization of the educational process are defined, the conditions and intensity of the use of IT technologies at the time of this writing are analyzed, and recommendations are developed on improving known modernization tools and developing new ones for Russian education.
Bai, Peng; Fan, Kaigong; Guo, Xianghai; Zhang, Haocui
2016-01-01
Highlights: • We propose a non-equilibrium mass transfer absorption model instead of a distillation equilibrium model to calculate boron isotopes separation. • We apply the model to calculate the needed column height to meet prescribed separation requirements. - Abstract: To interpret the phenomenon of chemical exchange in boron isotopes separation accurately, the process is specified as an absorption–reaction–desorption hybrid process instead of a distillation equilibrium model, the non-equilibrium mass transfer absorption model is put forward and a mass transfer enhancement factor E is introduced to find the packing height needed to meet the specified separation requirements with MATLAB.
Wave Propagation in Finite Element and Mass-Spring-Dashpot Lattice Models
Holt-Phoenix, Marianne S
2006-01-01
...), and a mass-spring-dashpot lattice model (MSDLM) are investigated. Specifically, the error in the ultrasonic phase speed with variations in Poisson's ratio and angle of incidence is evaluated in each model of an isotropic elastic solid...
Manuel Jimmy Saint-Cyr
2015-12-01
Due to its toxic properties, high stability, and prevalence, the presence of deoxynivalenol (DON) in the food chain is a major threat to food safety and therefore a health risk for both humans and animals. In this study, experiments were carried out with sows and female rats to examine the kinetics of DON after intravenous and oral administration at 100 µg/kg of body weight. After intravenous administration of DON in pigs, a two-compartment model with rapid initial distribution (0.030 ± 0.019 h) followed by a slower terminal elimination phase (1.53 ± 0.54 h) was fitted to the concentration profile of DON in pig plasma. In rats, a short elimination half-life (0.46 h) and a clearance of 2.59 L/h/kg were estimated by sparse sampling non-compartmental analysis. Following oral exposure, DON was rapidly absorbed and reached maximal plasma concentrations (Cmax) of 42.07 ± 8.48 and 10.44 ± 5.87 µg/L plasma after (tmax) 1.44 ± 0.52 and 0.17 h in pigs and rats, respectively. The mean bioavailability of DON was 70.5% ± 25.6% for pigs and 47.3% for rats. In the framework of DON risk assessment, these two animal models could be useful in an exposure scenario in two different ways because of their different bioavailabilities.
Consumption of Mass Communication--Construction of a Model on Information Consumption Behaviour.
Sepstrup, Preben
A general conceptual model on the consumption of information is introduced. Information as the output of the mass media is treated as a product, and a model on the consumption of this product is developed by merging elements from consumer behavior theory and mass communication theory. Chapter I gives basic assumptions about the individual and the…
Wiio, Osmo A.
A more unified approach to communication theory can evolve through systems modeling of information theory, communication modes, and mass media operations. Such systematic analysis proposes, as is the case here, that information models be based upon combinations of energy changes and exchanges and changes in receiver systems. The mass media is…
Electromagnetic mass differences in the SU(3) x U(1) gauge model
Maharana, K.; Sastry, C.V.
1975-01-01
In this note we point out that the electromagnetic mass differences of the pion and kaon in the SU(3) times U(1) model are the same as in Weinberg's model except for the differences in the masses of the gauge bosons
Bounds on the Higgs mass in the standard model and the minimal supersymmetric standard model
Quiros, M.
1995-01-01
Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model can develop a non-standard minimum for values of the field much larger than the weak scale. In those cases the standard minimum becomes metastable and the possibility of decay to the non-standard one arises. Comparison of the decay rate to the non-standard minimum at finite (and zero) temperature with the corresponding expansion rate of the Universe allows one to identify the region, in the (M_H, M_t) plane, where the Higgs field is sitting at the standard electroweak minimum. In the Minimal Supersymmetric Standard Model, approximate analytical expressions for the Higgs mass spectrum and couplings are worked out, providing an excellent approximation to the numerical results which include all next-to-leading-log corrections. An appropriate treatment of squark decoupling allows one to consider large values of the stop and/or sbottom mixing parameters and thus fix a reliable upper bound on the mass o...
Mechanistic model of mass-specific basal metabolic rate: evaluation in healthy young adults.
Wang, Z; Bosy-Westphal, A; Schautz, B; Müller, M
2011-12-01
Mass-specific basal metabolic rate (mass-specific BMR), defined as the resting energy expenditure per unit body mass per day, is an important parameter in energy metabolism research. However, a mechanistic explanation for the magnitude of mass-specific BMR remains lacking. The objective of the present study was to validate the applicability of a proposed mass-specific BMR model in healthy adults. A mechanistic model was developed at the organ-tissue level, mass-specific BMR = Σ(K_i × F_i), where F_i is the fraction of body mass as individual organs and tissues, and K_i is the specific resting metabolic rate of the major organs and tissues. The F_i values were measured by multiple MRI scans and the K_i values were suggested by Elia in 1992. A database of healthy non-elderly non-obese adults (age 20-49 yrs) was used. The measured and predicted mass-specific BMR of all subjects was 21.6 ± 1.9 (mean ± SD) and 21.7 ± 1.6 kcal/kg per day, respectively. The measured mass-specific BMR was correlated with the predicted mass-specific BMR (r = 0.82), and the difference between measured and predicted mass-specific BMR was plotted versus the average of measured and predicted mass-specific BMR. In conclusion, the proposed mechanistic model was validated in non-elderly non-obese adults and can help to understand the inherent relationship between mass-specific BMR and body composition.
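The organ-tissue model in this abstract can be sketched directly. The K_i values below are the specific metabolic rates commonly attributed to Elia (1992), in kcal per kg of tissue per day; the body-composition fractions F_i are illustrative round numbers, not the MRI-measured data of the study:

```python
# Specific resting metabolic rates K_i (kcal per kg of tissue per day),
# as commonly cited from Elia (1992).
K = {"liver": 200.0, "brain": 240.0, "heart": 440.0, "kidneys": 440.0,
     "skeletal_muscle": 13.0, "adipose": 4.5, "residual": 12.0}

def mass_specific_bmr(fractions):
    """mass-specific BMR = sum(K_i * F_i), with F_i the fraction of
    body mass contributed by each organ or tissue (must sum to 1)."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(K[name] * f for name, f in fractions.items())

# Illustrative composition for a reference adult (NOT study data):
F = {"liver": 0.026, "brain": 0.020, "heart": 0.005, "kidneys": 0.004,
     "skeletal_muscle": 0.400, "adipose": 0.210, "residual": 0.335}
bmr = mass_specific_bmr(F)  # on the order of 20-25 kcal/kg per day
```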
Ditlevsen, Ove Dalager
2004-01-01
The derivation of the life quality index (LQI) is revisited for revision. This revision takes into account the unpaid but necessary work time needed to stay alive in clean and healthy conditions, to be fit for effective wealth-producing work, and to enjoy free time. Dimension analysis...... at birth should not vary between countries. Finally the distributional assumptions are relaxed as compared to the assumptions made in earlier work by the author. These assumptions concern the calculation of the life expectancy change due to the removal of an accident source. Moreover a simple public...... consistency problems with the standard power function expression of the LQI are pointed out. It is emphasized that the combination coefficient in the convex differential combination between the relative differential of the gross domestic product per capita and the relative differential of the expected life...
Balcerak, Ernie
2012-12-01
In January 1994, the two geostationary satellites known as Anik-E1 and Anik-E2, operated by Telesat Canada, failed one after the other within 9 hours, leaving many northern Canadian communities without television and data services. The outage, which shut down much of the country's broadcast television for hours and cost Telesat Canada more than $15 million, generated significant media attention. Lam et al. used publicly available records to revisit the event; they looked at failure details, media coverage, recovery effort, and cost. They also used satellite and ground data to determine the precise causes of those satellite failures. The researchers traced the entire space weather event from conditions on the Sun through the interplanetary medium to the particle environment in geostationary orbit.
Klein's double discontinuity revisited
Winsløw, Carl; Grønbæk, Niels
2014-01-01
Much effort and research has been invested into understanding and bridging the ‘gaps’ which many students experience in terms of contents and expectations as they begin university studies with a heavy component of mathematics, typically in the form of calculus courses. We have several studies...... of bridging measures, success rates and many other aspects of these “entrance transition” problems. In this paper, we consider the inverse transition, experienced by university students as they revisit core parts of high school mathematics (in particular, calculus) after completing the undergraduate...... mathematics courses which are mandatory to become a high school teacher of mathematics. To what extent does the “advanced” experience enable them to approach the high school calculus in a deeper and more autonomous way ? To what extent can “capstone” courses support such an approach ? How could it be hindered...
Reframing in dentistry: Revisited
Sivakumar Nuvvula
2013-01-01
The successful practice of dentistry involves a good combination of technical skills and soft skills. Soft skills or communication skills are not taught extensively in dental schools, and they can be challenging to learn and apply at times when treating dental patients. Guiding the child's behavior in the dental operatory is one of the preliminary steps to be taken by the pediatric dentist, and one who can successfully modify the behavior can pave the way for a lifetime of comprehensive oral care. This article is an attempt to revisit a simple behavior guidance technique, reframing, and to explain the possible psychological perspectives behind it for better use in clinical practice.
Alexander, Patrick M.; Tedesco, Marco; Schlegel, Nicole-Jeanne; Luthcke, Scott B.; Fettweis, Xavier; Larour, Eric
2016-06-01
Improving the ability of regional climate models (RCMs) and ice sheet models (ISMs) to simulate spatiotemporal variations in the mass of the Greenland Ice Sheet (GrIS) is crucial for prediction of future sea level rise. While several studies have examined recent trends in GrIS mass loss, studies focusing on mass variations at sub-annual and sub-basin-wide scales are still lacking. At these scales, processes responsible for mass change are less well understood and modeled, and could potentially play an important role in future GrIS mass change. Here, we examine spatiotemporal variations in mass over the GrIS derived from the Gravity Recovery and Climate Experiment (GRACE) satellites for the January 2003-December 2012 period using a "mascon" approach, with a nominal spatial resolution of 100 km, and a temporal resolution of 10 days. We compare GRACE-estimated mass variations against those simulated by the Modèle Atmosphérique Régionale (MAR) RCM and the Ice Sheet System Model (ISSM). In order to properly compare spatial and temporal variations in GrIS mass from GRACE with model outputs, we find it necessary to spatially and temporally filter model results to reproduce leakage of mass inherent in the GRACE solution. Both modeled and satellite-derived results point to a decline (of -178.9 ± 4.4 and -239.4 ± 7.7 Gt yr-1 respectively) in GrIS mass over the period examined, but the models appear to underestimate the rate of mass loss, especially in areas below 2000 m in elevation, where the majority of recent GrIS mass loss is occurring. On an ice-sheet-wide scale, the timing of the modeled seasonal cycle of cumulative mass (driven by summer mass loss) agrees with the GRACE-derived seasonal cycle, within limits of uncertainty from the GRACE solution. However, on sub-ice-sheet-wide scales, some areas exhibit significant differences in the timing of peaks in the annual cycle of mass change. At these scales, model biases, or processes not accounted for by models related
Predictors and Outcomes of Revisits in Older Adults Discharged from the Emergency Department.
de Gelder, Jelle; Lucke, Jacinta A; de Groot, Bas; Fogteloo, Anne J; Anten, Sander; Heringhaus, Christian; Dekkers, Olaf M; Blauw, Gerard J; Mooijaart, Simon P
2018-04-01
To study predictors of emergency department (ED) revisits and the association between ED revisits and 90-day functional decline or mortality. Multicenter cohort study. One academic and two regional Dutch hospitals. Older adults discharged from the ED (N=1,093). At baseline, data on demographic characteristics, illness severity, and geriatric parameters (cognition, functional capacity) were collected. All participants were prospectively followed for an unplanned revisit within 30 days and for functional decline and mortality 90 days after the initial visit. The median age was 79 (interquartile range 74-84), and 114 participants (10.4%) had an ED revisit within 30 days of discharge. Age (hazard ratio (HR)=0.96, 95% confidence interval (CI)=0.92-0.99), male sex (HR=1.61, 95% CI=1.05-2.45), polypharmacy (HR=2.06, 95% CI=1.34-3.16), and cognitive impairment (HR=1.71, 95% CI=1.02-2.88) were independent predictors of a 30-day ED revisit. The area under the receiver operating characteristic curve to predict an ED revisit was 0.65 (95% CI=0.60-0.70). In a propensity score-matched analysis, individuals with an ED revisit were at higher risk (odds ratio=1.99 95% CI=1.06-3.71) of functional decline or mortality. Age, male sex, polypharmacy, and cognitive impairment were independent predictors of a 30-day ED revisit, but no useful clinical prediction model could be developed. However, an early ED revisit is a strong new predictor of adverse outcomes in older adults. © 2018 The Authors. The Journal of the American Geriatrics Society published by Wiley Periodicals, Inc. on behalf of The American Geriatrics Society.
Collet, R.; Nordlund, Å.; Asplund, M.; Hayek, W.; Trampedach, R.
2018-04-01
We present an abundance analysis of the low-metallicity benchmark red giant star HD 122563 based on realistic, state-of-the-art, high-resolution, three-dimensional (3D) model stellar atmospheres including non-grey radiative transfer through opacity binning with 4, 12, and 48 bins. The 48-bin 3D simulation reaches temperatures lower by ˜300-500 K than the corresponding 1D model in the upper atmosphere. Small variations in the opacity binning, adopted line opacities, or chemical mixture can cool the photospheric layers by a further ˜100-300 K and alter the effective temperature by ˜100 K. A 3D local thermodynamic equilibrium (LTE) spectroscopic analysis of Fe I and Fe II lines gives discrepant results in terms of derived Fe abundance, which we ascribe to non-LTE effects and systematic errors on the stellar parameters. We also determine C, N, and O abundances by simultaneously fitting CH, OH, NH, and CN molecular bands and lines in the ultraviolet, visible, and infrared. We find a small positive 3D-1D abundance correction for carbon (+0.03 dex) and negative ones for nitrogen (-0.07 dex) and oxygen (-0.34 dex). From the analysis of the [O I] line at 6300.3 Å, we derive a significantly higher oxygen abundance than from molecular lines (+0.46 dex in 3D and +0.15 dex in 1D). We rule out important OH photodissociation effects as possible explanation for the discrepancy and note that lowering the surface gravity would reduce the oxygen abundance difference between molecular and atomic indicators.
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. Getting maximum performance out of MICs, however, requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
Haedt-Matt, Alissa A.; Keel, Pamela K.
2011-01-01
The affect regulation model of binge eating, which posits that patients binge eat to reduce negative affect (NA), has received support from cross-sectional and laboratory-based studies. Ecological momentary assessment (EMA) involves momentary ratings and repeated assessments over time and is ideally suited to identify temporal antecedents and consequences of binge eating. This meta-analytic review includes EMA studies of affect and binge eating. Electronic database and manual searches produced 36 EMA studies with N = 968 participants (89% Caucasian women). Meta-analyses examined changes in affect before and after binge eating using within-subjects standardized mean gain effect sizes (ES). Results supported greater NA preceding binge eating relative to average affect (ES = .63) and affect before regular eating (ES = .68). However, NA increased further following binge episodes (ES = .50). Preliminary findings suggested that NA decreased following purging in Bulimia Nervosa (ES = −.46). Moderators included diagnosis (with significantly greater elevations of NA prior to bingeing in Binge Eating Disorder compared to Bulimia Nervosa) and binge definition (with significantly smaller elevations of NA before binge versus regular eating episodes for the DSM definition compared to lay definitions of binge eating). Overall, results fail to support the affect regulation model of binge eating and challenge reductions in NA as a maintenance factor for binge eating. However, limitations of this literature include unidimensional analyses of NA and inadequate examination of affect during binge eating as binge eating may regulate only specific facets of affect or may reduce NA only during the episode. PMID:21574678
Möller, M.; Obleitner, F.; Reijmer, C.H.; Pohjola, V.A.; Glowacki, P.; Kohler, J.
2016-01-01
Large-scale modeling of glacier mass balance relies often on the output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available
Washburn, J.F.; Kaszeta, F.E.; Simmons, C.S.; Cole, C.R.
1980-07-01
This report presents the results of the development of a one-dimensional radionuclide transport code, MMT1D (Multicomponent Mass Transport), for the AEGIS Program. Multicomponent Mass Transport is a numerical solution technique that uses the discrete-parcel-random-walk (DPRW) method to directly simulate the migration of radionuclides. MMT1D accounts for: convection; dispersion; sorption-desorption; first-order radioactive decay; and n-membered radioactive decay chains. Comparisons between MMT1D and an analytical solution for a similar problem show that: MMT1D agrees very closely with the analytical solution; MMT1D has no cumulative numerical dispersion like that associated with solution techniques such as finite differences and finite elements; for current AEGIS applications, relatively few parcels are required to produce adequate results; and the power of MMT1D is the flexibility of the code in being able to handle complex problems for which analytical solutions cannot be obtained. Multicomponent Mass Transport (MMT) codes were developed at Pacific Northwest Laboratory to predict the movement of radiocontaminants in the saturated and unsaturated sediments of the Hanford Site. All MMT models require ground-water flow patterns that have been previously generated by a hydrologic model. This report documents the computer code and operating procedures of a third generation of the MMT series: MMT1D differs from previous versions by simulating the mass transport processes in systems with radionuclide decay chains. Although MMT1D is a one-dimensional code, the user is referred to the documentation of the theoretical and numerical procedures of the three-dimensional MMT-DPRW code for discussion of expediency, verification, and error-sensitivity analysis.
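The DPRW idea (advect each parcel, add a random dispersive displacement, thin the parcel population by first-order decay) can be sketched in a few lines. This is a generic illustration with assumed parameter values, not the MMT1D code itself:

```python
import math
import random

def dprw_step(positions, velocity, dispersion, dt, decay_const):
    """One DPRW time step: deterministic advection plus a Gaussian
    random-walk displacement with variance 2*D*dt, followed by
    first-order radioactive decay via random thinning of parcels."""
    sigma = math.sqrt(2.0 * dispersion * dt)
    moved = [x + velocity * dt + random.gauss(0.0, sigma) for x in positions]
    survive = math.exp(-decay_const * dt)
    return [x for x in moved if random.random() < survive]

# 1000 parcels released at x = 0, advanced one time step:
random.seed(0)
parcels = dprw_step([0.0] * 1000, velocity=1.0, dispersion=0.5,
                    dt=1.0, decay_const=0.1)
```

Because each parcel moves independently, the method introduces no grid-based numerical dispersion, which is the property the report highlights against finite-difference and finite-element solvers.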
Lumped Mass Modeling for Local-Mode-Suppressed Element Connectivity
Joung, Young Soo; Yoon, Gil Ho; Kim, Yoon Young
2005-01-01
connectivity parameterization (ECP) is employed. On the way to the ultimate crashworthy structure optimization, we are now developing a local mode-free topology optimization formulation that can be implemented in the ECP method. In fact, the local mode-freeing strategy developed here can also be used directly...... experiencing large structural changes, appears to be still poor. In ECP, the nodes of the domain-discretizing elements are connected by zero-length one-dimensional elastic links having varying stiffness. For computational efficiency, every elastic link is now assumed to have two lumped masses at its ends....... Choosing appropriate penalization functions for lumped mass and link stiffness is important for local mode-free results. However, unless the objective and constraint functions are carefully selected, it is difficult to obtain clear black-and-white results. It is shown that the present formulation is also...
Masses and mixing angles in SU(5) gauge model
Nandi, S.; Tanaka, K.
1979-01-01
Georgi and Jarlskog mass relations m_μ/m_e = 9 m_s/m_d and m_b = m_τ are obtained above the grand unification mass M = 10^15 GeV with two 5's and one 45 Higgs representations of SU(5) and a discrete symmetry. In the lowest order, the Kobayashi-Maskawa angles are found to be s_2 = -(m_c/m_t)^{1/2} and s_3 = -(m_u/m_t)^{1/2}/s_1, where s_1 is the sine of the Cabibbo angle. CP violation is considered, and the b quark decays predominantly into c quarks with a lifetime of τ_b ≈ 10^-13 s for m_t = 25 GeV
Modeling energy flexibility of low energy buildings utilizing thermal mass
Foteinaki, Kyriaki; Heller, Alfred; Rode, Carsten
2016-01-01
In the future energy system a considerable increase in the penetration of renewable energy is expected, challenging the stability of the system, as both production and consumption will have fluctuating patterns. Hence, the concept of energy flexibility will be necessary in order for the consumption to match the production patterns, shifting demand from on-peak hours to off-peak hours. Buildings could act as flexibility suppliers to the energy system, through load shifting potential, provided that the large thermal mass of the building stock could be utilized for energy storage. In the present study the load shifting potential of an apartment of a low energy building in Copenhagen is assessed, utilizing the heat storage capacity of the thermal mass when the heating system is switched off for relieving the energy system. It is shown that when using a 4-hour preheating period before switching off...
Cristallo, S.; Straniero, O.; Piersanti, L.; Gobrecht, D.
2015-08-01
We present a new set of models for intermediate-mass asymptotic giant branch (AGB) stars (4.0, 5.0, and 6.0 M⊙) at different metallicities (-2.15 ≤ [Fe/H] ≤ +0.15). This set integrates the existing models for low-mass AGB stars (1.3 ≤ M/M⊙ ≤ 3.0) already included in the FRUITY database. We describe the physical and chemical evolution of the computed models from the main sequence up to the end of the AGB phase. Due to less efficient third dredge up episodes, models with large core masses show modest surface enhancements. This effect is due to the fact that the interpulse phases are short and, therefore, thermal pulses (TPs) are weak. Moreover, the high temperature at the base of the convective envelope prevents it from deeply penetrating the underlying radiative layers. Depending on the initial stellar mass, the heavy element nucleosynthesis is dominated by different neutron sources. In particular, the s-process distributions of the more massive models are dominated by the 22Ne(α,n)25Mg reaction, which is efficiently activated during TPs. At low metallicities, our models undergo hot bottom burning and hot third dredge up. We compare our theoretical final core masses to available white dwarf observations. Moreover, we quantify the influence intermediate-mass models have on the carbon star luminosity function. Finally, we present the upgrade of the FRUITY web interface, which now also includes the physical quantities of the TP-AGB phase for all of the models included in the database (ph-FRUITY).
Comprehensive and critical review of the predictive properties of the various mass models
Haustein, P.E.
1984-01-01
Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix, Monahan, Serduke, Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to correctly predict the new masses. For some models distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models
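The kind of numerical intercomparison described here can be sketched as an RMS deviation of each model's predictions from the newly measured masses; all numbers below are invented for illustration:

```python
# Invented numbers: newly measured masses (MeV) and two models' predictions.
new_masses = {"A100": 931.2, "A101": 940.8, "A102": 952.1}
model_a = {"A100": 930.9, "A101": 941.5, "A102": 951.0}
model_b = {"A100": 932.5, "A101": 939.0, "A102": 954.0}

def rms_deviation(pred, measured):
    """Root-mean-square deviation of model predictions from measured masses."""
    diffs = [pred[k] - measured[k] for k in measured]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

rms_a = rms_deviation(model_a, new_masses)  # smaller = better extrapolation
rms_b = rms_deviation(model_b, new_masses)
```

A model's RMS over the fitted mass surface and its RMS over *new* masses can differ substantially, which is exactly the weak correlation the review points out.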
MEHDI M. POORANGI
2013-01-01
The current climate of business necessitates competition that is often tough and unpredictable. All organizations, regardless of their size and scope of operation, are facing severe competitive challenges. In order to cope with this phenomenon, managers are turning to e-commerce in their respective organizations. The present study hinges upon exploring and explaining the different dimensions of the adoption of e-commerce among small and medium enterprises, based on the five factors of Rogers' Diffusion of Innovation model. In this study, we employed survey methods: a questionnaire was distributed to 1,200 managers and employees in the manufacturing, service and agricultural sectors by email, with a response rate of 10%. The results gleaned from this study posit that relative advantage is influential vis-à-vis e-commerce adoption. Trialability and observability factors affect the level of confidence of management, which in turn influences e-commerce adoption. Meanwhile, the existing culture of a company affects the resistance of employees, which in turn negatively affects e-commerce adoption, while complexity does not significantly influence e-commerce adoption.
Leukemia and ionizing radiation revisited
Cuttler, J.M. [Cuttler & Associates Inc., Vaughan, Ontario (Canada); Welsh, J.S. [Loyola University-Chicago, Dept. or Radiation Oncology, Stritch School of Medicine, Maywood, Illinois (United States)
2016-03-15
A world-wide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. In spite of the large amount of evidence that contradicts the cancer predictions, this fear continues. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors in the combined exposed populations of Hiroshima and Nagasaki invalidates use of the LNT model for assessing the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on about 96,800 humans, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis. (author)
Revisiting the safety of aspartame.
Choudhary, Arbind Kumar; Pretorius, Etheresia
2017-09-01
Aspartame is a synthetic dipeptide artificial sweetener, frequently used in foods, medications, and beverages, notably carbonated and powdered soft drinks. Since 1981, when aspartame was first approved by the US Food and Drug Administration, researchers have debated both its recommended safe dosage (40 mg/kg/d) and its general safety to organ systems. This review examines papers published between 2000 and 2016 on both the safe dosage and higher-than-recommended dosages and presents a concise synthesis of current trends. Data on the safe aspartame dosage are controversial, and the literature suggests there are potential side effects associated with aspartame consumption. Since aspartame consumption is on the rise, the safety of this sweetener should be revisited. Most of the literature available on the safety of aspartame is included in this review. Safety studies are based primarily on animal models, as data from human studies are limited. The existing animal studies and the limited human studies suggest that aspartame and its metabolites, whether consumed in quantities significantly higher than the recommended safe dosage or within recommended safe levels, may disrupt the oxidant/antioxidant balance, induce oxidative stress, and damage cell membrane integrity, potentially affecting a variety of cells and tissues and causing a deregulation of cellular function, ultimately leading to systemic inflammation. © The Author(s) 2017. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Higgs-boson contributions to gauge-boson mass shifts in extended electroweak models
Moore, S.R.
1985-01-01
The author analyzes the mass shifts for models with a more complicated Higgs sector, using the on-shell renormalization scheme, in which the parameters of the theory are the physical masses and couplings. The 2-doublet, n-doublet, triplet and doublet-triplet models are considered. It is found that the Z-boson mass prediction has a strong dependence on the charged-Higgs mass. In the limit that the charged Higgs is much heavier than the gauge bosons, the Higgs-dependent terms become significant, and may even cancel the light-fermion terms. If the Z mass is used as input in determining the parameters of the theory, a scenario which will become attractive as the mass of the Z is accurately measured in the next few years, the W-boson mass shift exhibits the same sort of behavior, differing from the minimal model when the charged Higgs is heavy. The author finds that when the radiative corrections are taken into account, models with extended Higgs sectors may differ significantly from the minimal standard model in their predictions for the gauge-boson masses. Thus, an accurate measurement of the masses will help shed light on the structure of the Higgs sector
Exact Mass-Coupling Relation for the Homogeneous Sine-Gordon Model.
Bajnok, Zoltán; Balog, János; Ito, Katsushi; Satoh, Yuji; Tóth, Gábor Zsolt
2016-05-06
We derive the exact mass-coupling relation of the simplest multiscale quantum integrable model, i.e., the homogeneous sine-Gordon model with two mass scales. The relation is obtained by comparing the perturbed conformal field theory description of the model valid at short distances to the large distance bootstrap description based on the model's integrability. In particular, we find a differential equation for the relation by constructing conserved tensor currents, which satisfy a generalization of the Θ sum rule Ward identity. The mass-coupling relation is written in terms of hypergeometric functions.
A one-dimensional model of resonances with a delta barrier and mass jump
Alvarez, J.J.; Gadella, M.; Heras, F.J.H.; Nieto, L.M.
2009-01-01
In this Letter, we present a one-dimensional model that includes a hard core at the origin, a Dirac delta barrier at a point on the positive semiaxis and a mass jump at the same point. We study the effect of this mass jump on the behavior of the resonances of the model. We obtain an infinite number of resonances for this situation, showing that in the case of a mass jump the imaginary part of the resonance poles tends to a fixed value depending on the quotient of masses, and demonstrate that none of these resonances is degenerate.
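A rough numerical sketch of such a model (with ħ = 1 and all parameter values invented): a hard core at x = 0 forces ψ = sin(k₁x) inside, a delta barrier of strength g sits at x = a, and the mass changes from m₁ to m₂ there. Resonance poles are complex zeros of the outgoing-wave matching condition, found here by Newton iteration:

```python
import cmath

# Invented parameters (hbar = 1): hard core at x = 0, delta barrier of
# strength g at x = a, mass m1 on (0, a) and m2 beyond the barrier.
m1, m2, a, g = 1.0, 2.0, 1.0, 5.0

def outgoing_condition(k1):
    """Vanishes at resonance poles: matches sin(k1 x) inside to a purely
    outgoing wave exp(i k2 x) outside, including the delta-jump condition
    psi'(a+)/(2 m2) - psi'(a-)/(2 m1) = g psi(a)."""
    k2 = k1 * cmath.sqrt(m2 / m1)
    return (1j * k2 * cmath.sin(k1 * a) / (2.0 * m2)
            - k1 * cmath.cos(k1 * a) / (2.0 * m1)
            - g * cmath.sin(k1 * a))

def newton(f, z, steps=80, h=1e-7):
    """Newton iteration with a central-difference numerical derivative."""
    for _ in range(steps):
        dz = (f(z + h) - f(z - h)) / (2.0 * h)
        z = z - f(z) / dz
    return z

root = newton(outgoing_condition, 2.8 - 0.1j)   # pole near k1*a ~ pi
```

The root sits in the lower half of the complex k plane (negative imaginary part), as resonance poles must; scanning higher starting guesses picks out the infinite tower of resonances.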
An alternative mass model for galactic dark coronae
Ninković S.
2001-01-01
A spherically symmetric mass distribution with two scale parameters for the dark corona of a (spiral) galaxy is considered as an alternative to the usually applied quasi-isothermal sphere. Examination of the rotation curve produced by this distribution over a limited interval of the distance to the rotation axis shows that it can be a successful alternative to the usual approximation of the quasi-isothermal sphere. This is important taking into account that the potential formula considered in the present paper can be easily generalized towards axial symmetry.
Mathematical Models for the Apparent Mass of the Seated Human Body Exposed to Vertical Vibration
Wei, L.; Griffin, M. J.
1998-05-01
Alternative mathematical models of the vertical apparent mass of the seated human body are developed. The optimum parameters of four models (two single-degree-of-freedom models and two two-degree-of-freedom models) are derived from the mean measured apparent masses of 60 subjects (24 men, 24 women, 12 children) previously reported. The best fits were obtained by fitting the phase data with single-degree-of-freedom and two-degree-of-freedom models having rigid support structures. For these two models, curve fitting was performed on each of the 60 subjects (so as to obtain optimum model parameters for each subject), for the averages of each of the three groups of subjects, and for the entire group of subjects. The values obtained are tabulated. Use of a two-degree-of-freedom model provided a better fit to the phase of the apparent mass at frequencies greater than about 8 Hz and an improved fit to the modulus of the apparent mass at frequencies around 5 Hz. It is concluded that the two-degree-of-freedom model provides an apparent mass similar to that of the human body, but this does not imply that the body moves in the same manner as the masses in this optimized two-degree-of-freedom model.
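For the single-degree-of-freedom model with a rigid support, the apparent mass at the seat has a closed form, M(jω) = m(k + jωc)/(k − mω² + jωc). The sketch below uses invented parameters (50 kg moving mass, ~5 Hz resonance, 30% damping), not the paper's fitted values:

```python
import math

def apparent_mass(f_hz, m, k, c):
    """Complex apparent mass F/a at the seat of a single-DOF base-excited
    mass-spring-damper with a rigid support structure."""
    w = 2.0 * math.pi * f_hz
    return m * (k + 1j * w * c) / (k - m * w * w + 1j * w * c)

# Invented parameters: 50 kg moving mass, ~5 Hz resonance, 30% damping.
m = 50.0
k = m * (2.0 * math.pi * 5.0) ** 2      # stiffness tuned to 5 Hz
c = 2.0 * 0.3 * math.sqrt(k * m)        # damping for zeta = 0.3

low = abs(apparent_mass(0.5, m, k, c))    # ~ the static (rigid-body) mass
peak = abs(apparent_mass(5.0, m, k, c))   # resonance amplification near 5 Hz
```

At low frequency the modulus approaches the static mass; near the ~5 Hz resonance it is amplified by roughly a factor of two, the qualitative shape reported for measured seated-body apparent mass.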
Coronal Mass Ejections: Models and Their Observational Basis
P. F. Chen
2011-04-01
Full Text Available Coronal mass ejections (CMEs are the largest-scale eruptive phenomenon in the solar system, expanding from active region-sized nonpotential magnetic structure to a much larger size. The bulk of plasma with a mass of ∼10^11 – 10^13 kg is hauled up all the way out to the interplanetary space with a typical velocity of several hundred or even more than 1000 km s^-1, with a chance to impact our Earth, resulting in hazardous space weather conditions. They involve many other much smaller-sized solar eruptive phenomena, such as X-ray sigmoids, filament/prominence eruptions, solar flares, plasma heating and radiation, particle acceleration, EIT waves, EUV dimmings, Moreton waves, solar radio bursts, and so on. It is believed that, by shedding the accumulating magnetic energy and helicity, they complete the last link in the chain of the cycling of the solar magnetic field. In this review, I try to explicate our understanding on each stage of the fantastic phenomenon, including their pre-eruption structure, their triggering mechanisms and the precursors indicating the initiation process, their acceleration and propagation. Particular attention is paid to clarify some hot debates, e.g., whether magnetic reconnection is necessary for the eruption, whether there are two types of CMEs, how the CME frontal loop is formed, and whether halo CMEs are special.
Solvable Model for Dynamic Mass Transport in Disordered Geophysical Media
Marder, M.; Eftekhari, Behzad; Patzek, Tadeusz
2018-01-01
We present an analytically solvable model for transport in geophysical materials on large length and time scales. It describes the flow of gas to a complicated absorbing boundary over long periods of time. We find a solution to this model using Green's function techniques, and apply the solution to three absorbing networks of increasing complexity.
Dynamic plant uptake modelling and mass flux estimation
Rein, Arno; Bauer-Gottwein, Peter; Trapp, Stefan
2011-01-01
in environmental systems at different scales. Feedback mechanisms between plants and hydrological systems can play an important role. However, they have received little attention to date. Here, a new model concept for dynamic plant uptake models applying analytical matrix solutions is presented, which can...
Normal and Special Models of Neutrino Masses and Mixings
Altarelli, Guido
2005-01-01
One can make a distinction between "normal" and "special" models. For normal models $\theta_{23}$ is not too close to maximal and $\theta_{13}$ is not too small, typically a small power of the self-suggesting order parameter $\sqrt{r}$, with $r=\Delta m_{sol}^2/\Delta m_{atm}^2 \sim 1/35$. Special models are those where some symmetry or dynamical feature assures in a natural way the near vanishing of $\theta_{13}$ and/or of $\theta_{23}-\pi/4$. Normal models are conceptually more economical and much simpler to construct. Here we focus on special models, in particular a recent one based on A4 discrete symmetry and extra dimensions that leads in a natural way to a Harrison-Perkins-Scott mixing matrix.
Discrete fracture modelling of the Finnsjoen rock mass: Phase 2
Geier, J.E.; Axelsson, C.L.; Haessler, L.; Benabderrahmane, A.
1992-04-01
A discrete fracture network (DFN) model of the Finnsjoen site was derived from field data, and used to predict block-scale flow and transport properties. The DFN model was based on a compound Poisson process, with stochastic fracture zones, and individual fractures concentrated around the fracture zones. This formulation was used to represent the multitude of fracture zones at the site which could be observed on lineament maps and in boreholes, but were not the focus of detailed characterization efforts. Due to a shortage of data for fracture geometry at depth, distributions of fracture orientation and size were assumed to be uniform throughout the site. Transmissivity within individual fracture planes was assumed to vary according to a fractal model. Constant-head packer tests were simulated with the model, and the observed transient responses were compared with actual tests in terms of distributions of interpreted transmissivity and flow dimension, to partially validate the model. Both simulated and actual tests showed a range of flow dimension from sublinear to spherical, indicating local variations in the connectivity of the fracture population. A methodology was developed for estimation of an effective stochastic continuum from the DFN model, but this was only partly demonstrated. Directional conductivities for 40 m blocks were estimated using the DFN model. These show extremely poor correlation with results of multiple packer tests in the same blocks, indicating possible limitations of small-scale packer tests for predicting block-scale properties. Estimates are given of effective flow porosity and flow wetted surface, based on the block-scale flow fields calculated by the DFN model, and probabilistic models for the relationships among local fracture transmissivity, void space, and specific surface. The database for constructing these models is extremely limited. A review is given of the existing database for single fracture hydrologic properties. (127 refs)
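The stochastic generation step underlying a DFN can be illustrated with a toy 2-D Poisson network: fracture centres uniform in a square region, orientations uniform, and lognormal trace lengths (all distribution parameters invented, and without the fracture-zone clustering used at Finnsjoen):

```python
import math
import random

def generate_dfn(n_fractures, region, rng):
    """Sample a simple 2-D Poisson DFN: centres uniform in a square region,
    orientations uniform in [0, pi), trace lengths lognormal."""
    fractures = []
    for _ in range(n_fractures):
        cx, cy = rng.uniform(0.0, region), rng.uniform(0.0, region)
        theta = rng.uniform(0.0, math.pi)         # fracture orientation
        length = rng.lognormvariate(1.0, 0.5)     # trace length (m), invented
        dx = 0.5 * length * math.cos(theta)
        dy = 0.5 * length * math.sin(theta)
        fractures.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return fractures

rng = random.Random(42)
dfn = generate_dfn(200, region=40.0, rng=rng)     # a 40 m x 40 m block
mean_length = sum(math.dist(a, b) for a, b in dfn) / len(dfn)
```

A site model would add the compound Poisson clustering around fracture zones and a fractal transmissivity field per fracture; the sampled geometry is what the flow simulation is then built on.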
Modelling and visualising modular product architectures for mass customisation
Mortensen, Niels Henrik; Pedersen, Rasmus; Kvist, Morten
2008-01-01
Companies following a mass customisation strategy have to observe two prerequisites for success: they have to fulfil a wide variety of customer needs and demands, and to harvest the benefits from economies of scale within their organisation and supply chain. This leads to the situation that the companies are striving for variety from a commercial point of view and simplicity from a manufacturing one. A conscious structuring of product architectures and/or the use of product platforms can help overcome this challenge. This paper presents a new method for the synthesis and visualisation of product architecture concepts that puts emphasis on variety in markets while also treating the consequences in the manufacturing set-up. The work is based on the assumption that a graphical overview of a given solution space and relations between market demands, product architecture and manufacturing layout can support...
3D modelling of coupled mass and heat transfer of a convection-oven roasting process
Feyissa, Aberham Hailu; Adler-Nissen, Jens; Gernaey, Krist
2013-01-01
A 3D mathematical model of coupled heat and mass transfer describing oven roasting of meat has been developed from first principles. The proposed mechanism for the mass transfer of water is modified and based on a critical literature review of the effect of heat on meat. The model equations are based on conservation of mass and energy, coupled through Darcy's equations for porous media - the water flow is mainly pressure-driven. The developed model, together with theoretical and experimental assessments, was used to explain the heat and water transport and the effect of the change...
Modelling of interactions between variable mass and density solid particles and swirling gas stream
Wardach-Święcicka, I; Kardaś, D; Pozorski, J
2011-01-01
The aim of this work is to investigate solid particle-gas interactions. For this purpose, numerical modelling was carried out by means of a commercial code for simulations of two-phase dispersed flows, with in-house models accounting for the mass and density change of the solid phase. In the studied case the particles are treated as spherical moving grains carried by a swirling stream of hot gases. Due to the heat and mass transfer between the gas and the solid phase, the particles lose mass and change volume. Numerical simulations were performed for the turbulent regime, using two methods for turbulence modelling: RANS and LES.
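The mass loss of an individual particle can be illustrated with the classical d²-law for a vaporizing sphere, a much simpler stand-in for the in-house models used here (all values invented):

```python
import math

def shrink_particle(d0, rho, k_evap, dt, n_steps):
    """d^2-law sketch for a vaporizing spherical particle: the squared
    diameter decreases linearly in time until the particle is consumed."""
    d = d0
    for _ in range(n_steps):
        d2 = d * d - k_evap * dt
        if d2 <= 0.0:
            return 0.0, 0.0                    # particle fully consumed
        d = math.sqrt(d2)
    mass = rho * math.pi * d ** 3 / 6.0        # remaining spherical mass
    return d, mass

# Invented values: 100 micron particle, rho = 1000 kg/m^3,
# evaporation constant 1e-6 m^2/s, 50 steps of 0.1 ms.
d, mass = shrink_particle(d0=1e-4, rho=1000.0, k_evap=1e-6, dt=1e-4, n_steps=50)
```

In a coupled simulation this per-particle update would run inside the Lagrangian tracking loop, with the lost mass fed back as a source term to the gas phase.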
General structure of democratic mass matrix of quark sector in the E6 model
Ciftci, R., E-mail: rciftci@cern.ch [Ankara (Turkey); Çiftci, A. K., E-mail: abbas.kenan.ciftci@cern.ch [Ankara University, Ankara (Turkey)
2016-03-25
An extension of the Standard Model (SM) fermion sector, which is inspired by the E6 Grand Unified Theory (GUT) model, might be a good candidate to explain a number of unanswered questions in the SM. The existence of isosinglet quarks might explain the great mass difference of the bottom and top quarks. Also, democracy of the mass matrix elements is a natural approach in the SM. In this study, we give the general structure of the Democratic Mass Matrix (DMM) of the quark sector in the E6 model.
Modelling of the processes of heat and mass transfer in adiabatic steam and drop flows
Andrizhievskij, A.A.; Mikhalevich, A.A.; Nesterenko, V.B.; Trifonov, A.G.
1983-01-01
The mathematical models for investigating the local and integral characteristics of heat and mass transfer processes during the simultaneous motion of an adiabatic steam and drop flow and a flux of impurity particles are given. The mathematical model is constructed on the basis of one-dimensional stationary equations of conservation of mass, thermal energy and momentum of the liquid and vapor phases. The dispersion composition of condensed moisture is described by the Nukiyama-Tanasawa distribution function, formed taking into account the critical value of the Weber number. Equations of motion and mass balance conservation for impurity particles are included in the mathematical model; these equations treat the impurity particles as an additional, inactive phase
The ρ - ω mass difference in a relativistic potential model with pion corrections
Palladino, B.E.; Ferreira, P.L.
1988-01-01
The problem of the ρ - ω mass difference is studied in the framework of the relativistic, harmonic, S+V independent quark model implemented by center-of-mass, one-gluon-exchange and pion-cloud corrections stemming from the requirement of chiral symmetry in the (u,d) SU(2) flavour sector of the model. The pionic self-energy corrections with different intermediate energy states are instrumental in the analysis of the problem, which requires an appropriate parametrization of the mesonic sector different from that previously used to calculate the mass spectrum of the S-wave baryons. The right ρ - ω mass splitting is found, together with a satisfactory value for the mass of the pion, calculated as a bound state of a quark-antiquark pair. An analogous discussion based on the cloudy-bag model is also presented. (author)
A flavor symmetry model for bilarge leptonic mixing and the lepton masses
Ohlsson, Tommy; Seidl, Gerhart
2002-11-01
We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.
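The final relation can be checked numerically; taking a representative reactor angle of about 8.6° (an assumed value, not taken from the paper), the prediction gives a solar angle that is large but clearly below maximal:

```python
import math

theta13 = math.radians(8.6)             # assumed reactor angle, illustrative
theta12 = math.pi / 4.0 - theta13       # the model's prediction
sin2_theta12 = math.sin(theta12) ** 2   # large, but below maximal (0.5)
```

So a nonzero θ13 pushes θ12 below π/4, which is the sense in which the predicted solar mixing is "large, but not close to maximal".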
A model of social influence on body mass index.
Hammond, Ross A; Ornstein, Joseph T
2014-12-01
In this paper, we develop an agent-based model of social influence on body weight. The model's assumptions are grounded in theory and evidence from physiology, social psychology, and behavioral science, and its outcomes are tested against longitudinal data from American youth. We discuss the implementation of the model, the insights it generates, and its implications for public health policy. By explicating a well-grounded dynamic mechanism, our analysis helps clarify important dependencies for both efforts to leverage social influence for obesity intervention and efforts to interpret clustering of BMI in networks. © 2014 New York Academy of Sciences.
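A minimal sketch of an agent-based social-influence mechanism of this general kind (ring network, linear pull toward the neighbour mean, Gaussian noise): all parameters are invented, and this is not the authors' calibrated model:

```python
import random
import statistics

def step(bmi, neighbors, influence, rng):
    """Synchronous update: each agent moves a fraction of the way toward
    the mean BMI of its network neighbours, plus small individual noise."""
    out = []
    for i, b in enumerate(bmi):
        peer_mean = sum(bmi[j] for j in neighbors[i]) / len(neighbors[i])
        out.append(b + influence * (peer_mean - b) + rng.gauss(0.0, 0.05))
    return out

rng = random.Random(0)
n = 30
bmi = [rng.uniform(18.0, 35.0) for _ in range(n)]           # initial spread
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # ring network
var_before = statistics.pvariance(bmi)
for _ in range(500):
    bmi = step(bmi, neighbors, influence=0.1, rng=rng)
var_after = statistics.pvariance(bmi)   # influence clusters BMI in the network
```

Even this toy reproduces the qualitative point the paper tests against data: social influence shrinks BMI dispersion along network ties, producing the clustering observed in longitudinal youth networks.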
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for the spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and to provide a measure of the uncertainties in the spatial variability of rock mass properties in different areas of the pit.
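In one dimension with an exponential covariance, sequential Gaussian simulation reduces to an AR(1)-style recursion, which makes a compact illustration of the method; the grid size, correlation length, and the RMR back-transform below are invented:

```python
import math
import random
import statistics

def sgs_1d(n, dx, corr_len, rng):
    """Sequential Gaussian simulation on a 1-D grid with an exponential
    covariance: each node is drawn from the normal conditional on the
    previously simulated node (exact for this covariance in 1-D)."""
    rho = math.exp(-dx / corr_len)        # lag-1 correlation
    std = math.sqrt(1.0 - rho * rho)      # conditional (kriging) std dev
    z = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        z.append(rng.gauss(rho * z[-1], std))
    return z

rng = random.Random(7)
field = sgs_1d(2000, dx=1.0, corr_len=10.0, rng=rng)
# Back-transform N(0,1) scores to an RMR-like scale (invented: mean 60, sd 10).
rmr = [60.0 + 10.0 * v for v in field]
```

Unlike a kriged estimate, which smooths toward the mean, each such realization reproduces the full variance of the data; running many realizations is what yields the uncertainty measure the paper is after.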
Revisiting the decoupling effects in the running of the Cosmological Constant
Antipin, Oleg; Melic, Blazenka
2017-01-01
We revisit the decoupling effects associated with heavy particles in the renormalization group running of the vacuum energy in a mass-dependent renormalization scheme. We find the running of the vacuum energy stemming from the Higgs condensate in the entire energy range and show that it behaves as expected from the simple dimensional arguments meaning that it exhibits the quadratic sensitivity to the mass of the heavy particles in the infrared regime. The consequence of such a running to the fine-tuning problem with the measured value of the Cosmological Constant is analyzed and the constraint on the mass spectrum of a given model is derived. We show that in the Standard Model (SM) this fine-tuning constraint is not satisfied while in the massless theories this constraint formally coincides with the well known Veltman condition. We also provide a remarkably simple extension of the SM where saturation of this constraint enables us to predict the radiative Higgs mass correctly. Generalization to constant curvature spaces is also given. (orig.)
Stability of mass hierarchy in models with a sliding singlet
Smirnov, A.Yu.; Tainov, E.A.
1986-01-01
In the broad class of models with a heavy sliding singlet and softly broken supersymmetry (e.g. by the effects of N=1 supergravity), it is shown that the doublet-triplet hierarchy obtained at tree level is not destroyed by quantum corrections at any loop order. As an example, the simplest SU(5) model with a stable doublet-triplet hierarchy is proposed. The necessary and sufficient conditions for the stability of the hierarchy are discussed. (orig.)