Asymmetric Gepner models (revisited)
Energy Technology Data Exchange (ETDEWEB)
Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain)]; Schellekens, A.N., E-mail: t58@nikhef.nl [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)]
2010-12-11
We reconsider a class of heterotic string theories studied in 1989, based on tensor products of N=2 minimal models with asymmetric simple current invariants. We extend this analysis from (2,2) and (1,2) spectra to (0,2) spectra with SO(10) broken to the Standard Model. In the latter case the spectrum must contain fractionally charged particles. We find that in nearly all cases at least some of them are massless. However, we identify a large subclass where the fractional charges are at worst half-integer, and often vector-like. The number of families is very often reduced in comparison to the 1989 results, but there are no new tensor combinations yielding three families. All tensor combinations turn out to fall into two classes: those where the number of families is always divisible by three, and those where it is never divisible by three. We find an empirical rule to determine the class, which appears to extend beyond minimal N=2 tensor products. We observe that distributions of physical quantities such as the number of families, singlets and mirrors have an interesting tendency towards smaller values as the gauge group approaches the Standard Model. We compare our results with an analogous class of free fermionic models, which displays similar features, but with less resolution. Finally we present a complete scan of the three family models based on the triply-exceptional combination (1,16{sup *},16{sup *},16{sup *}) identified originally by Gepner. We find 1220 distinct three family spectra in this case, forming 610 mirror pairs. About half of them have the gauge group SU(3)xSU(2){sub L}xSU(2){sub R}xU(1){sup 5}, the theoretical minimum, and many others are trinification models.
Asymmetric Gepner models II. Heterotic weight lifting
Energy Technology Data Exchange (ETDEWEB)
Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain)]; Schellekens, A.N., E-mail: t58@nikhef.nl [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)]
2011-05-21
A systematic study of 'lifted' Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the E{sub 8} factor by a modular isomorphic N=0 model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.
Asymmetric Gepner models III. B-L lifting
Energy Technology Data Exchange (ETDEWEB)
Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain)]; Schellekens, A.N., E-mail: t58@nikhef.nl [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)]
2011-06-21
In the same spirit as heterotic weight lifting, B-L lifting is a way of replacing the superfluous and ubiquitous U(1){sub B-L} with something else with the same modular properties, but different conformal weights and ground state dimensions. This method works in principle for all variants of (2,2) constructions, such as orbifolds, Calabi-Yau manifolds, free bosons and fermions and Gepner models, since it only modifies the universal SO(10)xE{sub 8} part of the CFT. However, it can only yield chiral spectra if the 'internal' sector of the theory provides a simple current of order 5. Here we apply this new method to Gepner models. Including exceptional invariants, 86 of them have the required order 5 simple current, and 69 of these yield chiral spectra. Three family spectra occur abundantly.
Non-supersymmetric orientifolds of Gepner models
Energy Technology Data Exchange (ETDEWEB)
Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)], E-mail: t58@nikhef.nl
2009-01-12
Starting from a previously collected set of tachyon-free closed strings, we search for N=2 minimal model orientifold spectra which contain the standard model and are free of tachyons and tadpoles at lowest order. For each class of tachyon-free closed strings - bulk supersymmetry, automorphism invariants or Klein bottle projection - we do indeed find non-supersymmetric and tachyon-free chiral brane configurations that contain the standard model. However, a tadpole-cancelling hidden sector could only be found in the case of bulk supersymmetry. Although about half of the examples we have found make use of branes that break the bulk space-time supersymmetry, the resulting massless open string spectra are nevertheless supersymmetric in all cases. Dropping the requirement that the standard model be contained in the spectrum, we find chiral, tachyon- and tadpole-free solutions in all three cases, although in the case of bulk supersymmetry all massless spectra are supersymmetric. In the other two cases we find truly non-supersymmetric spectra, but a large fraction of them are nevertheless partly or fully supersymmetric at the massless level.
A family of SCFTs hosting all 'very attractive' relatives of the (2)^4 Gepner model
International Nuclear Information System (INIS)
Wendland, Katrin
2006-01-01
This work gives a manual for constructing superconformal field theories associated to a family of smooth K3 surfaces. A direct method is not known, but a combination of orbifold techniques with a non-classical duality turns out to yield such models. A four parameter family of superconformal field theories associated to certain quartic K3 surfaces in CP^3 is obtained, four of whose complex structure parameters give the parameters within superconformal field theory. Standard orbifold techniques are used to construct these models, so on the level of superconformal field theory they are already well understood. All 'very attractive' K3 surfaces belong to the family of quartics underlying these theories, that is, all quartic hypersurfaces in CP^3 with maximal Picard number whose defining polynomial is given by the sum of two polynomials in two variables. A particular member of the family is the (2)^4 Gepner model, such that these theories can be viewed as complex structure deformations of (2)^4 in its geometric interpretation on the Fermat quartic.
International Nuclear Information System (INIS)
Fawaz, S.; Khan, Zulfiquar A.; Mossa, Samir Y.
2006-01-01
A new definition is proposed for analyzing the consultation in primary health care, integrating other models of consultation and providing a framework by which general practitioners can apply the principles of consultation, using communication skills to reconcile the respective agendas and autonomy of both doctor and patient into a negotiated, agreed plan that includes both management of health problems and health promotion. The success of a consultation depends on time and on the mutual cooperation between patient and doctor, as reflected in the doctor-patient relationship. (author)
International Nuclear Information System (INIS)
Kern, J.
1996-01-01
The problem of identification of multiphonon states for vibrational nuclei is discussed. It is shown that an examination of the excitation patterns provides an adequate filter to select good or potentially good vibrational nuclei, since the global nuclear properties (such as the level energies) are less strongly perturbed by the presence of additional structures than the local properties (like the wave functions and the transition probabilities). The energies of the first 2^+ states are systematically low by about 15% with respect to the values expected from the global nuclear properties. This appears to be in contradiction with the general belief that these states have a high purity. The comparison of the experimental results with the predictions of the Brink model is made. The conclusion is that the predictions are quite good, but it is necessary to renormalize the 1-phonon energy, i.e. to increase it by about 15%. Since the modified Brink method involves only the use of a virtual 2_1^+ energy and no level fit, a problem of weights cannot be invoked. The calculations confirm the existence of multiphonon states at high excitation energies and the persistence of the symmetry properties well inside regions where one would expect the appearance of disorder.
Revisiting fifth forces in the Galileon model
Energy Technology Data Exchange (ETDEWEB)
Burrage, Clare [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Seery, David [Sussex Univ., Brighton (United Kingdom). Dept. of Physics and Astronomy
2010-05-15
A Galileon field is one which obeys a spacetime generalization of the non-relativistic Galilean invariance. Such a field may possess non-canonical kinetic terms, but ghost-free theories with a well-defined Cauchy problem exist, constructed using a finite number of relevant operators. The interactions of this scalar with matter are hidden by the Vainshtein effect, causing the Galileon to become weakly coupled near heavy sources. We revisit estimates of the fifth force mediated by a Galileon field, and show that the parameters of the model are less constrained by experiment than previously supposed. (orig.)
Simple Tidal Prism Models Revisited
Luketina, D.
1998-01-01
Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most text books on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which can not.
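The textbook estimate the abstract refers to reduces to a one-line calculation; a minimal sketch of the classic (uncorrected) tidal-prism flushing time, with purely illustrative input values:

```python
def flushing_time(low_tide_volume_m3, tidal_prism_m3, tidal_period_h=12.42):
    """Textbook tidal-prism flushing time for a well-mixed estuary.
    Assumes the full prism of 'new' sea water mixes completely each
    tidal cycle and none of it returns on the following flood tide."""
    high_tide_volume = low_tide_volume_m3 + tidal_prism_m3
    return high_tide_volume * tidal_period_h / tidal_prism_m3

# Illustrative: a 1.0e6 m^3 (low tide) estuary with a 2.5e5 m^3 prism
t_flush = flushing_time(1.0e6, 2.5e5)  # about 62 hours
```

The return-flow and incomplete-mixing effects criticized in the paper enter as corrections to exactly this formula.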
International Nuclear Information System (INIS)
Pilotto, F.; Vasconcellos, C.A.Z.; Coelho, H.T.
2001-01-01
In this work we develop a new version of the fuzzy bag model. The main idea is to include the conservation of energy and momentum in the model. This feature is not included in the original formulation of the fuzzy bag model, but is of paramount importance to interpret the model as being a bag model - that is, a model in which the outward pressure of the quarks inside the bag is balanced by the inward pressure of the non-perturbative vacuum outside the bag - as opposed to a relativistic potential model, in which there is no energy-momentum conservation. In the MIT bag model, as well as in the original version of the fuzzy bag model, the non-perturbative QCD vacuum is parametrized by a constant B in the Lagrangian density. One immediate consequence of including energy-momentum conservation in the fuzzy bag model is that the bag constant B will acquire a radial dependence, B = B(r). (author)
A revisited standard solar model
International Nuclear Information System (INIS)
Casse, M.; Cahen, S.; Doom, C.
1987-01-01
Recent models of the Sun, including our own, based on canonical physics and featuring modern reaction rates and radiative opacities are presented. They lead to a presolar helium abundance, in better agreement with the value found in the Orion nebula. Most models predict a neutrino counting rate greater than 6 SNU in the chlorine-argon detector, which is at least 3 times higher than the observed rate. The primordial helium abundance derived from the solar one, on the basis of recent models of helium production from the birth of the Galaxy to the birth of the sun, is significantly higher than the value inferred from observations of extragalactic metal-poor nebulae. This indicates that the stellar production of helium is probably underestimated by the models considered
A revisited standard solar model
International Nuclear Information System (INIS)
Casse, M.; Cahen, S.; Doom, C.
1985-09-01
Recent models of the Sun, including our own, based on canonical physics and featuring modern reaction rates and radiative opacities are presented. They lead to a presolar helium abundance of approximately 0.28 by mass, at variance with the value of 0.25 proposed by Bahcall et al. (1982, 1985), but in better agreement with the value found in the Orion nebula. Most models predict a neutrino counting rate greater than 6 SNU in the chlorine-argon detector, which is at least 3 times higher than the observed rate. The primordial helium abundance derived from the solar one, on the basis of recent models of helium production from the birth of the Galaxy to the birth of the sun, Y_P approximately 0.26, is significantly higher than the value inferred from observations of extragalactic metal-poor nebulae (Y approximately 0.23). This indicates that the stellar production of helium is probably underestimated by the models considered.
Revisiting the Lund Fragmentation Model
International Nuclear Information System (INIS)
Andersson, B.; Nilsson, A.
1992-10-01
We present a new method to implement the Lund Model fragmentation distributions for multi-gluon situations. The method of Sjoestrand, implemented in the well-known Monte Carlo simulation program JETSET, is robust and direct and according to his findings there are no observable differences between different ways to implement his scheme. His method can be described as a space-time method because the breakup proper time plays a major role. The method described in this paper is built on energy-momentum space methods. We make use of the χ-curve, which is defined directly from the energy momentum vectors of the partons. We have shown that the χ-curve describes the breakup properties and the final state energy momentum distributions in the mean. We present a method to find the variations around the χ-curve, which also implements the basic Lund Model fragmentation distributions (the area-law and the corresponding iterative cascade). We find differences when comparing the corresponding Monte Carlo implementation REVJET to the JETSET distributions inside the gluon jets. (au)
DEFF Research Database (Denmark)
Tegtmeier, Silke; Meyer, Verena; Pakura, Stefanie
2017-01-01
Purpose: Entrepreneurship is shaped by a male norm, which has been widely demonstrated in qualitative studies. The authors strive to complement these methods by a quantitative approach. First, gender role stereotypes were measured in entrepreneurship. Second, the explicit notions of participants were captured when they described entrepreneurs. Therefore, this paper aims to revisit gender role stereotypes among young adults. Design/methodology/approach: To measure stereotyping, participants were asked to describe entrepreneurs in general and either women or men in general. The Schein... Findings: The images of men and entrepreneurs show a high and significant congruence (r = 0.803), mostly in those adjectives that are untypical for men and entrepreneurs. The congruence of women and entrepreneurs was low (r = 0.152) and insignificant. Contrary to the participants' beliefs, their explicit notions did...
A Multi-Level Model of Moral Functioning Revisited
Reed, Don Collins
2009-01-01
The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…
The hard-core model on random graphs revisited
International Nuclear Information System (INIS)
Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka
2013-01-01
We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results, and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.
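The independent-set formulation can be made concrete with a short Monte Carlo sketch: a random greedy packing gives a rough lower bound on the density that the cavity method estimates much more precisely. This is an illustration of the problem, not the paper's method, and the graph parameters are illustrative:

```python
import random

def greedy_independent_set_density(n, avg_degree, seed=0):
    """Rough lower bound on the hard-core (independent set) density of an
    Erdos-Renyi random graph with the given average degree, obtained by
    placing particles greedily in random order and blocking neighbors."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    blocked, chosen = set(), 0
    for v in rng.sample(range(n), n):   # random placement order
        if v not in blocked:
            chosen += 1
            blocked.add(v)
            blocked |= adj[v]           # hard-core exclusion of neighbors
    return chosen / n

density = greedy_independent_set_density(400, 20)  # roughly ln(k)/k ~ 0.15
```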
Quark matter revisited with non-extensive MIT bag model
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Pedro H.G.; Nunes da Silva, Tiago; Menezes, Debora P. [Universidade Federal de Santa Catarina, Departamento de Fisica, CFM, Florianopolis (Brazil); Deppman, Airton [Instituto de Fisica da Universidade de Sao Paulo, Sao Paulo (Brazil)
2017-10-15
In this work we revisit the MIT bag model to describe quark matter within both the usual Fermi-Dirac and the Tsallis statistics. We verify the effects of the non-additivity of the latter by analysing two different pictures: the first order phase transition of the QCD phase diagram and stellar matter properties. While the QCD phase diagram is visually affected by the Tsallis statistics, the resulting effects on quark star macroscopic properties are barely noticed. (orig.)
A Structural Equation Model of Risk Perception of Rockfall for Revisit Intention
Ya-Fen Lee; Yun-Yao Chi
2014-01-01
The study aims to explore the relationship between risk perception of rockfall and revisit intention using a Structural Equation Modeling (SEM) analysis. A total of 573 valid questionnaires are collected from travelers to Taroko National Park, Taiwan. The findings show the majority of travelers have the medium perception of rockfall risk, and are willing to revisit the Taroko National Park. The revisit intention to Taroko National Park is influenced by hazardous preferences, willingness-to-pa...
Entropy of measurement and erasure: Szilard's membrane model revisited
Leff, Harvey S.; Rex, Andrew F.
1994-11-01
It is widely believed that measurement is accompanied by irreversible entropy increase. This conventional wisdom is based in part on Szilard's 1929 study of entropy decrease in a thermodynamic system by intelligent intervention (i.e., a Maxwell's demon) and Brillouin's association of entropy with information. Bennett subsequently argued that information acquisition is not necessarily irreversible, but information erasure must be dissipative (Landauer's principle). Inspired by the ensuing debate, we revisit the membrane model introduced by Szilard and find that it can illustrate and clarify (1) reversible measurement, (2) information storage, (3) decoupling of the memory from the system being measured, and (4) entropy increase associated with memory erasure and resetting.
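Landauer's principle, invoked above, fixes the minimal thermodynamic cost of erasure at k_B T ln 2 per bit; a one-line numerical sketch:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_bound_joules(T_kelvin, bits=1):
    """Minimum heat dissipated when erasing `bits` bits of memory at
    temperature T (Landauer's principle): Q >= k_B * T * ln 2 per bit."""
    return bits * k_B * T_kelvin * math.log(2)

# At room temperature (300 K), erasing one bit costs at least ~2.9e-21 J
q_min = landauer_bound_joules(300)
```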
The random field Blume-Capel model revisited
Santos, P. V.; da Costa, F. A.; de Araújo, J. M.
2018-04-01
We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H - T and D - T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.
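The mean-field treatment rests on a self-consistency equation for the magnetization of spin-1 variables s in {-1, 0, +1}. A minimal sketch of the zero-randomness case (the random field studied in the paper is omitted, and the coordination number is absorbed into J):

```python
import math

def magnetization(T, D, H=0.0, J=1.0, m0=0.9, iters=2000):
    """Fixed-point iteration of the mean-field Blume-Capel
    self-consistency equation for spin-1 variables:
        m = 2 e^{-D/T} sinh(h/T) / (1 + 2 e^{-D/T} cosh(h/T)),
    with effective field h = J*m + H (coordination absorbed into J)."""
    beta = 1.0 / T
    m = m0
    for _ in range(iters):
        h = J * m + H
        w = 2.0 * math.exp(-beta * D) * math.sinh(beta * h)
        z = 1.0 + 2.0 * math.exp(-beta * D) * math.cosh(beta * h)
        m = w / z
    return m
```

Sweeping T, D and H in such a loop and locating where m jumps or vanishes is the basic way the H-T and D-T phase diagrams discussed above are traced out.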
Darwin model in plasma physics revisited
International Nuclear Information System (INIS)
Xie, Huasheng; Zhu, Jia; Ma, Zhiwei
2014-01-01
Dispersion relations from the Darwin (a.k.a., magnetoinductive or magnetostatic) model are given and compared with those of the full electromagnetic model. Analytical and numerical solutions show that the errors from the Darwin approximation can be large even if phase velocity for a low-frequency wave is close to or larger than the speed of light. Besides missing two wave branches associated mainly with the electron dynamics, the coupling branch of the electrons and ions in the Darwin model is modified to become a new artificial branch that incorrectly represents the coupling dynamics of the electrons and ions. (paper)
Single toxin dose-response models revisited
Energy Technology Data Exchange (ETDEWEB)
Demidenko, Eugene, E-mail: eugened@dartmouth.edu [Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03756 (United States); Glaholt, SP, E-mail: sglaholt@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN 47405 (United States); Department of Biological Sciences, Dartmouth College, Hanover, NH 03755 (United States); Kyker-Snowman, E, E-mail: ek2002@wildcats.unh.edu [Department of Natural Resources and the Environment, University of New Hampshire, Durham, NH 03824 (United States); Shaw, JR, E-mail: joeshaw@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN 47405 (United States); Chen, CY, E-mail: Celia.Y.Chen@dartmouth.edu [Department of Biological Sciences, Dartmouth College, Hanover, NH 03755 (United States)
2017-01-01
The goal of this paper is to offer a rigorous analysis of the sigmoid-shape single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: toxin concentrations (1) smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, while (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models (Hill, logit, probit, and Weibull) is provided. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO{sub 4} toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
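For the Hill model specifically, the "maximum toxin efficacy" point (the steepest slope of the dose-response curve on a linear concentration scale) has a closed form, obtained by setting the second derivative to zero. A sketch with illustrative parameter names (ec50, n are generic, not taken from the paper):

```python
def hill(c, ec50, n):
    """Hill dose-response: mortality fraction at concentration c >= 0."""
    return c**n / (c**n + ec50**n)

def max_efficacy_conc(ec50, n):
    """Concentration at which the slope of the Hill curve peaks.
    Setting d2p/dc2 = 0 for p = c^n / (c^n + ec50^n) gives
        c* = ec50 * ((n - 1) / (n + 1))**(1/n),   valid for n > 1."""
    return ec50 * ((n - 1.0) / (n + 1.0)) ** (1.0 / n)

# Illustrative: n = 2, ec50 = 1 -> steepest slope at c* = (1/3)**0.5
```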
Whole body acid-base modeling revisited.
Ring, Troels; Nielsen, Søren
2017-04-01
The textbook account of whole-body acid-base balance in terms of endogenous acid production, renal net acid excretion, and gastrointestinal alkali absorption, which is the only comprehensive model around, has never been applied in clinical practice or been formally validated. To improve understanding of acid-base modeling, we rewrote this conventional model as an expression solely in terms of urine chemistry. Renal net acid excretion and endogenous acid production were already formulated in terms of urine chemistry, and from the literature we could also express gastrointestinal alkali absorption in terms of urine excretions. With a few assumptions it was possible to see that this expression of net acid balance was arithmetically identical to minus urine charge, whereby under the development of acidosis urine was predicted to acquire a net negative charge. The literature already mentions unexplained negative urine charges, so we scrutinized a series of seminal papers and confirmed empirically the theoretical prediction that observed urine charge did acquire a negative charge as acidosis developed. Hence, we can conclude that the conventional model is problematic, since it predicts what is physiologically impossible. Therefore we need a new model for whole-body acid-base balance which does not have impossible implications. Furthermore, new experimental studies are needed to account for charge imbalance in urine under the development of acidosis. Copyright © 2017 the American Physiological Society.
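The model's central identity above - net acid balance arithmetically equal to minus the urine net charge - can be illustrated numerically. All ion values below are invented for illustration, not taken from the paper:

```python
def urine_net_charge_meq(cations_meq, anions_meq):
    """Net charge (mEq) of the measured urine ions. In the conventional
    whole-body acid-base model, net acid balance works out to minus this
    quantity, so developing acidosis predicts an increasingly negative
    urine net charge."""
    return sum(cations_meq.values()) - sum(anions_meq.values())

# Hypothetical 24-h urine collection (all values in mEq, illustrative):
charge = urine_net_charge_meq(
    {"Na": 100, "K": 50, "NH4": 30, "Ca": 5, "Mg": 5},   # 190 mEq
    {"Cl": 160, "SO4": 40, "phosphate": 25},             # 225 mEq
)
# charge = 190 - 225 = -35 mEq: the net negative urine charge that the
# model predicts under acidosis, which is physiologically impossible
```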
The sine-Gordon model revisited I
Energy Technology Data Exchange (ETDEWEB)
Niccoli, G.; Teschner, J.
2009-10-15
We study integrable lattice regularizations of the Sine-Gordon model with the help of the Separation of Variables method of Sklyanin and the Baxter Q-operators. This allows us to characterize the spectrum (eigenvalues and eigenstates) completely in terms of polynomial solutions of the Baxter equation with certain properties. This result is analogous to the completeness of the Bethe ansatz. (orig.)
Packet models revisited: tandem and priority systems
M.R.H. Mandjes (Michel)
2004-01-01
We examine two extensions of traditional single-node packet-scale queueing models: tandem networks and (strict) priority systems. Two generic input processes are considered: periodic and Poisson arrivals. For the two-node tandem, an exact expression is derived for the joint distribution
The Motive--Strategy Congruence Model Revisited.
Watkins, David; Hattie, John
1992-01-01
Research with 1,266 Australian secondary school students supports 2 propositions critical to the motive-strategy congruence model of J. B. Biggs (1985). Students tend to use learning strategies congruent with motivation for learning, and congruent motive-strategy combinations are associated with higher average school grades. (SLD)
Diffusion approximation of neuronal models revisited
Czech Academy of Sciences Publication Activity Database
Čupera, Jakub
2014-01-01
Vol. 11, No. 1 (2014), pp. 11-25 ISSN 1547-1063. [International Workshop on Neural Coding (NC) /10./. Prague, 02.09.2012-07.09.2012] R&D Projects: GA ČR(CZ) GAP103/11/0282 Institutional support: RVO:67985823 Keywords: stochastic model * neuronal activity * first-passage time Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.840, year: 2014
Revisited global drift fluid model for linear devices
International Nuclear Information System (INIS)
Reiser, Dirk
2012-01-01
The problem of energy-conserving global drift fluid simulations is revisited. It is found that for the case of cylindrical plasmas in a homogeneous magnetic field, a straightforward reformulation is possible, avoiding simplifications that lead to energetic inconsistencies. The particular new feature is the rigorous treatment of the polarisation drift by a generalization of the vorticity equation. The resulting set of model equations contains previous formulations as limiting cases and is suitable for efficient numerical techniques. Examples of applications to studies of plasma blobs and their impact on plasma-target interaction are presented. The numerical studies focus on the appearance of plasma blobs and intermittent transport and its consequences for the release of sputtered target materials in the plasma. Intermittent expulsion of particles in the radial direction can be observed, and it is found that although the neutrals released from the target show strong fluctuations in their propagation into the plasma column, the overall effect on time-averaged profiles is negligible for the conditions considered. In addition, the numerical simulations are utilised to perform an a posteriori assessment of the magnitude of energetic inconsistencies in previously used simplified models. It is found that certain popular approximations, in particular the use of simplified vorticity equations, do not significantly affect energetics. However, popular model simplifications with respect to parallel advection are found to significantly deteriorate the model consistency.
Effective-Medium Models for Marine Gas Hydrates, Mallik Revisited
Terry, D. A.; Knapp, C. C.; Knapp, J. H.
2011-12-01
Hertz-Mindlin type effective-medium dry-rock elastic models have been commonly used for more than three decades in rock physics analysis, and recently have been applied to assessment of marine gas hydrate resources. Comparisons of several effective-medium models with derivative well-log data from the Mackenzie River Valley, Northwest Territories, Canada (i.e. Mallik 2L-38 and 5L-38) were made several years ago as part of a marine gas hydrate joint industry project in the Gulf of Mexico. The matrix/grain supporting model (one of the five models compared) was clearly a better representation of the Mallik data than the other four models (2 cemented sand models; a pore-filling model; and an inclusion model). Even though the matrix/grain supporting model was clearly better, reservations were noted that the compressional velocity of the model was higher than the compressional velocity measured via the sonic logs, and that the shear velocities showed an even greater discrepancy. Over more than thirty years, variations of Hertz-Mindlin type effective medium models have evolved for unconsolidated sediments and here, we briefly review their development. In the past few years, the perfectly smooth grain version of the Hertz-Mindlin type effective-medium model has been favored over the infinitely rough grain version compared in the Gulf of Mexico study. We revisit the data from the Mallik wells to review assertions that effective-medium models with perfectly smooth grains are a better predictor than models with infinitely rough grains. We briefly review three Hertz-Mindlin type effective-medium models, and standardize nomenclature and notation. To calibrate the extended effective-medium model in gas hydrates, we use a well accepted framework for unconsolidated sediments through Hashin-Shtrikman bounds. We implement the previously discussed effective-medium models for saturated sediments with gas hydrates and compute theoretical curves of seismic velocities versus gas hydrate
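The rough- versus smooth-grain distinction discussed above enters only through the dry-rock shear modulus. A sketch of the Hertz-Mindlin moduli in the common Rock Physics Handbook form, with a slip factor f interpolating between perfectly smooth (f = 0) and infinitely rough (f = 1) grain contacts; this follows the standard textbook formulation under stated assumptions, not necessarily the exact model variants compared in the study, and the input values are illustrative:

```python
import math

def hertz_mindlin(phi, C, G, nu, P, friction=1.0):
    """Hertz-Mindlin dry-rock bulk and shear moduli (Pa) of a random
    sphere pack: porosity phi, coordination number C, grain shear
    modulus G (Pa), grain Poisson ratio nu, effective pressure P (Pa).
    friction=1 -> infinitely rough grains; friction=0 -> perfectly smooth.
    The bulk modulus is independent of the grain friction."""
    bracket = (C**2 * (1 - phi)**2 * G**2 * P) / (2 * math.pi**2 * (1 - nu)**2)
    K_hm = (bracket / 9.0) ** (1.0 / 3.0)
    f = friction
    G_hm = ((2 + 3 * f - nu * (1 + 3 * f)) / (5 * (2 - nu))) \
        * (3.0 * bracket) ** (1.0 / 3.0)
    return K_hm, G_hm

# Illustrative sand-pack values: phi=0.4, C=8.5, quartz G=45 GPa,
# nu=0.06, P=10 MPa; the smooth-grain shear modulus comes out lower,
# which is why the smooth variant better matches the soft Mallik logs.
```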
E7 type modular invariant Wess-Zumino theory and Gepner's string compactification
International Nuclear Information System (INIS)
Kato, Akishi; Kitazawa, Yoshihisa
1989-01-01
The report addresses the development of a general procedure to study the structure of the operator algebra in off-diagonal modular invariant theories. This procedure is carried out for the E7 type modular invariant Wess-Zumino-Witten theory, explicitly checking the closure of the operator product algebra, which is required for any consistent conformal field theory. The conformal field theory is utilized to construct perturbative vacua in string theory. Apparently quite nontrivial vacua can be constructed out of minimal models of the N = 2 superconformal theory. Here, an investigation is made of the Yukawa couplings of such a model, which uses E7 type off-diagonal modular invariance. Phenomenological properties of this model are also discussed. Although off-diagonal modular invariant theories are rather special, realistic models seem to require very special manifolds; therefore they may enhance the viability of string theory to describe the real world. A study is also made of Verlinde's fusion algebra in the E7 modular invariant theory. It is determined in the holomorphic sector only. Furthermore, the indicator is given by the modular transformation matrix. A pair of operators which act on the characters play a crucial role in this theory. (Nogami, K.)
Revisiting the advection-dispersion model - Testing an alternative
International Nuclear Information System (INIS)
Neretnieks, I.
2001-01-01
Some of the basic assumptions of the Advection-Dispersion model, AD-model, are revisited. That model assumes a continuous mixing along the flowpath similar to Fickian diffusion. This implies that there is a constant dispersion length irrespective of observation distance, which is contrary to most field observations. The properties of an alternative model, based on the assumption that individual water packages can retain their identity over long distances, are investigated. The latter model is called the Multi-Channel model, MChM. Inherent in the MChM is that if the waters in the different pathways are collected and mixed, the 'dispersion length' is proportional to observation distance. Diffusion theory is used to investigate over which distances or contact times adjacent water packages will keep their identity. It is found that for a contact time of 10 hours, two streams, each wider than 6 mm, that flow side by side will not have lost their identity. For 1000 hours contact time the minimum width is 6 cm. The MChM and AD-models were found to have very similar Residence Time Distributions, RTD, for Peclet numbers larger than 3. A generalised relation between flowrate and residence time is developed, including the so-called cubic law and constant aperture assumptions. Using the generalised relation, it is surprisingly found that for a system with the same average flow volume and average flowrate, the forms of the RTD curves are the same irrespective of the form of the relation. Both models are also compared for a system in which there is strong interaction of the solute with the rock matrix. In this case it is assumed that the solute can diffuse into and out of the fracture walls and also sorb on the micro-fractures of the matrix. The so-called Flow Wetted Surface, FWS, between the flowing water in the fracture and the rock is a key entity in such systems. It is found that the AD-model predicts much later arrivals and lower concentrations than does the MChM.
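For the AD-model side of such a comparison, the pulse-response residence-time distribution is commonly written in inverse-Gaussian form in the dimensionless time theta = t / t_mean. The sketch below assumes this standard form (not necessarily the exact expressions used in the paper) and checks numerically that it integrates to one with unit mean residence time:

```python
import numpy as np

def ad_rtd(theta, Pe):
    """Pulse-response residence-time distribution of the advection-
    dispersion model, inverse-Gaussian form; theta = t / t_mean."""
    return np.sqrt(Pe / (4 * np.pi * theta**3)) * \
        np.exp(-Pe * (1 - theta)**2 / (4 * theta))

theta = np.linspace(1e-4, 20.0, 200001)   # uniform dimensionless-time grid
dtheta = theta[1] - theta[0]
# each RTD should integrate to ~1 with a mean residence time of ~1
for Pe in (3.0, 10.0, 50.0):
    E = ad_rtd(theta, Pe)
    print(Pe, round(E.sum() * dtheta, 3), round((theta * E).sum() * dtheta, 3))
```

Larger Peclet numbers narrow the distribution around theta = 1, consistent with the observation that the AD and Multi-Channel RTDs only diverge noticeably for Peclet numbers below about 3.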
Standing and travelling waves in a spherical brain model: The Nunez model revisited
Visser, S.; Nicks, R.; Faugeras, O.; Coombes, S.
2017-06-01
The Nunez model for the generation of electroencephalogram (EEG) signals is naturally described as a neural field model on a sphere with space-dependent delays. For simplicity, dynamical realisations of this model either as a damped wave equation or an integro-differential equation, have typically been studied in idealised one dimensional or planar settings. Here we revisit the original Nunez model to specifically address the role of spherical topology on spatio-temporal pattern generation. We do this using a mixture of Turing instability analysis, symmetric bifurcation theory, centre manifold reduction and direct simulations with a bespoke numerical scheme. In particular we examine standing and travelling wave solutions using normal form computation of primary and secondary bifurcations from a steady state. Interestingly, we observe spatio-temporal patterns which have counterparts seen in the EEG patterns of both epileptic and schizophrenic brain conditions.
Star-Triangle Relation of the Chiral Potts Model Revisited
Horibe, M.; Shigemoto, K.
2001-01-01
We give a simple proof of the star-triangle relation of the chiral Potts model. We also give a constructive way to understand the star-triangle relation of the chiral Potts model, which may provide hints for constructing new integrable models.
The Candy model revisited: Markov properties and inference
M.N.M. van Lieshout (Marie-Colette); R.S. Stoica
2001-01-01
This paper studies the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present some
Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter
Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J
2009-01-01
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, has impressively demonstrated the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project...
Kawase & McDermott revisited with a proper ocean model.
Jochum, Markus; Poulsen, Mads; Nuterman, Roman
2017-04-01
A suite of experiments with global ocean models is used to test the hypothesis that Southern Ocean (SO) winds can modify the strength of the Atlantic Meridional Overturning Circulation (AMOC). It is found that for 3 and 1 degree resolution models the results are consistent with Toggweiler & Samuels (1995): stronger SO winds lead to a slight increase of the AMOC. In the simulations with 1/10 degree resolution, however, stronger SO winds weaken the AMOC. We show that these different outcomes are determined by the models' representation of topographic Rossby and Kelvin waves. Consistent with previous literature based on theory and idealized models, first baroclinic waves are slower in the coarse resolution models, but still manage to establish a pattern of global response that is similar to the one in the eddy-permitting model. Because of its different stratification, however, the Atlantic signal is transmitted by higher baroclinic modes. In the coarse resolution model these higher modes are dissipated before they reach 30N, whereas in the eddy-permitting model they reach the subpolar gyre undiminished. This inability of non-eddy-permitting ocean models to represent planetary waves with higher baroclinic modes casts doubt on the ability of climate models to represent non-local effects of climate change. Ideas on how to overcome these difficulties will be discussed.
Revisiting the direct detection of dark matter in simplified models
Li, Tong
2018-01-01
In this work we numerically re-examine the loop-induced WIMP-nucleon scattering cross section for the simplified dark matter models and the constraint set by the latest direct detection experiment. We consider a fermion, scalar or vector dark matter component from five simplified models with leptophobic spin-0 mediators coupled only to Standard Model quarks and dark matter particles. The tree-level WIMP-nucleon cross sections in these models are all momentum-suppressed. We calculate the non-s...
Business modelling revisited: The configuration of control and value
Ballon, P.J.P.
2007-01-01
Purpose - This paper aims to provide a theoretically grounded framework for designing and analysing business models for (mobile) information communication technology (ICT) services and systems. Design/methodology/approach - The paper reviews the most topical literature on business modelling, as well
Terrestrial nitrogen cycling in Earth system models revisited
Stocker, Benjamin D; Prentice, I. Colin; Cornell, Sarah; Davies-Barnard, T; Finzi, Adrien; Franklin, Oskar; Janssens, Ivan; Larmola, Tuula; Manzoni, Stefano; Näsholm, Torgny; Raven, John; Rebel, Karin; Reed, Sasha C.; Vicca, Sara; Wiltshire, Andy; Zaehle, Sönke
2016-01-01
Understanding the degree to which nitrogen (N) availability limits land carbon (C) uptake under global environmental change represents an unresolved challenge. First-generation ‘C-only’ vegetation models, lacking explicit representations of N cycling, projected a substantial and increasing land C sink under rising atmospheric CO2 concentrations. This prediction was questioned for not taking into account the potentially limiting effect of N availability, which is necessary for plant growth (Hungate et al., 2003). More recent global models include coupled C and N cycles in land ecosystems (C–N models) and are widely assumed to be more realistic. However, inclusion of more processes has not consistently improved their performance in capturing observed responses of the global C cycle (e.g. Wenzel et al., 2014). With the advent of a new generation of global models, including coupled C, N, and phosphorus (P) cycling, model complexity is sure to increase; but model reliability may not, unless greater attention is paid to the correspondence of model process representations and empirical evidence. It was in this context that the ‘Nitrogen Cycle Workshop’ at Dartington Hall, Devon, UK, was held on 1–5 February 2016. Organized by I. Colin Prentice and Benjamin D. Stocker (Imperial College London, UK), the workshop was funded by the European Research Council, project ‘Earth system Model Bias Reduction and assessing Abrupt Climate change’ (EMBRACE). We gathered empirical ecologists and ecosystem modellers to identify key uncertainties in terrestrial C–N cycling, and to discuss processes that are missing or poorly represented in current models.
Schedulability of Herschel revisited using statistical model checking
DEFF Research Database (Denmark)
David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel
2015-01-01
... and blocking times of tasks. Consequently, the method may falsely declare deadline violations that will never occur during execution. This paper is a continuation of previous work of the authors in applying extended timed automata model checking (using the tool UPPAAL) to obtain more exact schedulability analysis, here in the presence of non-deterministic computation times of tasks given by intervals [BCET, WCET]. Computation intervals with preemptive schedulers make the schedulability analysis of the resulting task model undecidable. Our contribution is to propose a combination of model checking techniques ...-approximation technique. We can safely conclude that the system is schedulable for varying values of BCET. For the cases where deadlines are violated, we use polyhedra to try to confirm the witnesses. Our alternative method to confirm non-schedulability uses statistical model-checking (SMC) to generate counter...
Reactor kinetics revisited: a coefficient based model (CBM)
International Nuclear Information System (INIS)
Ratemi, W.M.
2011-01-01
In this paper, a nuclear reactor kinetics model based on the Guelph expansion coefficients calculation (Coefficients-Based Model, CBM) is developed for n groups of delayed neutrons. The accompanying characteristic equation is a polynomial form of the Inhour equation with the same coefficients as the CBM kinetics model. Those coefficients depend on universal abc-values, which are determined by the type of fuel fueling a nuclear reactor. Furthermore, such coefficients are linearly dependent on the inserted reactivity. In this paper, the universal abc-values are presented symbolically, for the first time, as well as with their numerical values for U-235 fueled reactors for one, two, three, and six groups of delayed neutrons. Simulation studies for constant and variable reactivity insertions are made for the CBM kinetics model, and a comparison of the results with numerical solutions of classical kinetics models for one, two, three, and six groups of delayed neutrons is presented. The results show good agreement, especially for single-step insertions of reactivity, with the advantage that the CBM solution does not encounter the stiffness problem that accompanies the numerical solutions of the classical kinetics models. (author)
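The idea of recasting the Inhour equation as a polynomial can be illustrated for a single delayed-neutron group, where one common form of the equation reduces to a quadratic in the inverse period. The parameter values below are illustrative one-group U-235-like numbers, not the universal abc-values of the paper:

```python
import numpy as np

# One delayed-neutron group, illustrative U-235-like data (assumptions)
beta, lam, Lam = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay constant (1/s), generation time (s)
rho = 0.001                              # inserted reactivity (positive step)

# One common form of the Inhour equation, rho = Lam*w + beta*w/(w + lam),
# multiplied through by (w + lam) becomes a polynomial in w:
#   Lam*w**2 + (Lam*lam + beta - rho)*w - rho*lam = 0
coeffs = [Lam, Lam * lam + beta - rho, -rho * lam]
roots = np.roots(coeffs)
w_plus = roots.max()            # the single positive root sets the stable period
print(w_plus, 1.0 / w_plus)     # asymptotic growth rate and reactor period (s)
```

For n delayed groups the same manipulation yields a degree-(n+1) polynomial, which is the structural point of the CBM characteristic equation; a polynomial root finder sidesteps the stiffness issues of integrating the coupled kinetics ODEs directly.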
Running of radiative neutrino masses: the scotogenic model — revisited
Energy Technology Data Exchange (ETDEWEB)
Merle, Alexander; Platscher, Moritz [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany)
2015-11-23
A few years ago, it was shown that effects stemming from renormalisation group running can be quite large in the scotogenic model, where neutrinos obtain their mass only via a 1-loop diagram (or, more generally, in many models in which the light neutrino mass is generated via quantum corrections at loop level). We present a new computation of the renormalisation group equations (RGEs) for the scotogenic model, thereby updating previous results. We discuss the matching in detail, in particular with regard to the different mass spectra possible for the new particles involved. We furthermore develop approximate analytical solutions to the RGEs for an extensive list of illustrative cases, covering all general tendencies that can appear in the model. Comparing them with fully numerical solutions, we give a comprehensive discussion of the running in the scotogenic model. Our approach is mainly top-down, but we also discuss an attempt to get information on the values of the fundamental parameters when inputting the low-energy measured quantities in a bottom-up manner. This work serves as the basis for a full parameter scan of the model, relating its low- and high-energy phenomenology in order to fully exploit the available information.
Revisiting the mouse model of oxygen-induced retinopathy
Directory of Open Access Journals (Sweden)
Kim CB
2016-05-01
Clifford B Kim,1,2 Patricia A D’Amore,2–4 Kip M Connor1,2 (1Angiogenesis Laboratory, Massachusetts Eye and Ear; 2Department of Ophthalmology, Harvard Medical School; 3Schepens Eye Research Institute, Massachusetts Eye and Ear; 4Department of Pathology, Harvard Medical School, Boston, MA, USA) Abstract: Abnormal blood vessel growth in the retina is a hallmark of many retinal diseases, such as retinopathy of prematurity (ROP), proliferative diabetic retinopathy, and the wet form of age-related macular degeneration. In particular, ROP has been an important health concern for physicians since the advent of routine supplemental oxygen therapy for premature neonates more than 70 years ago. Since then, researchers have explored several animal models to better understand ROP and retinal vascular development. Of these models, the mouse model of oxygen-induced retinopathy (OIR) has become the most widely used, and has played a pivotal role in our understanding of retinal angiogenesis and ocular immunology, as well as in the development of groundbreaking therapeutics such as anti-vascular endothelial growth factor injections for wet age-related macular degeneration. Numerous refinements to the model have been made since its inception in the 1950s, and technological advancements have expanded the use of the model across multiple scientific fields. In this review, we explore the historical developments that have led to the mouse OIR model utilized today, essential concepts of OIR, limitations of the model, and a representative selection of key findings from OIR, with particular emphasis on current research progress. Keywords: ROP, OIR, angiogenesis
Energy-economy interactions revisited within a comprehensive sectoral model
Energy Technology Data Exchange (ETDEWEB)
Hanson, D. A.; Laitner, J. A.
2000-07-24
This paper describes a computable general equilibrium (CGE) model with considerable sector and technology detail, the 'All Modular Industry Growth Assessment' (AMIGA) model. It is argued that a detailed model is important to capture and understand the several roles that energy plays within the economy. Fundamental consumer and industrial demands are for the services from energy; hence, energy demand is a derived demand based on the need for heating, cooling, mechanical, electrical, and transportation services. Technologies that provide energy services more efficiently (on a life cycle basis), when adopted, result in increased future output of the economy and higher paths of household consumption. The AMIGA model can examine the effects on energy use and economic output of increases in energy prices (e.g., a carbon charge) and other incentive-based policies or energy-efficiency programs. Energy sectors and sub-sector activities included in the model involve energy extraction, conversion, and transportation. There are business opportunities to produce energy-efficient goods (i.e., appliances, control systems, buildings, automobiles, clean electricity). These activities are represented in the model by characterizing their likely production processes (e.g., lighter-weight motor vehicles). Also, multiple industrial processes can produce the same output but with different technologies and inputs. Secondary recovery processes, i.e., recycling, are examples of these multiple processes. Combined heat and power (CHP) is also represented for energy-intensive industries. Other modules represent residential and commercial building technologies that supply energy services. All sectors of the economy command real resources (capital services and labor).
Massive (p,q)-supersymmetric sigma models revisited
International Nuclear Information System (INIS)
Papadopoulos, G.
1994-06-01
We recently obtained the conditions on the couplings of the general two-dimensional massive sigma-model required by (p,q)-supersymmetry. Here we compute the Poisson bracket algebra of the supersymmetry and central Noether charges, and show that the action is invariant under the automorphism group of this algebra. Surprisingly, for the (4,4) case the automorphism group is always a subgroup of SO(3), rather than SO(4). We also re-analyse the conditions for (2,2) and (4,4) supersymmetry of the zero torsion models without assumptions about the central charge matrix. (orig.)
A Simple Singlet Fermionic Dark-Matter Model Revisited
International Nuclear Information System (INIS)
Qin Hong-Yi; Wang Wen-Yu; Xiong Zhao-Hua
2011-01-01
We evaluate the spin-independent elastic dark matter-nucleon scattering cross section in the framework of the simple singlet fermionic dark matter extension of the standard model and constrain the model parameter space with the following considerations: (i) new dark matter measurements: apart from WMAP and CDMS, the results from the XENON experiment are also used in constraining the model; (ii) a new fitted value of the quark fractions in nucleons: the updated value of f_Ts from recent lattice simulations is much smaller than the previous one and may reduce the scattering rate significantly; (iii) new dark matter annihilation channels: the scenario in which top quark and Higgs pairs are produced by dark matter annihilation was not included in previous works. We find that, unlike in the minimal supersymmetric standard model, the cross section is only reduced by a factor of about 1/4, and dark matter lighter than 100 GeV is not favored by the WMAP, CDMS and XENON experiments. (the physics of elementary particles and fields)
What drives health care expenditure?--Baumol's model of 'unbalanced growth' revisited.
Hartwig, Jochen
2008-05-01
The share of health care expenditure in GDP rises rapidly in virtually all OECD countries, causing increasing concern among politicians and the general public. Yet, economists have to date failed to reach an agreement on what the main determinants of this development are. This paper revisits Baumol's [Baumol, W.J., 1967. Macroeconomics of unbalanced growth: the anatomy of urban crisis. American Economic Review 57 (3), 415-426] model of 'unbalanced growth', showing that the latter offers a ready explanation for the observed inexorable rise in health care expenditure. The main implication of Baumol's model in this context is that health care expenditure is driven by wage increases in excess of productivity growth. This hypothesis is tested empirically using data from a panel of 19 OECD countries. Our tests yield robust evidence in favor of Baumol's theory.
Fowler, Keirnan J. A.; Peel, Murray C.; Western, Andrew W.; Zhang, Lu; Peterson, Tim J.
2016-03-01
Hydrologic models have potential to be useful tools in planning for future climate variability. However, recent literature suggests that the current generation of conceptual rainfall runoff models tend to underestimate the sensitivity of runoff to a given change in rainfall, leading to poor performance when evaluated over multiyear droughts. This research revisited this conclusion, investigating whether the observed poor performance could be due to insufficient model calibration and evaluation techniques. We applied an approach based on Pareto optimality to explore trade-offs between model performance in different climatic conditions. Five conceptual rainfall runoff model structures were tested in 86 catchments in Australia, for a total of 430 Pareto analyses. The Pareto results were then compared with results from a commonly used model calibration and evaluation method, the Differential Split Sample Test. We found that the latter often missed potentially promising parameter sets within a given model structure, giving a false negative impression of the capabilities of the model. This suggests that models may be more capable under changing climatic conditions than previously thought. Of the 282[347] cases of apparent model failure under the split sample test using the lower [higher] of two model performance criteria trialed, 155[120] were false negatives. We discuss potential causes of remaining model failures, including the role of data errors. Although the Pareto approach proved useful, our aim was not to suggest an alternative calibration strategy, but to critically assess existing methods of model calibration and evaluation. We recommend caution when interpreting split sample results.
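The Pareto analysis described above amounts to keeping the non-dominated parameter sets across performance scores computed under different climatic conditions. A minimal sketch of that screening step (with hypothetical two-objective skill scores, not data from the study's 86 catchments) is:

```python
import numpy as np

def pareto_front(scores):
    """Return a boolean mask of non-dominated rows, assuming higher is
    better in every column (e.g. model skill in wet and dry periods)."""
    scores = np.asarray(scores, dtype=float)
    keep = np.ones(len(scores), dtype=bool)
    for i in range(len(scores)):
        # row i is dominated if some other row is >= everywhere and > somewhere
        dominators = np.all(scores >= scores[i], axis=1) & \
                     np.any(scores > scores[i], axis=1)
        if dominators.any():
            keep[i] = False
    return keep

# Hypothetical skill of five parameter sets in (wet, dry) climate periods
skill = [(0.90, 0.20), (0.85, 0.55), (0.70, 0.60), (0.60, 0.58), (0.95, 0.10)]
mask = pareto_front(skill)
print([s for s, m in zip(skill, mask) if m])
```

A split-sample test that calibrates on one period alone can land on a single point of this front and miss parameter sets that trade a little wet-period skill for much better dry-period skill, which is the mechanism behind the false negatives reported above.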
Time-independent models of asset returns revisited
Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.
2000-07-01
In this study we investigate various well-known time-independent models of asset returns: the simple normal distribution, Student t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion jump, and compound normal distribution. For this we use Standard and Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look visually more Lévy-type distributed than it is. This phenomenon is especially evident in the artificial data, which has been generated by an inflated random walk process.
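The shortfall of the simple normal model comes down to tail weight. A quick illustration (using a Student t-distribution as the fat-tailed alternative, with illustrative parameters rather than fitted ones) compares the excess kurtosis of simulated samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a Gaussian, positive for fat tails."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2 - 3.0

normal = rng.standard_normal(n)          # the 'simple normal' benchmark
student = rng.standard_t(df=5, size=n)   # a fat-tailed alternative

print(excess_kurtosis(normal))    # near zero
print(excess_kurtosis(student))   # clearly positive: heavier tails
```

Real index returns typically behave like the second sample, which is why every model in the list except the simple normal (all of which allow excess tail mass) gives an acceptable fit.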
Goodwin accelerator model revisited with fixed time delays
Matsumoto, Akio; Merlone, Ugo; Szidarovszky, Ferenc
2018-05-01
The dynamics of Goodwin's accelerator business cycle model is reconsidered. The model is characterized by a nonlinear accelerator and an investment time delay. The role of the nonlinearity in the birth of persistent oscillations is fully discussed in the existing literature. On the other hand, little about the role of the delay has yet been revealed. The purpose of this paper is to show that the delay really matters. In the original framework of Goodwin [6], it is first demonstrated that there is a threshold value of the delay: limit cycles arise for values smaller than the threshold and sawtooth oscillations for larger values. In the extended framework, in which a consumption or saving delay is introduced in addition to the investment delay, three main results are demonstrated under the assumption of identical lengths of the investment and consumption delays. First, the dynamics with consumption delay is basically the same as that of the single-delay model. Second, in the case of saving delay, the steady state can coexist with the stable and unstable limit cycles in the stable case. Third, in the unstable case, there is an interval of delay in which the limit cycle or the sawtooth oscillation emerges depending on the choice of the constant initial function.
Hydraulic modeling support for conflict analysis: The Manayunk canal revisited
International Nuclear Information System (INIS)
Chadderton, R.A.; Traver, R.G.; Rao, J.N.
1992-01-01
This paper presents a study which used a standard, hydraulic computer model to generate detailed design information to support conflict analysis of a water resource use issue. As an extension of previous studies, the conflict analysis in this case included several scenarios for stability analysis - all of which reached the conclusion that compromising, shared access to the water resources available would result in the most benefits to society. This expected equilibrium outcome was found to maximize benefit-cost estimates. 17 refs., 1 fig., 2 tabs
Foam Assisted WAG, Snorre Revisit with New Foam Screening Model
DEFF Research Database (Denmark)
Spirov, Pavel; Rudyk, Svetlana Nikolayevna; Khan, Arif
2012-01-01
This study deals with a simulation model of the Foam Assisted Water Alternating Gas (FAWAG) method that has been implemented in two Norwegian reservoirs. Studied in a number of pilot projects, the method proved successful, but field-scale simulation was never properly understood. New phenomenological ... of the simulation contributes to more precise planning of the schedule of water and gas injection, prediction of the injection results, and evaluation of the method's efficiency. The testing of the surfactant properties allows making a grounded choice of surfactant to use. The analysis of the history match gives insight ...
Revisiting a model-independent dark energy reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)
2012-09-15
In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves. This is particularly true for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow one to regard them on the same quality basis as SNeIa. We find that there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)
The Zipf Law revisited: An evolutionary model of emerging classification
Energy Technology Data Exchange (ETDEWEB)
Levitin, L.B. [Boston Univ., MA (United States); Schapiro, B. [TINA, Brandenburg (Germany); Perlovsky, L. [NRC, Wakefield, MA (United States)
1996-12-31
Zipf's Law is a remarkable rank-frequency relationship observed in linguistics (the frequencies of the use of words are approximately inversely proportional to their ranks in the decreasing frequency order) as well as in the behavior of many complex systems of surprisingly different nature. We suggest an evolutionary model of emerging classification of objects into classes corresponding to concepts and denoted by words. The evolution of the system is derived from two basic assumptions: first, the probability to recognize an object as belonging to a known class is proportional to the number of objects in this class already recognized, and, second, there exists a small probability to observe an object that requires creation of a new class ("mutation" that gives birth to a new "species"). It is shown that the populations of classes in such a system obey the Zipf Law provided that the rate of emergence of new classes is small. The model leads also to the emergence of a second-tier structure of "super-classes": groups of classes with almost equal populations.
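The two assumptions translate directly into a preferential-attachment simulation. The sketch below (with illustrative parameters, not values from the paper) founds a new class with a small probability and otherwise grows an existing class in proportion to its current population, then checks the rank-frequency slope:

```python
import numpy as np

def simulate_classes(steps, alpha, rng):
    """Grow a population of classified objects: with probability alpha an
    object founds a new class ('mutation'); otherwise it is recognized as
    belonging to a known class with probability proportional to that
    class's current population."""
    labels = [0]                 # class label of each object seen so far
    n_classes = 1
    for _ in range(steps):
        if rng.random() < alpha:
            labels.append(n_classes)   # a brand-new class is born
            n_classes += 1
        else:
            # copying a uniformly chosen earlier object selects its class
            # with probability proportional to the class population
            labels.append(labels[rng.integers(len(labels))])
    return np.bincount(labels)

rng = np.random.default_rng(42)
counts = np.sort(simulate_classes(100_000, 0.02, rng))[::-1]
ranks = np.arange(1, len(counts) + 1)
slope = np.polyfit(np.log(ranks[:50]), np.log(counts[:50]), 1)[0]
print(len(counts), slope)   # log-log rank-frequency slope close to -1
```

The near-unit slope for a small mutation rate is exactly the Zipf regime described above; raising alpha flattens the distribution as class populations equalize.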
Critical rotation of general-relativistic polytropic models revisited
Geroyannis, V.; Karageorgopoulos, V.
2013-09-01
We develop a perturbation method for computing the critical rotational parameter as a function of the equatorial radius of a rigidly rotating polytropic model in the "post-Newtonian approximation" (PNA). We treat our models as "initial value problems" (IVP) of ordinary differential equations in the complex plane. The computations are carried out by the code dcrkf54.f95 (Geroyannis and Valvi 2012 [P1]; a modified Runge-Kutta-Fehlberg code of fourth and fifth order for solving initial value problems in the complex plane). Such a complex-plane treatment removes the syndromes appearing in this particular family of IVPs (see e.g. P1, Sec. 3) and allows continuation of the numerical integrations beyond the surface of the star. Thus all the required values of the Lane-Emden function(s) in the post-Newtonian approximation are calculated by interpolation (thus avoiding any extrapolation). An interesting point is that, in our computations, we take into account the complete correction due to the gravitational term, and this issue is a remarkable difference compared to the classical PNA. We solve for the generalized density as a function of the equatorial radius and find the critical rotational parameter. Our computations are extended to certain other physical characteristics (such as mass, angular momentum, and rotational kinetic energy). We find that our method yields results comparable with those of other reliable methods. REFERENCE: V.S. Geroyannis and F.N. Valvi 2012, International Journal of Modern Physics C, 23, No 5, 1250038:1-15.
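As a simplified stand-in for the computation above, the classical Newtonian Lane-Emden problem can be integrated on the real axis with a fixed-step RK4 scheme (the paper's post-Newtonian corrections and complex-plane treatment with dcrkf54.f95 are not reproduced here):

```python
import numpy as np

def lane_emden(n, h=1e-3, xi_max=10.0):
    """Integrate the classical Lane-Emden equation
        theta'' + (2/xi) theta' + theta**n = 0,  theta(0)=1, theta'(0)=0,
    with RK4 and return the first zero xi_1 (the dimensionless radius)."""
    def rhs(xi, y):
        theta, dtheta = y
        return np.array([dtheta, -abs(theta)**n - 2.0 * dtheta / xi])
    # start slightly off-centre via the series theta = 1 - xi**2/6 + ...
    xi = 1e-6
    y = np.array([1.0 - xi**2 / 6.0, -xi / 3.0])
    while y[0] > 0.0 and xi < xi_max:
        k1 = rhs(xi, y)
        k2 = rhs(xi + h / 2, y + h / 2 * k1)
        k3 = rhs(xi + h / 2, y + h / 2 * k2)
        k4 = rhs(xi + h, y + h * k3)
        y_new = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if y_new[0] <= 0.0:   # linear interpolation to the surface
            return xi + h * y[0] / (y[0] - y_new[0])
        y, xi = y_new, xi + h
    return xi

print(lane_emden(1.0))   # exact value is pi for n = 1
print(lane_emden(1.5))   # classical n = 3/2 polytrope radius, ~3.654
```

The "syndromes" mentioned in the abstract include the breakdown of real-axis integration at the surface, where theta vanishes and fractional powers become problematic; the complex-plane formulation of the paper avoids exactly this, whereas the sketch simply stops at the first zero.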
The hierarchy problem of the electroweak standard model revisited
International Nuclear Information System (INIS)
Jegerlehner, Fred
2013-05-01
A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase, a light Higgs turns out to be natural, as it is self-protected and self-tuned by the Higgs mechanism. This means that the scalar Higgs need not be protected by any extra symmetry, specifically supersymmetry, in order not to be much heavier than the other SM particles, which are protected by gauge or chiral symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for supersymmetric extensions of the SM; on the contrary, these effects play an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation, as supported by observation.
Hubbert's Oil Peak Revisited by a Simulation Model
International Nuclear Information System (INIS)
Giraud, P.N.; Sutter, A.; Denis, T.; Leonard, C.
2010-01-01
As conventional oil reserves are declining, the debate on the oil production peak has become a burning issue. An increasing number of papers refer to Hubbert's peak oil theory to forecast the date of the production peak, both at regional and world levels. However, in our view, this theory lacks micro-economic foundations. Notably, it does not assume that exploration and production decisions in the oil industry depend on market prices. In an attempt to overcome these shortcomings, we have built an adaptive model, accounting for the behavior of one agent, standing for the competitive exploration-production industry, subject to incomplete but improving information on the remaining reserves. Our work yields challenging results on the reasons for a Hubbert-type oil peak, lying mainly 'above the ground', both at regional and world levels, and on the shape of the production and marginal cost trajectories. (authors)
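For reference, the baseline Hubbert curve that the adaptive model departs from takes cumulative production Q(t) to be logistic, so that annual production P = dQ/dt is a symmetric bell peaking when half of the ultimate recoverable resources (URR) has been produced. A minimal sketch with illustrative parameters:

```python
import math

def hubbert_production(t, urr, k, t_peak):
    """Annual production P(t) = dQ/dt when cumulative production Q(t)
    follows a logistic curve with ultimate recoverable resources `urr`
    and steepness `k`. P is the bell-shaped Hubbert curve; its maximum
    k*urr/4 occurs at t_peak, where Q = urr/2."""
    q = urr / (1.0 + math.exp(-k * (t - t_peak)))
    return k * q * (1.0 - q / urr)
```

Note that this curve depends only on geology-like parameters (urr, k, t_peak); there is no price feedback, which is exactly the micro-economic shortcoming the abstract criticizes.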
The hierarchy problem of the electroweak standard model revisited
Energy Technology Data Exchange (ETDEWEB)
Jegerlehner, Fred [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2013-05-15
A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase a light Higgs turns out to be natural as it is self-protected and self-tuned by the Higgs mechanism. It means that the scalar Higgs needs not be protected by any extra symmetry, specifically super symmetry, in order not to be much heavier than the other SM particles which are protected by gauge- or chiral-symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for a super symmetric extensions of the SM, but in contrast plays an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation as supported by observation.
Temperature Effect on Micelle Formation: Molecular Thermodynamic Model Revisited.
Khoshnood, Atefeh; Lukanov, Boris; Firoozabadi, Abbas
2016-03-08
Temperature affects the aggregation of macromolecules such as surfactants, polymers, and proteins in aqueous solutions. The effect on the critical micelle concentration (CMC) is often nonmonotonic. In this work, the effect of temperature on the micellization of ionic and nonionic surfactants in aqueous solutions is studied using a molecular thermodynamic model. Previous studies based on this technique have predicted monotonic behavior for ionic surfactants. Our investigation shows that the choice of tail transfer energy to describe the hydrophobic effect between the surfactant tails and the polar solvent molecules plays a key role in the predicted CMC. We modify the tail transfer energy by taking into account the effect of the surfactant head on the neighboring methylene group. The modification improves the description of the CMC and the predicted micellar size for aqueous solutions of sodium n-alkyl sulfate, dodecyl trimethylammonium bromide (DTAB), and n-alkyl polyoxyethylene. The new tail transfer energy describes the nonmonotonic behavior of CMC versus temperature. In the DTAB-water system, we redefine the head size by including the methylene group, next to the nitrogen, in the head. The change in the head size along with our modified tail transfer energy improves the CMC and aggregation size prediction significantly. Tail transfer is a dominant energy contribution in micellar and microemulsion systems. It also promotes the adsorption of surfactants at fluid-fluid interfaces and affects the formation of adsorbed layers at fluid-solid interfaces. Our proposed modifications have direct applications in the thermodynamic modeling of the effect of temperature on molecular aggregation, both in the bulk and at the interfaces.
Revisiting non-Gaussianity from non-attractor inflation models
Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei
2018-05-01
Non-attractor inflation is known as the only single-field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a slow-roll attractor phase. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instantaneous and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition, such as in the case of a smooth transition or some sharp transition scenarios, the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In the extremal case of a sharp transition where the super-horizon modes freeze immediately after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.
Revisiting of Stommel's model for the understanding of the abrupt climate change
International Nuclear Information System (INIS)
Scatamacchia, R.; Purini, R.; Rafanelli, C.
2010-01-01
Despite the enormous number of papers devoted to modelling climate change, the pioneering Stommel paper (1961) remains a valid tool for understanding the basic mechanism that governs abrupt climate change, i.e. the existence of multiple equilibria in the governing non-linear equations. Using non-dimensional quantities, Stommel did not provide any explicit information about the temporal scale affecting the process under examination when the control parameters are varied. On the basis of this consideration, the present paper revisits the Stommel theory, putting some emphasis on the quantitative estimate of how variations of the control parameters modify the fundamental driver of climate change, i.e. the thermohaline circulation.
Revisiting directed flow in relativistic heavy-ion collisions from a multiphase transport model
Guo, Chong-Qiang; Zhang, Chun-Jian; Xu, Jun
2017-12-01
We have revisited several interesting questions on how the rapidity-odd directed flow is developed in relativistic 197Au+197Au collisions at √{s_{NN}} = 200 and 39 GeV based on a multiphase transport model. As the partonic phase evolves with time, the slope of the parton directed flow in the midrapidity region changes from negative to positive as a result of the later dynamics at 200 GeV, while it remains negative at 39 GeV due to the shorter lifetime of the partonic phase. The directed flow splitting for various quark species due to their different initial eccentricities is observed at 39 GeV, while the splitting is very small at 200 GeV. From a dynamical coalescence algorithm with Wigner functions, we find that the directed flow of hadrons is a result of competition between the coalescence in momentum and coordinate space as well as further modifications by the hadronic rescatterings.
Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim
2015-06-21
The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation at thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation at thin biological films, fringes and chemical absorbance can in good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated by simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
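A minimal numpy sketch of the additive-fringe idea: since, per the abstract, fringes and chemical absorbance are approximately additive for thin biological films, both can be fitted jointly by least squares in EMSC style and the baseline and fringe terms subtracted. The sinusoidal fringe basis and the assumption of a known fringe frequency are simplifications introduced here, not the paper's full method:

```python
import numpy as np

def emsc_fringe_correct(nu, spectrum, reference, fringe_freq):
    """Least-squares EMSC-style correction: model the measured spectrum as
    constant + slope*nu + scale*reference + sin/cos fringe terms at a
    given fringe frequency, then return the baseline- and fringe-free
    spectrum normalized by the fitted scale. (In practice the fringe
    frequency would be estimated, e.g. from the film thickness.)"""
    X = np.column_stack([
        np.ones_like(nu),                      # constant baseline
        nu,                                    # linear baseline
        reference,                             # chemical reference spectrum
        np.sin(2 * np.pi * fringe_freq * nu),  # additive fringe (sine)
        np.cos(2 * np.pi * fringe_freq * nu),  # additive fringe (cosine)
    ])
    coef, *_ = np.linalg.lstsq(X, spectrum, rcond=None)
    baseline = X[:, :2] @ coef[:2]
    fringe = X[:, 3:] @ coef[3:]
    return (spectrum - baseline - fringe) / coef[2]

# Synthetic demo: a Gaussian absorption band distorted by an offset and a fringe
nu = np.linspace(1000.0, 1800.0, 400)
reference = np.exp(-0.5 * ((nu - 1650.0) / 20.0) ** 2)
measured = 0.8 * reference + 0.1 + 0.02 * np.sin(2 * np.pi * 0.05 * nu)
corrected = emsc_fringe_correct(nu, measured, reference, 0.05)
```

Because the synthetic spectrum lies exactly in the span of the model terms, the correction recovers the undistorted band, illustrating why the additive treatment preserves the chemical information.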
Tauber, Sean; Navarro, Daniel J; Perfors, Amy; Steyvers, Mark
2017-07-01
Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hömberg, D.; Patacchini, F. S.; Sakamoto, K.; Zimmer, J.
2016-01-01
The classical Johnson-Mehl-Avrami-Kolmogorov approach for nucleation and growth models of diffusive phase transitions is revisited and applied to model the growth of ferrite in multiphase steels. For the prediction of mechanical properties of such steels, a deeper knowledge of the grain structure is essential. To this end, a Fokker-Planck evolution law for the volume distribution of ferrite grains is developed and shown to exhibit a log-normally distributed solution. Numerical parameter studies...
Revisiting the mesoscopic Termonia and Smith model for deformation of polymers
International Nuclear Information System (INIS)
Krishna Reddy, B; Basu, Sumit; Estevez, Rafael
2008-01-01
Mesoscopic models for polymers have the potential to link macromolecular properties with the mechanical behaviour without being too expensive computationally. An interesting, popular and rather simple model to this end was proposed by Termonia and Smith (1987 Macromolecules 20 835–8). In this model the macromolecular ensemble is viewed as a collection of two-dimensional self-avoiding random walks on a regular lattice whose lattice points represent entanglements. The load is borne by members representing van der Waals bonds as well as macromolecular strands between two entanglement points. Polymers simulated via this model exhibit remarkable qualitative similarity to real polymers with respect to their molecular weight, entanglement spacing, strain rate and temperature dependence. In this work, we revisit this model and present a detailed reformulation within the framework of a finite deformation finite element scheme. The physical origins of each of the parameters in the model are investigated, and inherent assumptions in the model which contribute to its success are critically probed.
An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited
International Nuclear Information System (INIS)
Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P
2017-01-01
An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly using a scheme that involves only a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example the ground state. Some low-lying level energies of the model for several sets of parameters are calculated, of which one set of results is compared to that obtained from Braak's recently proposed exact solution. It is shown that the derivative of the entanglement measure, defined in terms of the reduced von Neumann entropy, with respect to the coupling parameter does reach its maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as that in the quantum Rabi model. (paper)
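For comparison with any progressive scheme, the quantum Rabi model can also be diagonalized by brute force in a truncated Fock basis. The sketch below uses the standard Hamiltonian H = ω a†a + (Ω/2)σz + λ σx(a + a†) with illustrative parameters; it is a baseline check, not the paper's method:

```python
import numpy as np

def rabi_ground_energy(omega, Omega, lam, n_max=40):
    """Ground-state energy of the quantum Rabi model
        H = omega * a†a + (Omega/2) * sigma_z + lam * sigma_x * (a + a†)
    by dense diagonalization in a Fock basis truncated at n_max photons."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)   # annihilation operator
    n_op = a.T @ a                                  # photon number operator
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    I_f = np.eye(n_max)
    I_s = np.eye(2)
    H = (omega * np.kron(n_op, I_s)
         + 0.5 * Omega * np.kron(I_f, sz)
         + lam * np.kron(a + a.T, sx))
    return np.linalg.eigvalsh(H).min()
```

At zero coupling the ground energy is exactly -Ω/2 (photon vacuum, spin down), and switching on the coupling lowers it, which gives two quick sanity checks on the truncation.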
Revisiting the EC/CMB model for extragalactic large scale jets
Lucchini, M.; Tavecchio, F.; Ghisellini, G.
2017-04-01
One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of flat-spectrum radio quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the cosmic microwave background (EC/CMB) as the mechanism responsible for the high-energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work, we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high-energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts and would have been missed in all previous X-ray surveys due to selection effects.
Revisiting the quasi-particle model of the quark-gluon plasma
International Nuclear Information System (INIS)
Bannur, V.M.
2007-01-01
The quasi-particle model of the quark-gluon plasma (QGP) is revisited here with a new method, different from earlier studies, without the need for a temperature-dependent bag constant or other effects such as confinement, effective degrees of freedom, etc. Our model has only one system-dependent parameter and shows a surprisingly good fit to the lattice results for the gluon plasma, and for 2-flavor, 3-flavor and (2+1)-flavor QGP. The basic idea is first to evaluate the energy density ε from the grand partition function of the quasi-particle QGP, and then derive all other thermodynamic functions from ε. Quasi-particles are assumed to have a temperature-dependent mass equal to the plasma frequency. Energy density, pressure and speed of sound at zero chemical potential are evaluated and compared with the available lattice data. We further extend the model to finite chemical potential, without any new parameters, to obtain the quark density, quark susceptibility, etc., and the model fits the lattice results on 2-flavor QGP very well. (orig.)
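The first step described in the abstract, evaluating the energy density of bosonic quasi-particles with a temperature-dependent mass, amounts to a one-dimensional momentum integral. A toy sketch in natural units follows; the linear mass law m(T) = mass_coeff * T is an assumed placeholder standing in for the paper's plasma-frequency expression, and the degeneracy factor is merely illustrative:

```python
import math

def energy_density(T, g_dof=16.0, mass_coeff=1.0, k_max_factor=30.0, n_steps=2000):
    """Bosonic quasi-particle energy density in natural units,
        eps = g/(2 pi^2) * Int_0^inf dk k^2 E(k) / (exp(E/T) - 1),
    with E = sqrt(k^2 + m(T)^2) and a toy mass law m(T) = mass_coeff * T.
    Composite Simpson's rule on [0, k_max_factor*T]; n_steps must be even."""
    m = mass_coeff * T
    k_max = k_max_factor * T
    h = k_max / n_steps
    def f(k):
        if k == 0.0:
            return 0.0                      # integrand vanishes at k = 0
        E = math.sqrt(k * k + m * m)
        return k * k * E / math.expm1(E / T)
    s = f(0.0) + f(k_max)
    for i in range(1, n_steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return g_dof / (2 * math.pi**2) * s * h / 3
```

Two checks: with m proportional to T the result scales exactly as T^4, and in the massless limit it reduces to the Stefan-Boltzmann value g * pi^2 * T^4 / 30.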
A goodness-of-fit test for occupancy models with correlated within-season revisits
Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.
2016-01-01
Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
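The join count summary can be illustrated directly: count adjacent 1-1 pairs ("joins") in each binary detection history and compare the total to its expectation under independent detections. This is a deliberately simplified version; the actual test conditions on the fitted occupancy model rather than on a known detection probability:

```python
def join_count(history):
    """Number of adjacent 1-1 pairs ('joins') in one binary detection history."""
    return sum(1 for a, b in zip(history, history[1:]) if a == 1 and b == 1)

def total_join_count(histories):
    """Join count summary statistic, summed over all surveyed sites."""
    return sum(join_count(h) for h in histories)

def expected_joins_if_independent(n_sites, n_surveys, p):
    """Expected total 1-1 joins if every survey at an occupied site is an
    independent Bernoulli(p) detection (the null the test checks against):
    each of the (n_surveys - 1) adjacent pairs is 1-1 with probability p^2."""
    return n_sites * (n_surveys - 1) * p * p
```

An observed total well above this expectation signals positive serial correlation in the detections, exactly the violation the abstract targets.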
A classical model wind turbine wake “blind test” revisited by remote sensing lidars
DEFF Research Database (Denmark)
Sjöholm, Mikael; Angelou, Nikolas; Nielsen, Morten Busk
2017-01-01
One of the classical model wind turbine wake “blind test” experiments1 conducted in the boundary-layer wind tunnel at NTNU in Trondheim and used for benchmarking of numerical flow models has been revisited by remote sensing lidars in a joint experiment called “Lidars For Wind Tunnels” (L4WT) under...... was D=0.894 m and it was designed for a tip speed ratio (TSR) of 6. However, the TSRs used were 3, 6, and 10 at a free-stream velocity of 10 m/s. Due to geometrical constraints imposed by for instance the locations of the wind tunnel windows, all measurements were performed in the very same vertical...... cross-section of the tunnel and the various down-stream distances of the wake, i.e. 1D, 3D, and 5D were achieved by re-positioning the turbine. The approach used allows for unique studies of the influence of the inherent lidar spatial filtering on previously both experimentally and numerically well...
Energy Technology Data Exchange (ETDEWEB)
Galindo-Nava, E.I., E-mail: eg375@cam.ac.uk; Rae, C.M.F.
2016-01-10
A new approach for modelling dislocation creep during primary and secondary creep in FCC metals is proposed. The Orowan equation and dislocation behaviour at the grain scale are revisited to include the effects of different microstructures such as the grain size and solute atoms. Dislocation activity is proposed to follow a jog-diffusion law. It is shown that the activation energy for cross-slip E{sub cs} controls dislocation mobility and the strain increments during secondary creep. This is confirmed by successfully comparing E{sub cs} with the experimentally determined activation energy during secondary creep in 5 FCC metals. It is shown that the inverse relationship between the grain size and dislocation creep is attributed to the higher number of strain increments at the grain level dominating their magnitude as the grain size decreases. An alternative approach describing solid solution strengthening effects in nickel alloys is presented, where the dislocation mobility is reduced by dislocation pinning around solute atoms. An analysis on the solid solution strengthening effects of typical elements employed in Ni-base superalloys is also discussed. The model results are validated against measurements of Cu, Ni, Ti and 4 Ni-base alloys for wide deformation conditions and different grain sizes.
Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond
Energy Technology Data Exchange (ETDEWEB)
Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2008-11-15
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without the constraints from the direct Higgs searches at LEP and the Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M{sub H}=116.4{sup +18.3}{sub -1.3} GeV, with the 2{sigma} allowed region [114,145] GeV and the 3{sigma} allowed regions [113,168] and [180,225] GeV.
Revisiting directed flow in relativistic heavy-ion collisions from a multiphase transport model
Energy Technology Data Exchange (ETDEWEB)
Guo, Chong-Qiang; Zhang, Chun-Jian [Chinese Academy of Sciences, Shanghai Institute of Applied Physics, Shanghai (China); University of Chinese Academy of Sciences, Beijing (China); Xu, Jun [Chinese Academy of Sciences, Shanghai Institute of Applied Physics, Shanghai (China)
2017-12-15
We have revisited several interesting questions on how the rapidity-odd directed flow is developed in relativistic {sup 197}Au + {sup 197}Au collisions at √(s{sub NN}) = 200 and 39 GeV based on a multiphase transport model. As the partonic phase evolves with time, the slope of the parton directed flow in the midrapidity region changes from negative to positive as a result of the later dynamics at 200 GeV, while it remains negative at 39 GeV due to the shorter lifetime of the partonic phase. The directed flow splitting for various quark species due to their different initial eccentricities is observed at 39 GeV, while the splitting is very small at 200 GeV. From a dynamical coalescence algorithm with Wigner functions, we find that the directed flow of hadrons is a result of competition between the coalescence in momentum and coordinate space as well as further modifications by the hadronic rescatterings. (orig.)
Cerezo, Javier; Santoro, Fabrizio
2016-10-11
Vertical models for the simulation of spectroscopic line shapes expand the potential energy surface (PES) of the final state around the equilibrium geometry of the initial state. These models provide, in principle, a better approximation of the region of the band maximum. In contrast, adiabatic models expand each PES around its own minimum. In the harmonic approximation, when the minimum energy structures of the two electronic states are connected by large structural displacements, adiabatic models can break down and are outperformed by vertical models. However, the practical application of vertical models faces the issues related to the necessity to perform a frequency analysis at a nonstationary point. In this contribution we revisit vertical models in harmonic approximation adopting both Cartesian (x) and valence internal curvilinear coordinates (s). We show that when x coordinates are used, the vibrational analysis at nonstationary points leads to a deficient description of low-frequency modes, for which spurious imaginary frequencies may even appear. This issue is solved when s coordinates are adopted. It is however necessary to account for the second derivative of s with respect to x, which here we compute analytically. We compare the performance of the vertical model in the s-frame with respect to adiabatic models and previously proposed vertical models in the x- or Q1-frame, where Q1 are the normal coordinates of the initial state computed as combination of Cartesian coordinates. We show that for rigid molecules the vertical approach in the s-frame provides a description of the final state very close to the adiabatic picture. For sizable displacements it is a solid alternative to adiabatic models, and it is not affected by the issues of vertical models in the x- and Q1-frames, which mainly arise when temperature effects are included. In principle the G matrix depends on s, and this creates nonorthogonality problems of the Duschinsky matrix connecting the normal
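A one-dimensional caricature of the vertical-versus-adiabatic distinction: expand a Morse-like final-state potential either around its own minimum (adiabatic) or around the initial-state geometry (vertical). All numbers are illustrative. Note that the curvature at the nonstationary point comes out negative, a 1D analogue of the spurious imaginary frequencies the abstract discusses:

```python
import math

# Morse-like final-state potential V(x) = D * (1 - exp(-a*(x - x_min)))^2
D, a, x_min = 1.0, 1.0, 0.0

def V(x):
    return D * (1.0 - math.exp(-a * (x - x_min))) ** 2

def dV(x):   # analytic first derivative
    e = math.exp(-a * (x - x_min))
    return 2.0 * D * a * e * (1.0 - e)

def d2V(x):  # analytic second derivative (curvature)
    e = math.exp(-a * (x - x_min))
    return 2.0 * D * a * a * e * (2.0 * e - 1.0)

x0 = 1.2     # initial-state geometry, displaced far from the final-state minimum

def V_adiabatic(x):
    """Harmonic expansion around the final-state minimum x_min."""
    return V(x_min) + 0.5 * d2V(x_min) * (x - x_min) ** 2

def V_vertical(x):
    """Harmonic expansion (with gradient) around the initial geometry x0,
    i.e. around a nonstationary point of the final-state PES."""
    return V(x0) + dV(x0) * (x - x0) + 0.5 * d2V(x0) * (x - x0) ** 2
```

Near x0, where the band maximum is probed, the vertical expansion tracks the true potential far better than the adiabatic one, even though its curvature d2V(x0) is negative there.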
Directory of Open Access Journals (Sweden)
Laura Casas
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene, called 'N', has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.
Casas, Laura; Szűcs, Réka; Vij, Shubha; Goh, Chin Heng; Kathiresan, Purushothaman; Németh, Sándor; Jeney, Zsigmond; Bercsényi, Miklós; Orbán, László
2013-01-01
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect) probably due to a concerted action of multiple pathways involved in scale formation. 2013 Casas et al.
Casas, Laura
2013-12-30
The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect) probably due to a concerted action of multiple pathways involved in scale formation. 2013 Casas et al.
Tseng, W. L.; Johnson, R. E.; Tucker, O. J.; Perry, M. E.; Ip, W. H.
2017-12-01
During the Cassini Grand Finale mission, the spacecraft made the first in-situ measurements of Saturn's upper atmosphere and its rings, providing critical information for understanding the coupling dynamics between the main rings and the Saturnian system. The ring atmosphere is the source of neutrals (i.e., O2, H2, H; Tseng et al., 2010; 2013a), generated primarily by photolytic decomposition of water ice (Johnson et al., 2006), and of plasma (i.e., O2+ and H2+; Tseng et al., 2011) in the Saturnian magnetosphere. In addition, the main rings interact strongly with Saturn's atmosphere and ionosphere (i.e., as a source of oxygen into Saturn's upper atmosphere and/or the "ring rain" of O'Donoghue et al., 2013). Furthermore, the near-ring plasma environment is complicated by neutrals from both the seasonally dependent ring atmosphere and the Enceladus torus (Tseng et al., 2013b) and, possibly, by small grains from the main and tenuous F and G rings (Johnson et al., 2017). The data now coming from the Cassini Grand Finale mission already shed light on the dominant physics and chemistry in this region of Saturn's magnetosphere, for example the presence of carbonaceous material from meteorite impacts in the main rings and the similar distributions of the gas species in the ring atmosphere. We will revisit the details of our ring atmosphere/ionosphere model to study open questions such as the source mechanism for the organic material and the neutral-grain-plasma interaction processes.
Seidl, Roman; Barthel, Roland
2016-04-01
Interdisciplinary scientific and societal knowledge plays an increasingly important role in global change research. In the field of water resources, too, interdisciplinarity and cooperation with stakeholders from outside academia have been recognized as important. In this contribution, we revisit an integrated regional modelling system (DANUBIA), which was developed by an interdisciplinary team of researchers and relied on stakeholder participation in the framework of the GLOWA-Danube project from 2001 to 2011 (Mauser and Prasch 2016). As the model was developed before the current increase in literature on participatory modelling and interdisciplinarity, we ask how a socio-hydrology approach would have helped and in what way it would have made the work different. The present contribution firstly presents the interdisciplinary concept of DANUBIA, mainly focusing on the integration of human behaviour in a spatially explicit, process-based numerical modelling system (Roland Barthel, Janisch, Schwarz, Trifkovic, Nickel, Schulz, and Mauser 2008; R. Barthel, Nickel, Meleg, Trifkovic, and Braun 2005). Secondly, we compare the approaches to interdisciplinarity in GLOWA-Danube with concepts and ideas presented by socio-hydrology. Thirdly, we frame DANUBIA and a review of key literature on socio-hydrology in the context of a survey among hydrologists (N = 184). This discussion is used to highlight gaps and opportunities of the socio-hydrology approach. We show that the interdisciplinary aspect of the project and the participatory process of stakeholder integration in DANUBIA were not entirely successful, but important insights were gained and important lessons were learnt. Against the background of these experiences, we feel that socio-hydrology, in its current state, still lacks a plan for knowledge integration. Moreover, we consider it necessary that socio-hydrology take into account the lessons learnt from these earlier examples of knowledge integration.
Court, Deborah
1999-01-01
Revisits and reviews Imre Lakatos' ideas on "Falsification and the Methodology of Scientific Research Programmes." Suggests that Lakatos' framework offers an insightful way of looking at the relationship between theory and research that is relevant not only for evaluating research programs in theoretical physics, but in the social…
DEFF Research Database (Denmark)
Hertzum, Morten
1994-01-01
: (1) the text model, also known as the inverted file approach, (2) the hypertext model, and (3) the relational model. In the design of the relational model changeability was a key consideration, but more often it is sacrificed to save development resources or improve performance. As it is not uncommon...... to see successful TSARS exist for 15-20 years and be subject to manifold changes during their lifetime, it is the relational model which is considered for use in the unified toolkit. It seems as if the relational model can be enhanced to incorporate the text model and the hypertext model...
International Nuclear Information System (INIS)
Basic, Ivan; Nadramija, Damir; Flajslik, Mario; Amic, Dragan; Lucic, Bono
2007-01-01
Several quantitative structure-activity studies of this data set, containing 107 HEPT derivatives, have been performed since 1997, using the same set of molecules but (more or less) different classes of molecular descriptors. Multivariate Regression (MR) and Artificial Neural Network (ANN) models were developed, and in each study the authors concluded that ANN models are superior to MR ones. We re-calculated the multivariate regression models for this set of molecules using the same set of descriptors and compared our results with the previous ones. Two main reasons for the overestimation of the quality of the ANN models relative to the MR models in previous studies are: (1) incorrect calculation of the leave-one-out (LOO) cross-validated (CV) correlation coefficient for MR models in Luco et al., J. Chem. Inf. Comput. Sci. 37, 392-401 (1997), and (2) incorrect estimation/interpretation of the LOO cross-validated and predictive performance of the ANN models. A more precise and fairer comparison of fit and LOO CV statistical parameters shows that the MR models are more stable. In addition, MR models are much simpler than ANN ones. A real test of the predictive performance of both classes of models requires more HEPT derivatives, because all ANN models that presented results for an external set of molecules used experimental values in the optimization of the modelling procedure and model parameters
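Since point (1) turns on how the LOO CV statistic is computed for regression models, a minimal sketch may help. For ordinary least squares, exact LOO residuals follow from the hat-matrix identity e_loo,i = e_i / (1 - h_ii), so the cross-validated q2 can never exceed the fit r2. The descriptor matrix and activities below are synthetic stand-ins, not the actual HEPT data.

```python
import numpy as np

# Correct leave-one-out (LOO) cross-validation for a multivariate
# regression QSAR model via the exact hat-matrix shortcut.
# Synthetic data: 107 "molecules", 5 "descriptors" (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(107, 5))
y = X @ np.array([1.0, -0.5, 0.3, 0.1, 0.2]) + rng.normal(scale=0.5, size=107)

A = np.column_stack([X, np.ones(len(X))])   # design matrix with intercept
H = A @ np.linalg.solve(A.T @ A, A.T)       # hat (projection) matrix
e = y - H @ y                               # ordinary fit residuals
e_loo = e / (1 - np.diag(H))                # exact LOO residuals

ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - (e ** 2).sum() / ss_tot            # fit statistic
q2 = 1 - (e_loo ** 2).sum() / ss_tot        # cross-validated statistic
print(r2, q2)                               # q2 is necessarily below r2
```

Reporting the fit r2 in place of q2 (or mixing the two) is exactly the kind of error that makes a competing model class look artificially superior.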
Borza, Liana Rada; Gavrilovici, Cristina; Stockman, René
2015-01-01
The present paper revisits the ethical models of the patient-physician relationship from the perspective of patient autonomy and values. It seems that the four traditional models of the physician-patient relationship proposed by Emanuel & Emanuel in 1992 closely link patient values and patient autonomy. On the other hand, their reinterpretation provided by Agarwal & Murinson twenty years later emphasizes the independent expression of values and autonomy in individual patients. Additionally, patient education has been assumed to join patient values and patient autonomy. Moreover, several authors have noted that, over the past few decades, patient autonomy has gradually replaced the paternalistic approach based on the premise that the physician knows what is best for the patient. Neither the paternalistic model of physician-patient relationship, nor the informative model is considered to be satisfactory, as the paternalistic model excludes patient values from decision making, while the informative model excludes physician values from decision making. However, the deliberative model of patient-physician interaction represents an adequate alternative to the two unsatisfactory approaches by promoting shared decision making between the physician and the patient. It has also been suggested that the deliberative model would be ideal for exercising patient autonomy in chronic care and that the ethical role of patient education would be to make the deliberative model applicable to chronic care. In this regard, studies have indicated that the use of decision support interventions might increase the deliberative capacity of chronic patients.
DEFF Research Database (Denmark)
Holt, Robin; Cornelissen, Joep
2014-01-01
We critique and extend theory on organizational sensemaking around three themes. First, we investigate sense arising non-productively and so beyond any instrumental relationship with things; second, we consider how sense is experienced through mood as well as our cognitive skills of manipulation ...... research by revisiting Weick’s seminal reading of Norman Maclean’s book surrounding the tragic events of a 1949 forest fire at Mann Gulch, USA....
Elliott, E. A.; Rodriguez, A. B.; McKee, B. A.
2017-12-01
Traditional models of estuarine systems show deposition occurs primarily within the central basin. There, accommodation space is high within the deep central valley, which is below regional wave base and where current energy is presumed to reach a relative minimum, promoting direct deposition of cohesive sediment and minimizing erosion. However, these models often reflect long-term (decadal-millennial) timescales, where accumulation rates are in relative equilibrium with the rate of relative sea-level rise, and lack the resolution to capture shorter term changes in sediment deposition and erosion within the central estuary. This work presents a conceptual model for estuarine sedimentation during non-equilibrium conditions, where high-energy inputs to the system reach a relative maximum in the central basin, resulting in temporary deposition and/or remobilization over sub-annual to annual timescales. As an example, we present a case study of Core Sound, NC, a lagoonal estuarine system where the regional base-level has been reached, and sediment deposition, resuspension and bypassing is largely a result of non-equilibrium, high-energy events. Utilizing a 465 cm-long sediment core from a mini-basin located between Core Sound and the continental shelf, a 40-year sub-annual chronology was developed for the system, with sediment accumulation rates (SAR) interpolated to a monthly basis over the 40-year record. This study links erosional processes in the estuary directly with sediment flux to the continental shelf, taking advantage of the highly efficient sediment trapping capability of the mini-basin. The SAR record indicates high variation in the estuarine sediment supply, with peaks in the SAR record at a recurrence interval of 1 year (+/- 0.25). This record has been compared to historical storm influence for the area. Through this multi-decadal record, sediment flushing events occur at a much more frequent interval than previously thought (i.e. annual rather than
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates, both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near-universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling of metabolic rate across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
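For orientation, the OGM's governing equation in its standard form (as given by West et al. in the general literature, not this paper's corrected version) allocates metabolic power between new biomass and maintenance:

```latex
\frac{dm}{dt} = a\,m^{3/4} - b\,m,
\qquad
M_{\infty} = \left(\frac{a}{b}\right)^{4},
\qquad
\left(\frac{m(t)}{M_{\infty}}\right)^{1/4}
  = 1 - \left[\,1 - \left(\frac{m_0}{M_{\infty}}\right)^{1/4}\right]
        e^{-a t/\left(4 M_{\infty}^{1/4}\right)}.
```

The closed-form solution is sigmoidal in m(t), and the time constant 4 M_∞^(1/4)/a in the exponential is the origin of the M^(1/4) scaling of ontogenetic stage durations.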
Bottomonium spectrum revisited
Segovia, Jorge; Entem, David R.; Fernández, Francisco
2016-01-01
We revisit the bottomonium spectrum motivated by the recent exciting experimental progress in the observation of new bottomonium states, both conventional and unconventional. Our framework is a nonrelativistic constituent quark model which has been applied to a wide range of hadronic observables from the light to the heavy quark sector, and thus the model parameters are completely constrained. Beyond the spectrum, we provide a large number of electromagnetic, strong and hadronic decays in order to discuss the quark content of the bottomonium states and give more insight into the best way to determine their properties experimentally.
Zhang, Baocheng; Yuan, Yunbin
2017-04-01
A synthesis of two prevailing Global Navigation Satellite System (GNSS) positioning technologies, namely the precise point positioning (PPP) and the network-based real-time kinematic (NRTK), results in the emergence of the PPP-RTK. This new concept preferably integrates the typical advantage of PPP (e.g. flexibility) and that of NRTK (e.g. efficiency), such that it enables single-receiver users to achieve high positioning accuracy with reasonable timeliness through integer ambiguity resolution (IAR). The realization of PPP-RTK needs to accomplish two sequential tasks. The first task is to determine a class of corrections including, necessarily, the satellite orbits, the satellite clocks and the satellite phase (and code, in case of more than two frequencies) biases at the network level. With these corrections, the second task, then, is capable of solving for the ambiguity-fixed, absolute position(s) at the user level. In this contribution, we revisit three variants (geometry-free, geometry-fixed, and geometry- and satellite-clock-fixed) of undifferenced, uncombined PPP-RTK network model and discuss their implications for practical use. We carry out a case study using multi-day, dual-frequency GPS data from the Crustal Movement Observation Network of China (CMONOC), aiming to assess the (static and kinematic) positioning performance (in terms of time-to-first-fix and accuracy) that is achievable by PPP-RTK users across China.
eWOM, Revisit Intention, Destination Trust and Gender
Abubakar, Abubakar Mohammed; Ilkan, Mustafa; Al-Tal, Raad Meshall; Eluwole, Kayode
2017-01-01
This article investigates the impact of eWOM on intention to revisit and destination trust, and the moderating role of gender, in the medical tourism industry. Results from structural equation modeling (n = 240) suggest the following: (1) that eWOM influences intention to revisit and destination trust; (2) that destination trust influences intention to revisit; (3) that the impact of eWOM on intention to revisit is about 1.3 times higher in men; (4) that the impact of eWOM on destination trust is ab...
Ups and downs of Viagra: revisiting ototoxicity in the mouse model.
Au, Adrian; Stuyt, John Gerka; Chen, Daniel; Alagramam, Kumar
2013-01-01
Sildenafil citrate (Viagra), a phosphodiesterase 5 inhibitor (PDE5i), is a commonly prescribed drug for erectile dysfunction. Since the introduction of Viagra in 1997, several case reports have linked Viagra to sudden sensorineural hearing loss. However, these studies are not well controlled for confounding factors, such as age and noise-induced hearing loss, and none of these reports are based on prospective double-blind studies. Further, animal studies report contradictory data. For example, one study (2008) reported hearing loss in rats after long-term and high-dose exposure to sildenafil citrate. The other study (2012) showed vardenafil, another formulation of PDE5i, to be protective against noise-induced hearing loss in mice and rats. Whether or not clinically relevant doses of sildenafil citrate cause hearing loss in normal subjects (animals or humans) is controversial. One possibility is that PDE5i exacerbates age-related susceptibility to hearing loss in adults. Therefore, we tested sildenafil citrate in C57BL/6J, a strain of mice that displays increased susceptibility to age-related hearing loss, and compared the results to those obtained from the FVB/N, a strain of mice with no predisposition to hearing loss. Six-week-old mice were injected with the maximum tolerated dose of sildenafil citrate (10 mg/kg/day) or saline for 30 days. Auditory brainstem responses (ABRs) were recorded at pre- and post-injection time points to assess hearing loss. Entry of sildenafil citrate into the mouse cochlea was confirmed by qRT-PCR analysis of a downstream target of the cGMP-PKG cascade. ABR data indicated no statistically significant difference in hearing between treated and untreated mice in both backgrounds. Results show that the maximum tolerated dose of sildenafil citrate administered daily for 4 weeks does not affect hearing in the mouse. Our study gives no indication that Viagra will negatively impact hearing and it emphasizes the need to revisit the issue of Viagra
Directory of Open Access Journals (Sweden)
Brome McCreary
2009-12-01
Amphibian declines have been reported in mountainous areas around the western USA. Few data quantify the extent of population losses in the Pacific Northwest, a region in which amphibian declines have received much attention. From 2001-2004, we resurveyed historical breeding sites of two species of conservation concern, the Western Toad (Bufo [=Anaxyrus] boreas) and the Cascades Frog (Rana cascadae). We detected B. boreas breeding at 75.9% and R. cascadae breeding at 66.6% of historical sites. When we analyzed the data using occupancy models that accounted for detection probability, we estimated the current use of historically occupied sites in our study area was 84.9% (SE = 4.9) for B. boreas and 72.4% (SE = 6.6) for R. cascadae. Our ability to detect B. boreas at sites where they were present was lower in the first year of surveys (a low snowpack year) and higher at sites with introduced fish. Our ability to detect R. cascadae was lower at sites with fish. The probability that B. boreas still uses a historical site for breeding was positively related to the easting of the site and negatively related to the age of the record. None of the variables we analyzed was strongly related to R. cascadae occupancy. Both species had increased odds of occupancy at higher latitude, but model support for this variable was modest. Our analysis suggests that while local losses are possible, these two amphibians have not experienced recent, broad population losses in the Oregon Cascades. Historical site revisitation studies such as ours cannot distinguish between population losses and site switching, and do not account for colonization of new habitats, so our analysis may overestimate declines in occupancy within our study area.
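The step from naive detection rates (75.9%, 66.6%) to model-based occupancy estimates (84.9%, 72.4%) can be illustrated with a minimal single-season occupancy likelihood. This is a generic sketch on simulated data with made-up values (psi = 0.85, p = 0.5, 4 visits per site), not the authors' model or data.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate repeat-visit detection data at n_sites, then fit occupancy (psi)
# and per-visit detection probability (p) jointly by maximum likelihood.
rng = np.random.default_rng(1)
n_sites, n_visits = 200, 4
psi_true, p_true = 0.85, 0.5                     # illustrative values

occupied = rng.random(n_sites) < psi_true
detections = (rng.random((n_sites, n_visits)) < p_true) & occupied[:, None]
d = detections.sum(axis=1)                       # detections per site

def nll(params):
    psi, p = 1 / (1 + np.exp(-params))           # logit -> probability
    # Sites with detections are surely occupied; sites with none are
    # either occupied-but-missed or truly unoccupied.
    like = np.where(
        d > 0,
        psi * p**d * (1 - p)**(n_visits - d),
        psi * (1 - p)**n_visits + (1 - psi),
    )
    return -np.log(like).sum()

fit = minimize(nll, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
naive = (d > 0).mean()                           # ignores imperfect detection
print(psi_hat, p_hat, naive)
```

Because some occupied sites yield no detections, the naive proportion of sites with detections understates occupancy; the model-based estimate corrects upward, as in the abstract's numbers.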
Behaviour of turbulence models near a turbulent/non-turbulent interface revisited
International Nuclear Information System (INIS)
Ferrey, P.; Aupoix, B.
2006-01-01
The behaviour of turbulence models near a turbulent/non-turbulent interface is investigated. The analysis holds for two-equation models as well as for Reynolds stress turbulence models using the Daly and Harlow diffusion model. The behaviour near the interface is shown not to be a power law, as usually assumed, but a more complex parametric solution. We explain why previous works seemed to confirm the power-law solution numerically. Constraints for turbulence modelling are derived, i.e., conditions ensuring that models behave well near a turbulent/non-turbulent interface so that the solution is not sensitive to small turbulence levels imposed in the irrotational flow.
Czech Academy of Sciences Publication Activity Database
Vlček, Lukáš; Nezbeda, Ivo
2004-01-01
Roč. 102, č. 5 (2004), s. 485-497 ISSN 0026-8976 R&D Projects: GA ČR GA203/02/0764; GA AV ČR IAA4072303; GA AV ČR IAA4072309 Institutional research plan: CEZ:AV0Z4072921 Keywords : primitive model * association fluids * ethanol Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.406, year: 2004
Revisiting Temporal Markov Chains for Continuum modeling of Transport in Porous Media
Delgoshaie, A. H.; Jenny, P.; Tchelepi, H.
2017-12-01
The transport of fluids in porous media is dominated by flow-field heterogeneity resulting from the underlying permeability field. Due to the high uncertainty in the permeability field, many realizations of the reference geological model are used to describe the statistics of the transport phenomena in a Monte Carlo (MC) framework. There has been strong interest in working with stochastic formulations of the transport that are different from the standard MC approach. Several stochastic models based on a velocity process for tracer particle trajectories have been proposed. Previous studies have shown that for high variances of the log-conductivity, the stochastic models need to account for correlations between consecutive velocity transitions to predict dispersion accurately. The correlated velocity models proposed in the literature can be divided into two general classes of temporal and spatial Markov models. Temporal Markov models have been applied successfully to tracer transport in both the longitudinal and transverse directions. These temporal models are Stochastic Differential Equations (SDEs) with very specific drift and diffusion terms tailored for a specific permeability correlation structure. The drift and diffusion functions devised for a certain setup would not necessarily be suitable for a different scenario, (e.g., a different permeability correlation structure). The spatial Markov models are simple discrete Markov chains that do not require case specific assumptions. However, transverse spreading of contaminant plumes has not been successfully modeled with the available correlated spatial models. Here, we propose a temporal discrete Markov chain to model both the longitudinal and transverse dispersion in a two-dimensional domain. We demonstrate that these temporal Markov models are valid for different correlation structures without modification. Similar to the temporal SDEs, the proposed model respects the limited asymptotic transverse spreading of
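As a generic illustration of a temporal (discrete-time) Markov chain over velocity classes, the following sketch advects particles whose successive velocities are correlated through a diagonally dominant transition matrix. It is not the authors' SDE or their calibrated transition kernel; the velocity classes and matrix entries are invented for illustration.

```python
import numpy as np

# Temporal Markov chain over discrete longitudinal velocity classes.
# Diagonally dominant rows make consecutive velocity transitions correlated,
# the key ingredient for reproducing dispersion at high log-conductivity
# variance. All numbers below are illustrative assumptions.
rng = np.random.default_rng(0)
v_classes = np.array([0.1, 0.5, 1.0, 2.0])     # velocity class values
P = np.array([[0.70, 0.20, 0.08, 0.02],        # row-stochastic transitions
              [0.15, 0.60, 0.20, 0.05],
              [0.05, 0.20, 0.60, 0.15],
              [0.02, 0.08, 0.20, 0.70]])
dt, n_steps, n_particles = 1.0, 500, 2000

state = rng.integers(0, 4, size=n_particles)   # initial velocity classes
x = np.zeros(n_particles)                      # longitudinal positions
for _ in range(n_steps):
    x += v_classes[state] * dt
    # Inverse-CDF sampling of the next class from each particle's row of P.
    u = rng.random(n_particles)
    state = (u[:, None] > P[state].cumsum(axis=1)).sum(axis=1)

print(x.mean(), x.std())   # plume centroid and longitudinal spread
```

Replacing P with a kernel estimated from a flow simulation, and adding a transverse analogue, is the kind of extension the abstract describes; the chain itself needs no case-specific drift or diffusion functions.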
Distinct and yet not Separate: Revisiting the Welfare Models in the EU New Member States
Directory of Open Access Journals (Sweden)
Helena Tendera-Właszczuk
2017-03-01
Objective: The objective of this paper is to evaluate the welfare state models in the EU countries and to open the discussion of whether the new member states (NMS), i.e. those EU member states that joined the EU in 2004/2007, fit the Sapir typology (Nordic model, Continental model, Anglo-Saxon model, Mediterranean model). The second objective is to examine the labour market situation and the reduction of poverty and social inequalities in the EU countries. The third one is to open the issue of whether public spending can be managed both justly and effectively. Research Design & Methods: The linear regression function and correlation have been used to present the effectiveness of social expenditures in reducing poverty, as well as evidence that public spending can be managed both justly and effectively. Findings: This paper demonstrates that more similarities can be drawn across the NMS and the EU-15 than within the NMS and EU-15, respectively. The typology of welfare state models is applied to the NMS and their effectiveness is tested. Accordingly, we classify the Czech Republic, Slovenia and Cyprus as countries of the Nordic model; Hungary, Slovakia and Malta as the Continental model; Lithuania, Latvia and Estonia as the Anglo-Saxon model and, finally, Poland, Croatia, Romania and Bulgaria as the Mediterranean model. Implications & Recommendations: Recent data suggest that the global crisis has caused an increase in the level of poverty and social spending in the EU countries. However, this is just a temporary situation and it does reflect the solutions of the models. Contribution & Value Added: The NMS tend to be examined as a separate group of countries that – as the literature suggests – depict different qualities of the welfare models than those pursued in the EU-15.
International Nuclear Information System (INIS)
Schmidt, Alexandre G M; Paiva, Milena M
2012-01-01
We revisit the quantum two-person duel. In this problem, Alice and Bob each possess a spin-1/2 particle whose states model the dead and alive states of each player. We review the Abbott and Flitney result, now considering non-zero α_1 and α_2 in order to decide whether it is better for Alice to shoot or not the second time, and we also consider a duel where the players do not necessarily start alive. This simple assumption allows us to explore several interesting special cases, namely how a dead player can win the duel by shooting just once, how Bob can revive Alice after one shot, and the better strategy for Alice, being either alive or in a superposition of alive and dead states, when fighting a dead opponent.
Logistics Innovation Process Revisited
DEFF Research Database (Denmark)
Gammelgaard, Britta; Su, Shong-Iee Ivan; Yang, Su-Lan
2011-01-01
Purpose – The purpose of this paper is to learn more about logistics innovation processes and their implications for the focal organization as well as the supply chain, especially suppliers. Design/methodology/approach – The empirical basis of the study is a longitudinal action research project...... that was triggered by the practical needs of new ways of handling material flows of a hospital. This approach made it possible to revisit theory on logistics innovation process. Findings – Apart from the tangible benefits reported to the case hospital, five findings can be extracted from this study: the logistics...... innovation process model may include not just customers but also suppliers; logistics innovation in buyer-supplier relations may serve as an alternative to outsourcing; logistics innovation processes are dynamic and may improve supplier partnerships; logistics innovations in the supply chain are as dependent...
International Nuclear Information System (INIS)
Myong, R. S.; Nagdewe, S. P.
2011-01-01
Grad's closure for the high-order moment equation is revisited and, by extending his theory, a physically motivated closure is developed for one-dimensional velocity-shear gas flow. The closure is based on a physical argument about the relative importance of the various terms appearing in the moment equation. It is also derived such that the resulting theory includes the well-established linear theory (Navier-Stokes-Fourier) as a limiting case near local thermal equilibrium.
Optimal management of ecosystem services with pollution traps : The lake model revisited
de Zeeuw, Aart; Grass, Dieter; Xepapadeas, Anastasios
2017-01-01
In this paper, optimal management of the lake model and common-property outcomes are reconsidered when the lake model is extended with the slowly changing variable. New optimal trajectories are found that were hidden in the simplified analysis. Furthermore, it is shown that two Nash equilibria may
The Two-Capacitor Problem Revisited: A Mechanical Harmonic Oscillator Model Approach
Lee, Keeyung
2009-01-01
The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that "exactly half" the work done by a constant applied…
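The electrical half of the analogy is easy to verify directly: with charge conserved, connecting a charged capacitor to an identical uncharged one always halves the stored field energy, whatever mechanism dissipates it. A minimal check with illustrative component values:

```python
# Energy bookkeeping for the classic two-capacitor problem: charge is
# conserved, yet exactly half the stored energy disappears, independent
# of C and Q (the values below are arbitrary illustrations).
C, Q = 1e-6, 3e-6              # 1 uF capacitor charged to 3 uC
E_initial = Q**2 / (2 * C)     # energy before connection

# Two identical capacitors in parallel share the charge equally.
q_each = Q / 2
E_final = 2 * (q_each**2 / (2 * C))

print(E_final / E_initial)     # ratio is 1/2 regardless of C and Q
```

The mechanical oscillator model in the paper supplies the missing physical account of where that half goes (radiation/damping of the oscillation), which the static before/after bookkeeping above cannot show.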
Revisiting the Model of Creative Destruction: St. Jacobs, Ontario, a Decade Later
Mitchell, Clare J. A.; de Waal, Sarah B.
2009-01-01
Ten years ago, the model of creative destruction was developed to predict the fate of communities that base their development on the commodification of rural heritage (Mitchell, C.J.A., 1998. Entrepreneurialism, commodification and creative destruction: a model of post-modern community development. Journal of Rural Studies 14, 273-286). Its…
Remembered Experiences and Revisit Intentions
DEFF Research Database (Denmark)
Barnes, Stuart; Mattsson, Jan; Sørensen, Flemming
2016-01-01
Tourism is an experience-intensive sector in which customers seek and pay for experiences above everything else. Remembering past tourism experiences is also crucial for an understanding of the present, including the predicted behaviours of visitors to tourist destinations. We adopt a longitudinal...... approach to memory data collection from psychological science, which has the potential to contribute to our understanding of tourist behaviour. In this study, we examine the impact of remembered tourist experiences in a safari park. In particular, using matched survey data collected longitudinally and PLS...... path modelling, we examine the impact of positive affect tourist experiences on the development of revisit intentions. We find that longer-term remembered experiences have the strongest impact on revisit intentions, more so than predicted or immediate memory after an event. We also find that remembered...
BrainSignals Revisited: Simplifying a Computational Model of Cerebral Physiology.
Directory of Open Access Journals (Sweden)
Matthew Caldwell
Multimodal monitoring of brain state is important both for the investigation of healthy cerebral physiology and to inform clinical decision making in conditions of injury and disease. Near-infrared spectroscopy is an instrument modality that allows non-invasive measurement of several physiological variables of clinical interest, notably haemoglobin oxygenation and the redox state of the metabolic enzyme cytochrome c oxidase. Interpreting such measurements requires the integration of multiple signals from different sources to try to understand the physiological states giving rise to them. We have previously published several computational models to assist with such interpretation. Like many models in the realm of Systems Biology, these are complex and dependent on many parameters that can be difficult or impossible to measure precisely. Taking one such model, BrainSignals, as a starting point, we have developed several variant models in which specific regions of complexity are substituted with much simpler linear approximations. We demonstrate that model behaviour can be maintained whilst achieving a significant reduction in complexity, provided that the linearity assumptions hold. The simplified models have been tested for applicability with simulated data and experimental data from healthy adults undergoing a hypercapnia challenge, but relevance to different physiological and pathophysiological conditions will require specific testing. In conditions where the simplified models are applicable, their greater efficiency has potential to allow their use at the bedside to help interpret clinical data in near real-time.
Revisiting the T2K data using different models for the neutrino-nucleus cross sections
Energy Technology Data Exchange (ETDEWEB)
Meloni, D., E-mail: meloni@fis.uniroma3.it [Dipartimento di Fisica 'E. Amaldi', Universita degli Studi Roma Tre, Via della Vasca Navale 84, 00146 Roma (Italy); Martini, M., E-mail: mmartini@ulb.ac.be [Institut d'Astronomie et d'Astrophysique, CP-226, Universite Libre de Bruxelles, 1050 Brussels (Belgium)
2012-09-17
We present a three-flavor fit to the recent ν_μ → ν_e and ν_μ → ν_μ T2K oscillation data with different models for the neutrino-nucleus cross section. We show that, even with limited statistics, the allowed regions and best-fit points in the (θ₁₃, δ_CP) and (θ₂₃, Δm²_atm) planes are affected if, instead of using the Fermi gas model to describe the quasielastic cross section, we employ a model including the multinucleon emission channel.
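For context, the dependence of such fits on the oscillation parameters can be seen in the standard two-flavor approximation of the appearance probability (a textbook formula, not the analysis code used in the paper; the numerical values below are illustrative only):

```python
# Two-flavor approximation of the nu_mu -> nu_e appearance probability:
#   P = sin^2(2*theta) * sin^2(1.267 * dm2 * L / E)
# with dm2 in eV^2, L in km, E in GeV (1.267 collects hbar, c and unit factors).
import math

def p_appear(theta, dm2, L, E):
    return math.sin(2 * theta) ** 2 * math.sin(1.267 * dm2 * L / E) ** 2

# Illustrative values near the T2K configuration (L = 295 km, E ~ 0.6 GeV)
p = p_appear(theta=0.16, dm2=2.4e-3, L=295.0, E=0.6)
print(p)
```

Cross-section model choices shift the reconstructed energy E event by event, which is why they propagate into the extracted (θ, Δm²) regions.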
Fellnhofer, Katharina
2017-01-01
Relying on Bandura’s (1986) social learning theory, Ajzen’s (1988) theory of planned behaviour (TPB), and Dyer’s (1994) model of entrepreneurial careers, this study aims to highlight the potential of entrepreneurial role models to entrepreneurship education. The results suggest that entrepreneurial courses would greatly benefit from real-life experiences, either positive or negative. The results of regression analysis based on 426 individuals, primarily from Austria, Finland, and Greece, show that role models increase learners’ entrepreneurial perceived behaviour control (PBC) by increasing their self-efficacy. This study can inform the research and business communities and governments about the importance of integrating entrepreneurs into education to stimulate entrepreneurial PBC. This study is the first of its kind using its approach, and its results warrant more in-depth studies of storytelling by entrepreneurial role models in the context of multimedia entrepreneurship education. PMID:29104604
The consensus in the two-feature two-state one-dimensional Axelrod model revisited
Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.
2015-04-01
The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
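A minimal Monte Carlo sketch of the one-dimensional model with F = q = 2 (an illustration of the standard Axelrod update rule, not the authors' code; chain length and step count are arbitrary) counts the cultural domains left after a fixed number of interaction attempts:

```python
# One-dimensional Axelrod model with F = 2 features and q = 2 states per feature.
# At each step a random agent interacts with a neighbour with probability equal
# to their cultural overlap, copying one feature on which they differ.
import random

def axelrod_1d(n, steps, seed=0):
    rng = random.Random(seed)
    culture = [[rng.randrange(2) for _ in range(2)] for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n - 1)        # pick a bond (i, i+1)
        j = i + 1
        shared = sum(a == b for a, b in zip(culture[i], culture[j]))
        if 0 < shared < 2 and rng.random() < shared / 2:
            k = rng.choice([f for f in range(2) if culture[i][f] != culture[j][f]])
            culture[i][k] = culture[j][k]
    return culture

chain = axelrod_1d(n=50, steps=20000)
domains = 1 + sum(chain[i] != chain[i + 1] for i in range(len(chain) - 1))
print(domains)  # number of cultural domains in the final configuration
```

Running this for increasing `n` at fixed `steps`, versus increasing `steps` at fixed `n`, is precisely the order-of-limits issue the abstract describes.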
Undergraduate Groupwork Revisited: the Use of the Scrum Model to Create Agile Learning Environments
DEFF Research Database (Denmark)
Jurado-Navas, Antonio; Munoz-Luna, Rosa
2016-01-01
The present paper aims to analyse the impact of an innovative teaching model on the learning outcomes of a group of undergraduate students at the University of Malaga (Spain). Based on agile scrum models adopted in the engineering industry, the authors have transposed the scrum methodology to pedagogical contexts at university level. This paper describes the impact of the innovative Scrum model in relation to groupwork management in undergraduate education. The pre-existing communication problems when working in groups yield slow cooperation among group members and, therefore, poorer learning outcomes. Such communication deficiency can be alleviated with the introduction of short and frequent meetings in each group of 4-5 members so that learning objectives are short-termed and attainable. The scrum model offers the procedural framework in which to insert those frequent meetings and where all…
Revisiting low-fidelity two-fluid models for gas–solids transport
Energy Technology Data Exchange (ETDEWEB)
Adeleke, Najeem, E-mail: najm@psu.edu; Adewumi, Michael, E-mail: m2a@psu.edu; Ityokumbul, Thaddeus
2016-08-15
Two-phase gas–solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors and food processing units. Systems automation and real time process optimization stand to benefit a great deal from availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas–solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for modulus of elasticity. A new set of Roe–Pike averages are presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux-limiter. The model is evaluated by the prediction of experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately even under flow regimes involving counter-current flow.
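The Roe-type construction referenced here can be illustrated in its simplest scalar form (a generic numerical sketch, not the authors' two-fluid solver): the numerical flux at a cell interface uses a Roe-averaged wave speed to upwind the jump between the left and right states, in the same spirit as the Roe-Pike averages for the full hyperbolic system.

```python
# Roe-type numerical flux for a scalar conservation law u_t + f(u)_x = 0,
# illustrated with Burgers' equation f(u) = u^2 / 2.
def f(u):
    return 0.5 * u * u

def roe_flux(uL, uR):
    if uL == uR:
        a = uL                                 # f'(u) = u for Burgers
    else:
        a = (f(uR) - f(uL)) / (uR - uL)        # Roe-averaged wave speed
    return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(a) * (uR - uL)

# One first-order update step on a small grid holding a right-moving shock
u = [1.0, 1.0, 0.0, 0.0]
dt_dx = 0.5                                    # CFL-safe time step ratio
fluxes = [roe_flux(u[i], u[i + 1]) for i in range(len(u) - 1)]
u_new = [u[i] - dt_dx * (fluxes[i] - fluxes[i - 1]) for i in range(1, len(u) - 1)]
print(u_new)  # → [1.0, 0.25]: the shock begins to propagate rightward
```

In the paper's nine-equation system the scalar average `a` is replaced by a Roe-Pike averaged Jacobian, but the interface-flux structure is the same.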
Growth and energy nexus in Europe revisited: Evidence from a fixed effects political economy model
International Nuclear Information System (INIS)
Menegaki, Angeliki N.; Ozturk, Ilhan
2013-01-01
This is an empirical study on the causal relationship between economic growth and energy for 26 European countries in a multivariate panel framework over the period 1975–2009 using a two-way fixed effects model and including greenhouse gas emissions, capital, fossil energy consumption, Herfindahl index (political competition) and number of years the government chief executive stays in office (political stability) as independent variables in the model. Empirical results confirm bidirectional causality between growth and political stability, capital and political stability, capital and fossil energy consumption. Whether political stability favors the implementation of growth or leads to corruption demands further research. - Highlights: • Economic growth and energy for 26 European countries is examined. • Two-way fixed effects model with political economy variables is employed. • Bidirectional causality is observed between growth and political stability
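The two-way fixed effects estimator used in such panel studies can be sketched generically (an illustration with synthetic data, not the authors' dataset): entity and time means are subtracted, with the grand mean added back, and OLS on the demeaned variables recovers the slope free of both fixed effects.

```python
# Two-way fixed effects via the within-transformation on a balanced panel:
#   y_it = beta * x_it + mu_i + lambda_t + e_it
# Demeaning by entity and by time (adding back the grand mean) removes
# both mu_i and lambda_t exactly.
def demean_two_way(z, n_entities, n_periods):
    grand = sum(sum(row) for row in z) / (n_entities * n_periods)
    ent = [sum(row) / n_periods for row in z]
    per = [sum(z[i][t] for i in range(n_entities)) / n_entities
           for t in range(n_periods)]
    return [[z[i][t] - ent[i] - per[t] + grand for t in range(n_periods)]
            for i in range(n_entities)]

# Synthetic balanced panel with known beta = 2 and noiseless errors
N, T = 3, 4
mu = [10.0, -5.0, 3.0]                       # entity fixed effects
lam = [1.0, 2.0, 3.0, 4.0]                   # time fixed effects
x = [[(i + 1.0) * (t + 1.0) for t in range(T)] for i in range(N)]
y = [[2.0 * x[i][t] + mu[i] + lam[t] for t in range(T)] for i in range(N)]

xd = demean_two_way(x, N, T)
yd = demean_two_way(y, N, T)
beta = (sum(xd[i][t] * yd[i][t] for i in range(N) for t in range(T))
        / sum(xd[i][t] ** 2 for i in range(N) for t in range(T)))
print(beta)  # recovers the true coefficient, 2.0
```

In practice the causality claims in the abstract additionally require lagged terms and panel Granger-causality tests on top of this within-estimator.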
Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (ln K) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured…
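The coupling of local attachment kinetics to filtration theory can be sketched with the classical clean-bed expression (a textbook relation, hedged: symbol conventions and the choice of velocity differ between papers, and the parameter values below are illustrative only):

```python
# Clean-bed colloid filtration: first-order attachment rate coefficient
#   k_att = 3 * (1 - porosity) / (2 * d_c) * alpha * eta0 * v
# where d_c is the collector (grain) diameter, eta0 the single-collector
# contact efficiency (e.g. from the Tufenkji-Elimelech or Rajagopalan-Tien
# correlations), alpha the attachment (sticking) efficiency and v the
# pore-water velocity.
def k_att(porosity, d_c, alpha, eta0, v):
    return 1.5 * (1.0 - porosity) * alpha * eta0 * v / d_c

# Illustrative sandy-aquifer-like inputs (SI units: m, m/s)
k = k_att(porosity=0.39, d_c=5.0e-4, alpha=0.01, eta0=0.05, v=1.0e-5)
print(k)  # attachment rate coefficient in 1/s for these inputs
```

The stochastic approach in the abstract makes `eta0` and `v` functions of the local hydraulic conductivity, so `k_att` varies realization by realization across the simulated aquifer.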
Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor
2013-04-01
A sensitivity study of the numerical model, as well as, an inverse modelling approach applied to the atmospheric dispersion issues after the Chernobyl disaster are both presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field which otherwise is known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground of iodine-131, caesium-137 and caesium-134 were considered. The impact of the implemented parameterizations of the physical processes (dry and wet depositions, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) were investigated for the sensitivity study of the model. A four dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations (sensitivity study), the statistics to compare the model results to the field measurements as regards the concentrations in air are clearly improved while using a reconstructed source term. As regards the ground deposited concentrations, an improvement can only be seen in case of satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are proved to have a major impact on the activity concentrations in air. These studies also reinforce the use of reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process seems also to be
Minihalo model for the low-redshift Lyα absorbers revisited
Directory of Open Access Journals (Sweden)
Lalović A.
2008-01-01
We reconsider the basic properties of the classical minihalo model of Rees and Milgrom in light of new work, both observational (on 'dark galaxies' and masses of baryonic haloes) and theoretical (on the cosmological mass function and the history of star formation). In particular, we show that more detailed models of ionized gas in haloes of dark matter following isothermal and Navarro-Frenk-White density profiles can effectively reproduce particular aspects of the observed column density distribution function in a heterogeneous sample of low- and intermediate-redshift Lyα forest absorption lines.
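The Navarro-Frenk-White profile mentioned here has a simple closed form; the sketch below evaluates the density and the enclosed mass (standard formulas, evaluated in dimensionless units for illustration):

```python
# Navarro-Frenk-White density profile and enclosed mass:
#   rho(r) = rho0 / ((r/rs) * (1 + r/rs)^2)
#   M(<r)  = 4*pi*rho0*rs^3 * (ln(1 + r/rs) - (r/rs)/(1 + r/rs))
import math

def nfw_density(r, rho0, rs):
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

def nfw_mass(r, rho0, rs):
    x = r / rs
    return 4.0 * math.pi * rho0 * rs ** 3 * (math.log(1.0 + x) - x / (1.0 + x))

# Dimensionless illustration: rho0 = 1, rs = 1, evaluated at r = 2*rs
rho = nfw_density(2.0, rho0=1.0, rs=1.0)
m = nfw_mass(2.0, rho0=1.0, rs=1.0)
print(rho, m)
```

Hydrostatic ionized gas in such a halo inherits its column-density run from this mass profile, which is what ties the halo model to the observed Lyα column density distribution.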
Phase transitions in relativistic models: revisiting the Nolen-Schiffer anomaly
International Nuclear Information System (INIS)
Menezes, D.P.; Providencia, C.
2003-01-01
We use the non-linear Walecka model in a Thomas-Fermi approximation to investigate the effects of the ρ-ω mixing term in infinite nuclear matter and in finite nuclei. For finite nuclei the contribution of the isospin mixing term is very large as compared with the expected value to solve the Nolen-Schiffer anomaly. (author)
What Time Is Sunrise? Revisiting the Refraction Component of Sunrise/set Prediction Models
Wilson, Teresa; Bartlett, Jennifer L.; Hilton, James Lindsay
2017-01-01
Algorithms that predict sunrise and sunset times currently have an error of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, even including difficulties determining when the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compare these predictions with data sets of observed rise/set times to create a better model. Sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem. While there are a few data sets available, we will also begin collecting this data using smartphones as part of a citizen science project. The mobile application for this project will be available in the Google Play store. Data analysis will lead to more complete models that will provide more accurate rise/set times for the benefit of astronomers, navigators, and outdoorsmen everywhere.
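The size of the refraction term at the horizon can be illustrated with Bennett's widely used empirical formula (one common refraction model, used here as an assumption; the calculator described above is built precisely to swap such models in and out):

```python
# Bennett's empirical refraction formula: for apparent altitude h in degrees,
# the refraction in arcminutes is R = 1 / tan( h + 7.31 / (h + 4.4) ).
# At the horizon (h = 0) this gives roughly 34', which combined with the
# Sun's ~16' semidiameter yields the conventional -50' sunrise definition.
import math

def bennett_refraction_arcmin(h_deg):
    return 1.0 / math.tan(math.radians(h_deg + 7.31 / (h_deg + 4.4)))

r0 = bennett_refraction_arcmin(0.0)
print(r0)  # ~34 arcminutes of refraction at the horizon
```

A change of a few arcminutes in this term, driven by temperature profile or pressure, shifts the predicted rise time by the minutes-scale errors quoted in the abstract, and by far more at high latitudes where the Sun crosses the horizon at a shallow angle.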
School leadership effects revisited: a review of empirical studies guided by indirect-effect models
Hendriks, Maria A.; Scheerens, Jaap
2013-01-01
Fourteen leadership effect studies that used indirect-effect models were quantitatively analysed to explore the most promising mediating variables. The results indicate that total effect sizes based on indirect-effect studies appear to be low, quite comparable to the results of some meta-analyses of
Holland in Iceland Revisited: An Emic Approach to Evaluating U.S. Vocational Interest Models
Einarsdottir, Sif; Rounds, James; Su, Rong
2010-01-01
An emic approach was used to test the structural validity and applicability of Holland's (1997) RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, Conventional) model in Iceland. Archival data from the development of the Icelandic Interest Inventory (Einarsdottir & Rounds, 2007) were used in the present investigation. The data…
Revisiting Precede-Proceed: A Leading Model for Ecological and Ethical Health Promotion
Porter, Christine M.
2016-01-01
Background: The Precede-Proceed model has provided moral and practical guidance for the fields of health education and health promotion since Lawrence Green first developed Precede in 1974 and Green and Kreuter added Proceed in 1991. Precede-Proceed today remains the most comprehensive and one of the most used approaches to promoting health.…
DEFF Research Database (Denmark)
Engsted, Tom
1994-01-01
In earlier studies of the classical European hyperinflations it is assumed that shocks to money demand are non-stationary. Using cointegration tests, the article shows that this assumption is mistaken. Based on a cointegrated VAR model it is found that during the European hyperinflations…
Meille, Christophe; Barbolosi, Dominique; Ciccolini, Joseph; Freyer, Gilles; Iliadis, Athanassios
2016-08-01
Controlling effects of drugs administered in combination is particularly challenging with a densified regimen because of life-threatening hematological toxicities. We have developed a mathematical model to optimize drug dosing regimens and to redesign the dose intensification-dose escalation process, using densified cycles of combined anticancer drugs. A generic mathematical model was developed to describe the main components of the real process, including pharmacokinetics, safety and efficacy pharmacodynamics, and non-hematological toxicity risk. This model allowed for computing the distribution of the total drug amount of each drug in combination, for each escalation dose level, in order to minimize the average tumor mass for each cycle. This was achieved while complying with absolute neutrophil count clinical constraints and without exceeding a fixed risk of non-hematological dose-limiting toxicity. The innovative part of this work was the development of densifying and intensifying designs in a unified procedure. This model enabled us to determine the appropriate regimen in a pilot phase I/II study in metastatic breast cancer patients for a 2-week-cycle treatment of a docetaxel plus epirubicin doublet, and to propose a new dose-ranging process. In addition to the present application, this method can be further used to achieve optimization of any combination therapy, thus improving the efficacy versus toxicity balance of such a regimen.
Packaging the News: Propaganda Model Revisited and the Implications for Foreign Affairs Coverage.
Hsu, Mei-Ling
This research review explores the propaganda model proposed by E. S. Herman and N. Chomsky (1988) as an alternative way of looking at the American news media. The study begins with a review of the theoretical assumptions and the supporting empirical findings highlighting the propaganda framework, following which is a synthesis of research…
Revisiting the concept level of detail in 3D city modelling
Biljecki, F.; Zhao, J.; Stoter, J.E.; Ledoux, H.
2013-01-01
This review paper discusses the concept of level of detail in 3D city modelling, and is a first step towards a foundation for a standardised definition. As an introduction, a few level of detail specifications, outlooks and approaches are given from the industry. The paper analyses the general
Revisiting Kappa to account for change in the accuracy assessment of land-use models
Vliet, van J.; Bregt, A.K.; Hagen-Zanker, A.
2011-01-01
Land-use change models are typically calibrated to reproduce known historic changes. Calibration results can then be assessed by comparing two datasets: the simulated land-use map and the actual land-use map at the same time. A common method for this is the Kappa statistic, which expresses the
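Cohen's Kappa compares observed map agreement with the agreement expected by chance; a minimal sketch from a confusion matrix follows (a generic illustration of the standard statistic, with made-up class counts, not the authors' change-aware variant):

```python
# Cohen's Kappa from a square confusion matrix between two categorical maps:
#   kappa = (p_observed - p_expected) / (1 - p_expected)
def cohens_kappa(confusion):
    n = sum(sum(row) for row in confusion)
    p_obs = sum(confusion[i][i] for i in range(len(confusion))) / n
    p_exp = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (p_obs - p_exp) / (1.0 - p_exp)

# Simulated vs actual land-use classes (e.g. urban / agriculture / forest)
conf_matrix = [
    [30, 5, 5],
    [5, 25, 10],
    [0, 5, 15],
]
kappa = cohens_kappa(conf_matrix)
print(kappa)  # 1.0 = perfect agreement, 0.0 = chance-level agreement
```

The weakness the paper addresses is visible here: persistence (unchanged cells) dominates the diagonal, so a model that predicts no change at all can still score a high Kappa.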
Kleijnen, J.P.C.
2006-01-01
Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these
How inequality hurts growth: Revisiting the Galor-Zeira model through a Korean case
Jun, Bogang; Kaltenberg, Mary; Hwang, Won-sik
2017-01-01
This paper aims to show that the level of inequality increases via the human capital channel with credit market imperfections generating negative effects on economic growth. We expand the model presented by Galor and Zeira (1993) to represent the fact that the economy benefits from endogenous
Revisiting large break LOCA with the CATHARE-3 three-field model
International Nuclear Information System (INIS)
Valette, Michel; Pouvreau, Jerome; Bestion, Dominique; Emonot, Philippe
2009-01-01
Some aspects of large break LOCA analysis (steam binding, oscillatory reflooding, top-down reflooding) are expected to be improved in advanced system codes from a more detailed description of flows by adding a third field for droplets. The future system code CATHARE-3 is under development by CEA and supported by EDF, AREVA-NP and IRSN in the frame of the NEPTUNE project, and this paper shows some preliminary results obtained in reflooding conditions. A three-field model has been implemented, including vapor, continuous liquid and liquid droplet fields. This model features a set of nine equations of mass, momentum and energy balance. Such a model allows a more detailed description of the droplet transportation from core to steam generator, while countercurrent flow of continuous liquid is allowed. Code assessment against reflooding experiments in an isolated rod bundle mockup is presented, using 1D meshing of the bundle. Comparisons of CATHARE-3 simulations against data series from PERICLES and RBHT full scale experiments show satisfactory results. Quench front motions are well predicted, as well as clad temperatures in most of the tested runs. The BETHSY 6.7C Integral Effect Test simulating the gravity-driven reflooding process in a scaled PWR circuit is then compared to a CATHARE-3 simulation. The three-field model is applied in several parts of the circuit: core, upper plenum, hot leg and steam generator, represented by either 1D or 3D modules, while the classic six-equation model is used in the other parts of the loop. A short analysis of the results is presented. (author)
Revisiting large break LOCA with the CATHARE-3 three-field model
International Nuclear Information System (INIS)
Valette, Michel; Pouvreau, Jérôme; Bestion, Dominique; Emonot, Philippe
2011-01-01
Highlights: ► CATHARE 3 enables a three-field analysis of a LB LOCA. ► Reflooding experiments in isolated rod bundles are satisfactory predicted. ► A BETHSY integral test simulation supports the CATHARE 3 3-field assessment. - Abstract: Some aspects of large break LOCA analysis (steam binding, oscillatory reflooding, top-down reflooding) are expected to be improved in advanced system codes from more detailed description of flows by adding a third field for droplets. The future system code CATHARE-3 is under development by CEA and supported by EDF, AREVA-NP and IRSN in the frame of the NEPTUNE project and this paper shows some preliminary results obtained in reflooding conditions. A three-field model has been implemented, including vapor, continuous liquid and liquid droplet fields. This model features a set of nine equations of mass, momentum and energy balance. Such a model allows a more detailed description of the droplet transportation from core to steam generator, while countercurrent flow of continuous liquid is allowed. Code assessment against reflooding experiments in a rod bundle is presented, using 1D meshing of the bundle. Comparisons of CATHARE-3 simulations against data series from PERICLES and RBHT full scale experiments show satisfactory results. Quench front motions are well predicted, as well as clad temperatures in most of the tested runs. The BETHSY 6.7C Integral Effect Test simulating the gravity driven reflooding process in a scaled PWR circuit is then compared to CATHARE-3 simulation. The three-field model is applied in several parts of the circuit: core, upper plenum, hot leg and steam generator, represented by either 1D or 3D modules, while the classic six-equation model is used in the other parts of the loop. An analysis of these first results is presented and future work is defined for improving the droplet behavior simulation in both the upper plenum and the hot legs.
Revisiting the O(3) non-linear sigma model and its Pohlmeyer reduction
Energy Technology Data Exchange (ETDEWEB)
Pastras, Georgios [NCSR "Demokritos", Institute of Nuclear and Particle Physics, Attiki (Greece)
2018-01-15
It is well known that sigma models in symmetric spaces accept equivalent descriptions in terms of integrable systems, such as the sine-Gordon equation, through Pohlmeyer reduction. In this paper, we study the mapping between known solutions of the Euclidean O(3) non-linear sigma model, such as instantons, merons and elliptic solutions that interpolate between the latter, and solutions of the Pohlmeyer reduced theory, namely the sinh-Gordon equation. It turns out that instantons do not have a counterpart, merons correspond to the ground state, while the class of elliptic solutions is characterized by a two to one correspondence between solutions in the two descriptions. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
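For reference, the Pohlmeyer-reduced theory mentioned here is governed by the sinh-Gordon equation; schematically (standard form, stated as an assumption: conventions for the reduced field, here denoted φ, and for the light-cone derivatives vary between references):

```latex
% Euclidean O(3) non-linear sigma model: a unit-vector field n, n . n = 1.
% In conformal gauge the Pohlmeyer reduction trades n for a single scalar
% phi built from the invariant products of derivatives of n, and phi
% satisfies the sinh-Gordon equation:
\partial_{+}\partial_{-}\varphi \;=\; \sinh\varphi
```

The correspondences listed in the abstract (merons to the ground state, a two-to-one map for the elliptic solutions) are statements about which solutions of this reduced equation the sigma-model configurations land on.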
The general dynamic model of island biogeography revisited on the level of major plant families
DEFF Research Database (Denmark)
Lenzner, Bernd; Beierkuhnlein, Carl; Patrick, Weigelt
2017-01-01
Aim: The general dynamic model (GDM) proposed by Whittaker et al. (2008) is a widely accepted theoretical framework in island biogeography. In this study, we explore whether GDM predictions hold when overall plant diversity is deconstructed into major plant families. Location: 101 islands from 14 oceanic archipelagos worldwide. Methods: Occurrence data for all species of nine large, cosmopolitan flowering plant families were used to test predictions derived from the GDM. We analyzed the effects of island area and age on species richness as well as number and percentage of single-island endemic species per family using mixed-effect models. Results: Total species and endemic richness as well as the percentage of endemic species showed a hump-shaped relationship with island age. The overall pattern was mainly driven by few species-rich plant families. Varying patterns were found for individual…
The signal-to-noise analysis of the Little-Hopfield model revisited
International Nuclear Information System (INIS)
Bolle, D; Blanco, J Busquets; Verbeiren, T
2004-01-01
Using the generating functional analysis, an exact recursion relation is derived for the time evolution of the effective local field of the fully connected Little-Hopfield model. It is shown that, by leaving out the feedback correlations arising from earlier times in this effective dynamics, one precisely finds the recursion relations usually employed in the signal-to-noise approach. The consequences of this approximation as well as the physics behind it are discussed. In particular, it is pointed out why it is hard to notice the effects, especially for model parameters corresponding to retrieval. Numerical simulations confirm these findings. The signal-to-noise analysis is then extended to include all correlations, making it a full theory for dynamics at the level of the generating functional analysis. The results are applied to the frequently employed extremely diluted (a)symmetric architectures and to sequence processing networks.
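For readers unfamiliar with the Little-Hopfield setup analysed above, the following is a minimal sketch of the fully connected model with synchronous (parallel, i.e. "Little") dynamics and Hebbian couplings; the network size, pattern count and corruption level are arbitrary illustrative choices, not parameters from the paper.

```python
import random

random.seed(7)

N, P = 300, 2  # spins and stored patterns (load far below saturation)
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(P)]

# Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, zero diagonal
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        if i != j:
            J[i][j] = sum(p[i] * p[j] for p in patterns) / N

def parallel_update(s):
    """One synchronous (Little-model) update of all spins at once."""
    fields = [sum(J[i][j] * s[j] for j in range(N)) for i in range(N)]
    return [1 if h >= 0 else -1 for h in fields]

# Start from a corrupted copy of pattern 0 (5% of spins flipped)
state = list(patterns[0])
for i in random.sample(range(N), N // 20):
    state[i] = -state[i]

for _ in range(10):
    state = parallel_update(state)

overlap = sum(a * b for a, b in zip(state, patterns[0])) / N
```

At this low load the corrupted pattern is retrieved and the overlap with the stored pattern approaches 1; the paper's analysis concerns the exact effective-field dynamics of precisely this kind of network.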
Ponzano-Regge model revisited: I. Gauge fixing, observables and interacting spinning particles
International Nuclear Information System (INIS)
Freidel, Laurent; Louapre, David
2004-01-01
We show how to properly gauge fix all the symmetries of the Ponzano-Regge model for 3D quantum gravity. This amounts to doing explicit finite computations for transition amplitudes. We give the construction of the transition amplitudes in the presence of interacting quantum spinning particles. We introduce a notion of operators whose expectation value gives rise to either gauge fixing, introduction of time, or insertion of particles, according to the choice. We give the link between the spin foam quantization and the Hamiltonian quantization. We finally show the link between the Ponzano-Regge model and the quantization of Chern-Simons theory based on the double quantum group of SU(2)
International Nuclear Information System (INIS)
Bachschmid-Romano, Ludovica; Opper, Manfred
2015-01-01
We study analytically the performance of a recently proposed algorithm for learning the couplings of a random asymmetric kinetic Ising model from finite length trajectories of the spin dynamics. Our analysis shows the importance of the nontrivial equal time correlations between spins induced by the dynamics for the speed of learning. These correlations become more important as the spin’s stochasticity is decreased. We also analyse the deviation of the estimation error (paper)
Elementary isovector spin and orbital magnetic dipole modes revisited in the shell model
International Nuclear Information System (INIS)
Richter, A.
1988-08-01
A review is given on the status of mainly spin magnetic dipole modes in some sd- and fp-shell nuclei studied with inelastic electron and proton scattering, and by β⁺-decay. Particular emphasis is also placed on a fairly new, mainly orbital magnetic dipole mode investigated by high-resolution (e,e') and (p,p') scattering experiments on a series of fp-shell nuclei. Both modes are discussed in terms of the shell model with various effective interactions. (orig.)
The Educational Model of Private Colleges of Osteopathic Medicine: Revisited for 2003-2013.
Cummings, Mark
2015-12-01
Trends in the development of new private colleges of osteopathic medicine (COMs) described by the author in 2003 have accelerated in the ensuing decade. During 2003 to 2013, 10 new COMs as well as 2 remote teaching sites and 4 new branch campuses at private institutions were accredited, leading to a 98% increase in the number of students enrolled in private COMs. The key features of the private COM educational model during this period were a reliance on student tuition, the establishment of health professions education programs around the medical school, the expansion of class size, the creation of branch campuses and remote teaching sites, an environment that emphasizes teaching over research, and limited involvement in facilities providing clinical services to patients. There is institutional ownership of preclinical instruction, but clinical instruction occurs in affiliated hospitals and medical institutions where students are typically taught by volunteer and/or adjunct faculty. Between 2003 and 2013, this model attracted smaller universities and organizations, which implemented the strategies of established private COMs in initiating new private COMs, branch campuses, and remote teaching sites. The new COMs have introduced changes to the osteopathic profession and private COM model by expanding to new parts of the country and establishing the first for-profit medical school accredited in the United States in modern times. They have also increased pressure on the system of osteopathic graduate medical education, as the number of funded GME positions available to their graduates is less than the need.
The two-capacitor problem revisited: a mechanical harmonic oscillator model approach
International Nuclear Information System (INIS)
Lee, Keeyung
2009-01-01
The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that exactly half the work done by a constant applied force is dissipated irrespective of the form of dissipation mechanism when the system comes to a new equilibrium after a constant force is abruptly applied. This model is then applied to the energy loss mechanism in the capacitor charging problem or the two-capacitor problem. This approach allows a simple explanation of the energy dissipation mechanism in these problems and shows that the dissipated energy should always be exactly half the supplied energy whether that is caused by the Joule heat or by the radiation. This paper, which provides a simple treatment of the energy dissipation mechanism in the two-capacitor problem, is suitable for all undergraduate levels
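The headline result above follows directly from charge conservation and can be verified numerically; the component values below are arbitrary.

```python
# Two-capacitor problem: charge is conserved, yet exactly half the
# initial field energy is lost, independent of the dissipation mechanism.
C = 1.0          # capacitance of each (identical) capacitor, farads
V0 = 10.0        # initial voltage on the charged capacitor, volts

Q0 = C * V0                    # total charge (conserved)
E_initial = 0.5 * C * V0**2    # energy stored before connection

V_final = Q0 / (2 * C)         # charge spreads over capacitance 2C -> V0/2
E_final = 2 * (0.5 * C * V_final**2)

fraction_lost = (E_initial - E_final) / E_initial  # exactly 1/2
```

Note that the dissipated fraction is independent of C and V0, which is the point of the paper's oscillator analogy: the loss does not depend on whether the energy leaves as Joule heat or radiation.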
Thermodynamic modeling of the U–Zr system – A revisit
International Nuclear Information System (INIS)
Xiong, Wei; Xie, Wei; Shen, Chao; Morgan, Dane
2013-01-01
A new thermodynamic description of the U–Zr system is developed using the CALPHAD (CALculation of PHAse Diagrams) method with the aid of ab initio calculations. Thermodynamic properties, such as heat capacity, activities, and enthalpy of mixing, are well predicted using the improved thermodynamic description in this work. The model-predicted enthalpies of formation for the bcc and δ phases are in good agreement with the results from DFT + U ab initio calculations. The calculations in this work show better agreement with experimental data compared with the previous assessments. Using the integrated method of ab initio and CALPHAD modeling, an unexpected relation between the enthalpy of formation of the δ phase and the energy of Zr with hexagonal structure is revealed, and the model is improved by fitting these energies together. The present work demonstrates that ab initio calculations can help support a successful thermodynamic assessment of actinide systems, for which thermodynamic properties are often difficult to measure.
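For context, CALPHAD assessments of a binary substitutional solution phase such as bcc U–Zr typically write the molar Gibbs energy in the generic Redlich–Kister form below; the fitted U–Zr interaction parameters themselves are given in the paper and are not reproduced here.

```latex
G_m = x_{U}\,{}^{\circ}G_{U} + x_{Zr}\,{}^{\circ}G_{Zr}
    + RT \left( x_{U}\ln x_{U} + x_{Zr}\ln x_{Zr} \right)
    + x_{U} x_{Zr} \sum_{\nu \ge 0} {}^{\nu}L_{U,Zr}\, \left( x_{U} - x_{Zr} \right)^{\nu}
```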
Coastal Water Quality Modeling in Tidal Lake: Revisited with Groundwater Intrusion
Kim, C.
2016-12-01
A new method for predicting the temporal and spatial variation of water quality, accounting for a groundwater effect, has been proposed and applied to a water body partially connected to macro-tidal coastal waters in Korea. The method consists of direct measurement of environmental parameters, and it indirectly incorporates a nutrient budget analysis to estimate the submarine groundwater fluxes. Three-dimensional numerical modeling of water quality has been used with the directly collected data and the indirectly estimated groundwater fluxes. The applied area is Saemangeum tidal lake, which is enclosed by a 33 km-long sea dyke with tidal openings at two water gates. Many investigations of groundwater impact reveal that 10–50% of nutrient loading in coastal waters comes from submarine groundwater, particularly in macro-tidal flats, as on the west coast of Korea. Long-term monitoring of coastal water quality signals the possibility of groundwater influence on salinity reversal and on the excess mass outbalancing the normal budget in Saemangeum tidal lake. In the present study, we analyze the observed data to examine the influence of submarine groundwater, and then a box model is demonstrated for quantifying the influx and efflux. A three-dimensional numerical model has been applied to reproduce the process of groundwater dispersal and its effect on the water quality of Saemangeum tidal lake. The results show that groundwater influx during the summer monsoon contributes significantly to water quality in the tidal lake, about 20% more than during the dry season.
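A box model of the kind mentioned above can be reduced to a steady-state salt and water budget; the sketch below uses entirely hypothetical fluxes and salinities, not Saemangeum measurements.

```python
# Minimal steady-state box model for inferring fresh submarine groundwater
# discharge (SGD) from a salinity budget. All numbers are illustrative.
S_sea = 32.0     # salinity of seawater entering through the gates (psu)
S_lake = 8.0     # observed mean salinity of the lake box (psu)
Q_in = 100.0     # tidal seawater inflow through the gates (m^3/s)
Q_river = 250.0  # gauged river discharge (m^3/s, fresh)

# Salt balance: S_sea * Q_in = S_lake * Q_out  (fresh inputs carry no salt)
Q_out = S_sea * Q_in / S_lake

# Water balance: Q_in + Q_river + Q_gw = Q_out  ->  solve for groundwater
Q_gw = Q_out - Q_in - Q_river
```

Observed salinity lower than the pure seawater-river mixture would imply (a "salinity reversal" of the expected budget) is the kind of signal that points to an unaccounted fresh flux Q_gw.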
Revisiting simplified dark matter models in terms of AMS-02 and Fermi-LAT
Li, Tong
2018-01-01
We perform an analysis of the simplified dark matter models in the light of cosmic ray observables by AMS-02 and Fermi-LAT. We assume a fermion, scalar or vector dark matter particle with a leptophobic spin-0 mediator that couples only to Standard Model quarks and dark matter via a scalar and/or pseudo-scalar bilinear. The propagation and injection parameters of cosmic rays are determined by the observed fluxes of nuclei from AMS-02. We find that the AMS-02 observations are consistent with the dark matter framework within the uncertainties. The AMS-02 antiproton data prefer a 30 (50) GeV – 5 TeV dark matter mass and require an effective annihilation cross section in the region of 4 × 10⁻²⁷ (7 × 10⁻²⁷) – 4 × 10⁻²⁴ cm³/s for the simplified fermion (scalar and vector) dark matter models. Cross sections below 2 × 10⁻²⁶ cm³/s can evade the constraint from Fermi-LAT dwarf galaxies for about 100 GeV dark matter mass.
The Halo Occupation Distribution of obscured quasars: revisiting the unification model
Mitra, Kaustav; Chatterjee, Suchetana; DiPompeo, Michael A.; Myers, Adam D.; Zheng, Zheng
2018-06-01
We model the projected angular two-point correlation function (2PCF) of obscured and unobscured quasars selected using the Wide-field Infrared Survey Explorer (WISE), at a median redshift of z ∼ 1, using a five-parameter Halo Occupation Distribution (HOD) parametrization derived from a cosmological hydrodynamic simulation by Chatterjee et al. The HOD parametrization was previously used to model the 2PCF of optically selected quasars and X-ray bright active galactic nuclei (AGNs) at z ∼ 1. The current work shows that a single HOD parametrization can be used to model the population of different kinds of AGN in dark matter haloes, suggesting the universality of the relationship between AGN and their host dark matter haloes. Our results show that the median halo mass of central quasar hosts increases from optically selected (4.1^{+0.3}_{-0.4} × 10^{12} h^{-1} M_{⊙}) and infra-red (IR) bright unobscured populations (6.3^{+6.2}_{-2.3} × 10^{12} h^{-1} M_{⊙}) to obscured quasars (10.0^{+2.6}_{-3.7} × 10^{12} h^{-1} M_{⊙}), signifying an increase in the degree of clustering. The projected satellite fractions also increase from optically bright to obscured quasars and tend to disfavour a simple `orientation only' theory of active galactic nuclei unification. Our results also show that future measurements of the small-scale clustering of obscured quasars can constrain current theories of galaxy evolution where quasars evolve from an IR-bright obscured phase to the optically bright unobscured phase.
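A standard five-parameter HOD of the kind used in such 2PCF modeling (a Zheng et al.-style central/satellite split) can be sketched as follows; the parameter values are illustrative defaults, not the fits reported above.

```python
import math

def n_central(M, log_Mmin=12.5, sigma_logM=0.5):
    """Mean central occupation: a smooth step in log halo mass (Msun/h)."""
    return 0.5 * (1.0 + math.erf((math.log10(M) - log_Mmin) / sigma_logM))

def n_satellite(M, M0=1e12, M1=10**13.5, alpha=1.0):
    """Mean satellite occupation: truncated power law above cutoff M0."""
    if M <= M0:
        return 0.0
    return ((M - M0) / M1) ** alpha

def mean_occupation(M):
    """Mean total number of quasars hosted by a halo of mass M."""
    return n_central(M) + n_satellite(M)
```

The five parameters (log_Mmin, sigma_logM, M0, M1, alpha) control the halo-mass scale, the softness of the central step, and the satellite fraction, which is why median host mass and satellite fraction are the natural quantities compared across the obscured and unobscured samples.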
The S-wave model for electron-hydrogen scattering revisited
International Nuclear Information System (INIS)
Bartschat, K.; Bray, I.
1996-03-01
The R-matrix with pseudo-states (RMPS) and convergent close-coupling (CCC) methods are applied to the calculation of elastic, excitation, and total as well as single-differential ionization cross sections for the simplified S-wave model of electron-hydrogen scattering. Excellent agreement is found for the total cross section results obtained at electron energies between 0 and 100 eV. The two calculations also agree on the single-differential ionization cross section at 54.4 eV for the triplet spin channel, while discrepancies are evident in the singlet channel, which shows remarkable structure. 18 refs., 3 figs
Urban Morphology Influence on Urban Albedo: A Revisit with the SOLENE Model
Groleau, Dominique; Mestayer, Patrice G.
2013-05-01
This heuristic study of the urban morphology influence on urban albedo is based on some 3,500 simulations with the SOLENE model. The studied configurations include square blocks in regular and staggered rows, rectangular blocks with different street widths, cross-shaped blocks, infinite street canyons and several actual districts in Marseilles, Toulouse and Nantes, France. The scanned variables are plan density, facade density, building height, layout orientation, latitude, date and time of the day. The sky-view factors of the ground and canopy surfaces are also considered. This study demonstrates the significance of the facade density, in addition to the built plan density, as the explanatory geometrical factor to characterize the urban morphology, rather than building height. On the basis of these albedo calculations the puzzling results of Kondo et al. (Boundary-Layer Meteorol 100:225-242, 2001) for the influence of building height are explained, and the plan density influence is quantitatively assessed. It is shown that the albedo relationship with plan and facade densities obtained with the regular square plot configuration may be considered as a reference for all other configurations, with the exception of the infinite street canyon that shows systematic differences for the lower plan densities. The curves representing this empirical relationship may be used as a sort of abacus for all other geometries, while an approximate simple mathematical model is proposed, as well as relationships between the albedo and sky-view factors.
Revisiting the constant growth angle: Estimation and verification via rigorous thermal modeling
Virozub, Alexander; Rasin, Igal G.; Brandon, Simon
2008-12-01
Methods for estimating growth angle (θ_gr) values, based on the a posteriori analysis of directionally solidified material (e.g. drops), often involve assumptions of negligible gravitational effects as well as a planar solid/liquid interface during solidification. We relax both of these assumptions when using experimental drop shapes from the literature to estimate the relevant growth angles at the initial stages of solidification. Assuming these values to be constant, we use them as input into a rigorous heat transfer and solidification model of the growth process. This model, which is shown to reproduce the experimental shape of a solidified sessile water drop using the literature value of θ_gr = 0°, yields excellent agreement with experimental profiles using our estimated values for silicon (θ_gr = 10°) and germanium (θ_gr = 14.3°) solidifying on an isotropic crystalline surface. The effect of gravity on the solidified drop shape is found to be significant in the case of germanium, suggesting that gravity should either be included in the analysis or that care should be taken that the relevant Bond number is truly small enough in each measurement. The planar solidification interface assumption is found to be unjustified. Although this issue is important when simulating the inflection point in the profile of the solidified water drop, there are indications that solidified drop shapes (at least in the case of silicon) may be fairly insensitive to the shape of this interface.
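The Bond number criterion mentioned above is straightforward to evaluate; the sketch below uses approximate literature values for the liquid densities and surface tensions near the melting points, and an arbitrary 2 mm drop radius.

```python
# Bond number Bo = rho * g * R^2 / sigma compares gravity to surface
# tension for a drop of radius R. Material values are approximate
# literature numbers, for illustration only.
g = 9.81  # m/s^2

def bond_number(rho, sigma, R):
    """rho in kg/m^3, sigma in N/m, R in m."""
    return rho * g * R**2 / sigma

R = 2e-3  # a 2 mm drop (illustrative size)
Bo_si = bond_number(2570.0, 0.72, R)   # liquid silicon
Bo_ge = bond_number(5600.0, 0.60, R)   # liquid germanium
```

For the same drop size the germanium Bond number comes out noticeably larger than the silicon one, in line with the stronger gravity effect the authors report for germanium.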
Ota, Kazutaka; Kohda, Masanori; Hori, Michio; Sato, Tetsu
2011-10-01
Alternative reproductive tactics are widespread in males and may cause intraspecific differences in testes investment. Parker's sneak-guard model predicts that sneaker males, who mate under sperm competition risk, invest in testes relatively more than bourgeois conspecifics that have lower risk. Given that sneakers are much smaller than bourgeois males, sneakers may increase testes investment to overcome their limited sperm productivity because of their small body sizes. In this study, we examined the mechanism that mediates differential testes investment across tactics in the Lake Tanganyika cichlid fish Lamprologus callipterus. In the Rumonge population of Burundi, bourgeois males are small compared with those in other populations and have a body size close to sneaky dwarf males. Therefore, if differences in relative testis investment depend on sperm competition, the rank order of relative testis investment should be dwarf males > bourgeois males in Rumonge = bourgeois males in the other populations. If differences in relative testis investment depend on body size, the rank order of relative testes investment should be dwarf males > bourgeois males in Rumonge > bourgeois males in the other populations. Comparisons of relative testis investment among the three male groups supported the role of sperm competition, as predicted by the sneak-guard model. Nevertheless, the effects of absolute body size on testes investment should be considered to understand the mechanisms underlying intraspecific variation in testes investment caused by alternative reproductive tactics.
Attention capture by abrupt onsets: re-visiting the priority tag model
Directory of Open Access Journals (Sweden)
Meera Mary Sunny
2013-12-01
Full Text Available Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3) did participants start to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.
Attention capture by abrupt onsets: re-visiting the priority tag model.
Sunny, Meera M; von Mühlenen, Adrian
2013-01-01
Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3), participants started to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.
Ranger, N.; Millner, A.; Niehoerster, F.
2010-12-01
Traditionally, climate change risk assessments have followed a roughly four-stage linear ‘chain’, moving from socioeconomic projections, to climate projections, to primary impacts and then finally on to economic and social impact assessment. Adaptation decisions are then made on the basis of these outputs. The escalation of uncertainty through this chain is well known, resulting in an ‘explosion’ of uncertainties in the final risk and adaptation assessment. The space of plausible future risk scenarios is growing ever wider with the application of new techniques which aim to explore uncertainty ever more deeply, such as those used in the recent ‘probabilistic’ UK Climate Projections 2009 and the stochastic integrated assessment models, for example PAGE2002. This explosion of uncertainty can make decision-making problematic, particularly given that the uncertainty information communicated cannot be treated as strictly probabilistic and is therefore not an easy fit with standard decision-making-under-uncertainty approaches. Additional problems can arise from the fact that the uncertainty estimated for different components of the ‘chain’ is rarely directly comparable or combinable. Here, we explore the challenges and limitations of using current projections for adaptation decision-making. We report the findings of a recent report completed for the UK Adaptation Sub-Committee on approaches to deal with these challenges and make robust adaptation decisions today. To illustrate these approaches, we take a number of illustrative case studies, including a case of adaptation to hurricane risk on the US Gulf Coast. This is a particularly interesting case as it involves urgent adaptation of long-lived infrastructure but requires interpreting highly uncertain climate change science and modelling; i.e. projections of Atlantic basin hurricane activity. An approach we outline is reversing the linear chain of assessments to put the economics and decision
Pola, Marco; Cacace, Mauro; Fabbri, Paolo; Piccinini, Leonardo; Zampieri, Dario; Dalla Libera, Nico
2017-04-01
As one of the largest and most extensively utilized geothermal systems in northern Italy, the Euganean Geothermal System (EGS, Veneto region, NE Italy) has long been the subject of still ongoing studies. Hydrothermal waters feeding the system are of meteoric origin and infiltrate in the Veneto Prealps, to the north of the main geothermal area. The waters circulate for approximately 100 km in the subsurface of the central Veneto, outflowing with temperatures from 65°C to 86°C to the southwest near the cities of Abano Terme and Montegrotto Terme. The naturally emerging waters are mainly used for balneotherapeutic purposes, forming the famous Euganean spa district. This preferential outflow is thought to have a relevant structural component producing a high secondary permeability localized within an area of limited extent (approx. 25 km²). This peculiar structure is associated with a local network of fractures resulting from transtensional tectonics of the regional Schio-Vicenza fault system (SVFS) bounding the Euganean Geothermal Field (EGF). In the present study, a revised conceptual hydrothermal model for the EGS based on the regional hydrogeology and structural geology is proposed. In particular, this work aims to quantify: (1) the role of the regional SVFS, and (2) the impact of the high-density local fracture mesh beneath the EGF on the regional-to-local groundwater circulation at depth and its thermal configuration. 3D coupled flow and heat transport numerical simulations based on the newly developed conceptual model are carried out to properly quantify the results of these interactions. Consistently with the observations, the obtained results indicate that temperatures in the EGF reservoir are higher than in the surrounding areas, despite a uniform basal regional crustal heat inflow. In addition, they point to the presence of a structural causative process for the localized outflow, in which deep-seated groundwater is preferentially
Hartono, A. D.; Hakiki, Farizal; Syihab, Z.; Ambia, F.; Yasutra, A.; Sutopo, S.; Efendi, M.; Sitompul, V.; Primasari, I.; Apriandi, R.
2017-01-01
A preliminary EOR analysis at an early stage of assessment is pivotal to elucidate EOR feasibility. This study proposes an in-depth analysis toolkit for preliminary EOR evaluation. The toolkit incorporates EOR screening, predictive, economic, risk analysis and optimisation modules. The screening module introduces algorithms that take statistical and engineering notions into consideration. The United States Department of Energy (U.S. DOE) predictive models were implemented in the predictive module. The economic module is available to assess project attractiveness, while Monte Carlo Simulation is applied to quantify risk and uncertainty of the evaluated project. Optimisation scenarios of EOR practice can be evaluated using the optimisation module, in which the stochastic methods of Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Evolutionary Strategy (ES) were applied. The modules were combined into an integrated package for preliminary EOR assessment. Finally, we utilised the toolkit to evaluate several Indonesian oil fields for EOR evaluation (past projects) and feasibility (future projects). The attempt was able to update the previous consideration regarding EOR attractiveness and open new opportunities for EOR implementation in Indonesia.
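As an illustration of the kind of Monte Carlo risk module described above, a minimal NPV simulation might look like the following; the cost, price and recovery distributions are entirely hypothetical, not values from the study.

```python
import random

random.seed(0)

def npv(cashflows, rate=0.10):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def sample_project():
    """One Monte Carlo realization; all distributions are hypothetical."""
    capex = 50e6                             # up-front EOR cost, USD
    oil_price = random.uniform(40.0, 80.0)   # USD/bbl
    yearly_bbl = random.uniform(1e5, 3e5)    # incremental recovery, bbl/yr
    opex_per_bbl = 15.0
    yearly_cf = (oil_price - opex_per_bbl) * yearly_bbl
    return npv([-capex] + [yearly_cf] * 10)  # 10 producing years

results = sorted(sample_project() for _ in range(10_000))
p10, p50, p90 = (results[int(q * len(results))] for q in (0.10, 0.50, 0.90))
```

Reporting P10/P50/P90 NPV percentiles is a common way such a module summarizes project risk before any optimisation step.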
Structural plasticity in the dentate gyrus: revisiting a classic injury model.
Directory of Open Access Journals (Sweden)
Julia V. Perederiy
2013-02-01
Full Text Available The adult brain is in a continuous state of remodeling. This is nowhere more true than in the dentate gyrus, where competing forces such as neurodegeneration and neurogenesis, which can occur simultaneously, dynamically modify neuronal connectivity. This plasticity of the adult nervous system is particularly important in the context of traumatic brain injury or deafferentation. In this review, we summarize a classic injury model, lesioning of the perforant path, which removes the main extrahippocampal input to the dentate gyrus. Early studies revealed that in response to deafferentation, axons of remaining fiber systems and dendrites of mature granule cells undergo lamina-specific changes, providing one of the first examples of structural plasticity in the adult brain. Given the increasing role of adult-generated new neurons in the function of the dentate gyrus, we also compare the response of newborn and mature granule cells following lesioning of the perforant path. These studies provide insights not only into plasticity in the dentate gyrus, but also into the response of neural circuits to brain injury.
Revisiting source identification, weathering models, and phase discrimination for Exxon Valdez oil
International Nuclear Information System (INIS)
Driskell, W.B.; Payne, J.R.; Shigenaka, G.
2005-01-01
A large chemistry data set for polycyclic aromatic hydrocarbon (PAH) and saturated hydrocarbon (SHC) contamination in sediment, water and tissue samples has emerged in the aftermath of the 1989 Exxon Valdez oil spill in Prince William Sound, Alaska. When the oil was fresh, source identification was a primary objective and fairly reliable. However, source identification became problematic as the oil weathered and its signatures changed. In response to concerns regarding when the impacted area will be clean again, this study focused on developing appropriate tools to confirm hydrocarbon source identifications and assess weathering in various matrices. Previous efforts that focused only on the whole or particulate-phase oil are not adequate to track the dissolved-phase signal with low total PAH values. For that reason, a particulate signature index (PSI) and dissolved signature index (DSI) screening tool was developed in this study to discriminate between these 2 phases. The screening tool was used to measure the dissolved or water-soluble fraction of crude oil, which occurs at much lower levels than the particulate phase but which is more widely circulated and equally as important as the particulate oil phase. The discrimination methods can also identify normally discarded, low total PAH samples, which can increase the amount of usable data available to model other effects of oil spills. 37 refs., 3 tabs., 10 figs
Chatterjee, Ankita; Kundu, Sudip
2015-01-01
Chlorophyll is one of the most important pigments present in green plants, and rice is one of the major food crops consumed worldwide. We curated the existing genome-scale metabolic model (GSM) of the rice leaf by incorporating a new compartment, new reactions and transporters. We used this modified GSM to elucidate how chlorophyll is synthesized in a leaf through a series of biochemical reactions spanning different organelles, using inorganic macronutrients and light energy. We predicted the essential reactions and the associated genes of chlorophyll synthesis and validated them against the existing experimental evidence. Further, ammonia is known to be the preferred source of nitrogen in rice paddy fields. The ammonia entering the plant is assimilated in the root and leaf. The focus of the present work is centered on rice leaf metabolism. We studied the relative importance of ammonia transporters through the chloroplast and the cytosol and their interlinks with other intracellular transporters. Ammonia assimilation in the leaves takes place by the enzyme glutamine synthetase (GS), which is present in the cytosol (GS1) and chloroplast (GS2). Our results provide a possible explanation of why GS2 mutants show normal growth under minimum photorespiration and appear chlorotic when exposed to air. PMID:26443104
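GSM predictions of the kind described above rest on the steady-state mass-balance constraint S·v = 0 that every feasible flux distribution must satisfy; a minimal sketch on a hypothetical three-reaction pathway:

```python
# Steady-state (flux balance) constraint at the heart of a genome-scale
# metabolic model, on a toy two-metabolite pathway:
#   v1: uptake -> A,  v2: A -> B,  v3: B -> export
S = [
    [1, -1, 0],   # metabolite A: produced by v1, consumed by v2
    [0, 1, -1],   # metabolite B: produced by v2, consumed by v3
]

def is_steady_state(S, v, tol=1e-9):
    """True if every internal metabolite is mass-balanced (S v = 0)."""
    return all(abs(sum(s_ij * v_j for s_ij, v_j in zip(row, v))) < tol
               for row in S)

balanced = is_steady_state(S, [10.0, 10.0, 10.0])
unbalanced = is_steady_state(S, [10.0, 5.0, 5.0])  # metabolite A accumulates
```

A reaction is predicted "essential" when forcing its flux to zero leaves no balanced flux distribution that still produces the target (here, chlorophyll) — in practice this feasibility test is posed as a linear program over the same constraint.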
Hartono, A. D.
2017-10-17
A preliminary EOR analysis at an early stage of assessment is pivotal to elucidate EOR feasibility. This study proposes an in-depth analysis toolkit for preliminary EOR evaluation. The toolkit incorporates EOR screening, predictive, economic, risk analysis and optimisation modules. The screening module introduces algorithms that take statistical and engineering notions into consideration. The United States Department of Energy (U.S. DOE) predictive models were implemented in the predictive module. The economic module is available to assess project attractiveness, while Monte Carlo Simulation is applied to quantify risk and uncertainty of the evaluated project. Optimisation scenarios of EOR practice can be evaluated using the optimisation module, in which the stochastic methods of Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Evolutionary Strategy (ES) were applied. The modules were combined into an integrated package for preliminary EOR assessment. Finally, we utilised the toolkit to evaluate several Indonesian oil fields for EOR evaluation (past projects) and feasibility (future projects). The attempt was able to update the previous consideration regarding EOR attractiveness and open new opportunities for EOR implementation in Indonesia.
Cerebellar supervised learning revisited: biophysical modeling and degrees-of-freedom control.
Kawato, Mitsuo; Kuroda, Shinya; Schweighofer, Nicolas
2011-10-01
Biophysical models of spike-timing-dependent plasticity have explored dynamics with a molecular basis for such computational concepts as coincidence detection, synaptic eligibility traces, and Hebbian learning. Overall, they support different learning algorithms in different brain areas, especially supervised learning in the cerebellum. Because a single spine is physically very small, chemical reactions in it are essentially stochastic, so a sensitivity-longevity dilemma exists in synaptic memory. Here, a cascade of excitable and bistable dynamics is proposed to overcome this difficulty. Learning algorithms in all brain regions confront difficult generalization problems. To resolve this issue, control of the degrees of freedom can be realized by changing the synchronicity of neural firing. In particular, for cerebellar supervised learning, the triangular closed-loop circuit consisting of Purkinje cells, the inferior olive nucleus, and the cerebellar nucleus is proposed as a circuit that optimally controls synchronous firing and the degrees of freedom in learning. Copyright © 2011 Elsevier Ltd. All rights reserved.
Piecing together the maternal death puzzle through narratives: the three delays model revisited.
Directory of Open Access Journals (Sweden)
Viva Combs Thorsen
Full Text Available BACKGROUND: In Malawi maternal mortality continues to be a major public health challenge. Going beyond the numbers to form a more complete view of why women die is critical to improving access to and quality of emergency obstetric care. The objective of the current study was to identify the socio-cultural and facility-based factors that contributed to maternal deaths in the district of Lilongwe, Malawi. METHODS: Retrospectively, 32 maternal death cases that occurred between January 1, 2011 and June 30, 2011 were reviewed independently by two gynecologists/obstetricians. Interviews were conducted with healthcare staff, family members, neighbors, and traditional birth attendants. Guided by the grounded theory approach, interview transcripts were analyzed manually and continuously. Emerging, recurring themes were identified and excerpts from the transcripts were categorized according to the Three Delays Model (3Ds). RESULTS: Sixteen deaths were due to direct obstetric complications, sepsis and hemorrhage being the most common. Sixteen deaths were due to indirect causes, mainly anemia, followed by HIV and heart disease. Failure to recognize signs, symptoms, and the severity of the situation; use of traditional birth attendant services; low female literacy; delayed access to transport; the hardship of long distances and physical terrain; delays in receiving prompt, quality emergency obstetric care; and delayed care at the hospital due to patient refusal or concealment were observed. According to the 3Ds, the most common delay was in receiving treatment upon reaching the facility, due to referral delays, missed diagnoses, lack of blood, lack of drugs, or inadequate care and severe mismanagement.
Oxidative phosphorylation revisited
DEFF Research Database (Denmark)
Nath, Sunil; Villadsen, John
2015-01-01
The fundamentals of oxidative phosphorylation and photophosphorylation are revisited. New experimental data on the involvement of succinate and malate anions respectively in oxidative phosphorylation and photophosphorylation are presented. These new data offer a novel molecular mechanistic...
Revisiting the formal foundation of Probabilistic Databases
Wanders, B.; van Keulen, Maurice
2015-01-01
One of the core problems in soft computing is dealing with uncertainty in data. In this paper, we revisit the formal foundation of a class of probabilistic databases with the purpose to (1) obtain data model independence, (2) separate metadata on uncertainty and probabilities from the raw data, (3)
DEFF Research Database (Denmark)
Hansen, Morten Balle; Lindholst, Andrej Christian
2016-01-01
Purpose: The purpose of this introduction article to the IJPSM special issue on marketization is to clarify the conceptual foundations of marketization as a phenomenon within the public sector and to gauge current marketization trends on the basis of the seven articles in the special issue. Design/methodology/approach: Conceptual clarification and cross-cutting review of seven articles analysing marketization in six countries in three policy areas at the level of local government. Findings: Four ideal-type models are deduced: Quasi-markets, involving both provider competition and free choice for users; Classic contracting out; Benchmarking and yardstick competition; and Public-Private collaboration. On the basis of the review of the seven articles, it is found that all elements in all marketization models are firmly embedded but also under dynamic change within public service delivery systems. The review also…
The critical catastrophe revisited
International Nuclear Information System (INIS)
De Mulatier, Clélia; Rosso, Alberto; Dumonteil, Eric; Zoia, Andrea
2015-01-01
The neutron population in a prototype model of a nuclear reactor can be described as a collection of particles confined in a box and undergoing three key random mechanisms: diffusion, reproduction due to fissions, and death due to absorption events. When the reactor is operated at the critical point, where fissions exactly compensate absorptions, the whole neutron population might in principle go to extinction because of the wild fluctuations induced by births and deaths. This phenomenon, named the critical catastrophe, is nonetheless never observed in practice: feedback mechanisms acting on the total population, such as human intervention, have a stabilizing effect. In this work, we revisit the critical catastrophe by investigating the spatial behaviour of the fluctuations in a confined geometry. When the system is free to evolve, the neutrons may display a wild patchiness (clustering). By contrast, imposing control on the total population also acts against the local fluctuations, and may thus inhibit the spatial clustering. The effectiveness of population control in quenching spatial fluctuations is shown to depend on the competition between the mixing time of the neutrons (i.e. the average time taken for a particle to explore the finite viable space) and the extinction time.
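The birth-death-diffusion picture in the abstract above can be illustrated with a toy simulation (an illustrative sketch, not the authors' model; step sizes, rates, and the resampling scheme are arbitrary assumptions). Each particle diffuses in a unit box and, at criticality, either dies or splits in two with equal probability; the controlled variant resamples the population back to a fixed size each step, mimicking feedback on the total population.

```python
import random

def step_free(particles, rng):
    """Critical branching: each particle dies or splits into two with p = 1/2,
    after a small Gaussian diffusion step reflected into [0, 1]."""
    out = []
    for x in particles:
        if rng.random() < 0.5:
            y = min(max(x + rng.gauss(0.0, 0.05), 0.0), 1.0)
            out.extend([y, y])      # fission: two offspring
        # else: absorbed (death)
    return out

def step_controlled(particles, rng, n_target):
    """Same dynamics, then resample back to n_target (population control)."""
    out = step_free(particles, rng)
    if not out:
        return out                  # full extinction (vanishingly rare here)
    return [rng.choice(out) for _ in range(n_target)]

rng = random.Random(1)
free = [rng.random() for _ in range(100)]
ctrl = list(free)
for _ in range(200):
    free = step_free(free, rng)             # wild total-population fluctuations
    ctrl = step_controlled(ctrl, rng, 100)  # total population pinned at 100
```

The free population fluctuates wildly and typically dies out, while the controlled one keeps a constant size; comparing the spatial spread of the two ensembles is the toy analogue of the clustering question studied in the paper.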
International Nuclear Information System (INIS)
Towner, I.S.; Khanna, F.C.
1984-01-01
Consideration of core polarization, isobar currents and meson-exchange processes gives a satisfactory understanding of the ground-state magnetic moments in closed-shell-plus (or minus)-one nuclei, A = 3, 15, 17, 39 and 41. Ever since the earliest days of the nuclear shell model, the understanding of magnetic moments of nuclear states of supposedly simple configurations, such as doubly closed LS shells ±1 nucleon, has been a challenge for theorists. The experimental moments, which in most cases are known with extraordinary precision, show a small yet significant departure from the single-particle Schmidt values. The departure, however, is difficult to evaluate precisely since, as will be seen, it results from a sensitive cancellation between several competing corrections, each of which can be as large as the observed discrepancy. This, then, is the continuing fascination of magnetic moments. In this contribution, we revisit the subject principally to identify the role played by isobar currents, which are of much concern at this conference. But in so doing we warn quite strongly of the dangers of considering isobar currents in isolation; equal consideration must be given to competing processes, which in this context are the mundane nuclear structure effects, such as core polarization, and the more popular meson-exchange currents.
Lorentz violation naturalness revisited
Energy Technology Data Exchange (ETDEWEB)
Belenchia, Alessio; Gambassi, Andrea; Liberati, Stefano [SISSA - International School for Advanced Studies, via Bonomea 265, 34136 Trieste (Italy); INFN, Sezione di Trieste, via Valerio 2, 34127 Trieste (Italy)
2016-06-08
We revisit the naturalness problem of Lorentz invariance violations in a simple toy model of a scalar field coupled to a fermion field via a Yukawa interaction. We first review some well-known results concerning the low-energy percolation of Lorentz violation from high energies, presenting some details of the analysis not explicitly discussed in the literature and discussing some previously unnoticed subtleties. We then show how a separation between the scale of validity of the effective field theory and that of Lorentz invariance violations can hinder this low-energy percolation. While such a protection mechanism was previously considered in the literature, we provide here a simple illustration of how it works and of its general features. Finally, we consider a case in which dissipation is present, showing that the dissipative behaviour does not generically percolate to lower mass dimension operators, although dispersion does. Moreover, we show that a scale separation can protect from unsuppressed low-energy percolation in this case as well.
Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar
2017-02-01
A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place both in primary settling tanks (PSTs) and secondary settling tanks (SSTs) for a more detailed operation and control. First, experimental evidence is provided, pointing out distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.
Revisiting Okun's Relationship
Dixon, R.; Lim, G.C.; van Ours, Jan
2016-01-01
Our paper revisits Okun's relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985-2013. We find that the
Revisiting the Okun relationship
Dixon, R. (Robert); Lim, G.C.; J.C. van Ours (Jan)
2017-01-01
textabstractOur article revisits the Okun relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985–2013. We
Bounded Intention Planning Revisited
Sievers, Silvan; Wehrle, Martin; Helmert, Malte
2014-01-01
Bounded intention planning provides a pruning technique for optimal planning that was proposed several years ago. In addition, partial-order reduction techniques based on stubborn sets have recently been investigated for this purpose. In this paper, we revisit bounded intention planning in view of stubborn sets.
A Hydrostatic Paradox Revisited
Ganci, Salvatore
2012-01-01
This paper revisits a well-known hydrostatic paradox, observed when turning upside down a glass partially filled with water and covered with a sheet of light material. The phenomenon is studied in its most general form by including the mass of the cover. A historical survey of this experiment shows that a common misunderstanding of the phenomenon…
DEFF Research Database (Denmark)
Cornean, Horia; Nenciu, Gheorghe
2009-01-01
This paper is the second in a series revisiting the (effect of) Faraday rotation. We formulate and prove the thermodynamic limit for the transverse electric conductivity of Bloch electrons, as well as for the Verdet constant. The main mathematical tool is a regularized magnetic and geometric...
The Levy sections theorem revisited
International Nuclear Information System (INIS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Silva, Sergio Da
2007-01-01
This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.
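The sections idea can be shown with a toy construction in the spirit of the technique (an illustrative sketch, not the paper's exact procedure): returns are grouped into sections of roughly equal accumulated squared return, and the section sums lose the excess kurtosis that volatility clustering injects into the raw series. The regime-switching data below are synthetic.

```python
import random

def kurtosis(xs):
    """Sample kurtosis m4 / m2^2 (Gaussian reference value is 3)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / (m2 * m2)

def levy_sections(returns, delta):
    """Sum returns over sections accumulating ~delta of squared return,
    a simple proxy for the accumulated variance used in the sections idea."""
    sections, acc, acc_sq = [], 0.0, 0.0
    for r in returns:
        acc += r
        acc_sq += r * r
        if acc_sq >= delta:
            sections.append(acc)
            acc, acc_sq = 0.0, 0.0
    return sections

# Synthetic "returns" with volatility clustering: alternating calm/volatile blocks.
rng = random.Random(7)
returns = []
for block in range(200):
    sigma = 1.0 if block % 2 == 0 else 5.0
    returns.extend(rng.gauss(0.0, sigma) for _ in range(100))

k_raw = kurtosis(returns)                            # elevated by the vol mixture
k_sec = kurtosis(levy_sections(returns, delta=200.0))  # close to Gaussian
```

Each section spans a comparable amount of realized variance, so section sums are approximately Gaussian even though the raw mixture is fat-tailed.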
International Nuclear Information System (INIS)
George, Phiji P.; Irodi, Aparna; Keshava, Shyamkumar N.; Lamont, Anthony C.
2014-01-01
In this article we revisit, with the help of images, those classic signs in chest radiography described by Dr Benjamin Felson himself, or other illustrious radiologists of his time, cited and discussed in 'Chest Roentgenology'. We briefly describe the causes of the signs, their utility and the differential diagnosis to be considered when each sign is seen. Wherever possible, we use CT images to illustrate the basis of some of these classic radiographic signs.
Fathi, Albert
2015-07-01
In this paper we revisit our joint work with Antonio Siconolfi on time functions. We will give a brief introduction to the subject. We will then show how to construct a Lipschitz time function in a simplified setting. We will end with a new result showing that the Aubry set is not an artifact of our proof of existence of time functions for stably causal manifolds.
Whitehead, Jim; De Bra, Paul; Grønbæk, Kaj; Larsen, Deena; Legget, John; schraefel, monica m.c.
2002-01-01
It has been 15 years since the original presentation by Frank Halasz at Hypertext'87 on seven issues for the next generation of hypertext systems. These issues are: Search and Query; Composites; Virtual Structures; Computation in/over hypertext network; Versioning; Collaborative Work; and Extensibility and Tailorability. Since that time, these issues have formed the nucleus of multiple research agendas within the Hypertext community. Befitting this direction-setting role, the issues have been revisited ...
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
Moresi, Louis
2015-04-01
Dynamic Topography Revisited. Dynamic topography is usually considered to be one of the trinity of contributing causes to the Earth's non-hydrostatic topography, along with the long-term elastic strength of the lithosphere and isostatic responses to density anomalies within the lithosphere. Dynamic topography, thought of this way, is what is left over when other sources of support have been eliminated. An alternate and explicit definition of dynamic topography is that deflection of the surface which is attributable to creeping viscous flow. The problem with the first definition of dynamic topography is 1) that the lithosphere is almost certainly a visco-elastic / brittle layer with no absolute boundary between flowing and static regions, and 2) that the lithosphere is a thermal / compositional boundary layer in which some buoyancy is attributable to immutable, intrinsic density variations and some is due to thermal anomalies which are coupled to the flow. In each case, it is difficult to draw a sharp line between each contribution to the overall topography. The second definition of dynamic topography does seem cleaner / more precise, but it suffers from the problem that it is not measurable in practice. On the other hand, this approach has resulted in a rich literature concerning the analysis of large scale geoid and topography and the relation to buoyancy and mechanical properties of the Earth [e.g. refs 1,2,3]. In convection models with viscous, elastic, brittle rheology and compositional buoyancy, however, it is possible to examine how the surface topography (and geoid) are supported and how different ways of interpreting the "observable" fields introduce different biases. This is what we will do. References (a.k.a. homework) [1] Hager, B. H., R. W. Clayton, M. A. Richards, R. P. Comer, and A. M. Dziewonski (1985), Lower mantle heterogeneity, dynamic topography and the geoid, Nature, 313(6003), 541-545, doi:10.1038/313541a0. [2] Parsons, B., and S. Daly (1983), The
Advanced Change Theory Revisited: An Article Critique
Directory of Open Access Journals (Sweden)
R. Scott Pochron
2008-12-01
Full Text Available The complexity of life in 21st century society requires new models for leading and managing change. With that in mind, this paper revisits the model of Advanced Change Theory (ACT) as presented by Quinn, Spreitzer, and Brown in their article, “Changing Others Through Changing Ourselves: The Transformation of Human Systems” (2000). The authors present ACT as a potential model for facilitating change in complex organizations. This paper presents a critique of the article and summarizes opportunities for further exploring the model in the light of current trends in developmental and integral theory.
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo's classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn's deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...
Metamorphosis in Craniiformea revisited
DEFF Research Database (Denmark)
Altenburger, Andreas; Wanninger, Andreas; Holmer, Lars E.
2013-01-01
We revisited the brachiopod fold hypothesis and investigated metamorphosis in the craniiform brachiopod Novocrania anomala. Larval development is lecithotrophic and the dorsal (brachial) valve is secreted by dorsal epithelia. We found that the juvenile ventral valve, which consists only of a thin … brachiopods during metamorphosis to cement their pedicle to the substrate. N. anomala is therefore not initially attached by a valve but by material corresponding to pedicle cuticle. This is different from previous descriptions, which had led to speculations about a folding event in the evolution of Brachiopoda…
Lechner, U.; Schmid, Beat
2001-01-01
Information and communication technology opens up an unprecedented space of design options for the creation of economic value. The business model "community" and the role "community organizer" are determined to become pivotal in the digital economy. We argue that any online business model needs to take communities and community organizing in the design of communication and the system architecture into account. Our discussion is guided by the media model (Schmid, 1997). We characterize the rel...
3-D modelling and analysis of Dst C-responses in the North Pacific Ocean region, revisited
DEFF Research Database (Denmark)
Kuvshinov, A.; Utada, H.; Avdeev, D.
2005-01-01
models that are as realistic and detailed as possible. In order to perform the simulations using realistic 3-D models on a routine basis a novel 3-D 'spherical' forward solution has been elaborated in this paper. The solution combines the modified iterative-dissipative method with a conjugate gradient...
Bobowik, Magdalena; Martinovic, Borja; Basabe, Nekane; Barsties, Lisa S.; Wachter, Gusta
2017-01-01
Rejection-identification and rejection-disidentification models propose that low-status groups identify with their in-group and disidentify with a high-status out-group in response to rejection by the latter. Our research tests these two models simultaneously among multiple groups of foreign-born
Revisiting Nursing Research in Nigeria
African Journals Online (AJOL)
2016-08-18
Aug 18, 2016 … health care research, it is therefore pertinent to revisit the state of nursing research in the country. … platforms, updated libraries with electronic resources … benchmarks for developing countries of 26% [17]; the amount is still …
Circular revisit orbits design for responsive mission over a single target
Li, Taibo; Xiang, Junhua; Wang, Zhaokui; Zhang, Yulin
2016-10-01
Responsive orbits play a key role in the missions of Operationally Responsive Space (ORS) because of their capabilities, which are usually focused on supporting specific targets rather than providing global coverage. One subtype of responsive orbit is the repeat-coverage orbit, which is nearly circular in most remote sensing applications. This paper deals with a special kind of repeating ground track orbit, referred to as a circular revisit orbit. Unlike traditional repeat-coverage orbits, a satellite on a circular revisit orbit can visit a target site at both the ascending and descending stages in one revisit cycle. This type of trajectory halves the traditional revisit time and helps obtain useful information for responsive applications. However, the numerical methods reported previously are often computationally expensive or fail to obtain such orbits. To overcome this difficulty, an analytical method to determine the existence conditions of solutions for revisit orbits is presented in this paper. To this end, the mathematical model of the circular revisit orbit is established under the central gravity model with the J2 perturbation. A constraint function of the circular revisit orbit is introduced, and its monotonicity is studied. The existence conditions and the number of such orbits follow naturally. Taking launch cost into consideration, an optimal design model of the circular revisit orbit is established to find the best orbit that visits a target twice a day, in the morning and in the afternoon, for several days. The result shows that circular revisit orbits are effective in responsive applications such as reconnaissance of natural disasters.
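The underlying repeat-ground-track condition can be sketched numerically (an illustrative sketch, not the paper's analytical method): with first-order secular J2 rates for a circular orbit, the track repeats after k nodal revolutions in d nodal days when k(ω_E − Ω̇) = d·n_node, and a simple bisection finds the matching semi-major axis.

```python
import math

MU = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
RE = 6378137.0        # m, Earth equatorial radius
J2 = 1.08262668e-3    # Earth oblateness coefficient
W_E = 7.2921150e-5    # rad/s, Earth rotation rate

def nodal_rates(a, inc):
    """First-order secular J2 rates for a circular orbit: draconitic
    (node-to-node) angular rate and RAAN drift."""
    n = math.sqrt(MU / a ** 3)
    f = 1.5 * J2 * (RE / a) ** 2
    raan_dot = -f * n * math.cos(inc)                         # node regression
    n_node = n * (1.0 + f * (4.0 * math.cos(inc) ** 2 - 1.0)) # circular-orbit approx.
    return n_node, raan_dot

def repeat_sma(k, d, inc, a_lo=RE + 200e3, a_hi=RE + 2000e3, tol=1.0):
    """Bisect for the semi-major axis whose track repeats after k revolutions
    in d nodal days (a sign change of g is assumed inside [a_lo, a_hi])."""
    def g(a):
        n_node, raan_dot = nodal_rates(a, inc)
        return k * (W_E - raan_dot) - d * n_node  # zero at the repeat condition
    lo, hi = a_lo, a_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# 15 revolutions per nodal day at a near-polar inclination: a one-day
# repeat track in the mid-500-km altitude range.
a = repeat_sma(15, 1, math.radians(97.0))
altitude_km = (a - RE) / 1000.0
```

The circular revisit orbits of the paper add the further constraint that both the ascending and descending passes cross the target; the same root-finding structure applies, with the constraint function studied analytically instead of by bisection.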
International Nuclear Information System (INIS)
Martinson, Liisa; Lamersdorf, Norbert; Warfvinge, Per
2005-01-01
Soil chemistry under the Solling clean-rain roof was simulated using the dynamic multi-layer soil chemistry model SAFE, including sulfate adsorption. Soil was sampled in order to parameterize the pH- and sulfate-concentration-dependent sulfate adsorption isotherm used in SAFE. Modeled soil solution chemistry was compared to the 14-year time series of monthly measurements of soil solution data at 10 and 100 cm depth. The deposition of N and S under the roof has been reduced by 68% and 53%, respectively, compared to the surrounding area. Despite this, the soil solution concentrations of sulfate are still high (a median of 420 μmol_c/L at 100 cm depth between 2000 and 2002) and the soil base saturation low (approximately 3% in the whole profile in 1998). Sulfate adsorption is an important process in Solling. The soil capacity to adsorb sulfate is large, the modeled adsorbed pool in 2003 down to 100 cm was 1030 kg S/ha, and the measured sulfate concentration is high due to release of adsorbed sulfate. The addition of sulfate adsorption improved the modeled sulfate dynamics, although the model still slightly underestimated the sulfate concentration at 100 cm. Model predictions show no recovery, based on the criterion of a Bc/Al ratio above 1 in the rooting zone, before the year 2050, independent of future deposition cuts. - Desorption of sulfate still influences soil chemistry
Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay
2010-01-01
In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to answer the long-standing question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the multi-stage model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.
Sippel, Judith; Meeßen, Christian; Cacace, Mauro; Mechie, James; Fishwick, Stewart; Heine, Christian; Scheck-Wenderoth, Magdalena; Strecker, Manfred R.
2017-01-01
We present three-dimensional (3-D) models that describe the present-day thermal and rheological state of the lithosphere of the greater Kenya rift region aiming at a better understanding of the rift evolution, with a particular focus on plume-lithosphere interactions. The key methodology applied is the 3-D integration of diverse geological and geophysical observations using gravity modelling. Accordingly, the resulting lithospheric-scale 3-D density model is consistent with (i) reviewed descriptions of lithological variations in the sedimentary and volcanic cover, (ii) known trends in crust and mantle seismic velocities as revealed by seismic and seismological data and (iii) the observed gravity field. This data-based model is the first to image a 3-D density configuration of the crystalline crust for the entire region of Kenya and northern Tanzania. An upper and a basal crustal layer are differentiated, each composed of several domains of different average densities. We interpret these domains to trace back to the Precambrian terrane amalgamation associated with the East African Orogeny and to magmatic processes during Mesozoic and Cenozoic rifting phases. In combination with seismic velocities, the densities of these crustal domains indicate compositional differences. The derived lithological trends have been used to parameterise steady-state thermal and rheological models. These models indicate that crustal and mantle temperatures decrease from the Kenya rift in the west to eastern Kenya, while the integrated strength of the lithosphere increases. Thereby, the detailed strength configuration appears strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localisation and propagation of rifting.
A practical method of predicting client revisit intention in a hospital setting.
Lee, Kyun Jick
2005-01-01
Data mining (DM) models are an alternative to traditional statistical methods for examining whether higher customer satisfaction leads to higher revisit intention. This study used satisfaction data from 906 outpatients, collected in nationwide face-to-face interviews conducted by professional interviewers in South Korea in 1998. Analyses showed that the relationship between overall satisfaction with hospital services and outpatients' revisit intention, with word-of-mouth recommendation as an intermediate variable, is nonlinear. The five strongest predictors of revisit intention were overall satisfaction, intention to recommend to others, awareness of hospital promotion, satisfaction with the physician's kindness, and satisfaction with the treatment level.
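The prediction task above can be sketched with a minimal logistic-regression classifier on synthetic data (the 1998 survey data are not reproduced here; the features, coefficients, and generating rule below are invented stand-ins, and the original study used data mining models rather than this particular classifier).

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=300):
    """Batch gradient descent for logistic regression; returns (weights, bias)."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Hypothetical outpatient records: overall satisfaction and recommend-to-others
# scores (1-5 scale, mean-centered), with revisit intention generated from a
# known rule so the fit can be checked.
rng = random.Random(0)
X, y = [], []
for _ in range(900):
    sat, rec = rng.uniform(1.0, 5.0), rng.uniform(1.0, 5.0)
    p = sigmoid(2.0 * (sat - 3.0) + 1.5 * (rec - 3.0))
    X.append([sat - 3.0, rec - 3.0])
    y.append(1 if rng.random() < p else 0)

w, b = train_logistic(X, y)
preds = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0
         for xi in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
```

Because the label rule is noisy, perfect accuracy is unattainable even in principle; the classifier only needs to recover the direction of the satisfaction-revisit relationship, which a nonlinear DM model would refine further.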
International Nuclear Information System (INIS)
Dong, Jianping
2011-01-01
The many-body space-fractional quantum system is studied using the density matrix method. We give new results for the Thomas-Fermi model and obtain the quantum pressure of the free electron gas. We also show the validity of the Hohenberg-Kohn theorems in space-fractional quantum mechanics and generalize density functional theory to fractional quantum mechanics. -- Highlights: → The Thomas-Fermi model is studied within the framework of fractional quantum mechanics. → We show the validity of the HK theorems in space-fractional quantum mechanics. → Density functional theory is generalized to fractional quantum mechanics.
Mancuso, Katherine; Mauck, Matthew C.; Kuchenbecker, James A.; Neitz, Maureen; Neitz, Jay
2010-01-01
In 1993, DeValois and DeValois proposed a “multi-stage color model” to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson’s Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined ...
Revisiting Constructivist Teaching Methods in Ontario Colleges Preparing for Accreditation
Schultz, Rachel A.
2015-01-01
At the time of writing, the first community colleges in Ontario were preparing for transition to an accreditation model from an audit system. This paper revisits constructivist literature, arguing that a more pragmatic definition of constructivism effectively blends positivist and interactionist philosophies to achieve both student centred…
Directory of Open Access Journals (Sweden)
Raymond C K Chan
Full Text Available BACKGROUND: Neurological soft signs and neurocognitive impairments have long been considered important features of schizophrenia. Previous correlational studies have suggested that there is a significant relationship between neurological soft signs and neurocognitive functions. The purpose of the current study was to examine the underlying relationships between these two distinct constructs with structural equation modeling (SEM). METHODS: 118 patients with schizophrenia and 160 healthy controls were recruited for the current study. The abridged version of the Cambridge Neurological Inventory (CNI) and a set of neurocognitive function tests were administered to all participants. SEM was then conducted independently in these two samples to examine the relationships between neurological soft signs and neurocognitive functions. RESULTS: Both the measurement and structural models showed that the models fit well to the data in both patients and healthy controls. The structural equations also showed that there were modest to moderate associations among neurological soft signs, executive attention, verbal memory, and visual memory in the patient group, while the healthy controls showed more limited associations. CONCLUSIONS: The current findings indicate that motor coordination, sensory integration, and disinhibition contribute to the latent construct of neurological soft signs, whereas the subset of neurocognitive function tests contribute to the latent constructs of executive attention, verbal memory, and visual memory in the present sample. Greater evidence of neurological soft signs is associated with more severe impairment of executive attention and memory functions. Clinical and theoretical implications of the model findings are discussed.
DEFF Research Database (Denmark)
Sørup, Christian Michel; Jacobsen, Peter
2014-01-01
are entitled safety and satisfaction, waiting time, information delivery, and infrastructure accordingly. As an empirical foundation, a recently published comprehensive survey in 11 Danish EDs is analysed in depth using structural equation modeling (SEM). Consulting the proposed framework, ED decision makers...
Nazem, Mohsen; Trépanier, Martin; Morency, Catherine
2015-01-01
An Enhanced Intervening Opportunities Model (EIOM) is developed for Public Transit (PT). This is a supply-dependent distribution model, singly constrained on trip production for work trips during morning peak hours (6:00 a.m.-9:00 a.m.) within the Island of Montreal, Canada. Different data sets, including the 2008 Origin-Destination (OD) survey of the Greater Montreal Area, the 2006 Census of Canada, GTFS network data, along with the geographical data of the study area, are used. EIOM is a nonlinear model composed of socio-demographics, PT supply data and work location attributes. An enhanced destination ranking procedure is used to calculate the number of spatially cumulative opportunities, the basic variable of EIOM. For comparison, a Basic Intervening Opportunities Model (BIOM) is developed by using the basic destination ranking procedure. The main difference between EIOM and BIOM is in the destination ranking procedure: EIOM considers the maximization of a utility function composed of PT Level Of Service and number of opportunities at the destination, along with the OD trip duration, whereas BIOM is based on a destination ranking derived only from OD trip durations. Analysis confirmed that EIOM is more accurate than BIOM. This study presents a new tool for PT analysts, planners and policy makers to study the potential changes in PT trip patterns due to changes in socio-demographic characteristics, PT supply, and other factors. It also opens new opportunities for the development of more accurate PT demand models with new emergent data such as smart card validations.
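For illustration, the classic intervening-opportunities allocation that underlies both BIOM and EIOM can be sketched as follows; the function and parameter names are ours, and L, the per-opportunity acceptance probability, is an assumed illustrative value, not one fitted in the study:

```python
import math

def intervening_opportunities(origin_trips, ranked_opportunities, L):
    """Classic intervening-opportunities trip distribution.

    origin_trips: total trips produced at the origin
    ranked_opportunities: opportunities at each destination, in rank order
    L: probability that any single opportunity is accepted
    Returns the trips allocated to each destination, in rank order.
    """
    trips = []
    cum = 0.0  # cumulative opportunities already passed
    for opp in ranked_opportunities:
        # P(stopping within this destination's opportunity band):
        # exp(-L*cum) - exp(-L*(cum + opp))
        p = math.exp(-L * cum) - math.exp(-L * (cum + opp))
        trips.append(origin_trips * p)
        cum += opp
    return trips

dest = intervening_opportunities(1000, [500, 300, 200], L=1e-3)
```

EIOM and BIOM differ only in how `ranked_opportunities` is ordered: by a utility of PT level of service and opportunity counts in EIOM, by OD trip duration alone in BIOM.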
International Nuclear Information System (INIS)
Hiatt, JR; Rivard, MJ
2014-01-01
Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings were used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10{sup 10} histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose rate distribution on the transverse plane did not change beyond 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm from the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.
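As context for the TG-43 parameters mentioned above, the line-source geometry function of the TG-43 formalism can be sketched as follows; the formula is the standard one, but the function name and the sample values in the example are ours:

```python
import math

def geometry_factor_line(r, theta_deg, L):
    """TG-43 line-source geometry function G_L(r, theta) = beta / (L r sin(theta)),
    where beta is the angle subtended by the active length L at the point
    (r, theta). Reduces to the point-source 1/r^2 as L -> 0; on the source
    axis the standard limit 1/(r^2 - L^2/4) is used instead."""
    theta = math.radians(theta_deg)
    s = math.sin(theta)
    if abs(s) < 1e-9:
        return 1.0 / (r * r - L * L / 4.0)  # on-axis limit
    beta = (math.atan((r * math.cos(theta) + L / 2.0) / (r * s))
            - math.atan((r * math.cos(theta) - L / 2.0) / (r * s)))
    return beta / (L * r * s)
```

For a short active length the result approaches the point-source inverse-square value, which is why the transverse-plane dose rate is relatively insensitive to small design changes.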
Neutrino assisted GUT baryogenesis revisited
Huang, Wei-Chih; Päs, Heinrich; Zeißner, Sinan
2018-03-01
Many grand unified theory (GUT) models conserve the difference between the baryon and lepton number, B-L. These models can create baryon and lepton asymmetries from heavy Higgs or gauge boson decays with B+L ≠ 0 but with B-L = 0. Since the sphaleron processes violate B+L, such GUT-generated asymmetries will finally be washed out completely, making GUT baryogenesis scenarios incapable of reproducing the observed baryon asymmetry of the Universe. In this work, we revisit the idea to revive GUT baryogenesis, proposed by Fukugita and Yanagida, where right-handed neutrinos erase the lepton asymmetry before the sphaleron processes can significantly wash out the original B+L asymmetry, and in this way one can prevent a total washout of the initial baryon asymmetry. By solving the Boltzmann equations numerically for baryon and lepton asymmetries in a simplified 1+1 flavor scenario, we can confirm the results of the original work. We further generalize the analysis to a more realistic scenario of three active and two right-handed neutrinos to highlight flavor effects of the right-handed neutrinos. Large regions in the parameter space of the Yukawa coupling and the right-handed neutrino mass featuring successful baryogenesis are identified.
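The washout logic described above can be illustrated with a minimal sketch. The conversion factor c = 28/79 is the standard Standard-Model sphaleron value; the unit asymmetries in the example are purely illustrative:

```python
def final_baryon_asymmetry(B_init, L_init, c=28.0 / 79.0):
    """Sphaleron-equilibrium sketch: electroweak sphalerons erase B+L but
    conserve B-L, so the surviving baryon number is B_f = c * (B - L),
    with c = 28/79 for the Standard Model particle content."""
    return c * (B_init - L_init)

# Plain GUT decay products with B - L = 0: the asymmetry is fully washed out.
washed_out = final_baryon_asymmetry(1.0, 1.0)

# Fukugita-Yanagida rescue: right-handed neutrinos erase the lepton
# asymmetry before sphalerons act, so effectively B - L = B and a net
# baryon asymmetry survives.
survives = final_baryon_asymmetry(1.0, 0.0)
```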
Pereyra, Marcelo
2016-01-01
Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...
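As a concrete instance of a log-concave model whose MAP estimate is computable by convex optimisation, here is a minimal sketch: a Gaussian likelihood with a Gaussian prior, where the posterior mode has the familiar ridge-regression closed form. All values are illustrative:

```python
import numpy as np

def map_estimate(X, y, sigma2, tau2):
    """MAP estimator for the linear model y ~ N(X w, sigma2 I) with a
    Gaussian prior w ~ N(0, tau2 I). The log-posterior is concave and
    its mode has the closed form
        w_MAP = (X^T X + (sigma2 / tau2) I)^{-1} X^T y.
    """
    lam = sigma2 / tau2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = map_estimate(np.eye(2), np.array([1.0, 0.0]), sigma2=1.0, tau2=1.0)
```

For non-Gaussian log-concave posteriors the mode has no closed form, but the same maximisation can be carried out with a generic convex solver.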
Engel, Benjamin D; Ludington, William B; Marshall, Wallace F
2009-10-05
The assembly and maintenance of eukaryotic flagella are regulated by intraflagellar transport (IFT), the bidirectional traffic of IFT particles (recently renamed IFT trains) within the flagellum. We previously proposed the balance-point length control model, which predicted that the frequency of train transport should decrease as a function of flagellar length, thus modulating the length-dependent flagellar assembly rate. However, this model was challenged by the differential interference contrast microscopy observation that IFT frequency is length independent. Using total internal reflection fluorescence microscopy to quantify protein traffic during the regeneration of Chlamydomonas reinhardtii flagella, we determined that anterograde IFT trains in short flagella are composed of more kinesin-associated protein and IFT27 proteins than trains in long flagella. This length-dependent remodeling of train size is consistent with the kinetics of flagellar regeneration and supports a revised balance-point model of flagellar length control in which the size of anterograde IFT trains tunes the rate of flagellar assembly.
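The original balance-point idea above can be sketched as a one-line ODE: a length-dependent assembly rate balanced against constant disassembly. The paper's revision attributes the length dependence to train size rather than train frequency, but the steady-state logic is the same. Parameter values here are illustrative, not measured rates:

```python
def flagellar_length(A, D, L0, dt=0.01, t_end=200.0):
    """Balance-point sketch: the assembly rate falls off as 1/L because a
    fixed pool of IFT material is spread over a longer flagellum, while
    disassembly proceeds at a constant rate D:
        dL/dt = A / L - D   ->   steady state at L* = A / D.
    Forward-Euler integration; A, D, L0 are illustrative parameters."""
    L = L0
    for _ in range(int(t_end / dt)):
        L += (A / L - D) * dt
    return L

# With A = 12 and D = 1 the length settles at L* = 12 (arbitrary units).
L_star = flagellar_length(A=12.0, D=1.0, L0=1.0)
```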
Resolution of Reflection Seismic Data Revisited
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Mosegaard, Klaus; Zunino, Andrea
The Rayleigh Principle states that the minimum separation between two reflectors that allows them to be visually separated is the separation where the wavelet maxima from the two superimposed reflections combine into one maximum. This happens around Δtres = λb/8, where λb is the predominant...... lower vertical resolution of reflection seismic data. In the following we will revisit the thin layer model and demonstrate that there is in practice no limit to the vertical resolution using the parameterization of Widess (1973), and that the vertical resolution is limited by the noise in the data...
The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling
van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.
2017-12-01
The mass distribution of stellar-mass black holes can provide important clues for supernova modelling, but observationally it is still ill constrained. It is therefore important to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and a different fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.
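The sensitivity of the inferred mass to inclination follows directly from the optical mass function; a minimal sketch, where f_M and q are approximate literature values quoted only to illustrate the steep sin³i dependence, not the paper's fitted numbers:

```python
import math

def black_hole_mass(f_M, q, i_deg):
    """Invert the optical mass function
        f(M) = M * sin(i)**3 / (1 + q)**2
    for the compact-object mass M.
    f_M  : mass function from the radial-velocity curve (solar masses)
    q    : mass ratio M_donor / M_BH
    i_deg: orbital inclination in degrees
    """
    return f_M * (1.0 + q) ** 2 / math.sin(math.radians(i_deg)) ** 3

# Approximate literature values for 1A 0620-00, for illustration only:
m_51 = black_hole_mass(2.76, 0.06, 51.0)  # inclination of Cantrell et al.
m_54 = black_hole_mass(2.76, 0.06, 54.1)  # inclination found in this work
```

A three-degree shift in inclination moves the inferred mass by close to a solar mass, which is why the disc-light systematics matter so much.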
Putilov, A. V.; Bugaenko, M. V.; Timokhin, D. V.
2017-01-01
The article offers approaches to modernizing the national education system using IT technologies, reviews the problems and obstacles to such modernization, and states concrete steps for adapting the educational process to labor market requirements. On the basis of the previously proposed "economic cross" model, strategic directions for informatization of the educational process are defined, the conditions and intensity of IT use at the time of writing are analyzed, and recommendations are developed for improving known modernization tools and creating new ones for Russian education.
Directory of Open Access Journals (Sweden)
Manuel Jimmy Saint-Cyr
2015-12-01
Full Text Available Due to its toxic properties, high stability, and prevalence, the presence of deoxynivalenol (DON) in the food chain is a major threat to food safety and therefore a health risk for both humans and animals. In this study, experiments were carried out with sows and female rats to examine the kinetics of DON after intravenous and oral administration at 100 µg/kg of body weight. After intravenous administration of DON in pigs, a two-compartment model with rapid initial distribution (0.030 ± 0.019 h) followed by a slower terminal elimination phase (1.53 ± 0.54 h) was fitted to the concentration profile of DON in pig plasma. In rats, a short elimination half-life (0.46 h) and a clearance of 2.59 L/h/kg were estimated by sparse sampling non-compartmental analysis. Following oral exposure, DON was rapidly absorbed and reached maximal plasma concentrations (Cmax) of 42.07 ± 8.48 and 10.44 ± 5.87 µg/L plasma after (tmax) 1.44 ± 0.52 and 0.17 h in pigs and rats, respectively. The mean bioavailability of DON was 70.5% ± 25.6% for pigs and 47.3% for rats. In the framework of DON risk assessment, these two animal models could be useful in an exposure scenario in two different ways because of their different bioavailability.
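The quantities reported above are related by standard pharmacokinetic identities; a minimal sketch, where the AUC values in the example are illustrative, chosen only to reproduce the reported rat bioavailability:

```python
import math

def biexponential(t, A, alpha, B, beta):
    """Two-compartment IV disposition: C(t) = A*exp(-alpha*t) + B*exp(-beta*t)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

def half_life(k_per_h):
    """Half-life from a first-order rate constant: t_half = ln(2) / k."""
    return math.log(2.0) / k_per_h

def oral_bioavailability(auc_oral, auc_iv, dose_iv, dose_oral):
    """Absolute bioavailability: F = (AUC_oral / AUC_iv) * (Dose_iv / Dose_oral)."""
    return (auc_oral / auc_iv) * (dose_iv / dose_oral)

# The reported rat elimination half-life of 0.46 h corresponds to a
# first-order rate constant of about 1.5 per hour.
k_rat = math.log(2.0) / 0.46

# Illustrative AUCs at equal doses, chosen to reproduce the reported F = 47.3%.
F_rat = oral_bioavailability(47.3, 100.0, 1.0, 1.0)
```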
DEFF Research Database (Denmark)
Collet, R.; Nordlund, Å.; Asplund, M.
2018-01-01
We present an abundance analysis of the low-metallicity benchmark red giant star HD 122563 based on realistic, state-of-the-art, high-resolution, three-dimensional (3D) model stellar atmospheres including non-grey radiative transfer through opacity binning with 4, 12, and 48 bins. The 48-bin 3D...... simulation reaches temperatures lower by ˜300-500 K than the corresponding 1D model in the upper atmosphere. Small variations in the opacity binning, adopted line opacities, or chemical mixture can cool the photospheric layers by a further ˜100-300 K and alter the effective temperature by ˜100 K. A 3D local...... molecular bands and lines in the ultraviolet, visible, and infrared. We find a small positive 3D-1D abundance correction for carbon (+0.03 dex) and negative ones for nitrogen (-0.07 dex) and oxygen (-0.34 dex). From the analysis of the [O I] line at 6300.3 Å, we derive a significantly higher oxygen...
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
2004-01-01
The derivation of the life quality index (LQI) is revisited and revised. This revision takes into account the unpaid but necessary work time needed to stay alive in clean and healthy conditions, to be fit for effective wealth-producing work and enjoyable free time. Dimension analysis...... at birth should not vary between countries. Finally the distributional assumptions are relaxed as compared to the assumptions made in an earlier work by the author. These assumptions concern the calculation of the life expectancy change due to the removal of an accident source. Moreover a simple public...... consistency problems with the standard power function expression of the LQI are pointed out. It is emphasized that the combination coefficient in the convex differential combination between the relative differential of the gross domestic product per capita and the relative differential of the expected life...
Balcerak, Ernie
2012-12-01
In January 1994, the two geostationary satellites known as Anik-E1 and Anik-E2, operated by Telesat Canada, failed one after the other within 9 hours, leaving many northern Canadian communities without television and data services. The outage, which shut down much of the country's broadcast television for hours and cost Telesat Canada more than $15 million, generated significant media attention. Lam et al. used publicly available records to revisit the event; they looked at failure details, media coverage, recovery effort, and cost. They also used satellite and ground data to determine the precise causes of those satellite failures. The researchers traced the entire space weather event from conditions on the Sun through the interplanetary medium to the particle environment in geostationary orbit.
Klein's double discontinuity revisited
DEFF Research Database (Denmark)
Winsløw, Carl; Grønbæk, Niels
2014-01-01
Much effort and research has been invested into understanding and bridging the ‘gaps’ which many students experience in terms of contents and expectations as they begin university studies with a heavy component of mathematics, typically in the form of calculus courses. We have several studies...... of bridging measures, success rates and many other aspects of these “entrance transition” problems. In this paper, we consider the inverse transition, experienced by university students as they revisit core parts of high school mathematics (in particular, calculus) after completing the undergraduate...... mathematics courses which are mandatory to become a high school teacher of mathematics. To what extent does the “advanced” experience enable them to approach the high school calculus in a deeper and more autonomous way ? To what extent can “capstone” courses support such an approach ? How could it be hindered...
Reframing in dentistry: Revisited
Directory of Open Access Journals (Sweden)
Sivakumar Nuvvula
2013-01-01
Full Text Available The successful practice of dentistry involves a good combination of technical skills and soft skills. Soft skills, or communication skills, are not taught extensively in dental schools, and they can be challenging to learn and, at times, to apply in treating dental patients. Guiding the child's behavior in the dental operatory is one of the preliminary steps to be taken by the pediatric dentist, and one who can successfully modify the behavior can definitely pave the way for a lifetime of comprehensive oral care. This article is an attempt to revisit a simple behavior guidance technique, reframing, and to explain the possible psychological perspectives behind it for better use in clinical practice.
Predictors and Outcomes of Revisits in Older Adults Discharged from the Emergency Department.
de Gelder, Jelle; Lucke, Jacinta A; de Groot, Bas; Fogteloo, Anne J; Anten, Sander; Heringhaus, Christian; Dekkers, Olaf M; Blauw, Gerard J; Mooijaart, Simon P
2018-04-01
To study predictors of emergency department (ED) revisits and the association between ED revisits and 90-day functional decline or mortality. Multicenter cohort study. One academic and two regional Dutch hospitals. Older adults discharged from the ED (N=1,093). At baseline, data on demographic characteristics, illness severity, and geriatric parameters (cognition, functional capacity) were collected. All participants were prospectively followed for an unplanned revisit within 30 days and for functional decline and mortality 90 days after the initial visit. The median age was 79 (interquartile range 74-84), and 114 participants (10.4%) had an ED revisit within 30 days of discharge. Age (hazard ratio (HR)=0.96, 95% confidence interval (CI)=0.92-0.99), male sex (HR=1.61, 95% CI=1.05-2.45), polypharmacy (HR=2.06, 95% CI=1.34-3.16), and cognitive impairment (HR=1.71, 95% CI=1.02-2.88) were independent predictors of a 30-day ED revisit. The area under the receiver operating characteristic curve to predict an ED revisit was 0.65 (95% CI=0.60-0.70). In a propensity score-matched analysis, individuals with an ED revisit were at higher risk (odds ratio=1.99 95% CI=1.06-3.71) of functional decline or mortality. Age, male sex, polypharmacy, and cognitive impairment were independent predictors of a 30-day ED revisit, but no useful clinical prediction model could be developed. However, an early ED revisit is a strong new predictor of adverse outcomes in older adults. © 2018 The Authors. The Journal of the American Geriatrics Society published by Wiley Periodicals, Inc. on behalf of The American Geriatrics Society.
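The reported discrimination (area under the receiver operating characteristic curve of 0.65) can be computed from predicted risks via the Mann-Whitney formulation; a minimal sketch with made-up risk scores:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney formulation: the
    probability that a randomly chosen positive case receives a higher
    predicted risk than a randomly chosen negative case; ties count 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Made-up 30-day revisit risk scores for revisiting (positive) and
# non-revisiting (negative) patients.
auc = roc_auc([0.40, 0.30], [0.20, 0.30])
```

An AUC of 0.5 corresponds to chance-level discrimination, which puts the study's 0.65 in context: the predictors are real but too weak for a clinically useful prediction model.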
Collet, R.; Nordlund, Å.; Asplund, M.; Hayek, W.; Trampedach, R.
2018-04-01
We present an abundance analysis of the low-metallicity benchmark red giant star HD 122563 based on realistic, state-of-the-art, high-resolution, three-dimensional (3D) model stellar atmospheres including non-grey radiative transfer through opacity binning with 4, 12, and 48 bins. The 48-bin 3D simulation reaches temperatures lower by ˜300-500 K than the corresponding 1D model in the upper atmosphere. Small variations in the opacity binning, adopted line opacities, or chemical mixture can cool the photospheric layers by a further ˜100-300 K and alter the effective temperature by ˜100 K. A 3D local thermodynamic equilibrium (LTE) spectroscopic analysis of Fe I and Fe II lines gives discrepant results in terms of derived Fe abundance, which we ascribe to non-LTE effects and systematic errors on the stellar parameters. We also determine C, N, and O abundances by simultaneously fitting CH, OH, NH, and CN molecular bands and lines in the ultraviolet, visible, and infrared. We find a small positive 3D-1D abundance correction for carbon (+0.03 dex) and negative ones for nitrogen (-0.07 dex) and oxygen (-0.34 dex). From the analysis of the [O I] line at 6300.3 Å, we derive a significantly higher oxygen abundance than from molecular lines (+0.46 dex in 3D and +0.15 dex in 1D). We rule out important OH photodissociation effects as possible explanation for the discrepancy and note that lowering the surface gravity would reduce the oxygen abundance difference between molecular and atomic indicators.
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on a Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
Haedt-Matt, Alissa A.; Keel, Pamela K.
2011-01-01
The affect regulation model of binge eating, which posits that patients binge eat to reduce negative affect (NA), has received support from cross-sectional and laboratory-based studies. Ecological momentary assessment (EMA) involves momentary ratings and repeated assessments over time and is ideally suited to identify temporal antecedents and consequences of binge eating. This meta-analytic review includes EMA studies of affect and binge eating. Electronic database and manual searches produced 36 EMA studies with N = 968 participants (89% Caucasian women). Meta-analyses examined changes in affect before and after binge eating using within-subjects standardized mean gain effect sizes (ES). Results supported greater NA preceding binge eating relative to average affect (ES = .63) and affect before regular eating (ES = .68). However, NA increased further following binge episodes (ES = .50). Preliminary findings suggested that NA decreased following purging in Bulimia Nervosa (ES = −.46). Moderators included diagnosis (with significantly greater elevations of NA prior to bingeing in Binge Eating Disorder compared to Bulimia Nervosa) and binge definition (with significantly smaller elevations of NA before binge versus regular eating episodes for the DSM definition compared to lay definitions of binge eating). Overall, results fail to support the affect regulation model of binge eating and challenge reductions in NA as a maintenance factor for binge eating. However, limitations of this literature include unidimensional analyses of NA and inadequate examination of affect during binge eating as binge eating may regulate only specific facets of affect or may reduce NA only during the episode. PMID:21574678
Directory of Open Access Journals (Sweden)
MEHDI M. POORANGI
2013-01-01
Full Text Available The current climate of business necessitates competition that is often tough and unpredictable. All organizations, regardless of their size and scope of operation, are facing severe competitive challenges. In order to cope with this phenomenon, managers are turning to e-commerce in their respective organizations. The present study hinges upon exploring and explaining the different dimensions of the adoption of e-commerce among small and medium enterprises, based on the five factors of the Diffusion of Innovation Model derived by Rogers. In this study, we employed survey methods. A questionnaire was distributed to 1,200 managers and employees in the manufacturing, service and agricultural sectors by email, with a response rate of 10%. The results gleaned from this study posit that relative advantage is influential vis-à-vis e-commerce adoption. Trialability and observability factors affect the level of confidence of management, which in turn influences e-commerce adoption. Meanwhile, the existing culture of a company affects the resistance of employees, which in turn negatively affects e-commerce adoption, while complexity does not significantly influence e-commerce adoption.
Leukemia and ionizing radiation revisited
Energy Technology Data Exchange (ETDEWEB)
Cuttler, J.M. [Cuttler & Associates Inc., Vaughan, Ontario (Canada); Welsh, J.S. [Loyola University-Chicago, Dept. or Radiation Oncology, Stritch School of Medicine, Maywood, Illinois (United States)
2016-03-15
A world-wide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. In spite of the large amount of evidence that contradicts the cancer predictions, this fear continues. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors in the combined exposed populations of Hiroshima and Nagasaki invalidates use of the LNT model for assessing the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on about 96,800 humans, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis. (author)
Revisiting the safety of aspartame.
Choudhary, Arbind Kumar; Pretorius, Etheresia
2017-09-01
Aspartame is a synthetic dipeptide artificial sweetener, frequently used in foods, medications, and beverages, notably carbonated and powdered soft drinks. Since 1981, when aspartame was first approved by the US Food and Drug Administration, researchers have debated both its recommended safe dosage (40 mg/kg/d) and its general safety to organ systems. This review examines papers published between 2000 and 2016 on both the safe dosage and higher-than-recommended dosages and presents a concise synthesis of current trends. Data on the safe aspartame dosage are controversial, and the literature suggests there are potential side effects associated with aspartame consumption. Since aspartame consumption is on the rise, the safety of this sweetener should be revisited. Most of the literature available on the safety of aspartame is included in this review. Safety studies are based primarily on animal models, as data from human studies are limited. The existing animal studies and the limited human studies suggest that aspartame and its metabolites, whether consumed in quantities significantly higher than the recommended safe dosage or within recommended safe levels, may disrupt the oxidant/antioxidant balance, induce oxidative stress, and damage cell membrane integrity, potentially affecting a variety of cells and tissues and causing a deregulation of cellular function, ultimately leading to systemic inflammation. © The Author(s) 2017. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Revisiting R-invariant direct gauge mediation
Energy Technology Data Exchange (ETDEWEB)
Chiang, Cheng-Wei [Center for Mathematics and Theoretical Physics andDepartment of Physics, National Central University,Taoyuan, Taiwan 32001, R.O.C. (China); Institute of Physics, Academia Sinica,Taipei, Taiwan 11529, R.O.C. (China); Physics Division, National Center for Theoretical Sciences,Hsinchu, Taiwan 30013, R.O.C. (China); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan); Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States); ICRR, University of Tokyo,Kashiwa, Chiba 277-8582 (Japan); Ibe, Masahiro [Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan); ICRR, University of Tokyo,Kashiwa, Chiba 277-8582 (Japan); Yanagida, Tsutomu T. [Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8583 (Japan)
2016-03-21
We revisit a special model of gauge mediated supersymmetry breaking, the “R-invariant direct gauge mediation.” We pay particular attention to whether the model is consistent with the minimal model of the μ-term, i.e., a simple mass term of the Higgs doublets in the superpotential. Although the two have been regarded as incompatible in view of the current experimental constraints on the superparticle masses and the observed Higgs boson mass, we find that the minimal μ-term can be consistent with the R-invariant gauge mediation model for a careful choice of model parameters. We derive an upper limit on the gluino mass from the observed Higgs boson mass. We also discuss whether the model can explain the 3σ excess of the Z+jets+E{sub T}{sup miss} events reported by the ATLAS collaboration.
International Nuclear Information System (INIS)
Neretnieks, Ivars; Liu Longcheng; Moreno, Luis
2010-03-01
Models are presented for solute transport between seeping water in fractured rock and a copper canister embedded in a clay buffer. The migration through an undamaged buffer is by molecular diffusion only, as the clay has such low hydraulic conductivity that water flow can be neglected. In the fractures and in any damaged zone, seeping water carries the solutes to or from the vicinity of the buffer in the deposition hole. During the time the water passes the deposition hole, molecular diffusion aids in the mass transfer of solutes between the water/buffer interface and the water at some distance from the interface. The residence time of the water and the contact area between the water and the buffer determine the rate of mass transfer between water and buffer. Simple analytical solutions are presented for the mass transfer in the seeping water. For complex migration geometries simplifying assumptions are made that allow analytical solutions to be obtained. The influence of variable apertures on the mass transfer is discussed and is shown to be moderate. The impact of damage to the rock around the deposition hole by spalling and by the presence of a cemented and fractured buffer is also explored. These phenomena lead to an increase of mass transfer between water and buffer. The overall rate of mass transfer between the bulk of the water and the canister is proportional to the overall concentration difference and inversely proportional to the sum of the mass transfer resistances. For visualization purposes the concept of equivalent flowrate is introduced. This entity can be thought of as the flowrate of water that will be depleted of its solute during the water passage past the deposition hole. The equivalent flowrate is also used to assess the release rate of radionuclides from a damaged canister. Examples are presented to illustrate how various factors influence the rate of mass transfer.
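The series-resistance relation described above (overall transfer rate proportional to the concentration difference and inversely proportional to the summed resistances) and the equivalent-flowrate concept can be sketched as follows. This is an illustrative sketch only; the function names and the example resistance values are ours, not from the report:

```python
def transfer_rate(delta_c, resistances):
    """Overall mass transfer rate: the driving concentration difference
    divided by the sum of the individual mass transfer resistances in series."""
    return delta_c / sum(resistances)

def equivalent_flowrate(resistances):
    """Equivalent flowrate Q_eq: the transfer rate per unit concentration
    difference, i.e. the flowrate of water that would be fully depleted of
    its solute while passing the deposition hole."""
    return 1.0 / sum(resistances)

# two resistances in series, e.g. the seeping water and the buffer
rate = transfer_rate(2.0, [1.0, 3.0])    # driving difference 2.0 -> rate 0.5
q_eq = equivalent_flowrate([1.0, 3.0])   # 0.25 per unit concentration
```

By construction, `transfer_rate(dc, R) == q_eq * dc`, which is what makes the equivalent flowrate a convenient visualization of the combined resistances.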
Neutrino dark energy. Revisiting the stability issue
Energy Technology Data Exchange (ETDEWEB)
Eggers Bjaelde, O.; Hannestad, S. [Aarhus Univ. (Denmark). Dept. of Physics and Astronomy; Brookfield, A.W. [Sheffield Univ. (United Kingdom). Dept. of Applied Mathematics and Dept. of Physics, Astro-Particle Theory and Cosmology Group; Van de Bruck, C. [Sheffield Univ. (United Kingdom). Dept. of Applied Mathematics, Astro-Particle Theory and Cosmology Group; Mota, D.F. [Heidelberg Univ. (Germany). Inst. fuer Theoretische Physik]|[Institute of Theoretical Astrophysics, Oslo (Norway); Schrempp, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Tocchini-Valentini, D. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Physics and Astronomy
2007-05-15
A coupling between a light scalar field and neutrinos has been widely discussed as a mechanism for linking (time varying) neutrino masses and the present energy density and equation of state of dark energy. However, it has been pointed out that the viability of this scenario in the non-relativistic neutrino regime is threatened by the strong growth of hydrodynamic perturbations associated with a negative adiabatic sound speed squared. In this paper we revisit the stability issue in the framework of linear perturbation theory in a model independent way. The criterion for the stability of a model is translated into a constraint on the scalar-neutrino coupling, which depends on the ratio of the energy densities in neutrinos and cold dark matter. We illustrate our results by providing meaningful examples both for stable and unstable models. (orig.)
Leadership and Management Theories Revisited
DEFF Research Database (Denmark)
Madsen, Mona Toft
2001-01-01
The goal of the paper is to revisit and analyze key contributions to the understanding of leadership and management. As part of the discussion, a role perspective that allows for additional and/or integrated leader dimensions, including a change-centered one, will be outlined. Seemingly, a major...
Revisiting Inter-Genre Similarity
DEFF Research Database (Denmark)
Sturm, Bob L.; Gouyon, Fabien
2013-01-01
We revisit the idea of ``inter-genre similarity'' (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than naive Bayes classification with zero-one loss (NB). We show empirically that IGS does not perform well, even for data that satisfies all its assumptions.
'Counterfeit deviance' revisited.
Griffiths, Dorothy; Hingsburger, Dave; Hoath, Jordan; Ioannou, Stephanie
2013-09-01
The field has seen a renewed interest in exploring the theory of 'counterfeit deviance' for persons with intellectual disability who sexually offend. The term was first presented in 1991 by Hingsburger, Griffiths and Quinsey as a means to differentiate in clinical assessment a subgroup of persons with intellectual disability whose behaviours appeared like paraphilia but served a function that was not related to paraphilic sexual urges or fantasies. Case observations were put forward to provide differential diagnosis of paraphilia in persons with intellectual disabilities compared to those with counterfeit deviance. The brief paper was published in a journal that is no longer available, and as such much of what is currently written on the topic is based on secondary sources. The current paper presents a theoretical piece to revisit the original counterfeit deviance theory, to clarify the myths and misconceptions that have arisen, and to evaluate the theory based on additional research and clinical findings. The authors also propose areas where there may be a basis for expansion of the theory. The theory of counterfeit deviance still has relevance as a consideration for clinicians when assessing the nature of a sexual offence committed by a person with an intellectual disability. Clinical differentiation of paraphilia from counterfeit deviance provides a foundation for intervention that is designed to specifically treat the underlying factors that contributed to the offence for a given individual. Counterfeit deviance is a concept that continues to provide areas for consideration for clinicians regarding the assessment and treatment of an individual with an intellectual disability who has sexually offended. It is not and never was an explanation for all sexually offending behaviour among persons with intellectual disabilities. © 2013 John Wiley & Sons Ltd.
Gaussian entanglement revisited
Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo
2018-02-01
We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.
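For Gaussian states the PPT test acts directly on the covariance matrix: partial transposition on mode B flips the sign of its momentum quadrature, and the state is PPT iff every symplectic eigenvalue of the transformed matrix is at least 1. A minimal numerical sketch of this standard criterion, using a two-mode squeezed state as the textbook entangled example; the quadrature ordering (x1, p1, x2, p2) and the vacuum normalisation to 1 are our conventions, not necessarily the paper's:

```python
import numpy as np

def symplectic_eigs(sigma):
    """Symplectic eigenvalues of a covariance matrix with quadrature
    ordering (x1, p1, x2, p2, ...): the moduli of the eigenvalues of
    i*Omega*sigma, each value appearing twice."""
    n = sigma.shape[0] // 2
    omega = np.zeros((2 * n, 2 * n))
    for k in range(n):
        omega[2 * k, 2 * k + 1] = 1.0
        omega[2 * k + 1, 2 * k] = -1.0
    return np.abs(np.linalg.eigvals(1j * omega @ sigma))

def is_ppt(sigma):
    """PPT test for a 1-vs-1 mode Gaussian state: partial transposition
    flips the sign of mode B's momentum; PPT iff all symplectic
    eigenvalues of the transformed matrix are >= 1."""
    P = np.diag([1.0, 1.0, 1.0, -1.0])
    return symplectic_eigs(P @ sigma @ P).min() >= 1.0 - 1e-9

# two-mode squeezed state: pure (all symplectic eigenvalues 1),
# entangled for r > 0, hence it must fail the PPT test
r = 0.5
c, s = np.cosh(2 * r), np.sinh(2 * r)
Z = np.diag([1.0, -1.0])
sigma = np.block([[c * np.eye(2), s * Z], [s * Z, c * np.eye(2)]])
```

For 1 versus 1 modes PPT is also sufficient for separability (the paper's class (i)), so `is_ppt(sigma) == False` here certifies entanglement; the smallest symplectic eigenvalue after partial transposition is e^{-2r}.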
Ellsworth, W. L.; Bulut, F.
2016-12-01
Much of what we know about the initiation of earthquakes comes from the temporal and spatial relationship of foreshocks to the initiation point of the mainshock. The 1999 Mw 7.6 Izmit, Turkey, earthquake was preceded by a 44 minute-long foreshock sequence. Bouchon et al. (Science, 2011) analyzed the foreshocks using a single seismic station, UCG, located to the north of the east-west fault, and concluded on the basis of waveform similarity that the foreshocks repeatedly re-ruptured the same fault patch, driven by slow slip at the base of the crust. We revisit the foreshock sequence using seismograms from 9 additional stations that recorded the four largest foreshocks (Mw 2.0 to 2.8) to better characterize the spatial and temporal evolution of the foreshock sequence and their relationship to the mainshock hypocenter. Cross-correlation timing and hypocentroid location with hypoDD reveals a systematic west-to-east propagation of the four largest foreshocks toward the mainshock hypocenter. Foreshock rupture dimensions estimated using spectral ratios imply no major overlap for the first three foreshocks. The centroid of the 4th and largest foreshock continues the eastward migration, but lies within the circular source area of the 3rd. The 3rd, however, has a low stress drop and strong directivity to the west. The mainshock hypocenter locates on the eastern edge of foreshock 4. We also re-analyzed waveform similarity of all 18 foreshocks recorded at UCG by removing the common mode signal and clustering the residual seismograms using the correlation coefficient as the distance metric. The smaller foreshocks cluster with the larger events in time order, sometimes as foreshocks and more commonly as aftershocks. These observations show that the Izmit foreshock sequence is consistent with a stress-transfer driven cascade, moving systematically to the east along the fault, and that there is no observational requirement for creep as a driving mechanism.
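The clustering step described above uses the waveform correlation coefficient as a distance metric. A minimal sketch of that idea on synthetic traces (not the Izmit data; all names and signals here are illustrative):

```python
import numpy as np

def corr_distance_matrix(waveforms):
    """Distance between traces defined as 1 - Pearson correlation:
    identical waveforms sit at distance 0, uncorrelated ones near 1,
    as used in correlation-based event clustering."""
    return 1.0 - np.corrcoef(waveforms)

# synthetic example: two nearly identical "events" and one unrelated trace
t = np.linspace(0.0, 1.0, 500)
rng = np.random.default_rng(1)
traces = np.vstack([
    np.sin(2 * np.pi * 5 * t),          # event A
    np.sin(2 * np.pi * 5 * t + 0.1),    # event B, slightly phase-shifted
    rng.normal(size=t.size),            # unrelated noise trace
])
D = corr_distance_matrix(traces)
# D[0, 1] is small (similar events); D[0, 2] is near 1 (dissimilar)
```

A hierarchical or threshold-based clustering on `D` then groups similar events, which is the role the correlation metric plays in the re-analysis of the 18 foreshocks.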
Place attachment and social legitimacy: Revisiting the sustainable entrepreneurship journey
Kibler, E; Fink, M; Lang, R; Munoz, PA
2015-01-01
This paper revisits the sustainable entrepreneurship journey by introducing a ‘place-based’ sustainable venture path model. We suggest that distinguishing between emotional (‘caring about the place’) and instrumental (‘using the place’) place attachment of sustainable entrepreneurs deepens our understanding of how place-based challenges of sustainable venture legitimacy are managed over time. We conclude with avenues for future sustainable entrepreneurship research.
Revisiting kaon physics in general Z scenario
Directory of Open Access Journals (Sweden)
Motoi Endo
2017-08-01
New physics contributions to the Z penguin are revisited in the light of the recently-reported discrepancy of the direct CP violation in K→ππ. Interference effects between the standard model and new physics contributions to ΔS=2 observables are taken into account. Although these effects are overlooked in the literature, they make the experimental bounds significantly more severe. It is shown that the new physics contributions must be tuned to enhance B(KL→π0νν¯), if the discrepancy of the direct CP violation is to be explained while satisfying the experimental constraints. The branching ratio can be as large as 6×10−10 when the contributions are tuned at the 10% level.
Re-visiting the electrophysiology of language.
Obleser, Jonas
2015-09-01
This editorial accompanies a special issue of Brain and Language re-visiting old themes and new leads in the electrophysiology of language. The event-related potential (ERP) as a series of characteristic deflections ("components") over time and their distribution on the scalp has been exploited by speech and language researchers over decades to find support for diverse psycholinguistic models. Fortunately, methodological and statistical advances have allowed human neuroscience to move beyond some of the limitations imposed when looking at the ERP only. Most importantly, we currently witness a refined and refreshed look at "event-related" (in the literal sense) brain activity that relates itself more closely to the actual neurobiology of speech and language processes. It is this imminent change in handling and interpreting electrophysiological data of speech and language experiments that this special issue intends to capture. Copyright © 2015 Elsevier Inc. All rights reserved.
Revisiting Mutual Fund Performance Evaluation
Angelidis, Timotheos; Giamouridis, Daniel; Tessaromatis, Nikolaos
2012-01-01
Mutual fund manager excess performance should be measured relative to their self-reported benchmark rather than the return of a passive portfolio with the same risk characteristics. Ignoring the self-reported benchmark introduces biases in the measurement of stock selection and timing components of excess performance. We revisit baseline empirical evidence in mutual fund performance evaluation utilizing stock selection and timing measures that address these biases. We introduce a new factor e...
Article Review: Advanced Change Theory Revisited: An Article Critique
Directory of Open Access Journals (Sweden)
R. Scott Pochron
2008-12-01
The complexity of life in 21st century society requires new models for leading and managing change. With that in mind, this paper revisits the model for Advanced Change Theory (ACT) as presented by Quinn, Spreitzer, and Brown in their article, “Changing Others Through Changing Ourselves: The Transformation of Human Systems” (2000). The authors present ACT as a potential model for facilitating change in complex organizations. This paper presents a critique of the article and summarizes opportunities for further exploring the model in the light of current trends in developmental and integral theory.
Directory of Open Access Journals (Sweden)
Rangga Restu Prayogo
2016-12-01
The purpose of this research is to study the relationships among destination image, service quality, e-WOM, and revisit intention in the tourism industry. A questionnaire was given to tourists visiting Sabang Island, one of the farthest islands in the western part of Indonesia, selected through convenience sampling. A structural equation model (SEM) test with WarpPLS 3.0 was used to test the relationships between the research variables. This research gathered data from 150 respondents. The empirical results from PLS-SEM showed that destination image positively affects e-WOM and revisit intention; service quality affects e-WOM and revisit intention; and e-WOM positively affects tourists' revisit intention. The implications and future research issues were discussed.
The Discipline Controversy Revisited.
Baumrind, Diana
1996-01-01
Found that neither the authoritative model nor the liberal (permissive) model offers parents an efficacious model of childrearing. Each polarized model contains an element of truth, but each demonizes the other. Argues that within a responsive and supportive parent-child relationship, prudent use of punishment is a necessary tool in discipline.…
Pair Production Constraints on Superluminal Neutrinos Revisited
International Nuclear Information System (INIS)
Brodsky, Stanley
2012-01-01
We revisit the pair creation constraint on superluminal neutrinos considered by Cohen and Glashow in order to clarify which types of superluminal models are constrained. We show that a model in which the superluminal neutrino is effectively light-like can evade the Cohen-Glashow constraint. In summary, any model for which the CG pair production process operates is excluded because such timelike neutrinos would not be detected by OPERA or other experiments. However, a superluminal neutrino which is effectively lightlike with fixed p² can evade the Cohen-Glashow constraint because of energy-momentum conservation. The coincidence involved in explaining the SN1987A constraint certainly makes such a picture improbable - but it is still intrinsically possible. The lightlike model is appealing in that it does not violate Lorentz symmetry in particle interactions, although one would expect Hughes-Drever tests to turn up a violation eventually. Other evasions of the CG constraints are also possible; perhaps, e.g., the neutrino takes a 'short cut' through extra dimensions or suffers anomalous acceleration in matter. Irrespective of the OPERA result, Lorentz-violating interactions remain possible, and ongoing experimental investigation of such possibilities should continue.
International Nuclear Information System (INIS)
Giveon, A.; Sarid, U.; Hall, L.J.; California Univ., Berkeley, CA
1991-01-01
Model-independent criteria for unification in the SU(5) framework are studied. These are applied to the minimal supersymmetric standard model and to the standard model with a split 45 Higgs representation. Although the former is consistent with SU(5) unification, the superpartner masses can vary over a wide range, and may even all lie well beyond the reach of planned colliders. Adding a split 45 to the standard model can also satisfy the unification criteria, so supersymmetric SU(5) is far from unique. Furthermore, we learn that separate Higgs doublets must couple to the top and bottom quarks in order to give a correct m_b/m_τ prediction. (orig.)
Schroedinger's variational method of quantization revisited
International Nuclear Information System (INIS)
Yasue, K.
1980-01-01
Schroedinger's original quantization procedure is revisited in the light of Nelson's stochastic framework of quantum mechanics. It is clarified why Schroedinger's proposal of a variational problem led us to a true description of quantum mechanics. (orig.)
Tourists' perceptions and intention to revisit Norway
Lazar, Ana Florina; Komolikova-Blindheim, Galyna
2016-01-01
Purpose - The overall purpose of this study is to explore tourists' perceptions and their intention to revisit Norway. The aim is to find out which factors drive the overall satisfaction, the willingness to recommend, and the revisit intention of international tourists who spend their holiday in Norway. Design-Method-Approach - the Theory of Planned Behavior (Ajzen 1991) is used as a framework to investigate tourists' intention and behavior towards Norway as a destination. The o...
DEFF Research Database (Denmark)
Rostgaard, Tine; B. Eydal, G.
2011-01-01
The Nordic childcare policy model is often reviewed and even recommended internationally for its contribution to gender equality, high female labour force participation and, perhaps more indirectly, to a high fertility rate. Nordic childcare services and parental leave schemes have thus been portrayed in the literature as policies which have managed to facilitate a work–family model of dual earners and dual carers. However, the recent introduction of cash-for-care schemes seems to go against the Nordic dual earner/dual carer model and ideals of gender equality, in supporting parental (maternal...
Early-Transition Output Decline Revisited
Directory of Open Access Journals (Sweden)
Crt Kostevc
2016-05-01
In this paper we revisit the issue of aggregate output decline that took place in the early transition period. We propose an alternative explanation of output decline that is applicable to Central- and Eastern-European countries. In the first part of the paper we develop a simple dynamic general equilibrium model that builds on work by Gomulka and Lane (2001). In particular, we consider price liberalization, interpreted as the elimination of distortionary taxation, as a trigger of the output decline. We show that price liberalization, in interaction with heterogeneous adjustment costs and non-employment benefits, leads to aggregate output decline and a surge in wage inequality. While these patterns are consistent with actual dynamics in CEE countries, this model cannot generate output decline in all sectors. Instead, sectors that were initially taxed even exhibit output growth. Thus, in the second part we consider an alternative general equilibrium model with only one production sector, two types of labor, and a distortion in the form of wage compression during the socialist era. The trigger for labor mobility and consequently output decline is wage liberalization. Assuming heterogeneity of workers in terms of adjustment costs and non-employment benefits can explain output decline in all industries.
Irreversible investments revisited
DEFF Research Database (Denmark)
Sandal, Leif K.; Steinshamn, Stein I.; Hoff, Ayoe
2007-01-01
A multi-dimensional, non-linear dynamic model in continuous time is presented for the purpose of finding the optimal combination of exploitation and capital investment in optimal renewable resource management. Non-malleability of capital is incorporated in the model through an asymmetric cost-function of investment, and investments can be both positive and negative. Exploitation is controlled through the utilisation rate of available capital. A novel feature in this model is that there are costs associated with the available capital whether it is utilised or not. And, in contrast to most of the previous literature, the state variables, namely the physical capital and the biological resource, enter the objective function. Due to the nonlinearities in this model some of the results are in sharp contrast to previous literature.
Nordic Corporate Governance Revisited
DEFF Research Database (Denmark)
Thomsen, Steen
2016-01-01
This paper reviews the key elements of the Nordic governance model, which include a distinct legal system, high governance ratings and low levels of corruption. Other characteristics include concentrated ownership, foundation ownership, semi two-tier board structures, employee representation...
Fridriksson, Julius; den Ouden, Dirk-Bart; Hillis, Argye E; Hickok, Gregory; Rorden, Chris; Basilakos, Alexandra; Yourganov, Grigori; Bonilha, Leonardo
2018-01-17
In most cases, aphasia is caused by strokes involving the left hemisphere, with more extensive damage typically being associated with more severe aphasia. The classical model of aphasia commonly adhered to in the Western world is the Wernicke-Lichtheim model. The model has been in existence for over a century, and classification of aphasic symptomatology continues to rely on it. However, far more detailed models of speech and language localization in the brain have been formulated. In this regard, the dual stream model of cortical brain organization proposed by Hickok and Poeppel is particularly influential. Their model describes two processing routes, a dorsal stream and a ventral stream, that roughly support speech production and speech comprehension, respectively, in normal subjects. Despite the strong influence of the dual stream model in current neuropsychological research, there has been relatively limited focus on explaining aphasic symptoms in the context of this model. Given that the dual stream model represents a more nuanced picture of cortical speech and language organization, cortical damage that causes aphasic impairment should map clearly onto the dual processing streams. Here, we present a follow-up study to our previous work that used lesion data to reveal the anatomical boundaries of the dorsal and ventral streams supporting speech and language processing. Specifically, by emphasizing clinical measures, we examine the effect of cortical damage and disconnection involving the dorsal and ventral streams on aphasic impairment. The results reveal that measures of motor speech impairment mostly involve damage to the dorsal stream, whereas measures of impaired speech comprehension are more strongly associated with ventral stream involvement. Equally important, many clinical tests that target behaviours such as naming, speech repetition, or grammatical processing rely on interactions between the two streams. This latter finding explains why patients with
Mutated hilltop inflation revisited
Pal, Barun Kumar
2018-05-01
In this work we re-investigate the pros and cons of mutated hilltop inflation. Applying the Hamilton-Jacobi formalism we solve the inflationary dynamics and find that inflation proceeds along the W_{-1} branch of the Lambert function. Depending on the model parameter, the mutated hilltop model renders two types of inflationary solutions: one corresponds to small inflaton excursion during observable inflation and the other describes large field inflation. The inflationary observables from curvature perturbation are in tune with the current data for a wide range of the model parameter. The small field branch predicts a negligible tensor to scalar ratio, r ~ O(10^{-4}), while the large field sector is capable of generating a high amplitude for tensor perturbations, r ~ O(10^{-1}). Also, the spectral index is almost independent of the model parameter, along with a very small negative amount of scalar running. Finally we find that mutated hilltop inflation closely resembles the α-attractor class of inflationary models in the limit αφ ≫ 1.
Rudy M. Schuster; Laura Sullivan; Duarte Morais; Diane Kuehn
2009-01-01
This analysis explores the differences in Affective and Cognitive Destination Image among three Hudson River Valley (New York) tourism communities. Multiple regressions were used with six dimensions of visitors' images to predict future intention to revisit. Two of the three regression models were significant. The only significantly contributing independent...
International Nuclear Information System (INIS)
Doyle, Joseph; Muehlegger, Erich; Samphantharak, Krislert
2010-01-01
Some gasoline markets exhibit remarkable price cycles, where price spikes are followed by a series of small price declines: a pattern consistent with a model of Edgeworth cycles described by Maskin and Tirole. We extend the model and empirically test its predictions with a new dataset of daily station-level prices in 115 US cities. Consistent with the theory, and often in contrast with previous empirical work, we find the least and most concentrated markets are much less likely to exhibit cycling behavior both within and across cities; areas with more independent convenience-store gas stations are also more likely to cycle. (author)
International Nuclear Information System (INIS)
Wendroff, B.
1988-01-01
The cooling of hot surfaces can be modeled in certain simple cases by a nonlinear eigenvalue problem describing the motion of a steady traveling cooling wave. Earlier work on the mathematical theory, the numerical analysis, and the asymptotics of this problem is reviewed.
Differentiated Duopoly Revisited
Onozaki, Tamotsu
2012-01-01
The present paper explores what happens in the analytical results of a duopoly model with product differentiation when heterogeneity of production cost is introduced. It is shown that there is a possibility that the price strategy is dominant over the quantity strategy even if goods are substitutes.
DEFF Research Database (Denmark)
Sporring, Jon
Principal Component Analysis is a simple tool to obtain linear models for stochastic data and is used both for data reduction or, equivalently, noise elimination, and for data analysis. Principal Component Analysis fits a multivariate Gaussian distribution to the data, and the typical method is by...
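The fitting step described above, centering the data and taking the leading eigenvectors of the sample covariance (the principal axes of the fitted Gaussian), can be sketched as follows. This is a generic illustration, not code from the work itself:

```python
import numpy as np

def pca(X, k):
    """Fit a k-component linear model: center the data, then take the
    top-k eigenvectors of the sample covariance matrix; these are the
    principal axes of the fitted multivariate Gaussian."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]     # indices of the top-k, descending
    components = vecs[:, order]            # (d, k) principal axes
    scores = Xc @ components               # (n, k) projected data
    return components, scores, vals[order]

# noisy data concentrated along one direction: PCA should recover it
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1)) @ np.array([[2.0, 1.0, 0.5]]) \
    + 0.05 * rng.normal(size=(500, 3))
comps, scores, var = pca(X, 1)
```

Keeping only the top-k scores is the data-reduction (noise-elimination) use; inspecting the axes and variances is the data-analysis use.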
Critical boundary sine-Gordon revisited
International Nuclear Information System (INIS)
Hasselfield, M.; Lee, Taejin; Semenoff, G.W.; Stamp, P.C.E.
2006-01-01
We revisit the exact solution of the two space-time dimensional quantum field theory of a free massless boson with a periodic boundary interaction and self-dual period. We analyze the model by using a mapping to free fermions with a boundary mass term originally suggested in Ref. [J. Polchinski, L. Thorlacius, Phys. Rev. D 50 (1994) 622]. We find that the entire SL(2,C) family of boundary states of a single boson are boundary sine-Gordon states, and we derive a simple explicit expression for the boundary state in fermion variables and as a function of the sine-Gordon coupling constants. We use this expression to compute the partition function. We observe that the solution of the model has a strong-weak coupling generalization of T-duality. We then examine a class of recently discovered conformal boundary states for compact bosons with radii which are rational numbers times the self-dual radius. These have a simple expression in fermion variables. We postulate sine-Gordon-like field theories with discrete gauge symmetries for which they are the appropriate boundary states.
The drive revisited: Mastery and satisfaction.
Denis, Paul
2016-06-01
Starting from the theory of the libido and the notions of the experience of satisfaction and the drive for mastery introduced by Freud, the author revisits the notion of the drive by proposing the following model: the drive takes shape in the combination of two currents of libidinal cathexis, one which takes the paths of the 'apparatus for obtaining mastery' (the sense-organs, motricity, etc.) and strives to appropriate the object, and the other which cathects the erotogenic zones and the experience of satisfaction that is experienced through stimulation in contact with the object. The result of this combination of cathexes constitutes a 'representation', the subsequent evocation of which makes it possible to tolerate for a certain period of time the absence of a satisfying object. On the basis of this conception, the author distinguishes the representations proper, vehicles of satisfaction, from imagos and traumatic images which give rise to excitation that does not link up with the paths taken by the drives. This model makes it possible to conciliate the points of view of the advocates of 'object-seeking' and of those who give precedence to the search for pleasure, and, further, to renew our understanding of object-relations, which can then be approached from the angle of their relations to infantile sexuality. Destructiveness is considered in terms of "mastery madness" and not in terms of the late Freudian hypothesis of the death drive. Copyright © 2015 Institute of Psychoanalysis.
Ellis, John
2016-01-01
We revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, $M_{in}$, above the supersymmetric gauge coupling unification scale, $M_{GUT}$. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, $m_0$ and $m_{1/2}$ respectively, at $M_{in}$, as do the trilinear soft supersymmetry-breaking parameters $A_0$. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and the LHC measurement of the Higgs mass, $m_h$. We find regions of $m_0$, $m_{1/2}$, $A_0$ and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for $m_0$ and $m_{1/...
Post-inflationary gravitino production revisited
Energy Technology Data Exchange (ETDEWEB)
Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King' s College London, London WC2R 2LS (United Kingdom); Garcia, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Peloso, Marco, E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu, E-mail: peloso@physics.umn.edu [School of Physics and Astronomy and Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)
2016-03-01
We revisit gravitino production following inflation. As a first step, we review the standard calculation of gravitino production in the thermal plasma formed at the end of post-inflationary reheating when the inflaton has completely decayed. Next we consider gravitino production prior to the completion of reheating, assuming that the inflaton decay products thermalize instantaneously while they are still dilute. We then argue that instantaneous thermalization is in general a good approximation, and also show that the contribution of non-thermal gravitino production via the collisions of inflaton decay products prior to thermalization is relatively small. Our final estimate of the gravitino-to-entropy ratio is approximated well by a standard calculation of gravitino production in the post-inflationary thermal plasma assuming total instantaneous decay and thermalization at a time t ≅ 1.2/Γ{sub φ}. Finally, in light of our calculations, we consider potential implications of upper limits on the gravitino abundance for models of inflation, with particular attention to scenarios for inflaton decays in supersymmetric Starobinsky-like models.
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2014-01-01
A considerable part of today's sociological research on recreational drug use is (explicitly or implicitly) inspired by Howard Becker's classical model of deviant careers. The aim of the present paper is to directly apply Becker's theory to empirical data on present-day cannabis use and to suggest...... in treatment for cannabis problems in Copenhagen, Denmark. We suggest a revision of Becker's career model in relation to four aspects: initiation of cannabis use, differentiation between socially integrated and individualised, disintegrated use, social control from non-users, and the users' moral stance...... on cannabis. A central point of the paper is that social interaction may both motivate cannabis use, as Becker proposed, and serve as a protective factor against extensive, problematic use....
International Nuclear Information System (INIS)
Chialva, Diego; Danielsson, Ulf H
2008-01-01
This paper represents an in-depth treatment of the chain inflation scenario. We fully determine the evolution of the universe in the model, the conditions necessary in order to have a successful inflationary period, and the matching with the observational results regarding the cosmological perturbations. We study in great detail, and in general, the dynamics of the background, as well as the mechanism of generation of the perturbations. We also find an explicit formula for the spectrum of adiabatic perturbations. Our results prove that chain inflation is a viable model for solving the horizon, entropy and flatness problems of standard cosmology and for generating the right amount of adiabatic cosmological perturbations. The results are radically different from those found in previous works on the subject. Finally, we argue that there is a natural way to embed chain inflation into flux compactified string theory. We discuss the details of the implementation and how to fit observations
Revisiting Absorptive Capacity
DEFF Research Database (Denmark)
de Araújo, Ana Luiza Lara; Ulhøi, John Parm; Lettl, Christopher
Absorptive capacity has mostly been perceived as a 'passive' outcome of R&D investments. Recently, however, a growing interest into its 'proactive' potentials has emerged. This paper taps into this development and proposes a dynamic model for conceptualizing the determinants of the complementary...... learning processes of absorptive capacity, which comprise combinative and adaptive capabilities. Drawing on survey data (n=169), the study concludes that combinative capabilities primarily enhance transformative and exploratory learning processes, while adaptive capabilities strengthen all three learning...
Parental overprotection revisited.
Thomasgard, M; Metz, W P
1993-01-01
Dimensions of parental overprotection are clarified in a critical review of the research and clinical literature. An indulgent style of parenting is distinguished from an overprotective parent-child relationship. Differential antecedents and outcomes are proposed for each of these forms of parent-child interaction. Measures of protection are reviewed. A new conceptual model of parental overprotection is presented which takes into account child, parent, family, socio-cultural, environmental and resiliency factors. Directions for future research are suggested.
Schumpeter's core works revisited
DEFF Research Database (Denmark)
Andersen, Esben Sloth
2012-01-01
This paper organises Schumpeter's core books in three groups: the programmatic duology, the evolutionary-economic duology, and the socioeconomic synthesis. By analysing these groups and their interconnections from the viewpoint of modern evolutionary economics, the paper summarises resolved problems a...... and points at remaining challenges. Its analyses are based on distinctions between microevolution and macroevolution, between economic evolution and socioeconomic coevolution, and between Schumpeter's three major evolutionary models (called Mark I, Mark II and Mark III)....
Wilson, Gloria Lodato
2016-01-01
Most co-teachers agree that there just isn't enough time for co-teachers to appropriately and effectively preplan every aspect of every activity in every lesson. This lack of time leads co-teachers to turn to models that fail to maximize the benefits of a two-teacher classroom. Wilson suggests that if co-teachers use their limited planning time to…
Charmed, beauty hadrons revisited
International Nuclear Information System (INIS)
Chabab, M.
1998-01-01
Applying two different versions of QCD sum rules, we rigorously reanalyze the rich spectroscopy of mesons and baryons built from charm and beauty quarks. An improved determination of the masses and the leptonic decay constants of B_c (bc-bar), B_c* (bc-bar), and Λ(bcu) is presented. Our optimal results, constrained by stability criteria, are consistent in both versions and support the general pattern common to potential-model predictions
Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth
2015-01-01
Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063
International Nuclear Information System (INIS)
Linnell, A.P.; Kallrath, J.
1986-08-01
New analysis tools and additional unanalyzed observations justify a reanalysis of MR Cygni. The reanalysis applied successively more restrictive physical models, each with an optimization program. The final model assigned separate first- and second-order limb-darkening coefficients, from model atmospheres, to individual grid points. Proper operation of the optimization procedure was tested on simulated observational data, produced by light synthesis with assigned system parameters and modulated by simulated observational error. The iterative solution converged to a weakly determined mass ratio of 0.75. Assuming the B3 primary component is on the main sequence, the HR-diagram location of the secondary was calculated from the light ratio (ordinate) and the adjusted T_eff (abscissa). The derived mass ratio, together with a main-sequence mass for the B3 component, implies a main-sequence secondary spectral type of B4. The photometrically determined secondary radii agree with this spectral type, in marginal disagreement with the B7 type from the HR-diagram analysis. The individual masses, derived from the radial-velocity curve of the primary component, the photometrically determined inclination i, and alternative values of the derived mass ratio, are seriously discrepant with main-sequence objects. The imputed physical status of the system is in disagreement with representations that have appeared in the literature
Revisiting Antarctic Ozone Depletion
Grooß, Jens-Uwe; Tritscher, Ines; Müller, Rolf
2015-04-01
Antarctic ozone depletion has been known for almost three decades, and it is well established that it is caused by chlorine-catalysed ozone depletion inside the polar vortex. However, some details still need to be clarified. In particular, there is a current debate on the relative importance of liquid aerosol and of crystalline NAT and ice particles for chlorine activation. Particles have a threefold impact on polar chlorine chemistry: temporary removal of HNO3 from the gas phase (uptake), permanent removal of HNO3 from the atmosphere (denitrification), and chlorine activation through heterogeneous reactions. We have performed simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS), employing a recently developed algorithm for saturation-dependent NAT nucleation, for the Antarctic winters 2011 and 2012. The simulation results are compared with different satellite observations. With the help of these simulations, we investigate the role of the different processes responsible for chlorine activation and ozone depletion. In particular, the sensitivity with respect to particle type has been investigated. If temperatures are artificially forced to allow only cold binary liquid aerosol, the simulation still shows significant chlorine activation and ozone depletion. The results of the 3-D Chemical Transport Model CLaMS simulations differ from purely Lagrangian long-time trajectory box-model simulations, which indicates the importance of mixing processes.
The role of brand destination experience in determining revisit intention
DEFF Research Database (Denmark)
Mattsson, Jan; Barnes, Stuart; Sørensen, Flemming
Destination branding has developed considerably as a topic area in the last decade with numerous conceptualizations focusing on different aspects of the brand. However, a unified view has not yet emerged. This paper examines destination branding via a new conceptualization, brand destination...... experience, which provides a more holistic and unified view of the brand destination. The research uses a logistic regression model to determine the role of satisfaction and brand experience in determining revisit intentions. The study also examines differences among subgroups and four brand experience sub...
Industrialization and inequality revisited
DEFF Research Database (Denmark)
Molitoris, Joseph; Dribe, Martin
2016-01-01
This work combines economic and demographic data to examine inequality of living standards in Stockholm at the turn of the twentieth century. Using a longitudinal population register with occupational information, we utilize event-history models to show that despite absolute decreases in mortality......, relative differences between socioeconomic groups remained virtually constant. The results also show that child mortality continued to be sensitive to short-term fluctuations in wages and that there were no socioeconomic differences in this response. We argue that the persistent inequality in living...
International Nuclear Information System (INIS)
Brandenberger, R.H.
1986-01-01
Cosmological phase transitions are examined using a new approach based on the dynamical analysis of the equations of motion of quantum fields, rather than on static effective-potential considerations. In many models the universe enters a period of exponential expansion, as required for an inflationary cosmology. Analytical methods show that this will be the case if the interaction rate due to quantum-field nonlinearities is small compared to the expansion rate of the universe. A heuristic criterion is derived for the maximal value of the coupling constant for which inflation is expected. The prediction is in good agreement with numerical results
DEFF Research Database (Denmark)
Thiele, Jan; Kollmann, Johannes Christian; Markussen, Bo
2010-01-01
; and (4) the total invaded range is an inappropriate measure for quantifying regional impact because the habitat area available for invasion can vary markedly among invasive species. Mathematical models and empirical data using an invasive alien plant species (Heracleum mantegazzianum) indicate......The theoretical underpinnings of the assessment of invasive alien species impacts need to be improved. At present most approaches are unreliable to quantify impact at regional scales and do not allow for comparison of different invasive species. There are four basic problems that need...... and we discuss the quantification of the invaded range. These improvements are crucial for impact assessment with the overall aim of prioritizing management of invasive species....
Van Hove singularities revisited
International Nuclear Information System (INIS)
Dzyaloshinskii, I.
1987-07-01
Beginning with the work of Hirsch and Scalapino, the importance of the ln²-Van Hove singularity for T_c enhancement in La₂CuO₄-based compounds was realized, as nicely reviewed by Rice. However, the theoretical treatment carried out before is incomplete. Two things were apparently not paid due attention: the interplay of particle-particle and particle-hole channels, and Umklapp processes. In what follows, a two-dimensional weak-coupling model of La₂CuO₄ is solved exactly in the ln²-approximation. The result in the Hubbard limit (one bare charge) is that the system is unstable at any sign of interaction. Symmetry breaking, moreover, is rather peculiar. Of course, there are separate singlet superconducting (SS) pairings in the pp-channel (attraction) and SDW (repulsion) and CDW (attraction) in the ph-channel. It is natural that Umklapps produce an SDW + CDW mixture at either sign of the interaction. What is unusual is that both the pp-ph interplay and the Umklapps give rise to a monster-coherent SS + SDW + CDW mixture, again at either sign of the bare charge. In the general model, where all 4 charges involved are substantially different, the system might remain metallic. A more realistic approach, which takes into account doping in La-M-Cu-O and interlayer interaction, provides at least a qualitative understanding of the experimental picture. 10 refs, 5 figs
The power reinforcement framework revisited
DEFF Research Database (Denmark)
Nielsen, Jeppe; Andersen, Kim Normann; Danziger, James N.
2016-01-01
Whereas digital technologies are often depicted as being capable of disrupting long-standing power structures and facilitating new governance mechanisms, the power reinforcement framework suggests that information and communications technologies tend to strengthen existing power arrangements within...... public organizations. This article revisits the 30-yearold power reinforcement framework by means of an empirical analysis on the use of mobile technology in a large-scale programme in Danish public sector home care. It explores whether and to what extent administrative management has controlled decision......-making and gained most benefits from mobile technology use, relative to the effects of the technology on the street-level workers who deliver services. Current mobile technology-in-use might be less likely to be power reinforcing because it is far more decentralized and individualized than the mainly expert...
The climate continuum revisited
Emile-Geay, J.; Wang, J.; Partin, J. W.
2015-12-01
A grand challenge of climate science is to quantify the extent of natural variability on adaptation-relevant timescales (10-100y). Since the instrumental record is too short to adequately estimate the spectra of climate measures, this information must be derived from paleoclimate proxies, which may harbor a many-to-one, non-linear (e.g. thresholded) and non-stationary relationship to climate. In this talk, I will touch upon the estimation of climate scaling behavior from climate proxies. Two case studies will be presented: (1) an investigation of scaling behavior in a reconstruction of global surface temperature using state-of-the-art data [PAGES2K Consortium, in prep] and methods [Guillot et al., 2015]. Estimating the scaling exponent β in spectra derived from this reconstruction, we find β > 0 (long-term memory). Overall, the reconstruction-based spectra are steeper than the ones based on an instrumental dataset [HadCRUT4.2, Morice et al., 2012], and those estimated from PMIP3/CMIP5 models, suggesting the climate system is more energetic at multidecadal to centennial timescales than can be inferred from the short instrumental record or from the models developed to reproduce it [Laepple and Huybers, 2014]. (2) an investigation of scaling behavior in speleothem records of tropical hydroclimate. We will make use of recent advances in proxy system modeling [Dee et al., 2015] and investigate how various aspects of the speleothem system (karst dynamics, age uncertainties) may conspire to bias the estimate of scaling behavior from speleothem time series. The results suggest that ignoring such complications leads to erroneous inferences about hydroclimate scaling. References: Dee, S. G., J. Emile-Geay, M. N. Evans, A. Allam, D. M. Thompson, and E. J. Steig (2015), J. Adv. Mod. Earth Sys., 07, doi:10.1002/2015MS000447. Guillot, D., B. Rajaratnam, and J. Emile-Geay (2015), Ann. Applied Statist., pp. 324-352, doi:10.1214/14-AOAS794. Laepple, T., and P. Huybers (2014), PNAS, doi
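The scaling-exponent estimation described in this abstract reduces, in its simplest form, to a straight-line fit in log-log space. The sketch below is a generic illustration of that step (the function name and the synthetic power-law spectrum with an assumed β = 0.8 are mine, not the Guillot et al. estimator used in the talk):

```python
import numpy as np

def scaling_exponent(freqs, power):
    """Estimate beta in S(f) ~ f**(-beta) by least squares in log-log space."""
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

# Synthetic check on an exact power-law spectrum with beta = 0.8
f = np.linspace(1e-3, 0.5, 500)
s = f ** -0.8
beta = scaling_exponent(f, s)
print(round(beta, 3))
```

Real proxy spectra are noisy and the fit is usually restricted to a chosen frequency band, but the slope-to-β relation is the same.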
King, Scott D.; Adam, Claudia
2014-10-01
The first attempts to quantify the width and height of hotspot swells were made more than 30 years ago. Since that time, topography, ocean-floor age, and sediment thickness datasets have improved considerably. Swell heights and widths have been used to estimate the heat flow from the core-mantle boundary, constrain numerical models of plumes, and as an indicator of the origin of hotspots. In this paper, we repeat the analysis of swell geometry and buoyancy flux for 54 hotspots, including the 37 considered by Sleep (1990) and the 49 considered by Courtillot et al. (2003), using the latest and most accurate data. We are able to calculate swell geometry for a number of hotspots that Sleep was only able to estimate by comparison with other swells. We find that in spite of the increased resolution in global bathymetry models there is significant uncertainty in our calculation of buoyancy fluxes due to differences in our measurement of the swells’ width and height, the integration method (volume integration or cross-sectional area), and the variations of the plate velocities between HS2-Nuvel1a (Gripp and Gordon, 1990) and HS3-Nuvel1a (Gripp and Gordon, 2002). We also note that the buoyancy flux for Pacific hotspots is in general larger than for Eurasian, North American, African and Antarctic hotspots. Considering that buoyancy flux is linearly related to plate velocity, we speculate that either the calculation of buoyancy flux using plate velocity over-estimates the actual vertical flow of material from the deep mantle or that convection in the Pacific hemisphere is more vigorous than the Atlantic hemisphere.
Meta-analysis in clinical trials revisited.
DerSimonian, Rebecca; Laird, Nan
2015-11-01
In this paper, we revisit a 1986 article we published in this Journal, Meta-Analysis in Clinical Trials, where we introduced a random-effects model to summarize the evidence about treatment efficacy from a number of related clinical trials. Because of its simplicity and ease of implementation, our approach has been widely used (with more than 12,000 citations to date) and the "DerSimonian and Laird method" is now often referred to as the 'standard approach' or a 'popular' method for meta-analysis in medical and clinical research. The method is especially useful for providing an overall effect estimate and for characterizing the heterogeneity of effects across a series of studies. Here, we review the background that led to the original 1986 article, briefly describe the random-effects approach for meta-analysis, explore its use in various settings and trends over time and recommend a refinement to the method using a robust variance estimator for testing overall effect. We conclude with a discussion of repurposing the method for Big Data meta-analysis and Genome Wide Association Studies for studying the importance of genetic variants in complex diseases. Published by Elsevier Inc.
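The random-effects calculation the abstract refers to is compact enough to sketch directly. Below is a minimal implementation of the DerSimonian-Laird moment estimator (the function name and toy inputs are illustrative); it returns the pooled effect, its standard error, and the between-study variance τ²:

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects meta-analysis.

    y : per-study effect estimates
    v : per-study within-study variances
    Returns (pooled_effect, pooled_se, tau2).
    """
    y = np.asarray(y, dtype=float)
    v = np.asarray(v, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q heterogeneity statistic
    k = len(y)
    # Method-of-moments estimate of between-study variance, truncated at 0
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2
```

When the study effects are homogeneous, Q ≤ k − 1, τ² truncates to zero and the method collapses to the fixed-effect estimate; the robust-variance refinement recommended in the paper replaces only the standard-error step.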
Energy Technology Data Exchange (ETDEWEB)
P., Henry
2008-11-20
A recent article in which John Searle claims to refute dualism is examined from a scientific perspective. John Searle begins his recent article 'Dualism Revisited' by stating his belief that the philosophical problem of consciousness has a scientific solution. He then claims to refute dualism. It is therefore appropriate to examine his arguments against dualism from a scientific perspective. Scientific physical theories contain two kinds of descriptions: (1) Descriptions of our empirical findings, expressed in an every-day language that allows us communicate to each other our sensory experiences pertaining to what we have done and what we have learned; and (2) Descriptions of a theoretical model, expressed in a mathematical language that allows us to communicate to each other certain ideas that exist in our mathematical imaginations, and that are believed to represent, within our streams of consciousness, certain aspects of reality that we deem to exist independently of their being perceived by any human observer. These two parts of our scientific description correspond to the two aspects of our general contemporary dualistic understanding of the total reality in which we are imbedded, namely the empirical-mental aspect and the theoretical-physical aspect. The duality question is whether this general dualistic understanding of ourselves should be regarded as false in some important philosophical or scientific sense.
Revisit to diffraction anomalous fine structure
International Nuclear Information System (INIS)
Kawaguchi, T.; Fukuda, K.; Tokuda, K.; Shimada, K.; Ichitsubo, T.; Oishi, M.; Mizuki, J.; Matsubara, E.
2014-01-01
The diffraction anomalous fine structure (DAFS) method has been revisited by applying this measurement technique to polycrystalline samples and using an analytical method with the logarithmic dispersion relation. The DAFS method, a spectroscopic analysis combined with resonant X-ray diffraction, enables the determination of the valence state and local structure of a selected element at a specific crystalline site and/or phase. The method has been improved by using a polycrystalline sample, channel-cut monochromator optics with an undulator synchrotron-radiation source, an area detector, and direct determination of the resonant terms with a logarithmic dispersion relation. This makes the DAFS method more convenient and saves a large amount of measurement time in comparison with the conventional DAFS method with a single crystal. The improved DAFS method has been applied to model samples, Ni foil and Fe3O4 powder, to demonstrate the validity of the measurement and the analysis of the present DAFS method
Economics of vaccines revisited.
Postma, Maarten J; Standaert, Baudouin A
2013-05-01
Performing a total health economic analysis of a vaccine newly introduced into the market today is a challenge when using the conventional cost-effectiveness analysis we normally apply to pharmaceutical products. There are many reasons for that, such as: (1) the uncertainty in the total benefit (direct and indirect) to be measured in a population when using a cohort model; (2) the absence of appropriate rules about discounting the long-term impact of vaccines, which jeopardizes their value at the initial investment; (3) the opposite contexts when introducing the vaccine in the developed vs. the developing world, with high benefits and low initial health-care investment for the latter vs. marginal benefit and high cost for the former, and the corresponding paradox that the vaccine becomes very cost-effective in low-income countries but only medium in lower-middle- to upper-middle-income countries; and (4) the trial assessment for the newer vaccines, which is now often performed with immunogenicity reactions instead of clinical endpoints, still leaving questions about their real impact and their head-to-head comparison.
International Nuclear Information System (INIS)
Lomb, Nick
2013-01-01
The set of sunspot numbers observed since the invention of the telescope is one of the most studied time series in astronomy, and yet it is also one of the most complex. Fourteen frequencies are found in the yearly mean sunspot numbers from 1700 to 2011 using the Lomb-Scargle periodogram and prewhitening. All of the frequencies corresponding to shorter-term periods can be matched with simple algebraic combinations of the frequency of the main 11-year period and the frequencies of the longer-term periods in the periodogram. This is exactly what can be expected from amplitude and phase modulation of an 11.12-year periodicity by longer-term variations. Similar, though not identical, results are obtained after correcting the sunspot number series as proposed by Svalgaard. On looking separately at the amplitude and phase modulation, a clear relationship is found between the two modulations, although this relationship has broken down for the last four solar cycles. The phase modulation implies that there is a definite underlying period for the solar cycle. Such a clock mechanism does seem to be a possibility in models of the solar dynamo incorporating a conveyor-belt-like meridional circulation between high polar latitudes and the equator.
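A Lomb-Scargle analysis of the kind described above is suited to unevenly sampled series and can be sketched with SciPy; the series here is a synthetic 11-year sine with noise, not actual sunspot numbers:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# Synthetic, unevenly sampled series with an 11-year cycle plus noise
t = np.sort(rng.uniform(0.0, 300.0, 250))
y = np.sin(2 * np.pi * t / 11.0) + 0.2 * rng.standard_normal(t.size)

periods = np.linspace(5.0, 40.0, 2000)
omega = 2 * np.pi / periods            # lombscargle expects angular frequencies
power = lombscargle(t, y - y.mean(), omega)

best_period = periods[np.argmax(power)]
print(f"strongest period ~ {best_period:.2f}")
```

Prewhitening, as used in the paper, would then subtract the fitted sinusoid at this period and repeat the periodogram on the residuals to pull out weaker frequencies.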
Coded aperture tomography revisited
International Nuclear Information System (INIS)
Bizais, Y.; Rowe, R.W.; Zubal, I.G.; Bennett, G.W.; Brill, A.B.
1983-01-01
Coded aperture (CA) tomography never achieved widespread use in nuclear medicine, except for the degenerate case of seven-pinhole tomography (7PHT). However, it enjoys several attractive features (high sensitivity and tomographic ability with a static detector). On the other hand, resolution is usually poor, especially along the depth axis, and the reconstructed volume is rather limited. Arguments are presented justifying the position that CA tomography can be useful for imaging time-varying 3D structures if its major drawbacks (poor longitudinal resolution and difficulty in quantification) are overcome. The poor results obtained with 7PHT can be explained by both the very limited angular range sampled and a crude modelling of the image-formation process. Therefore improvements can be expected from the use of a dual-detector system, along with a better understanding of its sampling properties and the use of more powerful reconstruction algorithms. Non-overlapping multipinhole plates, because they do not involve a decoding procedure, should be considered first for practical applications. Use of real CAs should be considered for cases in which non-overlapping multipinhole plates do not lead to satisfactory solutions. We have been, and currently are, carrying out theoretical and experimental work in order to define the factors which limit CA imaging and to propose satisfactory solutions for dynamic emission tomography
Douglas, A.
2007-01-01
contrast simple, comprising one or two cycles of large amplitude followed by a low-amplitude coda. Earthquake signals, on the other hand, were often complex, with numerous arrivals of similar amplitude spread over 35 s or more. It therefore appeared that earthquakes could be recognised on complexity. Later, however, complex explosion signals were observed, which reduced the apparent effectiveness of complexity as a criterion for identifying earthquakes. Nevertheless, the AWE Group concluded that for many paths to teleseismic distances, Earth is transparent for P signals, and this provides a window through which source differences will be most clearly seen. Much of the research by the Group has focused on understanding the influence of source type on P seismograms recorded at teleseismic distances. Consequently the paper concentrates on teleseismic methods of distinguishing between explosions and earthquakes. One of the most robust criteria for discriminating between earthquakes and explosions is the m_b : M_s criterion, which compares the amplitudes of the SP P waves, as measured by the body-wave magnitude m_b, with the long-period (LP: ~0.05 Hz) Rayleigh-wave amplitude, as measured by the surface-wave magnitude M_s; the P and Rayleigh waves are the main wave types used in forensic seismology. For a given M_s, the m_b for explosions is larger than for most earthquakes. The criterion is difficult to apply, however, at low magnitude (say m_b fail. Consequently the AWE Group, in cooperation with the University of Cambridge, used seismogram modelling to try to understand what controls the complexity of SP P seismograms, and to put the m_b : M_s criterion on a theoretical basis. The results of this work show that the m_b : M_s criterion is robust because several factors contribute to the separation of earthquakes and explosions. The principal reason for the separation, however, is that for many orientations of the earthquake source there is at least one P nodal plane in the teleseismic
The Lanthanide Contraction Revisited
Energy Technology Data Exchange (ETDEWEB)
Seitz, Michael; Oliver, Allen G.; Raymond, Kenneth N.
2007-04-19
A complete, isostructural series of lanthanide complexes (except Pm) with the ligand TREN-1,2-HOIQO has been synthesized and structurally characterized by means of single-crystal X-ray analysis. All complexes are 1D-polymeric species in the solid state, with the lanthanide being in an eight-coordinate, distorted trigonal-dodecahedral environment with a donor set of eight unique oxygen atoms. This series constitutes the first complete set of isostructural lanthanide complexes with a ligand of denticity greater than two. The geometric arrangement of the chelating moieties slightly deviates across the lanthanide series, as analyzed by a shape parameter metric based on the comparison of the dihedral angles along all edges of the coordination polyhedron. The apparent lanthanide contraction in the individual Ln-O bond lengths deviates considerably from the expected quadratic decrease that was found previously in a number of complexes with ligands of low denticity. The sum of all bond lengths around the trivalent metal cation, however, is more regular, showing an almost ideal quadratic behavior across the entire series. The quadratic nature of the lanthanide contraction is derived theoretically from Slater's model for the calculation of ionic radii. In addition, the sum of all distances along the edges of the coordination polyhedron show exactly the same quadratic dependency as the Ln-X bond lengths. The universal validity of this coordination sphere contraction, concomitant with the quadratic decrease in Ln-X bond lengths, was confirmed by reexamination of four other, previously published, almost complete series of lanthanide complexes. Due to the importance of multidentate ligands for the chelation of rare-earth metals, this result provides a significant advance for the prediction and rationalization of the geometric features of the corresponding lanthanide complexes, with great potential impact for all aspects of lanthanide coordination.
Training programming: revisiting terminology
Directory of Open Access Journals (Sweden)
Mário C. Marques
2017-11-01
Does the way the literature presents classic periodization or programming make sense? In our opinion, the answer is clearly no. To begin with, periodization and programming are used interchangeably (as synonyms) in the scientific literature when they actually have different meanings. To periodize is to set periods for a process (e.g., a season or a sporting career), whereas to program is to devise and order the actions necessary to carry out a project. Accordingly, coaches and physical conditioning professionals should divide, or periodize, the season into different cycles and then, within each cycle, program the training sessions. Periodization should not only help to structure the training process, but also express the goals to be achieved, allow control of the training process as it evolves, and support sound execution of the action plan. When designing a plan, we simply organize all the “ingredients” that should be part of the work/training design in a concrete and detailed way. From a scientific point of view, programming is nothing more than an adequate interpretation of the biological laws of training (Tschiene, 1992; Platonov, 1997; Issurin, 2008) and must have performance improvement as its major reference criterion (Issurin, 2010). In practice, during recent decades we have followed a set of instructions based mainly on experienced coaches (Matveyev, 1981; Bompa, 1994; Zatsiorsky, 1995) who obtained relevant results. As a consequence, it is very difficult to accept another solid, scientifically based vision or proposal, since the accumulation of systematic experience has led to the construction of a theoretical model even though there is no scientific evidence for it. The multiplication and implementation of the traditional programming models (Matveyev, 1981; Bompa, 1994) have led us to a set of erroneous terms, among which we highlight the “micro”, “meso” and “macro” cycles, that were never…
Directory of Open Access Journals (Sweden)
Abe D Hofman
We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task in the study of proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM), following from a rule-based perspective on proportional reasoning, and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both the rule-based and the information-integration perspective. These models are applied to two different datasets: a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online dataset, in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion of sequential rule-based and information-integration perspectives on cognitive development.
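The information-integration (Weighted Sum Model) perspective described above can be illustrated with a minimal sketch: the two sides of the balance scale are compared via a weighted sum of the weight and distance cue differences, passed through a logistic link. The cue weights `beta_w` and `beta_d` below are hypothetical illustration values, not estimates from either dataset.

```python
import math

def weighted_sum_choice(w_left, d_left, w_right, d_right,
                        beta_w=1.0, beta_d=0.5):
    """Probability that the LEFT side of the balance scale is judged to
    go down under a weighted-sum (information-integration) rule: the
    weight and distance cue differences are combined linearly and passed
    through a logistic link. beta_w and beta_d are hypothetical cue
    weights, not values estimated from the study's data."""
    evidence = beta_w * (w_left - w_right) + beta_d * (d_left - d_right)
    return 1.0 / (1.0 + math.exp(-evidence))

# One extra weight on the left, two extra distance units on the right:
# with these particular cue weights the evidence cancels, so p = 0.5.
p = weighted_sum_choice(w_left=3, d_left=2, w_right=2, d_right=4)
```

In the actual statistical model the cue weights are estimated from children's response data; they are fixed here only to show the shape of the response rule.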
Double neutron stars: merger rates revisited
Chruslinska, Martyna; Belczynski, Krzysztof; Klencki, Jakub; Benacquista, Matthew
2018-03-01
We revisit double neutron star (DNS) formation in the classical binary evolution scenario in light of the recent Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo DNS detection (GW170817). The observationally estimated Galactic DNS merger rate of R_MW = 21^{+28}_{-14} Myr^{-1}, based on three Galactic DNS systems, fully supports our standard input physics model with R_MW = 24 Myr^{-1}. This estimate for the Galaxy translates in a non-trivial way (due to the cosmological evolution of progenitor stars in a chemically evolving Universe) into a local (z ≈ 0) DNS merger rate density of R_local = 48 Gpc^{-3} yr^{-1}, which is not consistent with the current LIGO/Virgo DNS merger rate estimate (1540^{+3200}_{-1220} Gpc^{-3} yr^{-1}). Within our study of the parameter space, we find solutions that allow for DNS merger rates as high as R_local ≈ 600^{+600}_{-300} Gpc^{-3} yr^{-1}, which are thus consistent with the LIGO/Virgo estimate. However, our corresponding BH-BH merger rates for the models with high DNS merger rates exceed the current LIGO/Virgo estimate of the local BH-BH merger rate (12-213 Gpc^{-3} yr^{-1}). Apart from being particularly sensitive to the common envelope treatment, DNS merger rates are rather robust against variations of several of the key factors probed in our study (e.g. mass transfer, angular momentum loss, and natal kicks). This might suggest that either common envelope development/survival works differently for DNS (~10-20 M⊙ stars) than for BH-BH (~40-100 M⊙ stars) progenitors, or high black hole (BH) natal kicks are needed to meet observational constraints for both types of binaries. Our conclusion is based on a limited number (21) of evolutionary models and is valid within this particular DNS and BH-BH isolated binary formation scenario.
Directory of Open Access Journals (Sweden)
Ly Thi Minh Pham
2016-10-01
The purpose of this study is to examine how brand equity, from a customer's point of view, influences quick-service restaurant revisit intention. The authors propose a conceptual framework in which three dimensions of brand equity (brand associations combined with brand awareness, perceived quality, and brand loyalty) and perceived value are related to revisit intention. Data from 570 customers who had visited four quick-service restaurants in Ho Chi Minh City were used for the structural equation modelling (SEM) analysis. The results show that strong brand equity is significantly correlated with revisit intention. Additionally, the effect of brand equity on revisit intention was mediated by perceived value, among others. Overall, this study emphasizes the importance of lodging perceived value in the customer's mind. Finally, managerial implications are presented based on the study results.
Réal, Florent; Vallet, Valérie; Flament, Jean-Pierre; Masella, Michel
2013-09-21
We present a revised version of the water many-body model TCPE [M. Masella and J.-P. Flament, J. Chem. Phys. 107, 9105 (1997)], which is based on three static charge sites and a single polarizable site to model the molecular electrostatic properties of water, and on an anisotropic short-range many-body energy term specially designed to model hydrogen bonding in water accurately. The parameters of the revised model, denoted TCPE/2013, are developed here to reproduce the ab initio energetic and geometrical properties of small water clusters (up to hexamers) and the repulsive water interactions occurring in cation first hydration shells. The model parameters have also been refined to reproduce two liquid water properties at ambient conditions, the density and the vaporization enthalpy. Thanks to its computational efficiency, the range of applicability of the new model was validated by performing simulations of liquid water over a wide range of temperatures and pressures, as well as by investigating water liquid/vapor interfaces over a large range of temperatures. It is shown to reproduce several important water properties at a sufficiently accurate level of precision, such as the existence of liquid water density maxima up to a pressure of 1000 atm, the water boiling temperature, the properties of the water critical point (temperature, pressure, and density), and the existence of a "singularity" temperature at about 225 K in the supercooled regime. This model thus appears to be particularly well-suited for characterizing ion hydration properties under different temperature and pressure conditions, as well as in different phases and interfaces.
Wang, Wei-Wei; Dang, Jing-Shuang; Zhao, Xiang; Nagase, Shigeru
2017-11-09
We introduce a mechanistic study based on a controversial fullerene bottom-up growth model proposed by R. Saito, G. Dresselhaus, and M. S. Dresselhaus. The so-called SDD C2 addition model has been dismissed as chemically inadmissible, but here we prove that it is feasible via successive atomic-carbon-participated addition and migration reactions. Kinetic calculations on the formation of the isolated-pentagon-rule (IPR)-obeying C70 and Y3N@C80 are carried out by employing the SDD model for the first time. A stepwise mechanism is proposed with a considerably low barrier of ca. 2 eV, which is about 3 eV lower than a conventional isomerization-containing fullerene growth pathway.
Revisiting tourist behavior via destination brand worldness
Directory of Open Access Journals (Sweden)
Murat Kayak
2016-11-01
Taking tourists’ perspective rather than destination offerings as its core concept, this study introduces “perceived destination brand worldness” as a variable. Perceived destination brand worldness is defined as the positive perception that a tourist has of a country that is visited by tourists from all over the world. Then, the relationship between perceived destination brand worldness and intention to revisit is analyzed using partial least squares regression. This empirical study selects Taiwanese tourists as its sample, and the results show that perceived destination brand worldness is a direct predictor of intention to revisit. In light of these empirical findings and observations, practical and theoretical implications are discussed.
Energy Technology Data Exchange (ETDEWEB)
Hiatt, JR [Rhode Island Hospital, Providence, RI (United States); Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States)
2014-06-01
Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to position the anode more accurately. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to characterize the new source dosimetrically and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings was used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^{10} histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r = 1 cm and 0.08% at r = 10 cm. Results: The dose rate distribution in the transverse plane did not change beyond 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r > 1.5 cm. Differences exceeding a factor of two were observed near θ = 150° and in contact with the source, but diminished to within 20% at r = 10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm of the source, dose differences exceeding 2% were observed only for θ > 110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.
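For readers unfamiliar with the TG-43 formalism referenced above, a minimal 1D (point-source) sketch is shown below: dose rate is the product of air-kerma strength, the dose-rate constant, an inverse-square geometry factor, the radial dose function, and the anisotropy function. The numerical inputs are illustrative placeholders, not the model S700 dosimetry parameters reported in the study.

```python
def tg43_dose_rate_1d(sk, dose_rate_const, r, g_r, phi_an, r0=1.0):
    """Simplified TG-43 1D (point-source) dose rate in water, in cGy/h.

    sk             : air-kerma strength S_K (U)
    dose_rate_const: dose-rate constant Lambda (cGy h^-1 U^-1)
    r              : radial distance from the source (cm)
    g_r            : radial dose function g(r), dimensionless
    phi_an         : 1D anisotropy function phi_an(r), dimensionless
    r0             : reference distance, 1 cm by TG-43 convention
    """
    geometry = (r0 / r) ** 2  # point-source geometry-function ratio G(r)/G(r0)
    return sk * dose_rate_const * geometry * g_r * phi_an

# Illustrative placeholder inputs (NOT the model S700 parameters):
rate = tg43_dose_rate_1d(sk=1000.0, dose_rate_const=1.1, r=2.0,
                         g_r=0.85, phi_an=0.95)
```

The full 2D formalism replaces the inverse-square factor with a line-source geometry function G(r, θ) and the anisotropy function with F(r, θ); the study's dataset tabulates those quantities for the actual source.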
Clifford Algebra Implying Three Fermion Generations Revisited
International Nuclear Information System (INIS)
Krolikowski, W.
2002-01-01
The author's idea of algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ''centre-of-mass'' and (N−1) × four ''relative'' Dirac-type matrices. These define one ''centre-of-mass'' and N−1 ''relative'' Dirac bispinor indices. Secundo, the ''centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the N−1 ''relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to the ''relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results unavoidably implies the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3×3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is m_τ = 1776.80 MeV, when the experimental m_e and m_μ are used as input. (author)
Solar system anomalies: Revisiting Hubble's law
Plamondon, R.
2017-12-01
This paper investigates the impact of a new metric recently published [R. Plamondon and C. Ouellet-Plamondon, in On Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories, edited by K. Rosquist, R. T. Jantzen, and R. Ruffini (World Scientific, Singapore, 2015), p. 1301] for studying the space-time geometry of a static symmetric massive object. This metric depends on a complementary error function (erfc) potential that characterizes the emergent gravitation field predicted by the model. This results in two types of deviations as compared to computations made on the basis of a Newtonian potential: a constant and a radial outcome. One key feature of the metric is that it postulates the existence of an intrinsic physical constant σ, the massive object-specific proper length that scales measurements in its surroundings. Although σ must be evaluated experimentally, we use a heuristic to estimate its value and point out some latent relationships between the Hubble constant, the secular increase in the astronomical unit, and the Pioneers delay. Indeed, highlighting the systematic errors that emerge when the effect of σ is neglected, one can link the Hubble constant H_0 to σ_Sun and the secular increase V_AU to σ_Earth. The accuracy of the resulting numerical predictions, H_0 = 74.42(0.02) (km/s)/Mpc and V_AU ≅ 7.8 cm yr^{-1}, calls for more investigation of this new metric by specific experts. Moreover, we investigate the expected impacts of the new metric on the flyby anomalies, and we revisit the Pioneers delay. It is shown that both phenomena could be partly taken into account within the context of this unifying paradigm, with quite accurate numerical predictions. A correction for the osculating asymptotic velocity at the perigee of the order of 10 mm/s and an inward radial acceleration of 8.34 × 10^{-10} m/s² affecting the Pioneer spacecraft could be explained by this new model.
Clifford Algebra Implying Three Fermion Generations Revisited
Krolikowski, Wojciech
2002-09-01
The author's idea of algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ``centre-of-mass'' and (N−1) × four ``relative'' Dirac-type matrices. These define one ``centre-of-mass'' and (N−1) ``relative'' Dirac bispinor indices. Secundo, the ``centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the (N−1) ``relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to the ``relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results unavoidably implies the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3×3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is m_τ = 1776.80 MeV, when the experimental m_e and m_μ are used as input.
Stößel, Maria; Rehra, Lena; Haastert-Talini, Kirsten
2017-10-01
The rat median nerve injury and repair model is becoming increasingly important for research on novel bioartificial nerve grafts. It allows follow-up evaluation of the recovery of forepaw functional ability with several sensitive techniques. The reflex-based grasping test, the skilled forelimb reaching staircase test, and electrodiagnostic recordings have been described as useful in this context. Currently, however, no standard values exist for comparison or comprehensive correlation of results obtained with each of the three methods after nerve gap repair in adult rats. Here, we bilaterally reconstructed 7-mm median nerve gaps with autologous nerve grafts (ANG) or autologous muscle-in-vein grafts (MVG), respectively. During 8 and 12 weeks of observation, functional recovery of each paw was monitored separately using the grasping test (weekly), the staircase test, and noninvasive electrophysiological recordings from the thenar muscles (both every 4 weeks). Evaluation was completed by histomorphometrical analyses at 8 and 12 weeks postsurgery. This comprehensive evaluation detected a significant difference in the recovery of forepaw functional motor ability between the ANG and MVG groups. Correlating the different functional tests precisely displayed the recovery of distinct levels of forepaw functional ability over time. Thus, this multimodal evaluation model represents a valuable preclinical model for peripheral nerve reconstruction approaches.
Imbs, Diane-Charlotte; El Cheikh, Raouf; Boyer, Arnaud; Ciccolini, Joseph; Mascaux, Céline; Lacarelle, Bruno; Barlesi, Fabrice; Barbolosi, Dominique; Benzekry, Sébastien
2018-01-01
Concomitant administration of bevacizumab and pemetrexed-cisplatin is a common treatment for advanced nonsquamous non-small cell lung cancer (NSCLC). Vascular normalization following bevacizumab administration may transiently enhance drug delivery, suggesting improved efficacy with sequential administration. To investigate optimal scheduling, we conducted a study in NSCLC-bearing mice. First, experiments demonstrated improved efficacy with sequential versus concomitant scheduling of bevacizumab and chemotherapy. Combining these data with a mathematical model of tumor growth under therapy that accounts for the normalization effect, we predicted an optimal delay of 2.8 days between bevacizumab and chemotherapy. This prediction was confirmed experimentally, with tumor growth reduced by 38% compared to concomitant scheduling, and prolonged survival (74 vs. 70 days). An alternative sequencing gap of 8 days failed to achieve a similar increase in efficacy, emphasizing the utility of modeling support in identifying optimal scheduling. The model could also be a useful tool in the clinic for personally tailoring regimen sequences. © 2017 The Authors. CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
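The scheduling question studied above can be illustrated with a toy growth-and-kill simulation, in which chemotherapy is more effective when it falls inside a "vascular normalization window" opened by the anti-angiogenic dose. Every rate, the window boundaries, and the one-day chemotherapy action below are hypothetical placeholders, not the calibrated model from the study.

```python
def tumor_size_after(delay, t_end=30.0, dt=0.01):
    """Toy model: exponential tumor growth with a chemotherapy kill term
    that is boosted while the vasculature is 'normalized' following an
    anti-angiogenic dose given at t = 0. All rates and windows are
    hypothetical placeholders, not the study's calibrated model.

    delay : days between the anti-angiogenic dose and chemotherapy
    """
    growth, base_kill, boost = 0.08, 0.5, 2.0   # per-day rates (made up)
    win_start, win_end = 1.0, 5.0               # normalization window (days)
    size, t = 1.0, 0.0
    while t < t_end:
        chemo_on = delay <= t < delay + 1.0     # chemo acts for one day
        in_window = win_start <= t < win_end
        kill = 0.0
        if chemo_on:
            kill = base_kill * (boost if in_window else 1.0)
        size += (growth - kill) * size * dt     # forward-Euler step
        t += dt
    return size

# Scanning integer delays shows that window-matched schedules do best:
sizes = {d: tumor_size_after(float(d)) for d in range(10)}
```

The point of the sketch is only the shape of the problem: a delay that places chemotherapy inside the normalization window yields a smaller final tumor size than either concomitant dosing or a long gap, which is the qualitative behavior the study's model exploits.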
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R², called R²_GLMM, for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to the negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two concepts useful for biologists, Jensen's inequality and the delta method, both of which help in understanding the properties of GLMMs. Jensen's inequality has important implications for the biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of the variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension with worked examples from the field of ecology and evolution in the R environment; however, our method can be used across disciplines and regardless of statistical environment. © 2017 The Author(s).
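The variance-decomposition idea behind R²_GLMM can be sketched as follows: marginal R² uses the fixed-effect variance only, conditional R² adds the random-effect variance, and for non-Gaussian families the residual term is a distribution-specific variance on the latent scale (obtained, e.g., via the delta method). The variance components below are made-up numbers for illustration.

```python
def r2_glmm(var_fixed, var_random, var_resid):
    """Marginal and conditional R2 from a variance decomposition in the
    style of R2_GLMM.

    var_fixed  : variance explained by the fixed effects
    var_random : summed variance of the random effects
    var_resid  : residual variance; for a non-Gaussian GLMM this is the
                 distribution-specific variance on the latent scale
                 (e.g. roughly 1/lambda for a Poisson log-link model,
                 via the delta method)
    """
    total = var_fixed + var_random + var_resid
    r2_marginal = var_fixed / total                    # fixed effects only
    r2_conditional = (var_fixed + var_random) / total  # fixed + random
    return r2_marginal, r2_conditional

# Made-up variance components for illustration:
r2m, r2c = r2_glmm(var_fixed=0.30, var_random=0.20, var_resid=0.50)
```

By construction the conditional R² is never smaller than the marginal R², since the random-effect variance is added to the numerator while the denominator is unchanged.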
Individualist Biocentrism vs. Holism Revisited
Directory of Open Access Journals (Sweden)
Katie McShane
2014-06-01
While holist views such as ecocentrism have considerable intuitive appeal, arguing for the moral considerability of ecological wholes such as ecosystems has turned out to be a very difficult task. In the environmental ethics literature, individualist biocentrists have persuasively argued that individual organisms, but not ecological wholes, are properly regarded as having a good of their own. In this paper, I revisit those arguments and contend that they are fatally flawed. The paper proceeds in five parts. First, I consider some problems brought about by climate change for environmental conservation strategies and argue that these problems give us good pragmatic reasons to want a better account of the welfare of ecological wholes. Second, I describe the theoretical assumptions from normative ethics that form the background of the arguments against holism. Third, I review the arguments given by individualist biocentrists in favour of individualism over holism. Fourth, I review recent work in the philosophy of biology on the units of selection problem, work in medicine on the human biome, and work in evolutionary biology on epigenetics and endogenous viral elements. I show how these developments undermine both the individualist arguments described above and the distinction between individuals and wholes as it has been understood by individualists. Finally, I consider five possible theoretical responses to these problems.
Revisit the spin-FET: Multiple reflection, inelastic scattering, and lateral size effects
Xu, Luting; Li, Xin-Qi; Sun, Qing-feng
2014-01-01
We revisit the spin-injected field effect transistor (spin-FET) by simulating a lattice model based on a recursive lattice Green's function approach. In the one-dimensional case and the coherent regime, the simulated results reveal noticeable differences from the celebrated Datta-Das model, which thus motivate an improved treatment and lead to an analytic and generalized result. The simulation also allows us to address inelastic scattering (using Büttiker's fictitious reservoir approach) and lateral...
Kreus, Markus; Paetsch, Johannes; Grosse, Fabian; Lenhart, Hermann; Peck, Myron; Pohlmann, Thomas
2017-04-01
Ongoing ocean acidification (OA) and climate change related trends impact the physical (temperature), chemical (CO2 buffer capacity) and biological (stoichiometric) properties of the marine environment. These threats affect the global ocean, but they appear particularly pronounced in marginal and shelf seas. Marine biogeochemical models are often used to investigate the impacts of climate change and changes in OA on the marine system as well as its exchange with the atmosphere. Different studies have shown that both the structural composition of the models and the elemental ratios of particulate organic matter in the surface ocean affect the key processes controlling the ocean's efficiency in storing atmospheric excess carbon. Recent studies focus on the variability of the elemental ratios of phytoplankton and have found that the high plasticity of C:N:P ratios enables the storage of large amounts of carbon by incorporation into carbohydrates and lipids. Our analysis focuses on the North Sea, a temperate European shelf sea, for the period 2000-2014. We performed an ensemble of model runs differing only in phytoplankton stoichiometry, representing combinations of C:P = [132.5, 106, 79.5] and N:P = [20, 16, 12] (i.e., the Redfield ratio ±25%). We examine systematically the variations in annual averages of net primary production (NPP), net ecosystem production in the upper 30 m (NEP30), export production below 30 m depth (EXP30), and the air-sea flux of CO2 (ASF). Ensemble average fluxes (and standard deviations) were NPP = 15.4 (2.8) mol C m^{-2} a^{-1}, NEP30 = 5.4 (1.1) mol C m^{-2} a^{-1}, EXP30 = 8.1 (1.1) mol C m^{-2} a^{-1} and ASF = 1.1 (0.5) mol C m^{-2} a^{-1}. All key parameters exhibit only minor variations along the axis of constant C:N, but correlate positively with increasing C:P and decreasing N:P ratios. Concerning regional differences, the lowest variations in local fluxes due to different stoichiometric ratios are found in the shallow southern and coastal North Sea. Highest…
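The 3 × 3 stoichiometric ensemble described above can be enumerated as a simple parameter grid. The per-member NPP values below are placeholders standing in for full biogeochemical model runs; they are chosen only to show how ensemble means and standard deviations are aggregated.

```python
from itertools import product
from statistics import mean, pstdev

# Redfield-centred C:P and N:P ratios +/- 25%, as in the ensemble above
c_to_p = [132.5, 106.0, 79.5]
n_to_p = [20.0, 16.0, 12.0]

ensemble = list(product(c_to_p, n_to_p))  # 9 member runs

# Placeholder NPP per member (mol C m^-2 a^-1): a made-up linear response
# to C:P, standing in for the output of one biogeochemical model run each.
npp = {(cp, np_): 12.0 + 0.03 * cp for cp, np_ in ensemble}

npp_mean = mean(npp.values())   # ensemble average
npp_sd = pstdev(npp.values())   # ensemble spread
```

With real model output, each grid member would index one multi-year simulation, and the same aggregation would yield the reported flux means and standard deviations.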
International Nuclear Information System (INIS)
Hagedorn, Claudia; King, Stephen F.; Luhn, Christoph
2012-01-01
Following the recent results from Daya Bay and RENO, which measure the lepton mixing angle θ_13^l ≈ 0.15, we revisit a supersymmetric (SUSY) S_4 × SU(5) model, which predicts tri-bimaximal (TB) mixing in the neutrino sector with θ_13^l being too small in its original version. We show that introducing one additional S_4 singlet flavon into the model gives rise to a sizable θ_13^l via an operator which leads to the breaking of one of the two Z_2 symmetries preserved in the neutrino sector at leading order (LO). The results of the original model for fermion masses, quark mixing and the solar mixing angle are maintained to good precision. The atmospheric and solar mixing angle deviations from TB mixing are subject to simple sum rule bounds.
International Nuclear Information System (INIS)
Bogolubov, N.N. Jr.; Prykarpatsky, A.K.; Ufuk Taneri
2008-07-01
The main fundamental principles characterizing the vacuum field structure are formulated, and the modeling of the related vacuum medium and charged point particle dynamics by means of devised field theoretic tools is analyzed. The Maxwell electrodynamic theory is revisited and newly derived from the suggested vacuum field structure principles, and the classical special relativity relationship between the energy and the corresponding point particle mass is likewise revisited and newly obtained. The Lorentz force expression with respect to arbitrary non-inertial reference frames is revisited and discussed in detail, and some new interpretations of the relations between special relativity theory and quantum mechanics are presented. The famous quantum-mechanical Schroedinger-type equations for a relativistic point particle in external potential and magnetic fields are obtained within the quasiclassical approximation, as the Planck constant ħ (= h/2π) → 0 and the light velocity c → ∞. (author)
Revisiting Hansen Solubility Parameters by Including Thermodynamics
Louwerse, Manuel J; Fernández-Maldonado, Ana María; Rousseau, Simon; Moreau-Masselon, Chloe; Roux, Bernard; Rothenberg, Gadi
2017-01-01
The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but improvements are needed for molecular solutes. By going into the details of entropy…
The Future of Engineering Education--Revisited
Wankat, Phillip C.; Bullard, Lisa G.
2016-01-01
This paper revisits the landmark CEE series, "The Future of Engineering Education," published in 2000 (available free in the CEE archives on the internet) to examine the predictions made in the original paper as well as the tools and approaches documented. Most of the advice offered in the original series remains current. Despite new…
Revisiting Weak Simulation for Substochastic Markov Chains
DEFF Research Database (Denmark)
Jansen, David N.; Song, Lei; Zhang, Lijun
2013-01-01
of the logic PCTL\X, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness...
Coccolithophorids in polar waters: Wigwamma spp. revisited
DEFF Research Database (Denmark)
Thomsen, Helge Abildhauge; Østergaard, Jette B.; Heldal, Mikal
2013-01-01
A contingent of weakly calcified coccolithophorid genera and species were described from polar regions almost 40 years ago. In the interim period a few additional findings have been reported enlarging the realm of some of the species. The genus Wigwamma is revisited here with the purpose of provi...... appearance of the coccolith armour of the cell...
The Faraday effect revisited: General theory
DEFF Research Database (Denmark)
Cornean, Horia Decebal; Nenciu, Gheorghe; Pedersen, Thomas Garm
2006-01-01
This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. At zero temperature and zero frequency...
The Faraday effect revisited: General theory
DEFF Research Database (Denmark)
Cornean, Horia Decebal; Nenciu, Gheorghe; Pedersen, Thomas Garm
This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse...
Cui, Ming-Yang; Pan, Xu; Yuan, Qiang; Fan, Yi-Zhong; Zong, Hong-Shi
2018-06-01
We study the cosmic ray antiprotons with updated constraints on the propagation, proton injection, and solar modulation parameters based on the newest AMS-02 data near the Earth and Voyager data in the local interstellar space, and on the cross section of antiproton production due to proton-proton collisions based on new collider data. We use a Bayesian approach to properly consider the uncertainties of the model predictions of both the background and the dark matter (DM) annihilation components of antiprotons. We find that including an extra component of antiprotons from the annihilation of DM particles into a pair of quarks can improve the fit to the AMS-02 antiproton data considerably. The favored mass of DM particles is about 60-100 GeV, and the annihilation cross section is just at the level of the thermal production of DM (⟨σv⟩ ~ O(10^-26) cm^3 s^-1).
Directory of Open Access Journals (Sweden)
Mikael Nygård
2013-02-01
Full Text Available This article examines political participation among older adults in Österbotten, Finland, and Västerbotten, Sweden. Two specific hypotheses are tested. First, we anticipate that older adults are loyal voters but less avid in engaging in politics between elections. Second, we expect individual-level resources to explain why older people participate in politics. The article offers two contributions to the literature on political participation of older adults. First, it corroborates earlier findings by showing that older adults indeed have a higher inclination to vote than to engage in political activities between elections, but it also shows that the latter engagement is more diversified than one could expect. Second, although the findings largely support the resource model, they suggest that we also need to consider other factors such as the overall attitude towards older people.
Yu, Zonghuo; Wang, Fei
2016-03-12
Substantial research has shown that emotions play a critical role in physical health. However, most of these studies were conducted in industrialized countries, and it is still an open question whether the emotion-health connection is a "first-world problem". In the current study, we examined socio-economic development's influence on emotion-health connection by performing multilevel-modeling analysis in a dataset of 33,600 individuals from 162 counties in China. Results showed that both positive emotions and negative emotions predicted level of physical health and regional Gross Domestic Product Per Capita (GDPPC) had some impact on the association between emotion and health through accessibility of medical resources and educational status. But these impacts were suppressed, and the total effects of GDPPC on emotion-health connections were not significant. These results support the universality of emotion-health connection across levels of GDPPC and provide new insight into how socio-economic development might affect these connections.
Pathak, Arup Kumar
2014-12-01
An explicit analytical expression has been obtained for the vertical detachment energy (VDE) that can be used to calculate it over a wide range (both stable and unstable regions) of cluster sizes, including the bulk, from the knowledge of the VDE for a finite number of stable clusters (n = 16-23). The calculated VDE for the bulk is found to be in very good agreement (within 1%) with the available experimental result, and the domain of instability lies between n = 0 and n = 15 for the hydrated clusters PO4^3-·nH2O. The minimum number (n0) of water molecules needed to stabilise the phosphate anion is 16. We are able to explain the origin of the solvent-berg model and the anomalous conductivity from the knowledge of the first stable cluster. We have also provided a scheme to calculate the radius of the solvent-berg for the phosphate anion. The calculated conductivity, using the Stokes-Einstein relation and the radius of the solvent-berg, is found to be in very good agreement (within 4%) with the available experimental results.
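The Stokes-Einstein step mentioned in the abstract above can be sketched numerically. The viscosity and solvent-berg radius below are illustrative placeholders rather than the paper's values, and the use of the Nernst-Einstein relation to convert a diffusion coefficient into a molar conductivity is our own assumption about the procedure:

```python
from math import pi

# Illustrative sketch of a Stokes-Einstein-based conductivity estimate.
# The radius and viscosity below are hypothetical placeholders, not the
# values derived in the paper.
KB = 1.380649e-23   # Boltzmann constant, J/K
F = 96485.33212     # Faraday constant, C/mol
R = 8.314462618     # molar gas constant, J/(mol K)

def stokes_einstein_D(T, eta, r):
    """Diffusion coefficient (m^2/s) of a sphere of radius r (m)
    moving through a fluid of viscosity eta (Pa s) at temperature T (K)."""
    return KB * T / (6.0 * pi * eta * r)

def nernst_einstein_molar_conductivity(D, z, T):
    """Molar ionic conductivity (S m^2/mol) obtained from a diffusion
    coefficient via the Nernst-Einstein relation."""
    return z**2 * F**2 * D / (R * T)

T = 298.15        # K, room temperature
eta = 8.9e-4      # Pa s, water at 25 C
r = 3.0e-10       # m, hypothetical solvent-berg radius
D = stokes_einstein_D(T, eta, r)
lam = nernst_einstein_molar_conductivity(D, 3, T)  # |z| = 3 for PO4^3-
```

Because D scales as 1/r, enlarging the assumed solvent-berg radius lowers both the diffusion coefficient and the conductivity, which is the lever such a scheme uses to match experiment.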
Yasui, Kyuichi; Kozuka, Teruyuki; Yasuoka, Masaki; Kato, Kazumi
2015-11-01
There are two major categories of thermoacoustic prime-movers: the traveling-wave type and the standing-wave type. A simple analytical model of a standing-wave thermoacoustic prime-mover is proposed at relatively low heat-flux for a stack much shorter than the acoustic wavelength, which approximately describes the Brayton cycle. Numerical simulations of Rott's equations have revealed that the work flow (acoustic power) increases with increasing amplitude of the particle velocity (|U|) for the traveling-wave type and with increasing cosΦ for the standing-wave type, where Φ is the phase difference between the particle velocity and the acoustic pressure. In other words, the standing-wave type is phase-dominant while the traveling-wave type is amplitude-dominant. The ratio of the absolute value of the traveling-wave component (|U|cosΦ) to that of the standing-wave component (|U|sinΦ) of any thermoacoustic engine roughly equals the ratio of the absolute value of the increasing rate of |U| to that of cosΦ. The different mechanisms of the traveling-wave and standing-wave types are discussed regarding the dependence of the energy efficiency on the acoustic impedance of the stack as well as on ωτα, where ω is the angular frequency of the acoustic wave and τα is the thermal relaxation time. While the energy efficiency of the traveling-wave type at the optimal ωτα is much higher than that of the standing-wave type, the energy efficiency of the standing-wave type is higher than that of the traveling-wave type at much higher ωτα under a fixed temperature difference between the cold and hot ends of the stack.
Zavala, Miguel A; Angulo, Oscar; Bravo de la Parra, Rafael; López-Marcos, Juan C
2007-02-07
relevance of partial differential equations systems as a tool for exploring the individual-level mechanisms underpinning forest structure, particularly in relation to more complex forest simulation models that are more difficult to analyze and to interpret from a biological point of view.
Sirmas, Nick; Radulescu, Matei I.
2016-01-01
The problem of thermal ignition in a homogeneous gas is revisited from a molecular dynamics perspective. A two-dimensional model is adopted, which assumes reactive disks of type A and B in a fixed area that react to form type C products if an activation threshold for impact is surpassed. Such a reaction liberates kinetic energy to the product particles, representative of the heat release. The results for the ignition delay are compared with those obtained from the continuum description assumi...
First-principles lattice-gas Hamiltonian revisited: O-Pd(100)
Kappus, Wolfgang
2016-01-01
The methodology of deriving an adatom lattice-gas Hamiltonian (LGH) from first-principles (FP) calculations is revisited. Such LGH cluster expansions compute a large set of lateral pair, trio, and quarto interactions by solving a set of linear equations modelling regular adatom configurations and their FP energies. The basic assumption of truncating interaction terms beyond fifth nearest neighbors does not hold when adatoms show longer range interactions, e.g. substrate mediated elastic interac...
Revisiting the description of Protein-Protein interfaces. Part II: Experimental study
Cazals, Frédéric; Proust, Flavien
2006-01-01
This paper provides a detailed experimental study of an interface model developed in the companion article F. Cazals and F. Proust, Revisiting the description of Protein-Protein interfaces. Part I: algorithms. Our experimental study is concerned with the usual database of protein-protein complexes, split into five families (Proteases, Immune system, Enzyme Complexes, Signal transduction, Misc.). Our findings, which bear some contradictions with usual statements, are the following: (i) Connectivi...
Sarvari, Neda Gholizadeh
2012-01-01
ABSTRACT: This study revisits previous studies carried out by several researchers on Customer-Based Brand Equity, with the intention to further investigate the application and testing of the Customer-Based Brand Equity (CBBE) model in relation to destination branding. The study specifically examines the effects of Brand Equity Dimensions (Brand Awareness, Brand Loyalty, Brand Value, Brand Quality and Brand Image) on Tourist Satisfaction and ultimately on Future Behaviours that result i...
Multivariate linear models and repeated measurements revisited
DEFF Research Database (Denmark)
Dalgaard, Peter
2009-01-01
Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations lead...... to sphericity assumptions, use of F tests and the Greenhouse-Geisser and Huynh-Feldt adjustments to compensate for deviations from sphericity. During a recent implementation of such methods in the R language, the general structure of such transformations was reconsidered, leading to a flexible specification...
Daniel Bernoulli’s epidemiological model revisited
Dietz, K.; Heesterbeek, J.A.P.
2002-01-01
The seminal paper by Daniel Bernoulli published in 1766 is put into a new perspective. After a short account of smallpox inoculation and of Bernoulli’s life, the motivation for that paper and its impact are described. It determines the age-specific equilibrium prevalence of immune individuals in an
The traumagenic neurodevelopmental model of psychosis revisited
DEFF Research Database (Denmark)
Read, John; Fosse, Roar; Moskowitz, Andrew
2014-01-01
of this paper, therefore, is to summarize the literature on biological mechanisms underlying the relationship between childhood trauma and psychosis published since 2001. A comprehensive search for relevant papers was undertaken via Medline, PubMed and psycINFO. In total, 125 papers were identified...
Revisiting Johnson and Jackson boundary conditions for granular flows
Energy Technology Data Exchange (ETDEWEB)
Li, Tingwen; Benyahia, Sofiane
2012-07-01
In this article, we revisit Johnson and Jackson boundary conditions for granular flows. The oblique collision between a particle and a flat wall is analyzed by adopting the classic rigid-body theory and a more realistic semianalytical model. Based on the kinetic granular theory, the input parameter for the partial-slip boundary conditions, specularity coefficient, which is not measurable in experiments, is then interpreted as a function of the particle-wall restitution coefficient, the frictional coefficient, and the normalized slip velocity at the wall. An analytical expression for the specularity coefficient is suggested for a flat, frictional surface with a low frictional coefficient. The procedure for determining the specularity coefficient for a more general problem is outlined, and a working approximation is provided.
REVISITING THE SCATTERING GREENHOUSE EFFECT OF CO2 ICE CLOUDS
International Nuclear Information System (INIS)
Kitzmann, D.
2016-01-01
Carbon dioxide ice clouds are thought to play an important role for cold terrestrial planets with thick CO2-dominated atmospheres. Various previous studies showed that a scattering greenhouse effect by carbon dioxide ice clouds could result in a massive warming of the planetary surface. However, all of these studies only employed simplified two-stream radiative transfer schemes to describe the anisotropic scattering. Using accurate radiative transfer models with a general discrete ordinate method, this study revisits this important effect and shows that the positive climatic impact of carbon dioxide clouds was strongly overestimated in the past. The revised scattering greenhouse effect can have important implications for early Mars, but also for planets like the early Earth or the position of the outer boundary of the habitable zone.
Revisiting Cementoblastoma with a Rare Case Presentation
Directory of Open Access Journals (Sweden)
Vijayanirmala Subramani
2017-01-01
Full Text Available Cementoblastoma is a rare benign odontogenic neoplasm characterized by the proliferation of cellular cementum. Diagnosis of cementoblastoma is challenging because of its protracted clinical and radiographic features and bland histological appearance; it is often confused with other lesions originating from cementum and bone. The aim of this article is to revisit the diagnostic approach to cementoblastoma and to present a unique radiographic appearance of a cementoblastoma lesion associated with an impacted tooth.
Social Life Cycle Assessment Revisited
Directory of Open Access Journals (Sweden)
Ruqun Wu
2014-07-01
Full Text Available To promote the development of Social Life Cycle Assessment (SLCA, we conducted a comprehensive review of recently developed frameworks, methods, and characterization models for impact assessment for future method developers and SLCA practitioners. Two previous reviews served as our foundations for this review. We updated the review by including a comprehensive list of recently-developed SLCA frameworks, methods and characterization models. While a brief discussion from goal, data, and indicator perspectives is provided in Sections 2 to 4 for different frameworks/methods, the focus of this review is Section 5 where discussion on characterization models for impact assessment of different methods is provided. The characterization models are categorized into two types following the UNEP/SETAC guidelines: type I models without impact pathways and type II models with impact pathways. Different from methods incorporating type I/II characterization models, another LCA modeling approach, Life Cycle Attribute Assessment (LCAA, is also discussed in this review. We concluded that methods incorporating either type I or type II models have limitations. For type I models, the challenge lies in the systematic identification of relevant stakeholders and materiality issues; while for type II models, identification of impact pathways that most closely and accurately represent the real-world causal relationships is the key. LCAA may avoid these problems, but the ultimate questions differ from those asked by the methods using type I and II models.
Large J expansion in ABJM theory revisited.
Dimov, H; Mladenov, S; Rashkov, R C
Recently there has been progress in the computation of the anomalous dimensions of gauge theory operators at strong coupling by making use of the AdS/CFT correspondence. On the string theory side they are given by dispersion relations in the semiclassical regime. We revisit the problem of a large-charge expansion of the dispersion relations for simple semiclassical strings in an [Formula: see text] background. We present the calculation of the corresponding anomalous dimensions of the gauge theory operators to an arbitrary order using three different methods. Although the results of the three methods look different, power series expansions show their consistency.
Sloan Digital Sky Survey Photometric Calibration Revisited
International Nuclear Information System (INIS)
Marriner, John
2012-01-01
The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.
Revisiting the Political Economy of Communication
Directory of Open Access Journals (Sweden)
Nicholas Garnham
2014-02-01
The task of the paper and the seminar was to revisit some of Nicholas Garnham’s ideas, writings and contributions to the study of the Political Economy of Communication and to reflect on the concepts, history, current status and perspectives of this field and the broader study of political economy today. The topics covered include Raymond Williams’ cultural materialism, Pierre Bourdieu’s sociology of culture, the debate between Political Economy and Cultural Studies, information society theory, Karl Marx’s theory and the critique of capitalism.
Sloan Digital Sky Survey Photometric Calibration Revisited
Energy Technology Data Exchange (ETDEWEB)
Marriner, John; /Fermilab
2012-06-29
The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.
The Motivation for Hedging Revisited
Pennings, J.M.E.; Leuthold, R.M.
2000-01-01
This article develops an alternative view on the motivation to hedge. A conceptual model shows how hedging facilitates contract relationships between firms and can solve conflicts between firms. In this model, the contract preferences, level of power, and conflicts in contractual relationships of
Increased 30-Day Emergency Department Revisits Among Homeless Patients with Mental Health Conditions
Directory of Open Access Journals (Sweden)
Chun Nok Lam
2016-09-01
Full Text Available Introduction: Patients with mental health conditions frequently use emergency medical services. Many suffer from substance use and homelessness. If they use the emergency department (ED) as their primary source of care, potentially preventable frequent ED revisits and hospital readmissions can worsen an already crowded healthcare system. However, the magnitude to which homelessness affects health service utilization among patients with mental health conditions remains unclear in the medical community. This study assessed the impact of homelessness on 30-day ED revisits and hospital readmissions among patients presenting with mental health conditions in an urban, safety-net hospital. Methods: We conducted a secondary analysis of administrative data on all adult ED visits in 2012 in an urban safety-net hospital. Patient demographics, mental health status, homelessness, insurance coverage, level of acuity, and ED disposition per ED visit were analyzed using multilevel modeling to control for multiple visits nested within patients. We performed multivariate logistic regressions to evaluate whether homelessness moderated the likelihood of mental health patients’ 30-day ED revisits and hospital readmissions. Results: The study included 139,414 adult ED visits from 92,307 unique patients (43.5±15.1 years, 51.3% male, 68.2% Hispanic/Latino). Nearly 8% of patients presented with mental health conditions, while 4.6% were homeless at any time during the study period. Among patients with mental health conditions, being homeless contributed to an additional 28.0% increase in likelihood (4.28 to 5.48 odds) of 30-day ED revisits and a 38.2% increase in likelihood (2.04 to 2.82 odds) of hospital readmission, compared to non-homeless, non-mental health (NHNM) patients as the base category. Adjusted predicted probabilities showed that homeless patients presenting with mental health conditions have a 31.1% chance of returning to the ED within 30-day post discharge and a 3
Non-minimal inflation revisited
International Nuclear Information System (INIS)
Nozari, Kourosh; Shafizadeh, Somayeh
2010-01-01
We reconsider an inflationary model in which the inflaton field is non-minimally coupled to gravity. We study the parameter space of the model up to the second (and in some cases third) order of the slow-roll parameters. We calculate inflation parameters in both the Jordan and Einstein frames, and the results are compared between these two frames and also with observations. Using the recent observational data from combined WMAP5+SDSS+SNIa datasets, we study constraints imposed on our model parameters, especially the non-minimal coupling ξ.
Fontes, Duarte; Mühlleitner, Margarete; Romão, Jorge C.; Santos, Rui; Silva, João P.; Wittbrodt, Jonas
2018-02-01
The complex two-Higgs doublet model is one of the simplest ways to extend the scalar sector of the Standard Model to include a new source of CP-violation. The model has been used as a benchmark model to search for CP-violation at the LHC and as a possible explanation for the matter-antimatter asymmetry of the Universe. In this work, we re-analyse in full detail the softly broken ℤ2-symmetric complex two-Higgs doublet model (C2HDM). We provide the code C2HDM_HDECAY implementing the C2HDM in the well-known HDECAY program, which calculates the decay widths including the state-of-the-art higher order QCD corrections and the relevant off-shell decays. Using C2HDM_HDECAY together with the most relevant theoretical and experimental constraints, including electric dipole moments (EDMs), we review the parameter space of the model and discuss its phenomenology. In particular, we find cases where large CP-odd couplings to fermions are still allowed and provide benchmark points for these scenarios. We examine the prospects of discovering CP-violation at the LHC and show how theoretically motivated measures of CP-violation correlate with observables.
Revisiting the Landau fluid closure.
Hunana, P.; Zank, G. P.; Webb, G. M.; Adhikari, L.
2017-12-01
Advanced fluid models that are much closer to the full kinetic description than the usual magnetohydrodynamic description are a very useful tool for studying astrophysical plasmas and for interpreting solar wind observational data. The development of advanced fluid models that contain certain kinetic effects is complicated and has attracted much attention over the past years. Here we focus on fluid models that incorporate the simplest possible forms of Landau damping, derived from linear kinetic theory expanded about a leading-order (gyrotropic) bi-Maxwellian distribution function f_0, under the approximation that the perturbed distribution function f_1 is gyrotropic as well. Specifically, we focus on various Padé approximants to the usual plasma response function (and to the plasma dispersion function) and examine possibilities that lead to a closure of the linear kinetic hierarchy of fluid moments. We present a re-examination of the simplest Landau fluid closures.
density-dependent selection revisited
Indian Academy of Sciences (India)
Unknown
is a more useful way of looking at density-dependent selection, and then go on ... these models was that the condition for maintenance of ... In a way, their formulation may be viewed as ... different than competition among species, and typical.
Revisiting interaction in knowledge translation
Directory of Open Access Journals (Sweden)
Zackheim Lisa
2007-10-01
Full Text Available Abstract Background Although the study of research utilization is not new, there has been increased emphasis on the topic over the recent past. Science push models that are researcher driven and controlled and demand pull models emphasizing users/decision-maker interests have largely been abandoned in favour of more interactive models that emphasize linkages between researchers and decision-makers. However, despite these and other theoretical and empirical advances in the area of research utilization, there remains a fundamental gap between the generation of research findings and the application of those findings in practice. Methods Using a case approach, the current study looks at the impact of one particular interaction approach to research translation used by a Canadian funding agency. Results Results suggest there may be certain conditions under which different levels of decision-maker involvement in research will be more or less effective. Four attributes are illuminated by the current case study: stakeholder diversity, addressability/actionability of results, finality of study design and methodology, and politicization of results. Future research could test whether these or other variables can be used to specify some of the conditions under which different approaches to interaction in knowledge translation are likely to facilitate research utilization. Conclusion This work suggests that the efficacy of interaction approaches to research translation may be more limited than current theory proposes and underscores the need for more completely specified models of research utilization that can help address the slow pace of change in this area.
Barunik, Jozef; Barunikova, Michaela
2015-01-01
This paper revisits the fractional co-integrating relationship between ex-ante implied volatility and ex-post realized volatility. Previous studies on stock index options have found biases and inefficiencies in implied volatility as a forecast of future volatility. It is argued that the concept of corridor implied volatility (CIV) should be used instead of the popular model-free option-implied volatility (MFIV) when assessing the relation as the latter may introduce bias to the estimation. In...
An Adaptive and Hybrid Approach for Revisiting the Visibility Pipeline
Directory of Open Access Journals (Sweden)
Ícaro Lins Leitão da Cunha
2016-04-01
Full Text Available We revisit the visibility problem, which is traditionally known in the Computer Graphics and Vision fields as the process of computing a (potentially) visible set of primitives in the computational model of a scene. We propose a hybrid solution that uses a dry structure (in the sense of data reduction), a triangulation of the type J1a, to accelerate the task of searching for visible primitives. We came up with a solution that is useful for real-time, on-line, interactive applications such as 3D visualization. In such applications the main goal is to load as few primitives from the scene as possible during the rendering stage. For this purpose, our algorithm performs culling using a hybrid paradigm based on viewing-frustum, back-face culling and occlusion models. Results have shown substantial improvement over these traditional approaches if applied separately. This novel approach can be used in devices with no dedicated processors or with low processing power, such as cell phones or embedded displays, or to visualize data through the Internet, as in virtual museum applications.
Revisiting NLTE Rovibrational Excitation of CO in UV Irradiated Environments
Zhang, Ziwei; Yang, Benhui H.; Stancil, Phillip C.; Walker, Kyle M.; Forrey, Robert C.; Naduvalath, Balakrishnan
2018-06-01
Being the second most abundant molecule in the ISM, CO has been well observed and studied as a tracer for many astrophysical processes. Highly rovibrationally excited CO emission is used to reveal features in intense UV-irradiated regions such as the inner rim of protoplanetary disks, carbon star envelopes, and star forming regions. Collisional rate coefficients are crucial for non-local thermodynamic equilibrium (NLTE) molecular analysis in such regions, while data for high rovibrational levels for CO were previously unavailable. Here we revisit CO excitation properties with comprehensive collisional data including high rovibrational states (up to v=5 and J=40) colliding with H2, H and He, in various NLTE astrophysical environments with the spectral modeling packages RADEX and Cloudy. We studied line ratio diagnostics between low- and high-vibrational transitions with RADEX. Using Cloudy, we investigated molecular properties in complex environments, such as photodissociation regions and the outflow of the carbon star IRC+10216, illustrating the potential for utilizing high rovibrational NLTE analysis in future astrophysical modeling. This work was supported by NASA Grants NNX15AI61G and NNX16AF09G.
Fermion to boson mappings revisited
International Nuclear Information System (INIS)
Ginocchio, J.N.; Johnson, C.W.
1996-01-01
We briefly review various mappings of fermion pairs to bosons, including those based on mapping operators, such as Belyaev-Zelevinskii, and those on mapping states, such as Marumori; in particular we consider the work of Otsuka-Arima-Iachello, aimed at deriving the Interacting Boson Model. We then give a rigorous and unified description of state-mapping procedures which allows one to systematically go beyond Otsuka-Arima-Iachello and related approaches, along with several exact results. (orig.)
Rational Asset Pricing Bubbles Revisited
Jan Werner
2012-01-01
A price bubble arises when the price of an asset exceeds the asset's fundamental value, that is, the present value of future dividend payments. The important result of Santos and Woodford (1997) says that price bubbles cannot exist in equilibrium in the standard dynamic asset pricing model with rational agents as long as assets are in strictly positive supply and the present value of total future resources is finite. This paper explores the possibility of asset price bubbles when either one of ...
The f electron collapse revisited
International Nuclear Information System (INIS)
Bennett, B.I.
1987-03-01
A reexamination of the collapse of 4f and 5f electrons in the lanthanide and actinide series is presented. The calculations show the well-known collapse of the f electron density at the thresholds of these series, along with an f^2 collapse between thorium and protactinium. The collapse is sensitive to the choice of model for the exchange-correlation potential and the behavior of the potential at large radius.
Chapter 1. Traditional marketing revisited
Lambin, Jean-Jacques
2013-01-01
The objective of this chapter is to review the traditional marketing concept and to analyse its main ambiguities as presented in popular textbooks. The traditional marketing management model, placing heavy emphasis on the marketing mix, is in fact a supply-driven approach to the market, using the understanding of consumers’ needs to mould demand to the requirements of supply, instead of adapting supply to the expectations of demand. To clarify the true role of marketing, a distinction is made b...
Silent cries, dancing tears: the metapsychology of art revisited/revised.
Aragno, Anna
2011-04-01
Against the backdrop of a broad survey of the literature on applied psychoanalysis, a number of concepts underpinning the metapsychology of art are revisited and revised: sublimation; interrelationships between primary and secondary processes; symbolization; "fantasy"; and "cathexis." Concepts embedded in dichotomous or drive/energic contexts are examined and reformulated in terms of a continuum of semiotic processes. Freudian dream structure is viewed as a biological/natural template for nonrepressive artistic forms of sublimation. The synthesis presented proposes a model of continuous rather than discontinuous processes, in a nonenergic, biosemiotic metatheoretical framework.
Ayres, Thomas R.; Wiedemann, Gunter R.
1989-01-01
A more extensive and detailed non-LTE simulation of the Delta v = 1 bands of CO than attempted previously is reported. The equations of statistical equilibrium are formulated for a model molecule containing 10 bound vibrational levels, each split into 121 rotational substates and connected by more than 1000 radiative transitions. Solutions are obtained for self-consistent populations and radiation fields by iterative application of the 'Lambda-operator' to an initial LTE distribution. The formalism is used to illustrate models of the sun and Arcturus. For the sun, negligible departures from LTE are found in either a theoretical radiative-equilibrium photosphere with outwardly falling temperatures in its highest layers or in a semiempirical hot chromosphere that reproduces the spatially averaged emission cores of Ca II H and K. The simulations demonstrate that the puzzling 'cool cores' of the CO Delta v = 1 bands observed in limb spectra of the sun and in flux spectra of Arcturus cannot be explained simply by non-LTE scattering effects.
The isotropic radio background revisited
Energy Technology Data Exchange (ETDEWEB)
Fornengo, Nicolao; Regis, Marco [Dipartimento di Fisica Teorica, Università di Torino, via P. Giuria 1, I–10125 Torino (Italy); Lineros, Roberto A. [Instituto de Física Corpuscular – CSIC/U. Valencia, Parc Científic, calle Catedrático José Beltrán, 2, E-46980 Paterna (Spain); Taoso, Marco, E-mail: fornengo@to.infn.it, E-mail: rlineros@ific.uv.es, E-mail: regis@to.infn.it, E-mail: taoso@cea.fr [Institut de Physique Théorique, CEA/Saclay, F-91191 Gif-sur-Yvette Cédex (France)
2014-04-01
We present an extensive analysis of the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.
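The decomposition described in this abstract (observed map = isotropic offset + linear combination of Galactic templates) can be sketched with an ordinary least-squares fit on synthetic data. Everything below is an illustrative placeholder, not the paper's maps or coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10_000

# Synthetic Galactic templates (stand-ins for the real synchrotron and
# free-free / bremsstrahlung maps used in such analyses).
synchrotron = rng.lognormal(mean=1.0, sigma=0.5, size=npix)
bremsstrahlung = rng.lognormal(mean=0.5, sigma=0.3, size=npix)

# "Observed" map: isotropic offset + linear combination of templates + noise.
true_iso, a, b = 3.0, 1.5, 0.8
sky = true_iso + a * synchrotron + b * bremsstrahlung + rng.normal(0, 0.1, npix)

# Fit T = c0 + c1*synchrotron + c2*bremsstrahlung by ordinary least squares;
# c0 is the recovered isotropic level.
A = np.column_stack([np.ones(npix), synchrotron, bremsstrahlung])
coeffs, *_ = np.linalg.lstsq(A, sky, rcond=None)
iso_hat = coeffs[0]
print(f"recovered isotropic level: {iso_hat:.3f}")
```

In the actual analysis the fit is done per frequency map, with masked or template-subtracted point sources; this sketch only shows the linear-model structure.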
Local Equilibrium and Retardation Revisited.
Hansen, Scott K; Vesselinov, Velimir V
2018-01-01
In modeling solute transport with mobile-immobile mass transfer (MIMT), it is common to use an advection-dispersion equation (ADE) with a retardation factor, or retarded ADE. This is commonly referred to as making the local equilibrium assumption (LEA). Assuming local equilibrium, Eulerian textbook treatments derive the retarded ADE, ostensibly exactly. However, other authors have presented rigorous mathematical derivations of the dispersive effect of MIMT, applicable even in the case of arbitrarily fast mass transfer. We resolve the apparent contradiction between these seemingly exact derivations by adopting a Lagrangian point of view. We show that local equilibrium constrains the expected time immobile, whereas the retarded ADE actually embeds a stronger, nonphysical, constraint: that all particles spend the same amount of every time increment immobile. Eulerian derivations of the retarded ADE thus silently commit the gambler's fallacy, leading them to ignore dispersion due to mass transfer that is correctly modeled by other approaches. We then present a particle tracking simulation illustrating how poor an approximation the retarded ADE may be, even when mobile and immobile plumes are continually near local equilibrium. We note that classic "LEA" (actually, retarded ADE validity) criteria test for insignificance of MIMT-driven dispersion relative to hydrodynamic dispersion, rather than for local equilibrium. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
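A minimal particle-tracking sketch of first-order mobile-immobile mass transfer, in the spirit of the simulation the authors describe (all rates and parameters below are illustrative, not taken from the paper): the ensemble mean displacement matches the retarded-ADE prediction v·t/R, while the plume variance greatly exceeds the retarded-ADE value 2(D/R)t, exhibiting the mass-transfer-driven dispersion the retarded ADE ignores.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 20_000, 2_000, 0.01
v, D = 1.0, 0.01          # mobile velocity and hydrodynamic dispersion
k_f, k_r = 1.0, 1.0       # mobile->immobile and immobile->mobile rates

x = np.zeros(n)
mobile = np.ones(n, dtype=bool)
for _ in range(steps):
    # advect and disperse only while mobile
    x[mobile] += v * dt + rng.normal(0, np.sqrt(2 * D * dt), mobile.sum())
    # first-order state switching (explicit Euler probabilities)
    trap = mobile & (rng.random(n) < k_f * dt)
    release = ~mobile & (rng.random(n) < k_r * dt)
    mobile = (mobile & ~trap) | release

R = 1 + k_f / k_r          # retardation factor
t = steps * dt
print("mean:", x.mean(), "retarded-ADE mean:", v * t / R)
print("var: ", x.var(), "retarded-ADE var: ", 2 * D * t / R)
```

The mean agrees with the retarded ADE (up to a small early-time transient from starting all particles mobile), but the variance is far larger, which is the paper's point about MIMT-driven dispersion.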
Post-Inflationary Gravitino Production Revisited
Ellis, John; Nanopoulos, Dimitri V.; Olive, Keith A.; Peloso, Marco
2016-01-01
We revisit gravitino production following inflation. As a first step, we review the standard calculation of gravitino production in the thermal plasma formed at the end of post-inflationary reheating when the inflaton has completely decayed. Next we consider gravitino production prior to the completion of reheating, assuming that the inflaton decay products thermalize instantaneously while they are still dilute. We then argue that instantaneous thermalization is in general a good approximation, and also show that the contribution of non-thermal gravitino production via the collisions of inflaton decay products prior to thermalization is relatively small. Our final estimate of the gravitino-to-entropy ratio is approximated well by a standard calculation of gravitino production in the post-inflationary thermal plasma assuming total instantaneous decay and thermalization at a time $t \simeq 1.2/\Gamma_\phi$. Finally, in light of our calculations, we consider potential implications of upper limits on the gravitin...
The Faraday effect revisited: General theory
Cornean, H D; Pedersen, T G
2005-01-01
This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse conductivity can be explicitly computed and coincides with the classical result. In the general case, using magnetic perturbation theory, the conductivity tensor is expanded in powers of the strength of the magnetic field $B$. Then the linear term in $B$ of this expansion is written down in terms of the zero magnetic field Green function and the zero field current operator. In the periodic case, the linear term in $B$ of the conductivity tensor is expressed in terms of zero magnetic field Bloch functions and energies. No derivatives with respect to the quasimomentum appear and thereby all ambiguities are removed, in contrast to earlier work.
Revisiting instanton corrections to the Konishi multiplet
Energy Technology Data Exchange (ETDEWEB)
Alday, Luis F. [Mathematical Institute, University of Oxford,Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Korchemsky, Gregory P. [Institut de Physique Théorique, Université Paris Saclay, CNRS, CEA,F-91191 Gif-sur-Yvette (France)
2016-12-01
We revisit the calculation of instanton effects in correlation functions in N=4 SYM involving the Konishi operator and operators of twist two. Previous studies revealed that the scaling dimensions and the OPE coefficients of these operators do not receive instanton corrections in the semiclassical approximation. We go beyond this approximation and demonstrate that, while operators belonging to the same N=4 supermultiplet ought to have the same conformal data, the evaluation of quantum instanton corrections for one operator can be mapped into a semiclassical computation for another operator in the same supermultiplet. This observation allows us to compute explicitly the leading instanton correction to the scaling dimension of operators in the Konishi supermultiplet as well as to their structure constants in the OPE of two half-BPS scalar operators. We then use these results, together with crossing symmetry, to determine instanton corrections to scaling dimensions of twist-four operators with large spin.
Sparse random matrices: The eigenvalue spectrum revisited
International Nuclear Information System (INIS)
Semerjian, Guilhem; Cugliandolo, Leticia F.
2003-08-01
We revisit the derivation of the density of states of sparse random matrices. We derive a recursion relation that allows one to compute the spectrum of the matrix of incidence for finite trees that determines completely the low concentration limit. Using the iterative scheme introduced by Biroli and Monasson [J. Phys. A 32, L255 (1999)] we find an approximate expression for the density of states expected to hold exactly in the opposite limit of large but finite concentration. The combination of the two methods yields a very simple geometric interpretation of the tails of the spectrum. We test the analytic results with numerical simulations and we suggest an indirect numerical method to explore the tails of the spectrum. (author)
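As a numerical counterpoint to the analytic recursion described above, the density of states of a sparse symmetric random matrix can be estimated by direct diagonalization of Erdős–Rényi adjacency matrices and a histogram over realizations. The size and connectivity below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 400, 3.0            # matrix size and mean connectivity
trials = 20

eigs = []
for _ in range(trials):
    # symmetric sparse adjacency/incidence matrix with mean degree c
    mask = rng.random((n, n)) < c / n
    A = np.triu(mask, k=1).astype(float)
    A = A + A.T
    eigs.append(np.linalg.eigvalsh(A))
eigs = np.concatenate(eigs)

# Crude density of states: normalized histogram of pooled eigenvalues.
hist, edges = np.histogram(eigs, bins=60, density=True)
print(f"spectral support: [{eigs.min():.2f}, {eigs.max():.2f}]")
```

The histogram tails are exactly the region where direct diagonalization becomes inefficient, which is why the paper proposes an indirect numerical method to explore them.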
Risk prediction of emergency department revisit 30 days post discharge: a prospective study.
Directory of Open Access Journals (Sweden)
Shiying Hao
Among patients who are discharged from the Emergency Department (ED), about 3% return within 30 days. Revisits can be related to the nature of the disease, medical errors, and/or inadequate diagnoses and treatment during the initial ED visit. Identification of high-risk patient populations can help devise new strategies for improved ED care with reduced ED utilization. A decision tree based model with discriminant Electronic Medical Record (EMR) features was developed and validated, estimating patient ED 30-day revisit risk. A retrospective cohort of 293,461 ED encounters from HealthInfoNet (HIN), Maine's Health Information Exchange (HIE), between January 1, 2012 and December 31, 2012, was assembled with the associated patients' demographic information and one-year clinical histories before the discharge date as the inputs. To validate, a prospective cohort of 193,886 encounters between January 1, 2013 and June 30, 2013 was constructed. The c-statistics for the retrospective and prospective predictions were 0.710 and 0.704, respectively. Clinical resource utilization, including ED use, was analyzed as a function of the ED risk score. Cluster analysis of high-risk patients identified discrete sub-populations with distinctive demographic, clinical and resource utilization patterns. Our ED 30-day revisit model was prospectively validated on the Maine State HIN secure statewide data system. Future integration of our ED predictive analytics into the ED care work flow may lead to increased opportunities for targeted care intervention to reduce ED resource burden and overall healthcare expense, and improve outcomes.
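The c-statistics reported in this abstract (0.710 retrospective, 0.704 prospective) are areas under the ROC curve. A minimal sketch of how a c-statistic is computed from risk scores and revisit labels follows; the scores and labels are toy data, not the study's EMR features:

```python
import numpy as np

def c_statistic(scores, labels):
    """Probability that a randomly chosen positive case (30-day revisit)
    receives a higher risk score than a randomly chosen negative case,
    counting ties as half: the c-statistic / ROC AUC."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise comparisons; O(n_pos * n_neg) memory is fine for a sketch.
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (pos.size * neg.size)

# Toy check: perfectly separating scores give c = 1.0
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

A c-statistic of 0.5 corresponds to random ranking, so the reported 0.70-0.71 indicates moderate discrimination of revisit risk.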
Compound verbs in English revisited
Directory of Open Access Journals (Sweden)
Alexandra Bagasheva
2011-05-01
Compound verbs (CVs) raise a number of puzzling questions concerning their classification, their word formation properties, their basic onomasiological function and their transitory status between “relations” and “conceptual-cores”. Using the constructionist framework in the context of a usage-based network model of language, the paper develops a proposal for the classification of CVs and an account of the semantics of word formation niches of CVs created by analogy, which yield unified semantic analyses. A hypothesis is formulated concerning the acategorial nature of CV internal constituents, which naturally accommodates the proposed classification and word formation niche analyses. A hypothesis is formulated in this context concerning the intermediary status of CVs as language-cognition interface units collapsing the “relation-conceptual core” distinction. Conclusions are drawn relating to the transitory nature of most CVs as nonce creations performing a special function in communicative interaction.
The furrows of Rhinolophidae revisited.
Vanderelst, Dieter; Jonas, Reijniers; Herbert, Peremans
2012-05-07
Rhinolophidae, a family of echolocating bats, feature very baroque noseleaves that are assumed to shape their emission beam. Zhuang & Muller (Zhuang & Muller 2006 Phys. Rev. Lett. 97, 218701 (doi:10.1103/PhysRevLett.97.218701); Zhuang & Muller 2007 Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 76(Pt. 1), 051902 (doi:10.1103/PhysRevE.76.051902)) have proposed, based on finite element simulations, that the furrows present in the noseleaves of these bats act as resonance cavities. Using Rhinolophus rouxi as a model species, they reported that a resonance phenomenon causes the main beam to be elongated at a particular narrow frequency range. Virtually filling the furrows reduced the extent of the main lobe. However, the results of Zhuang & Muller are difficult to reconcile with the ecological background of R. rouxi. In this report, we replicate the study of Zhuang & Muller, and extend it in important ways: (i) we take the filtering of the moving pinnae into account, (ii) we use a model of the echolocation task faced by Rhinolophidae to estimate the effect of any alterations to the emission beam on the echolocation performance of the bat, and (iii) we validate our simulations using a physical mock-up of the morphology of R. rouxi. In contrast to Zhuang & Muller, we find the furrows to focus the emitted energy across the whole range of frequencies contained in the calls of R. rouxi (both in simulations and in measurements). Depending on the frequency, the focusing effect of the furrows has different consequences for the estimated echolocation performance. We argue that the furrows act to focus the beam in order to reduce the influence of clutter echoes.
Norton, Larry
2014-01-01
At the root of science lie basic rules, if we can discover or deduce them. This is not an abstract project but a practical one; if we can understand the why, then perhaps we can rationally intervene. One of the unifying unsolved problems in physics is the hypothetical "Theory of Everything." In a similar vein, we can ask whether our own field contains such hidden fundamental truths and, if so, how we can use them to develop better therapies and outcomes for our patients. Modern oncology has developed as drugs and translational science have matured over the 50 years since ASCO's founding, but almost from that beginning tumor modeling has been a key tool. Through this general approach Norton and Simon changed our understanding of cancer biology and response to therapy when they described the fit of Gompertzian curves to both clinical and animal observations of tumor growth. The practical relevance of these insights has only grown with the development of DNA sequencing, promising a raft of new targets (and drugs). In that regard, Larry Norton's contribution to this year's Educational Book reminds us to always think creatively about the fundamental problems of tumor growth and metastases as well as therapeutic response. Demonstrating the creativity and thoughtfulness that have marked his remarkable career, he now incorporates a newer concept of self-seeding to further explain why Gompertzian growth occurs and, in the process, provides a novel potential therapeutic target. As you read his elegantly presented discussion, consider how this understanding, wisely applied to the modern era of targeted therapies, might speed the availability of better treatments. But even more instructive is his personal model (not only the Norton-Simon Hypothesis) of how to live and approach science, biology, patients and their families, as well as the broader community. He shows that with energy, enthusiasm, optimism, intellect, and hard work we can make the world better. Clifford A. Hudis, MD, FACP
Directory of Open Access Journals (Sweden)
I Nyoman Sudiarta
2014-03-01
This study aimed to determine the effect of perceptions of distributive, procedural and interactional justice on post-service recovery satisfaction, and the effect of post-service recovery satisfaction on the intention to revisit and WOM recommendations of foreign tourists to Bali. The respondents of this study were foreign tourists who visited Bali and had experienced a complaint. The number of eligible samples was 100 respondents. The questionnaire was given to tourists visiting the tourist attractions of Tanah Lot, Kintamani and Besakih. Data were analyzed using multivariate statistical analysis, namely structural equation modeling (SEM). The results of this study indicated that the perceptions of distributive, procedural and interactional justice had a positive and significant effect on the post-service recovery satisfaction of foreign tourists who visited Bali. The study also found a positive and significant effect of post-service recovery satisfaction on the intention to revisit and the intention of recommending positive WOM of foreign tourists who visited Bali.
Revisiting Cross-Channel Information Transfer for Chromatic Aberration Correction
Sun, Tiancheng; Peng, Yifan; Heidrich, Wolfgang
2017-01-01
Image aberrations can cause severe degradation in image quality for consumer-level cameras, especially under the current tendency to reduce the complexity of lens designs in order to shrink the overall size of modules. In simplified optical designs, chromatic aberration can be one of the most significant causes for degraded image quality, and it can be quite difficult to remove in post-processing, since it results in strong blurs in at least some of the color channels. In this work, we revisit the pixel-wise similarity between different color channels of the image and accordingly propose a novel algorithm for correcting chromatic aberration based on this cross-channel correlation. In contrast to recent weak prior-based models, ours uses strong pixel-wise fitting and transfer, which lead to significant quality improvements for large chromatic aberrations. Experimental results on both synthetic and real world images captured by different optical systems demonstrate that the chromatic aberration can be significantly reduced using our approach.
Motivations for Extradyadic Infidelity Revisited.
Selterman, Dylan; Garcia, Justin R; Tsapelas, Irene
2017-12-15
Relationship infidelities are motivated by many distinct factors, with previous research indicating motivations of dissatisfaction, neglect, anger, and sexual desire (Barta & Kiene, 2005). We expand on this by demonstrating additional, empirically distinct motivations for infidelity. Using an Internet-based questionnaire, participants (N = 495), most of whom were young adults, self-reported their infidelities. In addition to evidence for previously studied motivations, our data demonstrate additional factors, including lack of love ("I had 'fallen out of love with' my primary partner"), low commitment ("I was not very committed to my primary partner"), esteem ("I wanted to enhance my popularity"), gaining sexual variety ("I wanted a greater variety of sexual partners"), and situational factors ("I was drunk and not thinking clearly"). Our results also show personality correlates with infidelity motivations. Consistent with predictions, attachment insecurity was associated with motivations of anger, lack of love, neglect, low commitment, and esteem, while unrestricted sociosexual orientation was associated with sexual variety. Implicit beliefs (e.g., growth, destiny, romanticism) were differentially associated with sexual desire, low commitment, lack of love, and neglect. These findings highlight multifaceted motivations underlying infidelity, moving beyond relationship deficit models of infidelity, with implications for research and psychotherapy involving people's romantic and sexual relationships.
Friction and anchorage loading revisited.
Dholakia, Kartik D
2012-01-01
Contemporary concepts of sliding mechanics explain that friction is inevitable. To overcome this frictional resistance, excess force is required to retract the tooth along the archwire (i.e., individual retraction of canines, en masse retraction of anterior teeth), in addition to the amount of force required for tooth movement. The anterior tooth retraction force, in addition to the excess force needed to overcome friction, produces a reciprocal protraction force on the molars, thereby leading to increased anchorage loading. However, this traditional concept was challenged in recent literature based on finite element models, which did not correlate with the clinical scenario. This article will reinforce the fact that clinically, friction increases anchorage loading in all three planes of space, considering that tooth movement is a quasistatic process rather than a purely continuous or static one, and that conventional ways of determining the effects of static or dynamic friction on anchorage load cannot be applied to clinical situations (which consist of anatomical resistance units and a complex muscular force system). The article does not aim to quantify friction and its effect on the amount of anchorage load. Rather, a new perspective is provided regarding the role of various additional factors (not explained by the contemporary concept) that may influence friction and anchorage loading.
Generalized spin Sutherland systems revisited
Directory of Open Access Journals (Sweden)
L. Fehér
2015-04-01
We present generalizations of the spin Sutherland systems obtained earlier by Blom and Langmann and by Polychronakos in two different ways: from SU(n) Yang–Mills theory on the cylinder and by constraining geodesic motion on the N-fold direct product of SU(n) with itself, for any N>1. Our systems are in correspondence with the Dynkin diagram automorphisms of arbitrary connected and simply connected compact simple Lie groups. We give a finite-dimensional as well as an infinite-dimensional derivation and shed light on the mechanism whereby they lead to the same classical integrable systems. The infinite-dimensional approach, based on twisted current algebras (alias Yang–Mills with twisted boundary conditions), was inspired by the derivation of the spinless Sutherland model due to Gorsky and Nekrasov. The finite-dimensional method relies on Hamiltonian reduction under twisted conjugations of N-fold direct product groups, linking the quantum mechanics of the reduced systems to representation theory similarly as was explored previously in the N=1 case.
Mercury's magnetosphere and magnetotail revisited
International Nuclear Information System (INIS)
Bergan, S.; Engle, I.M.
1981-01-01
Magnetic observations which are not complicated by currents of trapped plasma are a good test of geomagnetopause and geomagnetotail predictions. Recent attempts to model the Hermean magnetospheric field, based on a planet-centered magnetic multipole field with a quadrupole moment in addition to the planetary dipole field, or on a dipole field linearly displaced from the planet center with no quadrupole moment, have produced reasonably good fits to the Mercury magnetic field measurements. In this work we find a better fit for a dipole displacement from the planet center by making use of an improved representation of the magnetic field in the magnetotail, where many of the Mercury measurements were made. The rms deviation of the data was reduced from 10 or 11 γ to 9.3 γ by employing this new tail field representation. Also, by making use of this new tail field representation, we find a best fit for a dipole displacement of -0.0285 R_M (earlier, 0.026 R_M) toward the dawn in the magnetic equatorial plane and 0.17 R_M (earlier, 0.189 R_M) northward along the magnetic dipole axis, where R_M is the planet radius. Thus with only minor adjustments in the displacement vector of the dipole from the planet center we achieve a measurable improvement in the fit of the data by using the improved magnetotail field representation.
The evolution of individuality revisited.
Radzvilavicius, Arunas L; Blackstone, Neil W
2018-03-25
Evolutionary theory is formulated in terms of individuals that carry heritable information and are subject to selective pressures. However, individuality itself is a trait that had to evolve - an individual is not an indivisible entity, but a result of evolutionary processes that necessarily begin at the lower level of hierarchical organisation. Traditional approaches to biological individuality focus on cooperation and relatedness within a group, division of labour, policing mechanisms and strong selection at the higher level. Nevertheless, despite considerable theoretical progress in these areas, a full dynamical first-principles account of how new types of individuals arise is missing. To the extent that individuality is an emergent trait, the problem can be approached by recognising the importance of individuating mechanisms that are present from the very beginning of the transition, when only lower-level selection is acting. Here we review some of the most influential theoretical work on the role of individuating mechanisms in these transitions, and demonstrate how a lower-level, bottom-up evolutionary framework can be used to understand biological complexity involved in the origin of cellular life, early eukaryotic evolution, sexual life cycles and multicellular development. Some of these mechanisms inevitably stem from environmental constraints, population structure and ancestral life cycles. Others are unique to specific transitions - features of the natural history and biochemistry that are co-opted into conflict mediation. Identifying mechanisms of individuation that provide a coarse-grained description of the system's evolutionary dynamics is an important step towards understanding how biological complexity and hierarchical organisation evolves. In this way, individuality can be reconceptualised as an approximate model that with varying degrees of precision applies to a wide range of biological systems. © 2018 Cambridge Philosophical Society.
Apollo 12 ropy glasses revisited
Wentworth, S. J.; Mckay, D. S.; Lindstrom, D. J.; Basu, A.; Martinez, R. R.; Bogard, D. D.; Garrison, D. H.
1994-01-01
We analyzed ropy glasses from Apollo 12 soils 12032 and 12033 by a variety of techniques including SEM/EDX, electron microprobe analysis, INAA, and Ar-39-Ar-40 age dating. The ropy glasses have potassium, rare earth elements, and phosphorus (KREEP)-like compositions different from those of local Apollo 12 mare soils; it is likely that the ropy glasses are of exotic origin. Mixing calculations indicate that the ropy glasses formed from a liquid enriched in KREEP and that the ropy glass liquid also contained a significant amount of mare material. The presence of solar Ar and a trace of regolith-derived glass within the ropy glasses are evidence that the ropy glasses contain a small regolith component. Anorthosite and crystalline breccia (KREEP) clasts occur in some ropy glasses. We also found within these glasses clasts of felsite (fine-grained granitic fragments) very similar in texture and composition to the larger Apollo 12 felsites, which have an Ar-39-Ar-40 degassing age of 800 +/- 15 Ma. Measurements of Ar-39-Ar-40 in 12032 ropy glass indicate that it was degassed at the same time as the large felsite, although the ropy glass was not completely degassed. The ropy glasses and felsites, therefore, probably came from the same source. Most early investigators suggested that the Apollo 12 ropy glasses were part of the ejecta deposited at the Apollo 12 site from the Copernicus impact. Our new data reinforce this model. If these ropy glasses are from Copernicus, they provide new clues to the nature of the target material at the Copernicus site, a part of the Moon that has not been sampled directly.
Mantle plumes on Venus revisited
Kiefer, Walter S.
1992-01-01
The Equatorial Highlands of Venus consist of a series of quasicircular regions of high topography, rising up to about 5 km above the mean planetary radius. These highlands are strongly correlated with positive geoid anomalies, with a peak amplitude of 120 m at Atla Regio. Shield volcanism is observed at Beta, Eistla, Bell, and Atla Regiones and in the Hathor Mons-Innini Mons-Ushas Mons region of the southern hemisphere. Volcanos have also been mapped in Phoebe Regio and flood volcanism is observed in Ovda and Thetis Regiones. Extensional tectonism is also observed in many of these regions. It is now widely accepted that at least Beta, Atla, Eistla, and Bell Regiones are the surface expressions of hot, rising mantle plumes. Upwelling plumes are consistent with both the volcanism and the extensional tectonism observed in these regions. The geoid anomalies and topography of these four regions show considerable variation. Peak geoid anomalies exceed 90 m at Beta and Atla, but are only 40 m at Eistla and 24 m at Bell. Similarly, the peak topography is greater at Beta and Atla than at Eistla and Bell. Such a range of values is not surprising because terrestrial hotspot swells also have a wide range of geoid anomalies and topographic uplifts. Kiefer and Hager used cylindrical axisymmetric, steady-state convection calculations to show that mantle plumes can quantitatively account for both the amplitude and the shape of the long-wavelength geoid and topography at Beta and Atla. In these models, most of the topography of these highlands is due to uplift by the vertical normal stress associated with the rising plume. Additional topography may also be present due to crustal thickening by volcanism and crustal thinning by rifting. Smrekar and Phillips have also considered the geoid and topography of plumes on Venus, but they restricted themselves to considering only the geoid-topography ratio and did not
Ting, Hiram; Thurasamy, Ramayah
2016-01-01
Notwithstanding the rise of trendy coffee cafés, little has been done to investigate revisit intention towards such cafés in the context of developing markets. In particular, there is a lack of studies that provide theoretical and practical explanations of the perceptions and behaviours of infrequent customers. Hence, this study looks into the subject matter using the theory of reasoned action and social exchange theory as the underpinning basis. The framework proposed by Pine and Gilmore (Strat Leadersh 28:18-23, 2000), which asserts the importance of product quality, service quality and experience quality in a progressive manner, is used to decompose perceived value in the model so as to determine their effects on intention to revisit the café. Given the importance of gaining practical insights into the revisit intention of infrequent customers, a pragmatist stance is assumed. An explanatory sequential mixed-method design is thus adopted, whereby the qualitative approach is used to confirm and complement the quantitative findings. A self-administered questionnaire-based survey is first administered before personal interviews are carried out at various cafés. Partial least squares structural equation modelling and content analysis are applied successively. In the quantitative findings, although product quality, service quality and experience quality all have a positive effect on perceived value and revisit intention towards trendy coffee cafés, experience quality has a greater effect than the others among infrequent customers. The qualitative findings not only confirm their importance, but, most importantly, explain the favourable impressions customers formed at trendy coffee cafés during their last in-store experience. While product and service quality might not necessarily stimulate them to revisit trendy coffee cafés, experience quality driven by the purposes of visit would likely affect their intention to revisit. As retaining customers is of utmost importance to
Carbon emission from global hydroelectric reservoirs revisited.
Li, Siyue; Zhang, Quanfa
2014-12-01
Substantial greenhouse gas (GHG) emissions from hydropower reservoirs have been of great concern recently, yet the significant carbon emitters of the drawdown area and the reservoir downstream (including spillways and turbines as well as river reaches below dams) have not been included in the global carbon budget. Here, we revisit GHG emission from hydropower reservoirs by considering reservoir surface area, the drawdown zone and the reservoir downstream. Our estimates demonstrate around 301.3 Tg carbon dioxide (CO2)/year and 18.7 Tg methane (CH4)/year from global hydroelectric reservoirs, which are much higher than recent observations. The sum of drawdown and downstream emissions, which is generally overlooked, represents 42% of the total CO2 and 67% of the total CH4 emissions from hydropower reservoirs. Accordingly, the global average emissions from hydropower are estimated to be 92 g CO2/kWh and 5.7 g CH4/kWh. Nonetheless, global hydroelectricity could currently reduce approximately 2,351 Tg CO2eq/year relative to a fossil fuel plant alternative. The new findings represent a substantial revision of carbon emission from the global hydropower reservoirs.
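As a quick consistency check on the figures quoted in this abstract, the annual totals divided by the per-kWh factors should imply roughly the same global hydroelectric generation for both gases. A minimal sketch; the unit conversions are ours, not from the paper:

```python
# Sanity check on the abstract's numbers:
# total annual emission / emission factor = implied global generation.
TG = 1e12  # grams per teragram

co2_total_g = 301.3 * TG   # 301.3 Tg CO2/year, in grams
ch4_total_g = 18.7 * TG    # 18.7 Tg CH4/year, in grams

gen_from_co2_kwh = co2_total_g / 92.0   # using 92 g CO2/kWh
gen_from_ch4_kwh = ch4_total_g / 5.7    # using 5.7 g CH4/kWh

# Both factors imply ~3,280 TWh/year of global hydroelectricity,
# so the quoted totals and per-kWh factors are mutually consistent.
print(round(gen_from_co2_kwh / 1e9))  # → 3275 (TWh/year)
print(round(gen_from_ch4_kwh / 1e9))  # → 3281 (TWh/year)
```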
Revisiting the argument from fetal potential
Directory of Open Access Journals (Sweden)
Manninen Bertha
2007-05-01
Full Text Available One of the most famous, and most derided, arguments against the morality of abortion is the argument from potential, which maintains that the fetus' potential to become a person and enjoy the valuable life common to persons entails that its destruction is prima facie morally impermissible. In this paper, I will revisit and offer a defense of the argument from potential. First, I will criticize the classical arguments proffered against the importance of fetal potential, specifically the arguments put forth by philosophers Peter Singer and David Boonin, by carefully unpacking the claims made in these arguments and illustrating why they are flawed. Second, I will maintain that fetal potential is morally relevant when it comes to the morality of abortion, but that it must be accorded a proper place in the argument. This proper place, however, cannot be found until we first answer a very important and complex question: we must first address the issue of personal identity, and when the fetus becomes the type of being who is relevantly identical to a future person. I will illustrate why the question of fetal potential can only be meaningfully addressed after we have first answered the question of personal identity and how it relates to the human fetus.
THE CONCEPT OF REFERENCE CONDITION, REVISITED ...
Ecological assessments of aquatic ecosystems depend on the ability to compare current conditions against some expectation of how they could be in the absence of significant human disturbance. The concept of a “reference condition” is often used to describe the standard or benchmark against which current condition is compared. If assessments are to be conducted consistently, then a common understanding of the definitions and complications of reference condition is necessary. A 2006 paper (Stoddard et al., 2006, Ecological Applications 16:1267-1276) made an early attempt at codifying the reference condition concept; in this presentation we will revisit the points raised in that paper (and others) and examine how our thinking has changed in a little over 10 years. Among the issues to be discussed: (1) the “moving target” created when reference site data are used to set thresholds in large-scale assessments; (2) natural vs. human disturbance and their effects on reference site distributions; (3) circularity and the use of biological data to assist in reference site identification; (4) using site-scale (in-stream or in-lake) measurements vs. landscape-level human activity to identify reference conditions.
Pipe failure probability - the Thomas paper revisited
International Nuclear Information System (INIS)
Lydell, B.O.Y.
2000-01-01
Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas-approach' used insights from actual failure statistics to calculate the probability of leakage and conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R and D leading up to this note was performed during 1994-1999. Motivated by data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation as suggested by H. M. Thomas
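The 'Thomas-approach' summarized above decomposes rupture frequency multiplicatively: a leak probability times a conditional probability of rupture given leakage. A minimal sketch of that decomposition follows; the numerical values are hypothetical placeholders, not figures from the Thomas paper or the SKI-PIPE database:

```python
# Illustrative sketch of the 'Thomas-approach' decomposition:
#   P(rupture) = P(leak) * P(rupture | leak)
# Both inputs below are hypothetical placeholders for demonstration only.

p_leak = 1e-4                 # assumed annual leak frequency (hypothetical)
p_rupture_given_leak = 0.05   # assumed conditional rupture probability (hypothetical)

p_rupture = p_leak * p_rupture_given_leak
print(p_rupture)  # → 5e-06
```

The practical appeal of the decomposition is that leak statistics are far more plentiful than rupture statistics, so each factor can be estimated from the service-experience data actually available.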
Revisiting the Survival Mnemonic Effect in Children
Directory of Open Access Journals (Sweden)
Josefa N. S. Pandeirada
2014-04-01
Full Text Available The survival processing paradigm is designed to explore the adaptive nature of memory functioning. The mnemonic advantage of processing information in fitness-relevant contexts, as has been demonstrated using this paradigm, is now well established, particularly in young adults; this phenomenon is often referred to as the “survival processing effect.” In the current experiment, we revisited the investigation of this effect in children and tested it in a new cultural group, using a procedure that differs from the existing studies with children. A group of 40 Portuguese children rated the relevance of unrelated words to a survival scenario and a new moving scenario. This encoding task was followed by a surprise free-recall task. Akin to what is typically found, survival processing produced better memory performance than the control condition (moving). These data put on firmer ground the idea that a mnemonic tuning to fitness-relevant encodings is present early in development. The theoretical importance of this result to the adaptive memory literature is discussed, as well as potential practical implications of this kind of approach to the study of memory in children.
Sapisochin, Gabriel
2016-06-01
In this commentary on Paul Denis's paper 'The drive revisited: mastery and satisfaction', the author defends the idea of a plurality of metapsychologies that must be contrasted with and distinguished from each other while avoiding incompatible translations between models. In this connection he presents various theoretical approaches to aggression and the death drive, and demonstrates the differences between the drive model and the model underlying the theory of internalized object relations. The author holds that the concept of the internal object differs from Freud's notion of the representation (Vorstellung). He also considers that the imago as defined by Paul Denis in fact corresponds to the concept of the internal object. Lastly, he addresses the complex issue of listening to archaic forms of psychic functioning and their non-discursive presentation within the analytic process, which affects the transference-countertransference link. Copyright © 2016 Institute of Psychoanalysis.
Pockets of Participation: Revisiting Child-Centred Participation Research
Franks, Myfanwy
2011-01-01
This article revisits the theme of the clash of interests and power relations at work in participatory research which is prescribed from above. It offers a possible route toward solving conflict between adult-led research carried out by young researchers, funding requirements and organisational constraints. The article explores issues of…
Rereading Albert B. Lord's The Singer of Tales. Revisiting the ...
African Journals Online (AJOL)
Access to a fresh set of video-recordings of Sesotho praise-poetry made in the year 2000 enabled the author to revisit his adaptation of Albert Lord's definition of the formula as a dynamic compositional device that the oral poet utilizes during delivery. The basic adaptation made in 1983 pertains to heroic praises (dithoko tsa ...
Literary Origins of the Term "School Psychologist" Revisited
Fagan, Thomas K.
2005-01-01
Previous research on the literary origins of the term "school psychologist" is revisited, and conclusions are revised in light of new evidence. It appears that the origin of the term in the American literature occurred as early as 1898 in an article by Hugo Munsterberg, predating the usage by Wilhelm Stern in 1911. The early references to the…
The Neutrosophic Logic View to Schrodinger's Cat Paradox, Revisited
Directory of Open Access Journals (Sweden)
Florentin Smarandache
2008-07-01
Full Text Available The present article discusses the Neutrosophic logic view of Schrodinger's cat paradox. We argue that this paradox involves some degree of indeterminacy (unknown) which Neutrosophic logic can take into consideration, whereas other methods, including Fuzzy logic, cannot. To make this proposition clear, we revisit our previous paper by offering an illustration using a modified coin tossing problem, known as Parrondo's game.
High precision mass measurements in Ψ and Υ families revisited
International Nuclear Information System (INIS)
Artamonov, A.S.; Baru, S.E.; Blinov, A.E.
2000-01-01
High precision mass measurements in Ψ and Υ families performed in 1980-1984 at the VEPP-4 collider with the OLYA and MD-1 detectors are revisited. The corrections for the new value of the electron mass are presented. The effect of the updated radiative corrections has been calculated for the J/Ψ(1S) and Ψ(2S) mass measurements.
The Importance of Being a Complement: CED Effects Revisited
Jurka, Johannes
2010-01-01
This dissertation revisits subject island effects (Ross 1967, Chomsky 1973) cross-linguistically. Controlled acceptability judgment studies in German, English, Japanese and Serbian show that extraction out of specifiers is consistently degraded compared to extraction out of complements, indicating that the Condition on Extraction domains (CED,…
Surface tension in soap films: revisiting a classic demonstration
International Nuclear Information System (INIS)
Behroozi, F
2010-01-01
We revisit a classic demonstration for surface tension in soap films and introduce a more striking variation of it. The demonstration shows how the film, pulling uniformly and normally on a loose string, transforms it into a circular arc under tension. The relationship between the surface tension and the string tension is analysed and presented in a useful graphical form. (letters and comments)
Additively homomorphic encryption with a double decryption mechanism, revisited
Peter, Andreas; Kronberg, M.; Trei, W.; Katzenbeisser, S.
We revisit the notion of additively homomorphic encryption with a double decryption mechanism (DD-PKE), which allows for additions in the encrypted domain while having a master decryption procedure that can decrypt all properly formed ciphertexts by using a special master secret. This type of
Revisiting Jack Goody to Rethink Determinisms in Literacy Studies
Collin, Ross
2013-01-01
This article revisits Goody's arguments about literacy's influence on social arrangements, culture, cognition, economics, and other domains of existence. Whereas some of his arguments tend toward technological determinism (i.e., literacy causes change in the world), other of his arguments construe literacy as a force that shapes and is shaped by…
Surface tension in soap films: revisiting a classic demonstration
Energy Technology Data Exchange (ETDEWEB)
Behroozi, F [Department of Physics, University of Northern Iowa, Cedar Falls, IA 50614 (United States)], E-mail: behroozi@uni.edu
2010-01-15
We revisit a classic demonstration for surface tension in soap films and introduce a more striking variation of it. The demonstration shows how the film, pulling uniformly and normally on a loose string, transforms it into a circular arc under tension. The relationship between the surface tension and the string tension is analysed and presented in a useful graphical form. (letters and comments)
A control center design revisited: learning from users’ appropriation
DEFF Research Database (Denmark)
Souza da Conceição, Carolina; Cordeiro, Cláudia
2014-01-01
This paper aims to present the lessons learned during a control center design project by revisiting another control center from the same company designed two and a half years before by the same project team. In light of the experience with the first project and its analysis, the designers and res...
A Feminist Revisit to the First-Year Curriculum.
Bernstein, Anita
1996-01-01
A seminar at Chicago-Kent College of Law (Illinois) that reviews six first-year law school courses by focusing on feminist issues in course content and structure is described. The seminar functions as both a review and a shift in perspective. Courses revisited include civil procedure, contracts, criminal law, justice and the legal system,…
Revisiting deforestation in Africa (1990–2010): One more lost ...
African Journals Online (AJOL)
This spotlight revisits the dynamics and prognosis outlined in the late 1980s in Déforestation en Afrique. This book on deforestation in Africa utilized available statistical data from the 1980s and was a pioneering, self-styled attempt to provide a holistic viewpoint of the ongoing trends pertaining to deforestation in ...
Moral Judgment Development across Cultures: Revisiting Kohlberg's Universality Claims
Gibbs, John C.; Basinger, Karen S.; Grime, Rebecca L.; Snarey, John R.
2007-01-01
This article revisits Kohlberg's cognitive developmental claims that stages of moral judgment, facilitative processes of social perspective-taking, and moral values are commonly identifiable across cultures. Snarey [Snarey, J. (1985). "The cross-cultural universality of social-moral development: A critical review of Kohlbergian research."…
Revisiting the quantum harmonic oscillator via unilateral Fourier transforms
International Nuclear Information System (INIS)
Nogueira, Pedro H F; Castro, Antonio S de
2016-01-01
The literature on the exponential Fourier approach to the one-dimensional quantum harmonic oscillator problem is revised and criticized. It is shown that the solution of this problem has been built on faulty premises. The problem is revisited via the Fourier sine and cosine transform method and the stationary states are properly determined by requiring definite parity and square-integrable eigenfunctions. (paper)
Transport benchmarks for one-dimensional binary Markovian mixtures revisited
International Nuclear Information System (INIS)
Malvagi, F.
2013-01-01
The classic benchmarks for transport through a binary Markovian mixture are revisited to look at the probability distribution function of the chosen 'results': reflection, transmission and scalar flux. We argue that the knowledge of the ensemble averaged results is not sufficient for reliable predictions: a measure of the dispersion must also be obtained. An algorithm to estimate this dispersion is tested. (author)
Thorbecke Revisited : The Role of Doctrinaire Liberalism in Dutch Politics
Drentje, Jan
2011-01-01
Thorbecke Revisited: The Role of Doctrinaire Liberalism in Dutch Politics In the political history of the nineteenth century Thorbecke played a crucial role. As the architect of the 1848 liberal constitutional reform he led three cabinets. In many ways he dominated the political discourse during the
Faraday effect revisited: sum rules and convergence issues
DEFF Research Database (Denmark)
Cornean, Horia; Nenciu, Gheorghe
2010-01-01
This is the third paper of a series revisiting the Faraday effect. The question of the absolute convergence of the sums over the band indices entering the Verdet constant is considered. In general, sum rules and traces per unit volume play an important role in solid-state physics, and they give...
The Lumbar Lordosis in Males and Females, Revisited.
Directory of Open Access Journals (Sweden)
Ori Hay
Full Text Available Whether differences exist in male and female lumbar lordosis has been debated by researchers who are divided as to the nature of variations in the spinal curve, their origin, reasoning, and implications from a morphological, functional and evolutionary perspective. Evaluation of the spinal curvature is constructive in understanding the evolution of the spine, as well as its pathology, planning of surgical procedures, monitoring its progression and treatment of spinal deformities. The aim of the current study was to revisit the nature of lumbar curve in males and females. Our new automated method uses CT imaging of the spine to measure lumbar curvature in males and females. The curves extracted from 158 individuals were based on the spinal canal, thus avoiding traditional pitfalls of using bone features for curve estimation. The model analysis was carried out on the entire curve, whereby both local and global descriptors were examined in a single framework. Six parameters were calculated: segment length, curve length, curvedness, lordosis peak location, lordosis cranial peak height, and lordosis caudal peak height. Compared to males, the female spine manifested a statistically significant greater curvature, a caudally located lordotic peak, and greater cranial peak height. As caudal peak height is similar for males and females, the illusion of deeper lordosis among females is due partially to the fact that the upper part of the female lumbar curve is positioned more dorsally (more backwardly inclined). Males and females manifest different lumbar curve shape, yet similar amount of inward curving (lordosis). The morphological characteristics of the female spine were probably developed to reduce stress on the vertebral elements during pregnancy and nursing.
The Lumbar Lordosis in Males and Females, Revisited.
Hay, Ori; Dar, Gali; Abbas, Janan; Stein, Dan; May, Hila; Masharawi, Youssef; Peled, Nathan; Hershkovitz, Israel
2015-01-01
Whether differences exist in male and female lumbar lordosis has been debated by researchers who are divided as to the nature of variations in the spinal curve, their origin, reasoning, and implications from a morphological, functional and evolutionary perspective. Evaluation of the spinal curvature is constructive in understanding the evolution of the spine, as well as its pathology, planning of surgical procedures, monitoring its progression and treatment of spinal deformities. The aim of the current study was to revisit the nature of lumbar curve in males and females. Our new automated method uses CT imaging of the spine to measure lumbar curvature in males and females. The curves extracted from 158 individuals were based on the spinal canal, thus avoiding traditional pitfalls of using bone features for curve estimation. The model analysis was carried out on the entire curve, whereby both local and global descriptors were examined in a single framework. Six parameters were calculated: segment length, curve length, curvedness, lordosis peak location, lordosis cranial peak height, and lordosis caudal peak height. Compared to males, the female spine manifested a statistically significant greater curvature, a caudally located lordotic peak, and greater cranial peak height. As caudal peak height is similar for males and females, the illusion of deeper lordosis among females is due partially to the fact that the upper part of the female lumbar curve is positioned more dorsally (more backwardly inclined). Males and females manifest different lumbar curve shape, yet similar amount of inward curving (lordosis). The morphological characteristics of the female spine were probably developed to reduce stress on the vertebral elements during pregnancy and nursing.
Who should do the dishes now? Revisiting gender and housework in contemporary urban South Wales
Mannay, Dawn
2016-01-01
This chapter revisits Jane Pilcher’s (1994) seminal work ‘Who should do the dishes? Three generations of Welsh women talking about men and housework’, which was originally published in Our Sister’s Land: the changing identities of women in Wales. As discussed in the introductory chapter, I began revisiting classic Welsh studies as part of my doctoral study Mothers and daughters on the margins: gender, generation and education (Mannay, 2012); this led to the later publication of a revisiting ...
Directory of Open Access Journals (Sweden)
Nikola Dedić
2013-06-01
Full Text Available The main aim of this text is to show parallels between rock music and poststructuralist philosophy. As a case study, one of the most celebrated rock albums of all time, Bob Dylan’s Highway 61 Revisited from 1965, is taken. It is one of the crucial albums in the history of popular culture, and it influenced the further development of rock music within the American counterculture of the 60s. Dylan’s turn from the politics of the American New Left and the folk movement, his relation to the notions of the author and intertextuality, and his connection with the experimental usage of language in the manner of avant-garde and neo-avant-garde poetry are juxtaposed with the main philosophical standpoints of Jean-François Lyotard, Jean Baudrillard, Roland Barthes and Julia Kristeva, which historically and chronologically coincide with the appearance of Dylan’s album.
The coordinate coherent states approach revisited
International Nuclear Information System (INIS)
Miao, Yan-Gang; Zhang, Shao-Jun
2013-01-01
We revisit the coordinate coherent states approach through two different quantization procedures in the quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we question this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles is still valid when we deal with the noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. However, under the second quantization procedure, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective greybody (deformed) factor. - Highlights: ► Suggest a canonical quantization in the coordinate coherent states approach. ► Prove the validity of the concept of point particles. ► Apply the canonical quantization to the Unruh effect and Hawking radiation. ► Find no deformations in the Unruh temperature and Hawking temperature. ► Provide the modified spectra of the Unruh effect and Hawking radiation.
Backwardation in energy futures markets: Metalgesellschaft revisited
International Nuclear Information System (INIS)
Charupat, N.; Deaves, R.
2003-01-01
Energy supply contracts negotiated by the US subsidiary of Metalgesellschaft Refining and Marketing (MGRM), which were the subject of much subsequent debate, are re-examined. The contracts were hedged by the US subsidiary barrel-for-barrel using short-dated energy derivatives. When the hedge program experienced difficulties, the derivatives positions were promptly liquidated by the parent company. Revisiting the MGRM contracts also provides the opportunity to explore the latest evidence on backwardation in energy markets. Accordingly, the paper first discusses the theoretical reasons for backwardation, followed by an empirical examination using the MGRM data available at the time of the hedge program in 1992 and a second set of data that became available in 2000. By using a more up-to-date data set covering a longer time period and by controlling for the time series properties of the data, the authors expect to provide more reliable empirical evidence on the behaviour of energy futures prices. Results based on the 1992 data suggest that the strategy employed by MGRM could be expected to be profitable while the risks were relatively low. However, analysis based on the 2000 data shows lower, although still significant, profits, but higher risks. The final conclusion is that the likelihood of problems similar to those faced by MGRM in 1992 is twice as high with the updated 2000 data, suggesting that the risk-return pattern of the stack-and-roll hedging strategy, using short-dated energy futures contracts to hedge long-term contracts, is less appealing now than when MGRM implemented its hedging program in 1992. 24 refs., 3 tabs., 6 figs
Generalized Fractional Processes with Long Memory and Time Dependent Volatility Revisited
Directory of Open Access Journals (Sweden)
M. Shelton Peiris
2016-09-01
Full Text Available In recent years, fractionally-differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper revisits the class of generalized fractionally-differenced processes generated by Gegenbauer polynomials and the ARMA structure (GARMA) with both long memory and time-dependent innovation variance. We establish the existence and uniqueness of second-order solutions. We also extend this family with innovations that follow GARCH and stochastic volatility (SV) models. Under certain regularity conditions, we give asymptotic results for the approximate maximum likelihood estimator for the GARMA-GARCH model. We discuss a Monte Carlo likelihood method for the GARMA-SV model and investigate finite sample properties via Monte Carlo experiments. Finally, we illustrate the usefulness of this approach using monthly inflation rates for France, Japan and the United States.
Is Sky the Limit? Revisiting ‘Exogenous Productivity of Judges’ Argument
Directory of Open Access Journals (Sweden)
Kamil Jonski
2014-12-01
Full Text Available This paper revisits the ‘exogenous productivity of judges’ hypothesis, laid down in numerous Law & Economics studies based on the ‘production function’ approach. It states that judges confronted with growing caseload pressure adjust their productivity, thereby increasing the number of resolved cases. We attribute such results to assumptions regarding the shape of the court’s ‘production function’, and present an alternative, hockey-stick ‘production function’ model that explicitly takes into account the time constraint faced by judges. Hence, we offer an attempt to reconcile the ‘production function’ with more traditional approaches to court performance, such as weighted caseload methods. We argue that such an empirical strategy is particularly valuable in the case of continental legal systems, which are characterized by higher procedural formalism. We also propose an extended methodology of model evaluation, taking into account the models’ ability to reproduce empirical regularities observed in ‘real world’ court systems.
Running spectral index from large-field inflation with modulations revisited
Energy Technology Data Exchange (ETDEWEB)
Czerny, Michael, E-mail: mczerny@tuhep.phys.tohoku.ac.jp [Department of Physics, Tohoku University, Sendai 980-8578 (Japan); Kobayashi, Takeshi, E-mail: takeshi@cita.utoronto.ca [Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, Ontario M5S 3H8 (Canada); Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5 (Canada); Takahashi, Fuminobu, E-mail: fumi@tuhep.phys.tohoku.ac.jp [Department of Physics, Tohoku University, Sendai 980-8578 (Japan); Kavli IPMU, TODIAS, University of Tokyo, Kashiwa 277-8583 (Japan)
2014-07-30
We revisit large field inflation models with modulations in light of the recent discovery of the primordial B-mode polarization by the BICEP2 experiment, which, when combined with the Planck+WP+highL data, gives a strong hint for additional suppression of the CMB temperature fluctuations at small scales. Such a suppression can be explained by a running spectral index. In fact, it was pointed out by two of the present authors (TK and FT) that the existence of both tensor mode perturbations and a sizable running of the spectral index is a natural outcome of large inflation models with modulations such as axion monodromy inflation. We find that this holds also in the recently proposed multi-natural inflation, in which the inflaton potential consists of multiple sinusoidal functions and therefore the modulations are a built-in feature.
Wibowo, Setyo Ferry; Sazali, Adnan; Kresnamurti R. P., Agung
2016-01-01
The purposes of this research are: 1) to describe the destination image, tourist satisfaction, and revisit intention of the Betawi cultural village Setu Babakan; 2) to test empirically the influence of destination image on revisit intention for the Betawi cultural village Setu Babakan; 3) to test empirically the influence of tourist satisfaction on revisit intention for the Betawi cultural village Setu Babakan; 4) to test empirically the influence of destination image toward revisit intention ...
The role of glycans in immune evasion: the human fetoembryonic defence system hypothesis revisited.
Clark, Gary F
2014-03-01
Emerging data suggest that mechanisms to evade the human immune system may be shared by the conceptus, tumour cells, persistent pathogens and viruses. It is therefore timely to revisit the human fetoembryonic defense system (Hu-FEDS) hypothesis that was proposed in two papers in the 1990s. The initial paper suggested that glycoconjugates expressed in the human reproductive system inhibited immune responses directed against gametes and the developing human by employing their carbohydrate sequences as functional groups. These glycoconjugates were proposed to block specific binding interactions and interact with lectins linked to signal transduction pathways that modulated immune cell functions. The second article suggested that aggressive tumour cells and persistent pathogens (HIV, H. pylori, schistosomes) either mimicked or acquired the same carbohydrate functional groups employed in this system to evade immune responses. This subterfuge enabled these pathogens and tumour cells to couple their survival to the human reproductive imperative. The Hu-FEDS model has been repeatedly tested since its inception. Data relevant to this model have also been obtained in other studies. Herein, the Hu-FEDS hypothesis is revisited in the context of these more recent findings. Far more supportive evidence for this model now exists than when it was first proposed, and many of the original predictions have been validated. This type of subterfuge by pathogens and tumour cells likely applies to all sexually reproducing metazoans that must protect their gametes from immune responses. Intervention in these pathological states will likely remain problematic until this system of immune evasion is fully understood and appreciated.
The timeline of the lunar bombardment: Revisited
Morbidelli, A.; Nesvorny, D.; Laurenz, V.; Marchi, S.; Rubie, D. C.; Elkins-Tanton, L.; Wieczorek, M.; Jacobson, S.
2018-05-01
The timeline of the lunar bombardment in the first Gy of Solar System history remains unclear. Basin-forming impacts (e.g. Imbrium, Orientale), occurred 3.9-3.7 Gy ago, i.e. 600-800 My after the formation of the Moon itself. Many other basins formed before Imbrium, but their exact ages are not precisely known. There is an intense debate between two possible interpretations of the data: in the cataclysm scenario there was a surge in the impact rate approximately at the time of Imbrium formation, while in the accretion tail scenario the lunar bombardment declined since the era of planet formation and the latest basins formed in its tail-end. Here, we revisit the work of Morbidelli et al. (2012) that examined which scenario could be compatible with both the lunar crater record in the 3-4 Gy period and the abundance of highly siderophile elements (HSE) in the lunar mantle. We use updated numerical simulations of the fluxes of asteroids, comets and planetesimals leftover from the planet-formation process. Under the traditional assumption that the HSEs track the total amount of material accreted by the Moon since its formation, we conclude that only the cataclysm scenario can explain the data. The cataclysm should have started ∼ 3.95 Gy ago. However we also consider the possibility that HSEs are sequestered from the mantle of a planet during magma ocean crystallization, due to iron sulfide exsolution (O'Neil, 1991; Rubie et al., 2016). We show that this is likely true also for the Moon, if mantle overturn is taken into account. Based on the hypothesis that the lunar magma ocean crystallized about 100-150 My after Moon formation (Elkins-Tanton et al., 2011), and therefore that HSEs accumulated in the lunar mantle only after this timespan, we show that the bombardment in the 3-4 Gy period can be explained in the accretion tail scenario. This hypothesis would also explain why the Moon appears so depleted in HSEs relative to the Earth. We also extend our analysis of the
Revisit ocean thermal energy conversion system
International Nuclear Information System (INIS)
Huang, J.C.; Krock, H.J.; Oney, S.K.
2003-01-01
by-products, especially drinking water, aquaculture and mariculture, can easily translate into billions of dollars in business opportunities. The current status of the OTEC system definitely deserves to be carefully revisited. This paper will examine recent major advancements in technology, evaluate costs and effectiveness, and assess the overall market environment of the OTEC system and describe its great renewable energy potential and overall benefits to the nations of the world
REVISITING THE SCATTERING GREENHOUSE EFFECT OF CO{sub 2} ICE CLOUDS
Energy Technology Data Exchange (ETDEWEB)
Kitzmann, D., E-mail: daniel.kitzmann@csh.unibe.ch [Center for Space and Habitability, University of Bern, Sidlerstr. 5, 3012 Bern (Switzerland)
2016-02-01
Carbon dioxide ice clouds are thought to play an important role for cold terrestrial planets with thick CO{sub 2} dominated atmospheres. Various previous studies showed that a scattering greenhouse effect by carbon dioxide ice clouds could result in a massive warming of the planetary surface. However, all of these studies only employed simplified two-stream radiative transfer schemes to describe the anisotropic scattering. Using accurate radiative transfer models with a general discrete ordinate method, this study revisits this important effect and shows that the positive climatic impact of carbon dioxide clouds was strongly overestimated in the past. The revised scattering greenhouse effect can have important implications for the early Mars, but also for planets like the early Earth or the position of the outer boundary of the habitable zone.
Revisiting top-bottom-tau Yukawa unification in supersymmetric grand unified theories
International Nuclear Information System (INIS)
Tobe, Kazuhiro; Wells, James D.
2003-01-01
Third family Yukawa unification, as suggested by minimal SO(10) unification, is revisited in light of recent experimental measurements and theoretical progress. We characterize unification in a semi-model-independent fashion, and conclude that finite b quark mass corrections from superpartners must be non-zero, but much smaller than naively would be expected. We show that a solution that does not require cancellations of dangerously large tanβ effects in observables implies that scalar superpartner masses should be substantially heavier than the Z scale, and perhaps inaccessible to all currently approved colliders. On the other hand, gauginos must be significantly lighter than the scalars. We demonstrate that a spectrum of anomaly-mediated gaugino masses and heavy scalars works well as a theory compatible with third family Yukawa unification and dark matter observations
The significance test controversy revisited the fiducial Bayesian alternative
Lecoutre, Bruno
2014-01-01
The purpose of this book is not only to revisit the “significance test controversy,” but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: The main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffreys are discussed in detail. Descriptive aspects: The misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys’ Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: The current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...
Energy Technology Data Exchange (ETDEWEB)
Janson, Oleg; Held, Karsten [IFP, TU Wien (Austria); Furukawa, Shunsuke [University of Tokyo (Japan); Momoi, Tsutomu [Condensed Matter Theory Laboratory, RIKEN (Japan); RIKEN Center for Emergent Material Science (Japan); Sindzingre, Philippe [Universite Pierre and Marie Curie, Paris (France); Richter, Johannes [University of Magdeburg (Germany)
2016-07-01
Motivated by recent experiments on volborthite single crystals showing a wide 1/3-magnetization plateau, we adopt the structural data and perform microscopic modeling by means of density functional theory (DFT). Using DFT+U, we find four leading magnetic exchanges: antiferromagnetic J and J{sub 2}, as well as ferromagnetic J{sup '} and J{sub 1}. Simulations of the spin Hamiltonian show good agreement with the experiment for J:J{sup '}:J{sub 1}:J{sub 2} = 1: -0.2: -0.5: 0.2 with J ≅ 252 K. The 1/3-plateau phase pertains to polarized magnetic trimers formed by strong J bonds. An effective J → ∞ model shows a tendency towards condensation of magnon bound states preceding the plateau phase.
International Nuclear Information System (INIS)
Buck, B.; Merchant, A.C.
1988-10-01
An elementary cluster model that visualizes an alpha particle interacting with a 3 H or 3 He cluster via a local potential is employed. The major requirements of the Pauli principle are ensured by choosing relative-motion quantum numbers N (the principal quantum number) and L to correspond to the microscopic situation in which the 3 H or 3 He nucleons occupy shell-model orbitals above the (0s) orbitals filled by the alpha particles. The S-factor for the capture reactions 3 He(α,γ) 7 Be and 3 H(α,γ) 7 Li is calculated by considering only electric dipole capture from incident s-waves. (author)
Radiative corrections to neutrino deep inelastic scattering revisited
International Nuclear Information System (INIS)
Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V.
2005-01-01
Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are re-calculated within the automatic SANC system. Terms with mass singularities are treated including higher order leading logarithmic corrections. Scheme dependence of corrections due to weak interactions is investigated. The results are implemented into the data analysis of the NOMAD experiment. The present theoretical accuracy in description of the process is discussed
The Assassination of John F. Kennedy: Revisiting the Medical Data
Rohrich, Rod J.; Nagarkar, Purushottam; Stokes, Mike; Weinstein, Aaron; Mantik, David W.; Jensen, J. Arthur
2014-01-01
Thank you for publishing "The Assassination of John F. Kennedy: Revisiting the Medical Data."1 The central conclusion of this study is that the assassination remains controversial and that some of the controversy must be attributable to the "reporting and handling of the medical evidence." With the greatest respect for you and Dr. Robert McClelland, let me argue that your text and on-line interviews perpetuate the central misunderstanding of the assassination and there...
Ambulatory thyroidectomy: A multistate study of revisits and complications
Orosco, RK; Lin, HW; Bhattacharyya, N
2015-01-01
© 2015 American Academy of Otolaryngology - Head and Neck Surgery Foundation. Objective. Determine rates and reasons for revisits after ambulatory adult thyroidectomy. Study Design. Cross-sectional analysis of multistate ambulatory surgery and hospital databases. Setting. Ambulatory surgery data from the State Ambulatory Surgery Databases of California, Florida, Iowa, and New York for calendar years 2010 and 2011. Subjects and Methods. Ambulatory thyroidectomy cases were linked to state ambul...
Doppler Processing with Ultra-Wideband (UWB) Radar Revisited
2018-01-01
Technical note. This technical note revisits previous work performed at the US Army Research Laboratory related to ... target considered previously is proportional to a delayed version of the transmitted signal, up to a complex constant factor. We write the received ...
Radiative corrections to double-Dalitz decays revisited
Kampf, Karol; Novotný, Jiři; Sanchez-Puertas, Pablo
2018-03-01
In this study, we revisit and complete the full next-to-leading order corrections to pseudoscalar double-Dalitz decays within the soft-photon approximation. Comparing to the previous study, we find small differences, which are nevertheless relevant for extracting information about the pseudoscalar transition form factors. Concerning the latter, these processes could offer the opportunity to test them—for the first time—in their double-virtual regime.
Dispute Resolution and Technology: Revisiting the Justification of Conflict Management
Koulu, Riikka
2016-01-01
This study, Dispute Resolution and Technology: Revisiting the Justification of Conflict Management, belongs to the fields of procedural law, legal theory and law and technology studies. In this study the changes in dispute resolution caused by technology are evaluated. The overarching research question of this study is how does implementing technology to dispute resolution challenge the justification of law as a legitimised mode of violence? Before answering such an abstract research question...
Deja vu: The Unified Command Plan of the Future Revisited
2011-05-19
Approved for public release; distribution is unlimited. Déjà vu: The Unified Command Plan of the Future Revisited. A monograph by Lieutenant Colonel Edward Francis Martignetti, School of Advanced Military Studies. Dates covered: July 2010 – May 2011.
Hospital revisit rate after a diagnosis of conversion disorder.
Merkler, Alexander E; Parikh, Neal S; Chaudhry, Simriti; Chait, Alanna; Allen, Nicole C; Navi, Babak B; Kamel, Hooman
2016-04-01
To estimate the hospital revisit rate of patients diagnosed with conversion disorder (CD). Using administrative data, we identified all patients discharged from California, Florida and New York emergency departments (EDs) and acute care hospitals between 2005 and 2011 with a primary discharge diagnosis of CD. Patients discharged with a primary diagnosis of seizure or transient global amnesia (TGA) served as control groups. Our primary outcome was the rate of repeat ED visits and hospital admissions after initial presentation. Poisson regression was used to compare rates between diagnosis groups while adjusting for demographic characteristics. We identified 7946 patients discharged with a primary diagnosis of CD. During a mean follow-up of 3.0 (±1.6) years, patients with CD had a median of three (IQR, 1-9) ED or inpatient revisits, compared with 0 (IQR, 0-2) in patients with TGA and 3 (IQR, 1-7) in those with seizures. Revisit rates were 18.25 (95% CI, 18.10 to 18.40) visits per 100 patients per month in those with CD, 3.90 (95% CI, 3.84 to 3.95) in those with TGA and 17.78 (95% CI, 17.75 to 17.81) in those with seizures. As compared to CD, the incidence rate ratio for repeat ED visits or hospitalisations was 0.89 (95% CI, 0.86 to 0.93) for seizure disorder and 0.32 (95% CI 0.31 to 0.34) for TGA. CD is associated with a substantial hospital revisit rate. Our findings suggest that CD is not an acute, time-limited response to stress, but rather that CD is a manifestation of a broader pattern of chronic neuropsychiatric disease. Published by the BMJ Publishing Group Limited.
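The units quoted in this abstract (revisits per 100 patients per month) are easy to reproduce with a crude calculation. The sketch below uses the cohort size and mean follow-up from the abstract but a hypothetical total revisit count, chosen to land near the reported 18.25; the paper's incidence rate ratios come from Poisson regression adjusted for demographics, which a raw ratio like this does not reproduce.

```python
# Crude (unadjusted) revisit-rate arithmetic in the abstract's units.
# Cohort size (7946) and mean follow-up (3.0 years) are from the abstract;
# the total revisit count below is hypothetical.

def revisit_rate_per_100_patient_months(visits, patients, mean_follow_up_years):
    """Total revisits per 100 patients per month of follow-up."""
    patient_months = patients * mean_follow_up_years * 12.0
    return 100.0 * visits / patient_months

cd_rate = revisit_rate_per_100_patient_months(
    visits=52_200, patients=7946, mean_follow_up_years=3.0)
print(f"CD revisit rate: {cd_rate:.2f} per 100 patient-months")  # ~18.25
```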
Serotype-specific mortality from invasive Streptococcus pneumoniae disease revisited
DEFF Research Database (Denmark)
Martens, Pernille; Worm, Signe Westring; Lundgren, Bettina
2004-01-01
Martens P, Worm SW, Lundgren B, Konradsen HB, Benfield T. Department of Infectious Diseases 144, Hvidovre University Hospital, DK-2650 Hvidovre, Denmark. pernillemartens@yahoo.com BACKGROUND: Invasive infection with Streptococcus pneumoniae (pneumococci) causes significant morbidity and mortality. Case series and experimental data have shown that the capsular serotype is involved in the pathogenesis and is a determinant of disease outcome. METHODS: Retrospective review of 464 cases of invasive disease among adults diagnosed...
Energy Technology Data Exchange (ETDEWEB)
Guo, Rui-Yun [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Zhang, Xin [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Peking University, Center for High Energy Physics, Beijing (China)
2017-12-15
We revisit the constraints on inflation models by using the current cosmological observations involving the latest local measurement of the Hubble constant (H{sub 0} = 73.00 ± 1.75 km s{sup -1} Mpc{sup -1}). We constrain the primordial power spectra of both scalar and tensor perturbations with the observational data including the Planck 2015 CMB full data, the BICEP2 and Keck Array CMB B-mode data, the BAO data, and the direct measurement of H{sub 0}. In order to relieve the tension between the local determination of the Hubble constant and the other astrophysical observations, we consider the additional parameter N{sub eff} in the cosmological model. We find that, for the ΛCDM+r+N{sub eff} model, the scale invariance is only excluded at the 3.3σ level, and ΔN{sub eff} > 0 is favored at the 1.6σ level. Comparing the obtained 1σ and 2σ contours of (n{sub s},r) with the theoretical predictions of selected inflation models, we find that both the convex and the concave potentials are favored at 2σ level, the natural inflation model is excluded at more than 2σ level, the Starobinsky R{sup 2} inflation model is only favored at around 2σ level, and the spontaneously broken SUSY inflation model is now the most favored model. (orig.)
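For reference, comparisons against the (n_s, r) contours rest on the standard power-law parameterization of the primordial scalar spectrum, with each inflation model contributing a predicted point or line in that plane. A sketch with the textbook large-N formulas for the Starobinsky R² model (standard results, not specific to this paper; the pivot scale k = 0.05 Mpc⁻¹ is a conventional choice):

```python
# Standard parameterizations used in (n_s, r) model comparisons. The
# Starobinsky R^2 formulas are the usual large-N approximations.

def power_law_spectrum(k, A_s, n_s, k_pivot=0.05):
    """Scalar power spectrum P(k) = A_s * (k / k_pivot)**(n_s - 1), k in Mpc^-1."""
    return A_s * (k / k_pivot) ** (n_s - 1.0)

def starobinsky_ns_r(N):
    """Starobinsky R^2 inflation at N e-folds: n_s ~ 1 - 2/N, r ~ 12/N^2."""
    return 1.0 - 2.0 / N, 12.0 / N ** 2

ns, r = starobinsky_ns_r(60)
print(f"Starobinsky, N = 60: n_s = {ns:.4f}, r = {r:.4f}")  # 0.9667, 0.0033
```

The tiny predicted r is why such a model sits near the bottom of the (n_s, r) plane, while the B-mode amplitude pulls the contours upward.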
The Time has come to Revisit Solvency Funding Rules
Directory of Open Access Journals (Sweden)
Norma Nielson
2018-02-01
was small, and so the rules were not an unacceptable burden. Those rich returns are gone. Now, the gap between valuations has grown dramatically. In B.C., for example, a recent analysis found that under a going-concern valuation, 75 per cent of the 143 defined-benefit plans registered in the province in 2015 had at least 100-per-cent funding, and the median funding ratio was 124 per cent. Under a solvency model, the median funding ratio was instead estimated to be a much lower 85 per cent. Closing that gap would require onerous pension contributions. More importantly, the contributions it triggers might never be needed to cover benefits. Quebec is the first province to recognize that pension-funding rules need to be revisited and made more responsive, with new rules coming in that will reduce the unnecessary burden on employers while also adapting to changes in the economic environment. Ontario is showing signs that it will take steps in the same direction. Regulators everywhere should be revisiting pension rules to: remove the solvency-valuation requirement for well-funded plans, while allowing the regulator to assume a worst-case scenario in the uncommon case where they believe it to be warranted; develop a method to rate the credit risk of a plan; and be less stringent and more realistic about plan liabilities, by allowing some types of liabilities to use a longer amortization period while still restricting plan changes for underfunded plans. The result would not only reduce the cost and work of over-regulating well-funded, well-run plans, but also free up cash. By reducing pressure on sponsors' cash flow and adding more flexibility, policymakers will ultimately make defined-benefit pension plans more sustainable. They might even see defined-benefit plans making a comeback among employers for whom heavy contributions were enough to drive them out of the DB world.
Replacement Models Revisited | Alabi | Journal of Research in ...
African Journals Online (AJOL)
The objective is to review the annual total cost and the cumulative annual total cost average hitherto used as replacement methods. The study showed some disparity in the optimal age of single replacement as used by some authors. Hence, an arithmetic mean method of finding the optimal single-replacement age of an ...
Revisiting Aristotle's causality: model for development in Nigeria ...
African Journals Online (AJOL)
Thus, the development equation must be balanced. Aristotle's theory of causality balances the equation. Since Aristotle has no theory of development, every individual, nation or industry in pursuit of development should seek to drive economic growth and human capital development together rather than focus on ...
Rabbit models for biomedical research revisited via genome editing approaches
HONDA, Arata; OGURA, Atsuo
2017-01-01
Although the laboratory rabbit has long contributed to many paradigmatic studies in biology and medicine, it is often considered to be a “classical animal model” because in the last 30 years, the laboratory mouse has been more often used, thanks to the availability of embryonic stem cells that have allowed the generation of gene knockout (KO) animals. However, recent genome-editing strategies have changed this unrivaled condition; so far, more than 10 mammalian species have been added to the list of KO animals. Among them, the rabbit has distinct advantages for application of genome-editing systems, such as easy application of superovulation, consistency with fertile natural mating, well-optimized embryo manipulation techniques, and the short gestation period. The rabbit has now returned to the stage of advanced biomedical research. PMID:28579598
REVISITING A CLASSIC: THE PARKER–MOFFATT PROBLEM
International Nuclear Information System (INIS)
Pezzi, O.; Servidio, S.; Valentini, F.; Malara, F.; Veltri, P.; Parashar, T. N.; Yang, Y.; Matthaeus, W. H.; Vásconez, C. L.
2017-01-01
The interaction of two colliding Alfvén wave packets is described here by means of magnetohydrodynamics (MHD) and hybrid kinetic numerical simulations. The MHD evolution revisits the theoretical insights described by Moffatt, Parker, Kraichnan, Chandrasekhar, and Elsässer in which the oppositely propagating large-amplitude wave packets interact for a finite time, initiating turbulence. However, the extension to include compressive and kinetic effects, while maintaining the gross characteristics of the simpler classic formulation, also reveals intriguing features that go beyond the pure MHD treatment.
Revisiting the level scheme of the proton emitter 151Lu
International Nuclear Information System (INIS)
Wang, F.; Sun, B.H.; Liu, Z.; Scholey, C.; Eeckhaudt, S.; Grahn, T.; Greenlees, P.T.; Jones, P.; Julin, R.; Juutinen, S.; Kettelhut, S.; Leino, M.; Nyman, M.; Rahkila, P.; Saren, J.; Sorri, J.; Uusitalo, J.; Ashley, S.F.; Cullen, I.J.; Garnsworthy, A.B.; Gelletly, W.; Jones, G.A.; Pietri, S.; Podolyak, Z.; Steer, S.; Thompson, N.J.; Walker, P.M.; Williams, S.; Bianco, L.; Darby, I.G.; Joss, D.T.; Page, R.D.; Pakarinen, J.; Rigby, S.; Cullen, D.M.; Khan, S.; Kishada, A.; Gomez-Hornillos, M.B.; Simpson, J.; Jenkins, D.G.; Niikura, M.; Seweryniak, D.; Shizuma, Toshiyuki
2015-01-01
An experiment aiming to search for new isomers in the region of the proton emitter 151 Lu was performed at the Accelerator Laboratory of the University of Jyväskylä (JYFL), combining the high-resolution γ-ray array JUROGAM, the gas-filled RITU separator and the GREAT detectors with the triggerless total data readout (TDR) acquisition system. In this contribution, we revisit the level scheme of 151 Lu using the proton-tagging technique. A level scheme consistent with the latest experimental results is obtained, and three additional levels are identified at high excitation energies. (author)
Rural-Nonrural Disparities in Postsecondary Educational Attainment Revisited
Byun, Soo-yong; Meece, Judith L.; Irvin, Matthew J.
2013-01-01
Using data from the National Educational Longitudinal Study, this study revisited rural-nonrural disparities in educational attainment by considering a comprehensive set of factors that constrain and support youth's college enrollment and degree completion. Results showed that rural students were more advantaged in community social resources compared to nonrural students, and these resources were associated with a significant increase in the likelihood of bachelor's degree attainment. Yet results confirmed that rural students lagged behind nonrural students in attaining a bachelor's degree largely due to their lower socioeconomic background. The findings present a more comprehensive picture of the complexity of geographic residence in shaping college enrollment and degree attainment. PMID:24285873
Revisited neoclassical transport theory for steep, collisional plasma edge profiles
International Nuclear Information System (INIS)
Rogister, A.L.
1994-01-01
Published neoclassical results are misleading as regards the plasma edge, for they do not adequately take the peculiar local conditions into account, in particular the fact that the density and temperature variation length-scales are quite small. Novel coupled neoclassical equations are obtained, not only for the evolution of the density and temperatures, but also for the radial electric field and the evolution of the parallel ion momentum: gyro-stresses and inertia indeed upset the otherwise de facto ambipolarity of particle transport, and a radial electric field necessarily builds up. The increased nonlinear character of these revisited neoclassical equations widens the realm of possible plasma behaviors. (author)
Small-angle scattering theory revisited: Photocurrent and spatial localization
DEFF Research Database (Denmark)
Basse, N.P.; Zoletnik, S.; Michelsen, Poul
2005-01-01
In this paper theory on collective scattering measurements of electron density fluctuations in fusion plasmas is revisited. We present the first full derivation of the expression for the photocurrent beginning at the basic scattering concepts. Thereafter we derive detailed expressions for the auto- and cross-power spectra obtained from measurements. These are discussed and simple simulations made to elucidate the physical meaning of the findings. In this context, the known methods of obtaining spatial localization are discussed and appraised. Where actual numbers are applied, we utilize quantities from two...
NP-hardness of the cluster minimization problem revisited
Adib, Artur B.
2005-10-01
The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
NP-hardness of the cluster minimization problem revisited
International Nuclear Information System (INIS)
Adib, Artur B
2005-01-01
The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested
NP-hardness of the cluster minimization problem revisited
Energy Technology Data Exchange (ETDEWEB)
Adib, Artur B [Physics Department, Brown University, Providence, RI 02912 (United States)
2005-10-07
The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
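The source problem of the polynomial-time transformation mentioned here, maximum independent set on unit disk graphs (points in the plane, with edges between pairs at distance at most one), is easy to state concretely. A brute-force sketch for illustration only; since the problem is NP-hard, this is feasible only for tiny inputs:

```python
# Maximum independent set on a unit disk graph: vertices are points in the
# plane, edges join points at Euclidean distance <= 1. Exhaustive search,
# illustrative only (exponential time).
from itertools import combinations
from math import dist

def unit_disk_edges(points):
    """Edge set of the unit disk graph on the given points."""
    return {(i, j) for i, j in combinations(range(len(points)), 2)
            if dist(points[i], points[j]) <= 1.0}

def max_independent_set_size(points):
    """Size of the largest vertex subset with no unit-disk edge inside it."""
    edges = unit_disk_edges(points)
    n = len(points)
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            chosen = set(subset)
            if not any(i in chosen and j in chosen for i, j in edges):
                return size
    return 0

pts = [(0.0, 0.0), (0.5, 0.0), (3.0, 0.0), (3.0, 3.0)]
print(max_independent_set_size(pts))  # -> 3 (one of the close pair + the two far points)
```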
REVISITING A CLASSIC: THE PARKER–MOFFATT PROBLEM
Energy Technology Data Exchange (ETDEWEB)
Pezzi, O.; Servidio, S.; Valentini, F.; Malara, F.; Veltri, P. [Dipartimento di Fisica, Università della Calabria, 87036 Rende (CS) (Italy); Parashar, T. N.; Yang, Y.; Matthaeus, W. H. [Department of Physics and Astronomy, University of Delaware, DE 19716 (United States); Vásconez, C. L. [Departamento de Física, Escuela Politécnica Nacional, Quito (Ecuador)
2017-01-10
The interaction of two colliding Alfvén wave packets is described here by means of magnetohydrodynamics (MHD) and hybrid kinetic numerical simulations. The MHD evolution revisits the theoretical insights described by Moffatt, Parker, Kraichnan, Chandrasekhar, and Elsässer in which the oppositely propagating large-amplitude wave packets interact for a finite time, initiating turbulence. However, the extension to include compressive and kinetic effects, while maintaining the gross characteristics of the simpler classic formulation, also reveals intriguing features that go beyond the pure MHD treatment.
Revisiting reflexology: Concept, evidence, current practice, and practitioner training
Embong, Nurul Haswani; Soh, Yee Chang; Ming, Long Chiau; Wong, Tin Wui
2015-01-01
Reflexology is basically a study of how one part of the human body relates to another. Reflexology practitioners rely on a map of reflexes on the feet and hands to all the internal organs and other body parts. They believe that by applying appropriate pressure and massaging certain spots on the feet and hands, all other body parts can be energized and rejuvenated. This review aimed to revisit the concept of reflexology and examine its effectiveness, practices, and th...
Revisiting the role of individual variability in population persistence and stability.
Directory of Open Access Journals (Sweden)
Andrew Morozov
Populations often exhibit a pronounced degree of individual variability, and this can be important when constructing ecological models. In this paper, we revisit the role of inter-individual variability in population persistence and stability under predation pressure. As a case study, we consider interactions between a structured population of zooplankton grazers and their predators. Unlike previous structured population models, which only consider variability of individuals according to age or body size, we focus on physiological and behavioural structuring. We first experimentally demonstrate a high degree of variation of individual consumption rates in three dominant species of herbivorous copepods (Calanus finmarchicus, Calanus glacialis, Calanus euxinus) and show that this disparity implies a pronounced variation in the consumption capacities of individuals. Then we construct a parsimonious predator-prey model which takes into account the intra-population variability of prey individuals according to behavioural traits: effectively, each organism has a 'personality' of its own. Our modelling results show that structuring of prey according to their growth rate and vulnerability to predation can dampen predator-prey cycles and enhance persistence of a species, even if the resource stock for prey is unlimited. The main mechanism of efficient top-down regulation is shown to work by letting the prey population become dominated by less vulnerable individuals when predator densities are high, while the trait distribution recovers when the predator densities are low.
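The mechanism described, prey becoming dominated by less vulnerable individuals when predators are abundant, can be caricatured with a toy two-class Lotka-Volterra-type system. This is an illustrative sketch with made-up parameters, not the paper's model:

```python
# Toy sketch, not the paper's model: two prey behavioural classes sharing one
# predator. "Bold" prey grow fast but are more vulnerable; "shy" prey grow
# slowly but are harder to catch. All parameter values are made up.

def step(state, dt=0.01, r_bold=1.0, r_shy=0.4, a_bold=1.5, a_shy=0.3,
         conv=0.5, m=0.6):
    """One forward-Euler step of the three-variable system."""
    bold, shy, pred = state
    d_bold = r_bold * bold - a_bold * bold * pred
    d_shy = r_shy * shy - a_shy * shy * pred
    d_pred = conv * (a_bold * bold + a_shy * shy) * pred - m * pred
    return (bold + dt * d_bold, shy + dt * d_shy, pred + dt * d_pred)

state = (1.0, 1.0, 0.5)
for _ in range(1500):  # integrate 15 time units
    state = step(state)
bold, shy, pred = state
# Final composition; structuring prey by vulnerability is the ingredient the
# paper shows can dampen predator-prey cycles.
print(f"bold={bold:.3f}, shy={shy:.3f}, predators={pred:.3f}")
```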
Quantization of the damped harmonic oscillator revisited
Energy Technology Data Exchange (ETDEWEB)
Baldiotti, M.C., E-mail: baldiott@fma.if.usp.b [Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318-CEP, 05315-970 Sao Paulo, S.P. (Brazil); Fresneda, R., E-mail: fresneda@gmail.co [Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318-CEP, 05315-970 Sao Paulo, S.P. (Brazil); Gitman, D.M., E-mail: gitman@dfn.if.usp.b [Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318-CEP, 05315-970 Sao Paulo, S.P. (Brazil)
2011-04-11
We return to the description of the damped harmonic oscillator with an assessment of previous works, in particular the Bateman-Caldirola-Kanai model and a new model proposed by one of the authors. We argue that the latter has better high-energy behavior and is connected to existing open-systems approaches. - Highlights: We prove the local equivalence of two damped harmonic oscillator models. We find different high-energy behaviors between the two models. Based on the local equivalence, we make a simple construction of the coherent states.
Quantization of the damped harmonic oscillator revisited
International Nuclear Information System (INIS)
Baldiotti, M.C.; Fresneda, R.; Gitman, D.M.
2011-01-01
We return to the description of the damped harmonic oscillator with an assessment of previous works, in particular the Bateman-Caldirola-Kanai model and a new model proposed by one of the authors. We argue that the latter has better high-energy behavior and is connected to existing open-systems approaches. - Highlights: → We prove the local equivalence of two damped harmonic oscillator models. → We find different high-energy behaviors between the two models. → Based on the local equivalence, we make a simple construction of the coherent states.
Radioecological sensitivity. Danish fallout data revisited
International Nuclear Information System (INIS)
Nielsen, S.P.; Oehlenschlaeger, M.
1999-01-01
Danish fallout data covering four decades are interpreted in terms of radioecological sensitivity. The radioecological sensitivity is the time-integrated radionuclide concentration in an environmental sample from a unit ground deposition (e.g. Bq y kg^-1 per GBq m^-2). The fallout data comprise observed levels of the radionuclides 137Cs and 90Sr in precipitation, grass, milk, beef and diet. The data are analysed with different types of radioecological models: traditional UNSCEAR models and more recent dynamic models. The traditional models provide empirical relationships between the annual fallout from precipitation and the annual average levels in grass, milk, beef and diet. The relationships may be derived from spreadsheet calculations. ECOSYS and FARMLAND represent more recent radioecological models, which are available as software for personal computers. These models are more mechanistic and require information on a range of topics, e.g. mode of deposition and nuclide-dependent and nuclide-independent parameters. The more recent models do not reproduce the fallout data better than the traditional models. However, the general features of the more recent models make them suited for prediction of radiological consequences of routine and accidental releases in areas where limited radioecological data are available. The work is part of the NKS/BOK-2.1 project on Important Nordic Food Chains, aiming at characterising radioecological sensitivity and variability across the Nordic countries. (au)
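As a hedged illustration of the definition above (radioecological sensitivity as the time-integrated concentration per unit ground deposition), the following sketch integrates a hypothetical single-exponential decline of a radionuclide concentration in a food sample. The initial concentration, effective half-life and deposition value are invented for illustration and are not taken from the Danish data.

```python
import numpy as np

def radioecological_sensitivity(c0, t_eff_half, deposition, years=40):
    """Time-integrated concentration (Bq y kg^-1) per unit deposition (GBq m^-2).

    c0         -- initial concentration in the sample (Bq/kg), hypothetical
    t_eff_half -- effective half-life of the decline (years), hypothetical
    deposition -- ground deposition that caused it (GBq/m^2), hypothetical
    """
    lam = np.log(2) / t_eff_half
    t = np.linspace(0.0, years, 100_000)
    integral = np.trapz(c0 * np.exp(-lam * t), t)  # Bq y kg^-1
    return integral / deposition

# Hypothetical numbers: 100 Bq/kg initially, a 2-year effective half-life,
# from a deposition of 1 GBq/m^2; the analytic value is c0/lam = 288.5.
print(radioecological_sensitivity(100.0, 2.0, 1.0))
```

With a single-exponential decline the integral converges to c0/lam, so the numerical result should be close to 100 × 2 / ln 2 ≈ 288.5 Bq y kg^-1 per GBq m^-2.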
Learned Unsustainability: Bandura's Bobo Doll Revisited
Graham, Peter; Arshad-Ayaz, Adeela
2016-01-01
Developmental social psychologist Albert Bandura's 1961 Bobo doll experiments provide interesting insights for the field of education for sustainable development (ESD) today. This article discusses some of the implications Bandura's model of learned aggression has for modelling learned unsustainability. These lessons are not limited to educational…
Piezoresistance in p-type silicon revisited
DEFF Research Database (Denmark)
Richter, Jacob; Pedersen, Jesper; Brandbyge, Mads
2008-01-01
We calculate the shear piezocoefficient pi44 in p-type Si with a 6×6 k·p Hamiltonian model using the Boltzmann transport equation in the relaxation-time approximation. Furthermore, we fabricate and characterize p-type silicon piezoresistors embedded in a (001) silicon substrate. We find that the relaxation-time model needs to include all scattering mechanisms in order to obtain correct temperature and acceptor density dependencies. The k·p results are compared to results obtained using a recent tight-binding (TB) model, and the magnitude of the pi44 piezocoefficient obtained from the TB model is compared to experiments. Finally, we present a fitting function of temperature and acceptor density to the 6×6 model that can be used to predict the piezoresistance effect in p-type silicon. ©2008 American Institute of Physics
The light gluino mass window revisited
Janot, Patrick
2003-01-01
The precise measurements of the "electroweak observables" performed at LEP and SLC are well consistent with the standard model predictions. Deviations from the standard model arising from vacuum polarization diagrams (also called "weak loop corrections") have been constrained in a model-independent manner with the epsilon formalism. Within the same formalism, additional deviations from new physics production processes can also be constrained, still in a model-independent way. For instance, a 95% C.L. limit can be set on any additional contribution Delta Gamma_had to the hadronic Z width; when applied to the q qbar gluino gluino production process, it allows an absolute lower limit to be set on the gluino mass, m_gluino > 6.3 GeV/c2 at 95% C.L., which definitely closes the so-called light gluino mass window.
Dynamics of Shape Fluctuations of Quasi-spherical Vesicles Revisited
DEFF Research Database (Denmark)
Miao, L.; Lomholt, Michael Andersen; Kleis, J.
2002-01-01
In this paper, the dynamics of spontaneous shape fluctuations of a single, giant quasi-spherical vesicle formed from a single lipid species is revisited theoretically. A coherent physical theory for the dynamics is developed based on a number of fundamental principles and considerations, and a systematic formulation of the theory is given. We have given a new interpretation of the phenomenological constants in a canonical continuum description of fluid lipid-bilayer membranes and shown the consequences of this new interpretation in terms of the characteristics of the dynamics of vesicle shape fluctuations. Moreover, we have used the systematic formulation of our theory as a framework against which we have discussed the previously existing theories and their discrepancies. Finally, we have made a systematic prediction about the system-dependent characteristics of the relaxation dynamics of shape fluctuations of quasi-spherical vesicles with a view to experimental studies.
"Rapid Revisit" Measurements of Sea Surface Winds Using CYGNSS
Park, J.; Johnson, J. T.
2017-12-01
The Cyclone Global Navigation Satellite System (CYGNSS) is a space-borne GNSS-R (GNSS Reflectometry) mission, launched on December 15, 2016, for ocean surface wind speed measurements. CYGNSS comprises 8 small satellites in the same LEO orbit, so that the mission provides wind speed products with unprecedented coverage in both time and space for studying the multi-temporal behavior of oceanic winds. The nature of CYGNSS coverage results in some locations on Earth experiencing multiple wind speed measurements within a short period of time (a "clump" of observations in time, resulting in a "rapid revisit" series of measurements). Such observations could provide indications of regions experiencing rapid changes in wind speed, and therefore be of scientific utility. The temporally "clumped" properties of CYGNSS measurements are investigated using early CYGNSS L1/L2 measurements, and the results show that clump durations and spacing vary with latitude. For example, the duration of a clump can extend to a few hours at higher latitudes, with gaps between clumps ranging from 6 to as much as 12 hours depending on latitude. Examples are provided to indicate the potential of changes within a clump to produce a "rapid revisit" product for detecting convective activity. We also investigate detector designs for identifying convective activity. Results from analyses using recent CYGNSS L2 winds will be provided in the presentation.
Revisiting the Decision of Death in Hurst v. Florida.
Cooke, Brian K; Ginory, Almari; Zedalis, Jennifer
2016-12-01
The United States Supreme Court has considered the question of whether a judge or a jury must make the findings necessary to support imposition of the death penalty in several notable cases, including Spaziano v. Florida (1984), Hildwin v. Florida (1989), and Ring v. Arizona (2002). In 2016, the U.S. Supreme Court revisited the subject in Hurst v. Florida. Florida Statute § 921.141 allows the judge, after weighing aggravating and mitigating circumstances, to enter a sentence of life imprisonment or death. Before Hurst, Florida's bifurcated sentencing proceedings included an advisory sentence from jurors and a separate judicial hearing without juror involvement. In Hurst, the Court revisited the question of whether Florida's capital sentencing scheme violates the Sixth Amendment, which requires a jury, not a judge, to find each fact necessary to impose a sentence of death in light of Ring. In an eight-to-one decision, the Court reversed the judgment of the Florida Supreme Court, holding that the Sixth Amendment requires a jury to find the aggravating factors necessary for imposing the death penalty. The role of Florida juries in capital sentencing proceedings was thereby elevated from advisory to determinative. We examine the Court's decision and offer commentary regarding this shift from judge to jury in the final imposition of the death penalty and the overall effect of this landmark case. © 2016 American Academy of Psychiatry and the Law.
Directory of Open Access Journals (Sweden)
Siavash Zokaeieh
2018-05-01
Full Text Available Postmethod pedagogy and critical pedagogy have influential roles in education and language teaching. A number of practitioners may claim to teach according to the tenets of postmethod pedagogy; however, they may not be entirely aware of the oppositional intent and dynamism of this model. This article revisits the tenets and constituent elements of critical pedagogy and Freire's point of view vis-a-vis postmethod pedagogy and Kumaravadivelu's model, in order to illuminate the open-mindedness and dynamic character of these interwoven approaches. Furthermore, some criticisms of critical pedagogy and postmethod pedagogy are considered for a better understanding of their relevance and weaknesses. It is hoped that by bringing these two notions together, teachers, especially those who wish to use postmethod pedagogy in their settings, become more aware of the intellectual priorities of critical pedagogy and postmethod pedagogy, such as moving away from the banking model of education, the absence of bias, and deviation from predetermined, fixed frameworks in the classroom.
Decision Mining Revisited – Discovering Overlapping Rules
Mannhardt, F.; de Leoni, M.; Reijers, H.A.; van der Aalst, W.M.P.
2016-01-01
Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,
Decision Mining Revisited - Discovering Overlapping Rules
Mannhardt, F.; De Leoni, M.; Reijers, H.A.; van der Aalst, W.M.P.; Nurcan, S.; Soffer, P.; Bajec, M.; Eder, J.
2016-01-01
Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,
Decision mining revisited - Discovering overlapping rules
Mannhardt, Felix; De Leoni, Massimiliano; Reijers, Hajo A.; Van Der Aalst, Wil M P
2016-01-01
Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,
Revisiting concepts of thermal physiology: Predicting responses of mammals to climate change.
Mitchell, Duncan; Snelling, Edward P; Hetem, Robyn S; Maloney, Shane K; Strauss, Willem Maartin; Fuller, Andrea
2018-02-26
The accuracy of predictive models (also known as mechanistic or causal models) of animal responses to climate change depends on properly incorporating the principles of heat transfer and thermoregulation into those models. Regrettably, proper incorporation of these principles is not always evident. We have revisited the relevant principles of thermal physiology and analysed how they have been applied in predictive models of large mammals, which are particularly vulnerable to climate change. We considered dry heat exchange, evaporative heat transfer, the thermoneutral zone and homeothermy, and we examined the roles of size and shape in the thermal physiology of large mammals. We report on the following misconceptions in influential predictive models: underestimation of the role of radiant heat transfer, misassignment of the role and misunderstanding of the sustainability of evaporative cooling, misinterpretation of the thermoneutral zone as a zone of thermal tolerance or as a zone of sustainable energetics, confusion of upper critical temperature and critical thermal maximum, overestimation of the metabolic energy cost of evaporative cooling, failure to appreciate that the current advantages of size and shape will become disadvantageous as climate change advances, misassumptions about skin temperature and, lastly, misconceptions about the relationship between body core temperature and its variability with body mass in large mammals. Not all misconceptions invalidate the models, but we believe that preventing inappropriate assumptions from propagating will improve model accuracy, especially as models progress beyond their current typically static format to include genetic and epigenetic adaptation that can result in phenotypic plasticity. © 2018 The Authors. Journal of Animal Ecology © 2018 British Ecological Society.
Characteristic length of the knotting probability revisited
International Nuclear Information System (INIS)
Uehara, Erica; Deguchi, Tetsuo
2015-01-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(−N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA. (paper)
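The characteristic length described above can be extracted from simulated knotting probabilities by fitting the large-N decay; a minimal sketch follows, using synthetic data invented purely to illustrate the fit (the amplitude, power-law prefactor and true N_K are assumptions, not values from the paper).

```python
import numpy as np

# Synthetic knotting probabilities P_K(N) = A * N^b * exp(-N / N_K);
# A, b and the true N_K below are invented for illustration only.
true_NK = 300.0
N = np.arange(100, 2001, 100).astype(float)
P = 0.5 * (N / 100.0) * np.exp(-N / true_NK)

# Fit log P = c0 + b * log N - N / N_K by linear least squares;
# the coefficient of N gives -1/N_K, the characteristic length.
X = np.column_stack([np.ones_like(N), np.log(N), N])
coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
NK_est = -1.0 / coef[2]
print(f"estimated characteristic length N_K = {NK_est:.1f}")
```

Because the synthetic data follow the fitted form exactly, the estimate should recover N_K = 300; with real simulation data one would restrict the fit to the large-N region where the exponential part dominates.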
The motor theory of speech perception revisited.
Massaro, Dominic W; Chen, Trevor H
2008-04-01
Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.
Central bank independence and inflation revisited
Klomp, Jeroen; de Haan, Jakob
We re-examine the relationship between central bank independence (CBI), proxied by the central bank governor's turnover rate and an indicator based on central bank laws in place, and inflation using a random coefficient model with the Hildreth-Houck estimator for more than 100 countries in the
Central bank independence and inflation revisited
Klomp, J.G.; Haan, de J.
2010-01-01
We re-examine the relationship between central bank independence (CBI), proxied by the central bank governor's turnover rate and an indicator based on central bank laws in place, and inflation using a random coefficient model with the Hildreth-Houck estimator for more than 100 countries in the
Smith, Yolanda Y.
2013-01-01
Ella Josephine Baker (1903-1986), an African-American human rights activist, influenced the trajectory of the modern civil rights movement with her emphasis on local action, liberative pedagogy, and grassroots involvement. She modeled a pedagogy of liberation convinced that grassroots efforts at participatory democracy could empower diverse…
S-wave scattering of fermion revisited
International Nuclear Information System (INIS)
Rahaman, Anisur
2011-01-01
A model where a Dirac fermion is coupled to a background dilaton field is considered to study s-wave scattering of a fermion by a background dilaton black hole. It is found that an uncomfortable situation for the information loss scenario arises when one-loop corrections become involved during bosonization.
The LeChatelier-Samuelson principle revisited
Dietzenbacher, E
1992-01-01
Within the context of a linear Leontief model, the LeChatelier-Samuelson principle examines the effects of an increase in some final demand on the output levels under the constraint that the production of certain goods is held at its original value. The principle states that the increase in any
Circuit effects on Pierce instabilities revisited
International Nuclear Information System (INIS)
Kuhn, S.; Hoerhager, M.; Crystal, T.L.
1985-01-01
The problem of external circuit effects on the Pierce diode instability studied by Raadu and Silevitch is reconsidered. The characteristic equation and the ensuing eigenfrequencies are found to disagree with those given by the above authors; this discrepancy is attributed to the fact that one of their boundary conditions is inconsistent with the chosen model. (author)
Structure of Energetic Particle Mediated Shocks Revisited
International Nuclear Information System (INIS)
Mostafavi, P.; Zank, G. P.; Webb, G. M.
2017-01-01
The structure of collisionless shock waves is often modified by the presence of energetic particles that are not equilibrated with the thermal plasma (such as pickup ions [PUIs] and solar energetic particles [SEPs]). This is relevant to the inner and outer heliosphere and the Very Local Interstellar Medium (VLISM), where observations of shock waves (e.g., in the inner heliosphere) show that both the magnetic field and thermal gas pressure are less than the energetic particle component pressures. Voyager 2 observations revealed that the heliospheric termination shock (HTS) is very broad and mediated by energetic particles. PUIs and SEPs contribute both a collisionless heat flux and a higher-order viscosity. We show that the incorporation of both effects can completely determine the structure of collisionless shocks mediated by energetic ions. Since the reduced form of the PUI-mediated plasma model is structurally identical to the classical cosmic ray two-fluid model, we note that the presence of viscosity, at least formally, eliminates the need for a gas sub-shock in the classical two-fluid model, including in that regime where three are possible. By considering parameters upstream of the HTS, we show that the thermal gas remains relatively cold and the shock is mediated by PUIs. We determine the structure of the weak interstellar shock observed by Voyager 1. We consider the inclusion of the thermal heat flux and viscosity to address the most general form of an energetic particle-thermal plasma two-fluid model.
Structure of Energetic Particle Mediated Shocks Revisited
Energy Technology Data Exchange (ETDEWEB)
Mostafavi, P.; Zank, G. P. [Department of Space Science, University of Alabama in Huntsville, Huntsville, AL 35899 (United States); Webb, G. M. [Center for Space Plasma and Aeronomic Research (CSPAR), University of Alabama in Huntsville, Huntsville, AL 35899 (United States)
2017-05-20
The structure of collisionless shock waves is often modified by the presence of energetic particles that are not equilibrated with the thermal plasma (such as pickup ions [PUIs] and solar energetic particles [SEPs]). This is relevant to the inner and outer heliosphere and the Very Local Interstellar Medium (VLISM), where observations of shock waves (e.g., in the inner heliosphere) show that both the magnetic field and thermal gas pressure are less than the energetic particle component pressures. Voyager 2 observations revealed that the heliospheric termination shock (HTS) is very broad and mediated by energetic particles. PUIs and SEPs contribute both a collisionless heat flux and a higher-order viscosity. We show that the incorporation of both effects can completely determine the structure of collisionless shocks mediated by energetic ions. Since the reduced form of the PUI-mediated plasma model is structurally identical to the classical cosmic ray two-fluid model, we note that the presence of viscosity, at least formally, eliminates the need for a gas sub-shock in the classical two-fluid model, including in that regime where three are possible. By considering parameters upstream of the HTS, we show that the thermal gas remains relatively cold and the shock is mediated by PUIs. We determine the structure of the weak interstellar shock observed by Voyager 1. We consider the inclusion of the thermal heat flux and viscosity to address the most general form of an energetic particle-thermal plasma two-fluid model.
Racecadotril versus loperamide - Antidiarrheal research revisited
Huijghebaert, S.; Awouters, F.; Tytgat, G. N. J.
2003-01-01
Racecadotril is an enkephalinase inhibitor, presented as a purely antisecretory agent with advantages over the opiate-receptor agonist loperamide in the treatment of diarrhea. A critical review of the literature and the models used was performed. Although pretreatment with high doses of racecadotril
Revisiting the Yoruba ethnogenesis: a religiocultural hermeneutics ...
African Journals Online (AJOL)
The ethnogenesis of the Yoruba has been a subject of various historical-critical arguments in recent scholarship. Local historians and their foreign counterparts have explored different scholarly models to ascertain the origin of the Yoruba race. A sizeable number of these accounts usually explore the historical evidence from ...
The Determinants of Visitor’s Revisit Intention: A Lesson from Ijen Car Free Day
Directory of Open Access Journals (Sweden)
Cesya Rizkika Parahiyanti
2015-09-01
Full Text Available The event industry is currently considered an interesting business opportunity, contributing a major positive economic impact. An event can be characterized as a set of activities conducted by an event management company or event organizer to achieve specific outcomes. An event is also recognized as an essential marketing tool in the branding of a particular destination: it has a powerful capacity to differentiate one destination from others. This study aims to establish a theoretical model of event brand equity whose key components are evaluated from the visitor's perspective in the tourism context. Brand equity is constructed from four dimensions: event brand awareness (EBA), event brand image (EBI), event brand quality (EBQ), and event revisit intention (ERI). Using convenience sampling, 205 visitors of Ijen Car Free Day (ICFD), the event studied, served as respondents. This study uses Partial Least Squares (PLS) to analyze the data in both the outer-model and inner-model measurements. The findings indicate that EBA has a positive and significant influence on EBI, EBQ, and ERI. EBI is also shown to have a positive and significant influence on EBQ and ERI. In contrast, EBQ does not show a significant influence on ERI. These findings could provide a useful measure for assessing event brand equity management in the future.
Revisiting the decoupling effects in the running of the Cosmological Constant
International Nuclear Information System (INIS)
Antipin, Oleg; Melic, Blazenka
2017-01-01
We revisit the decoupling effects associated with heavy particles in the renormalization group running of the vacuum energy in a mass-dependent renormalization scheme. We find the running of the vacuum energy stemming from the Higgs condensate in the entire energy range and show that it behaves as expected from the simple dimensional arguments meaning that it exhibits the quadratic sensitivity to the mass of the heavy particles in the infrared regime. The consequence of such a running to the fine-tuning problem with the measured value of the Cosmological Constant is analyzed and the constraint on the mass spectrum of a given model is derived. We show that in the Standard Model (SM) this fine-tuning constraint is not satisfied while in the massless theories this constraint formally coincides with the well known Veltman condition. We also provide a remarkably simple extension of the SM where saturation of this constraint enables us to predict the radiative Higgs mass correctly. Generalization to constant curvature spaces is also given. (orig.)
Revisiting the decoupling effects in the running of the Cosmological Constant
Energy Technology Data Exchange (ETDEWEB)
Antipin, Oleg; Melic, Blazenka [Rudjer Boskovic Institute, Division of Theoretical Physics, Zagreb (Croatia)
2017-09-15
We revisit the decoupling effects associated with heavy particles in the renormalization group running of the vacuum energy in a mass-dependent renormalization scheme. We find the running of the vacuum energy stemming from the Higgs condensate in the entire energy range and show that it behaves as expected from the simple dimensional arguments meaning that it exhibits the quadratic sensitivity to the mass of the heavy particles in the infrared regime. The consequence of such a running to the fine-tuning problem with the measured value of the Cosmological Constant is analyzed and the constraint on the mass spectrum of a given model is derived. We show that in the Standard Model (SM) this fine-tuning constraint is not satisfied while in the massless theories this constraint formally coincides with the well known Veltman condition. We also provide a remarkably simple extension of the SM where saturation of this constraint enables us to predict the radiative Higgs mass correctly. Generalization to constant curvature spaces is also given. (orig.)
The life span of the biosphere revisited
Caldeira, Ken; Kasting, James F.
1992-01-01
How much longer the biosphere can survive on earth is reexamined using a more elaborate model than that of Lovelock and Whitfield (1982). The model includes a more accurate treatment of the greenhouse effect of CO2, a biologically mediated weathering parametrization, and the realization that C4 photosynthesis can persist to much lower concentrations of atmospheric CO2. It is found that a C4-plant-based biosphere could survive for at least another 0.9 Gyr to 1.5 Gyr after the present time, depending respectively on whether CO2 or temperature is the limiting factor. Within an additional 1 Gyr, earth may lose water to space, thereby following the path of Venus.
Revisiting the flocking transition using active spins.
Solon, A P; Tailleur, J
2013-08-16
We consider an active Ising model in which spins both diffuse and align on lattice in one and two dimensions. The diffusion is biased so that plus or minus spins hop preferably to the left or to the right, which generates a flocking transition at low temperature and high density. We construct a coarse-grained description of the model that predicts this transition to be a first-order liquid-gas transition in the temperature-density ensemble, with a critical density sent to infinity. In this first-order phase transition, the magnetization is proportional to the liquid fraction and thus varies continuously throughout the phase diagram. Using microscopic simulations, we show that this theoretical prediction holds in 2D whereas the fluctuations alter the transition in 1D, preventing, for instance, any spontaneous symmetry breaking.
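The lattice model summarized above (diffusing spins with spin-biased hopping and local alignment) can be sketched in a few lines; the update rule, lattice size, bias and temperature below are illustrative assumptions for a 1D toy version, not the authors' exact microscopic dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D active Ising sketch: particles carry spin +/-1, hop on a ring with a
# bias set by their spin, and flip against the local magnetization with a
# Metropolis-like rate. L, n_part, beta, eps and steps are illustrative only.
L, n_part, beta, eps, steps = 100, 300, 2.0, 0.9, 2000
pos = rng.integers(0, L, n_part)
spin = rng.choice([-1, 1], n_part)

for _ in range(steps):
    i = rng.integers(n_part)
    on_site = (pos == pos[i])
    m_local = spin[on_site].sum() - spin[i]  # magnetization of neighbors on site
    # flip attempt: less likely when the spin is aligned with m_local
    if rng.random() < np.exp(-beta * spin[i] * m_local / max(on_site.sum(), 1)):
        spin[i] = -spin[i]
    else:
        # biased diffusion: + spins hop right with prob (1+eps)/2, - spins left
        step = 1 if rng.random() < (1 + eps * spin[i]) / 2 else -1
        pos[i] = (pos[i] + step) % L

print("global magnetization per particle:", spin.mean())
```

In the flocking phase one would expect a dense, magnetized band travelling around the ring; the abstract's point is that in 1D fluctuations destroy the spontaneous symmetry breaking seen in this mean-field picture, so long runs of such a toy model should show the magnetization repeatedly reversing sign.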
Environmental policies in a differentiated oligopoly revisited
Energy Technology Data Exchange (ETDEWEB)
Fujiwara, Kenji [School of Economics, Kwansei Gakuin Univ., Uegahara 1-1-155, Nishinomiya, Hyogo 662-8501 (Japan)
2009-08-15
Constructing a model of polluting oligopoly with product differentiation, we consider how product differentiation, together with the presence and absence of free entry, affects optimal pollution tax/subsidy policies. The signs of the short- and long-run optimal pollution taxes are highly sensitive to the parameter measuring product differentiation as well as the presence of free entry. How they are affected by a change in product differentiation, which is not addressed in the existing literature, is also made clear. (author)
Stages of educational development? Beeby revisited
Guthrie, Gerard
1980-12-01
Beeby's model of stages of educational change in developing countries has been accepted into the educational literature with remarkably little critical analysis. Though valuable for a large number of experiential insights, the author argues that the model has certain weaknesses which should restrict its application. The stages have a teleological bias and are not sufficiently distinct, nor do the labels used for them meet the formal requirements of measuring scales. Furthermore, the model overgeneralizes from the experience of British-tradition South Pacific school systems, and the equation of western teaching with good teaching is an unsupported view which may not be valid in many countries. The most fundamental problem is the lack of clear distinction between empirical issues and the ethical judgements implicit in the formulation. However, the model has a number of positive features well worth building upon, such as its focus on the qualitative aspects of teaching and on qualitative change, the realistic emphasis on the gradualism of such change in practice, and the identification of the teacher as the key change agent in the classroom — a fundamental point often overlooked by innovators. The continua of general and professional education which underlie the teaching styles provide useful hypotheses for empirical analysis of the relationship between teacher education and classroom teaching style, but the association of this education with certain types of teaching style needs careful examination. Stripped of its evaluative connotations, Beeby's interest in qualitative change was a valuable early attempt to move attention in developing countries from linear, quantitative expansion of existing systems to a consideration of what was actually being taught and how. The real lesson to be learned from his work is that education should be paying closer attention to its theoretical propositions.
Genetic Optimization Algorithm for Metabolic Engineering Revisited
Directory of Open Access Journals (Sweden)
Tobias B. Alter
2018-05-01
To date, several independent methods and algorithms exist for exploiting constraint-based stoichiometric models to find metabolic engineering strategies that optimize microbial production performance. Optimization procedures based on metaheuristics facilitate a straightforward adaption and expansion of engineering objectives, as well as fitness functions, while being particularly suited for solving problems of high complexity. With the increasing interest in multi-scale models and a need for solving advanced engineering problems, we strive to advance genetic algorithms, which stand out due to their intuitive optimization principles and their proven usefulness in this field of research. A drawback of genetic algorithms is that premature convergence to sub-optimal solutions easily occurs if the optimization parameters are not adapted to the specific problem. Here, we conducted comprehensive parameter sensitivity analyses to study their impact on finding optimal strain designs. We further demonstrate the capability of genetic algorithms to simultaneously handle (i) multiple, non-linear engineering objectives; (ii) the identification of gene target-sets according to logical gene-protein-reaction associations; (iii) minimization of the number of network perturbations; and (iv) the insertion of non-native reactions, while employing genome-scale metabolic models. This framework adds a level of sophistication in terms of strain design robustness, which is exemplarily tested on succinate overproduction in Escherichia coli.
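As an illustration of the optimization principle the study builds on, the following is a minimal genetic algorithm over binary knockout vectors. The encoding, fitness function, and parameter values are hypothetical stand-ins for illustration, not the framework or the models used in the paper.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=40, generations=60,
                      mutation_rate=0.05, seed=0):
    """Minimal GA over binary intervention vectors (1 = reaction knocked out)."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_genes))
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        elite = ranked[:pop_size // 4]            # elitism: keep best quarter
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)           # pick two elite parents
            cut = rng.randrange(1, n_genes)       # single-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(n_genes):              # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(tuple(child))
        pop = children
    return max(pop, key=fitness)

# Hypothetical fitness: reward knocking out "reactions" 1 and 3 (as if they
# diverted flux away from the product) while penalizing the total number of
# perturbations, mimicking objective (iii) of the abstract.
def toy_fitness(ind):
    return 2.0 * ind[1] + 2.0 * ind[3] - 0.5 * sum(ind)

best = genetic_algorithm(toy_fitness, n_genes=6)
```

Premature convergence, the drawback noted in the abstract, corresponds here to the elite pool losing diversity; the mutation rate and elite fraction are exactly the kind of parameters the sensitivity analyses probe.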
Revisiting fine-tuning in the MSSM
Energy Technology Data Exchange (ETDEWEB)
Ross, Graham G. [Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Schmidt-Hoberg, Kai [DESY, Notkestraße 85, D-22607 Hamburg (Germany); Staub, Florian [Institute for Theoretical Physics (ITP), Karlsruhe Institute of Technology, Engesserstraße 7, D-76128 Karlsruhe (Germany); Institute for Nuclear Physics (IKP), Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany)
2017-03-06
We evaluate the amount of fine-tuning in constrained versions of the minimal supersymmetric standard model (MSSM), with different boundary conditions at the GUT scale. Specifically we study the fully constrained version as well as the cases of non-universal Higgs and gaugino masses. We allow for the presence of additional non-holomorphic soft-terms which we show further relax the fine-tuning. Of particular importance is the possibility of a Higgsino mass term and we discuss possible origins for such a term in UV complete models. We point out that loop corrections typically lead to a reduction in the fine-tuning by a factor of about two compared to the estimate at tree-level, which has been overlooked in many recent works. Taking these loop corrections into account, we discuss the impact of current limits from SUSY searches and dark matter on the fine-tuning. Contrary to common lore, we find that the MSSM fine-tuning can be as small as 10 while remaining consistent with all experimental constraints. If, in addition, the dark matter abundance is fully explained by the neutralino LSP, the fine-tuning can still be as low as ∼20 in the presence of additional non-holomorphic soft-terms. We also discuss future prospects of these models and find that the MSSM will remain natural even in the case of a non-discovery in the foreseeable future.
Quantum singularities in the FRW universe revisited
International Nuclear Information System (INIS)
Letelier, Patricio S.; Pitelli, Joao Paulo M.
2010-01-01
The components of the Riemann tensor in the tetrad basis are quantized and, through the Einstein equation, we find the local expectation value, in the ontological interpretation of quantum mechanics, of the energy density and pressure of a perfect fluid with equation of state p=(1/3)ρ in the flat Friedmann-Robertson-Walker quantum cosmological model. The quantum behavior of the equation of state and energy conditions are then studied, and it is shown that the energy conditions are violated, since the singularity is removed with the introduction of quantum cosmology; in the classical limit, however, both the equation of state and the energy conditions behave as in the classical model. We also calculate the expectation value of the scale factor for several wave packets in the many-worlds interpretation in order to show the independence of the nonsingular character of the quantum cosmological model with respect to the wave packet representing the wave function of the Universe. It is also shown that, with the introduction of nonnormalizable wave packets, which are solutions of the Wheeler-DeWitt equation, the singular character of the scale factor can be recovered in the ontological interpretation.
Spin-orbit evolution of Mercury revisited
Noyelles, Benoît; Frouard, Julien; Makarov, Valeri V.; Efroimsky, Michael
2014-10-01
Although it is accepted that the significant eccentricity of Mercury (0.206) favours entrapment into the 3:2 spin-orbit resonance, open are the questions of how and when the capture took place. A recent work by Makarov (Makarov, V.V. [2012]. Astrophys. J., 752, 73) has proven that trapping into this state is certain for eccentricities larger than 0.2, provided we use a realistic tidal model based on the Darwin-Kaula expansion of the tidal torque. While in Ibid. a Mercury-like planet had its eccentricity fixed, we take into account its evolution. To that end, a family of possible histories of the eccentricity is generated, based on synthetic time evolution consistent with the expected statistics of the distribution of eccentricity. We employ a model of tidal friction, which takes into account both the rheology and self-gravitation of the planet. As opposed to the commonly used constant time lag (CTL) and constant phase lag (CPL) models, the physics-based tidal model changes dramatically the statistics of the possible final spin states. First, we discover that after only one encounter with the spin-orbit 3:2 resonance this resonance becomes the most probable end-state. Second, if a capture into this (or any other) resonance takes place, the capture becomes final, several crossings of the same state being forbidden by our model. Third, within our model the trapping of Mercury happens much faster than previously believed: for most histories, 10-20 Myr are sufficient. Fourth, even a weak laminar friction between the solid mantle and a molten core would most likely result in a capture in the 2:1 or even higher resonance, which is confirmed both semi-analytically and by limited numerical simulations. So the principal novelty of our paper is that the 3:2 end-state is more ancient than the same end-state obtained when the constant time lag model is employed. The swift capture justifies our treatment of Mercury as a homogeneous, unstratified body whose liquid core had not
The 'Casino Model' of Internationalization
DEFF Research Database (Denmark)
Håkanson, Lars; Kappen, Philip
2017-01-01
Forty years after the publication of the original Uppsala Model, we revisit the empirical observations that inspired its conceptual development. The empirical evidence, we suggest, invites the formulation of an alternative and complementary model of the internationalization process of the firm, one that we have named the 'Casino Model' of internationalization. The Casino Model uncovers a number of new research issues pertaining to internationalization and to the nature of strategic decision-making under conditions of environmental uncertainty and partial ignorance.
Terrestrial Planet Formation from an Annulus -- Revisited
Deienno, Rogerio; Walsh, Kevin J.; Kretke, Katherine A.; Levison, Harold F.
2018-04-01
Numerous recent theories of terrestrial planet formation suggest that, in order to reproduce the observed large Earth to Mars mass ratio, planets formed from an annulus of material within 1 au. The success of these models typically relies on a Mars-sized embryo being scattered outside 1 au (to ~1.5 au) and starving, while those remaining inside 1 au continue growing, forming Earth and Venus. In some models the scattering is instigated by the migration of giant planets, while in others an embryo-instability naturally occurs due to the dissipation of the gaseous solar nebula. While these models can typically succeed in reproducing the overall mass ratio among the planets, the final angular momentum deficit (AMD) of the present terrestrial planets in our Solar System and their radial mass concentration (RMC), namely the position where Mars ends up in the simulations, are not always well reproduced. Assuming that the gas nebula may not be entirely dissipated when such an embryo-instability happens, here we study the effects that the timing of such an instability can have on the final AMD and RMC. In addition, we also include energy dissipation within embryo-embryo collisions by assuming a given coefficient of restitution for collisions. Our results show that: i) dissipation within embryo-embryo collisions does not play any important role in the final terrestrial planetary system; ii) the final AMD decreases only when the number of final planets formed increases; iii) the RMC tends to always be lower than the present value no matter the number of final planets; and iv) if the embryo-instability happens too early, with too much gas still present, a second instability will generally happen after the dissipation of the gas nebula.
A de Sitter tachyonic braneworld revisited
Barbosa-Cendejas, Nandinii; Cartas-Fuentevilla, Roberto; Herrera-Aguilar, Alfredo; Rigel Mora-Luna, Refugio; da Rocha, Roldão
2018-01-01
Within the framework of braneworlds, several interesting physical effects can be described in a wide range of energy scales, starting from high-energy physics to cosmology and low-energy physics. A usual way to generate a thick braneworld model relies on coupling a bulk scalar field to higher dimensional warped gravity. Quite recently, a novel braneworld was generated with the aid of a tachyonic bulk scalar field, having several remarkable properties. It comprises a regular and stable solution that contains a relevant 3-brane with de Sitter induced metric, arising as an exact solution to the 5D field equations and describing the inflationary eras of our Universe. Besides, it is asymptotically flat, despite the presence of a negative 5D cosmological constant, which is an interesting feature that contrasts with most of the known, asymptotically either dS or AdS models. Moreover, it encompasses a graviton spectrum with a single massless bound state, accounting for 4D gravity localized on the brane, separated from the continuum of Kaluza-Klein massive graviton modes by a mass gap that makes the 5D corrections to Newton's law decay exponentially. Finally, gauge, scalar and fermion fields are also shown to be localized on this braneworld. In this work, we show that this tachyonic braneworld allows for a nontrivial solution with a vanishing 5D cosmological constant that preserves all the above mentioned remarkable properties with fewer parameters, constituting an important contribution to the construction of a realistic cosmological braneworld model.
Revisiting Glycogen Content in the Human Brain.
Öz, Gülin; DiNuzzo, Mauro; Kumar, Anjali; Moheet, Amir; Seaquist, Elizabeth R
2015-12-01
Glycogen provides an important glucose reservoir in the brain since the concentration of glucosyl units stored in glycogen is several fold higher than free glucose available in brain tissue. We have previously reported 3-4 µmol/g brain glycogen content using in vivo ¹³C magnetic resonance spectroscopy (MRS) in conjunction with [1-¹³C]glucose administration in healthy humans, while higher levels were reported in the rodent brain. Due to the slow turnover of bulk brain glycogen in humans, complete turnover of the glycogen pool, estimated to take 3-5 days, was not observed in these prior studies. In an attempt to reach complete turnover and thereby steady state ¹³C labeling in glycogen, here we administered [1-¹³C]glucose to healthy volunteers for 80 h. To eliminate any net glycogen synthesis during this period and thereby achieve an accurate estimate of glycogen concentration, volunteers were maintained at euglycemic blood glucose levels during [1-¹³C]glucose administration and ¹³C-glycogen levels in the occipital lobe were measured by ¹³C MRS approximately every 12 h. Finally, we fitted the data with a biophysical model that was recently developed to take into account the tiered structure of the glycogen molecule and additionally incorporated blood glucose levels and isotopic enrichments as input function in the model. We obtained excellent fits of the model to the ¹³C-glycogen data, and glycogen content in the healthy human brain tissue was found to be 7.8 ± 0.3 µmol/g, a value substantially higher than previous estimates of glycogen content in the human brain.
SAM revisited: uniform semiclassical approximation with absorption
International Nuclear Information System (INIS)
Hussein, M.S.; Pato, M.P.
1986-01-01
The uniform semiclassical approximation is modified to take into account strong absorption. The resulting theory, very similar to the one developed by Frahn and Gross, is used to discuss heavy-ion elastic scattering at intermediate energies. The theory permits a reasonably unambiguous separation of refractive and diffractive effects. The systems ¹²C+¹²C and ¹²C+¹⁶O, which seem to exhibit a remnant of a nuclear rainbow at E=20 MeV/N, are analysed with the theory, which is built directly on a model for the S-matrix. Simple relations between the fitted S-matrix and the underlying complex potential are derived. (Author)
Interstellar Abundances Toward X Per, Revisited
Valencic, Lynne A.; Smith, Randall K.
2014-01-01
The nearby X-ray binary X Per (HD 24534) provides a useful beacon with which to measure elemental abundances in the local ISM. We examine absorption features of O, Mg, and Si along this line of sight using spectra from the Chandra Observatory's LETG/ACIS-S and XMM-Newton's RGS instruments. In general, we find that the abundances and their ratios are similar to those of young F and G stars and the most recent solar values. We compare our results with abundances required by dust grain models.
The giant Kalgoorlie Gold Field revisited
Directory of Open Access Journals (Sweden)
Noreen Mary Vielreicher
2016-05-01
Direct timing constraints on gold mineralization indicate that Fimiston- and Mt Charlotte-style mineralization formed within a relatively short period of time around 2.64 Ga and, as such, support a model of progressive deformation of a rheologically heterogeneous rock package late in the structural history. Fluid characteristics, combined with the structural, metamorphic and absolute timing, support description of the gold mineralization at the Golden Mile as orogenic and mesozonal, and this allows direct correlation with orogenic gold deposits worldwide, which classically formed during accretion along convergent margins throughout Earth history.
Leal, L C; Kitis, G; Guber, K H; Quaranta, A; Koehler, P E
2002-01-01
The purpose of the proposed project of an accurate measurement of the relevant neutron cross sections of ¹⁸⁶Os and ¹⁸⁷Os is to remove the principal nuclear physics uncertainties in the analysis of the Re/Os cosmochronometer. The necessary cross section information will be obtained in complementary experiments at the nTOF facility at CERN and at the Karlsruhe Van de Graaff accelerator. Transformation of these results into significantly improved stellar reaction rates will allow the age of the elements to be evaluated in the framework of galactic chemical evolution models.
Radiation hardening revisited: Role of intracascade clustering
DEFF Research Database (Denmark)
Singh, B.N.; Foreman, A.J.E.; Trinkaus, H.
1997-01-01
Radiation hardening cannot be explained in terms of conventional dispersed-barrier hardening because (a) the grown-in dislocations are not free, and (b) irradiation-induced defect clusters are not rigid indestructible Orowan obstacles. A new model called 'cascade-induced source hardening' is presented, where glissile loops produced directly in cascades are envisaged to decorate the grown-in dislocations so that they cannot act as dislocation sources. The upper yield stress is related to the breakaway stress which is necessary to pull the dislocation away from the clusters/loops decorating it. The magnitude of the breakaway stress has...
Induced gravity and gauge interactions revisited
International Nuclear Information System (INIS)
Broda, Boguslaw; Szanecki, Michal
2009-01-01
It has been shown that the primary, old-fashioned idea of Sakharov's induced gravity and gauge interactions, in the 'one-loop dominance' version, works astonishingly well, yielding phenomenologically reasonable results. As a byproduct, the issue of the role of the UV cutoff in the context of induced gravity has been reexamined (the idea of self-cutoff induced gravity). As an additional check, the black hole entropy has been used in place of the action. Finally, it has been explicitly shown that the induced coupling constants of the gauge interactions of the standard model assume qualitatively realistic values.
Statistical Theory of Normal Grain Growth Revisited
International Nuclear Information System (INIS)
Gadomski, A.; Luczka, J.
2002-01-01
In this paper, we discuss three physically relevant problems concerning the normal grain growth process. These are: infinite vs finite size of the system under study (a step towards more realistic modeling); conditions of fine-grained structure formation, with possible applications to thin films and biomembranes, and interesting relations to superplasticity of materials; and the approach to log-normality, a ubiquitous natural phenomenon frequently reported in the literature. It turns out that all three important points mentioned can be included in a Mulheran-Harding type behavior of evolving grain-containing systems that we have studied previously. (author)
Three-body unitarity with isobars revisited
Energy Technology Data Exchange (ETDEWEB)
Mai, M.; Hu, B. [The George Washington University, Washington, DC (United States); Doering, M. [The George Washington University, Washington, DC (United States); Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Pilloni, A. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Szczepaniak, A. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Indiana University, Center for Exploration of Energy and Matter, Bloomington, IN (United States); Indiana University, Physics Department, Bloomington, IN (United States)
2017-09-15
The particle exchange model of hadron interactions can be used to describe three-body scattering under the isobar assumption. In this study we start from the 3 → 3 scattering amplitude for spinless particles, which contains an isobar-spectator scattering amplitude. Using a Bethe-Salpeter Ansatz for the latter, we derive a relativistic three-dimensional scattering equation that manifestly fulfills three-body unitarity and two-body unitarity for the sub-amplitudes. This property holds for energies above breakup and also in the presence of resonances in the sub-amplitudes. (orig.)
Behaviour of irradiated uranium silicide fuel revisited
International Nuclear Information System (INIS)
Finlay, M. Ross; Hofman, Gerard L.; Rest, Jeffrey; Snelgrove, James L.
2002-01-01
Irradiated U₃Si₂ dispersion fuels demonstrate very low levels of swelling, even at extremely high burn-up. This behaviour is attributed to the stability of fission gas bubbles that develop during irradiation. The bubbles remain uniformly distributed throughout the fuel and show no obvious signs of coalescence. Close examination of high burn-up samples during the U₃Si₂ qualification program revealed a bimodal distribution of fission gas bubbles. Those observations suggested that an underlying microstructure was responsible for the behaviour. An irradiation-induced recrystallisation model was developed that relied on the presence of sufficient grain boundary surface to trap and pin fission gas bubbles and prevent coalescence. However, more recent work has revealed that U₃Si₂ becomes amorphous almost instantaneously upon irradiation. Consequently, the recrystallisation model does not adequately explain the nucleation and growth of fission gas bubbles in U₃Si₂. Whilst it appears to work well within the range of measured data, it cannot be relied on to extrapolate beyond that range since it is not mechanistically valid. A review of the mini-plates irradiated in the Oak Ridge Research Reactor from the U₃Si₂ qualification program has been performed. This has yielded a new understanding of U₃Si₂ behaviour under irradiation. (author)
Thomas Piketty’s capitalism revisited
Directory of Open Access Journals (Sweden)
Milovanović Milić
2015-01-01
Thomas Piketty’s international best-selling Capital in the Twenty-First Century lays out his theory of a long-run rise in income inequality under capitalism. It is written as a manifesto urging the reintegration of the social sciences. A number of reviewers judged it on ideological grounds, either labeling it as a revolution in economic thinking or dismissing it offhandedly. Piketty’s theory of rising inequality is based on the two Fundamental Laws of Capitalism, developed after the Solow growth model. However, this model is inconsistent with Piketty’s own characterization of modern capitalism. Moreover, his sole justification for the constant discrepancy between the rate of return and the rate of income growth (r > g) is the high elasticity of substitution between capital and labor. However, that is just one of the factors that can influence factor income shares. By failing to offer a consistent theory of rising inequality, his piece can hardly be considered a useful foundation stone for a new social science.
Currency Crisis Revisited: A Literature Review
Directory of Open Access Journals (Sweden)
Teuta Ismaili Muharremi
2015-12-01
This paper elaborates on currency crises, focusing on the main factors that cause them. After a brief overview of these factors, the paper provides a literature review, highlighting that the global economy has experienced a number of currency crises and that three generations of models have been used to explain them over the last four decades. Underscoring the role of the government in financial markets, in particular the evolution of this role as a result of the recent global financial crisis, and highlighting other factors that trigger such crises, the paper concludes that a potential financial crisis can be anticipated using an early warning system, which consists of indicators proven to be beneficial in anticipating currency crises, together with advanced empirical models of currency crises. In this context the paper finds that currency crises are associated with factors such as inflation, the real exchange rate, import growth, US interest rates, public debt/GDP, and current account/GDP, all with slightly different time lags.
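The early warning systems mentioned above are often built on a "signals" composite in the style of Kaminsky, Lizondo, and Reinhart: each indicator fires when it exceeds a threshold calibrated on its own history. The sketch below is an illustrative toy version; the indicator names, sample values, and percentile cutoff are invented for the example and are not from the paper.

```python
def percentile_threshold(series, pct):
    """Crude percentile via nearest rank on the sorted history."""
    s = sorted(series)
    k = int(round(pct / 100.0 * (len(s) - 1)))
    return s[max(0, min(len(s) - 1, k))]

def early_warning(indicators, pct=90):
    """Signals-style composite: in each period, count how many indicators
    exceed their own historical percentile threshold."""
    periods = len(next(iter(indicators.values())))
    cut = {name: percentile_threshold(vals, pct)
           for name, vals in indicators.items()}
    return [sum(vals[t] > cut[name] for name, vals in indicators.items())
            for t in range(periods)]

# Toy series for two of the indicators named in the abstract; a high count
# in a period marks elevated crisis risk.
signals = early_warning({
    "inflation": [1.0, 2.0, 3.0, 10.0],
    "import_growth": [0.0, 0.0, 9.0, 9.0],
}, pct=75)
```

In practice each indicator's threshold is chosen to trade off missed crises against false alarms, and the lead time of each signal is evaluated against the crisis dates, which is where the differing time lags noted in the abstract enter.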
Emergent gravity of fractons: Mach's principle revisited
Pretko, Michael
2017-07-01
Recent work has established the existence of stable quantum phases of matter described by symmetric tensor gauge fields, which naturally couple to particles of restricted mobility, such as fractons. We focus on a minimal toy model of a rank 2 tensor gauge field, consisting of fractons coupled to an emergent graviton (massless spin-2 excitation). We show how to reconcile the immobility of fractons with the expected gravitational behavior of the model. First, we reformulate the fracton phenomenon in terms of an emergent center of mass quantum number, and we show how an effective attraction arises from the principles of locality and conservation of center of mass. This interaction between fractons is always attractive and can be recast in geometric language, with a geodesiclike formulation, thereby satisfying the expected properties of a gravitational force. This force will generically be short-ranged, but we discuss how the power-law behavior of Newtonian gravity can arise under certain conditions. We then show that, while an isolated fracton is immobile, fractons are endowed with finite inertia by the presence of a large-scale distribution of other fractons, in a concrete manifestation of Mach's principle. Our formalism provides suggestive hints that matter plays a fundamental role, not only in perturbing, but in creating the background space in which it propagates.
Revisiting cytogenetic paradigms armed with new tools
International Nuclear Information System (INIS)
Cornforth, M.N.
2003-01-01
It could be argued that most of the fundamental tenets of radiation biology were either discovered, or subsequently confirmed, by observing eukaryotic chromosomes under the microscope. These include, but are certainly not limited to, dose-response relationships with respect to intensity (dose rate/dose fractionation) and radiation quality (LET/track structure). Chromosome aberrations are exquisitely sensitive indicators of radiation damage, and provide quantitative information of biological effect on a cell-by-cell basis. As such, they have long been a favored endpoint for theoreticians, thereby figuring prominently in the development of generalized models of radiation action. Most of these seminal contributions to radiation biology occurred over a period of time when cytogenetic techniques were, of course, less refined than today. Considering the increasing rate at which technological advances have been made available to the researcher over the past few years, a reexamination of some of the radiological principles that cytogenetics helped to found seems in order. As an example of such effort, this talk will center on improvements to the use of whole chromosome painting by FISH (principally combinatorial painting techniques like mFISH and SKY) for the purposes of examining in greater detail structural aberrations to chromosomes produced following exposure to ionizing radiations of differing quality and intensity. This and related approaches by various laboratories around the world have turned up a few surprise discoveries that do not always fit established paradigms, and which serve to sharpen arguments that have been used to buttress existing models of aberration formation.
Simulation of Two-Way Pushdown Automata Revisited
Directory of Open Access Journals (Sweden)
Robert Glück
2013-09-01
The linear-time simulation of 2-way deterministic pushdown automata (2DPDA) by the Cook and Jones constructions is revisited. Following the semantics-based approach of Jones, an interpreter is given which, when extended with random-access memory, performs a linear-time simulation of 2DPDA. The recursive interpreter works without the dump list of the original constructions, which makes Cook's insight into linear-time simulation of exponential-time automata more intuitive and the complexity argument clearer. The simulation is then extended to 2-way nondeterministic pushdown automata (2NPDA) to provide for a cubic-time recognition of context-free languages. The time required to run the final construction depends on the degree of nondeterminism. The key mechanism that enables the polynomial-time simulations is the sharing of computations by memoization.
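The memoization mechanism behind the cubic bound can be seen in miniature in a memoized recursive recognizer for a context-free language: caching results per (nonterminal, span) bounds the work to O(n³) per grammar rule. This is only a toy illustration of the sharing principle, not the paper's interpreter-based 2NPDA construction; the grammar below is a made-up example in Chomsky normal form for balanced parentheses.

```python
from functools import lru_cache

# Hypothetical CNF grammar for balanced parentheses (not from the paper):
# S -> A B | A X | S S ;  X -> S B ;  A -> '(' ;  B -> ')'
GRAMMAR = {
    "S": [("A", "B"), ("A", "X"), ("S", "S")],
    "X": [("S", "B")],
    "A": ["("],
    "B": [")"],
}

def recognizes(word, start="S"):
    @lru_cache(maxsize=None)          # share work across identical subcalls
    def derives(sym, i, j):
        # Can nonterminal `sym` derive the span word[i:j]?
        for prod in GRAMMAR[sym]:
            if isinstance(prod, str):             # terminal rule
                if j - i == 1 and word[i] == prod:
                    return True
            else:                                 # binary rule: try every split
                left, right = prod
                if any(derives(left, i, k) and derives(right, k, j)
                       for k in range(i + 1, j)):
                    return True
        return False
    return bool(word) and derives(start, 0, len(word))
```

Without the cache the recursion repeats exponentially many identical subproblems; with it there are O(n²) distinct (i, j) spans, each resolved by an O(n) scan over split points, which is the same computation-sharing idea the abstract credits for the polynomial-time simulations.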
Response variance in functional maps: neural darwinism revisited.
Directory of Open Access Journals (Sweden)
Hirokazu Takahashi
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Theory of magnetohydrodynamic waves: The WKB approximation revisited
International Nuclear Information System (INIS)
Barnes, A.
1992-01-01
Past treatments of the eikonal or WKB theory of the propagation of magnetohydrodynamic waves have assumed a strictly isentropic background. If in fact there is a gradient in the background entropy, then at second order in the WKB ordering, fluctuations that are adiabatic in the Lagrangian sense are not strictly isentropic in the Eulerian sense. This means that at second order of the WKB expansion, which determines the variation of wave amplitude along rays, the violation of isentropy must be accounted for. The present paper revisits the derivation of the WKB approximation for small-amplitude magnetohydrodynamic waves, allowing for possible spatial variation of the background entropy. The equation for the variation of wave amplitude is rederived; it is a bilinear equation which, it turns out, can be recast in action-conservation form. It is shown that this action conservation equation is in fact equivalent to the action conservation law obtained from Lagrangian treatments.
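For orientation, the action conservation law referred to in this abstract has a standard textbook form; the expression below is that generic form, not a formula quoted from the paper (here E is the wave energy density, U the background flow velocity, v_g the group velocity, and ω′ the wave frequency in the plasma frame):

```latex
% Standard WKB wave-action conservation law (textbook form):
% the wave action density is S = E / \omega'.
\frac{\partial}{\partial t}\!\left(\frac{E}{\omega'}\right)
  + \nabla\cdot\!\left[\left(\mathbf{U}+\mathbf{v}_g\right)\frac{E}{\omega'}\right] = 0
```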
Revisiting the Performance of MACD and RSI Oscillators
Directory of Open Access Journals (Sweden)
Terence Tai-Leung Chong
2014-02-01
Chong and Ng (2008) find that the Moving Average Convergence–Divergence (MACD) and Relative Strength Index (RSI) rules can generate excess returns in the London Stock Exchange. This paper revisits the performance of the two trading rules in the stock markets of five other OECD countries. It is found that the MACD(12,26,0) and RSI(21,50) rules consistently generate significant abnormal returns in the Milan Comit General Index and the S&P/TSX Composite Index. In addition, the RSI(14,30/70) rule is also profitable in the Dow Jones Industrials Index. The results shed some light on investors' belief in these two technical indicators in different developed markets.
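As a rough illustration of the two indicators named above, here is a minimal sketch. The helper names are hypothetical, the RSI uses the simple-average (Cutler) variant, and the paper's exact signal definitions and data handling are not reproduced:

```python
# Minimal sketch of MACD and RSI (assumed conventions, not the paper's code).
def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26):
    """MACD line: EMA(fast) - EMA(slow). The MACD(12,26,0) rule
    trades on this line crossing the zero level."""
    return [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]

def rsi(prices, period=21):
    """RSI over `period`, simple-average variant. The RSI(21,50) rule
    trades on this value crossing the 50 line."""
    gains = [max(prices[i] - prices[i - 1], 0) for i in range(1, len(prices))]
    losses = [max(prices[i - 1] - prices[i], 0) for i in range(1, len(prices))]
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    rs = avg_gain / avg_loss if avg_loss else float("inf")
    return 100 - 100 / (1 + rs)
```

For a steadily rising price series the MACD line is positive and the RSI saturates at 100, consistent with both rules signalling a long position.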
Mixture toxicity revisited from a toxicogenomic perspective.
Altenburger, Rolf; Scholz, Stefan; Schmitt-Jansen, Mechthild; Busch, Wibke; Escher, Beate I
2012-03-06
The advent of new genomic techniques has raised expectations that central questions of mixture toxicology, such as the mechanisms of low-dose interactions, can now be answered. This review provides an overview of experimental studies from the past decade that address diagnostic and/or mechanistic questions regarding the combined effects of chemical mixtures using toxicogenomic techniques. From 2002 to 2011, 41 studies were published with a focus on mixture toxicity assessment. Primarily, multiplexed quantification of gene transcripts was performed, though metabolomic and proteomic analyses of joint exposures have also been undertaken. It is now standard to explicitly state criteria for selecting concentrations and to provide insight into data transformation and statistical treatment with respect to minimizing sources of undue variability. Bioinformatic analysis of toxicogenomic data, by contrast, is still a field with diverse and rapidly evolving tools. The reported combined effect assessments are discussed in the light of established toxicological dose-response and mixture toxicity models. Receptor-based assays seem to be the most advanced toward establishing quantitative relationships between exposure and biological responses. Often, transcriptomic responses are discussed based on the presence or absence of signals, where the interpretation may remain ambiguous due to methodological problems. The majority of mixture studies are designed to compare the recorded mixture outcome against responses for individual components only. This stands in stark contrast to our existing understanding of joint biological activity at the levels of chemical target interactions and apical combined effects. By joining established mixture effect models with toxicokinetic and -dynamic thinking, we suggest a conceptual framework that may help to overcome the current limitation of providing mainly anecdotal evidence on mixture effects. To achieve this we suggest (i) to design studies to
Revisiting nitrogen species in covalent triazine frameworks
Osadchii, Dmitrii Yu.
2017-11-28
Covalent triazine frameworks (CTFs) are porous organic materials promising for applications in catalysis and separation due to their high stability, adjustable porosity and intrinsic nitrogen functionalities. CTFs are prepared by ionothermal trimerization of aromatic nitriles; however, multiple side reactions also occur under synthesis conditions, and their influence on the material properties is still poorly described. Here we report the systematic characterization of nitrogen in CTFs using X-ray photoelectron spectroscopy (XPS). With the use of model compounds, we could distinguish several types of nitrogen species. By combining these data with textural properties, we unravel the influence that the reaction temperature, the catalyst, and the monomer structure and composition have on the properties of the resulting CTF materials.
Di-interstitial defect in silicon revisited
International Nuclear Information System (INIS)
Londos, C. A.; Antonaras, G.; Chroneos, A.
2013-01-01
Infrared spectroscopy was used to study the defect spectrum of Cz-Si samples following fast neutron irradiation. We mainly focus on the band at 533 cm⁻¹, which disappears from the spectra at ∼170 °C, exhibiting thermal stability similar to that of the Si-P6 electron paramagnetic resonance (EPR) spectrum previously correlated with the di-interstitial defect. The suggested structural model of this defect comprises two self-interstitial atoms located symmetrically around a lattice-site Si atom. The band anneals out following first-order kinetics with an activation energy of 0.88 ± 0.3 eV. This value does not deviate considerably from previously quoted experimental and theoretical values for the di-interstitial defect. The present results indicate that the 533 cm⁻¹ IR band originates from the same structure as that of the Si-P6 EPR spectrum.
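The first-order annealing kinetics quoted above can be illustrated with a standard Arrhenius sketch. The activation energy 0.88 eV is taken from the abstract, but the prefactor k0 below is an assumed typical attempt frequency, not a value from the paper:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_rate(t_kelvin, e_a=0.88, k0=1e13):
    """First-order annealing rate k = k0 * exp(-Ea / kB*T).
    Ea = 0.88 eV from the abstract; k0 = 1e13 s^-1 is an assumed
    typical attempt frequency (not given in the paper)."""
    return k0 * math.exp(-e_a / (K_B * t_kelvin))

def surviving_fraction(t_seconds, t_kelvin, e_a=0.88, k0=1e13):
    """First-order kinetics: the defect population decays as
    N(t)/N0 = exp(-k t)."""
    return math.exp(-arrhenius_rate(t_kelvin, e_a, k0) * t_seconds)
```

With these (assumed) numbers the rate rises steeply with temperature, which is why the band disappears over a narrow range around ∼170 °C (≈443 K).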
Revisiting entanglement entropy of lattice gauge theories
Energy Technology Data Exchange (ETDEWEB)
Hung, Ling-Yan [Department of Physics and Center for Field Theory and Particle Physics, Fudan University,220 Handan Lu, Shanghai 200433 (China); Collaborative Innovation Center of Advanced Microstructures, Fudan University,220 Handan Lu, Shanghai 200433 (China); Wan, Yidun [Perimeter Institute for Theoretical Physics,31 Caroline Street, Waterloo, ON N2L 2Y5 (Canada)
2015-04-22
It was recently realized that the entanglement entropy in gauge theories is ambiguous because the Hilbert space cannot be expressed as a simple direct product of Hilbert spaces defined on the two regions; different ways of dividing the Hilbert space near the boundary lead to significantly different results, to the extreme that they could annihilate the otherwise finite topological entanglement entropy between two regions altogether. In this article, we first show that the topological entanglement entropy in the Kitaev model (http://dx.doi.org/10.1016/S0003-4916(02)00018-0), which is not a true gauge theory, is free of ambiguity. Then, we give a physical interpretation, from the perspective of what can be measured in an experiment, of the purported ambiguity of true gauge theories, where the topological entanglement arises as redundancy in counting the degrees of freedom along the boundary separating two regions. We generalize these discussions to non-Abelian gauge theories.
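For context, the topological entanglement entropy discussed here is the universal constant correction to the boundary-law scaling of the entanglement entropy; the expressions below are the standard textbook statements for a gapped topological phase and the Kitaev toric code, not formulas quoted from the article:

```latex
% Area law with a universal topological correction:
S_A = \alpha\,|\partial A| - \gamma + \dots, \qquad \gamma = \log \mathcal{D},
% where \mathcal{D} is the total quantum dimension.
% For the Kitaev toric code \mathcal{D} = 2, so \gamma = \log 2.
```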