Effective constraint potential in lattice Weinberg-Salam model
Polikarpov, M I
2011-01-01
We investigate the lattice Weinberg-Salam model without fermions for the value of the Weinberg angle $\theta_W \sim 30^\circ$ and a bare fine structure constant around $\alpha \sim 1/150$. We consider the value of the scalar self-coupling corresponding to a bare Higgs mass around 150 GeV. The effective constraint potential for the zero-momentum scalar field is used to investigate phenomena existing in the vicinity of the phase transition between the physical Higgs phase and the unphysical symmetric phase of the lattice model. This is the region of the phase diagram where continuum physics is to be approached. We compare this effective potential (calculated in selected gauges) with the effective potential for the value of the scalar field at a fixed space-time point. We also calculate the renormalized fine structure constant using the correlator of Polyakov lines and compare it with the one-loop perturbative estimate.
Effective Weinberg-Salam model from higher dimensions
Mac, A; Mielke, E W; Matos, T; Mac, Alfredo; Camacho, Abel; Mielke, Eckehard W; Matos, Tonatiuh
1996-01-01
We consider an 8-dimensional gravitational theory, which possesses a principal fiber bundle structure, with Lorentz-scalar fields coupled to the metric. One of them plays the role of a Higgs field and the other that of a dilaton field. The effective cosmological constant is interpreted as a Higgs potential. The Yukawa couplings are introduced by hand. The extra dimensions constitute a $SU(2)_{L} \times U(1)_{Y} \times SU(2)_{R}$ group manifold. Dirac fields are coupled to the potentials derived from the metric. As a result, we obtain an effective four-dimensional theory which contains all couplings of a Weinberg-Salam-Glashow theory in a curved space-time. The masses of the gauge bosons and of the first two fermion families are given by the theory.
Generalized Gauge Theories and Weinberg-Salam Model with Dirac-Kähler Fermions
Kawamoto, N; Umetsu, H; Kawamoto, Noboru; Tsukioka, Takuya; Umetsu, Hiroshi
2001-01-01
We extend the previously proposed generalized gauge theory formulation of Chern-Simons-type and topological Yang-Mills-type actions to Yang-Mills-type actions. We formulate gauge fields and Dirac-Kähler matter fermions by all degrees of differential forms. The simplest version of the model, which includes only zero- and one-form gauge fields accommodated with the graded Lie algebra of the $SU(2|1)$ supergroup, leads to the Weinberg-Salam model. Thus the Weinberg-Salam model formulated by noncommutative geometry is a particular example of the present formulation.
The fluctuational region on the phase diagram of lattice Weinberg-Salam model
Zubkov, M A
2009-01-01
The lattice Weinberg-Salam model without fermions is investigated numerically for a realistic choice of bare coupling constants corresponding to the value of the Weinberg angle $\theta_W \sim 30^\circ$ and the fine structure constant $\alpha \sim 1/100$. On the phase diagram there exists a vicinity of the phase transition between the physical Higgs phase and the unphysical symmetric phase, where the fluctuations of the scalar field become strong. The classical Nambu monopole can be considered as an embryo of the unphysical symmetric phase within the physical phase. In the fluctuational region quantum Nambu monopoles are dense and, therefore, the perturbation expansion around the trivial vacuum cannot be applied. The maximal value of the cutoff at the given values of the coupling constants, calculated using lattices of sizes $8^3\times 16$ and $12^3\times 16$, is $\Lambda_c \sim 1.4 \pm 0.2$ TeV.
MAP, MAC, and vortex-rings configurations in the Weinberg-Salam model
Teh, Rosy; Ng, Ban-Loong; Wong, Khai-Ming
2015-11-01
We report on the presence of new axially symmetric monopole, antimonopole and vortex-ring solutions of the SU(2)×U(1) Weinberg-Salam model of electromagnetic and weak interactions. When the ϕ-winding number n = 1 and 2, the configurations are a monopole-antimonopole pair (MAP) and a monopole-antimonopole chain (MAC) with poles of alternating-sign magnetic charge arranged along the z-axis. Vortex-rings start to appear from the MAP and MAC configurations when the winding number n = 3. The MAP configurations possess zero net magnetic charge whereas the MAC configurations possess a net magnetic charge of 4πn/e. In the MAP configurations, the monopole-antimonopole pair is bounded by the Z0 field flux string and there is an electromagnetic current loop encircling it. The monopole and antimonopole possess magnetic charges ±(4πn/e) sin²θW respectively. In the MAC configurations there is no string connecting the monopole and the adjacent antimonopole, and they possess magnetic charges ±4πn/e respectively. The MAC configurations possess infinite total energy and zero magnetic dipole moment, whereas the MAP configurations, which are actually sphalerons, possess finite total energy and magnetic dipole moment. The configurations were investigated for varying values of the Higgs self-coupling constant 0 ≤ λ ≤ 40 at Weinberg angle θW = π/4.
Feynman-Weinberg Quantum Gravity and the Extended Standard Model as a Theory of Everything
Tipler, Frank J
2005-01-01
I argue that the (extended) Standard Model (SM) of particle physics and the renormalizable Feynman-Weinberg theory of quantum gravity comprise a theory of everything. I show that imposing the appropriate cosmological boundary conditions makes the theory finite. The infinities that are normally renormalized away and the series-divergence infinities are both eliminated by the same mechanism. Furthermore, this theory can resolve the horizon, flatness, and isotropy problems of cosmology. Joint mathematical consistency naturally yields a scale-free, Gaussian, adiabatic perturbation spectrum, and more matter than antimatter. I show that mathematical consistency of the theory requires the universe to begin at an initial singularity with a pure $SU(2)_L$ gauge field. I show that quantum mechanics requires this field to have a Planckian spectrum whatever its temperature. If this field has managed to survive thermalization to the present day, then it would be the CMBR. If so, then we would have a natural explanation for...
2009-01-01
Steven Weinberg visiting the ATLAS cavern accompanied by Peter Jenni. It was no surprise that the CERN audience arrived early in the Globe of Science and Innovation for the colloquium on 7 July. Nobel laureate Steven Weinberg is one of the major contributors to the Standard Model of particle physics. He received the Nobel Prize for Physics in 1979 for his work on the unified theory of the electromagnetic and weak interactions, one of the essential pillars of the Standard Model. After lunch at CERN and a visit to ATLAS, Weinberg gave a colloquium on "The Quantum Theory of Fields: Effective or Fundamental" to a packed audience. In his talk, he looked at how the use of quantum field theory in particle physics has fluctuated in popularity since Paul Dirac first introduced the approach to describe the interaction of particles with electromagnetic fields in the late 1920s. In particular, he posed the question: Is quantum field theory fundamental or does it a...
Electrodynamics with Weinberg's photons
Dvoeglazov, V V
1993-01-01
The interaction of the spinor field with Weinberg's $2(2S+1)$-component massless field is considered. A new interpretation of the Weinberg spinor is proposed. An equation analogous to the Dirac oscillator is obtained.
Sphaleron and sphaleron-antisphaleron pair of the Weinberg-Salam model
Teh, Rosy; Ng, Ban-Loong; Wong, Khai-Ming
2015-04-01
In this paper we present the full solutions of the Weinberg-Salam equations of motion for (1) the sphaleron and (2) the sphaleron-antisphaleron pair using numerical methods. In the SU(2) field part of the theory, the solutions obtained are (1) one monopole-antimonopole pair and (2) two monopole-antimonopole pairs respectively, while in the U(1) field part, the solutions are (1) one current loop and (2) two current loops respectively. In these sphaleron and sphaleron-antisphaleron pair solutions, both the sphaleron and the antisphaleron are monopole-antimonopole pairs (MAPs) lying along the z-axis with an electromagnetic current loop circulating around them. The monopole and antimonopole in each MAP are joined by a flux string of the neutral Z0 field. In these solutions, the Weinberg angle is not arbitrary but takes the value θW = π/4. The magnetic charges of the monopole and antimonopole in each MAP are ±2π/e, which is half the magnetic charge of a Cho-Maison monopole. Hence the MAP poles are half-monopoles. These new axially symmetric solutions possess finite energy and magnetic dipole moment, and they are investigated for a range of Higgs field mass term 0 ≤ µ² ≤ 40. Their total energies are found to increase almost linearly with µ, whereas the magnetic dipole moments decrease exponentially fast with µ.
Energy Technology Data Exchange (ETDEWEB)
Sidhu, D.P.
1980-09-01
I discuss a left-right-symmetric model of weak and electromagnetic interactions which is consistent with the results of all weak-interaction experiments including observed parity violation in eN interactions. The model is essentially indistinguishable from the Weinberg-Salam (WS) model at low energies and differs from it significantly at high q². Of the two (Z1, Z2) neutral bosons of the model, M_Z1 ≈ M_Z of the WS model and M_Z2 ≈ 2.5 M_Z1 ≈ 230 GeV. The prospects of distinguishing the two classes of models in e+e- experiments at LEP and in pp and p̄p colliding-beam experiments at ISABELLE are also discussed.
McMurran, Shawnee L.
2010-01-01
This module was initially developed for a course in applications of mathematics in biology. The objective of this lesson is to investigate how the allele and genotypic frequencies associated with a particular gene might evolve over successive generations. The lesson will discuss how the Hardy-Weinberg model provides a basis for comparison when…
Implications of b→sγ in the Weinberg three-Higgs-doublet models
Energy Technology Data Exchange (ETDEWEB)
Chang, Darwin; Chen, Chuan-Hung; Geng, Chao-Qiang [National Tsing Hua Univ., Hsinchu, TW (China). Dept. of Physics
1996-06-01
Using recent experimental measurements of Br(b→sγ) from CLEO, we study the constraints on the charged-Higgs sector in various three-Higgs-doublet models. Some phenomenological implications of these models, with emphasis on CP violation, are presented. In particular, in some of these models the CP-violating muon polarization in Kμ3 decays can be detected using the current KEK experiment E246. (author)
On extending the Hardy-Weinberg law
Directory of Open Access Journals (Sweden)
Alan E. Stark
2007-01-01
This paper gives a general mating system for an autosomal locus with two alleles. The population reproduces in discrete and non-overlapping generations. The parental population, the same in both sexes, is arbitrary, as is that of the offspring, and the gene frequencies of the parents are maintained in the offspring. The system encompasses a number of special cases, including the random mating model of Weinberg and Hardy. Thus it demonstrates, in the most general way possible, how genetic variation can be conserved in an indefinitely large population without invoking random mating or balancing selection. An important feature is that it provides a mating system which identifies when mating does and does not produce Hardy-Weinberg proportions among offspring.
The 3.5 keV X-ray line signature from annihilating and decaying dark matter in Weinberg model
Baek, Seungwon; Park, Wan-Il
2014-01-01
Recently two groups independently observed an unidentified X-ray line signal at an energy of 3.55 keV from galaxy clusters and the Andromeda galaxy. We show that this anomalous signal can be explained in an annihilating dark matter model, for example, the fermionic dark matter model in a hidden sector with global $U(1)_X$ symmetry proposed by Weinberg. There are two scenarios for the production of the annihilating dark matter. In the first scenario, the dark matter particles with mass 3.55 keV decouple from the interaction with Goldstone bosons and go out of thermal equilibrium at high temperature ($>$ 1 TeV) while they are still relativistic, their number density per comoving volume being essentially fixed to the current value. The correct relic abundance of this warm dark matter is obtained by assuming that about ${\cal O}(10^3)$ relativistic degrees of freedom were present at the decoupling temperature, or alternatively that large entropy production occurred at high temperature. In the other scenario, the dark matter particles were absent at...
Minimal Coleman-Weinberg theory explains the diphoton excess
DEFF Research Database (Denmark)
Antipin, Oleg; Mojaza, Matin; Sannino, Francesco
2016-01-01
It is possible to delay the hierarchy problem, by replacing the standard Higgs sector by the Coleman-Weinberg mechanism, and at the same time ensure perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, minimal models of this type require the introduction of an extra singlet scalar.
Weinberg's Approach and Antisymmetric Tensor Fields
Dvoeglazov, V V
2002-01-01
We extend the previous series of articles \cite{HPA} devoted to finding mappings between the Weinberg-Tucker-Hammer formalism and antisymmetric tensor fields. Now we take into account solutions of different parities of the Weinberg-like equations. Thus, the Proca, Duffin-Kemmer and Bargmann-Wigner formalisms are generalized.
Generalized Weinberg Sum Rules in Deconstructed QCD
Sekhar-Chivukula, R; Tanabashi, Masaharu; Kurachi, Masafumi; Tanabashi, Masaharu
2004-01-01
Recently, Son and Stephanov have considered an "open moose" as a possible dual model of a QCD-like theory of chiral symmetry breaking. In this note we demonstrate that although the Weinberg sum rules are satisfied in any such model, the relevant sums converge very slowly and in a manner unlike QCD. Further, we show that such a model satisfies a set of generalized sum rules. These sum rules can be understood by looking at the operator product expansion for the correlation function of chiral currents, and correspond to the absence of low-dimension gauge-invariant chiral symmetry breaking condensates. These results imply that, regardless of the couplings and F-constants chosen, the open moose is not the dual of any QCD-like theory of chiral symmetry breaking. We also show that the generalized sum rules lead to a compact expression for the difference of vector- and axial-current correlation functions. This expression allows for a simple formula for the S parameter (L_10), which implies that S is always positive a...
Spiral inflation with Coleman-Weinberg potential
Barenboim, Gabriela; Park, Wan-Il
2015-03-01
We apply the idea of spiral inflation to the Coleman-Weinberg potential and show that inflation matching our observations well is allowed for a symmetry-breaking scale ranging from an intermediate scale to a grand unified theory (GUT) scale, even if the quartic coupling λ is of O(0.1). The tensor-to-scalar ratio can be of O(0.01) in the case of GUT-scale symmetry breaking.
Is the Higgs boson associated with Coleman-Weinberg dynamical symmetry breaking?
Energy Technology Data Exchange (ETDEWEB)
Hill, Christopher T. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)
2014-04-01
The Higgs mechanism may be a quantum phenomenon, i.e., a Coleman-Weinberg potential generated by the explicit breaking of scale symmetry in Feynman loops. We review the relationship of scale symmetry and trace anomalies, and emphasize the role of the renormalization group in determining Coleman-Weinberg potentials. We propose a simple phenomenological model with "maximal visibility" at the LHC containing a "dormant" Higgs doublet (no VEV, coupled to standard model gauge interactions $SU(2)\times U(1)$) with a mass of $\sim 380$ GeV. We discuss the LHC phenomenology and UV challenges of such a model. We also give a schematic model in which new heavy fermions, with masses $\sim 230$ GeV, can drive a Coleman-Weinberg potential at two loops. The role of the "improved stress tensor" is emphasized, and we propose a non-gravitational term, analogous to the $\theta$-term in QCD, which generates it from a scalar action.
Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor
2013-01-01
In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missings are typically lowered with respect to inbreeding coefficients estimated by discarding the missings. Accounting for missings by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
Minimal Coleman-Weinberg theory explains the diphoton excess
DEFF Research Database (Denmark)
Antipin, Oleg; Mojaza, Matin; Sannino, Francesco
2016-01-01
It is possible to delay the hierarchy problem, by replacing the standard Higgs sector by the Coleman-Weinberg mechanism, and at the same time ensure perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, minimal models of this type require the introduction of an extra singlet scalar further coupled to new fermions. In this constrained setup the Higgs mass was close to the observed value and the new scalar mass was below the TeV scale. Here we first extend the previous analysis by taking into account the important difference between the running mass and the pole mass of the scalar states. We then investigate whether these theories can account for the 750 GeV excess in diphotons observed by the LHC collaborations. New QCD-colored fermions in the TeV mass range coupled to the new scalar state are needed to describe the excess. We further show, by explicit...
Exact tests for Hardy-Weinberg proportions.
Engels, William R
2009-12-01
Exact conditional tests are often required to evaluate statistically whether a sample of diploids comes from a population with Hardy-Weinberg proportions or to confirm the accuracy of genotype assignments. This requirement is especially common when the sample includes multiple alleles and sparse data, thus rendering asymptotic methods, such as the common χ²-test, unreliable. Such an exact test can be performed using the likelihood ratio as its test statistic rather than the more commonly used probability test. Conceptual advantages of using the likelihood ratio are discussed. A substantially improved algorithm is described to permit a full-enumeration exact test on sample sizes that are too large for previous methods. An improved Monte Carlo algorithm is also proposed for samples that preclude full enumeration. These algorithms are about two orders of magnitude faster than those currently in use. Finally, methods are derived to compute the number of possible samples with a given set of allele counts, a useful quantity for evaluating the feasibility of the full-enumeration procedure. Software implementing these methods, ExactoHW, is provided.
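As a rough illustration of the conditional exact test described above, here is a minimal Python sketch for the simplest biallelic case. It conditions on the observed allele counts and, for brevity, orders outcomes by the common probability-test criterion rather than the likelihood ratio the paper advocates; it also omits the multi-allele and Monte Carlo machinery. The function name and interface are illustrative assumptions, not ExactoHW's API.

```python
from fractions import Fraction
from math import factorial

def hwe_exact_pvalue(n_aa, n_ab, n_bb):
    """Exact test for Hardy-Weinberg proportions at a biallelic locus.

    Conditions on the observed allele counts and sums the probabilities of
    all heterozygote counts no more likely than the observed one (the
    'probability' ordering; the paper above argues for a likelihood-ratio
    ordering instead).
    """
    n = n_aa + n_ab + n_bb          # number of diploid individuals
    na = 2 * n_aa + n_ab            # copies of allele A
    nb = 2 * n_bb + n_ab            # copies of allele B

    def prob(nab):
        # P(n_AB = nab | n, na) under Hardy-Weinberg (exact rational)
        naa = (na - nab) // 2
        nbb = (nb - nab) // 2
        return Fraction(
            factorial(n) * 2**nab * factorial(na) * factorial(nb),
            factorial(naa) * factorial(nab) * factorial(nbb) * factorial(2 * n),
        )

    # heterozygote counts compatible with the allele counts share na's parity
    support = range(na % 2, min(na, nb) + 1, 2)
    p_obs = prob(n_ab)
    return float(sum(prob(k) for k in support if prob(k) <= p_obs))
```

For a sample of one AA, no Aa, and one aa individual this gives 1/3; a sample of 5 AA and 5 aa with no heterozygotes, despite equal allele frequencies, yields a P-value of about 0.0014, illustrating why exact tests matter for sparse data.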
Weinberg Angle Derivation from Discrete Subgroups of SU(2) and All That
Directory of Open Access Journals (Sweden)
Potter F.
2015-01-01
The Weinberg angle θ_W of the Standard Model of leptons and quarks is derived from specific discrete (i.e., finite) subgroups of the electroweak local gauge group SU(2)_L × U(1)_Y. In addition, the cancellation of the triangle anomaly is achieved even when there are four quark families and three lepton families!
Minimal Coleman-Weinberg theory explains the diphoton excess
Antipin, Oleg; Mojaza, Matin; Sannino, Francesco
2016-06-01
We replace the standard Higgs-mechanism by the Coleman-Weinberg mechanism, and investigate its viability through the addition of a new scalar field. As we showed in a previous study, minimal models of this type can alleviate the hierarchy problem of the Higgs-mass through the so-called Veltman conditions. We here extend the previous analysis by taking into account the important difference between running mass and pole mass of the scalar states. We then investigate whether these theories can account for the 750 GeV excess in diphotons observed by the LHC collaborations. New QCD-colored fermions in the TeV mass range coupled to the new scalar state are needed to describe the excess. We further show, by explicit computation of the running of the couplings, that the model is under perturbative control till just above the masses of the heaviest states of the theory. We further suggest related testable signatures and thereby show that the LHC experiments can test these models.
A natural Coleman-Weinberg theory explains the diphoton excess
Antipin, Oleg; Sannino, Francesco
2015-01-01
It is possible to delay the hierarchy problem, by replacing the standard Higgs-mechanism by the Coleman-Weinberg mechanism, and at the same time ensuring perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, this scenario requires the coupling to an extra singlet scalar, which also has to couple to a dark or heavy fermionic sector to satisfy the Veltman conditions for delayed naturalness and simultaneously be consistent with the experimental values of the standard model parameters. Intriguingly, the Higgs mass value becomes a prediction of this scenario. Furthermore, the mass of the extra singlet scalar is also predicted, and has so far been out of experimental reach. In this paper, we show that this scenario can explain a 750 GeV resonance, producing a diphoton excess by the decay of the heavy scalar through its predicted couplings to fermions. The severe theory constraints of this scenario lead to additional observable predictions; in particular, we will show t...
Stages in the evolution of the Hardy-Weinberg law
Directory of Open Access Journals (Sweden)
Alan E. Stark
2006-01-01
The Hardy-Weinberg law has been used widely for about one hundred years with little question as to the foundations laid down by its originators. The basic assumption of random mating, that is, choice of mates by a process akin to that of a lottery, was shown to produce genotypic proportions following the "binomial-square" rule, the so-called Hardy-Weinberg proportions (HWP). It has been assumed by many that random mating was the only way of pairing genes capable of producing HWP. However, it has been shown that HWP can be obtained and maintained by non-random mating. The steps along the way to this revelation and some implications are reviewed.
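The binomial-square rule mentioned above is easy to state in code. A minimal sketch (function names are ours): an allele frequency p yields genotype proportions p², 2pq, q², and one round of random mating returns the same allele frequency, which is why the proportions constitute an equilibrium reached in a single generation.

```python
def hardy_weinberg_proportions(p):
    """Binomial-square rule: genotype frequencies (AA, Aa, aa) for
    allele frequencies p and q = 1 - p under random mating."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

def allele_freq(f_aa, f_ab, f_bb):
    """Frequency of allele A implied by genotype frequencies: each AA
    individual carries two copies of A, each Aa individual one."""
    return f_aa + 0.5 * f_ab

# One generation of random mating maintains the allele frequency,
# so the Hardy-Weinberg proportions are self-reproducing.
p = 0.3
f = hardy_weinberg_proportions(p)
assert abs(allele_freq(*f) - p) < 1e-12
```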
Distributions of Hardy-Weinberg equilibrium test statistics.
Rohlfs, R V; Weir, B S
2008-11-01
It is well established that test statistics and P-values derived from discrete data, such as genetic markers, are also discrete. In most genetic applications, the null distribution for a discrete test statistic is approximated with a continuous distribution, but this approximation may not be reasonable. In some cases using the continuous approximation for the expected null distribution may cause truly null test statistics to appear nonnull. We explore the implications of using continuous distributions to approximate the discrete distributions of Hardy-Weinberg equilibrium test statistics and P-values. We derive exact P-value distributions under the null and alternative hypotheses, enabling a more accurate analysis than is possible with continuous approximations. We apply these methods to biological data and find that using continuous distribution theory with exact tests may underestimate the extent of Hardy-Weinberg disequilibrium in a sample. The implications may be most important for the widespread use of whole-genome case-control association studies and Hardy-Weinberg equilibrium (HWE) testing for data quality control.
y and v distributions for neutral current reactions of the Weinberg type
Albright, C H
1974-01-01
The y and v distributions for inclusive neutrino and antineutrino reactions arising from a neutral current of the Weinberg type are investigated in the framework of two quark parton models. While the v distributions appear of little use at present, it is shown that by making a cut in the y variable one can determine sin²θ_W reasonably accurately, independent of the cross-section determination, even with the present narrow-band dichromatic neutrino beam at NAL. (18 refs).
Asymptotic symmetries of QED and Weinberg's soft photon theorem
Campiglia, Miguel
2015-01-01
Various equivalences between so-called soft theorems which constrain scattering amplitudes and Ward identities related to asymptotic symmetries have recently been established in gauge theories and gravity. So far these equivalences have been restricted to the case of massless matter fields, the reason being that the asymptotic symmetries are defined at null infinity. The restriction is however unnatural from the perspective of soft theorems which are insensitive to the masses of the external particles. In this work we remove the aforementioned restriction in the context of scalar QED. Inspired by the radiative phase space description of massless fields at null infinity, we introduce a manifold description of time-like infinity on which the asymptotic phase space for massive fields can be defined. The "angle dependent" large gauge transformations are shown to have a well defined action on this phase space, and the resulting Ward identities are found to be equivalent to Weinberg's soft photon theorem.
Gravitational Coleman–Weinberg potential and its finite temperature counterpart
Energy Technology Data Exchange (ETDEWEB)
Bhattacharjee, Srijit [Astroparticle Physics and Cosmology Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Discipline of Physics, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat 382424 (India); Majumdar, Parthasarathi [Department of Physics, Ramakrishna Mission Vivekananada University, Belur Math, Howrah 711202 (India)
2014-08-15
Coleman–Weinberg (CW) phenomena for gravitons minimally coupled to a massless scalar field are studied. The one-loop effect completely vanishes if there is no self-interaction term present in the matter sector. The one-loop effective potential is shown to develop an instability in the form of an imaginary part, which can be traced to the tachyonic pole in the graviton propagator. The finite temperature counterpart of this CW potential is computed to study the behaviour of the potential in the high and low temperature regimes with respect to the typical energy scale of the theory. The finite temperature contribution to the imaginary part of the gravitational CW potential exhibits a damped oscillatory behaviour; all thermal effects are damped out as the temperature vanishes, consistent with the zero-temperature result.
A Lab Exercise Explaining Hardy-Weinberg Equilibrium and Evolution Effectively.
Winterer, Juliette
2001-01-01
Presents a set of six activities in population genetics for a college-level biology course that helps students understand the Hardy-Weinberg principle. Activities focus on characterizing a population, Hardy-Weinberg proportions, genetic drift, mutation and selection, population size and divergence, and secondary contact. The only materials…
From the Weinberg Angle to Cardiac MRI a career change
ten Have, I
1997-01-01
In summer 1994 I left particle physics to pursue a career in industry. Now, three years later, I am working for Philips Medical Systems in the Netherlands. I am responsible, both technically and commercially, for adapting the existing Magnetic Resonance Imaging (MRI) technique for cardiology applications. In this seminar I will talk about the interesting challenges my current position holds for me. I will present the substantial added value of my working experience at CERN: the physics research, working at technological frontiers, the international collaboration, expatriate life. Finally I will also describe what I have done to actively support this career change. Speaker: 1985-1989 - University of Nijmegen and CERN - Ph.D. on B0-B0bar mixing, Eurojet, top quark cross-sections, test of the O(αs³) QCD matrix element, calculations for future hadron colliders, UA1. 1989-1994 - University of Glasgow and CERN - ALEPH - Jet charge studies, measurement of the Weinberg Angle. 1994-1996 - Master of Business Administr...
Bertrand's Law and Weinberg's Principle and Their Extension
Institute of Scientific and Technical Information of China (English)
Cao Yuqing; Hu Kuanrong
2000-01-01
Bertrand's law states that a plant cannot live when some indispensable element is lacking, that an appropriate amount of the element makes the plant thrive, and that an excessive amount poisons the plant and can even kill it; it was obtained by G. Bertrand through laboratory studies of biologic adaptability to the indispensable element manganese. E. D. Weinberg developed Bertrand's law further, showing that a certain amount of manganese is appropriate for the growth of some bacteria but not for the formation of bacteriophage. The double threshold contents of elements indispensable for organisms, and their physiological effects, can be extended to different hydrogeochemical zones of a hydrogeological unit. Some elements are lacking in hydrogeochemical zones where element leaching and transfer are very strong; there the biological physiological effect is negatively related to the element content. However, in element-enrichment areas caused by leaching, concentration by evaporation, or environmental pollution, the biological physiological effects are positively related to the element content. The element content in other areas, which lie between the two types above, is appropriate for organisms. From the hydrogeochemical study in Liliu, Shanxi province, we found that the rates of KBD, IDD and dental caries result from deficiency of the elements Se, I and F in water (soil), respectively; the rate of these diseases is inversely related to the element content, while in the zone with excessive fluorine caused by enrichment and leaching, the rate of endemic fluorosis is positively related to the fluorine content.
The use of Hardy-Weinberg Equilibrium in clonal plant systems.
Douhovnikoff, Vladimir; Leventhal, Matthew
2016-02-01
Traditionally, population genetics precludes the use of the same genetic individual more than once in Hardy-Weinberg (HW) based calculations due to the model's explicit assumptions. However, when applied to clonal plant populations this can be difficult to do, and in some circumstances it may be ecologically informative to use the ramet as the data unit. In fact, ecologists have varied the definition of the individual from a strict adherence to a single data point per genotype to a more inclusive approach of one data point per ramet. With the advent of molecular tools, the list of facultatively clonal plants and the recognition of their ecological relevance grow. There is an important risk of misinterpretation when HW calculations are applied to a clonal plant not recognized as clonal, as well as when the definition of the individual used in those calculations is not clearly stated in a known clonal species. Focusing on heterozygosity values, we investigate cases that demonstrate the extreme range of potential modeling outcomes and describe the different contexts where a particular definition could better meet ecological modeling goals. We emphasize that the HW model can be ecologically relevant when applied to clonal plants, but caution is necessary in how it is used, reported, and interpreted. We propose that in known clonal plants, both genotype-based (GHet) and ramet-based (RHet) calculations be reported to define the full range of potential values and better facilitate cross-study comparisons.
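The genotype-based versus ramet-based distinction above can be made concrete with a small sketch. The names GHet/RHet follow the abstract; the data layout (one record per sampled ramet, tagged with its genet identity and heterozygosity at a locus) is an illustrative assumption.

```python
def heterozygosity(ramets, per="ramet"):
    """Observed heterozygosity for a clonal sample.

    `ramets` is a list of (genet_id, is_heterozygous) pairs, one per
    sampled ramet. per="ramet" counts every ramet (RHet, one data point
    per ramet); per="genet" counts each distinct genotype once (GHet).
    """
    if per == "genet":
        unique = {gid: het for gid, het in ramets}  # one entry per genet
        vals = list(unique.values())
    else:
        vals = [het for _, het in ramets]
    return sum(vals) / len(vals)

# A large heterozygous clone dominating the sample drives RHet and GHet
# apart, which is exactly the misinterpretation risk described above.
sample = [("g1", True), ("g1", True), ("g1", True), ("g2", False)]
rhet = heterozygosity(sample, per="ramet")   # 0.75
ghet = heterozygosity(sample, per="genet")   # 0.5
```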
Powerful haplotype-based Hardy-Weinberg equilibrium tests for tightly linked loci.
Directory of Open Access Journals (Sweden)
Wei-Gao Mao
Full Text Available Recently, many case-control studies have been proposed to test for association between haplotypes and disease, and these require the Hardy-Weinberg equilibrium (HWE) assumption for haplotype frequencies. As such, haplotype inference from unphased genotypes and the development of haplotype-based HWE tests are crucial prior to fine mapping. The goodness-of-fit test is a frequently used method to test for HWE at multiple tightly linked loci. However, its degrees of freedom increase dramatically with the number of loci, which may reduce its power. Therefore, in this paper, to improve the power of haplotype-based HWE testing, we first write out two likelihood functions of the observed data based on Niu's model (NM) and the inbreeding model (IM), respectively, either of which can cause departure from HWE. Then, we use two expectation-maximization algorithms and one expectation-conditional-maximization algorithm to estimate the model parameters under the HWE, IM and NM models, respectively. Finally, we propose the likelihood ratio tests LRT[Formula: see text] and LRT[Formula: see text] for haplotype-based HWE under the NM and IM models, respectively. We simulate the HWE, Niu's, inbreeding and population stratification models to assess the validity and compare the performance of these two LRT tests. The simulation results show that both tests control the type I error rates well in testing for haplotype-based HWE. If the NM model is true, then LRT[Formula: see text] is more powerful, whereas if the IM model is true, then LRT[Formula: see text] performs better in power. Under the population stratification model, LRT[Formula: see text] is still more powerful. For this reason, LRT[Formula: see text] is generally recommended. Application of the proposed methods to a rheumatoid arthritis data set further illustrates their utility for real data analysis.
Systematic study of the d=5 Weinberg operator at one-loop order
Bonnet, Florian; Ota, Toshihiko; Winter, Walter
2012-01-01
We perform a systematic study of the $d=5$ Weinberg operator at the one-loop level. We identify three different categories of neutrino mass generation: (1) finite irreducible diagrams; (2) finite extensions of the usual seesaw mechanisms at one loop; and (3) divergent loop realizations of the seesaws. All radiative one-loop neutrino mass models must fall into one of these classes. Case (1) naturally gives the leading contribution to neutrino mass, and a classic example of this class is the Zee model. We demonstrate that, in order to prevent a tree-level contribution from dominating in case (2), Majorana fermions running in the loop and an additional $\mathbb{Z}_2$ symmetry are needed for a genuinely leading one-loop contribution. In the type-II loop extensions, the lepton number violating coupling will be generated at one loop, whereas the type-I/III extensions can be interpreted as loop-induced inverse or linear seesaw mechanisms. For the divergent diagrams in category (3), the tree level contribution cannot be ...
The evolution of gauge couplings and the Weinberg angle in 5 dimensions for an SU(3) gauge group
Khojali, Mohammed Omer; Deandrea, Aldo
2016-01-01
We test, in a simplified 5-dimensional model with SU(3) gauge symmetry, the evolution equations of the gauge couplings for a model containing bulk fields, gauge fields and one pair of fermions. In this model we assume that the fermion doublet and two singlet fields are located at fixed points of the extra dimension compactified on an $S^{1}/Z_{2}$ orbifold. The gauge coupling evolution is derived at one loop in 5 dimensions, for the gauge group $G = SU(3)$, and used to test the impact on lower energy observables, in particular the Weinberg angle. The gauge bosons and the Higgs field arise from the gauge bosons in 5 dimensions, as in a gauge-Higgs model. The model is used as a testing ground, as it is not a complete and realistic model for the electroweak interactions.
Dark energy from logarithmically modified gravity and deformed Coleman-Weinberg potential
Institute of Scientific and Technical Information of China (English)
Ahmad Rami El-Nabulsi
2011-01-01
Recent astrophysical measurements strongly suggest the existence of a missing energy component dubbed dark energy that is responsible for the current accelerated expansion of the universe. A new class of modified gravity theory is introduced which yields a universe accelerating in time and dominated by dark energy. The new modified gravity model constructed here concurrently includes a Gauss-Bonnet invariant term, a barotropic fluid with a time-dependent equation of state parameter, a Coleman-Weinberg (CW) potential-like expression V(φ) = ξφ^m ln φ^n, and a new Einstein-Hilbert term f(R, φ) = E(φ)R which depends on both the scalar curvature and the scalar field φ through a generic logarithmic function E(φ) = ln φ. Here m and n take different values from the standard CW potential and ξ is a real parameter. It was shown that the presence of these terms provides many useful features which are discussed in some detail.
Graffelman, Jan; Nelson, S.; Gogarten, S. M.; Weir, B. S.
2015-01-01
This paper addresses the issue of exact-test based statistical inference for Hardy-Weinberg equilibrium in the presence of missing genotype data. Missing genotypes often are discarded when markers are tested for Hardy-Weinberg equilibrium, which can lead to bias in the statistical inference about equilibrium. Single and multiple imputation can improve inference on equilibrium. We develop tests for equilibrium in the presence of missingness by using both inbreeding coefficients (or, equivalently, χ2 statistics) and exact p-values. The analysis of a set of markers with a high missing rate from the GENEVA project on prematurity shows that exact inference on equilibrium can be altered considerably when missingness is taken into account. For markers with a high missing rate (>5%), we found that both single and multiple imputation tend to diminish evidence for Hardy-Weinberg disequilibrium. Depending on the imputation method used, 6-13% of the test results changed qualitatively at the 5% level. PMID:26377959
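The equivalence between the inbreeding coefficient and the χ2 statistic mentioned in the abstract can be shown with a minimal sketch for a single biallelic marker. The relation χ2 = nF̂2 used below is the standard one for the 1-df test; this is illustrative code with made-up counts, not the paper's implementation.

```python
def inbreeding_coefficient(n_AA, n_Aa, n_aa):
    """F-hat = 1 - Hobs/Hexp for a biallelic marker; F = 0 under exact HWE."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # frequency of allele A
    h_obs = n_Aa / n                  # observed heterozygosity
    h_exp = 2 * p * (1 - p)           # HWE-expected heterozygosity
    return 1 - h_obs / h_exp

# Counts exactly at HWE for p = 0.5 (n = 100): F-hat is zero.
f0 = inbreeding_coefficient(25, 50, 25)

# Heterozygote deficit (inbreeding or genotyping error): F-hat > 0.
f1 = inbreeding_coefficient(35, 30, 35)
chi2 = (35 + 30 + 35) * f1 ** 2       # equivalent 1-df chi-square statistic
print(f0, round(f1, 2), round(chi2, 2))  # -> 0.0 0.4 16.0
```

A negative F̂ would indicate heterozygote excess; the squared form of the χ2 statistic detects departure in either direction.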
Maciejewski, W
2006-01-01
When integrals in the standard Tremaine-Weinberg method are evaluated for the case of a realistic model of a doubly barred galaxy, their modifications introduced by the second rotating pattern are in accord with what can be derived from a simple extension of that method, based on separation of tracer's density. This extension yields a qualitative argument that discriminates between prograde and retrograde inner bars. However, the estimate of the value of inner bar's pattern speed requires further assumptions. When this extension of the Tremaine-Weinberg method is applied to the recent observation of the doubly barred galaxy NGC 2950, it indicates that the inner bar there is counter-rotating, possibly with the pattern speed of -140 +/- 50 km/s/arcsec. The occurrence of counter-rotating inner bars can constrain theories of galaxy formation.
Weinberg's nonlinear quantum mechanics and the Einstein-Podolsky-Rosen paradox
Polchinski, Joseph
1991-01-01
The constraints imposed on observables by the requirement that transmission not occur in the Einstein-Podolsky-Rosen (EPR) experiment are determined, leading to a different treatment of separated systems from that originally proposed by Weinberg (1989). It is found that forbidding EPR communication in nonlinear quantum mechanics necessarily leads to another sort of unusual communication: that between different branches of the wave function.
The neutron electric dipole moment and the Weinberg mechanism
Energy Technology Data Exchange (ETDEWEB)
Chang, D. (Northwestern Univ., Evanston, IL (USA). Dept. of Physics and Astronomy; Fermi National Accelerator Lab., Batavia, IL (USA))
1990-01-01
We give an overview of various mechanisms for CP violation, paying special attention to their predictions for the neutron electric dipole moment. The implications of the recent developments associated with the color electric dipole moment of the gluon in various models of CP violation are then critically assessed. 25 refs.
On S.N. Bernstein's derivation of Mendel's Law and 'rediscovery' of the Hardy-Weinberg distribution
Directory of Open Access Journals (Sweden)
Alan Stark
2012-01-01
Full Text Available Around 1923 the soon-to-be famous Soviet mathematician and probabilist Sergei N. Bernstein started to construct an axiomatic foundation for a theory of heredity. He began from the premise of stationarity (constancy of type proportions from the first generation of offspring onwards). This led him to derive the Mendelian coefficients of heredity. It appears that he had no direct influence on the subsequent development of population genetics. A basic assumption of Bernstein's was that parents coupled randomly to produce offspring. This paper shows that a simple model of non-random mating, which nevertheless embodies a feature of the Hardy-Weinberg Law, can produce Mendelian coefficients of heredity while maintaining the population distribution. How W. Johannsen's monograph influenced Bernstein is discussed.
Strangeness $S=-1$ hyperon-nucleon scattering at leading order in the covariant Weinberg's approach
Li, Kai-Wen; Geng, Li-Sheng
2016-01-01
Inspired by the success of covariant baryon chiral perturbation theory in the one-baryon sector and in heavy-light systems, we explore the relevance of relativistic effects in the construction of the strangeness $S=-1$ hyperon-nucleon interaction using chiral perturbation theory. Due to the non-perturbative nature of the hyperon-nucleon interaction, in this exploratory work we follow the covariant Weinberg approach recently proposed by Epelbaum and Gegelia to sum the leading order chiral potential using the Kadyshevsky equation (Epelbaum, 2012). By fitting the five low-energy constants to available experimental data, we find that the cutoff dependence is mitigated compared with the results obtained in the Weinberg approach, for both the partial wave phase shifts and the description of experimental data. Nevertheless, at leading order, the description of experimental data remains quantitatively similar. We discuss in detail the cutoff dependence of the partial wave phase shifts and cross sections in the Wei...
A note on testing the Hardy-Weinberg law across strata.
Troendle, J F; Yu, K F
1994-10-01
The problem of testing the Hardy-Weinberg law when the data are stratified in K strata is considered. Previous methods lose power when the departure from the law is irregular from stratum to stratum. Two methods based on the squared distance are proposed to overcome this problem. Simulations show that the new methods can have a dramatic improvement over the previous methods. The methods are applied to red cell glyoxalase genotype data from populations in India.
Small field Coleman-Weinberg inflation driven by a fermion condensate
Iso, Satoshi; Kohri, Kazunori; Shimada, Kengo
2015-02-01
We revisit small-field Coleman-Weinberg inflation, which has the following two problems: first, the smallness of the slow-roll parameter ε requires the inflation scale to be very low; second, the spectral index n_s ≈ 1 + 2η tends to be smaller than the observed value. In this paper, we consider two possible effects on the dynamics of inflation: a radiatively generated nonminimal coupling to gravity ξφ²R and condensation of fermions coupled to the inflaton as φψ̄ψ. We show that the fermion condensate can solve the above problems.
Rapid convergence of the Weinberg expansion of the deuteron stripping amplitude
Pang, D Y; Johnson, R C; Tostevin, J A
2013-01-01
Theories of $(d,p)$ reactions frequently use a formalism based on a transition amplitude that is dominated by the components of the total three-body scattering wave function in which the spatial separation between the incoming neutron and proton is confined by the range of the $n$-$p$ interaction, $V_{np}$. By comparison with calculations based on the CDCC method, we show that the $(d,p)$ transition amplitude is dominated by the first term of the expansion of the three-body wave function in a complete set of Weinberg states. We use the ...
Energy Technology Data Exchange (ETDEWEB)
Weinberg, Steven [Texas Univ., Austin, TX (United States). Dept. fuer Physik und Astronomie
2015-07-01
Quantum mechanics represents the central revolution of modern natural science, and its importance reaches far beyond physics. Neither chemistry nor biology on the molecular scale would be understandable without it. Modern information technology, from the laptop through the mobile telephone and the flat screen to the supercomputer, would be unthinkable without quantum-mechanical effects. It describes the world on the atomic and subatomic scale and is thereby the starting point of our modern worldview. The Nobel laureate Steven Weinberg made, among other things through his theory of the unification of the weak and electromagnetic interactions, one of the most important contributions to this revolution. In this book he presents his personal view of quantum mechanics, which captivates through its strictly logical construction, precise language, and mathematical clarity and completeness. The book is aimed at students of the natural sciences, especially physics. The text is accompanied by exercise problems that allow students to apply their knowledge immediately and to test their understanding. Because of its precision and clarity, Weinberg's ''Lectures on Quantum Mechanics'' is also eminently suited for self-study.
Directory of Open Access Journals (Sweden)
Terry Noviar Panggabean
2016-08-01
Full Text Available Abstract—Because combinatorial optimization problems are abstract representations of real decision-making systems in everyday life, they are generally very difficult to solve. The bin packing problem is a leading formulation of combinatorial optimization, used to pack a finite set of objects optimally. A series of hybrid approaches has been developed to solve the bin packing problem. Metaheuristics are high-level approaches that guide the modification of other heuristic methods in search of better optimization. The genetic algorithm is one such metaheuristic method, used to solve a wide range of optimization problems, and it exists in many variants. This study presents a taxonomy of parallel genetic algorithms, which outperform the conventional genetic algorithm in performance and scalability, although parallel genetic algorithms are suited mainly to heterogeneous computer networks and distributed systems. Based on prior work and the considerations above, the authors investigate how to apply the Hardy-Weinberg equilibrium law from biology within a genetic algorithm, and analyze the resulting degree of optimization on the bin packing problem. Keywords—Genetic Algorithm, Hardy-Weinberg, Bin Packing Problem.
A new method of testing for Hardy–Weinberg equilibrium and ordering populations
Indian Academy of Sciences (India)
Nader Ebrahimi; Devrim Bilgili
2007-01-01
The assumption of Hardy-Weinberg equilibrium (HWE) among alleles in a nonevolving population is of fundamental importance in genetic studies. Deviation from HWE in a population usually indicates inbreeding, stratification, and sometimes problems in genotyping. In populations of affected individuals, these deviations can also provide evidence for association. In this paper, we introduce a measure based on the Kullback-Leibler discrimination information function that quantifies the deviation from HWE in a population. We use this measure to order populations. We also propose a test for HWE based on an estimate of this measure. The test is a statistically consistent test of the null hypothesis for all alternatives and is very easy to implement. Our proposed test statistic is compared with an earlier, widely used, test. Finally, the use of the proposed new test is shown in an illustrative example.
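A Kullback-Leibler-type deviation measure of the kind described above can be sketched as follows for a single biallelic locus, with the KL divergence taken between observed genotype frequencies and their HWE expectation. This is an illustrative sketch with made-up counts; the paper's actual estimator and ordering procedure may differ in detail.

```python
from math import log

def kl_hwe(n_AA, n_Aa, n_aa):
    """Kullback-Leibler divergence (natural-log units) of the observed genotype
    distribution from its HWE expectation at the observed allele frequency.
    Zero if and only if the sample is exactly at HWE."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)
    obs = [n_AA / n, n_Aa / n, n_aa / n]
    exp = [p * p, 2 * p * (1 - p), (1 - p) * (1 - p)]
    return sum(o * log(o / e) for o, e in zip(obs, exp) if o > 0)

k_null = kl_hwe(25, 50, 25)   # exactly at HWE for p = 0.5
k_dev = kl_hwe(35, 30, 35)    # heterozygote deficit at the same p
print(k_null, round(k_dev, 4))
```

Because the measure is zero only at exact HWE and grows with the departure, it induces the ordering of populations the abstract describes.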
Edery, Ariel; Graham, Noah
2015-05-01
We consider a massless conformally (Weyl) invariant classical action consisting of a magnetic monopole coupled to gravity in an anti-de Sitter background spacetime. We implement quantum corrections; this breaks the conformal (Weyl) symmetry, introduces a length scale via the process of renormalization, and leads to the trace anomaly. We calculate the one-loop effective potential and determine from it the vacuum expectation value (VEV). Spontaneous symmetry breaking is radiatively induced à la Coleman-Weinberg, and the scalar coupling constant is exchanged for the dimensionful VEV via dimensional transmutation. An important result is that the Ricci scalar of the AdS background spacetime is determined entirely by the value of the VEV.
Evading Weinberg's no-go theorem to construct mass dimension one fermions: Constructing darkness
Ahluwalia, Dharam Vir
2016-01-01
Recent theoretical work reporting the construction of a new quantum field of spin one-half fermions with mass dimension one requires that Weinberg's no-go theorem be evaded. Here we show how this comes about. The essence of the argument is to first define a quantum field with due care being taken in fixing the locality phases attached to each of the expansion coefficients. The second ingredient is to systematically construct the adjoint/dual of the field. The Feynman-Dyson propagator constructed from the vacuum expectation value of the field and its adjoint then yields the mass dimensionality of the field. For a quantum field constructed from a complete set of eigenspinors of the charge conjugation operator, with locality phases judiciously chosen, the Feynman-Dyson propagator has mass dimension one. The Lorentz symmetry is preserved and locality anticommutators are satisfied, without violating fermionic statistics as needed for the spin one-half field.
The Tremaine-Weinberg method for pattern speeds using H-alpha emission from ionized gas
Beckman, John E; Piñol, Núria; Toonen, Silvia; Hernandez, Olivier; Carignan, Claude
2007-01-01
The Fabry-Perot interferometer FaNTOmM was used at the 3.6m Canada France Hawaii Telescope and the 1.6m Mont Megantic Telescope to obtain data cubes in H-alpha of 9 nearby spiral galaxies from which maps in integrated intensity, velocity, and velocity dispersion were derived. We then applied the Tremaine-Weinberg method, in which the pattern speed can be deduced from its velocity field, by finding the integrated value of the mean velocity along a slit parallel to the major axis weighted by the intensity and divided by the weighted mean distance of the velocity points from the tangent point measured along the slit. The measured variables can be used either to make separate calculations of the pattern speed and derive a mean, or in a plot of one against the other for all the points on all slits, from which a best fit value can be derived. Linear fits were found for all the galaxies in the sample. For two galaxies a clearly separate inner pattern speed with a higher value, was also identified and measured.
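The single-slit computation described above can be sketched numerically: the pattern speed follows from the intensity-weighted mean velocity divided by the intensity-weighted mean position along a slit parallel to the major axis. The code below is an illustrative sketch, not the authors' pipeline; the rigidly rotating synthetic disc and all numbers (slit geometry, inclination, pattern speed) are made up purely to check that the estimator recovers a known input.

```python
import numpy as np

def tw_pattern_speed(x, intensity, velocity, inclination_deg):
    """Tremaine-Weinberg estimate from one slit parallel to the major axis:
    Omega_p * sin(i) = <v>_I / <x>_I, where <.>_I denotes the intensity-weighted
    mean of the line-of-sight velocity and of position along the slit."""
    mean_v = np.sum(intensity * velocity) / np.sum(intensity)
    mean_x = np.sum(intensity * x) / np.sum(intensity)
    return mean_v / (mean_x * np.sin(np.radians(inclination_deg)))

# Synthetic consistency check: a pattern rotating rigidly at omega_true gives
# v_los = omega_true * x * sin(i) along the slit, so the estimator should
# recover omega_true for any positive intensity weighting.
omega_true, inc = 40.0, 60.0                  # km/s/kpc, degrees (illustrative)
x = np.linspace(-5.0, 5.0, 201)               # positions along the slit (kpc)
intensity = np.exp(-((x - 1.0) ** 2) / 4.0)   # asymmetric so <x>_I != 0
velocity = omega_true * x * np.sin(np.radians(inc))
omega_est = tw_pattern_speed(x, intensity, velocity, inc)
print(round(omega_est, 1))                    # -> 40.0
```

In practice one computes the two weighted means per slit and fits a straight line of <v>_I against <x>_I over all slits, as the abstract describes; a second, steeper linear branch signals a distinct inner pattern speed.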
The Tremaine-Weinberg Method for Pattern Speeds Using Hα Emission from Ionized Gas
Beckman, J. E.; Fathi, K.; Piñol, N.; Toonen, S.; Hernandez, O.; Carignan, C.
2008-10-01
The Fabry-Perot interferometer FaNTOmM was used at the 3.6-m CFHT and the 1.6-m Mont Mégantic Telescope to obtain data cubes in Hα of 9 nearby spiral galaxies from which maps in integrated intensity, velocity, and velocity dispersion were derived. We then applied the Tremaine-Weinberg method, in which the pattern speed can be deduced from its velocity field, by finding the integrated value of the mean velocity along a slit parallel to the major axis weighted by the intensity and divided by the weighted mean distance of the velocity points from the tangent point measured along the slit. The measured variables can be used either to make separate calculations of the pattern speed and derive a mean, or in a plot of one against the other for all the points on all slits, from which a best fit value can be derived. Linear fits were found for all the galaxies in the sample. For two galaxies a clearly separate inner pattern speed with a higher value, was also identified and measured.
Zeldovich Lambda and Weinberg Relation: An Explanation for the Cosmological Coincidences
Alfonso-Faus, Antonio
2008-01-01
We prove that Zeldovich's formula for the cosmological constant, in terms of the gravitational constant G, Planck's constant, and a fundamental particle mass m, is equivalent to the Weinberg relation. The latter defines the mass m of a fundamental particle in terms of the same constants, G and h, plus the speed of light c and the Hubble parameter H. Then the speed of light c must be proportional to the Hubble parameter H. We explain the cosmological coincidences and fine-tuning problems that are puzzling research in cosmology: we find that the gravitational radius of the Universe and its size are one and the same constant, from a cosmological point of view. Also, the matter (energy) density of the Universe and the vacuum energy density are now, and have always been, of the same order of magnitude. We solve the coincidence problem and the cosmological constant problem. We decouple the cosmological constant concept from the vacuum energy concept. These results are achieved by the use of a cosmic Planck constant th...
Testing for Hardy–Weinberg equilibrium at biallelic genetic markers on the X chromosome
Graffelman, J; Weir, B S
2016-01-01
Testing genetic markers for Hardy-Weinberg equilibrium (HWE) is an important tool for detecting genotyping errors in large-scale genotyping studies. For markers on the X chromosome, typically the χ2 or exact test is applied to the females only, and the hemizygous males are considered to be uninformative. In this paper we show that the males are relevant, because a difference in allele frequency between males and females may indicate that HWE does not hold. The testing of markers on the X chromosome has received little attention, and in this paper we lay down the foundation for testing biallelic X-chromosomal markers for HWE. We develop four frequentist statistical test procedures for X-linked markers that take both males and females into account: the χ2 test, likelihood ratio test, exact test and permutation test. Exact tests that include males are shown to have a better Type I error rate. Empirical data from the GENEVA project on venous thromboembolism is used to illustrate the proposed tests. Results obtained with the new tests differ substantially from tests that are based on female genotype counts only. The new tests detect differences in allele frequencies and seem able to uncover additional genotyping error that would have gone unnoticed in HWE tests based on females only. PMID:27071844
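A joint test in this spirit can be sketched as a plain Pearson χ2 over all five observed classes (two male hemizygous classes plus three female genotype classes), with expectations computed from the pooled allele frequency under the null of HWE and equal allele frequencies in both sexes. This is an illustrative sketch with made-up counts, not necessarily identical to any of the four test procedures developed in the paper.

```python
def x_hwe_chisq(m_A, m_a, f_AA, f_Aa, f_aa):
    """Pearson chi-square for an X-linked biallelic marker using males and
    females jointly; 2 df under the null of HWE in females and equal allele
    frequencies in both sexes."""
    n_m = m_A + m_a
    n_f = f_AA + f_Aa + f_aa
    # Pooled allele frequency: males carry one X, females carry two.
    p = (m_A + 2 * f_AA + f_Aa) / (n_m + 2 * n_f)
    q = 1 - p
    observed = [m_A, m_a, f_AA, f_Aa, f_aa]
    expected = [n_m * p, n_m * q, n_f * p * p, n_f * 2 * p * q, n_f * q * q]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Females alone look perfectly at HWE (25/50/25, p_f = 0.5), but the males have
# a very different allele frequency (p_m = 0.8): the joint statistic is large,
# whereas a females-only test would see nothing.
stat = x_hwe_chisq(80, 20, 25, 50, 25)
print(round(stat, 2))
```

This is exactly the situation the abstract warns about: a marker that passes a females-only HWE test can still carry a strong male-female allele-frequency discrepancy.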
The Universal Eigenvalue Bounds of Payne–Pólya–Weinberger, Hile–Protter, and H C Yang
Indian Academy of Sciences (India)
Mark S Ashbaugh
2002-02-01
In this paper we present a unified and simplified approach to the universal eigenvalue inequalities of Payne–Pólya–Weinberger, Hile–Protter, and Yang. We then generalize these results to inhomogeneous membranes and Schrödinger's equation with a nonnegative potential. We also show that Yang's inequality is always better than Hile–Protter's (and hence also better than Payne–Pólya–Weinberger's). In fact, Yang's weaker inequality (which deserves to be better known), $$\lambda_{k+1} < \left(1+\frac{4}{n}\right)\frac{1}{k}\sum^k_{i=1} \lambda_i,$$ is also strictly better than Hile–Protter's. Finally, we treat Yang's (and related) inequalities for minimal submanifolds of a sphere and domains contained in a sphere by our methods.
The Pattern Speeds of M51, M83 and NGC 6946 Using CO and the Tremaine-Weinberg Method
Zimmer, P; McGraw, J T
2004-01-01
In spiral galaxies where the molecular phase dominates the ISM, the molecular gas as traced by CO emission will approximately obey the continuity equation on orbital timescales. The Tremaine-Weinberg method can then be used to determine the pattern speed of such galaxies. We have applied the method to single-dish CO maps of three nearby spirals, M51, M83 and NGC 6946 to obtain estimates of their pattern speeds: 38 +/- 7 km/s/kpc, 45 +/- 8 km/s/kpc and 39 +/- 8 km/s/kpc, respectively, and we compare these results to previous measurements. We also analyze the major sources of systematic errors in applying the Tremaine-Weinberg method to maps of CO emission.
Adler, Stephen L
2016-01-01
We study $SU(8)$ symmetry breaking induced by minimizing the Coleman-Weinberg effective potential for a third rank antisymmetric tensor scalar field in the 56 representation. Instead of breaking $SU(8) \\supset SU(3) \\times SU(5)$, we find that the stable minimum of the potential breaks the original symmetry according to $SU(8) \\supset SU(3) \\times Sp(4)$. Using both numerical and analytical methods, we present results for the potential minimum, the corresponding Goldstone boson structure and BEH mechanism, and the group-theoretic classification of the residual states after symmetry breaking.
Adler, Stephen L.
2016-08-01
We study SU(8) symmetry breaking induced by minimizing the Coleman-Weinberg effective potential for a third rank antisymmetric tensor scalar field in the 56 representation. Instead of breaking $SU(8) \supset SU(3) \times SU(5)$, we find that the stable minimum of the potential breaks the original symmetry according to $SU(8) \supset SU(3) \times Sp(4)$. Using both numerical and analytical methods, we present results for the potential minimum, the corresponding Goldstone boson structure and BEH mechanism, and the group-theoretic classification of the residual states after symmetry breaking.
Adler, Stephen L
2016-01-01
We continue our study of Coleman-Weinberg symmetry breaking induced by a third rank antisymmetric tensor scalar, in the context of the $SU(8)$ model [1] we proposed earlier. We discuss the mechanism for giving the spin $\frac{3}{2}$ field a mass by the BEH mechanism, and analyze the remaining massless spin $\frac{1}{2}$ fermions, the global chiral symmetries, and the running couplings after symmetry breaking. We note that the smallest gluon mass matrix eigenvalue has an eigenvector suggestive of $U(1)_{B-L}$, and conjecture that the theory runs to an infrared fixed point at which there is a massless gluon with 3 to -1 ratios in generator components. Assuming this, we discuss a mechanism for producing hierarchies, and for generating the standard model fermions as composites formed from the original $SU(8)$ model fermions, which play the role of "preons". Quarks can emerge as 5-preon composites and leptons as 3-preon composites, with consequent stability of the proton against decay to a single lepton plus mesons.
Directory of Open Access Journals (Sweden)
Ana María Abreu Velez
2009-01-01
Full Text Available Background: We reported a new variant of endemic pemphigus foliaceus in El Bagre, Colombia. Aims: Our study performed Complex Segregation Analysis (CSA) and used short tandem repeats to discriminate between environmental and/or genetic factors in this disorder. Materials and Methods: The CSA was carried out according to the unified model, implemented using the transmission probabilities in the computer program POINTER, and evaluated using software packages for population genetic data analysis (GDA, Arlequin). We performed pedigree analyses using Cyrillic 2.1 software, with a total of 30 families with 50 probands (47 males and 3 females) tested. In parallel to the CSA, we tested for the presence of short tandem repeats from HLA class II, DQ alpha 1, involving the gene locus D6S291, using the Hardy-Weinberg-Castle law. Results: Our results indicate that the best model of inheritance in this disease is a mixed model, with multifactorial effects within a recessive genotype. Two types of possible segregation patterns were found: one with strong recessive penetrance in families whose phenotype is more Amerindian-like, and another of possible somatic mutations. Conclusion: The penetrance of 10% or less in female patients 60 years of age or older indicates that hormones could protect younger females. The greatest risk factor for men being affected by the disorder was the NN genotype. These findings are only possible due to somatic mutations and/or strong environmental effects. We also found a protective role for two genetic loci (D6S1019 and D6S439) in the control group.
Pattern Speeds of BIMA-SONG Galaxies with Molecule-Dominated ISMs Using the Tremaine-Weinberg Method
Rand, R J; Rand, Richard J.; Wallin, John F.
2004-01-01
We apply the Tremaine-Weinberg method of pattern speed determination to data cubes of CO emission in six spiral galaxies from the BIMA SONG survey each with an ISM dominated by molecular gas. We compare derived pattern speeds with estimates based on other methods, usually involving the identification of a predicted behavior at one or more resonances of the pattern(s). In two cases (NGC 1068 and NGC 4736) we find evidence for a central bar pattern speed that is greater than that of the surrounding spiral and roughly consistent with previous estimates. However, the spiral pattern speed in both cases is much larger than previous determinations. For the barred spirals NGC 3627 and NGC 4321, the method is insensitive to the bar pattern speed (the bar in each is nearly parallel to the major axis; in this case the method will not work), but for the former galaxy the spiral pattern speed found agrees with previous estimates of the bar pattern speed, suggesting that these two structures are part of a single pattern. F...
Directory of Open Access Journals (Sweden)
Mohammad Hadi Zafarmand
Full Text Available BACKGROUND: The M235T polymorphism in the AGT gene has been related to an increased risk of hypertension. This finding may also suggest an increased risk of coronary heart disease (CHD). METHODOLOGY/PRINCIPAL FINDINGS: A case-cohort study was conducted in 1,732 unrelated middle-aged women (210 CHD cases and 1,522 controls) from a prospective cohort of 15,236 initially healthy Dutch women. We applied a Cox proportional hazards model to study the association of the polymorphism with acute myocardial infarction (AMI) (n = 71) and CHD. In the case-cohort study, no increased risk for CHD was found under the additive genetic model (hazard ratio [HR] = 1.20; 95% confidence interval [CI], 0.86 to 1.68; P = 0.28). This result was not changed by adjustment (HR = 1.17; 95% CI, 0.83 to 1.64; P = 0.38), nor by using dominant, recessive and pairwise genetic models. Analyses for AMI risk under the additive genetic model also did not show any statistically significant association (crude HR = 1.14; 95% CI, 0.93 to 1.39; P = 0.20). To evaluate the association, a comprehensive systematic review and meta-analysis were undertaken of all studies published up to February 2007 (searched through PubMed/MEDLINE, Web of Science and EMBASE). The meta-analysis (38 studies with 13,284 cases and 18,722 controls) showed a per-allele odds ratio (OR) of 1.08 (95% CI, 1.01 to 1.15; P = 0.02). Moderate to large levels of heterogeneity were identified between studies. Hardy-Weinberg equilibrium (HWE) violation and the mean age of cases were statistically significant sources of the observed variation. In the stratum of non-HWE-violating studies, there was no effect. An asymmetric funnel plot, Egger's test (P = 0.066), and the Begg-Mazumdar test (P = 0.074) were all suggestive of the presence of publication bias. CONCLUSIONS/SIGNIFICANCE: The pooled OR of the present meta-analysis, including our own data, presented evidence that there is an increase in the risk of CHD conferred by the M235T variant.
Allen, Roland E
2013-01-01
The particle recently discovered by the CMS and ATLAS collaborations at CERN is almost certainly a Higgs boson, fulfilling a quest that can be traced back to three seminal high energy papers of 1964, but which is intimately connected to ideas in other areas of physics that go back much further. One might oversimplify the history of the features which (i) give mass to the W and Z particles that mediate the weak nuclear interaction, (ii) effectively break gauge invariance, (iii) eliminate physically unacceptable Nambu-Goldstone bosons, and (iv) give mass to fermions (like the electron) by collectively calling them the London-Anderson-Englert-Brout-Higgs-Guralnik-Hagen-Kibble-Weinberg mechanism. More important are the implications for the future: a Higgs boson appears to point toward supersymmetry, since new physics is required to protect its mass from enormous quantum corrections, while the discovery of neutrino masses seems to point toward grand unification of the nongravitational forces.
Chui, Tina Tsz-Ting; Lee, Wen-Chung
2014-01-01
Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption.
Trejo, Salvador; Toscano-Flores, José J; Matute, Esmeralda; Ramírez-Dueñas, María de Lourdes
2015-01-01
The aim of this study was to obtain the genotype and gene frequency from parents of children with attention-deficit/hyperactivity disorder (ADHD) and then assess the Hardy–Weinberg equilibrium of genotype frequency of the variable number tandem repeat (VNTR) III exon of the dopamine receptor D4 (DRD4) gene. The genotypes of the III exon of 48 bp VNTR repeats of the DRD4 gene were determined by polymerase chain reaction in a sample of 30 parents of ADHD cases. In the 60 chromosomes analyzed, the following frequencies of DRD4 gene polymorphisms were observed: six chromosomes (c) with two repeat alleles (r) (10%); 1c with 3r (1.5%); 36c with 4r (60%); 1c with 5r (1.5%); and 16c with 7r (27%). The genotypic distribution of the 30 parents was two parents (p) with 2r/2r (6.67%); 1p with 2r/4r (3.33%); 1p with 2r/5r (3.33%); 1p with 3r/4r (3.33%); 15p with 4r/4r (50%); 4p with 4r/7r (13.33%); and 6p with 7r/7r (20%). A Hardy–Weinberg disequilibrium (χ2=13.03, P<0.01) was found due to an over-representation of the 7r/7r genotype. These results suggest that the 7r polymorphism of the DRD4 gene is associated with the ADHD condition in a Mexican population. PMID:26082657
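The Hardy-Weinberg check behind a χ² result like the one reported above can be illustrated with a short computation. The sketch below tests a hypothetical biallelic locus (the DRD4 VNTR above involves five alleles, so the study's own test is larger); the counts and names are illustrative, not the paper's data.

```python
# Hedged sketch: chi-square goodness-of-fit test for Hardy-Weinberg
# equilibrium at a biallelic locus. Genotype counts here are hypothetical
# and are NOT the multi-allelic DRD4 data from the study above.

def hwe_chi_square(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Chi-square statistic (1 df) comparing observed genotype counts
    with the Hardy-Weinberg expectations p^2, 2pq, q^2."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)      # frequency of allele A
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(hwe_chi_square(50, 30, 20), 2))  # 11.6: clear departure at 1 df
```

With 1 degree of freedom a statistic this large corresponds to p < 0.01, the same qualitative conclusion (disequilibrium) the authors draw for their multi-allelic data.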
Directory of Open Access Journals (Sweden)
Trejo S
2015-06-01
Salvador Trejo, José J Toscano-Flores, Esmeralda Matute, María de Lourdes Ramírez-Dueñas. Laboratorio de Neuropsicología y Neurolingüística, Instituto de Neurociencias CUCBA, Guadalajara, Jalisco, Mexico. Abstract: The aim of this study was to obtain the genotype and gene frequency from parents of children with attention-deficit/hyperactivity disorder (ADHD) and then assess the Hardy–Weinberg equilibrium of genotype frequency of the variable number tandem repeat (VNTR) III exon of the dopamine receptor D4 (DRD4) gene. The genotypes of the III exon of 48 bp VNTR repeats of the DRD4 gene were determined by polymerase chain reaction in a sample of 30 parents of ADHD cases. In the 60 chromosomes analyzed, the following frequencies of DRD4 gene polymorphisms were observed: six chromosomes (c) with two repeat alleles (r) (10%); 1c with 3r (1.5%); 36c with 4r (60%); 1c with 5r (1.5%); and 16c with 7r (27%). The genotypic distribution of the 30 parents was two parents (p) with 2r/2r (6.67%); 1p with 2r/4r (3.33%); 1p with 2r/5r (3.33%); 1p with 3r/4r (3.33%); 15p with 4r/4r (50%); 4p with 4r/7r (13.33%); and 6p with 7r/7r (20%). A Hardy–Weinberg disequilibrium (χ2=13.03, P<0.01) was found due to an over-representation of the 7r/7r genotype. These results suggest that the 7r polymorphism of the DRD4 gene is associated with the ADHD condition in a Mexican population. Keywords: ADHD, parents, DRD4, HWE
Energy Technology Data Exchange (ETDEWEB)
Delgado Acosta, E.G.; Banda Guzman, V.M.; Kirchbach, M. [UASLP, Instituto de Fisica, San Luis Potosi (Mexico)
2015-03-01
We propose a general method for the description of arbitrary single spin-j states transforming according to (j, 0) ⊕ (0, j) carrier spaces of the Lorentz algebra in terms of Lorentz tensors for bosons, and tensor-spinors for fermions, by means of second-order Lagrangians. The method allows one to avoid the cumbersome matrix calculus and the higher-order (∂^{2j}) wave equations inherent to the Weinberg-Joos approach. We start with reducible Lorentz tensor (tensor-spinor) representation spaces hosting one sole (j, 0) ⊕ (0, j) irreducible sector and design there a representation reduction algorithm based on one of the Casimir invariants of the Lorentz algebra. This algorithm allows us to separate neatly the pure spin-j sector of interest from the rest, while preserving the separate Lorentz and Dirac indices. However, the Lorentz invariants are momentum independent and do not provide wave equations. Genuine wave equations are obtained by conditioning the Lorentz tensors under consideration to satisfy the Klein-Gordon equation. In so doing, one always ends up with wave equations and associated Lagrangians that are of second order in the momenta. Specifically, a spin-3/2 particle transforming as (3/2, 0) ⊕ (0, 3/2) is comfortably described by a second-order Lagrangian in the basis of the totally antisymmetric Lorentz tensor-spinor of second rank, Ψ_{[μν]}. Moreover, the particle is shown to propagate causally within an electromagnetic background. In our study of (3/2, 0) ⊕ (0, 3/2) as part of Ψ_{[μν]} we reproduce the electromagnetic multipole moments known from the Weinberg-Joos theory. We also find a Compton differential cross-section that satisfies unitarity in the forward direction. The suggested tensor calculus lends itself readily to use with the symbolic software FeynCalc.
Delgado Acosta, E. G.; Banda Guzmán, V. M.; Kirchbach, M.
2015-03-01
We propose a general method for the description of arbitrary single spin-j states transforming according to (j, 0) ⊕ (0, j) carrier spaces of the Lorentz algebra in terms of Lorentz tensors for bosons, and tensor-spinors for fermions, by means of second-order Lagrangians. The method allows one to avoid the cumbersome matrix calculus and the higher-order (∂^{2j}) wave equations inherent to the Weinberg-Joos approach. We start with reducible Lorentz tensor (tensor-spinor) representation spaces hosting one sole (j, 0) ⊕ (0, j) irreducible sector and design there a representation reduction algorithm based on one of the Casimir invariants of the Lorentz algebra. This algorithm allows us to separate neatly the pure spin-j sector of interest from the rest, while preserving the separate Lorentz and Dirac indices. However, the Lorentz invariants are momentum independent and do not provide wave equations. Genuine wave equations are obtained by conditioning the Lorentz tensors under consideration to satisfy the Klein-Gordon equation. In so doing, one always ends up with wave equations and associated Lagrangians that are of second order in the momenta. Specifically, a spin-3/2 particle transforming as (3/2, 0) ⊕ (0, 3/2) is comfortably described by a second-order Lagrangian in the basis of the totally antisymmetric Lorentz tensor-spinor of second rank, Ψ_{[μν]}. Moreover, the particle is shown to propagate causally within an electromagnetic background. In our study of (3/2, 0) ⊕ (0, 3/2) as part of Ψ_{[μν]} we reproduce the electromagnetic multipole moments known from the Weinberg-Joos theory. We also find a Compton differential cross-section that satisfies unitarity in the forward direction. The suggested tensor calculus lends itself readily to use with the symbolic software FeynCalc.
Reina-Campos, M.; Antoja, T.; Romero-Gómez, M.; Figueras, F.; Roca-Fàbrega, S.
2017-03-01
The pattern speed of the non-axisymmetric structures in the Galactic disc is a key parameter for understanding the dynamics of the Milky Way. For neither the Galactic bar nor the spiral arms is it well determined, as the current values carry large uncertainties. We evaluate whether the Tremaine-Weinberg method, as derived by Debattista et al. (2002), can be used to determine the pattern speed of the Galactic bar and the spiral arms in the Milky Way. We consider different situations, from simplistic test particle simulations with one structure to N-body simulations with both structures produced self-consistently. We also investigate Gaia mock catalogues with F0 and Red Clump stars as tracers. We conclude that this method can determine the pattern speed of the Galactic bar when going up to 6 kpc in the direction of the Galactic Center, whereas for the spiral arms all-sky radial velocity data up to 2-3 kpc are required.
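The core of the Tremaine-Weinberg method is that, for a tracer obeying the continuity equation, Ω_p sin i equals the ratio of the luminosity-weighted mean line-of-sight velocity to the luminosity-weighted mean position along the line of nodes. A deliberately idealized sketch with synthetic particles (rigid rotation; all numbers hypothetical, not the paper's setup):

```python
import numpy as np

# Hedged sketch of the Tremaine-Weinberg estimator: for a tracer pattern
# rotating rigidly at Omega_p, luminosity-weighted integrals of
# line-of-sight velocity and of position along the line of nodes
# recover Omega_p * sin(i). Synthetic data; all numbers hypothetical.
rng = np.random.default_rng(0)
omega_p, sin_i = 40.0, 0.5           # pattern speed (km/s/kpc) and sin(inclination)

x = rng.uniform(-5.0, 5.0, 10_000)   # position along the line of nodes (kpc)
w = rng.uniform(0.5, 1.5, 10_000)    # luminosity weights
v_los = omega_p * sin_i * x          # rigid rotation: v_los = Omega_p * sin(i) * x

omega_est = np.sum(w * v_los) / (sin_i * np.sum(w * x))
print(round(omega_est, 1))  # 40.0
```

Real applications, as in the paper, must contend with finite sky coverage, extinction and tracer selection, which is what drives the 6 kpc and 2-3 kpc coverage requirements quoted above.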
Finner, Helmut; Strassburger, Klaus; Heid, Iris M; Herder, Christian; Rathmann, Wolfgang; Giani, Guido; Dickhaus, Thorsten; Lichtner, Peter; Meitinger, Thomas; Wichmann, H-Erich; Illig, Thomas; Gieger, Christian
2010-09-30
We study the link between two quality measures of SNP (single nucleotide polymorphism) data in genome-wide association (GWA) studies, namely per-SNP call rates (CR) and p-values for testing Hardy-Weinberg equilibrium (HWE). The aim is to improve these measures by applying methods based on realized randomized p-values, the false discovery rate and estimates of the proportion of false hypotheses. While exact non-randomized conditional p-values for testing HWE cannot be recommended for estimating the proportion of false hypotheses, their realized randomized counterparts should be used. P-values corresponding to the asymptotic unconditional chi-square test lead to reasonable estimates only if SNPs with low minor allele frequency are excluded. We provide an algorithm to compute the probability that a SNP violates HWE given its observed CR, which yields an improved measure of data quality. The proposed methods are applied to SNP data from the KORA (Cooperative Health Research in the Region of Augsburg, Southern Germany) 500 K project, a GWA study in a population-based sample genotyped on Affymetrix GeneChip 500 K arrays using the calling algorithm BRLMM 1.4.0. We show that all SNPs with CR = 100 per cent are in nearly perfect HWE, which argues that the population meets the conditions required for HWE, at least for these SNPs. Moreover, we show that the proportion of SNPs not in HWE increases with decreasing CR. We conclude that using a single threshold for judging HWE p-values without taking the CR into account is problematic. Instead we recommend a stratified analysis with respect to CR.
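The false-discovery-rate machinery invoked above can be illustrated with the classical Benjamini-Hochberg step-up procedure; the paper's realized randomized p-values are a refinement of this idea. A minimal sketch with hypothetical p-values:

```python
# Hedged sketch: the Benjamini-Hochberg step-up procedure, the standard
# FDR-controlling method alluded to in the abstract above. The p-values
# used below are hypothetical.

def benjamini_hochberg(pvals, alpha=0.05):
    """Return sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its BH threshold
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])  # reject the k smallest p-values

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])
print(rejected)  # [0, 1]
```

Applied to HWE p-values, such a procedure adapts the rejection threshold to the number of SNPs tested, instead of judging every SNP against one fixed cut-off.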
Institute of Scientific and Technical Information of China (English)
薛晶晶
2013-01-01
To obtain a Payne-Pólya-Weinberger type inequality on the Engel group, the Rayleigh-Ritz principle is applied to the sub-Laplace operator on the Engel group, $\Delta_E = X_1^2 + X_2^2 = \sum_{i=1}^{2} X_i^2$, where $X_1, X_2$ are the left-invariant vector fields on the Engel group. This establishes the Payne-Pólya-Weinberger type inequality for adjacent eigenvalues of $\Delta_E$: $\lambda_{m+1} - \lambda_m \le \frac{2}{m}\sum_{i=1}^{m}\lambda_i$.
Classically conformal radiative neutrino model with gauged B - L symmetry
Okada, Hiroshi; Orikasa, Yuta
2016-09-01
We propose a classically conformal model with a minimal radiative seesaw, in which we employ a gauged B - L symmetry in the standard model that is essential for the Coleman-Weinberg mechanism, which induces the B - L symmetry breaking, to work. As a result, a nonzero Majorana mass term and electroweak symmetry breaking occur simultaneously. In this framework, we show a benchmark point that satisfies several theoretical and experimental constraints. Here the theoretical constraints comprise the inert conditions and the Coleman-Weinberg condition. The experimental bounds come from lepton flavor violation (especially μ → eγ), the current bound on the Z′ mass at the CERN Large Hadron Collider, and neutrino oscillations.
Lefever, Ernest W., Ed.
Two cabinet secretaries address the problems of when and how the United States should use military power. Secretary of Defense Caspar W. Weinberger emphasizes the importance of prudence and restraint in the use of military force in chapter 1: "The Uses of Military Power." Secretary of State George P. Shultz stresses the vital importance…
Radiative breaking of conformal symmetry in the Standard Model
Arbuzov, A. B.; Nazmitdinov, R. G.; Pavlov, A. E.; Pervushin, V. N.; Zakharov, A. F.
2016-02-01
A radiative mechanism of conformal symmetry breaking in a conformal-invariant version of the Standard Model is considered. The Coleman-Weinberg mechanism of dimensional transmutation in this system gives rise to finite vacuum expectation values and, consequently, to masses of the scalar and spinor fields. A natural bootstrap between the energy scales of the top quark and the Higgs boson is suggested.
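For orientation, the prototype of this radiative mechanism is Coleman and Weinberg's one-loop potential for massless scalar electrodynamics; in its textbook form (conventions vary), renormalized at a scale $M$,

```latex
V(\phi) \;=\; \frac{\lambda}{4!}\,\phi^{4}
  \;+\; \frac{3e^{4}}{64\pi^{2}}\,\phi^{4}
  \left( \ln\frac{\phi^{2}}{M^{2}} - \frac{25}{6} \right),
```

and minimizing it trades the dimensionless coupling $\lambda$ for the scale $\langle\phi\rangle$ (dimensional transmutation), which is the mechanism generating the vacuum expectation values referred to above.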
Directory of Open Access Journals (Sweden)
Francisco A Bosco
2012-12-01
Since the foundations of population genetics, the notion of genetic equilibrium (in close analogy to classical mechanics) has been associated with the Hardy-Weinberg (HW) principle, and equilibrium is currently identified by stating that the HW axioms are valid if appropriate values of chi-square (p < 0.05) are observed in experiments. Here we show by numerical experiments with the genetic system of one locus/two alleles that, considering large ensembles of populations, the chi-square test is not decisive and may lead to false negatives in random-mating populations and false positives in non-random-mating populations. This result confirms the logical statement that statistical tests cannot be used to deduce whether a genetic population is under the HW conditions. Furthermore, we show that under the HW conditions populations of any size evolve in time according to what can be identified as neutral dynamics, for which the very notion of equilibrium is unattainable for any practical purpose. Therefore, under the HW conditions the identification of equilibrium properties needs a different approach and the use of more appropriate concepts. We also show that by relaxing the condition of random mating the dynamics acquires all the characteristics of asymptotically stable equilibrium. As a consequence, our results show that the question of equilibrium in genetic systems should be approached in close analogy to non-equilibrium statistical physics, and its observability should be focused on dynamical quantities such as the typical decay properties of the allelic autocorrelation function in time. In this perspective one should abandon the classical notion of genetic equilibrium and its relation to the HW proportions, and open investigations in the direction of searching for unifying general principles of population genetic transformations capable of taking these systems into consideration in their full complexity.
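A minimal version of the kind of numerical experiment described above, in which random mating reproduces the HW proportions in expectation each generation while the allele frequency itself wanders neutrally, can be sketched as follows (population size, seed and number of generations are illustrative choices, not the paper's):

```python
import random

# Hedged sketch: one-locus/two-allele Wright-Fisher random mating.
# Genotype proportions match Hardy-Weinberg expectations (p^2, 2pq, q^2)
# in expectation each generation, yet the allele frequency itself drifts;
# this is the "neutral dynamics" under which a static notion of
# equilibrium is elusive.

def wright_fisher(n_individuals, p0, generations, rng):
    """Return the allele-frequency trajectory under random mating."""
    p, trajectory = p0, [p0]
    for _ in range(generations):
        # Each of the 2N gametes independently carries allele A with prob. p.
        count = sum(rng.random() < p for _ in range(2 * n_individuals))
        p = count / (2 * n_individuals)
        trajectory.append(p)
    return trajectory

rng = random.Random(42)
traj = wright_fisher(n_individuals=100, p0=0.5, generations=50, rng=rng)
assert all(0.0 <= p <= 1.0 for p in traj)  # frequencies stay in [0, 1]
```

Averaged over a large ensemble of such trajectories the allele frequency is conserved, yet no single population settles into a static "equilibrium", which is the paper's point about the neutrality of the HW dynamics.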
Beyond the Coleman–Weinberg Effective Potential
CERN. Geneva
2015-01-01
The Two-Particle-Irreducible (2PI) formalism as introduced by Cornwall, Jackiw and Tomboulis provides a systematic analytic approach to consistently describing non-perturbative phenomena in Quantum Field Theory. In spite of its great success, one major problem of the 2PI approach is that its loopwise expansion gives rise to residual violations of symmetries and hence to massive Goldstone bosons in the spontaneously broken phase of the theory. In my talk I will present a novel symmetry-improved 2PI formalism which consistently encodes global symmetries in a loopwise expansion. Unlike other methods, I will illustrate how the symmetry-improved 2PI effective action satisfies a number of important field-theoretic properties, such as the masslessness of the Goldstone boson and the fact that the phase transition is of second order in O(N) theories, already in the Hartree-Fock approximation. After taking the sunset diagrams into account, I show how the symmetry-improved 2PI approach properly describe...
Nucleon Electric Dipole Moments in High-Scale Supersymmetric Models
Hisano, Junji; Kuramoto, Wataru; Kuwahara, Takumi
2015-01-01
The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we estimated the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in the high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in the generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among the high-scale SUSY models.
Nucleon electric dipole moments in high-scale supersymmetric models
Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi
2015-11-01
The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in the high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in the generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among the high-scale SUSY models.
Energy Technology Data Exchange (ETDEWEB)
Garaud, J.
2010-09-15
In this dissertation, we analyze in detail the properties of new string-like solutions of the bosonic sector of the electroweak theory. The new solutions are current-carrying generalizations of embedded Abrikosov-Nielsen-Olesen vortices. We were also able to reproduce all previously known features of vortices in the electroweak theory. Generically, the vortices are current carrying: they are made of a compact conducting core of charged W bosons surrounded by a nonlinear superposition of Z and Higgs fields. Far away from the core, the solution is described by a purely electromagnetic Biot-Savart field. Solutions exist for generic parameter values, including the experimental values of the coupling constants. We show that the current, whose typical scale is a billion amperes, can be arbitrarily large. In the second part the linear stability with respect to generic perturbations is studied, and the fluctuation spectrum is qualitatively investigated. When negative modes are detected, they are explicitly constructed and their dispersion relation is determined. Most of the unstable modes can be eliminated by imposing periodic boundary conditions along the vortex. However, there remains a unique negative mode, which is homogeneous. This mode can probably be eliminated by curvature effects if a small piece of vortex is bent into a loop, stabilized against contraction by the electric current.
Standard-model coupling constants from compositeness
Besprosvany, J
2003-01-01
A coupling-constant definition is given based on the compositeness property of some particle states with respect to the elementary states of other particles. It is applied in the context of the vector-spin-1/2-particle interaction vertices of a field theory, and the standard model. The definition reproduces Weinberg's angle in a grand-unified theory. One obtains coupling values close to the experimental ones for appropriate configurations of the standard-model vector particles, at the unification scale within grand-unified models, and at the electroweak breaking scale.
Khoze, Valentin V
2013-01-01
The Standard Model with an added Higgs portal interaction and no explicit mass terms is a classically scale-invariant theory. In this case the scale of electroweak symmetry breaking can be induced radiatively by the Coleman-Weinberg mechanism operational in a hidden sector, and then transmitted to the Standard Model through the Higgs portal. The smallness of the generated values for the Higgs vev and mass, compared to the UV cutoff of our classically scale-invariant effective theory, is naturally explained by this mechanism. We show how these classically conformal models can generate the baryon asymmetry of the Universe without the need of introducing mass scales by hand or their resonant fine-tuning. The minimal model we consider is the Standard Model coupled to the Coleman-Weinberg scalar field charged under the $U(1)_{B-L}$ gauge group. Anomaly cancellation requires automatic inclusion of three generations of right-handed neutrinos. Their GeV-scale Majorana masses are induced by the Coleman-Weinberg field ...
Classically conformal radiative neutrino model with gauged B−L symmetry
Directory of Open Access Journals (Sweden)
Hiroshi Okada
2016-09-01
We propose a classically conformal model with a minimal radiative seesaw, in which we employ a gauged B−L symmetry in the standard model that is essential for the Coleman–Weinberg mechanism, which induces the B−L symmetry breaking, to work. As a result, a nonzero Majorana mass term and electroweak symmetry breaking occur simultaneously. In this framework, we show a benchmark point that satisfies several theoretical and experimental constraints. Here the theoretical constraints comprise the inert conditions and the Coleman–Weinberg condition. The experimental bounds come from lepton flavor violation (especially μ→eγ), the current bound on the Z′ mass at the CERN Large Hadron Collider, and neutrino oscillations.
Varying Alpha and the Electroweak Model
Kimberly, D; Kimberly, Dagny; Magueijo, Joao
2003-01-01
Inspired by recent claims of a varying fine structure constant, alpha, we investigate the effect of ``promoting coupling constants to variables'' upon various parameters of the standard model. We first consider a toy model: Proca's theory of the massive photon. We then explore the electroweak theory with one and two dilaton fields. We find that a varying alpha unavoidably implies varying W and Z masses. This follows from gauge invariance, and is to be contrasted with Proca's theory. For the two-dilaton theory the Weinberg angle is also variable, but Fermi's constant and the tree-level fermion masses remain constant unless the Higgs potential becomes dynamical. We outline some cosmological implications.
Reconstruction of the standard model with classical conformal invariance in noncommutative geometry
Yang, Masaki J S
2015-01-01
In this paper, we derive the standard model with classical conformal invariance from the Yang--Mills--Higgs model in noncommutative geometry (NCG). In the ordinary context of the NCG, the {\it distance matrix} $M_{nm}$, which corresponds to the vacuum expectation value of the Higgs fields, is taken to be finite. However, since $M_{nm}$ is arbitrary in this formulation, we can take all $M_{nm}$ to be zero. In the original composite scheme, the Higgs field itself vanishes under the condition $M_{nm} = 0$. We therefore adopt the elemental scheme, in which the gauge and Higgs bosons are regarded as elemental fields. Under these assumptions, no scalar acquires a vev at tree level, and the symmetry breaking is implemented by the Coleman--Weinberg mechanism. As a result, we show a possibility of solving the hierarchy problem in the context of NCG. Unfortunately, the Coleman--Weinberg mechanism does not work in the SM Higgs sector, because the Coleman--Weinberg effective potential becomes unbounded from below for ...
Minisuperspace models as infrared contributions
Bojowald, Martin
2015-01-01
A direct correspondence between quantum mechanics as a minisuperspace model and a self-interacting scalar quantum field theory is established by computing, in several models, the infrared contributions to 1-loop effective potentials of Coleman--Weinberg type. A minisuperspace approximation rather than truncation is thereby obtained. By this approximation, the spatial averaging scale of minisuperspace models is identified with an infrared scale (but not a regulator or cut-off) delimiting the modes included in the minisuperspace model. Some versions of the models studied here have discrete space or modifications of the Hamiltonian expected from proposals of loop quantum gravity. They shed light on the question of how minisuperspace models of quantum cosmology can capture features of full quantum gravity. While it is shown that modifications of the Hamiltonian can well be described by minisuperspace truncations, some related phenomena such as signature change, confirmed and clarified here for modified scalar field th...
Neutron electric dipole moment in the two-Higgs-doublet model
Hayashi, T; Matsuda, M; Tanimoto, M; Hayashi, T; Koide, Y; Matsuda, M; Tanimoto, M
1994-01-01
The effect of the "chromo-electric" dipole moment on the electric dipole moment (EDM) of the neutron is studied in the two-Higgs-doublet model. The Weinberg operator $O_{3g} = G G \tilde{G}$ and the operator $O_{qg} = \bar{q} \sigma \tilde{G} q$ are both investigated in the cases $\tan\beta \gg 1$, $\tan\beta \ll 1$ and $\tan\beta \simeq 1$. The neutron EDM is considerably reduced due to destructive contributions from the exchange of two light Higgs scalars.
A variational void coalescence model for ductile metals
Siddiq, Amir
2011-08-17
We present a variational void coalescence model that includes all the essential ingredients of failure in ductile porous metals. The model is an extension of the variational void growth model by Weinberg et al. (Comput Mech 37:142-152, 2006). The extended model contains all the deformation phases in ductile porous materials, i.e. elastic deformation, plastic deformation including deviatoric and volumetric (void growth) plasticity followed by damage initiation and evolution due to void coalescence. Parametric studies have been performed to assess the model's dependence on the different input parameters. The model is then validated against uniaxial loading experiments for different materials. We finally show the model's ability to predict the damage mechanisms and fracture surface profile of a notched round bar under tension as observed in experiments. © Springer-Verlag 2011.
Three findings to model a quantum-gravitational theory
Alfonso-Faus, Antonio
2008-01-01
In 1967 Zeldovich expressed the cosmological constant lambda in terms of G, m and h: the gravitational constant, the mass of a fundamental particle, and Planck's constant. In 1972 Weinberg expressed m in terms of h, G, the speed of light c, and the Hubble parameter H. We prove that both expressions are identical. We also find proportionality between c and H. The critical mass balancing the outward quantum mechanical spreading of the wave function and its inward gravitational collapse has recently been estimated. We identify this mass with the Zeldovich and Weinberg mass. A semiclassical gravity model is reinforced and provides an insight for the modelling of a quantum-gravitational theory. The time evolution of the peak probability density for a free particle, a wave function initially filling the whole Universe, explains the later geometrical properties of the fundamental particles. We prove that they end up acquiring a constant size given by their Compton wavelength. The size of the fundamental particles, as ...
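Hedged for orientation (up to factors of order unity, and as commonly quoted in the literature rather than transcribed from the paper), the two relations being compared are Zeldovich's expression for the cosmological constant and Weinberg's mass formula:

```latex
\Lambda \;\sim\; \frac{G^{2} m^{6}}{\hbar^{4}},
\qquad
m \;\sim\; \left( \frac{\hbar^{2} H}{G\,c} \right)^{1/3}.
```

Eliminating m between them gives $\Lambda \sim (H/c)^{2}$, which is the sense in which the 1967 and 1972 expressions can be identified with each other.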
Quantum gravity corrections to the standard model Higgs in Einstein and $R^2$ gravity
Abe, Yugo; Inami, Takeo
2016-01-01
We evaluate quantum gravity corrections to the standard model Higgs potential $V(\phi)$ a la Coleman-Weinberg and examine the question of the stability of $V(\phi)$ at scales of the Planck mass $M_{\rm Pl}$. We compute the one-loop gravity corrections using a momentum cut-off in Einstein gravity. The gravity corrections affect the potential in a significant manner for values of $\Lambda = (1 - 3)M_{\rm Pl}$. With a view to reducing the UV cut-off dependence we also make a similar study in $R^2$ gravity.
Magnetic monopoles and vortices in the standard model of electroweak interactions
Achúcarro, A
2000-01-01
These lectures start with an elementary introduction to the subject of magnetic monopoles which should be accessible from any physics background. In the Weinberg-Salam model of electroweak interactions, magnetic monopoles appear at the ends of a type of non-topological vortex known as the electroweak string. These will also be discussed, as well as recent simulations of their formation during a phase transition which indicate that, in the (unphysical) range of parameters in which the strings are classically stable, they can form with a density comparable to topological vortices.
The Effective Kahler Potential, Metastable Vacua and R-Symmetry Breaking in O'Raifeartaigh Models
Benjamin, Shermane; Kain, Ben
2010-01-01
Much has been learned about metastable vacua and R-symmetry breaking in O'Raifeartaigh models. Such work has largely been done from the perspective of the superpotential and by including Coleman-Weinberg corrections to the scalar potential. Instead, we consider these ideas from the perspective of the one loop effective Kahler potential. We translate known ideas to this framework and construct convenient formulas for computing individual terms in the expanded effective Kahler potential. We do so for arbitrary R-charge assignments and allow for small R-symmetry violating terms so that both spontaneous and explicit R-symmetry breaking is allowed in our analysis.
The neutron electric dipole form factor in the perturbative chiral quark model
Dib, C; Gutsche, T; Kovalenko, S; Kuckei, J; Lyubovitskij, V E; Pumsa-ard, K; Dib, Claudio; Faessler, Amand; Gutsche, Thomas; Kovalenko, Sergey; Kuckei, Jan; Lyubovitskij, Valery E.; Pumsa-ard, Kem
2006-01-01
We calculate the electric dipole form factor of the neutron in a perturbative chiral quark model, parameterizing CP-violation of generic origin by means of effective electric dipole moments of the constituent quarks and their CP-violating couplings to the chiral fields. We discuss the relation of these effective parameters to more fundamental ones such as the intrinsic electric and chromoelectric dipole moments of quarks and the Weinberg parameter. From the existing experimental upper limits on the neutron EDM we derive constraints on these CP-violating parameters.
General Composite Higgs Models
Marzocca, David; Shu, Jing
2012-01-01
We construct a general class of pseudo-Goldstone composite Higgs models, within the minimal $SO(5)/SO(4)$ coset structure, that are not necessarily of moose type. We characterize the main properties these models should have in order to give rise to a Higgs mass at around 125 GeV. We assume the existence of relatively light and weakly coupled spin-1 and spin-1/2 resonances. In the absence of a symmetry principle, we introduce the Minimal Higgs Potential (MHP) hypothesis: the Higgs potential is assumed to be one-loop dominated by the SM fields and the above resonances, with a contribution that is made calculable by imposing suitable generalizations of the first and second Weinberg sum rules. We show that a 125 GeV Higgs requires light, often sub-TeV, fermion resonances. Their presence can also be important for the model to successfully pass the electroweak precision tests. Interestingly enough, the latter can be passed also by models with a heavy Higgs around 320 GeV. The composite Higgs models of the moose-type conside...
Varying alpha and the electroweak model
Energy Technology Data Exchange (ETDEWEB)
Kimberly, Dagny; Magueijo, Joao
2004-03-25
Inspired by recent claims for a varying fine structure constant, alpha, we investigate the effect of 'promoting coupling constants to variables' upon various parameters of the standard model. We first consider a toy model: Proca theory of the massive photon. We then explore the electroweak theory with one and two dilaton fields. We find that a varying alpha unavoidably implies varying W and Z masses. This follows from gauge invariance, and is to be contrasted with Proca theory. For the two dilaton theory the Weinberg angle is also variable, but Fermi's constant and the tree level fermion masses remain constant unless the Higgs potential becomes dynamical. We outline some cosmological implications.
Gauge coupling unification in a classically scale invariant model
Haba, Naoyuki; Ishida, Hiroyuki; Takahashi, Ryo; Yamaguchi, Yuya
2016-02-01
There are many works within the class of classically scale invariant models, which is motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to the classical scale invariance, and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not so far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve gauge coupling unification at the UV scale. In the same way, the model can realize vacuum stability, smallness of active neutrino masses, baryon asymmetry of the universe, and the dark matter relic abundance. The model predicts the existence of vector-like fermions charged under SU(3)_C with masses lower than 1 TeV, and an SM singlet Majorana dark matter with mass lower than 2.6 TeV.
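For context on the Coleman-Weinberg mechanism invoked in this and several of the following records: the radiatively generated scalar potential is conventionally written in the standard textbook one-loop form below (this expression is supplied for orientation and is not taken from the abstract):

```latex
% One-loop effective potential in the MS-bar scheme.  V_tree is the
% classical potential, n_i counts degrees of freedom (negative for
% fermions), M_i(phi) are field-dependent masses, mu is the RG scale.
V_{\mathrm{eff}}(\phi) \;=\; V_{\mathrm{tree}}(\phi)
  \;+\; \frac{1}{64\pi^{2}} \sum_{i} n_{i}\, M_{i}^{4}(\phi)
        \left[ \ln\frac{M_{i}^{2}(\phi)}{\mu^{2}} - c_{i} \right],
\qquad c_{i} = \tfrac{3}{2}\ (\text{scalars, fermions}),\quad
       c_{i} = \tfrac{5}{6}\ (\text{gauge bosons}).
```

When the tree-level mass term vanishes, as in the classically scale invariant setups above, the logarithm can still produce a minimum of $V_{\mathrm{eff}}$ away from the origin, generating the scalar mass radiatively.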
Gauge coupling unification in a classically scale invariant model
Haba, Naoyuki; Takahashi, Ryo; Yamaguchi, Yuya
2015-01-01
There are many works within the class of classically scale invariant models, which is motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to the classical scale invariance, and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not so far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve gauge coupling unification at the UV scale. In the same way, the model can realize vacuum stability, smallness of active neutrino masses, baryon asymmetry of the universe, and the dark matter relic abundance. The model predicts the existence of vector-like fermions charged under $SU(3)_C$ with masses lower than $1\,{\rm TeV}$, and an SM singlet Majorana dark matter with mass lower than $2.6\,{\rm TeV}$.
Tipler, Frank J.
2010-01-01
I have shown that if we assume that the Standard Model of particle physics and Feynman-Weinberg quantum gravity hold at all times, then in the very early universe the Cosmic Background Radiation (CBR) cannot couple to right-handed electrons and quarks. If this property of the CBR has persisted to the present day, Ultra High Energy Cosmic Rays (UHECR) can propagate a factor of ten further than they could if the CBR were an electromagnetic field, since most of the cross section for pion produ...
The effective Kaehler potential, metastable vacua and R-symmetry breaking in O'Raifeartaigh models
Energy Technology Data Exchange (ETDEWEB)
Benjamin, Shermane; Freund, Christopher [Department of Physics and Astronomy, Rowan University, 201 Mullica Hill Road, Glassboro, NJ 08028 (United States); Kain, Ben, E-mail: kain@rowan.ed [Department of Physics and Astronomy, Rowan University, 201 Mullica Hill Road, Glassboro, NJ 08028 (United States)
2011-01-21
Much has been learned about metastable vacua and R-symmetry breaking in O'Raifeartaigh models. Such work has largely been done from the perspective of the superpotential and by including Coleman-Weinberg corrections to the scalar potential. Instead, we consider these ideas from the perspective of the one loop effective Kaehler potential. We translate known ideas to this framework and construct convenient formulas for computing individual terms in the expanded effective Kaehler potential. We do so for arbitrary R-charge assignments and allow for small R-symmetry violating terms so that both spontaneous and explicit R-symmetry breaking is allowed in our analysis.
Minisuperspace models as infrared contributions
Bojowald, Martin; Brahma, Suddhasattwa
2016-06-01
A direct correspondence of quantum mechanics, as a minisuperspace model, to a self-interacting scalar quantum field theory is established by computing, in several models, the infrared contributions to 1-loop effective potentials of Coleman-Weinberg type. A minisuperspace approximation rather than truncation is thereby obtained. By this approximation, the spatial averaging scale of minisuperspace models is identified with an infrared scale (but not a regulator or cutoff) delimiting the modes included in the minisuperspace model. Some versions of the models studied here have discrete space or modifications of the Hamiltonian expected from proposals of loop quantum gravity. They shed light on the question of how minisuperspace models of quantum cosmology can capture features of full quantum gravity. While it is shown that modifications of the Hamiltonian can be well described by minisuperspace truncations, some related phenomena such as signature change, confirmed and clarified here for modified scalar field theories, require at least a perturbative treatment of inhomogeneity beyond a strict minisuperspace model. The new methods suggest a systematic extension of minisuperspace models by a canonical effective formulation of perturbative inhomogeneity.
Magnetic field screening effect in electroweak model
Bakry, A; Zhang, P M; Zou, L P
2014-01-01
It is shown that in the Weinberg-Salam model a magnetic field screening effect for static magnetic solutions takes place. The origin of this phenomenon lies in features of the electroweak interaction: namely, there is mutual cancellation of the Abelian magnetic fields created by the SU(2) gauge fields and the Higgs boson. The effect implies monopole charge screening in a finite-energy system of monopoles and antimonopoles. We consider another manifestation of the screening effect, which leads to an essential decrease in the energy of magnetic solutions. Applying a variational method, we have found a magnetic field configuration with a topological azimuthal magnetic flux which minimizes the energy functional and possesses a total energy of order 1 TeV. We suppose that the corresponding magnetic bound state exists in the electroweak theory and can be detected in experiment.
Multivariate parametric random effect regression models for fecundability studies.
Ecochard, R; Clayton, D G
2000-12-01
Delay until conception is generally described by a mixture of geometric distributions. Weinberg and Gladen (1986, Biometrics 42, 547-560) proposed a regression generalization of the beta-geometric mixture model where covariate effects were expressed in terms of contrasts of marginal hazards. Scheike and Jensen (1997, Biometrics 53, 318-329) developed a frailty model for discrete event time data based on discrete-time analogues of Hougaard's results (1984, Biometrika 71, 75-83). This paper presents a generalization to a three-parameter family of distributions and an extension to the multivariate case. The model allows the introduction of explanatory variables, including time-dependent variables at the subject-specific level, together with a choice from a flexible family of random effect distributions. This makes it possible, in the context of medically assisted conception, to include data sources with multiple pregnancies (or attempts at pregnancy) per couple.
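As a concrete illustration of the beta-geometric mixture underlying this record, here is a minimal sketch of the standard marginal pmf obtained by integrating the geometric likelihood against a Beta(a, b) prior on the per-cycle conception probability. This is a textbook result for orientation, not code or a parameterization from the paper itself:

```python
import math

def beta_geometric_pmf(t, a, b):
    """P(T = t): probability that conception occurs on cycle t, when the
    per-cycle conception probability p follows a Beta(a, b) distribution
    and T | p is Geometric(p).  Integrating p out gives the closed form
    P(T = t) = B(a + 1, b + t - 1) / B(a, b), computed here with
    log-gamma functions for numerical stability."""
    log_num = math.lgamma(a + 1) + math.lgamma(b + t - 1) - math.lgamma(a + b + t)
    log_den = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp(log_num - log_den)
```

For t = 1 the formula reduces to a / (a + b), the mean per-cycle probability; regression generalizations of the kind discussed above let a and b depend on couple-level covariates.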
Bennequin, Daniel
2016-01-01
We propose a geometric explanation of the standard model of Glashow, Weinberg and Salam for the known elementary particles. Our model is a generic Quantum Field Theory in dimension four, obtained by developing along a Lorentz sub-manifold the Lagrangian of Einstein and Dirac in dimension twelve. The main mechanism which gives birth to the standard model is a certain gauge fixing of triality, which permits us to identify the multiplicity of fermions, as seen from the four dimensional world, with the eight unseen dimensions of the generating universe. In this way we get the known tables of particles, explaining the series of fermions and the gauge bosons. We suggest that the Higgs field dynamics could appear through a bosonization of the right-handed neutrino and correspond to a displacement in the unseen dimensions. We also propose hypotheses for dark matter, and perhaps dark energy. Then we suggest predictions to go beyond the standard model.
Bosonic seesaw mechanism in a classically conformal extension of the Standard Model
Haba, Naoyuki; Okada, Nobuchika; Yamaguchi, Yuya
2015-01-01
We suggest the so-called bosonic seesaw mechanism in the context of a classically conformal $U(1)_{B-L}$ extension of the Standard Model with two Higgs doublet fields. The $U(1)_{B-L}$ symmetry is radiatively broken via the Coleman-Weinberg mechanism, which also generates the mass terms for the two Higgs doublets through quartic Higgs couplings. Their masses are all positive but, nevertheless, the electroweak symmetry breaking is realized by the bosonic seesaw mechanism. Analyzing the renormalization group evolutions for all model couplings, we find that a large hierarchy among the quartic Higgs couplings, which is crucial for the bosonic seesaw mechanism to work, is dramatically reduced toward high energies. Therefore, the bosonic seesaw is naturally realized with only a mild hierarchy, if some fundamental theory, which provides the origin of the classically conformal invariance, completes our model at some high energy, for example, the Planck scale. We identify the regions of model parameters which satisfy ...
Preon Trinity a new model of leptons and quarks
Dugne, J J; Hansson, J; Predazzi, Enrico; Dugne, Jean-Jacques; Fredriksson, Sverker; Hansson, Johan; Predazzi, Enrico
1999-01-01
A new model for the substructure of quarks, leptons and weak gauge bosons is discussed. It is based on three fundamental and absolutely stable spin-1/2 preons. Its preon flavour SU(3) symmetry leads to a prediction of nine quarks, nine leptons and nine heavy vector bosons. One of the quarks has charge $-4e/3$, and is speculated to be the top quark (whose charge has not been measured). The flavour symmetry leads to three conserved lepton numbers in all known weak processes, except for some neutrinos, which might either oscillate or decay. There is also a (Cabibbo) mixing of the $d$ and $s$ quarks due to an internal preon-antipreon annihilation channel. An identical channel exists inside the composite $Z^0$, leading to a relation between the Cabibbo and Weinberg mixing angles.
Simple brane-world inflationary models — An update
Okada, Nobuchika; Okada, Satomi
2016-05-01
In the light of the Planck 2015 results, we update simple inflationary models based on the quadratic, quartic, Higgs and Coleman-Weinberg potentials in the context of the Randall-Sundrum brane-world cosmology. The brane-world cosmological effect alters the inflationary predictions of the spectral index (ns) and the tensor-to-scalar ratio (r) from those obtained in the standard cosmology. In particular, the tensor-to-scalar ratio is enhanced in the presence of the 5th dimension. In order to maintain consistency with the Planck 2015 results for the inflationary predictions in the standard cosmology, we find a lower bound on the five-dimensional Planck mass (M5). On the other hand, inflationary predictions lying outside of the Planck allowed region can be pushed into the allowed region by the brane-world cosmological effect with a suitable choice of M5.
Simple brane-world inflationary models: an update
Okada, Nobuchika
2015-01-01
In the light of the Planck 2015 results, we update simple inflationary models based on the quadratic, quartic, Higgs and Coleman-Weinberg potentials in the context of the Randall-Sundrum brane-world cosmology. The brane-world cosmological effect alters the inflationary predictions of the spectral index ($n_s$) and the tensor-to-scalar ratio ($r$) from those obtained in the standard cosmology. In particular, the tensor-to-scalar ratio is enhanced in the presence of the 5th dimension. In order to maintain consistency with the Planck 2015 results for the inflationary predictions in the standard cosmology, we find a lower bound on the five-dimensional Planck mass. On the other hand, inflationary predictions lying outside of the Planck allowed region can be pushed into the allowed region by the brane-world cosmological effect.
Standard model with Higgs as gauge field on fourth homotopy group
Guo, H; Wu, K; Guo, Hanying; Li, Jianming; Wu, Ke
1994-01-01
Based upon a first principle, the generalized gauge principle, we construct a general model with G_L\times G'_R \times Z_2 gauge symmetry, where Z_2=\pi_4(G_L) is the fourth homotopy group of the gauge group G_L, by means of non-commutative differential geometry, and reformulate the Weinberg-Salam model and the standard model with the Higgs field being a gauge field on the fourth homotopy group of their gauge groups. We show that in this approach the Higgs field is not only automatically introduced on an equal footing with the ordinary Yang-Mills gauge potentials, with no extra constraints among the parameters at the tree level, but also, most importantly, it is stable against quantum corrections.
Geometrical origin of tricritical points of various U(1) lattice models
Janke, W; Janke, W; Kleinert, H
1995-01-01
We review the dual relationship between various compact U(1) lattice models and Abelian Higgs models, the latter being the disorder field theories of line-like topological excitations in the systems. We point out that the predicted first-order transitions in the Abelian Higgs models (Coleman-Weinberg mechanism) are, in three dimensions, in contradiction with direct numerical investigations in the compact U(1) formulation since these yield continuous transitions in the major part of the phase diagram. In four dimensions, there are indications from Monte Carlo data for a similar situation. Concentrating on the strong-coupling expansion in terms of geometrical objects, surfaces or lines, with certain statistical weights, we present semi-quantitative arguments explaining the observed cross-over from first-order to continuous transitions by the balance between the lowest two weights ("2:1 ratio") of these geometrical objects.
Haba, Naoyuki; Okada, Nobuchika; Yamaguchi, Yuya
2015-01-01
We suggest the so-called bosonic seesaw mechanism in the context of a classically conformal $U(1)_{B-L}$ extension of the Standard Model with two Higgs doublet fields. The $U(1)_{B-L}$ symmetry is radiatively broken via the Coleman-Weinberg mechanism, which also generates the mass terms for the two Higgs doublets through quartic Higgs couplings. Their masses are all positive but, nevertheless, the electroweak symmetry breaking is realized by the bosonic seesaw mechanism. We analyze the renormalization group evolutions for all model couplings, and find that a large hierarchy among the quartic Higgs couplings, which is crucial for the bosonic seesaw mechanism to work, is dramatically reduced toward high energies. Therefore, the bosonic seesaw is naturally realized with only a mild hierarchy, if some fundamental theory, which provides the origin of the classically conformal invariance, completes our model at some high energy, for example, the Planck scale. The requirements for the perturbativity of the running c...
Standard model from a gauge theory in ten dimensions via CSDR
Energy Technology Data Exchange (ETDEWEB)
Farakos, K.; Kapetanakis, D.; Koutsoumbas, G.; Zoupanos, G.
1988-09-01
We present a gauge theory in ten dimensions based on the gauge group E_8 which is dimensionally reduced, according to the coset space dimensional reduction (CSDR) scheme, to the standard model SU(3)_c x SU(2)_L x U(1), which breaks further to SU(3)_c x U(1)_em. We use the coset space Sp(4)/(SU(2) x U(1)) x Z_2. The model gives similar predictions for sin^2(theta_W) and proton decay as the minimal SU(5) GUT. Natural choices of parameters suggest that the Higgs masses are as predicted by the Coleman-Weinberg radiative mechanism.
DEFF Research Database (Denmark)
Juel-Christiansen, Carsten
2005-01-01
The article highlights the visual rotation (images, drawings, models, works) as the privileged medium in the communication of ideas among creative architects.
Spädtke, P
2013-01-01
Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary, as well as magnetic fields caused by coils or permanent magnets, have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources are shown together with a suitable model describing the physics of each. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$ sources) together with some remarks on beam transport.
Low Energy Pion-Pion Elastic Scattering in Sakai-Sugimoto Model
Parthasarathy, R
2008-01-01
We have considered the holographic large $N_c$ QCD model proposed by Sakai and Sugimoto and evaluated the non-Abelian DBI action on the D8-brane up to $(\alpha')^4$ terms. Restricting to the pion sector, these corrections give rise to four-derivative contact terms for the pion field. We derive Weinberg's phenomenological Lagrangian. The coefficients of the four-derivative terms are determined in terms of $g_{YM}^2$. The low energy pion-pion scattering amplitudes are evaluated. Numerical results are presented with the choice of $M_{KK}=0.94$ GeV and $N_c=11$. The results are compared with the amplitudes calculated using the experimental phase shifts. The agreement with the experimental data is found to be satisfactory.
Gravity loop corrections to the standard model Higgs in Einstein gravity
Abe, Yugo; Inami, Takeo
2016-01-01
We study quantum gravity corrections to the standard model Higgs potential $V_{\rm eff}(\phi)$ à la Coleman-Weinberg and examine the stability question of $V_{\rm eff}(\phi)$ around the Planck mass scale, $\mu\simeq M_{\rm Pl}$ ($M_{\rm Pl}=1.22\times10^{19}{\rm GeV}$). We calculate the gravity one-loop corrections by using the momentum cut-off $\Lambda$ in Einstein gravity. We show a significant difference between the effective potential $V_{\rm eff}(\phi)$ with and without gravity loop corrections in the energy region of $M_{\rm Pl}$ for $\Lambda= (1\sim3)M_{\rm Pl}$. We find that $V_{\rm eff}(\phi)$ possesses a minimum somewhere at $\mu\simeq M_{\rm Pl}$; this implies that the stability condition for $V_{\rm eff}(\phi)$ holds after gravity corrections are included.
Energy of the Universe in Bianchi-type I Models in Moller's Tetrad Theory of Gravity
Aydogdu, O; Aydogdu, Oktay; Salti, Mustafa
2005-01-01
In this paper, using the energy definition in Møller's tetrad theory of gravity, we calculate the total energy of the universe in Bianchi-type I cosmological models, which includes both the matter and gravitational fields. The total energy is found to be zero, and this result agrees with previous work of Banerjee and Sen, who investigated this problem using the general relativity version of the Einstein energy-momentum complex, and of Xulu, who investigated the same problem using the general relativity versions of the Landau-Lifshitz, Papapetrou and Weinberg energy-momentum complexes. The result that the total energy of the universe in Bianchi-type I universes is zero supports the viewpoint of Tryon.
Two-nucleon scattering in a modified Weinberg approach with a symmetry-preserving regularization
Behrendt, J; Gegelia, J; Meißner, Ulf-G; Nogga, A
2016-01-01
We consider the nucleon-nucleon scattering problem by applying time-ordered perturbation theory to the Lorentz invariant formulation of baryon chiral perturbation theory. Using a symmetry preserving higher derivative form of the effective Lagrangian, we exploit the freedom of the choice of the renormalization condition and obtain an integral equation for the scattering amplitude with an improved ultraviolet behavior. The resulting formulation is used to quantify finite regulator artifacts in two-nucleon phase shifts as well as in the chiral extrapolations of the S-wave scattering lengths and the deuteron binding energy. This approach can be straightforwardly extended to analyze few-nucleon systems and processes involving external electroweak sources.
$^1S_0$ nucleon-nucleon scattering in the modified Weinberg approach
Epelbaum, E; Gegelia, J; Krebs, H
2015-01-01
Nucleon-nucleon scattering in the $^1S_0$ partial wave is considered in chiral effective field theory within the renormalizable formulation of Ref. [1] beyond the leading-order approximation. By applying subtractive renormalization, the subleading contact interaction in this channel is taken into account non-perturbatively. For a proper choice of renormalization conditions, the predicted energy dependence of the phase shift and the coefficients in the effective range expansion are found to be in a good agreement with the results of the Nijmegen partial wave analysis.
Two-nucleon scattering in a modified Weinberg approach with a symmetry-preserving regularization
Energy Technology Data Exchange (ETDEWEB)
Behrendt, J.; Epelbaum, E. [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Bochum (Germany); Gegelia, J. [Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik and Juelich Center for Hadron Physics, Juelich (Germany); Tbilisi State University, Tbilisi (Georgia); Meissner, Ulf G. [Universitaet Bonn, Helmholtz Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany); Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik and Juelich Center for Hadron Physics, Juelich (Germany); Forschungszentrum Juelich, JARA - Forces and Matter Experiments, Juelich (Germany); Forschungszentrum Juelich, JARA - High Performance Computing, Juelich (Germany); Nogga, A. [Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik and Juelich Center for Hadron Physics, Juelich (Germany); Forschungszentrum Juelich, JARA - High Performance Computing, Juelich (Germany)
2016-09-15
We consider the nucleon-nucleon scattering problem by applying time-ordered perturbation theory to the Lorentz-invariant formulation of baryon chiral perturbation theory. We employ a higher-derivative symmetry-preserving regularization to obtain an integral equation for the scattering amplitude, which permits a non-perturbative treatment of subleading contributions to the nucleon-nucleon potential. The resulting formulation is used to quantify finite regulator artefacts in two-nucleon phase shifts as well as in the chiral extrapolations of the S-wave scattering lengths and the deuteron binding energy. Our approach can be straightforwardly extended to analyse few-nucleon systems and processes involving external electroweak sources. (orig.)
The Recourse to War: An Appraisal of the ’Weinberger Doctrine’
1989-06-01
threatened either directly or indirectly by the Soviet Union: these are grounds substantial and obvious enough to ensure the support of the electorate ...desirability) of a recourse to war in certain circumstances, rejecting thereby both the unqualified abstentionism of the pacifist and the unbridled
{sup 1}S{sub 0} nucleon-nucleon scattering in the modified Weinberg approach
Energy Technology Data Exchange (ETDEWEB)
Epelbaum, E.; Krebs, H. [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Fakultaet fuer Physik und Astronomie, Bochum (Germany); Gasparyan, A.M. [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Fakultaet fuer Physik und Astronomie, Bochum (Germany); SSC RF ITEP, Moscow (Russian Federation); Gegelia, J. [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Fakultaet fuer Physik und Astronomie, Bochum (Germany); Tbilisi State University, Tbilisi (Georgia)
2015-06-15
Nucleon-nucleon scattering in the {sup 1}S{sub 0} partial wave is considered in chiral effective field theory within the renormalizable formulation of a previous work (Phys. Lett. B 716, 338 (2012)) beyond the leading-order approximation. By applying subtractive renormalization, the subleading contact interaction in this channel is taken into account non-perturbatively. For a proper choice of renormalization conditions, the predicted energy dependence of the phase shift and the coefficients in the effective range expansion are found to be in a good agreement with the results of the Nijmegen partial wave analysis. (orig.)
Nucleon-nucleon scattering in the 1S0 partial wave in the modified Weinberg approach
Directory of Open Access Journals (Sweden)
Gasparyan A. M.
2016-01-01
Nucleon-nucleon scattering in the 1S0 partial wave is considered in chiral effective field theory within the recently suggested renormalizable formulation based on the Kadyshevsky equation. Contact interactions are taken into account beyond the leading-order approximation. The subleading contact terms are included non-perturbatively by means of subtractive renormalization. The dependence of the phase shifts on the choice of the renormalization condition is discussed. Perturbative inclusion of the subleading contact interaction is found to be justified only very close to threshold. The low-energy theorems are reproduced significantly better compared with the leading order results.
Modifying the Weinberger-Powell Doctrine for the Modern Geo-Strategic Environment
2017-03-31
“nasty, brutish” world in which exists a war of “every man against every man.” Max Weber advanced social science's understanding of the state by...USAF A paper submitted to the Faculty of the Joint Advanced Warfighting School in partial satisfaction of the requirements of a Master of Science ...concessions, Operation DESERT STORM was widely regarded as a resounding victory. The war was popular with the American people and troops were treated
Higgs Triplet Model with Classically Conformal Invariance
Okada, Hiroshi; Yagyu, Kei
2015-01-01
We discuss an extension of the minimal Higgs triplet model with a classically conformal invariance and with a gauged $U(1)_{B-L}$ symmetry. In our scenario, tiny masses of neutrinos are generated by a hybrid contribution from the type-I and type-II seesaw mechanisms. The shape of the Higgs potential at low energies is determined by solving one-loop renormalization group equations for all the scalar quartic couplings with a set of initial values of parameters at the Planck scale. We find a successful set of the parameters in which the $U(1)_{B-L}$ symmetry is radiatively broken via the Coleman-Weinberg mechanism at the ${\\cal O}$(10) TeV scale, and the electroweak symmetry breaking is also triggered by the $U(1)_{B-L}$ breaking. Under this configuration, we can predict various low energy observables such as the mass spectrum of extra Higgs bosons, and the mixing angles. Furthermore, using these predicted mass parameters, we obtain upper limits on Yukawa couplings among an isospin triplet Higgs field and lepton...
Towards Modelling slow Earthquakes with Geodynamics
Regenauer-Lieb, K.; Yuen, D. A.
2006-12-01
We explore a new, properly scaled, thermal-mechanical geodynamic model^1 that can generate timescales now very close to those of earthquakes and of the same order as slow earthquakes. In our simulations we encounter two basically different bifurcation phenomena: one in which the shear zone nucleates in the ductile field, and a second which is fully associated with elasto-plastic (brittle, pressure-dependent) displacements. A quartz/feldspar composite slab has both modes operating simultaneously at three different depth levels. The bottom of the crust is predominantly controlled by the elasto-visco-plastic mode while the top is controlled by the elasto-plastic mode. The exchange of the two modes appears to communicate on a sub-horizontal layer in a flip-flop fashion, which may yield a fractal-like signature in time and collapses into a critical temperature which for crustal rocks is around 500-580 K, in the middle of the brittle-ductile transition zone. Near the critical temperature, stresses close to the ideal strength can be reached at the local, meter scale. Investigations of the thermal-mechanical properties under such extreme conditions are pivotal for understanding the physics of earthquakes. 1. Regenauer-Lieb, K., Weinberg, R. & Rosenbaum, G. The effect of energy feedbacks on continental strength. Nature 442, 67-70 (2006).
Directory of Open Access Journals (Sweden)
Dermeval SAVIANI
2011-07-01
On the occasion of the commemoration of the 200 years of Independence of Latin American countries, this paper analyses the models of development and educational styles in the process of the emancipation of Ibero-America, focusing specifically on the Brazilian case. In order to do this, we use two key texts as a reference: Gregorio Weinberg's Modelos educativos en el desarrollo histórico de América Latina (Models of Education in the Historical Development of Latin America) and Germán Rama's Estilos educacionales (Educational Styles). Both texts elaborate the educational models or styles that took part in the historical development of Latin American societies. Bearing in mind the polarization between tradition and modernization displayed in the educational models and styles proposed by Weinberg and Rama, this work shows how the process of conservative modernization, which characterized, with different nuances, the general emancipation movement in Ibero-American countries, took place in Brazilian society.
Strongly Coupled Models with a Higgs-like Boson*
Directory of Open Access Journals (Sweden)
Pich Antonio
2013-11-01
Considering the one-loop calculation of the oblique S and T parameters, we have presented a study of the viability of strongly-coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation has been done using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we have demonstrated that strongly-coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule.
Strongly Coupled Models with a Higgs-like Boson
Pich, Antonio; Rosell, Ignasi; José Sanz-Cillero, Juan
2013-11-01
Considering the one-loop calculation of the oblique S and T parameters, we have presented a study of the viability of strongly-coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation has been done using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we have demonstrated that strongly-coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule. We wish to thank the organizers of LHCP 2013 for the pleasant conference. This work has been supported in part by the Spanish Government and the European Commission [FPA2010-17747, FPA2011-23778, AIC-D-2011-0818, SEV-2012-0249 (Severo Ochoa Program), CSD2007-00042 (Consolider Project CPAN)], the Generalitat Valenciana [PrometeoII/2013/007] and the Comunidad de Madrid [HEPHACOS S2009/ESP-1473].
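For orientation, the two Weinberg sum rules invoked in this record can be written, in one common normalization of the vector and axial-vector spectral functions (a standard textbook form, not quoted from the paper itself), as:

```latex
% First and second Weinberg sum rules for the V-A spectral functions
\int_0^\infty \mathrm{d}s\,\bigl[\rho_V(s)-\rho_A(s)\bigr] = f_\pi^2 ,
\qquad
\int_0^\infty \mathrm{d}s\, s\,\bigl[\rho_V(s)-\rho_A(s)\bigr] = 0 .
```

In resonance-saturated approximations these become relations among couplings and masses, e.g. $F_V^2 - F_A^2 = f_\pi^2$ and $F_V^2 M_V^2 = F_A^2 M_A^2$; the second sum rule is the one whose inclusion the authors state is not critical.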
Non-String Pursuit towards Unified Model on the Lattice
Kawamoto, N
1999-01-01
A non-standard overview of possible formulations towards a unified model on the lattice is presented. It is based on the generalized gauge theory, which is formulated with differential forms and is thus expected to fit on a simplicial manifold. We first review suggestive known results in this direction. As a small step towards a concrete realization of the program, we propose a lattice Chern-Simons gravity theory which leads to Chern-Simons gravity in the continuum limit via the Ponzano-Regge model. We then summarize the quantization procedure of the generalized gauge theory and apply the formulation to the generalized topological Yang-Mills action with instanton gauge fixing. We find N=2 super Yang-Mills theory with Dirac-Kähler fermions which are generated from ghosts via the twisting mechanism. The Weinberg-Salam model is formulated by the generalized Yang-Mills action, which includes Connes's non-commutative geometry formulation as a particular case. In the end a possible scenario to realize the program is propose...
Classical scale invariance in the inert doublet model
Plascencia, Alexis D
2015-01-01
The inert doublet model (IDM) is a minimal extension of the Standard Model (SM) that can account for the dark matter in the universe. Naturalness arguments motivate us to study whether the model can be embedded into a theory with dynamically generated scales. In this work we study a classically scale invariant version of the IDM with a minimal hidden sector, which has a $U(1)_{\\text{CW}}$ gauge symmetry and a complex scalar $\\Phi$. The mass scale is generated in the hidden sector via the Coleman-Weinberg (CW) mechanism and communicated to the two Higgs doublets via portal couplings. Since the CW scalar remains light, acquires a vacuum expectation value and mixes with the SM Higgs boson, the phenomenology of this construction can be modified with respect to the traditional IDM. We analyze the impact of adding this CW scalar and the $Z'$ gauge boson on the calculation of the dark matter relic density and on the spin-independent nucleon cross section for direct detection experiments. Finally, by studying the RG ...
Radiative Type III Seesaw Model and its collider phenomenology
von der Pahlen, Federico; Restrepo, Diego; Zapata, Oscar
2016-01-01
We analyze the present bounds of a scotogenic model, the Radiative Type III Seesaw (RSIII), in which an additional scalar doublet and at least two fermion triplets of $SU(2)_L$ are added to the Standard Model (SM). In the RSIII the new physics (NP) sector is odd under an exact global $Z_2$ symmetry. This symmetry guarantees that the lightest NP neutral particle is stable, providing a natural dark matter (DM) candidate, and leads to naturally suppressed neutrino masses generated by a one-loop realization of an effective Weinberg operator. We focus on the region with the highest sensitivity in present and future LHC searches, with light scalar DM and at least one NP fermion triplet at the sub-TeV scale. This region allows for significant production cross-sections of NP fermion pairs at the LHC. We reinterpret a set of LHC searches for supersymmetric particles, using the package CheckMATE, to set limits on our model as a function of the masses of the NP particles and their Yukawa interactions. The...
K- nuclear potentials from in-medium chirally motivated models
Cieplý, A.; Friedman, E.; Gal, A.; Gazda, D.; Mareš, J.
2011-10-01
A self-consistent scheme for constructing $K^-$ nuclear optical potentials from subthreshold in-medium $\bar{K}N$ s-wave scattering amplitudes is presented and applied to the analysis of kaonic-atom data and to calculations of $K^-$ quasibound nuclear states. The amplitudes are taken from a chirally motivated meson-baryon coupled-channel model, both at the Tomozawa-Weinberg leading order and at next-to-leading order. Typical kaonic-atom potentials are characterized by a real part $-\mathrm{Re}\,V^{\rm chiral}_{K^-} = 85 \pm 5$ MeV at nuclear matter density, in contrast to half this depth obtained in some derivations based on in-medium $\bar{K}N$ threshold amplitudes. The moderate agreement with data is much improved by adding complex $\rho$- and $\rho^2$-dependent phenomenological terms, found to be dominated by the $\rho^2$ contributions that could represent $\bar{K}NN \to YN$ absorption and dispersion, outside the scope of meson-baryon chiral models. Depths of the real potentials are then near 180 MeV. The effects of p-wave interactions are studied and found secondary to those of the dominant s-wave contributions. The in-medium dynamics of the coupled-channel model is discussed and systematic studies of $K^-$ quasibound nuclear states are presented.
Probing classically conformal $B-L$ model with gravitational waves
Jinno, Ryusuke
2016-01-01
We study the cosmological history of the classically conformal $B-L$ gauge extension of the Standard Model, in which the physical scales are generated via Coleman-Weinberg-type symmetry breaking. In particular, we consider the thermal phase transition of the U$(1)_{B-L}$ symmetry in the early universe and the resulting gravitational-wave production. Due to the classical conformal invariance, the phase transition tends to be first order with ultra-supercooling, which enhances the strength of the produced gravitational waves. We show that, requiring (1) U$(1)_{B-L}$ is broken after reheating, (2) the $B-L$ gauge coupling does not blow up below the Planck scale, and (3) the thermal phase transition completes in almost all patches of the universe, the gravitational-wave spectrum can be as large as $\Omega_{\rm GW} \sim 10^{-8}$ at frequencies $f \sim 0.01$-$1$ Hz for some model parameters, and a vast parameter region can be tested by future interferometer experiments such as eLISA, LISA, BBO and DECIGO.
Majorana dark matter in a classically scale invariant model
Benic, Sanjin
2014-01-01
We analyze a classically scale invariant extension of the Standard Model with a dark gauge $U(1)_X$ broken by a doubly charged scalar $\Phi$, leaving a remnant $Z_2$ symmetry. Dark fermions are introduced as dark matter candidates, and for anomaly reasons we introduce two chiral fermions. Due to classical scale invariance, the bare mass term that would mix these two states is absent, and they end up as stable Majorana fermions $N_1$ and $N_2$. We calculate cross sections for the $N_aN_a \to \phi\phi$, $N_aN_a \to X^\mu \phi$ and $N_2N_2 \to N_1N_1$ annihilation channels. We put constraints on the model from Higgs searches at the LHC, the dark matter relic abundance, and dark matter direct detection limits from LUX. The dark gauge boson plays a crucial role in the Coleman-Weinberg mechanism and has to be heavier than 680 GeV. The viable mass region for dark matter is from 470 GeV up to a few TeV. In the case when the two Majorana fermions have different masses, two dark matter signals at direct detection experiments could provide a ...
Perturbative Unitarity Bounds in Composite 2-Higgs Doublet Models
De Curtis, Stefania; Yagyu, Kei; Yildirim, Emine
2016-01-01
We study bounds from perturbative unitarity in a Composite 2-Higgs Doublet Model (C2HDM) based on the spontaneous breakdown of a global symmetry $SO(6)\\to SO(4)\\times SO(2)$ at the compositeness scale $f$. The eight pseudo Nambu-Goldstone Bosons (pNGBs) emerging from such a dynamics are identified as two isospin doublet Higgs fields. We calculate the $S$-wave amplitude for all possible 2-to-2-body elastic (pseudo)scalar boson scatterings at energy scales $\\sqrt{s}$ reachable at the Large Hadron Collider (LHC) and beyond it, including the longitudinal components of weak gauge boson states as the corresponding pNGB states. In our calculation, the Higgs potential is assumed to have the same form as that in the Elementary 2-Higgs Doublet Model (E2HDM) with a discrete $Z_2$ symmetry, which is expected to be generated at the one-loop level via the Coleman-Weinberg (CW) mechanism. We find that the $S$-wave amplitude matrix can be block-diagonalized with maximally $2\\times 2$ submatrices in a way similar to the E2HDM...
Neutron Electric Dipole Moment in Two Higgs Doublet Model
Hayashi, T; Matsuda, M; Tanimoto, M; Hayashi, Tkemi; Koide, Yoshio; Matsuda, Masahisa; Tanimoto, Morimitsu
1994-01-01
We study the effect of the "chromo-electric" dipole moment on the electric dipole moment (EDM) of the neutron in the two-Higgs-doublet model. We systematically investigate Weinberg's operator $O_{3g}=GG\tilde{G}$ and the operator $O_{qg}=\bar{q}\sigma_{\mu\nu}\tilde{G}^{\mu\nu}q$, in the cases $\tan\beta\gg 1$, $\tan\beta\ll 1$ and $\tan\beta\simeq 1$. It is shown that $O_{sg}$ gives the main contribution to the neutron EDM compared to the other operators, and also that the contributions of $O_{ug}$ and $O_{3g}$ cancel each other out. It is pointed out that including the second-lightest neutral Higgs scalar in addition to the lightest one is of essential importance for estimating the neutron EDM. The neutron EDM is considerably reduced by these mutually destructive contributions if the mass difference of the two Higgs scalars is of order $O(50~\mathrm{GeV})$.
Discriminative phenomenological features of scale invariant models for electroweak symmetry breaking
Directory of Open Access Journals (Sweden)
Katsuya Hashino
2016-01-01
Classical scale invariance (CSI) may be one of the solutions for the hierarchy problem. Realistic models for electroweak symmetry breaking based on CSI require extended scalar sectors without mass terms, and the electroweak symmetry is broken dynamically at the quantum level by the Coleman–Weinberg mechanism. We discuss discriminative features of these models. First, using the experimental value of the mass of the discovered Higgs boson h(125), we obtain an upper bound on the mass of the lightest additional scalar boson (≃ 543 GeV), which does not depend on its isospin and hypercharge. Second, a discriminative prediction for the Higgs-photon-photon coupling is given as a function of the number of charged scalar bosons, by which we can narrow down possible models using current and future data for the di-photon decay of h(125). Finally, for the triple Higgs boson coupling a large deviation (∼ +70%) from the SM prediction is universally predicted, independent of the masses, quantum numbers and even the number of additional scalars. These models based on CSI can be well tested at LHC Run II and at future lepton colliders.
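Several of the records above rely on the Coleman–Weinberg mechanism. For reference, the generic one-loop effective potential in the $\overline{\rm MS}$ scheme (Landau gauge) has the standard form below; this is textbook material, not specific to any one of the models listed:

```latex
V_{\rm eff}(\varphi) = V_{\rm tree}(\varphi)
  + \sum_i (-1)^{2s_i}\,\frac{n_i\, m_i^4(\varphi)}{64\pi^2}
    \left[\ln\frac{m_i^2(\varphi)}{\mu^2} - c_i\right],
```

where the sum runs over the field-dependent masses $m_i(\varphi)$, $n_i$ counts degrees of freedom, $s_i$ is the spin, and $c_i = 3/2$ for scalars and fermions and $5/6$ for gauge bosons. In classically scale-invariant models $V_{\rm tree}$ contains only quartic terms, and the logarithm generates the symmetry-breaking minimum via dimensional transmutation.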
A Tree-level Unitary Noncompact Weyl-Einstein-Yang-Mills Model
Dengiz, Suat
2016-01-01
We construct and study the perturbative unitarity (i.e., ghost and tachyon analysis) of a $3+1$-dimensional noncompact Weyl-Einstein-Yang-Mills model. The model describes a local noncompact Weyl scale plus $SU(N)$ phase invariant Higgs-like field, conformally coupled to a generic Weyl-invariant dynamical background. Here, the Higgs-like sector generates the Weyl conformal invariance of the system. The action does not admit any dimensionful parameter, and the genuine presence of a de Sitter vacuum spontaneously breaks the noncompact gauge symmetry in a manner analogous to the Standard Model Higgs mechanism. In flat spacetime, the dimensionful parameter is generated via dimensional transmutation in quantum field theory, and thus the symmetry is radiatively broken through the one-loop Coleman-Weinberg effective potential. We show that the mere expectation of reducing to Einstein's gravity in the broken phases forbids anti-de Sitter space from being its stable constant-curvature vacuum. The model is unitary in de Si...
Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J
2009-12-15
We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.
A noncompact Weyl-Einstein-Yang-Mills model: A semiclassical quantum gravity
Dengiz, Suat
2017-08-01
We construct and study the perturbative unitarity (i.e., ghost and tachyon analysis) of a 3+1-dimensional noncompact Weyl-Einstein-Yang-Mills model. The model describes a local noncompact Weyl scale plus SU(N) phase invariant Higgs-like field, conformally coupled to a generic Weyl-invariant dynamical background. Here, the Higgs-like sector generates the Weyl conformal invariance of the system. The action does not admit any dimensionful parameter, and the genuine presence of a de Sitter vacuum spontaneously breaks the noncompact gauge symmetry in a manner analogous to the Standard Model Higgs mechanism. In flat spacetime, the dimensionful parameter is generated via dimensional transmutation in quantum field theory, and thus the symmetry is radiatively broken through the one-loop Coleman-Weinberg effective potential. We show that the mere expectation of reducing to Einstein's gravity in the broken phases forbids anti-de Sitter space from being its stable vacuum. The model is unitary in de Sitter and flat vacua, around which a massless graviton, $N^2-1$ massless scalar bosons, $N$ massless Dirac fermions, and $N^2-1$ Proca-type massive Abelian and non-Abelian vector bosons are generically propagated.
Bosonic seesaw mechanism in a classically conformal extension of the Standard Model
Directory of Open Access Journals (Sweden)
Naoyuki Haba
2016-03-01
We suggest the so-called bosonic seesaw mechanism in the context of a classically conformal U(1)$_{B-L}$ extension of the Standard Model with two Higgs doublet fields. The U(1)$_{B-L}$ symmetry is radiatively broken via the Coleman–Weinberg mechanism, which also generates the mass terms for the two Higgs doublets through quartic Higgs couplings. Their masses are all positive but, nevertheless, the electroweak symmetry breaking is realized by the bosonic seesaw mechanism. Analyzing the renormalization group evolution of all model couplings, we find that a large hierarchy among the quartic Higgs couplings, which is crucial for the bosonic seesaw mechanism to work, is dramatically reduced toward high energies. Therefore, the bosonic seesaw is naturally realized with only a mild hierarchy, if some fundamental theory which provides the origin of the classical conformal invariance completes our model at some high energy, for example, the Planck scale. We identify the regions of model parameters which satisfy the perturbativity of the running couplings and the electroweak vacuum stability as well as the naturalness of the electroweak scale.
D-Branes at Singularities A Bottom-Up Approach to the String Embedding of the Standard Model
Aldazabal, G; Quevedo, Fernando; Uranga, Angel M
2000-01-01
We propose a bottom-up approach to the building of particle physics models from string theory. Our building blocks are Type II D-branes which we combine appropriately to reproduce desirable features of a particle theory model: 1) Chirality; 2) Standard Model group; 3) N=1 or N=0 supersymmetry; 4) Three quark-lepton generations. We start such a program by studying configurations of D=10, Type IIB D3-branes located at singularities. We study in detail the case of Z_N, N=1,0 supersymmetric orbifold singularities leading to the SM group or some left-right symmetric extension. In general, tadpole cancellation conditions require the presence of additional branes, e.g. D7-branes. For the N=1 supersymmetric case the unique twist leading to three quark-lepton generations is Z_3, predicting $\sin^2\theta_W=3/14=0.21$. The models obtained are the simplest semirealistic string models ever built. In the non-supersymmetric case there is a three-generation model for each Z_N, N>4, but the Weinberg angle is in general too ...
Das, Arindam; Okada, Nobuchika; Takahashi, Dai-suke
2016-01-01
We consider the minimal U(1)' extension of the Standard Model (SM) with classical conformal invariance, where an anomaly-free U(1)' gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1)' Higgs field. Since the classical conformal symmetry forbids all dimensional parameters in the model, the U(1)' gauge symmetry is broken through the Coleman-Weinberg mechanism, generating the mass terms of the U(1)' gauge boson (Z' boson) and the right-handed neutrinos. Through a mixing quartic coupling between the U(1)' Higgs field and the SM Higgs doublet field, the radiative U(1)' gauge symmetry breaking also triggers the breaking of the electroweak symmetry. In this model context, we first investigate the electroweak vacuum instability problem in the SM. Employing the renormalization group equations at the two-loop level and the central values for the world-average masses of the top quark ($m_t=173.34$ GeV) and the Higgs boson ($m_h=125.09$ GeV), we perform parameter scans t...
Classically conformal U(1)$^\\prime$ extended Standard Model and Higgs vacuum stability
Oda, Satsuki; Takahashi, Dai-suke
2015-01-01
We consider the minimal U(1)$^\\prime$ extension of the Standard Model (SM) with conformal invariance at the classical level, where in addition to the SM particle contents, three generations of right-handed neutrinos and a U(1)$^\\prime$ Higgs field are introduced. In the presence of the three right-handed neutrinos, which are responsible for the seesaw mechanism, this model is free from all the gauge and gravitational anomalies. The U(1)$^\\prime$ gauge symmetry is radiatively broken via the Coleman-Weinberg mechanism, by which the U(1)$^\\prime$ gauge boson ($Z^\\prime$ boson) mass as well as the Majorana mass for the right-handed neutrinos are generated. The radiative U(1)$^\\prime$ symmetry breaking also induces a negative mass squared for the SM Higgs doublet to trigger the electroweak symmetry breaking. In this context, we investigate a possibility to solve the SM Higgs vacuum instability problem. The model includes only three free parameters (U(1)$^\\prime$ charge of the SM Higgs doublet, U(1)$^\\prime$ gauge ...
Embedding inflation into the Standard Model - more evidence for classical scale invariance
Kannike, Kristjan; Raidal, Martti
2014-01-01
If cosmological inflation is due to a single slowly rolling inflaton field taking trans-Planckian values, as suggested by the BICEP2 measurement of primordial tensor modes in the CMB, embedding inflation into the Standard Model challenges the standard paradigm of effective field theories. Together with an apparent absence of Planck-scale contributions to the Higgs mass and to the cosmological constant, BICEP2 provides further experimental evidence for the absence of large $M_{\rm P}$-induced operators. We show that classical scale invariance, the paradigm that all fundamental scales in Nature are induced by quantum effects, solves the problem and allows for a remarkably simple scale-free Standard Model extension with an inflaton, without extending the gauge group. Due to trans-Planckian inflaton values and vevs, the dynamically induced Coleman-Weinberg-type inflaton potential of the model can predict a tensor-to-scalar ratio $r$ in a large range, converging around the prediction of chaotic $m^2\phi^2$ inflation for a large t...
One-loop pseudo-Goldstone masses in the minimal SO(10) Higgs model
Gráf, Lukáš; Malinský, Michal; Mede, Timon; Susič, Vasja
2017-04-01
We calculate the prominent perturbative contributions shaping the one-loop scalar spectrum of the minimal renormalizable nonsupersymmetric SO(10) Higgs model whose unified gauge symmetry is spontaneously broken by an adjoint scalar. Focusing on its potentially realistic 45 ⊕ 126 variant, in which the rank is reduced by a vacuum expectation value of the 5-index antisymmetric self-dual tensor, we provide a thorough analysis of the corresponding Coleman-Weinberg one-loop effective potential, paying particular attention to the masses of the potentially tachyonic pseudo-Goldstone bosons transforming as (1, 3, 0) and (8, 1, 0) under the Standard Model (SM) gauge group. The results confirm the assumed existence of extended regions in the parameter space supporting a locally stable SM-like quantum vacuum inaccessible at the tree level. The effective-potential tedium is compared to that encountered in the previously studied 45 ⊕ 16 SO(10) Higgs model, where the polynomial corrections to the relevant pseudo-Goldstone masses turn out to be easily calculable within a very simplified purely diagrammatic approach.
One-loop pseudo-Goldstone masses in the minimal $SO(10)$ Higgs model
Gráf, Lukáš; Mede, Timon; Susič, Vasja
2016-01-01
We calculate the prominent perturbative contributions shaping the one-loop scalar spectrum of the minimal non-supersymmetric renormalizable $SO(10)$ Higgs model whose unified gauge symmetry is spontaneously broken by an adjoint scalar. Focusing on its potentially realistic $45\\oplus 126$ variant in which the rank is reduced by a VEV of the 5-index self-dual antisymmetric tensor, we provide a thorough analysis of the corresponding one-loop Coleman-Weinberg potential, paying particular attention to the masses of the potentially tachyonic pseudo-Goldstone bosons (PGBs) transforming as $(8,1,0)$ and $(1,3,0)$ under the Standard Model gauge group. The results confirm the assumed existence of extended regions in the parameter space supporting a locally stable SM-like quantum vacuum inaccessible at the tree-level. The effective potential (EP) tedium is compared to that encountered in the previously studied $45\\oplus 16$ $SO(10)$ Higgs model where the polynomial corrections to the relevant pseudo-Goldstone masses tur...
Low Scale Composite Higgs Model and 1.8 $\\sim$ 2 TeV Diboson Excess
Bian, Ligong; Shu, Jing
2015-01-01
We consider a simple solution to explain the recent diboson excess observed by the ATLAS and CMS Collaborations in models with custodial symmetry $SU(2)_L \times SU(2)_R \rightarrow SU(2)_c$. The $SU(2)_L$ triplet vector boson $\rho$ with mass in the range $1.8 \sim 2$ TeV would be produced through the Drell-Yan process with a sizable diboson decay branching to account for the excess. The other $SU(2)_L \times SU(2)_R$ bidoublet axial vector boson $a$ would cancel all deviations of electroweak observables induced by $\rho$ even if the SM fermions mix with some heavy vector-like (composite) fermions which couple to $\rho$ ("non-universally partially composite"), and therefore allows arbitrary couplings between each SM fermion and $\rho$. We present our model in the "General Composite Higgs" framework with $SO(5) \times U(1)_X \rightarrow SO(4) \times U(1)_X$ breaking at scale $f$ and demand the first Weinberg sum rule and positive gauge boson form factors as the theoretical constraints. We find that our model can fit the dib...
Semi-empirical model of solar plages
Institute of Scientific and Technical Information of China (English)
FANG; Cheng
2001-01-01
[1] Zirin, H., Astrophysics of the Sun, Chapter 7, Cambridge: Cambridge University Press, 1988.
[2] Shine, R. A., Linsky, J. L., Physical properties of solar chromospheric plages II. Chromospheric plage models, Solar Phys., 1974, 39: 49.
[3] Kelch, W. L., Linsky, J. L., Physical properties of solar chromospheric plages III. Models based on CaII and MgII observations, Solar Phys., 1978, 58: 37.
[4] Lemaire, P., Gouttebroze, P., Vial, J.-C. et al., Physical properties of the solar chromosphere deduced from optically thick lines, A & A, 1981, 103: 160.
[5] Fontenla, J. M., Avrett, E. H., Loeser, R., Energy balance in the solar transition region II. Effects of pressure and energy input on hydrostatic models, ApJ, 1991, 377: 712.
[6] Fontenla, J. M., Avrett, E. H., Loeser, R., Energy balance in the solar transition region III. Helium emission in hydrostatic, constant-abundance models with diffusion, ApJ, 1993, 406: 319.
[7] Pierce, A. K., Slaughter, C., Solar limb darkening I: λλ3033-7297, Solar Phys., 1977, 51: 25.
[8] Pierce, A. K., Slaughter, C., Weinberger, D., Solar limb darkening in the interval 7404-24018 Å, II, Solar Phys., 1977, 52: 179.
[9] Neckel, H., Labs, D., The solar radiation between 3300 and 12500 Å, Solar Phys., 1984, 90: 205.
[10] Vernazza, J. E., Avrett, E. H., Loeser, R., Structure of the solar chromosphere I. Basic computations and summary of the results, ApJ, 1973, 184: 605.
[11] Mihalas, D., Stellar Atmospheres, San Francisco: W. H. Freeman and Company, 1978.
[12] Fang, C., Hénoux, J.-C., Self-consistent model of flare heated solar chromosphere, A & A, 1983, 118: 139.
[13] Ding, M. D., Fang, C., A semi-empirical model of sunspot penumbra, A & A, 1989, 225: 204.
[14] Vernazza, J. E., Avrett, E. H., Loeser, R., Structure of the solar chromosphere III. Models of the EUV brightness components of the quiet Sun, ApJ Suppl., 1981, 45: 635.
[15] Canfield, R. C., Athey, R
Restudy on Dark Matter Time-Evolution in the Littlest Higgs Model with T-Parity
Institute of Scientific and Technical Information of China (English)
QIAO Qing-Peng; TANG Jian; LI Xue-Qian
2008-01-01
Following previous studies of the littlest Higgs model (LHM), the heavy photon is taken as a possible dark matter candidate and its relic abundance is estimated in terms of the Boltzmann-Lee-Weinberg time-evolution equation. The effect of T-parity violation is also considered. Our calculations show that when the Higgs mass $M_H$ is taken to be 300 GeV and T-parity violation is neglected, only two narrow ranges, $133 < M_{A_H} < 135$ GeV and $167 < M_{A_H} < 169$ GeV, are compatible with current astrophysical observations; if $135 < M_{A_H} < 167$ GeV, at least one other species of heavy particle must contribute to the cold dark matter. If T-parity can be violated, the heavy photon can decay into regular Standard Model particles and would affect the dark matter abundance in the universe; we discuss the constraint on the T-parity violation parameter based on the present data. Direct detection prospects are also discussed in some detail.
Electroweak vacuum stability in classically conformal $B-L$ extension of the Standard Model
Das, Arindam; Papapietro, Nathan
2015-01-01
We consider the minimal U(1)$_{B-L}$ extension of the Standard Model (SM) with the classically conformal invariance, where an anomaly free U(1)$_{B-L}$ gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1)$_{B-L}$ Higgs field. Because of the classically conformal symmetry, all dimensional parameters are forbidden. The $B-L$ gauge symmetry is radiatively broken through the Coleman-Weinberg mechanism, generating the mass for the $U(1)_{B-L}$ gauge boson ($Z^\\prime$ boson) and the right-handed neutrinos. Through a small negative coupling between the SM Higgs doublet and the $B-L$ Higgs field, the negative mass term for the SM Higgs doublet is generated and the electroweak symmetry is broken. In this model context, we investigate the electroweak vacuum instability problem in the SM. It is known that in the classically conformal U(1)$_{B-L}$ extension of the SM, the electroweak vacuum remains unstable in the renormalization group analysis at the one-loop level. In this pape...
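The Coleman-Weinberg mechanism invoked in this abstract generates the breaking scale radiatively. As a hedged illustration (a generic one-loop form, not the paper's specific $B-L$ potential):

```latex
V_{\mathrm{1\text{-}loop}}(\phi) = V_{\mathrm{tree}}(\phi)
  + \sum_i \frac{(-1)^{2s_i}\, n_i\, m_i^4(\phi)}{64\pi^2}
    \left[\ln\frac{m_i^2(\phi)}{\mu^2} - c_i\right]
```

where the sum runs over fields of spin $s_i$ with $n_i$ degrees of freedom and field-dependent masses $m_i(\phi)$; in the $\overline{\mathrm{MS}}$ scheme $c_i = 3/2$ for scalars and fermions and $5/6$ for gauge bosons. Radiative symmetry breaking occurs when the logarithms produce a minimum away from the origin even though the classically conformal symmetry forbids a tree-level mass term.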
Prum, Richard O
2010-11-01
The Fisher-inspired, arbitrary intersexual selection models of Lande (1981) and Kirkpatrick (1982), including both stable and unstable equilibrium conditions, provide the appropriate null model for the evolution of traits and preferences by intersexual selection. Like the Hardy–Weinberg equilibrium, the Lande–Kirkpatrick (LK) mechanism arises as an intrinsic consequence of genetic variation in trait and preference in the absence of other evolutionary forces. The LK mechanism is equivalent to other intersexual selection mechanisms in the absence of additional selection on preference and with additional trait-viability and preference-viability correlations equal to zero. The LK null model predicts the evolution of arbitrary display traits that are neither honest nor dishonest, indicate nothing other than mating availability, and lack any meaning or design other than their potential to correspond to mating preferences. The current standard for demonstrating an arbitrary trait is impossible to meet because it requires proof of the null hypothesis. The LK null model makes distinct predictions about the evolvability of traits and preferences. Examples of recent intersexual selection research document the confirmationist pitfalls of lacking a null model. Incorporation of the LK null into intersexual selection will contribute to serious examination of the extent to which natural selection on preferences shapes signals.
Marketing Search: An Interview with Pete Bell of Endeca and Gabriel Weinberg of DuckDuckGo
Directory of Open Access Journals (Sweden)
Brett Bonfield
2010-08-01
Full Text Available As it turns out, librarians aren’t the only ones competing with Google. In fact, we’re not even the only ones offering an alternative to Google when it comes to helping people find information. There’s Microsoft’s Bing, of course. And Yahoo! Search, at least until 2012, when Bing will begin providing Yahoo’s search results (though some [...
Brewer, Michael S.; Gardner, Grant E.
2013-01-01
Teaching population genetics provides a bridge between genetics and evolution by using examples of the mechanisms that underlie changes in allele frequencies over time. Existing methods of teaching these concepts often rely on computer simulations or hand calculations, which distract students from the material and are problematic for those with…
Mixing angles in SU(2)/sub L/ x U(1) gauge model
Energy Technology Data Exchange (ETDEWEB)
Nandi, S.; Tanaka, K.
1979-01-01
Exact expressions for the mixing parameters are obtained in terms of mass ratios in the standard Weinberg-Salam model with permutation symmetry $S_3$ for six quarks. The CP-violating phase is ignored, and there are no arbitrary parameters except for the quark masses. In the lowest order, the angles defined by Kobayashi-Maskawa are $\sin\theta_1 = \sin\theta_c = (m_d/(m_d + m_s))^{1/2}$, $\sin\theta_2 = -\sin\theta_3 = -m_s^2/m_b^2$, and $m_t m_s \geq m_c m_b = 7.2$ GeV$^2$, or $m_t \geq 24$ GeV for $m_s = 0.3$ GeV.
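The lowest-order Cabibbo relation quoted above can be checked numerically. In this sketch the strange-quark mass is the abstract's 0.3 GeV, while the down-quark mass of 15 MeV is our own assumption:

```python
# Quick numeric check (illustrative) of sin(theta_c) = sqrt(m_d / (m_d + m_s)).
# m_s = 0.3 GeV is taken from the abstract; m_d = 15 MeV is an assumed
# constituent-scale value, not the paper's.
import math

m_d, m_s = 0.015, 0.300  # GeV
sin_theta_c = math.sqrt(m_d / (m_d + m_s))
print(f"sin(theta_c) ~ {sin_theta_c:.3f}")  # compare to the measured ~0.22
```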
Unlocking the Standard Model. IV. N=2 generations of quarks : spectrum and mixing
Machet, Bruno
2013-01-01
The Glashow-Salam-Weinberg model for 2 generations of quarks is extended to 8 composite Higgs multiplets, with no extra fermions added. This is the minimal number of Higgs doublets required to account simultaneously for the spectrum of pseudoscalar mesons that can be built with 4 quarks and for the mass of the W gauge bosons. With these masses used as input, together with elementary low-energy considerations for the pions, we calculate all other parameters, masses and couplings. In this work we focus on the spectrum of the 8 Higgs bosons (which all potentially contribute to the W and quark masses) and on the mixing (Cabibbo) angle, leaving the study of couplings to a subsequent work. The Higgs bosons fall into one triplet, two doublets and one singlet. The triplet contains three states with masses $\sqrt{2}$ times that of the heaviest pseudoscalar meson D_s, which, for 2 generations, pushes them up to 2.80 GeV. The 2 components of the first doublet have masses close to 1.25 GeV. The singlet has a mas...
A NORTHWEST DATABASE MODEL OF SHORT TANDEM REPEAT LOCI IN FORENSIC MEDICINE
Institute of Scientific and Technical Information of China (English)
王振原; 朱波峰; 刘雅诚; 严江伟; 霍振义; 金天博; 李涛; 樊拴良; 方杰
2003-01-01
Objective To establish a northwest-China database of short tandem repeat (STR) loci in forensic medicine. Methods Bloodstains or whole blood samples were collected from unrelated prisoners in Xi'an city. Genotype distributions for 13 STR loci and the amelogenin locus were determined based on GeneScan. One primer for each locus was labeled with the fluorescent dye 5-FAM, JOE, or NED. The forensic database was generated using multiplex amplification, GeneScan, genotyping, and genetic distribution analysis. Results 113 alleles and 302 genotypes were observed, with corresponding frequencies between 0.0050-0.5250 and 0.0100-0.4100. The mean H was 0.7667. The accumulative DP was 0.9999999, and the accumulative EPP was 0.9999999. PIC ranged from 0.6036 to 0.8562. PM was less than 10^{-11}. The observed and expected genotype frequencies were compared using the χ²-test, and all loci were in accordance with Hardy-Weinberg equilibrium (P > 0.05). Conclusion These STR loci are ideal genetic markers with high polymorphism and stable heredity. They can be used for individual identification and paternity testing in forensic medicine. The forensic DNA database model was established successfully.
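The Hardy-Weinberg χ²-test applied to each locus above can be sketched as follows; the genotype counts are invented for illustration and are not the study's data:

```python
# Chi-square test for Hardy-Weinberg equilibrium at one biallelic locus.
# Genotype counts are hypothetical, chosen only to illustrate the procedure.
obs = {"AA": 180, "Aa": 170, "aa": 50}  # observed genotype counts (assumed)
n = sum(obs.values())

# Allele frequencies estimated from the genotype counts
p = (2 * obs["AA"] + obs["Aa"]) / (2 * n)
q = 1 - p

# Expected counts under Hardy-Weinberg proportions p^2, 2pq, q^2
exp = {"AA": p * p * n, "Aa": 2 * p * q * n, "aa": q * q * n}

chi2 = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)

# df = 1 for a biallelic locus; the 5% critical value is 3.841
in_hwe = chi2 < 3.841
print(f"p = {p:.4f}, chi2 = {chi2:.3f}, consistent with HWE: {in_hwe}")
```

For multi-allelic STR loci the same idea applies with more genotype classes and correspondingly more degrees of freedom.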
Energy Technology Data Exchange (ETDEWEB)
Cantanhede, Carlisson M. [Instituto de Fisica Teorica (IFT/UNESP), Sao Paulo, SP (Brazil); Casana, Rodolfo; Ferreira Junior, Manoel M. [Universidade Federal do Maranhao (UFMA), MA (Brazil). Dept. de Fisica; Hora, Eduardo da [Universidade Federal da Paraiba (UFPB), PB (Brazil). Dept. de Fisica
2012-07-01
Full text: Since the seminal works by Abrikosov [1] and Nielsen-Olesen [2] showing the existence of uncharged vortices, such nonperturbative solutions have been a theoretical issue of enduring interest. To date, electrically charged vortices have been obtained only in abelian models endowed with the Chern-Simons term [3,4]. This remains valid even in the context of highly nonlinear models, such as Born-Infeld electrodynamics. In this work, we demonstrate the existence of electrically charged BPS vortices in a Maxwell-Higgs model without the Chern-Simons term but endowed with a CPT-even and parity-odd Lorentz-violating (LV) structure. The LV term, belonging to the CPT-even electrodynamics of the Standard Model Extension [5], plays a role similar to that of the Chern-Simons term, mixing the electric and magnetic sectors. Besides providing a very rich set of vortex configurations exhibiting electric-field inversion, the LV coefficients also control the characteristic length of the vortex and the flipping of the magnetic flux. [1] A. Abrikosov, Sov. Phys. JETP 32, 1442 (1957). [2] H. Nielsen, P. Olesen, Nucl. Phys. B 61, 45 (1973). [3] R. Jackiw and E. J. Weinberg, Phys. Rev. Lett. 64, 2234 (1990). [4] C.K. Lee, K.M. Lee, H. Min, Phys. Lett. B 252, 79 (1990). [5] D. Colladay and V. A. Kostelecky, Phys. Rev. D 55, 6760 (1997); Phys. Rev. D 58, 116002 (1998). (author)
Palcu, A
2006-01-01
The unjustly neglected method of exactly solving generalized electro-weak models - with an original spontaneous symmetry breaking mechanism based on the gauge group $SU(n)_{L}\otimes U(1)_{Y}$ - is applied here to a particular class of chiral 3-3-1 models. This procedure enables us - without resorting to any approximation - to express the boson mass spectrum and the charges of the particles involved as a straightforward consequence of both a proper parametrisation of the Higgs sector and a new generalized Weinberg transformation. We prove that the resulting values can be accommodated to the experimental ones by tuning a single parameter. Furthermore, if we take into consideration both left-handed and right-handed components of the neutrino (included in a lepton triplet along with their corresponding left-handed charged partner), then we are in a position to propose an original method for the neutrino to acquire a very small but non-zero mass without spoiling the previously achieved results in the exact solution of th...
Tipler, Frank J
2010-01-01
I have shown that if we assume that the Standard Model of particle physics and Feynman-Weinberg quantum gravity hold at all times, then in the very early universe the Cosmic Background Radiation (CBR) cannot couple to right-handed electrons and quarks. If this property of the CBR has persisted to the present day, Ultra High Energy Cosmic Rays (UHECR) can propagate a factor of ten further than they could if the CBR were an electromagnetic field, since most of the cross section for pion production when a UHECR hits a CBR photon is due to a quark spin flip, and such a flip cannot occur if the CBR photon cannot couple to right-handed quarks. The GZK effect will still reduce the number of UHECR, but UHECR can arrive from distances up to a redshift of $z=0.1$. I show that taking this additional propagation distance into account allows us to identify the sources of 4 of the 6 UHECR which the Pierre Auger Collaboration could not identify, and also to identify the source of the 320 EeV UHECR seen by the Fly's Eye i...
B0d-B¯0d mixing and the prediction of the top-quark mass in an independent particle potential model
Barik, N.; Das, P.; Panda, A. R.; Roy, K. C.
1993-10-01
Considering B0d-B¯0d mixing in a potential model of independent quarks, by taking the effective interaction Hamiltonian of the standard Salam-Weinberg-Glashow model and subsequently diagonalizing the corresponding mass matrix with respect to the B0d and B¯0d states, we obtain an expression for the mass difference ΔM0Bd in terms of the t-quark mass mt. Using the recent observation of the mixing parameter xd = 0.72 ± 0.15 by the ARGUS Collaboration, we predict the lower bound on the top-quark mass as mt >= 149 GeV. Further, a consideration of the experimental mass difference ΔM0Bd = (4.0 ± 0.8) × 10^{-13} GeV also leads to mt = 167^{+16}_{-17} GeV, which is in agreement with the recent experimental bound as well as other theoretical predictions. However, such a prediction of mt, which utilizes the experimental value of the CKM matrix element |Vtd|, may not appear convincing in view of the large uncertainties in the measurements of |Vtd| reported so far. Therefore, using the range of mt values within the bounds predicted by other independent works, we make a reasonable estimation of |Vtd|.
Adventures in model-building beyond the Standard Model and esoterica in six dimensions
Stone, David C.
This dissertation is most easily understood as two distinct periods of research. The first three chapters are dedicated to phenomenological interests in physics. An anomalous measurement of the top quark forward-backward asymmetry in both detectors at the Tevatron collider is explained by particle content from beyond the Standard Model. The extra field content is assumed to have originated from a grand unified group SU(5), and so only specific content may be added. Methods for spontaneously breaking the R-symmetry of supersymmetric theories, of phenomenological interest for any realistic supersymmetric model, are studied in the context of two-loop Coleman-Weinberg potentials. For a superpotential with a certain structure, which must include two different couplings, a robust method of spontaneously breaking the R-symmetry is established. The phenomenological studies conclude with an isospin analysis of B decays to kaons and pions. When the parameters of the analysis are fit to data, it is seen that an enhancement of matrix elements in certain representations of isospin emerges. This is highly reminiscent of the infamous and unexplained enhancements seen in the K → ππ system. We conjecture that this enhancement may be a universal feature of the flavor group, isospin in this case, rather than of just the K → ππ system. The final two chapters approach the problem of counting degrees of freedom in quantum field theories. We examine the form of the Weyl anomaly in six dimensions with the Weyl consistency conditions. These consistency conditions impose constraints that lead to a candidate for the a-theorem in six dimensions. This candidate has all the properties that the equivalent theorems in two and four dimensions did, and, in fact, we show that in an even number of dimensions the form of the Euler density, the generalized Einstein tensor, and the Weyl transformations guarantee such a candidate exists. We go on to show that, unlike in two and four dimensions
Directory of Open Access Journals (Sweden)
Rebeca Pérez-Morales
2011-01-01
Full Text Available Lung cancer is the leading cause of cancer mortality in Mexico and worldwide. In the past decade, there has been an increase in the number of lung cancer cases in young people, which suggests an important role for genetic background in the etiology of this disease. In this study, we genetically characterized 16 polymorphisms in 12 low penetrance genes (AhR, CYP1A1, CYP2E1, EPHX1, GSTM1, GSTT1, GSTPI, XRCC1, ERCC2, MGMT, CCND1 and TP53) in 382 healthy Mexican Mestizos as the first step in elucidating the genetic structure of this population and identifying high risk individuals. All of the genotypes analyzed were in Hardy-Weinberg equilibrium, but different degrees of linkage were observed for polymorphisms in the CYP1A1 and EPHX1 genes. The genetic variability of this population was distributed in six clusters that were defined based on their genetic characteristics. The use of a polygenic model to assess the additive effect of low penetrance risk alleles identified combinations of risk genotypes that could be useful in predicting a predisposition to lung cancer. Estimation of the level of genetic susceptibility showed that the individual calculated risk value (iCRV) ranged from 1 to 16, with a higher iCRV indicating a greater genetic susceptibility to lung cancer.
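The additive polygenic risk idea (the iCRV) can be sketched as a simple count of risk genotypes. The loci and "risk" genotypes below are hypothetical placeholders, not the study's actual risk alleles:

```python
# Hypothetical sketch of an additive polygenic risk score: each risk genotype
# a subject carries contributes one unit to an individual calculated risk
# value (iCRV). Loci and genotype labels are invented for illustration.
risk_genotypes = {            # locus -> genotype treated as "at risk" (assumed)
    "CYP1A1": "mut/mut",
    "GSTM1": "null",
    "EPHX1": "slow/slow",
    "XRCC1": "var/var",
}

def icrv(subject):
    """Count risk genotypes carried by a subject (dict: locus -> genotype)."""
    return sum(1 for locus, risky in risk_genotypes.items()
               if subject.get(locus) == risky)

subject = {"CYP1A1": "mut/mut", "GSTM1": "null",
           "EPHX1": "wt/slow", "XRCC1": "var/var"}
print(icrv(subject))  # higher scores suggest greater genetic susceptibility
```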
Chishtie, F A; Jia, J; Mann, R B; McKeon, D G C; Sherry, T N; Steele, T G
2010-01-01
We consider the effective potential $V$ in the standard model with a single Higgs doublet in the limit that the only mass scale $\\mu$ present is radiatively generated. Using a technique that has been shown to determine $V$ completely in terms of the renormalization group (RG) functions when using the Coleman-Weinberg (CW) renormalization scheme, we first sum leading-log (LL) contributions to $V$ using the one loop RG functions, associated with five couplings (the top quark Yukawa coupling $x$, the quartic coupling of the Higgs field $y$, the $SU(3)$ gauge coupling $z$, and the $SU(2) \\times U(1)$ couplings $r$ and $s$). We then employ the two loop RG functions with the three couplings $x$, $y$, $z$ to sum the next-to-leading-log (NLL) contributions to $V$ and then the three to five loop RG functions with one coupling $y$ to sum all the $N^2LL \\ldots N^4LL$ contributions to $V$. In order to compute these sums, it is necessary to convert those RG functions that have been originally computed explicitly in the mi...
Energy Technology Data Exchange (ETDEWEB)
Hue, L.T. [Duy Tan University, Institute of Research and Development, Da Nang City (Viet Nam); Vietnam Academy of Science and Technology, Institute of Physics, Hanoi (Viet Nam); Arbuzov, A.B. [Joint Institute for Nuclear Researches, Bogoliubov Laboratory for Theoretical Physics, Dubna (Russian Federation); Ngan, N.T.K. [Cantho University, Department of Physics, Cantho (Viet Nam); Vietnam Academy of Science and Technology, Graduate University of Science and Technology, Hanoi (Viet Nam); Long, H.N. [Ton Duc Thang University, Theoretical Particle Physics and Cosmology Research Group, Ho Chi Minh City (Viet Nam); Ton Duc Thang University, Faculty of Applied Sciences, Ho Chi Minh City (Viet Nam)
2017-05-15
The neutrino and Higgs sectors in the SU(2)_1 x SU(2)_2 x U(1)_Y model with lepton-flavor non-universality are discussed. We show that active neutrinos can get Majorana masses from radiative corrections after adding only new singly charged Higgs bosons. The mechanism for the generation of neutrino masses is the same as in the Zee models. This also gives a hint to solving the dark matter problem along lines similar to those discussed recently in many radiative neutrino mass models with dark matter. Except for the active neutrinos, the appearance of singly charged Higgs bosons and dark matter does not significantly affect the physical spectrum of all particles in the original model. We indicate this point by investigating the Higgs sector both before and after the singly charged scalars are added to it. Many interesting properties of the physical Higgs bosons, which were not shown previously, are explored. In particular, the mass matrices of the charged and CP-odd Higgs fields are proportional to the coefficient of the triple Higgs coupling μ. The mass eigenstates and eigenvalues in the CP-even Higgs sector are also presented. All couplings of the SM-like Higgs boson to normal fermions and gauge bosons differ from the SM predictions by a factor c_h, which must satisfy the recent global fit of experimental data, namely 0.995 < |c_h| < 1. We have analyzed a more general diagonalization of the gauge boson mass matrices, and we show that the ratio of the tangents of the W-W' and Z-Z' mixing angles is exactly the cosine of the Weinberg angle, implying that the number of parameters is reduced by 1. Signals of new physics from decays of new heavy fermions and Higgs bosons at the LHC and constraints on their masses are also discussed. (orig.)
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
Energy Technology Data Exchange (ETDEWEB)
Manley, D.M.
1981-11-01
The doubly differential cross section d²σ/dΩdT for π⁺ mesons produced in the reaction π⁻p → π⁺π⁻n was measured at 203, 230, 256, and 358 MeV with a single-arm magnetic spectrometer. A set of five previous measurements at 254, 280, 292, 331, and 356 MeV was reanalyzed with the new measurements. Integrated cross sections were calculated for the combined data set with unprecedented accuracy for this energy range. The chiral-symmetry-breaking parameter was determined to be ε = -0.03 ± 0.26 by extrapolating the mean square modulus of the matrix element to threshold and comparing the threshold matrix element with the prediction of soft-pion theory. This value of ε is consistent with zero, as required by the Weinberg Lagrangian. Measurements at the three highest energies were compared with the results of an isobar-model analysis of bubble-chamber events by an LBL-SLAC collaboration. After allowing for an overall normalization difference, the measurements at 331 and 358 MeV were in excellent agreement with the results of their analysis. The measurement at 292 MeV required variation of the PS11(εN) amplitude, as well as the overall normalization, which could be due to the limited number of bubble-chamber events available for the LBL-SLAC analysis at this energy. A partial-wave analysis of the measurements was also carried out with the VPI isobar model. Within this model, the matrix element contains a background term calculated from a phenomenological πN Lagrangian that is consistent with the hypotheses of current algebra and PCAC. The reaction was found to be dominated by the initial P11 wave. Production of the Δ isobar from initial D waves was found to be significant at the two highest energies.
Model Transformations? Transformation Models!
Bézivin, J.; Büttner, F.; Gogolla, M.; Jouault, F.; Kurtev, I.; Lindow, A.
2006-01-01
Much of the current work on model transformations seems essentially operational and executable in nature. Executable descriptions are necessary from the point of view of implementation. But from a conceptual point of view, transformations can also be viewed as descriptive models by stating only the
Simonse, W.L.
2014-01-01
Business model design does not always produce a “design” or “model” as the expected result. However, when designers are involved, a visual model or artifact is produced. To assist strategic managers in thinking about how they can act, the designers’ challenge is to combine both strategy and design n
Das, Arindam; Oda, Satsuki; Okada, Nobuchika; Takahashi, Dai-suke
2016-06-01
We consider the minimal U(1)' extension of the standard model (SM) with the classically conformal invariance, where an anomaly-free U(1)' gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1)' Higgs field. Since the classically conformal symmetry forbids all dimensional parameters in the model, the U(1)' gauge symmetry is broken by the Coleman-Weinberg mechanism, generating the mass terms of the U(1)' gauge boson (Z' boson) and the right-handed neutrinos. Through a mixing quartic coupling between the U(1)' Higgs field and the SM Higgs doublet field, the radiative U(1)' gauge symmetry breaking also triggers the breaking of the electroweak symmetry. In this model context, we first investigate the electroweak vacuum instability problem in the SM. Employing the renormalization group equations at the two-loop level and the central values for the world average masses of the top quark (mt = 173.34 GeV) and the Higgs boson (mh = 125.09 GeV), we perform parameter scans to identify the parameter region for resolving the electroweak vacuum instability problem. Next we interpret the recent ATLAS and CMS search limits at the LHC Run-2 for the sequential Z' boson to constrain the parameter region in our model. Combining the constraints from the electroweak vacuum stability and the LHC Run-2 results, we find a bound on the Z' boson mass of mZ' ≳ 3.5 TeV. We also calculate self-energy corrections to the SM Higgs doublet field through the heavy states, the right-handed neutrinos and the Z' boson, and find the naturalness bound mZ' ≲ 7 TeV, in order to reproduce the right electroweak scale for a fine-tuning level better than 10%. The resultant mass range 3.5 TeV ≲ mZ' ≲ 7 TeV will be explored at the LHC Run-2 in the near future.
Modelling SDL, Modelling Languages
Directory of Open Access Journals (Sweden)
Michael Piefel
2007-02-01
Full Text Available Today's software systems are too complex to implement and model using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Handling an increasing number of related or integrated languages is thus the most challenging task in the development of tools. We use object-oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes) from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.
Model-independent determination of the compositeness of near-threshold quasibound states
Kamiya, Yuki
2016-01-01
We study the compositeness of near-threshold states to clarify the internal structure of exotic hadron candidates. Within the framework of effective field theory, we extend Weinberg's weak-binding relation to include a nearby CDD (Castillejo-Dalitz-Dyson) pole contribution with the help of the Padé approximant. Using the extended relation, we conclude that the CDD pole contribution to the Lambda(1405) baryon in the $\bar{K}N$ amplitude is negligible.
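For orientation, Weinberg's weak-binding relation referred to above connects the scattering length of a shallow quasibound state to its compositeness $X$. This is the standard form, which the paper extends by a CDD-pole term; the correction scale $R_{\mathrm{typ}}$ is set by the interaction range:

```latex
a_0 = R\,\frac{2X}{1+X} + \mathcal{O}\!\left(\frac{R_{\mathrm{typ}}}{R}\right),
\qquad R \equiv \frac{1}{\sqrt{2\mu B}}
```

with reduced mass $\mu$ and binding energy $B$; $X = 1$ corresponds to a purely composite (molecular) state and $X = 0$ to a purely elementary one.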
DEFF Research Database (Denmark)
Poulsen, Helle
1996-01-01
This paper presents a functional modelling method called Actant Modelling, rooted in linguistics and semiotics. Actant modelling can be integrated with Multilevel Flow Modelling (MFM) in order to give an interpretation of actants.
Anaïs Schaeffer
2012-01-01
By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known &...
DEFF Research Database (Denmark)
2011-01-01
This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed and strategies are also given to improve model development towards "fit-for-purpose" models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very...
Li, Qin; Zhao, Yongxin; Wu, Xiaofeng; Liu, Si
There can be many models specifying aspects of the same system, each with a bias towards one aspect. These models often overlap in specific aspects even though they have different expressions. A specification written in one model can be refined by introducing additional information from other models. The paper proposes the concept of promoting models, a methodology for obtaining refinements with support from cooperating models: it refines a primary model by integrating information from a secondary model. The promotion principle is not merely an academic point, but a reliable and robust engineering technique that can be used to develop software and hardware systems. It can also check the consistency between two specifications from different models. A case study modelling a simple online shopping system with the cooperation of the guarded design model and the CSP model illustrates the practicability of the promotion principle.
DEFF Research Database (Denmark)
Stubkjær, Erik
2005-01-01
Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to...
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2003-01-01
This paper puts forward a new concept: the model warehouse. It analyzes why the model warehouse has appeared and introduces its characteristics and architecture. Finally, the paper points out that the model warehouse is an important part of WebGIS.
DEFF Research Database (Denmark)
2011-01-01
This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature dependent behaviour of physical properties are introduced, as well as equation of state models (EOS) such as the SRK EOS. Modelling of liquid phase activity coefficients is also covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses...
Batty, M.
2007-01-01
The term 'model' is now central to our thinking about how we understand and design cities. We suggest a variety of ways in which we use 'models', linking these ideas to Abercrombie's exposition of Town and Country Planning, which represented the state of the art fifty years ago. Here we focus on using models as physical representations of the city, tracing the development of symbolic models, where the focus is on simulating how function generates form, to iconic models, where the focus is on representing...
Chang, CC
2012-01-01
Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko
DEFF Research Database (Denmark)
Bækgaard, Lars
2001-01-01
The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse are dynamic, and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects, events are phenomena that can be observed and described. For example, borrow events in a library can...
Yang, Qiaoli
I started work in the field of dark matter and cosmology with Dr. Sikivie three years ago with the goal of observationally distinguishing axions or axion-like particles (ALPs) from other dark matter candidates such as weakly interacting massive particles (WIMPs) and sterile neutrinos. The subject is exciting because if one can determine the identity of the dark matter, it will be a milestone of physics beyond the standard model. On the high energy frontier, the standard model with three generations of fermions is firmly established. However, it is not complete, because the theory does not contain a plausible dark matter candidate with the properties required by observation, and it has fine-tuning problems such as the strong CP problem. On the cosmology and astrophysics frontiers, new observations of the dynamics of galaxy clusters, the rotation curves of galaxies, the abundances of light elements, gravitational lensing, and the anisotropies of the CMBR reach unprecedented accuracy. They imply that cold dark matter (CDM) makes up 23% of the total energy density of the universe. Although many "beyond the standard model" theories may provide proper candidates to serve as CDM particles, the axion is especially compelling because it not only serves as the CDM particle but also solves the strong CP problem. The axion was initially motivated by the strong CP problem, namely the puzzle of why there is no CP violation in the strong interactions. Peccei and Quinn solved the problem by introducing a new U(1)_PQ symmetry, and later Weinberg and Wilczek pointed out that the spontaneous breaking of the U(1)_PQ symmetry leads to a new pseudoscalar particle, the axion [1][2][3]. Axion models were proposed in which the symmetry breaking scale may be much larger than the electroweak scale, in which case the axion is very light and couples extremely weakly to ordinary matter. Furthermore, it was realized [4] that the cold axions, produced by the misalignment mechanism during the QCD phase transition, have...
Digital Repository Service at National Institute of Oceanography (India)
Unnikrishnan, A; Manoj, N.T.
Various numerical models used to study the dynamics and horizontal distribution of salinity in the Mandovi-Zuari estuaries, Goa, India, are discussed in this chapter. Earlier, a one-dimensional network model was developed for representing the complex...
Turner, Raymond
2009-01-01
Computational models can be found everywhere in present day science and engineering. Raymond Turner provides a logical framework and foundation for the specification and design of specification languages, and uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al...
Taylor, J G
2009-01-01
We present tentative answers to three questions: first, what is to be assumed about the structure of the brain in attacking the problem of modeling consciousness; second, what aspect of consciousness is being modeled; and finally, what the modeling enterprise takes on board, if anything, from the vast literature by philosophers on the nature of mind.
DEFF Research Database (Denmark)
Sclütter, Flemming; Frigaard, Peter; Liu, Zhou
This report presents the model test results on wave run-up on the Zeebrugge breakwater under the simulated prototype storms. The model test was performed in January 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University. The detailed description of the model is given...
DEFF Research Database (Denmark)
Ravn, Anders P.; Staunstrup, Jørgen
1994-01-01
This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two. The model describes both functional and timing properties of an interface...
DEFF Research Database (Denmark)
2011-01-01
This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic...
Model Experiments and Model Descriptions
Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian
1999-01-01
The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort previously started in the first Workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present day atmosphere was selected. The intent was that successful simulations of the set of measurements should become the prerequisite for the acceptance of these models as having a reliable prediction for future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part of this section gives brief descriptions of each model as provided by the individual modeling groups.
Scalable Models Using Model Transformation
2008-07-13
Supported by AFRL, the State of California Micro Program, and the following companies: Agilent, Bosch, HSBC, Lockheed-Martin, National Instruments, and Toyota. Topics include parametrization and workflow automation.
DEFF Research Database (Denmark)
Stubkjær, Erik
2005-01-01
Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land and buildings, documents, data stores and archives, as well as persons in their diverse roles as owners, holders ... to land. The paper advances the position that cadastral modeling has to include not only the physical objects, agents, and information sets of the domain, but also the objectives or requirements of cadastral systems.
Modelling in Business Model design
Simonse, W.L.
2013-01-01
It appears that business model design does not always produce a design or model as the expected result. However, when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designer's challenge is to combine strategy and...
Druyan, Leonard M.
2012-01-01
Climate modelling is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.
2016-01-01
This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.
DEFF Research Database (Denmark)
Nielsen, Mogens Peter; Shui, Wan; Johansson, Jens
2011-01-01
In this report a new turbulence model is presented. In contrast to the bulk of modern work, the model is a classical continuum model with a relatively simple constitutive equation. The constitutive equation is, as usual in continuum mechanics, entirely empirical. It has the usual Newton or Stokes term with stresses depending linearly on the strain rates. This term takes into account the transfer of linear momentum from one part of the fluid to another. Besides, there is another term, which takes into account the transfer of angular momentum. Thus the model implies a new definition of turbulence. The model is in a virgin state, but a number of numerical tests have been carried out with good results. It is published to encourage other researchers to study the model in order to find its merits and possible limitations.
DEFF Research Database (Denmark)
Blomhøj, Morten
2004-01-01
Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only at and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics at the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical...
Wenninger, Magnus J
2012-01-01
Well-illustrated, practical approach to creating star-faced spherical forms that can serve as basic structures for geodesic domes. Complete instructions for making models from circular bands of paper with just a ruler and compass. Discusses tessellation, or tiling, and how to make spherical models of the semiregular solids and concludes with a discussion of the relationship of polyhedra to geodesic domes and directions for building models of domes. ". . . very pleasant reading." - Science. 1979 edition.
DEFF Research Database (Denmark)
Liu, Zhou; Frigaard, Peter
This report presents the model test results on wave run-up and run-down on the Zeebrugge breakwater under short-crested oblique wave attacks. The model test was performed in March-April 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University.
DEFF Research Database (Denmark)
Vestergaard, Kristian
the engineers, but as the scale and the complexity of the hydraulic works increased, the mathematical models became so complex that a mathematical solution could not be obtained. This created a demand for new methods, and again experimental investigation became popular, but this time as measurements on small-scale models. But still the scale and complexity of hydraulic works were increasing, and soon even small-scale models reached a natural limit for some applications. In the meantime the modern computer was developed, and it became possible to solve complex mathematical models by use of computer-based numerical...
Energy Technology Data Exchange (ETDEWEB)
V. Chipman
2002-10-05
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions as output from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to...
Modeling Documents with Event Model
Directory of Open Access Journals (Sweden)
Longhui Wang
2015-08-01
Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical mode in which the brain deals with images and speech. In the field of NLP, a topic model is one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. The Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction is the process of representing a document as a low-dimensional vector by a linear model that is completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.
Model Selection for Geostatistical Models
Energy Technology Data Exchange (ETDEWEB)
Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.
2006-02-01
We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
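The model-selection criterion in the abstract can be made concrete. Below is a minimal sketch (not the authors' geostatistical implementation, which explicitly accounts for spatial correlation) of comparing two candidate models by AIC = 2k - 2 ln L under an i.i.d. Gaussian error assumption; the residuals, noise level, and parameter counts are hypothetical.

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: lower is better."""
    return 2 * k - 2 * log_likelihood

def gaussian_log_likelihood(residuals, sigma):
    """Log-likelihood of i.i.d. Gaussian residuals (ignores spatial correlation)."""
    n = len(residuals)
    return (-0.5 * n * math.log(2 * math.pi * sigma**2)
            - sum(r * r for r in residuals) / (2 * sigma**2))

# Hypothetical residuals from two candidate models fitted to the same data.
res_a = [0.2, -0.1, 0.3, -0.2, 0.1]    # model A: 3 parameters
res_b = [0.2, -0.1, 0.3, -0.25, 0.15]  # model B: 5 parameters

aic_a = aic(gaussian_log_likelihood(res_a, sigma=0.25), 3)
aic_b = aic(gaussian_log_likelihood(res_b, sigma=0.25), 5)
best = "A" if aic_a < aic_b else "B"  # extra parameters must earn their keep
```

The penalty term 2k is what prevents the richer model from winning automatically, which is the point the abstract makes about including or excluding explanatory variables.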
DEFF Research Database (Denmark)
Højgaard, Tomas; Hansen, Rune
2016-01-01
The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful to construct this approach in mathematics education research.
DEFF Research Database (Denmark)
Gøtze, Jens Peter; Krentz, Andrew
2014-01-01
In this issue of Cardiovascular Endocrinology, we are proud to present a broad and dedicated spectrum of reviews on animal models in cardiovascular disease. The reviews cover most aspects of animal models in science from basic differences and similarities between small animals and the human...
Giandomenico, Rossano
2006-01-01
The model determines a stochastic continuous process as the continuous limit of a stochastic discrete process, showing that the discrete process converges to the continuous one so that it can be integrated. Furthermore, the model determines the expected volatility and the expected mean, showing that both are increasing functions of time.
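As a generic illustration of a discrete stochastic process converging to its continuous limit (a simple random walk converging to Brownian motion; this is a textbook example, not the author's specific model), the sketch below checks numerically that the cross-sectional standard deviation grows roughly like the square root of time:

```python
import random
import statistics

def random_walk_paths(n_steps, n_paths, dt, seed=0):
    """Simulate scaled random walks: increments of +/- sqrt(dt), the discrete
    process whose continuum limit is Brownian motion."""
    rng = random.Random(seed)
    step = dt ** 0.5
    paths = []
    for _ in range(n_paths):
        x, path = 0.0, [0.0]
        for _ in range(n_steps):
            x += step if rng.random() < 0.5 else -step
            path.append(x)
        paths.append(path)
    return paths

paths = random_walk_paths(n_steps=400, n_paths=2000, dt=0.01)
# Cross-sectional standard deviation at t = 1 (step 100) and t = 4 (step 400):
sd_t1 = statistics.pstdev(p[100] for p in paths)
sd_t4 = statistics.pstdev(p[400] for p in paths)
# For Brownian motion sd(t) = sqrt(t), so sd_t4 / sd_t1 should be close to 2.
```

The widening spread with time is the sense in which the volatility is an increasing function of time.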
Budiansky, Stephen
1980-01-01
This article discusses the need for more accurate and complete input data and field verification of the various models of air pollutant dispersion. Consideration should be given to changing the form of air quality standards based on enhanced dispersion modeling techniques. (Author/RE)
Poortman, Sybilla; Sloep, Peter
2006-01-01
Educational models describes a case study on a complex learning object. Possibilities are investigated for using this learning object, which is based on a particular educational model, outside of its original context. Furthermore, this study provides advice that might lead to an increase in
Jongerden, M.R.; Haverkort, Boudewijn R.H.M.
2008-01-01
The use of mobile devices is often limited by the capacity of the employed batteries. The battery lifetime determines how long one can use a device. Battery modeling can help to predict, and possibly extend this lifetime. Many different battery models have been developed over the years. However,
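One of the simplest analytical battery models is Peukert's law, which relates lifetime to discharge current. The sketch below is a minimal illustration with hypothetical cell parameters; it is not one of the models surveyed in the abstract.

```python
def peukert_lifetime(capacity_ah, current_a, k=1.2, rated_current_a=1.0):
    """Peukert's law: lifetime t = C / I^k (hours), with the capacity C
    referenced to a rated discharge current. k > 1 captures the loss of
    effective capacity at high discharge rates; k = 1 is an ideal battery."""
    return capacity_ah * (rated_current_a / current_a) ** k / rated_current_a

# Hypothetical 2 Ah cell: doubling the load current more than halves lifetime.
t_low = peukert_lifetime(2.0, 0.5)   # light load (0.5 A)
t_high = peukert_lifetime(2.0, 1.0)  # rated load (1.0 A)
```

More elaborate models (e.g. kinetic or electrochemical ones) add recovery effects that such a purely static law cannot capture, which is one reason so many different battery models exist.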
Linguistic models and linguistic modeling.
Pedryez, W; Vasilakos, A V
1999-01-01
The study is concerned with a linguistic approach to the design of a new category of fuzzy (granular) models. In contrast to numerically driven identification techniques, we concentrate on building meaningful linguistic labels (granules) in the space of experimental data and forming the ensuing model as a web of associations between such granules. As such models are designed at the level of information granules and generate results in the same granular rather than pure numeric format, we refer to them as linguistic models. Furthermore, as there are no detailed numeric estimation procedures involved in the construction of the linguistic models carried out in this way, their design mode can be viewed as that of rapid prototyping. The underlying algorithm used in the development of the models utilizes an augmented version of the clustering technique (context-based clustering) that is centered around a notion of linguistic contexts: a collection of fuzzy sets or fuzzy relations defined in the data space (more precisely, a space of input variables). The detailed design algorithm is provided and contrasted with the standard modeling approaches commonly encountered in the literature. The usefulness of the linguistic mode of system modeling is discussed and illustrated with the aid of numeric studies including both synthetic data as well as some time series dealing with modeling traffic intensity over a broadband telecommunication network.
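The basic ingredient of such linguistic models, fuzzy membership functions acting as linguistic labels (granules), can be sketched as follows. This is a minimal illustration with hypothetical triangular granules, not the paper's context-based clustering algorithm:

```python
def triangular(a, b, c):
    """Return a triangular fuzzy membership function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical linguistic labels (granules) over a normalized input space.
labels = {
    "low":    triangular(-0.5, 0.0, 0.5),
    "medium": triangular(0.0, 0.5, 1.0),
    "high":   triangular(0.5, 1.0, 1.5),
}

def linguistic_description(x):
    """Describe x by its degree of membership in each granule."""
    return {name: round(mu(x), 2) for name, mu in labels.items()}

desc = linguistic_description(0.4)  # mostly "medium", a little "low"
```

A model output expressed this way stays in the granular format the abstract describes, rather than collapsing to a single number.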
Energy Technology Data Exchange (ETDEWEB)
Veronica J. Rutledge
2013-01-01
The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to
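OSPREY itself solves the dispersed plug-flow equations numerically, but the characteristic S-shape of a breakthrough curve can be illustrated with the much simpler closed-form Thomas model, commonly used to fit fixed-bed adsorption data. All parameter values below are hypothetical:

```python
import math

def thomas_breakthrough(t, k_th, q0, m, Q, c0):
    """Thomas model: effluent-to-inlet concentration ratio C/C0 at time t
    for a packed adsorption bed.
      k_th : Thomas rate constant (L/(mg*h))
      q0   : equilibrium sorbent capacity (mg/g)
      m    : sorbent mass in the bed (g)
      Q    : volumetric flow rate (L/h)
      c0   : inlet concentration (mg/L)
    """
    return 1.0 / (1.0 + math.exp(k_th * q0 * m / Q - k_th * c0 * t))

# Hypothetical bed: sweep time to trace the S-shaped breakthrough curve.
params = dict(k_th=0.01, q0=50.0, m=100.0, Q=10.0, c0=20.0)
curve = [(t, thomas_breakthrough(t, **params)) for t in range(0, 51, 5)]
```

The midpoint C/C0 = 0.5 occurs at t = q0*m/(Q*c0), which is one way such fits are used to estimate bed capacity, the sizing quantity the report mentions.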
Mitchell, W.D.
1972-01-01
Model hydrographs are composed of pairs of dimensionless ratios, arrayed in tabular form, which, when modified by the appropriate values of rainfall excess and by the time and areal characteristics of the drainage basin, satisfactorily represent the flood hydrograph for the basin. Model hydrographs are developed from a dimensionless translation hydrograph, having a time base of T hours and appropriately modified for storm duration by routing through reservoir storage, S = kO^x. Models fall into two distinct classes: (1) those for which the value of x is unity and which have all the characteristics of true unit hydrographs and (2) those for which the value of x is other than unity and to which the unit-hydrograph principles of proportionality and superposition do not apply. Twenty-six families of linear models and eight families of nonlinear models, in tabular form, constitute the principal subject of this report. Supplemental discussions describe the development of the models and illustrate their application. Other sections of the report, supplemental to the tables, describe methods of determining the hydrograph characteristics, T, k, and x, both from observed hydrographs and from the physical characteristics of the drainage basin. Five illustrative examples of use show that the models, when properly converted to incorporate actual rainfall excess and the time and areal characteristics of the drainage basins, do indeed satisfactorily represent the observed flood hydrographs for the basins.
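The routing step described above (modifying a translation hydrograph through reservoir storage S = kO^x) can be sketched with simple explicit level-pool routing of the continuity equation dS/dt = I - O; the inflow hydrograph and parameters below are hypothetical:

```python
def route_through_storage(inflow, k, x, dt=1.0):
    """Route an inflow hydrograph through reservoir storage S = k * O**x
    using explicit Euler on the continuity equation dS/dt = I - O."""
    storage, outflow = 0.0, []
    for i in inflow:
        # Invert the storage relation to get the current outflow.
        o = (storage / k) ** (1.0 / x) if storage > 0 else 0.0
        outflow.append(o)
        storage = max(storage + dt * (i - o), 0.0)
    return outflow

# Hypothetical triangular inflow hydrograph (units arbitrary):
inflow = [0, 2, 4, 6, 4, 2, 0, 0, 0, 0]
linear = route_through_storage(inflow, k=2.0, x=1.0)     # unit-hydrograph case
nonlinear = route_through_storage(inflow, k=2.0, x=1.5)  # proportionality fails
```

With x = 1 the response is linear, so proportionality and superposition hold; with x other than 1 they do not, matching the report's distinction between its two model classes. In both cases the routed peak is attenuated below the inflow peak.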
Grimaldi, P.
2012-07-01
Stereometric modelling means modelling achieved with: the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); the visualization of the shot in two distinct windows; and the stereoscopic viewing of the shot while modelling. Since "3D vision" is often inaccurately used for the simple perspective view of an object, the word stereo is added, so that "3D stereo vision" stands for a true three-dimensional view in which the width, height and depth of the surveyed image can be measured. A stereometric model, either real or virtual, is developed through the "materialization", either real or virtual, of the optical stereometric model made visible with a stereoscope. Continuous online updating of cultural heritage records is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available online at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å C(α) RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
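The accuracy levels quoted above (1 to 6 Å Cα RMSD) presuppose an optimal superposition of model and native coordinates. A minimal sketch of that measure, assuming pre-matched Cα coordinate arrays and using the Kabsch algorithm (not necessarily the exact implementation used in the study):

```python
import numpy as np

def ca_rmsd(P, Q):
    """C-alpha RMSD after optimal superposition (Kabsch algorithm).
    P, Q: (N, 3) arrays of matched C-alpha coordinates."""
    P = P - P.mean(axis=0)          # center both point sets
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))

# A rotated and translated copy of a structure has RMSD near zero
rng = np.random.default_rng(0)
P = rng.standard_normal((20, 3))
c, s = np.cos(0.5), np.sin(0.5)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
rmsd_val = ca_rmsd(P, Q)
```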
Lin, Tony; Erfan, Sasan
2016-01-01
Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…
DEFF Research Database (Denmark)
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight
2016-01-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how…
Berger, Stefan; Lang, Stefan; Pernkopf, Lena
2017-04-01
The climate change (CC) adaptability of a region is of growing concern due to the irreversible character of CC and the multitude of factors supporting or hampering the capability to adapt. Research on climate change adaptation, complex in character and global in its societal and environmental implications, involves several schools according to [Miller et al. 2010]: (1) the 'vulnerability' community, with its two to three main pillars (exposure, adaptive capacity, sensitivity) following the actor-oriented IPCC approach [IPCC 2007], investigating the degree to which a system is susceptible to, and unable to cope with, adverse effects of climate change; and (2) the 'resilience' community, emerging from the (eco-)systems approach, with its dual function [Folke 2006] of absorbing disturbance and self-renewal/-organisation. The concept of 'transformability' seems to be the appropriate overarching one to accommodate either notion. Here we treat CC adaptability/transformability as a latent phenomenon to be operationalized by decomposition [Weinberg 1975]. We then re-compose a meta-indicator based on a scale-specific spatial set of regions characterised by a uniform response to the phenomenon under concern. In [Lang et al. 2014] we showed how gridded fine-scale data, integrated and regionalised, can support ambitious policy interventions in the so-called geon approach. Spatializing a multi-dimensional indicator set using scale-specific regionalisation aims for a policy-driven 'unitisation' of the intervention space. We focus our study on the historically grown tourism region of Salzkammergut in inner Austria. Nowadays intersecting three federal states without an explicit administrative body, this region can be considered 'latent' itself. A historic tourism area since the Austrian Empire, the region has received recognition since the early 19th century. Then being confined to an area around the
DEFF Research Database (Denmark)
Kindler, Ekkart
2009-01-01
There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these no...
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
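The matrix-algebra prerequisite the book mentions amounts, at its core, to solving the least-squares problem for y = Xb + e. A small illustrative sketch (the data are invented, not from the book):

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares for y = X*b + e. Solves the
    least-squares problem (equivalently the normal equations
    X'X b = X'y) via lstsq for numerical stability."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# Noise-free data from y = 2 + 3x recovers the coefficients exactly
x = np.arange(5.0)
X = np.column_stack([np.ones_like(x), x])  # intercept column + x
y = 2.0 + 3.0 * x
b = fit_linear_model(X, y)
```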
Insepov, Zeke; Veitzer, Seth; Mahalingam, Sudhakar
2011-01-01
Although vacuum arcs were first identified over 110 years ago, they are not yet well understood. We have since developed a model of breakdown and gradient limits that tries to explain, in a self-consistent way: arc triggering, plasma initiation, plasma evolution, surface damage and gradient limits. We use simple PIC codes for modeling plasmas, molecular dynamics for modeling surface breakdown and surface damage, and mesoscale surface thermodynamics and finite element electrostatic codes to evaluate surface properties. Since any given experiment seems to have more variables than data points, we have tried to consider a wide variety of arcing (rf structures, e-beam welding, laser ablation, etc.) to help constrain the problem, and concentrate on common mechanisms. While the mechanisms can be comparatively simple, modeling can be challenging.
National Oceanic and Atmospheric Administration, Department of Commerce — Computer simulations of past climate. Variables provided as model output are described by parameter keyword. In some cases the parameter keywords are a subset of all...
Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia
Maintaining and evolving data warehouses is a complex, error-prone, and time-consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e. existing views and functions will not have to be modified as a result of changes in the warehouse model.
Hodges, Wilfrid
1993-01-01
An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.
Accelerated life models modeling and statistical analysis
Bagdonavicius, Vilijandas
2001-01-01
Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Varia
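Of the models listed, the accelerated failure time (AFT) model is the simplest to state: a stress rescales the time axis of the baseline survival function. A sketch assuming a Weibull baseline and a log-linear time-scale function (both are common illustrative choices, not the book's only ones):

```python
import math

def weibull_survival(t, shape, scale):
    """Baseline Weibull survival function S0(t)."""
    return math.exp(-((t / scale) ** shape))

def aft_survival(t, stress, beta, shape, scale):
    """Accelerated failure time model: the stress rescales time,
    S(t | z) = S0(r(z) * t), with log-linear rate r(z) = exp(beta*z).
    Higher stress "accelerates" the clock and lowers survival."""
    r = math.exp(beta * stress)
    return weibull_survival(r * t, shape, scale)

# Survival at t = 1 under low vs. high stress (illustrative parameters)
s_low = aft_survival(1.0, stress=0.0, beta=0.5, shape=1.5, scale=1.0)
s_high = aft_survival(1.0, stress=2.0, beta=0.5, shape=1.5, scale=1.0)
```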
2(2S+1)-component model and its connection with other field theories
Dvoeglazov, V V
1994-01-01
This talk presents a review of the forgotten but attractive formalism proposed by Joos and Weinberg in the sixties for the description of high-spin particles. Problems raised in recent works [Ahluwalia {\it et al.}] are discussed. New results obtained by the author in his preceding papers ["Hadronic J.", 1993, v. 16, No. 5, pp. 423-428; No. 6, pp. 459-467; Preprints IFUNAM FT-93-19, 24, 35] are reported. In the {\it Appendix}, a bibliography of publications related to the mentioned $2(2S+1)$-component formalism is presented.
Do stroke models model stroke?
Directory of Open Access Journals (Sweden)
Philipp Mergenthaler
2012-11-01
Stroke is one of the leading causes of death worldwide and the biggest reason for long-term disability. Basic research has formed the modern understanding of stroke pathophysiology, and has revealed important molecular, cellular and systemic mechanisms. However, despite decades of research, most translational stroke trials that aim to introduce basic research findings into clinical treatment strategies – most notably in the field of neuroprotection – have failed. Among other obstacles, poor methodological and statistical standards, negative publication bias, and incomplete preclinical testing have been proposed as ‘translational roadblocks’. In this article, we introduce the models commonly used in preclinical stroke research, discuss some of the causes of failed translational success and review potential remedies. We further introduce the concept of modeling ‘care’ of stroke patients, because current preclinical research models the disorder but does not model care or state-of-the-art clinical testing. Stringent statistical methods and controlled preclinical trials have been suggested to counteract weaknesses in preclinical research. We conclude that preclinical stroke research requires (1 appropriate modeling of the disorder, (2 appropriate modeling of the care of stroke patients and (3 an approach to preclinical testing that is similar to clinical testing, including Phase 3 randomized controlled preclinical trials as necessary additional steps before new therapies enter clinical testing.
DEFF Research Database (Denmark)
2012-01-01
The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns – an evident characteristic of architectural practice. But the persistence in persistent modelling can also be understood to apply in other ways, reflecting and anticipating extended roles for representation. This book identifies three principal areas in which these extensions are becoming apparent within contemporary… It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience.
Eck, Christof; Knabner, Peter
2017-01-01
Mathematical models are the decisive tool to explain and predict phenomena in the natural and engineering sciences. With this book readers will learn to derive mathematical models which help to understand real world phenomena. At the same time a wealth of important examples for the abstract concepts treated in the curriculum of mathematics degrees are given. An essential feature of this book is that mathematical structures are used as an ordering principle and not the fields of application. Methods from linear algebra, analysis and the theory of ordinary and partial differential equations are thoroughly introduced and applied in the modeling process. Examples of applications in the fields of electrical networks, chemical reaction dynamics, population dynamics, fluid dynamics, elasticity theory and crystal growth are treated comprehensively.
Institute of Scientific and Technical Information of China (English)
Ling Li; Vasily Volkov
2006-01-01
A physically-based model is presented for the simulation of a new type of deformable objects: inflatable objects, such as shaped balloons, which consist of pressurized air enclosed by an elastic surface. These objects have properties inherent in both 3D and 2D elastic bodies, as they demonstrate the behaviour of 3D shapes using 2D formulations. As there is no internal structure in them, their behaviour is substantially different from that of deformable solid objects. We use one of the few available models for deformable surfaces, and enhance it to include the forces of internal and external pressure. These pressure forces may also incorporate buoyancy forces, to allow objects filled with a low-density gas to float in denser media. The obtained models demonstrate rich dynamic behaviour, such as bouncing, floating, deflation and inflation.
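The internal/external pressure term described above can be discretized per face of a closed triangle mesh, each face contributing a force along its outward normal. A sketch assuming an equal three-way split of each face force among its vertices (one common choice; the paper's exact formulation may differ):

```python
import numpy as np

def pressure_forces(vertices, triangles, dp):
    """Per-vertex forces from a pressure difference dp (internal
    minus external) on a closed triangle mesh. Each face contributes
    F = dp * area * outward_normal, split equally over its vertices."""
    F = np.zeros_like(vertices)
    for i, j, k in triangles:
        # cross product = 2 * area * unit outward normal
        n2 = np.cross(vertices[j] - vertices[i], vertices[k] - vertices[i])
        f = dp * 0.5 * n2 / 3.0
        F[i] += f
        F[j] += f
        F[k] += f
    return F

# Tetrahedron with outward-oriented faces as a tiny closed surface
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
F = pressure_forces(verts, tris, dp=2.0)
```

On a closed, consistently oriented surface the pressure forces sum to zero, which the tetrahedron example confirms.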
DEFF Research Database (Denmark)
Nash, Ulrik William
2014-01-01
Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory of probabilistic functionalism, and concerns the environment and the mind, and adaptation by the latter to the former. This entry is about the lens model, and probabilistic functionalism more broadly. Focus will mostly be on firms and their employees, but, to fully appreciate the scope, we have to keep in mind…
Directory of Open Access Journals (Sweden)
Aarti Sharma
2009-01-01
Computational chemistry is becoming an increasingly important tool in the development of novel pharmaceuticals. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease harmful side effects and increase potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modeling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.
Sivaram, C
2007-01-01
An alternate model for gamma ray bursts is suggested. For a white dwarf (WD) and neutron star (NS) very close binary system, the WD (close to Mch) can detonate due to tidal heating, leading to a SN. Material falling on to the NS at relativistic velocities can cause its collapse to a magnetar or quark star or black hole, leading to a GRB. Because the material smashes on to the NS, the model is dubbed the Smashnova model. Here the SN is followed by a GRB. A NS impacting a RG (or RSG) (like in Thorne-Zytkow objects) can also cause a SN outburst followed by a GRB. Other variations are explored.
Cardey, Sylviane
2013-01-01
In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which being based on a constructive approach are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int
Building Models and Building Modelling
DEFF Research Database (Denmark)
Jørgensen, Kaj Asbjørn; Skauge, Jørn
The introductory chapter of the report describes the primary concepts concerning building models, and some fundamental conditions concerning computer-based modelling are set out. In addition, the difference between drawing programs and building modelling programs is described. Important aspects of … modelling and building models. It is emphasized that modelling should be carried out at several levels of abstraction and in two dimensions in the so-called modelling matrix. From this, the primary phases of building modelling are identified. Next, the basic characteristics of building models are described. This includes a clarification of the concepts of object-oriented software and object-oriented models. It is emphasized that the concept of object-based modelling provides a sufficient and better understanding. Finally, the notion of the ideal building model is described as being one unified model that is used throughout…
DEFF Research Database (Denmark)
Jensen, Morten S.; Frigaard, Peter
In the following, results from model tests with the Zeebrugge breakwater are presented. The objective of these tests is partly to investigate the influence on wave run-up due to a changing water level during a storm. Finally, the influence on wave run-up due to an introduced longshore current…
Directory of Open Access Journals (Sweden)
Olaf Wolkenhauer
2014-01-01
Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics is going to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of genes and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components, compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question "Why model?"
Burianová, Eva
2008-01-01
The aim of the first part of this bachelor's thesis is, by means of an analysis of the source texts, a theoretical summary of the economic models and theories on which the CAPM model is built: Markowitz's portfolio theory model (the analysis of expected utility maximization and the optimal portfolio selection model based on it), and Tobin's extension of Markowitz's model (splitting the selection of the optimal portfolio into two phases: first determining the optimal combination of risky instruments, and then allocating the available capital between this optimal …
Institute of Scientific and Technical Information of China (English)
R.E. Waltz
2007-01-01
There has been remarkable progress during the past decade in understanding and modeling turbulent transport in tokamaks. With some exceptions the progress is derived from the huge increases in computational power and the ability to simulate tokamak turbulence with ever more fundamental and physically realistic dynamical equations, e.g.
Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.
2015-12-01
The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models, and show how the transformation of a numerical model into an interactive painting can both provide insight and solve real-world problems. The cases used as examples include forensic reconstructions, dredging optimization and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, namely the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear) and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different time and spatial scales. High-resolution numerical models become interactive paintings by exchanging their velocity fields with a high-resolution (>= 1M cells) image-based flow visualization that runs in an html5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz performance on mediocre graphics cards. The software is provided as open source software. By using different sources for the drawing one can gain insight into several aspects of the velocity fields, including not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.
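The WebGL scheme combines the current image, the drawing and the uv + mask field in a fragment shader. A NumPy sketch of the core backward (semi-Lagrangian) advection step, using nearest-neighbour sampling where the shader would use filtered texture lookups (an illustrative simplification, not the project's actual code):

```python
import numpy as np

def advect(image, u, v, dt):
    """One semi-Lagrangian step of image-based flow visualization:
    each pixel samples the image at the point the flow carried it
    from (backward trace; nearest-neighbour sampling for brevity)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = np.clip(np.rint(xs - dt * u), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys - dt * v), 0, h - 1).astype(int)
    return image[src_y, src_x]

# A uniform rightward flow shifts a single bright pixel one column
img = np.zeros((5, 5))
img[2, 1] = 1.0
out = advect(img, u=np.ones((5, 5)), v=np.zeros((5, 5)), dt=1.0)
```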
Goodwyn, Lauren; Salm, Sarah
2007-01-01
Teaching the anatomy of the muscle system to high school students can be challenging. Students often learn about muscle anatomy by memorizing information from textbooks or by observing plastic, inflexible models. Although these media help students learn about muscle placement, they do not facilitate understanding regarding integration of…
Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.
This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…
Tijskens, L.M.M.
2003-01-01
For modelling product behaviour with respect to quality for users and consumers, it is essential to have at least a fundamental notion of what quality really is, and which product properties determine the quality assigned by the consumer to a product. In other words: what is allowed and what is to be
Energy Technology Data Exchange (ETDEWEB)
A. Alsaed
2004-09-14
The "Disposal Criticality Analysis Methodology Topical Report" (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, "Models", in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, "Design Calculations and Analyses". The "Criticality Model" is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of
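The LBTL step reduces, in its simplest statistical form, to a one-sided tolerance bound over calculated benchmark k_eff values. A sketch assuming a normal one-sided tolerance factor supplied by the caller (the factor and the sample values are illustrative, not from the report, whose procedure also ties the limit to range-of-applicability parameters):

```python
import statistics

def lbtl(keff_values, tolerance_factor):
    """Lower-bound tolerance limit over calculated benchmark k_eff
    values: sample mean minus a one-sided tolerance factor times the
    sample standard deviation. The factor (e.g. a 95%/95% value for
    the given sample size) must be supplied by the caller."""
    m = statistics.mean(keff_values)
    s = statistics.stdev(keff_values)
    return m - tolerance_factor * s

# Illustrative benchmark results and tolerance factor (hypothetical)
keffs = [0.995, 1.001, 0.998, 1.003, 0.999]
limit = lbtl(keffs, tolerance_factor=3.4)
```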
Information Model for Product Modeling
Institute of Scientific and Technical Information of China (English)
焦国方; 刘慎权
1992-01-01
The key problems in product modeling for integrated CAD/CAM systems are the information structures and representations of products, which are taking more and more important roles in engineering applications. Based on an investigation of engineering product information, and from the viewpoint of the industrial process, this paper proposes information models and gives definitions of the framework of product information. The integration and consistency of product information are then discussed by introducing the entity and its instance. In summary, the information structures described in this paper have many advantages and properties helpful in engineering design.
Building Models and Building Modelling
DEFF Research Database (Denmark)
Jørgensen, Kaj; Skauge, Jørn
2008-01-01
The introductory chapter of the report describes the primary concepts concerning building models, and some fundamental conditions concerning computer-based modelling are set out. In addition, the difference between drawing programs and building modelling programs is described. Important aspects of comp…
DEFF Research Database (Denmark)
Arnoldi, Jakob
The article discusses the use of algorithmic models for so-called High Frequency Trading (HFT) in finance. HFT is controversial yet widespread in modern financial markets. It is a form of automated trading technology which critics among other things claim can lead to market manipulation. Drawing on two cases, this article shows that manipulation more likely happens in the reverse way, meaning that human traders attempt to make algorithms ‘make mistakes’ or ‘mislead’ algos. Thus, it is algorithmic models, not humans, that are manipulated. Such manipulation poses challenges for security exchanges. The article analyses these challenges and argues that we witness a new post-social form of human-technology interaction that will lead to a reconfiguration of professional codes for financial trading.
Barr, Michael
2002-01-01
Acyclic models is a method heavily used to analyze and compare various homology and cohomology theories appearing in topology and algebra. This book is the first attempt to put together in a concise form this important technique and to include all the necessary background. It presents a brief introduction to category theory and homological algebra. The author then gives the background of the theory of differential modules and chain complexes over an abelian category to state the main acyclic models theorem, generalizing and systemizing the earlier material. This is then applied to various cohomology theories in algebra and topology. The volume could be used as a text for a course that combines homological algebra and algebraic topology. Required background includes a standard course in abstract algebra and some knowledge of topology. The volume contains many exercises. It is also suitable as a reference work for researchers.
Fossión, Rubén
2010-09-01
The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). Because of the above reasons, specific nuclear many-body models have been devised, each of which sheds light on some selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.
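The statistical analysis of spectra mentioned last can be illustrated with the spacing-ratio statistic, which distinguishes chaotic (Wigner/GOE) from regular (Poisson) spectra without spectral unfolding. A sketch using a random GOE matrix as the chaotic benchmark (an illustrative stand-in for actual nuclear levels):

```python
import numpy as np

def spacing_ratios(levels):
    """Ratios r_n = s_(n+1)/s_n of consecutive level spacings.
    Their distribution separates chaotic (GOE) from regular
    (Poisson) spectra and needs no unfolding of the spectrum."""
    s = np.diff(np.sort(levels))
    return s[1:] / s[:-1]

# Eigenvalues of a random GOE-type matrix serve as the chaotic benchmark
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2.0)
r = spacing_ratios(goe_levels)
# literature values: <min(r, 1/r)> is about 0.53 for GOE, 0.39 for Poisson
r_tilde = np.minimum(r, 1.0 / r).mean()
```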
DEFF Research Database (Denmark)
2015-01-01
This book reflects and expands on the current trend in the building industry to understand, simulate and ultimately design buildings by taking into consideration the interlinked elements and forces that act on them. This approach overcomes the traditional, exclusive focus on building tasks, while… The chapter authors were invited speakers at the 5th Symposium "Modelling Behaviour", which took place at the CITA in Copenhagen in September 2015.
DEFF Research Database (Denmark)
Michael, John
others' minds. Then (2), in order to bring to light some possible justifications, as well as hazards and criticisms of the methodology of looking time tests, I will take a closer look at the concept of folk psychology and will focus on the idea that folk psychology involves using oneself as a model of other people in order to predict and understand their behavior. Finally (3), I will discuss the historical location and significance of the emergence of looking time tests.
Energy Technology Data Exchange (ETDEWEB)
Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
Model Construct Based Enterprise Model Architecture and Its Modeling Approach
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
In order to support enterprise integration, a model construct based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model construct based enterprise model architecture and modeling approach are practical and efficient.
Cold and warm quintessential/tachyonic inflationary models in light of the Planck 2015 results
Rezazadeh, K; Hashemi, S; Karimi, P
2015-01-01
Within the framework of cold and warm quintessential/tachyonic inflationary scenarios, we consider different inflationary potentials and check their viability in light of the Planck 2015 results. In cold quintessential inflation, the exponential and inverse power-law potentials, which give rise to power-law and intermediate inflation, respectively, are not favored by the Planck 2015 results, but the power-law potential can be in agreement with the Planck 2015 data at 95\\% CL. Also, the predictions of the Higgs-like and Coleman-Weinberg potentials and $\\mathcal{R}^2$ inflation can lie inside the 68\\% CL region of the Planck 2015 data. In the warm quintessential inflationary scenario, the power-law potential with a constant dissipative parameter $\\Gamma$ and the inverse power-law and exponential potentials with constant/varying $\\Gamma$ do not lead to acceptable results, but the power-law potential with varying $\\Gamma$, the Higgs-like and Coleman-Weinberg potentials and $\\mathcal{R}^2$ inflation wit...
Directory of Open Access Journals (Sweden)
PAPAJ Jan
2014-05-01
Full Text Available Traditional wireless networks use the concept of point-to-point forwarding inherited from reliable wired networks, which is not ideal for the wireless environment. New emerging applications and networks operate mostly disconnected. So-called Delay-Tolerant Networks (DTNs) are receiving increasing attention from both academia and industry. DTNs introduced a store-carry-and-forward concept that solves the problem of intermittent connectivity. The behavior of such networks is verified by real models, by computer simulation, or by a combination of both approaches. Computer simulation has become the primary and cost-effective tool for evaluating the performance of DTNs. OPNET Modeler is our target simulation tool, and we wanted to extend OPNET's simulation capabilities towards DTNs. We implemented the bundle protocol in OPNET Modeler, allowing the simulation of cases based on the bundle concept, such as epidemic forwarding, which relies on flooding the network with messages, and a forwarding algorithm based on the history of past encounters (PRoPHET). The implementation details are provided in the article.
Institute of Scientific and Technical Information of China (English)
Liu Zhiyang
2011-01-01
Similar to ISO Technical Committees, SAC Technical Committees undertake the management and coordination of standards development and amendment in various industry sectors, playing the role of a bridge among enterprises, research institutions and the governmental standardization administration. How to fully play this essential role is the vital issue SAC has been committed to resolving. Among hundreds of SAC TCs, one stands out in knitting together those isolated, scattered, but highly competitive enterprises in the same industry with the "Standards" thread, and in achieving remarkable results in promoting industry development with standardization. It sets a role model for other TCs.
DEFF Research Database (Denmark)
2015-01-01
This book reflects and expands on the current trend in the building industry to understand, simulate and ultimately design buildings by taking into consideration the interlinked elements and forces that act on them. This approach overcomes the traditional, exclusive focus on building tasks, while posing new challenges in all areas of the industry from material and structural to the urban scale. Contributions from invited experts, papers and case studies provide the reader with a comprehensive overview of the field, as well as perspectives from related disciplines, such as computer science... The chapter authors were invited speakers at the 5th Symposium "Modelling Behaviour", which took place at the CITA in Copenhagen in September 2015.
Directory of Open Access Journals (Sweden)
M. Alguacil Marí
2017-08-01
Full Text Available The current economic environment, together with the low scores obtained by our students in recent years, makes it necessary to incorporate new teaching methods. In this sense, econometric modelling provides a unique opportunity, offering the student the basic tools to address the study of Econometrics in a deeper and novel way. In this article, this teaching method is described, and an example is presented based on a recent study carried out by two students of the Degree in Economics. Likewise, the success of this method is evaluated quantitatively in terms of academic performance. The results confirm our initial idea that the greater involvement of the student, as well as the need for a more complete knowledge of the subject, provides a stimulus for its study. As evidence of this, we show how those students who opted for the method we propose here obtained higher marks than those who chose the traditional method.
DEFF Research Database (Denmark)
Bork Petersen, Franziska
2013-01-01
For the presentation of his autumn/winter 2012 collection in Paris and subsequently in Copenhagen, Danish designer Henrik Vibskov installed a mobile catwalk. The article investigates the choreographic impact of this scenography on those who move through it. Drawing on Dance Studies, the analytical... advantageous manner. Stepping on the catwalk’s sloping, moving surfaces decelerates the models’ walk and makes it cautious, hesitant and shaky: suddenly the models lack exactly the affirmative, staccato, striving quality of motion, and the condescending expression that they perform on most contemporary catwalks. Vibskov’s catwalk induces what the dance scholar Gabriele Brandstetter has labelled a ‘defigurative choreography’: a straying from definitions, which exist in ballet as in other movement-based genres, of how a figure should move and appear (1998). The catwalk scenography in this instance...
On Activity modelling in process modeling
Directory of Open Access Journals (Sweden)
Dorel Aiordachioaie
2001-12-01
Full Text Available The paper looks at a dynamic feature of the meta-models of the process modelling process: time. Some principles are considered and discussed as main dimensions of any modelling activity: the compatibility of the substances, the equipresence of phenomena, and the solvability of the model. The activity models are considered and represented at the meta-level.
Towards a Multi Business Model Innovation Model
DEFF Research Database (Denmark)
Lindgren, Peter; Jørgensen, Rasmus
2012-01-01
This paper studies the evolution of business model (BM) innovations related to a multi business model framework. The paper tries to answer the research questions: • What are the requirements for a multi business model innovation model (BMIM)? • What should a multi business model innovation model look like? Different generations of BMIMs are initially studied in the context of laying the baseline for what the next generation multi BM innovation model (BMIM) should look like. All generations of models are analyzed with the purpose of comparing the characteristics and challenges of previous...
Better Language Models with Model Merging
Brants, T
1996-01-01
This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.
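The "standard bigram approach" that model merging is compared against can be made concrete with a toy sketch. The corpus, the add-one smoothing choice, and all names below are illustrative assumptions, not taken from the paper:

```python
import math
from collections import Counter

# Toy corpus; a real experiment would use a large text or speech corpus.
train = "the cat sat on the mat the cat ate".split()
test = "the cat sat".split()

# Bigram model with add-one (Laplace) smoothing over the training vocabulary.
vocab = set(train)
bigrams = Counter(zip(train, train[1:]))
contexts = Counter(train[:-1])

def prob(prev, word):
    # Smoothed conditional probability P(word | prev).
    return (bigrams[(prev, word)] + 1) / (contexts[prev] + len(vocab))

# Perplexity of the test sequence: exp of the average negative log-probability.
logp = sum(math.log(prob(p, w)) for p, w in zip(test, test[1:]))
ppl = math.exp(-logp / (len(test) - 1))
print(round(ppl, 3))
```

A lower perplexity on held-out text indicates a better model; per the abstract, model merging instead starts from a large, specific state space and collapses states while this quantity is evaluated against the bigram baseline.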
Neutrino oscillations: from an historical perspective to the present status
Bilenky, S.
2016-05-01
The history of neutrino mixing and oscillations is briefly presented. Basics of neutrino mixing and oscillations and a convenient formalism of neutrino oscillations in vacuum are given. The role of the neutrino in the Standard Model and the Weinberg mechanism of the generation of the Majorana neutrino masses are discussed.
Neutrino oscillations: from an historical perspective to the present status
Bilenky, S
2016-01-01
The history of neutrino mixing and oscillations is briefly presented. Basics of neutrino mixing and oscillations and a convenient formalism of neutrino oscillations in vacuum are given. The role of the neutrino in the Standard Model and the Weinberg mechanism of the generation of the Majorana neutrino masses are discussed.
Neutrino oscillations: From a historical perspective to the present status
Bilenky, S.
2016-07-01
The history of neutrino mixing and oscillations is briefly presented. Basics of neutrino mixing and oscillations and a convenient formalism of neutrino oscillations in vacuum are given. The role of the neutrino in the Standard Model and the Weinberg mechanism of the generation of the Majorana neutrino masses are discussed.
Galtsov, D V
2001-01-01
Recent progress in the study of solitons and black holes in non-Abelian field theories coupled to gravity is reviewed. New topics include gravitational binding of monopoles, black holes with non-trivial topology, Lue-Weinberg bifurcation, asymptotically AdS lumps, solutions to the Freedman-Schwarz model with applications to holography, non-Abelian Born-Infeld solutions
Radiative screening of fifth forces
Burrage, Clare; Millington, Peter
2016-01-01
We describe a symmetron model in which the screening of fifth forces arises at the one-loop level through the Coleman-Weinberg mechanism of spontaneous symmetry breaking. We show that such a theory can avoid current constraints on the existence of fifth forces, but still has the potential to give rise to observable deviations from general relativity.
Neutrino oscillations: From a historical perspective to the present status
Energy Technology Data Exchange (ETDEWEB)
Bilenky, S., E-mail: bilenky@gmail.com [Joint Institute for Nuclear Research, Dubna, R-141980 (Russian Federation); TRIUMF 4004 Wesbrook Mall, Vancouver BC, V6T 2A3 Canada (Canada)
2016-07-15
The history of neutrino mixing and oscillations is briefly presented. Basics of neutrino mixing and oscillations and a convenient formalism of neutrino oscillations in vacuum are given. The role of the neutrino in the Standard Model and the Weinberg mechanism of the generation of the Majorana neutrino masses are discussed.
Z^0-boson contribution to anomalous electron moments in a plane-wave electromagnetic field
Klimenko, E Y
2002-01-01
The Z^0-boson contribution to the mass of an electron moving in a plane-wave field is considered. The dependence of the Z^0-boson contribution to the electron's anomalous magnetic moment and anomalous electric moment on the external field parameters is studied within the framework of the Weinberg-Salam-Glashow standard model
Turner, Kenneth; Tevaarwerk, Emma; Unterman, Nathan; Grdinic, Marcel; Campbell, Jason; Chandrasekhar, Venkat; Chang, R. P. H.
2006-01-01
Nanoscience refers to the fundamental study of scientific phenomena that occur at the nanoscale, nanotechnology to the exploitation of novel properties and functions of materials in the sub-100 nm size range. One of the underlying principles of science is the development of models of observed phenomena. In biology, the Hardy-Weinberg principle is a…
Neutral currents and the Higgs mechanism
Veltman, M.J.G.; Ross, D.A.
1975-01-01
The consequences of assuming (i) weak and e.m. forces constitute a gauge field theory, and (ii) there are no heavy leptons, are investigated. Relative to the Weinberg model, introduction of a general spontaneous symmetry breaking system leads to a theory with one extra free parameter, namely the
The charge radius and anapole moment of a free fermion
Energy Technology Data Exchange (ETDEWEB)
Gongora-T, A.; Stuart, R.G. (European Organization for Nuclear Research, Geneva (Switzerland). Theory Div.)
1992-07-01
We derive an expression for the charge radius and anapole moment of a free fermion induced at one loop in the standard Glashow-Salam-Weinberg model of electroweak interactions. The result, despite earlier claims to the contrary, is demonstrably gauge-invariant and observable in principle. (orig.).
2012-12-01
AWK Aho Weinberger Kernighan (scripting language named after its authors) BCT Brigade Combat Team BDA battle damage assessment C2V command and control...to repair MUVES Modular Unix-Based Vulnerability Estimation Suite (Survivability/Lethality Analysis Directorate's vulnerability/lethality model
Model Selection Principles in Misspecified Models
Lv, Jinchi
2010-01-01
Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...
The IMACLIM model; Le modele IMACLIM
Energy Technology Data Exchange (ETDEWEB)
NONE
2003-07-01
This document provides annexes to the IMACLIM model, which propose an updated description of IMACLIM, a model allowing the design of a tool for evaluating greenhouse-gas reduction policies. The model is described in a version coupled with POLES, a techno-economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)
Building Mental Models by Dissecting Physical Models
Srivastava, Anveshna
2016-01-01
When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to…
Directory of Open Access Journals (Sweden)
José Luis Rivarola
1988-06-01
Full Text Available The linguistic evolution of Buenos Aires Spanish has been the subject of several earlier works by the author, who in this book gathers the results of her research into a panorama spanning four centuries. As far as I am aware, it is the first comprehensive study covering the entire historical development of a regional American Spanish; this alone signals its novelty and importance, since it is well known that the history of American Spanish has not been the object of systematic, comprehensive studies of broad chronological scope. Moreover, it rests on the examination of a large documentary corpus, selected and evaluated with close attention to the reliability of the transcriptions and their sociolinguistic representativeness; this corpus is complemented with secondary sources, especially rich from the 19th century onwards (prescriptive treatises, costumbrista literature).
1986-02-05
replacement of the USS CORAL SEA, which will become the Navy's training carrier, replacing the USS LEXINGTON. Looking ahead, the Navy will have to order... [force-level table residue: Navy and Marine Corps tactical support and active tanker/cargo ship counts; figures unrecoverable]
1987-01-01
the praise it won from the great English statesman, William Gladstone, as the "most wonderful work ever struck off at a given time by the brain and...growth to the already large stock of Soviet military assets (see Chart I.A.4). In 1984, for the first time since 1969, U.S. military procurement appears...stability at lower force levels. Beginning in 1969, the United States attempted to constrain the growth of the Soviet strategic threat through the Strategic
Modelling live forensic acquisition
CSIR Research Space (South Africa)
Grobler, MM
2009-06-01
Full Text Available This paper discusses the development of a South African model for Live Forensic Acquisition: Liforac. Liforac is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...
Continuous Time Model Estimation
Carl Chiarella; Shenhuai Gao
2004-01-01
This paper introduces an easy-to-follow method for continuous time model estimation. It serves as an introduction to how to convert a state space model from continuous time to discrete time, how to decompose a hybrid stochastic model into a trend model plus a noise model, how to estimate the trend model by simulation, and how to calculate standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...
Comparative Protein Structure Modeling Using MODELLER.
Webb, Benjamin; Sali, Andrej
2016-06-20
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.
Concept Modeling vs. Data modeling in Practice
DEFF Research Database (Denmark)
Madsen, Bodil Nistrup; Erdman Thomsen, Hanne
2015-01-01
This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models...
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum-entropy solution of the model is exactly the frequency distribution of a population at Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum entropy distribution is equivalent to the distribution given by the Hardy-Weinberg equilibrium law for one locus. They further assumed that the maximum entropy distribution is equivalent to all genetic equilibrium distributions. This, however, is incorrect: the maximum entropy distribution is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or a limited number of loci. The case of a limited number of loci is proved in this paper. Finally, we also discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
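The one-locus equivalence can be checked numerically. Treating a genotype as an ordered pair of parental alleles with both marginals fixed at the allele frequency p, the entropy-maximizing joint distribution is the independent pairing, which collapses to the Hardy-Weinberg distribution (p^2, 2pq, q^2). A minimal sketch under that formulation; the parameter value and grid scan are illustrative, not from the paper:

```python
import math

def entropy(dist):
    # Shannon entropy (natural log) of a probability distribution.
    return -sum(x * math.log(x) for x in dist if x > 0)

p = 0.3          # frequency of allele A (illustrative); q = 1 - p
q = 1 - p

# A joint distribution of (maternal, paternal) alleles with both marginals
# fixed at p is determined by one free parameter t = P(A, A):
#   P(A,a) = P(a,A) = p - t,  P(a,a) = 1 - 2p + t.
def joint(t):
    return (t, p - t, p - t, 1 - 2 * p + t)

# Scan the feasible range of t and locate the entropy maximizer.
lo, hi = max(0.0, 2 * p - 1), p
best_t = max((lo + (hi - lo) * i / 10000 for i in range(10001)),
             key=lambda t: entropy(joint(t)))

# The maximizer is t = p^2, i.e. independent pairing of the two alleles,
# which is exactly the Hardy-Weinberg distribution (p^2, 2pq, q^2).
print(abs(best_t - p * p) < 1e-3)
```

The same check fails if entropy is maximized over unordered genotype frequencies without the pairing structure, which is the kind of distinction the paper's counterexample turns on.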
Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher
2014-01-01
The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...
Chao, Dennis L; Longini, Ira M; Morris, J Glenn
2014-01-01
Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating models and the need to tailor models for different outbreak scenarios.
Longini, Ira M.; Morris, J. Glenn
2014-01-01
Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating models and the need to tailor models for different outbreak scenarios. PMID:23412687
Model Manipulation for End-User Modelers
DEFF Research Database (Denmark)
Acretoaie, Vlad
End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support of these proposals. To achieve its first goal, the thesis presents the findings of a Systematic Mapping Study showing that human factors topics are scarcely and relatively poorly addressed in model transformation research. Motivated by these findings, the thesis explores the requirements of end-user modelers...
Air Quality Dispersion Modeling - Alternative Models
Models not listed in Appendix W that can be used in regulatory applications with case-by-case justification to the Reviewing Authority, as noted in Section 3.2, Use of Alternative Models, in Appendix W.
From Product Models to Product State Models
DEFF Research Database (Denmark)
Larsen, Michael Holm
1999-01-01
A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM, and in particular how to model a PSM, is the research object for this project. In the presentation, benefits and challenges of the PSM will be presented as a basis for the discussion.
Measurement and Modeling: Infectious Disease Modeling
Kretzschmar, MEE
2016-01-01
After some historical remarks about the development of mathematical theory for infectious disease dynamics we introduce a basic mathematical model for the spread of an infection with immunity. The concepts of the model are explained and the model equations are derived from first principles. Using th
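A standard "basic mathematical model for the spread of an infection with immunity" is the SIR model; the sketch below is an illustration of that class of model, not necessarily the exact model of the chapter, and its parameters, step size, and horizon are assumptions chosen for demonstration:

```python
# Minimal SIR epidemic model (susceptible -> infectious -> recovered/immune),
# integrated with a simple explicit Euler scheme.
def sir(beta, gamma, s0, i0, dt=0.01, days=200):
    s, i, r = s0, i0, 1.0 - s0 - i0   # fractions of the population
    for _ in range(int(round(days / dt))):
        new_inf = beta * s * i * dt    # transmission: S -> I
        new_rec = gamma * i * dt       # recovery with immunity: I -> R
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Illustrative parameters: basic reproduction number R0 = beta/gamma = 3.
s, i, r = sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001)
print(f"final S={s:.3f}, I={i:.3g}, R={r:.3f}")
```

With R0 above one the epidemic runs its course and dies out, leaving a large immune fraction, which is the qualitative behavior such first-principles models are built to capture.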
DEFF Research Database (Denmark)
Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik
1997-01-01
This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application of the laws of physics to the system. The unknown (or uncertain) parameters are estimated with Maximum Likelihood (ML) parameter estimation. The identified model has been evaluated by comparing the measurements with simulations of the model. The identified model was much more capable of describing the dynamics of the system than the deterministic model.
DEFF Research Database (Denmark)
Cameron, Ian T.; Gani, Rafiqul
This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models. These ...
Willden, Jeff
2001-01-01
"Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…
Haiganoush Preisler; Alan Ager
2013-01-01
For applied mathematicians, forest fire models refer mainly to non-linear dynamic systems often used to simulate the spread of fire. For forest managers, forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...
Solicited abstract: Global hydrological modeling and models
Xu, Chong-Yu
2010-05-01
The origins of rainfall-runoff modeling in the broad sense can be found in the middle of the 19th century, arising in response to three types of engineering problems: (1) urban sewer design, (2) land reclamation drainage systems design, and (3) reservoir spillway design. Since then numerous empirical, conceptual and physically-based models have been developed, including event-based models using the unit hydrograph concept, Nash's linear reservoir models, the HBV model, TOPMODEL, the SHE model, etc. From the late 1980s, the evolution of global and continental-scale hydrology has placed new demands on hydrologic modellers. Macro-scale (global and regional scale) hydrological models were developed on the basis of the following motivations (Arnell, 1999). First, for a variety of operational and planning purposes, water resource managers responsible for large regions need to estimate the spatial variability of resources over large areas, at a spatial resolution finer than can be provided by observed data alone. Second, hydrologists and water managers are interested in the effects of land-use and climate variability and change over a large geographic domain. Third, there is an increasing need to use hydrologic models as a basis for estimating point and non-point sources of pollution loading to streams. Fourth, hydrologists and atmospheric modellers have perceived weaknesses in the representation of hydrological processes in regional and global climate models, and developed global hydrological models to overcome these weaknesses. Considerable progress in the development and application of global hydrological models has been achieved to date; however, large uncertainties still remain in model structure (including large-scale flow routing), parameterization, input data, etc. This presentation will focus on global hydrological models, and the discussion includes (1) types of global hydrological models, (2) the procedure of global hydrological model development
Bayesian Model Selection and Statistical Modeling
Ando, Tomohiro
2010-01-01
Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal lik
From Numeric Models to Granular System Modeling
Directory of Open Access Journals (Sweden)
Witold Pedrycz
2015-03-01
To make this study self-contained, we briefly recall the key concepts of granular computing and demonstrate how this conceptual framework and its algorithmic fundamentals give rise to granular models. We discuss several representative formal setups used in describing and processing information granules, including fuzzy sets, rough sets, and interval calculus. Key model architectures dwell upon relationships among information granules. We demonstrate how information granularity and its optimization can be regarded as an important design asset to be exploited in system modeling, giving rise to granular models. In this regard, an important category of rule-based models, along with their granular enrichments, is studied in detail.
Geologic Framework Model Analysis Model Report
Energy Technology Data Exchange (ETDEWEB)
R. Clayton
2000-12-19
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the
Mangani, P
2011-01-01
This title includes: Lectures - G.E. Sacks - Model theory and applications, and H.J. Keisler - Constructions in model theory; and, Seminars - M. Servi - SH formulas and generalized exponential, and J.A. Makowski - Topological model theory.
Earth Data Analysis Center, University of New Mexico — The model combines three modeled fire behavior parameters (rate of spread, flame length, crown fire potential) and one modeled ecological health measure (fire regime...
CSIR Research Space (South Africa)
Osburn, L
2010-01-01
Full Text Available The construction industry has turned to energy modelling in order to assist them in reducing the amount of energy consumed by buildings. However, while the energy loads of buildings can be accurately modelled, energy models often under...
Computational neurogenetic modeling
Benuskova, Lubica
2010-01-01
Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol
Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy
2008-01-01
Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...
National Aeronautics and Space Administration — CLAIRE MONTELEONI*, GAVIN SCHMIDT, AND SHAILESH SAROHA* Climate models are complex mathematical models designed by meteorologists, geophysicists, and climate...
Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...
Finch, W Holmes; Kelley, Ken
2014-01-01
A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
DEFF Research Database (Denmark)
Rask, Morten
insight from the literature about business models, international product policy, international entry modes and globalization into a conceptual model of relevant design elements of global business models, enabling global business model innovation to deal with differences in a downstream perspective...... regarding the customer interface and in an upstream perspective regarding the supply infrastructure. The paper offers a coherent conceptual dynamic meta-model of global business model innovation. Students, scholars and managers within the field of international business can use this conceptualization...... to understand, to study, and to create global business model innovation. Managerial and research implications draw on the developed ideal type of global business model innovation....
Cellier, Francois E.
1991-01-01
A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.
DEFF Research Database (Denmark)
Andresen, Mette
2007-01-01
This paper meets the common critique of the teaching of non-authentic modelling in school mathematics. In the paper, non-authentic modelling is related to a change of view on the intentions of modelling from knowledge about applications of mathematical models to modelling for concept formation. Non......-authentic modelling is also linked with the potentials of exploration of ready-made models as a forerunner for more authentic modelling processes. The discussion includes analysis of an episode of students? work in the classroom, which serves to illustrate how concept formation may be linked to explorations of a non...
Interfacing materials models with fire field models
Energy Technology Data Exchange (ETDEWEB)
Nicolette, V.F.; Tieszen, S.R.; Moya, J.L.
1995-12-01
For flame spread over solid materials, there has traditionally been a large technology gap between fundamental combustion research and the somewhat simplistic approaches used for practical, real-world applications. Recent advances in computational hardware and computational fluid dynamics (CFD)-based software have led to the development of fire field models. These models, when used in conjunction with material burning models, have the potential to bridge the gap between research and application by implementing physics-based engineering models in a transient, multi-dimensional tool. This paper discusses the coupling that is necessary between fire field models and burning material models for the simulation of solid material fires. Fire field models are capable of providing detailed information about the local fire environment. This information serves as an input to the solid material combustion submodel, which subsequently calculates the impact of the fire environment on the material. The response of the solid material (in terms of thermal response, decomposition, charring, and off-gassing) is then fed back into the field model as a source of mass, momentum and energy. The critical parameters which must be passed between the field model and the material burning model have been identified. Many computational issues must be addressed when developing such an interface. Some examples include the ability to track multiple fuels and species, local ignition criteria, and the need to use local grid refinement over the burning material of interest.
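The coupling loop described above can be caricatured in zero dimensions. Everything here — the function names, the linear flux closure, and the pyrolysis constant — is a hypothetical stand-in for the real field and material submodels; the point is only the exchange of heat flux one way and fuel mass the other:

```python
def field_heat_flux(fuel_release_rate):
    # toy closure: more off-gassed fuel -> hotter local fire environment
    return 10.0 + 50.0 * fuel_release_rate          # kW/m^2 (invented)

def material_response(heat_flux, dt):
    # toy pyrolysis law: mass released proportional to incident flux
    return 0.001 * heat_flux * dt                   # kg/m^2 per step

dt = 0.1
fuel_rate = 0.0                                     # kg/m^2/s
total_mass_lost = 0.0
for _ in range(100):                                # 10 s of coupling
    q = field_heat_flux(fuel_rate)                  # field -> material
    dm = material_response(q, dt)                   # material -> field
    fuel_rate = dm / dt
    total_mass_lost += dm
# the exchange settles to a steady burning rate of 0.01/0.95 kg/m^2/s
```

In a real interface the same two hand-offs carry local temperatures, species and velocities one way, and mass, momentum and energy sources the other, per grid cell and time step.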
Combustion modeling in a model combustor
Institute of Scientific and Technical Information of China (English)
L.Y.Jiang; I.Campbell; K.Su
2007-01-01
The flow-field of a propane-air diffusion flame combustor with interior and exterior conjugate heat transfers was numerically studied. Results obtained from four combustion models, combined with the re-normalization group (RNG) k-ε turbulence model, discrete ordinates radiation model and enhanced wall treatment, are presented and discussed. The results are compared with a comprehensive database obtained from a series of experimental measurements. The flow patterns and the recirculation zone length in the combustion chamber are accurately predicted, and the mean axial velocities are in fairly good agreement with the experimental data, particularly at downstream sections for all four combustion models. The mean temperature profiles are captured fairly well by the eddy dissipation (EDS), probability density function (PDF), and laminar flamelet combustion models. However, the EDS-finite-rate combustion model fails to provide an acceptable temperature field. In general, the flamelet model illustrates little superiority over the PDF model, and to some extent the PDF model shows better performance than the EDS model.
Preon Trinity - A Schematic Model of Leptons, Quarks and Heavy Vector Bosons
Dugne, J J; Hansson, J; Dugne, Jean-Jacques; Fredriksson, Sverker; Hansson, Johan
2002-01-01
Quarks, leptons and heavy vector bosons are suggested to be composed of stable spin-1/2 preons, existing in three flavours, combined according to simple rules. Straightforward consequences of an SU(3) preon-flavour symmetry are the conservation of three lepton numbers, oscillations and decays between some neutrinos, and the mixing of the d and s quarks, as well as of the vector fields W^0 and B^0. We find a relation between the Cabibbo and Weinberg mixing angles, and predict new (heavy) leptons, quarks and vector bosons, some of which might be observable at the Fermilab Tevatron and the future CERN LHC. A heavy neutrino might even be visible in existing data from the CERN LEP facility.
Preon trinity - a schematic model of leptons, quarks and heavy vector bosons
Energy Technology Data Exchange (ETDEWEB)
Dugne, J.J. [Universite Blaise Pascal, Clermont-Ferrand II, (CNRS), Lab. de Physique Corpusculaire, 63 - Aubiere (France); Fredriksson, S.; Hansson, J. [Lulea University of Technology, Dept. of Physics, Lulea (Sweden)
2002-10-01
Quarks, leptons and heavy vector bosons are suggested to be composed of stable spin-1/2 preons, existing in three flavours, combined according to simple rules. Straightforward consequences of an SU(3) preon-flavour symmetry are the conservation of three lepton numbers, oscillations and decays between some neutrinos, and the mixing of the d and s quarks, as well as of the vector fields W^0 and B^0. We find a relation between the Cabibbo and Weinberg mixing angles, and predict new (heavy) leptons, quarks and vector bosons, some of which might be observable at the Fermilab Tevatron and the future CERN LHC. A heavy neutrino might even be visible in existing data from the CERN LEP facility. (authors)
Preon trinity—A schematic model of leptons, quarks and heavy vector bosons
Dugne, J.-J.; Fredriksson, S.; Hansson, J.
2002-10-01
Quarks, leptons and heavy vector bosons are suggested to be composed of stable spin-(1/2) preons, existing in three flavours, combined according to simple rules. Straightforward consequences of an SU(3) preon-flavour symmetry are the conservation of three lepton numbers, oscillations and decays between some neutrinos, and the mixing of the d and s quarks, as well as of the vector fields W0 and B0. We find a relation between the Cabibbo and Weinberg mixing angles, and predict new (heavy) leptons, quarks and vector bosons, some of which might be observable at the Fermilab Tevatron and the future CERN LHC. A heavy neutrino might even be visible in existing data from the CERN LEP facility.
Regularized Structural Equation Modeling.
Jacobucci, Ross; Grimm, Kevin J; McArdle, John J
A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
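RegSEM itself penalizes parameters inside SEM matrices, but the shrinkage mechanism it borrows is easiest to see in one-variable ridge regression, where the penalized estimate has a closed form. A minimal sketch with invented data:

```python
def ridge_1d(x, y, lam):
    """Closed-form ridge slope through the origin:
    b(lam) = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [0.5, 1.0, 1.5, 2.0, 2.5]
y = [1.1, 2.0, 3.2, 3.9, 5.1]      # roughly y = 2x (invented data)
b_ols = ridge_1d(x, y, lam=0.0)    # no penalty: ordinary least squares
b_ridge = ridge_1d(x, y, lam=5.0)  # penalty shrinks the slope toward 0
```

Replacing the squared penalty with an absolute-value (lasso) penalty additionally drives small parameters exactly to zero, which is what gives RegSEM its model-simplification effect.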
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2011-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise....... Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically activated sludge models – are introduced since these define...
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2008-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise....... Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically, activated sludge models – are introduced since these define...
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
Energy Technology Data Exchange (ETDEWEB)
Clinton Lum
2002-02-04
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, "3-D Rock Properties Modeling for FY 1998" (SNL 1997); WA-0358, "3-D Rock Properties Modeling for FY 1999" (SNL 1999); and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, "Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01" (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3
Model Reduction of Nonlinear Fire Dynamics Models
Lattimer, Alan Martin
2016-01-01
Due to the complexity, multi-scale, and multi-physics nature of the mathematical models for fires, current numerical models require too much computational effort to be useful in design and real-time decision making, especially when dealing with fires over large domains. To reduce the computational time while retaining the complexity of the domain and physics, our research has focused on several reduced-order modeling techniques. Our contributions are improving wildland fire reduced-order mod...
Better models are more effectively connected models
Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John
2016-04-01
The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating...... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model
Integrity modelling of tropospheric delay models
Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó
2017-04-01
The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst-case performance of these models at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data is present) and taking the point where the exceedance probability reaches 10^-7. While the resulting standard deviation is much higher than the estimated standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM reanalysis numerical weather model data and the ray-tracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual
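The tail extrapolation described above can be illustrated by asking how many standard deviations a two-sided exceedance probability of 10^-7 corresponds to under a Gaussian assumption (real residuals are typically heavier-tailed, which is precisely why overbounding models are needed). A small bisection sketch:

```python
import math

def z_for_exceedance(p_target):
    """Find z with P(|X| > z*sigma) = p for Gaussian X, by bisection
    on the two-sided tail probability erfc(z/sqrt(2))."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2.0)) > p_target:
            lo = mid            # tail still too heavy: move outward
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = z_for_exceedance(1e-7)      # about 5.33 standard deviations
```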
Numerical Modelling of Streams
DEFF Research Database (Denmark)
Vestergaard, Kristian
In recent years there has been a sharp increase in the use of numerical water quality models. Numerical water quality modelling can be divided into three steps: hydrodynamic modelling for the determination of stream flow and water levels; modelling of transport and dispersion of a conservative
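The second of these steps, transport and dispersion of a conservative tracer, can be sketched with an explicit upwind finite-difference scheme. The grid, velocity and dispersion coefficient below are illustrative, and a production stream model would add proper boundary conditions and stability checks:

```python
def step(c, u, D, dx, dt):
    """One explicit step: upwind advection plus central dispersion."""
    out = c[:]                  # boundary cells stay fixed
    for i in range(1, len(c) - 1):
        adv = -u * (c[i] - c[i - 1]) / dx
        disp = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        out[i] = c[i] + dt * (adv + disp)
    return out

c = [0.0] * 50
c[10] = 1.0                     # unit tracer pulse
for _ in range(100):
    c = step(c, u=0.5, D=0.01, dx=1.0, dt=0.5)
# the pulse advects about 25 cells downstream while spreading out
```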
DEFF Research Database (Denmark)
Højsgaard, Søren; Edwards, David; Lauritzen, Steffen
, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...
Dynamic Latent Classification Model
DEFF Research Database (Denmark)
Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre
as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...... in the process as well as modeling dependences between attributes....
Wenger, Trey V.; Kepley, Amanda K.; Balser, Dana S.
2017-07-01
HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.
Multilevel IRT Model Assessment
Fox, Jean-Paul; Ark, L. Andries; Croon, Marcel A.
2005-01-01
Modelling complex cognitive and psychological outcomes in, for example, educational assessment led to the development of generalized item response theory (IRT) models. A class of models was developed to solve practical and challenging educational problems by generalizing the basic IRT models. An IRT
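A basic building block of the IRT models being generalized in this line of work is the two-parameter logistic item response function. A minimal sketch with illustrative parameter values:

```python
import math

def irt_2pl(theta, a, b):
    """P(correct answer) under the two-parameter logistic IRT model:
    theta = ability, a = item discrimination, b = item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

p_mid = irt_2pl(theta=1.0, a=1.5, b=1.0)   # ability equals difficulty
# p_mid is exactly 0.5 at that point
```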
Models for Dynamic Applications
DEFF Research Database (Denmark)
2011-01-01
be applied to formulate, analyse and solve these dynamic problems and how in the case of the fuel cell problem the model consists of coupled meso and micro scale models. It is shown how data flows are handled between the models and how the solution is obtained within the modelling environment....
DEFF Research Database (Denmark)
Silvennoinen, Annastiina; Teräsvirta, Timo
This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...... in which several multivariate GARCH models are fitted to the same data set and the results compared....
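The multivariate models reviewed extend the univariate GARCH(1,1) recursion, in which today's conditional variance is a fixed mix of a constant, yesterday's squared shock and yesterday's variance. A minimal simulation sketch (parameters are illustrative, chosen so the unconditional variance omega/(1 - alpha - beta) equals 1):

```python
import random

random.seed(0)
omega, alpha, beta = 0.1, 0.1, 0.8     # persistence alpha + beta = 0.9
h = omega / (1.0 - alpha - beta)       # start at unconditional variance
returns = []
for _ in range(500):
    eps = random.gauss(0.0, 1.0) * h ** 0.5   # shock scaled by sqrt(h)
    returns.append(eps)
    h = omega + alpha * eps ** 2 + beta * h   # conditional variance update
```

Multivariate specifications replace the scalars h, alpha and beta with covariance matrices and matrix-valued coefficients, which is where the parameterization choices surveyed in the article come in.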
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.
The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The M...
Modelling Railway Interlocking Systems
DEFF Research Database (Denmark)
Lindegaard, Morten Peter; Viuf, P.; Haxthausen, Anne Elisabeth
2000-01-01
In this report we present a model of interlocking systems, and describe how the model may be validated by simulation. Station topologies are modelled by graphs in which the nodes denote track segments, and the edges denote connectivity for train traffic. Points and signals are modelled by annotatio...
Rahmani, Fouad Lazhar
2010-11-01
The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].
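Compartmental epidemic models of this kind share a common skeleton; a minimal SIR sketch with Euler stepping (the structure and rate constants are illustrative, not those of the HIV/AIDS models in the paper):

```python
def sir_step(s, i, r, beta, gamma, dt):
    new_inf = beta * s * i * dt     # susceptibles becoming infectious
    new_rec = gamma * i * dt        # infectious individuals recovering
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 0.99, 0.01, 0.0           # fractions of the population
for _ in range(1000):               # 100 time units of Euler stepping
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
# with R0 = beta/gamma = 3 most of the population ends up recovered
```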
DEFF Research Database (Denmark)
Silvennoinen, Annastiina; Teräsvirta, Timo
This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...... in which several multivariate GARCH models are fitted to the same data set and the results compared....
Multilevel IRT Model Assessment
Fox, Gerardus J.A.; Ark, L. Andries; Croon, Marcel A.
2005-01-01
Modelling complex cognitive and psychological outcomes in, for example, educational assessment led to the development of generalized item response theory (IRT) models. A class of models was developed to solve practical and challenging educational problems by generalizing the basic IRT models. An IRT
Energy Technology Data Exchange (ETDEWEB)
2015-09-01
The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.
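The system-dynamics style of simulation the BSM uses can be caricatured with a single stock and two flows; the "capacity" stock and its rates below are invented for illustration and do not reflect the model's actual supply-chain structure:

```python
capacity = 1.0              # stock: installed production capacity
dt = 0.25                   # quarter-year Euler steps
for _ in range(40):         # 10 simulated years
    investment = 0.20 * capacity    # reinforcing inflow (invented rate)
    retirement = 0.05 * capacity    # outflow (invented rate)
    capacity += (investment - retirement) * dt
# the net 15%/year growth compounds the stock over the run
```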
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
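The frequency dependence such a model captures can be sketched with the simplest possible lump: a spring, dashpot and mass whose dynamic stiffness is K(omega) = k - m*omega^2 + i*c*omega. The coefficient values below are illustrative, not taken from the report:

```python
def dynamic_stiffness(omega, k=1.0e6, c=2.0e4, m=5.0e3):
    """Dynamic stiffness K(omega) = k - m*omega^2 + i*c*omega of a
    single spring-dashpot-mass lump standing in for the soil."""
    return complex(k - m * omega ** 2, c * omega)

K_static = dynamic_stiffness(0.0)     # static limit: just the spring k
K_dynamic = dynamic_stiffness(10.0)   # inertia and damping now enter
```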
Chuine, I.; Garcia de Cortazar-Atauri, I.; Kramer, K.; Hänninen, H.
2013-01-01
In this chapter we provide a brief overview of plant phenology modeling, focusing on mechanistic phenological models. After a brief history of plant phenology modeling, we present the different models which have been described in the literature so far and highlight the main differences between them,
R. Pietersz (Raoul); M. van Regenmortel
2005-01-01
textabstractCurrently, there are two market models for valuation and risk management of interest rate derivatives, the LIBOR and swap market models. In this paper, we introduce arbitrage-free constant maturity swap (CMS) market models and generic market models featuring forward rates that span perio
DEFF Research Database (Denmark)
Ayres, Phil
2012-01-01
This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity...... of design. Three distinctions are drawn through which to develop this discussion of models in an architectural context. An examination of these distinctions serves to nuance particular characteristics and roles of models, the modelling activity itself and those engaged in it....
Luczak, Joshua
2017-02-01
Scientific models are frequently discussed in philosophy of science. A great deal of the discussion is centred on approximation, idealisation, and on how these models achieve their representational function. Despite the importance, distinct nature, and high presence of toy models, they have received little attention from philosophers. This paper hopes to remedy this situation. It aims to elevate the status of toy models: by distinguishing them from approximations and idealisations, by highlighting and elaborating on several ways the Kac ring, a simple statistical mechanical model, is used as a toy model, and by explaining why toy models can be used to successfully carry out important work without performing a representational function.
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre
2005-01-01
One of the simplest, and yet most consistently well-performing, sets of classifiers is the naive Bayes models. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naive Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions … classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2011-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including … of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise.
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2008-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including … the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise.
DEFF Research Database (Denmark)
Justesen, Lise; Overgaard, Svend Skafte
2017-01-01
This article presents an analytical model that aims to conceptualize how meal experiences are framed when taking into account a dynamic understanding of hospitality: the meal model is named The Hospitable Meal Model. The idea behind The Hospitable Meal Model is to present a conceptual model that can serve as a frame for developing hospitable meal competencies among professionals working within the area of institutional foodservices, as well as a conceptual model for analysing meal experiences. The Hospitable Meal Model transcends and transforms existing meal models by presenting a more open-ended approach towards meal experiences. The underlying purpose of The Hospitable Meal Model is to provide the basis for creating value for the individuals involved in institutional meal services. The Hospitable Meal Model was developed on the basis of an empirical study on hospital meal experiences explored…
Widera, Paweł
2011-01-01
The process of comparison of computer generated protein structural models is an important element of protein structure prediction. It has many uses including model quality evaluation, selection of the final models from a large set of candidates or optimisation of parameters of energy functions used in template free modelling and refinement. Although many protein comparison methods are available online on numerous web servers, their ability to handle a large scale model comparison is often very limited. Most of the servers offer only a single pairwise structural comparison, and they usually do not provide a model-specific comparison with a fixed alignment between the models. To bridge the gap between the protein and model structure comparison we have developed the Protein Models Comparator (pm-cmp). To be able to deliver the scalability on demand and handle large comparison experiments the pm-cmp was implemented "in the cloud". Protein Models Comparator is a scalable web application for a fast distributed comp...
Ristad, E S; Ristad, Eric Sven; Thomas, Robert G.
1996-01-01
A statistical language model assigns probability to strings of arbitrary length. Unfortunately, it is not possible to gather reliable statistics on strings of arbitrary length from a finite corpus. Therefore, a statistical language model must decide that each symbol in a string depends on at most a small, finite number of other symbols in the string. In this report we propose a new way to model conditional independence in Markov models. The central feature of our nonuniform Markov model is that it makes predictions of varying lengths using contexts of varying lengths. Experiments on the Wall Street Journal reveal that the nonuniform model performs slightly better than the classic interpolated Markov model. This result is somewhat remarkable because both models contain identical numbers of parameters whose values are estimated in a similar manner. The only difference between the two models is how they combine the statistics of longer and shorter strings. Keywords: nonuniform Markov model, interpolated Markov m...
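The core idea of predicting with contexts of varying lengths can be sketched as a toy character-level counter model that backs off from the longest observed context (a simplified illustration, not the authors' estimator):

```python
from collections import defaultdict

class VaryingContextMarkov:
    """Toy character model that predicts with the longest training
    context available, backing off to shorter ones -- a simplified
    sketch of the 'contexts of varying lengths' idea."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Count each character under every context length up to max_order.
        for i, ch in enumerate(text):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                self.counts[text[i - k:i]][ch] += 1

    def prob(self, context, ch):
        # Use the longest suffix of `context` seen with this character.
        for k in range(min(self.max_order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            total = sum(self.counts[ctx].values())
            if total and self.counts[ctx][ch]:
                return self.counts[ctx][ch] / total
        return 0.0

m = VaryingContextMarkov(max_order=2)
m.train("abababab")
print(m.prob("a", "b"))  # 1.0: 'b' always follows 'a' in training
```

A real nonuniform model would also smooth the counts and interpolate across context lengths rather than hard back-off.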
Lumped Thermal Household Model
DEFF Research Database (Denmark)
Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob
2013-01-01
In this paper we discuss two different approaches to model the flexible power consumption of heat pump heated households: individual household modeling and lumped modeling. We illustrate that a benefit of individual modeling is that we can overview and optimize the complete flexibility of a heat pump portfolio. Following, we illustrate two disadvantages of individual models, namely that it requires much computational effort to optimize over a large portfolio, and second that it is difficult to accurately model the houses in certain time periods due to local disturbances. Finally, we propose a lumped model approach as an alternative to the individual models. In the lumped model, the portfolio is seen as baseline consumption superimposed with an ideal storage of limited power and energy capacity. The benefit of such a lumped model is that the computational effort of flexibility optimization…
Energy Technology Data Exchange (ETDEWEB)
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the 'AMR Development Plan for U0035 Calibrated Properties Model REV00'. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Introduction to Adjoint Models
Errico, Ronald M.
2015-01-01
In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
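The defining property of an adjoint model, and the way a single adjoint sweep yields a full sensitivity field, can be checked numerically with a generic matrix standing in for the tangent linear model (an illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))   # tangent linear model (here just a matrix)
x = rng.standard_normal(3)        # perturbation of the model state
y = rng.standard_normal(3)

# Defining adjoint identity: <M x, y> = <x, M^T y> for all x, y.
assert np.isclose(np.dot(M @ x, y), np.dot(x, M.T @ y))

# Adjoint-derived sensitivity: for J(x) = 0.5*||M x||^2, one adjoint
# sweep gives the whole gradient dJ/dx = M^T (M x) at once, where a
# tangent-linear (forward) approach needs one run per component.
grad = M.T @ (M @ x)
eps = 1e-6
fd = np.array([(0.5 * np.linalg.norm(M @ (x + eps * e))**2
                - 0.5 * np.linalg.norm(M @ x)**2) / eps
               for e in np.eye(3)])
print(np.allclose(grad, fd, atol=1e-4))  # finite differences agree: True
```

For a real nonlinear forecast model, M is the Jacobian of one timestep and the adjoint is built by automatic differentiation, as the lecture describes.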
Chao, Dennis L.; Ira M Longini; Morris, J. Glenn
2014-01-01
Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating mo...
Zagorsek, Branislav
2013-01-01
A business model describes the company's most important activities, its proposed value, and the compensation for that value. Business model visualization makes it possible to simply and systematically capture and describe the most important components of the business model, while standardization of the concept allows comparison between companies. There are several ways to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...
Diffeomorphic Statistical Deformation Models
DEFF Research Database (Denmark)
Hansen, Michael Sass; Hansen, Mads/Fogtman; Larsen, Rasmus
2007-01-01
In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al. … with ground truth in form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes.
DEFF Research Database (Denmark)
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four … the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.
Dennis L Chao; Longini, Ira M.; Morris, J. Glenn
2014-01-01
Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating mo...
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...
Model Checking of Boolean Process Models
Schneider, Christoph
2011-01-01
In the field of Business Process Management, formal models for the control flow of business processes have been designed for more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as the reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low tokens to model explicitly states with a subsequent skipping of activations, and arbitrary logical rules of type AND, XOR, OR etc. to model the split and join of the control flow. We apply model checking as a verification method for the safeness and liveness of Boolean systems. Model checking of Boolean systems uses the elementary theory of propositional logic; no modal operators are needed. Our verification builds on a finite complete prefix of a certain T-system attached to the Boolean system. It splits the processes of the Boolean sy...
Pavement Aging Model by Response Surface Modeling
Directory of Open Access Journals (Sweden)
Manzano-Ramírez A.
2011-10-01
In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL) and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were almost adequate, with an error of 20 % that was associated with the other environmental factors, which were not considered at the beginning of the research.
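The RSM fit itself is ordinary least squares on a second-order polynomial in the two factors (aging time and temperature here). A minimal sketch with synthetic data, not the paper's measurements:

```python
import numpy as np

# Synthetic "aging" response generated from a known second-order
# surface plus noise (illustrative stand-in for the lab data).
rng = np.random.default_rng(1)
t = rng.uniform(0, 10, 30)      # aging time
T = rng.uniform(60, 100, 30)    # aging temperature
y = 2.0 + 0.5*t + 0.03*T + 0.02*t*T + 0.1*t**2 + rng.normal(0, 0.1, 30)

# Design matrix for the RSM form:
# y = b0 + b1*t + b2*T + b3*t*T + b4*t^2 + b5*T^2
X = np.column_stack([np.ones_like(t), t, T, t*T, t**2, T**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))  # fitted second-order coefficients
```

Model adequacy is then judged, as in the paper, by predicting held-out specimens and checking the relative error.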
8th Workshop on What Comes Beyond the Standard Model
Nielsen, Holger Bech; Froggatt, Colin D; Lukman, Dragan; What Comes Beyond the Standard Model; Bled Workshops Phys.
2005-01-01
Contents: 1. Can MPP Together with Weinberg-Salam Higgs Provide Cosmological Inflation? (D.L. Bennett and H.B. Nielsen) 2. Conserved Charges in 3d Gravity With Torsion (M. Blagojevic and B. Cvetkovic) 3. Mass Matrices of Quarks and Leptons in the Approach Unifying Spins and Charges (A. Borstnik Bracic and N.S. Mankoc Borstnik) 4. Dark Matter From Encapsulated Atoms (C.D. Froggatt and H.B. Nielsen) 5. Dirac Sea for Bosons Also and SUSY for Single Particles (Y. Habara, H.B. Nielsen and M. Ninomiya) 6. Searching for Boundary Conditions in Kaluza-Klein-like Theories (D. Lukman, N.S. Mankoc Borstnik and H. B. Nielsen) 7. Second Quantization of Spinors and Clifford Algebra Objects (N.S. Mankoc Borstnik and H. B. Nielsen) 8. Are there Interesting Problems That Could Increase Understanding of Physics and Mathematics? (R. Mirman) 9. Noncommutative Nonsingular Black Holes (P. Nicolini) 10. Compactified Time and Likely Entropy -- World Inside Time Machine: Closed Time-like Curve (H.B. Nielsen and M. Ninomiya) + Astri Kl...
Model Validation Status Review
Energy Technology Data Exchange (ETDEWEB)
E.L. Hardin
2001-11-28
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and
DEFF Research Database (Denmark)
Cameron, Ian T.; Gani, Rafiqul
This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models. … These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety … to biotechnology applications, food, polymer and human health application areas. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners.
Practical Marginalized Multilevel Models.
Griswold, Michael E; Swihart, Bruce J; Caffo, Brian S; Zeger, Scott L
2013-01-01
Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets, SAS and R code are included as supplemental materials.
Modelling Foundations and Applications
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed and selected from 81 submissions. Papers on all aspects of MDE were received, including topics such as architectural modelling and product lines, code generation, domain-specific modeling, metamodeling, model analysis and verification, model management, model transformation and simulation. The breadth of topics…
Institute of Scientific and Technical Information of China (English)
蒋娜; 谢有琪
2012-01-01
With the development of human society, the social hub has enlarged beyond the single community, to the extent that the world is deemed a community as a whole. Communication therefore plays an increasingly important role in our daily life. As a consequence, a communication model, or the definition of one, is not so much a definition as a guide in communication. However, some existing communication models are not as practical as they once were. This paper makes an overall contrast among three communication models, the Coded Model, the Gable Communication Model and the Ostensive-Inferential Model, to see how they assist people to comprehend verbal and non-verbal communication.
Modeling worldwide highway networks
Villas Boas, Paulino R.; Rodrigues, Francisco A.; da F. Costa, Luciano
2009-12-01
This Letter addresses the problem of modeling the highway systems of different countries by using complex networks formalism. More specifically, we compare two traditional geographical models with a modified geometrical network model where paths, rather than edges, are incorporated at each step between the origin and the destination vertices. Optimal configurations of parameters are obtained for each model and used for the comparison. The highway networks of Australia, Brazil, India, and Romania are considered and shown to be properly modeled by the modified geographical model.
Institute of Scientific and Technical Information of China (English)
LI Zhi-jia; YAO Cheng; KONG Xiang-guang
2005-01-01
To improve the Xinanjiang model, runoff generated from infiltration excess is added to the model, and another six parameters are added to the Xinanjiang model. In principle, the improved Xinanjiang model can be used to simulate runoff in humid, semi-humid and also semi-arid regions. The application to the Yi River shows that the improved Xinanjiang model can forecast discharge with higher accuracy and satisfy practical requirements. It also shows that the improved model is reasonable.
Microsoft tabular modeling cookbook
Braak, Paul te
2013-01-01
This book follows a cookbook style, with recipes explaining the steps for developing analytic data using Business Intelligence Semantic Models. This book is designed for developers who wish to develop powerful and dynamic models for users, as well as those who are responsible for the administration of models in corporate environments. It is also targeted at analysts and users of Excel who wish to advance their knowledge of Excel through the development of tabular models, or who wish to analyze data through tabular modeling techniques. We assume no prior knowledge of tabular modeling.
Directory of Open Access Journals (Sweden)
Luiz Carlos Bresser-Pereira
2012-03-01
Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish the classifications that take a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.
Geller, Michael; Telem, Ofri
2015-05-15
We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.
Energy Technology Data Exchange (ETDEWEB)
Reiter, E.R.
1980-01-01
A highly sophisticated and accurate approach is described to compute, on an hourly or daily basis, the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, specifically weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
Empirical Model Building Data, Models, and Reality
Thompson, James R
2011-01-01
Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m
Ensey, Tyler S.
2013-01-01
During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, they are intertwined into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly for a single purpose. The component I was assigned, specifically, was a...
Energy Technology Data Exchange (ETDEWEB)
D.W. Wu; A.J. Smith
2004-11-08
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
Major Differences between the Jerome Model and the Horace Model
Institute of Scientific and Technical Information of China (English)
朱艳
2014-01-01
There are three famous translation models in the field of translation: the Jerome model, the Horace model and the Schleiermacher model. The production and development of the three models have had a significant influence on translation. To identify the major differences between two of these classical Western translation models, we discuss the Jerome model and the Horace model in depth in this paper.
Modelling cointegration in the vector autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren
2000-01-01
A survey is given of some results obtained for the cointegrated VAR. The Granger representation theorem is discussed and the notions of cointegration and common trends are defined. The statistical model for cointegrated I(1) variables is defined, and it is shown how hypotheses on the cointegrating relations can be estimated under suitable identification conditions. The asymptotic theory is briefly mentioned and a few economic applications of the cointegration model are indicated.
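The notion of cointegration can be illustrated numerically: two I(1) series sharing a common stochastic trend wander without bound, while the cointegrating combination stays stationary (synthetic data, not from the survey):

```python
import numpy as np

# Two I(1) series driven by one common random-walk trend; the
# combination y1 - 2*y2 eliminates the trend, so beta = (1, -2)
# is a cointegrating vector and the residual is stationary.
rng = np.random.default_rng(42)
n = 5000
trend = np.cumsum(rng.standard_normal(n))   # common stochastic trend
y1 = 2.0 * trend + rng.standard_normal(n)   # I(1)
y2 = 1.0 * trend + rng.standard_normal(n)   # I(1)

resid = y1 - 2.0 * y2   # cointegrating relation
# The residual stays bounded while y1 itself wanders far:
print(np.std(resid) < 4 < np.ptp(y1) / 10)  # True
```

In the cointegrated VAR the vector beta is estimated by reduced-rank regression rather than assumed known as here.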
Emissions Modeling Clearinghouse
U.S. Environmental Protection Agency — The Emissions Modeling Clearinghouse (EMCH) supports and promotes emissions modeling activities both internal and external to the EPA. Through this site, the EPA...
DEFF Research Database (Denmark)
Riis, Troels; Jørgensen, John Leif
1999-01-01
This document describes a test of the implementation of the ASC orbit model for the Champ satellite.
National Oceanic and Atmospheric Administration, Department of Commerce — The World Magnetic Model is the standard model used by the U.S. Department of Defense, the U.K. Ministry of Defence, the North Atlantic Treaty Organization (NATO)...
Laboratory of Biological Modeling
Federal Laboratory Consortium — The Laboratory of Biological Modeling is defined by both its methodologies and its areas of application. We use mathematical modeling in many forms and apply it to...
Rouder, Jeffrey N; Engelhardt, Christopher R; McCabe, Simon; Morey, Richard D
2016-12-01
Analysis of variance (ANOVA), the workhorse analysis of experimental designs, consists of F-tests of main effects and interactions. Yet testing, including traditional ANOVA, has recently been critiqued on a number of theoretical and practical grounds. In light of these critiques, model comparison and model selection serve as an attractive alternative. Model comparison differs from testing in that one can support a null or nested model vis-a-vis a more general alternative by penalizing more flexible models. We argue this ability to support simpler models allows for more nuanced theoretical conclusions than provided by traditional ANOVA F-tests. We provide a model comparison strategy and show how ANOVA models may be reparameterized to better address substantive questions in data analysis.
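As a toy illustration of model comparison with flexibility penalized, the sketch below compares a grand-mean model against a two-group-means model using the Gaussian BIC. This is a rough stand-in for the comparison machinery the abstract discusses, not the authors' method; the data and effect size are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
group = np.repeat([0, 1], n // 2)        # two-condition design
y = 1.5 * group + rng.normal(size=n)     # a real group effect

def bic(y, fitted, k):
    # Gaussian BIC: n*log(RSS/n) + k*log(n); smaller is better.
    rss = np.sum((y - fitted) ** 2)
    return len(y) * np.log(rss / len(y)) + k * np.log(len(y))

# Null model: grand mean only (one mean parameter).
bic_null = bic(y, np.full(n, y.mean()), k=1)

# Alternative model: one mean per group (two mean parameters).
means = np.array([y[group == g].mean() for g in (0, 1)])
bic_alt = bic(y, means[group], k=2)

# Penalizing flexibility lets the comparison support the simpler
# model when the effect is absent; here the effect is real, so the
# richer model wins despite the penalty.
print(bic_alt < bic_null)
```

Unlike an F-test, the same comparison can come out in favor of the null model, which is the ability the abstract emphasizes.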
DEFF Research Database (Denmark)
Højsgaard, Søren; Edwards, David; Lauritzen, Steffen
Graphical models in their modern form have been around since the late 1970s and appear today in many areas of the sciences. Along with the ongoing developments of graphical models, a number of different graphical modeling software programs have been written over the years. In recent years many of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In addition, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data.
Controlling Modelling Artifacts
DEFF Research Database (Denmark)
Smith, Michael James Andrew; Nielson, Flemming; Nielson, Hanne Riis
2011-01-01
When analysing the performance of a complex system, we typically build abstract models that are small enough to analyse, but still capture the relevant details of the system. But it is difficult to know whether the model accurately describes the real system, or if its behaviour is due to modelling artifacts that were inadvertently introduced. In this paper, we propose a novel methodology to reason about modelling artifacts, given a detailed model and a high-level (more abstract) model of the same system. By a series of automated abstraction steps, we lift the detailed model to the same state space, which abstracts the possible configurations of the system (for example, by counting the number of components in a certain state). We motivate our methodology with a case study of the LMAC protocol for wireless sensor networks. In particular, we investigate the accuracy of a recently proposed high-level model of LMAC.
Modeling EERE deployment programs
Energy Technology Data Exchange (ETDEWEB)
Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2007-11-01
The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. Verifying consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Bounding species distribution models
Directory of Open Access Journals (Sweden)
Thomas J. Stohlgren, Catherine S. Jarnevich, Wayne E. Esaias, Jeffrey T. Morisette
2011-10-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642–647, 2011].
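The "clamping" alteration described above amounts to clipping each environmental predictor to the range observed during model development before projecting the model. A minimal sketch, with invented predictor values standing in for real environmental layers:

```python
import numpy as np

# Hypothetical training data: two environmental predictors
# (say, temperature and precipitation) at the presence points
# used to develop the model.
train = np.array([[12.0, 300.0],
                  [18.0, 450.0],
                  [25.0, 600.0]])
lo, hi = train.min(axis=0), train.max(axis=0)

def clamp(pred_grid):
    """Bound extrapolation: clip each predictor to the range seen
    during model development before feeding it to the fitted model."""
    return np.clip(pred_grid, lo, hi)

# New map cells outside the training envelope get pulled to its edge.
cells = np.array([[5.0, 700.0],     # too cold, too wet
                  [20.0, 500.0]])   # inside the envelope
print(clamp(cells).tolist())        # [[12.0, 600.0], [20.0, 500.0]]
```

The effect is that the fitted model is never asked to predict outside the environmental space it was trained on, which is the conservative bounding the abstract recommends.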
Energy Technology Data Exchange (ETDEWEB)
Pulkkinen, U. [VTT Industrial Systems (Finland)
2004-04-01
The report describes a simple comparison of two CCF models: the ECLM and the Beta model. The objective of the comparison is to identify differences in the results of the models by applying them to some simple test data cases. The comparison focuses mainly on theoretical aspects of the above-mentioned CCF models. The properties of the model parameter estimates in the data cases are also discussed. The practical aspects of using and estimating CCF models in a real PSA context (e.g. data interpretation, properties of computer tools, model documentation) are not discussed in the report. Similarly, the qualitative CCF analyses needed when using the models are not discussed.
Chip Multithreaded Consistency Model
Institute of Scientific and Technical Information of China (English)
Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang
2008-01-01
Multithreading is the development trend of high-performance processors, and the memory consistency model is essential to the correctness, performance and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restrictions imposed on memory event ordering by chip multithreaded consistency are presented and formalized. Using the idea of the critical cycle introduced by Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the correctness criterion of the sequential consistency model. The chip multithreaded consistency model provides a way of achieving higher performance than sequential consistency while ensuring software compatibility: the execution result on a multithreaded processor is the same as on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed: Godson-2 supports the model through an exception scheme based on the sequential memory access queue of each thread.
Callison, Daniel
2002-01-01
Defines models and describes information search models that can be helpful to instructional media specialists in meeting users' abilities and information needs. Explains pathfinders and Kuhlthau's information search process, including the pre-writing information search process. (LRW)
Bennett, Joan
1998-01-01
Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)
Directory of Open Access Journals (Sweden)
Oleg Svatos
2013-01-01
In this paper we analyze the complexity of time limits, which are found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfying results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.
Bounding Species Distribution Models
Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...
National Aeronautics and Space Administration — The Galactic model is a spatial and spectral template. The model for the Galactic diffuse emission was developed using spectral line surveys of HI and CO (as a...
Petrone, Giovanni; Spagnuolo, Giovanni
2016-01-01
This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. Providing a thorough comparison among the models, engineers have all the elements needed to choose the right PV array model for specific applications or environmental conditions matched with the model of the electronic circuit used to maximize the PV power production.
Buczyńska, Weronika
2010-01-01
We define toric projective model of a trivalent graph as a generalization of a binary symmetric model of a trivalent phylogenetic tree. Generators of the projective coordinate ring of the models of graphs with one cycle are explicitly described. The models of graphs with the same topological invariants are deformation equivalent and share the same Hilbert function. We also provide an algorithm to compute the Hilbert function.
Model of magnetostrictive actuator
Institute of Scientific and Technical Information of China (English)
LI Lin; ZHANG Yuan-yuan
2005-01-01
The hysteresis of the magnetostrictive actuator was studied. A mathematical model of the hysteresis loop was obtained on the basis of experiment. This model depends on the frequency and the amplitude of the alternating current input to the magnetostrictive actuator. Based on the model, the effect of hysteresis on the dynamic output of the magnetostrictive actuator was investigated. Finally, how to account for hysteresis and establish a dynamic model of a magnetostrictive actuator system when a practical system is designed and applied is discussed.
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2002-01-01
In order to set up a conceptual data model that reflects the real world as accurately as possible, this paper first reviews and analyzes the disadvantages of previous conceptual data models used by traditional GIS in simulating geographic space, gives a new explanation of geographic space and analyzes its various essential characteristics. Finally, this paper proposes several detailed key points for designing a new type of GIS data model and gives a simple holistic GIS data model.
Modeling Digital Video Database
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2001-01-01
The main purpose of the model is to present how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the continuity mapping of the models, the authors propose some suggestions to transform the use case diagrams into an object diagram, which is one of the main diagrams for the next development phases.
Tashiro, Tohru
2014-03-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit iPod sales data, obtaining better agreement than with the Bass model.
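The diffusion process referenced above extends the Bass model. For context, here is a minimal discrete-time sketch of the classic Bass dynamics dF/dt = (p + qF)(1 - F); the parameter values are invented for illustration, and the paper's memory term is deliberately not implemented:

```python
import numpy as np

def bass_adopters(p, q, m, periods):
    """Cumulative adopters under the classic Bass model, iterated in
    discrete time: F grows by (p + q*F) * (1 - F) each period, where
    p is innovation, q is imitation, and m scales to market size.
    (The memory effect proposed in the paper is an extension of this
    baseline and is not sketched here.)"""
    F, out = 0.0, []
    for _ in range(periods):
        F = min(F + (p + q * F) * (1.0 - F), 1.0)
        out.append(m * F)
    return np.array(out)

# Commonly cited ballpark values for p and q, used only as a demo.
sales = bass_adopters(p=0.03, q=0.38, m=1000.0, periods=30)
print(sales[-1])   # cumulative adoption approaches the market size m
```

The resulting curve is the familiar S-shape; the paper's claim is that adding memory of past exposures improves the fit to real sales data over this baseline.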
Quantal Response: Nonparametric Modeling
2017-01-01
Fragments recovered from the report (figure legend, distribution statement, and table-of-contents residue removed): Fig. 3 shows logistic regression with spline and N-spline fits. Section 5 covers nonparametric QR models of the relation between stimulus and probability of response; the Generalized Linear Model approach does not make use of the limit distribution but allows an arbitrary functional form. The remaining contents list Conclusions and Recommendations, References, and appendices on the Linear Model and the Generalized Linear Model.
Auxiliary Deep Generative Models
Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae; Winther, Ole
2016-01-01
Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improves the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections...
Avionics Architecture Modelling Language
Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald
2014-08-01
This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.
Tashiro, Tohru
2013-01-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit iPod sales data, obtaining better agreement than with the Bass model.
Artificial neural network modelling
Samarasinghe, Sandhya
2016-01-01
This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems, and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.
Optimization modeling with spreadsheets
Baker, Kenneth R
2015-01-01
An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il
Model Checking Feature Interactions
DEFF Research Database (Denmark)
Le Guilly, Thibaut; Olsen, Petur; Pedersen, Thomas;
2015-01-01
This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.
GARCH Modelling of Cryptocurrencies
Directory of Open Access Journals (Sweden)
Jeffrey Chu
2017-10-01
With the exception of Bitcoin, there appears to be little or no literature on GARCH modelling of cryptocurrencies. This paper provides the first GARCH modelling of the seven most popular cryptocurrencies. Twelve GARCH models are fitted to each cryptocurrency, and their fits are assessed in terms of five criteria. Conclusions are drawn on the best fitting models, forecasts and acceptability of value at risk estimates.
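As background to the GARCH fits described above, here is a minimal, self-contained simulation of a GARCH(1,1) process in numpy (parameter values invented), showing the volatility clustering such models are designed to capture:

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate returns r_t = sigma_t * z_t with GARCH(1,1) variance
    recursion sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2,
    where z_t is standard normal."""
    rng = np.random.default_rng(seed)
    var = omega / (1.0 - alpha - beta)   # start at unconditional variance
    r = np.empty(n)
    for t in range(n):
        r[t] = np.sqrt(var) * rng.normal()
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

r = simulate_garch11(omega=0.1, alpha=0.1, beta=0.85, n=5000)

# Volatility clustering: squared returns are positively autocorrelated,
# which is the stylized fact of financial returns GARCH reproduces.
sq = r ** 2
ac1 = np.corrcoef(sq[:-1], sq[1:])[0, 1]
print(ac1 > 0)
```

Fitting rather than simulating such models (as the paper does for seven cryptocurrencies) is typically done by maximum likelihood over the same recursion.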
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
Modelling of corrosion cracking of reinforced concrete structures is complicated, as a great number of uncertain factors are involved. To get a reliable modelling, a physical and mechanical understanding of the process behind corrosion is needed.
Modeling and Remodeling Writing
Hayes, John R.
2012-01-01
In Section 1 of this article, the author discusses the succession of models of adult writing that he and his colleagues have proposed from 1980 to the present. He notes the most important changes that differentiate earlier and later models and discusses reasons for the changes. In Section 2, he describes his recent efforts to model young…
Energy Technology Data Exchange (ETDEWEB)
Fortelius, C.; Holopainen, E.; Kaurola, J.; Ruosteenoja, K.; Raeisaenen, J. [Helsinki Univ. (Finland). Dept. of Meteorology
1996-12-31
In recent years, the modelling of interannual climate variability, the atmospheric energy and water cycles, and climate simulations with the ECHAM3 model have been studied. In addition, the climate simulations of several models have been compared, with special emphasis on the area of northern Europe.
Crushed Salt Constitutive Model
Energy Technology Data Exchange (ETDEWEB)
Callahan, G.D.
1999-02-01
The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well.
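The report's nonlinear least-squares fitting of constitutive parameters to a consolidation database can be illustrated in miniature. The sketch below fits a simple power-law creep rate to synthetic data by linear least squares in log space; the power-law form, the data, and all constants are invented stand-ins, not the report's M-D model or the WIPP database:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical creep data: steady-state strain rate vs stress,
# generated from a power law rate = A * stress^n with noise.
A_true, n_true = 1e-6, 4.5
stress = np.linspace(5.0, 25.0, 20)                     # MPa (invented)
rate = A_true * stress ** n_true * np.exp(rng.normal(0.0, 0.05, 20))

# Power-law fit by linear least squares in log space:
# log(rate) = log(A) + n * log(stress).
n_fit, logA_fit = np.polyfit(np.log(stress), np.log(rate), 1)
print(n_fit)   # recovered stress exponent, close to n_true
```

Real constitutive fitting, as in the report, handles multiple coupled mechanisms and full stress states, but the underlying idea of minimizing misfit between a parameterized law and test data is the same.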
Modeling EERE Deployment Programs
Energy Technology Data Exchange (ETDEWEB)
Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2007-11-01
This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.
Meara, Paul
2004-01-01
This paper describes some simple simulation models of vocabulary attrition. The attrition process is modelled using a random autonomous Boolean network model, and some parallels with real attrition data are drawn. The paper argues that applying a complex systems approach to attrition can provide some important insights, which suggest that real…
Flexible survival regression modelling
DEFF Research Database (Denmark)
Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben
2009-01-01
Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varying…
Rodarius, C.; Rooij, L. van; Lange, R. de
2007-01-01
The objective of this work was to create a scalable human occupant model that allows adaptation of human models with respect to size, weight and several mechanical parameters. Therefore, for the first time, two scalable facet human models were developed in MADYMO. First, a scalable human male was…
Modeling typical performance measures
Weekers, Anke Martine
2009-01-01
In the educational, employment, and clinical contexts, attitude and personality inventories are used to measure typical performance traits. Statistical models are applied to obtain latent trait estimates. Often the same statistical models as those used in maximum performance measurement are applied…
Diggle, Peter J
2007-01-01
Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics with emphasis on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.
Zephyr - the prediction models
DEFF Research Database (Denmark)
Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg
2001-01-01
This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani…
Model Breaking Points Conceptualized
Vig, Rozy; Murray, Eileen; Star, Jon R.
2014-01-01
Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…
Generalized Poisson sigma models
Batalin, I; Batalin, Igor; Marnelius, Robert
2001-01-01
A general master action in terms of superfields is given which generates generalized Poisson sigma models by means of a natural ghost number prescription. The simplest representation is the sigma model considered by Cattaneo and Felder. For Dirac brackets, considerably more general models are generated.
Fedorov, Alexander
2011-01-01
The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religions,…
Speiser, Bob; Walter, Chuck
2011-01-01
This paper explores how models can support productive thinking. For us a model is a "thing", a tool to help make sense of something. We restrict attention to specific models for whole-number multiplication, hence the wording of the title. They support evolving thinking in large measure through the ways their users redesign them. They assume new…
DEFF Research Database (Denmark)
Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel
2016-01-01
An introduction to genome-scale models, how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academica and in industry, both for modeling chemical pr...
Energy Technology Data Exchange (ETDEWEB)
C. Lum
2004-09-16
The purpose of this model report is to document the Rock Properties Model version 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties model provides mean matrix and lithophysae porosity, and the cross-correlated mean bulk density, as direct input to the "Saturated Zone Flow and Transport Model Abstraction", MDL-NBS-HS-000021, REV 02 (BSC 2004 [DIRS 170042]). The constraints, caveats, and limitations associated with this model are discussed in Sections 6.6 and 8.2. Model validation accomplished by corroboration with data not cited as direct input is discussed in Section 7. The revision of this model report was performed as part of activities being conducted under the "Technical Work Plan for: The Integrated Site Model, Revision 05" (BSC 2004 [DIRS 169635]). The purpose of this revision is to bring the report up to current procedural requirements and address the Regulatory Integration Team evaluation comments. The work plan describes the scope, objectives, tasks, methodology, and procedures for this process.
Wheeler, Tim Allan; Holder, Martin; Winner, Hermann; Kochenderfer, Mykel
2017-01-01
Accurate simulation and validation of advanced driver assistance systems requires accurate sensor models. Modeling automotive radar is complicated by effects such as multipath reflections, interference, reflective surfaces, discrete cells, and attenuation. Detailed radar simulations based on physical principles exist but are computationally intractable for realistic automotive scenes. This paper describes a methodology for the construction of stochastic automotive radar models based on deep learning…
Richard Haynes; Darius Adams; Peter Ince; John Mills; Ralph Alig
2006-01-01
The United States has a century of experience with the development of models that describe markets for forest products and trends in resource conditions. In the last four decades, increasing rigor in policy debates has stimulated the development of models to support policy analysis. Increasingly, research has evolved (often relying on computer-based models) to increase...
Thornton, Bradley D.; Smalley, Robert A.
2008-01-01
Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…
Fitzsimmons, Charles P.
1986-01-01
Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Meldrum, Andrew
This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...
DEFF Research Database (Denmark)
2011-01-01
Engineering of products and processes is increasingly “model-centric”. Models in their multitudinous forms are ubiquitous, being heavily used for a range of decision making activities across all life cycle phases. This chapter gives an overview of what is a model, the principal activities in the ...
Rossi, Paolo; Tan, Chung I
1995-01-01
Principal chiral models on a d-1 dimensional simplex are introduced and studied analytically in the large N limit. The d = 0, 2, 4 and \infty models are explicitly solved. Relationships with standard lattice models and with few-matrix systems in the double scaling limit are discussed.
Modeling agriculture in the Community Land Model
Directory of Open Access Journals (Sweden)
B. Drewniak
2013-04-01
Full Text Available The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types – maize, soybean, and spring wheat – into the coupled carbon–nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements for soybean, but not as well for maize. CLM-Crop yields were comparable with observations in countries such as the United States, Argentina, and China, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation, in agreement with other modeling studies. In case studies with the new crop model looking at impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased. Our improvements in CLM demonstrate a new capability in the model – simulating agriculture in a realistic way, complete with
Meister, Jeffrey P.
1987-01-01
The Mechanics of Materials Model (MOMM) is a three-dimensional inelastic structural analysis code for use as an early design stage tool for hot section components. MOMM is a stiffness method finite element code that uses a network of beams to characterize component behavior. The MOMM contains three material models to account for inelastic material behavior. These include the simplified material model, which assumes a bilinear stress-strain response; the state-of-the-art model, which utilizes the classical elastic-plastic-creep strain decomposition; and Walker's viscoplastic model, which accounts for the interaction between creep and plasticity that occurs under cyclic loading conditions.
Energy Technology Data Exchange (ETDEWEB)
Loennroth, J.S.; Kiviniemi, T. [Association EURATOM-Tekes, Helsinki University of Technology, P. O. Box 4100, 02015 TKK (Finland); Bateman, G.; Kritz, A. [Lehigh University, Bethlehem, PA (United States); Becoulet, M.; Figarella, C.; Garbet, X.; Huysmans, G. [Association Euratom-CEA, CEA Cadarache (France); Beyer, P. [University of Marseille (France); Corrigan, G.; Fundamenski, W. [UKAEA Fusion Association, Culham Science Centre (United Kingdom); Garcia, O.E.; Naulin, V. [Association Euratom-Risoe National Laboratory, Roskilde (Denmark); Janeschitz, G. [Forschungszentrum Karlsruhe (Germany); Johnson, T. [Association Euratom-VR, Royal Institute of Technology, Stockholm (Sweden); Kuhn, S. [Association Euratom-OeAW, University of Innsbruck (Austria); Loarte, A. [EFDA Close Support Unit, Garching (Germany); Nave, F. [Association Euratom-IST, Centro de Fusao Nuclear, Lisbon (Portugal); Onjun, T. [Sirindhorn International Institute of Technology (Thailand); Pacher, G.W. [Hydro Quebec (Canada); Pacher, H.D.; Pankin, A.; Parail, V.; Pitts, R.; Saibene, G.; Snyder, P.; Spence, J.; Tskhakaya, D.; Wilson, H.
2006-09-15
This paper presents a short overview of current trends and progress in integrated ELM modelling. First, the concept of integrated ELM modelling is introduced, various interpretations of it are given and the need for it is discussed. Then follows an overview of different techniques and methods used in integrated ELM modelling, presented roughly according to the physics approaches in use and in order of increasing complexity. The paper concludes with a short discussion of open issues and future modelling requirements within the field of integrated ELM modelling. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Energy Technology Data Exchange (ETDEWEB)
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
DEFF Research Database (Denmark)
Knudsen, Torben
2011-01-01
The purpose of this deliverable 2.5 is to use fresh experimental data for validation and selection of a flow model to be used for control design in WP3-4. Initially the idea was to investigate the models developed in WP2. However, in the project it was agreed to include and focus on an additive...... model turns out not to be useful for prediction of the flow. Moreover, standard Box-Jenkins model structures and multiple-output autoregressive models prove to be superior as they can give useful predictions of the flow....
Energy Technology Data Exchange (ETDEWEB)
M. McGraw
2000-04-13
The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use a process-level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) provide ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); (3) provide a basis for development of an abstracted model for use in PA calculations.
Auxiliary Deep Generative Models
DEFF Research Database (Denmark)
Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae;
2016-01-01
Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improves the variational approximation. The auxiliary variables leave...... the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge...
Yum, Soo-Young; Yoon, Ki-Young; Lee, Choong-Il; Lee, Byeong-Chun
2016-01-01
Animal models, particularly pigs, have come to play an important role in translational biomedical research. Many pig models with genetic modifications have been produced via somatic cell nuclear transfer (SCNT). However, because most transgenic pigs have been produced by random integration to date, the necessity for more precisely gene-mutated models using recombinase-based conditional gene expression, as in mice, has been raised. Currently, advanced genome-editing technologies enable us to generate specific gene-deleted and -inserted pig models. In the future, the development of pig models with gene-editing technologies could be a valuable resource for biomedical research. PMID:27030199
Long, John
2014-01-01
Process Modeling Style focuses on other aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experience author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work produ
Modeling Epidemic Network Failures
DEFF Research Database (Denmark)
Ruepp, Sarah Renée; Fagertun, Anna Manolova
2013-01-01
This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur resulting in an epidemic. We model the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate the SID model’s behavior and impact on the network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms, and can due to its flexibility and versatility be used to evaluate multiple epidemic scenarios in various network types.
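As a rough illustration of the kind of dynamics an SID (Susceptible Infected Disabled) epidemic captures on a network, here is a toy synchronous simulation on a ring topology. The per-step probabilities, the ring network, and the absorbing treatment of the Disabled state are simplifying assumptions for the sketch, not the paper's calibrated OPNET model:

```python
import random

def sid_step(states, adj, beta, delta, rng):
    """One synchronous update: S -> I with probability beta per infected
    neighbour; I -> D with probability delta. D is treated as absorbing."""
    new = states[:]
    for v, s in enumerate(states):
        if s == "S":
            for u in adj[v]:
                if states[u] == "I" and rng.random() < beta:
                    new[v] = "I"
                    break
        elif s == "I" and rng.random() < delta:
            new[v] = "D"
    return new

# Ring network of 20 nodes with a single initially infected node.
n = 20
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
rng = random.Random(1)
states = ["I"] + ["S"] * (n - 1)
for _ in range(100):
    states = sid_step(states, adj, beta=0.5, delta=0.2, rng=rng)
severity = states.count("D") / n  # fraction of disabled (failed) nodes
```

Swapping in a different adjacency map is enough to evaluate other topologies, which is the flexibility the abstract refers to.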
Alves, Daniele S. M.; Galloway, Jamison; McCullough, Matthew; Weiner, Neal
2016-04-01
Models with Dirac gauginos are appealing scenarios for physics beyond the Standard Model. They have smaller radiative corrections to scalar soft masses, a suppression of certain supersymmetry (SUSY) production processes at the LHC, and ameliorated flavor constraints. Unfortunately, they are generically plagued by tachyons charged under the Standard Model, and attempts to eliminate such states typically spoil the positive features. The recently proposed "Goldstone gaugino" mechanism provides a simple realization of Dirac gauginos that is automatically free of dangerous tachyonic states. We provide details on this mechanism and explore models for its origin. In particular, we find SUSY QCD models that realize this idea simply and discuss scenarios for unification.
Energy Technology Data Exchange (ETDEWEB)
Brown, T.W.
2010-11-15
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Reconstruction of inflation models
Energy Technology Data Exchange (ETDEWEB)
Myrzakulov, Ratbay; Sebastiani, Lorenzo [Eurasian National University, Department of General and Theoretical Physics and Eurasian Center for Theoretical Physics, Astana (Kazakhstan); Zerbini, Sergio [Universita di Trento, Dipartimento di Fisica, Trento (Italy); TIFPA, Istituto Nazionale di Fisica Nucleare, Trento (Italy)
2015-05-15
In this paper, we reconstruct viable inflationary models by starting from the spectral index and tensor-to-scalar ratio from Planck observations. We analyze three different kinds of models: scalar field theories, fluid cosmology, and f(R)-modified gravity. We recover the well-known R^2 inflation in Jordan-frame and Einstein-frame representation, the massive scalar inflaton models and two models of inhomogeneous fluid. A model of R^2 correction to Einstein's gravity plus a ''cosmological constant'' with an exact solution for early-time acceleration is reconstructed. (orig.)
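The R^2 (Starobinsky) model recovered in this reconstruction has well-known leading-order slow-roll predictions, n_s = 1 - 2/N and r = 12/N^2, which is what makes comparison against Planck's spectral index and tensor-to-scalar ratio straightforward. A minimal sketch (the choice of N = 60 e-folds is illustrative):

```python
def r2_inflation(N):
    """Leading-order slow-roll predictions of R^2 (Starobinsky) inflation
    for N e-folds before the end of inflation."""
    n_s = 1.0 - 2.0 / N   # scalar spectral index
    r = 12.0 / N ** 2     # tensor-to-scalar ratio
    return n_s, r

n_s, r = r2_inflation(60.0)  # n_s ~ 0.967, r ~ 0.003
```

Both values sit comfortably inside the Planck-preferred region, which is why R^2 inflation keeps reappearing in reconstruction exercises like this one.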
Mathematical modelling techniques
Aris, Rutherford
1995-01-01
"Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of mode
Controlling Modelling Artifacts
DEFF Research Database (Denmark)
Smith, Michael James Andrew; Nielson, Flemming; Nielson, Hanne Riis
2011-01-01
... artifacts that were inadvertently introduced. In this paper, we propose a novel methodology to reason about modelling artifacts, given a detailed model and a high-level (more abstract) model of the same system. By a series of automated abstraction steps, we lift the detailed model to the same state space as the high-level model, so that they can be directly compared. There are two key ideas in our approach: a temporal abstraction, where we only look at the state of the system at certain observable points in time, and a spatial abstraction, where we project onto a smaller state space that summarises...
Tijidjian, Raffi P.
2010-01-01
The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in preparing reports for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.
Fischer, Arthur E.
1996-01-01
In this paper a theory of models of the universe is proposed. We refer to such models as cosmological models, where a cosmological model is defined as an Einstein-inextendible Einstein spacetime. A cosmological model is absolute if it is a Lorentz-inextendible Einstein spacetime, predictive if it is globally hyperbolic, and non-predictive if it is non-globally-hyperbolic. We discuss several features of these models in the study of cosmology. As an example, any compact Einstein spacetime is always a non-predictive absolute cosmological model, whereas a noncompact complete Einstein spacetime is an absolute cosmological model which may be either predictive or non-predictive. We discuss the important role played by maximal Einstein spacetimes. In particular, we examine the possible proper Lorentz-extensions of such spacetimes, and show that a spatially compact maximal Einstein spacetime is exclusively either a predictive cosmological model or a proper sub-spacetime of a non-predictive cosmological model. Provided that the Strong Cosmic Censorship conjecture is true, a generic spatially compact maximal Einstein spacetime must be a predictive cosmological model. It is conjectured that the Strong Cosmic Censorship conjecture is not true, and converting a vice to a virtue it is argued that the failure of the Strong Cosmic Censorship conjecture would point to what may be general relativity's greatest prediction of all, namely, that general relativity predicts that general relativity cannot predict the entire history of the universe.
Modelling structured data with Probabilistic Graphical Models
Forbes, F.
2016-05-01
Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption not realistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph not necessarily regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and Hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given associated with some practical work.
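The factorisation and conditional-independence machinery this chapter introduces can be demonstrated on the smallest possible example: a three-node chain network A -> B -> C, whose joint distribution factorises as P(A)P(B|A)P(C|B) and therefore satisfies the Markov property A independent of C given B. A sketch with made-up conditional probability tables (the numbers are illustrative, not from the chapter):

```python
from itertools import product

# Chain-structured Bayesian network A -> B -> C with hypothetical CPTs.
pA = {0: 0.6, 1: 0.4}
pB = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P(B | A)
pC = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}   # P(C | B)

# The joint distribution factorises along the graph.
joint = {(a, b, c): pA[a] * pB[a][b] * pC[b][c]
         for a, b, c in product((0, 1), repeat=3)}

def prob(pred):
    """Probability of the event described by the predicate `pred`."""
    return sum(v for k, v in joint.items() if pred(*k))

# Conditional independence A ⟂ C | B: P(C=1 | A=a, B=1) is the same for both a,
# and equals the table entry P(C=1 | B=1) = 0.6.
pC1_given_A0B1 = (prob(lambda a, b, c: (a, b, c) == (0, 1, 1))
                  / prob(lambda a, b, c: (a, b) == (0, 1)))
pC1_given_A1B1 = (prob(lambda a, b, c: (a, b, c) == (1, 1, 1))
                  / prob(lambda a, b, c: (a, b) == (1, 1)))
```

The same brute-force enumeration stops scaling almost immediately, which is precisely why the inference algorithms surveyed in the chapter exploit the graph structure instead.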
Modeling of ultrasound transducers
DEFF Research Database (Denmark)
Bæk, David
This Ph.D. dissertation addresses ultrasound transducer modeling for medical ultrasound imaging and combines the modeling with the ultrasound simulation program Field II. The project firstly presents two new models for spatial impulse responses (SIRs) to a rectangular elevation focused transducer... deviation of 5.5 % to 11.0 %. Finite element modeling of piezoceramics in combination with Field II is addressed and reveals the influence of restricting the modeling of transducers to the one-dimensional case. An investigation on modeling capacitive micromachined ultrasonic transducers (CMUTs) with Field II is addressed. It is shown how a single circular CMUT cell can be well approximated with a simple square transducer encapsulating the cell, and how this influences the modeling of full array elements. An optimal cell discretization with Field II’s mathematical elements is addressed as well...
DEFF Research Database (Denmark)
Gudiksen, Sune Klok; Poulsen, Søren Bolvig; Buur, Jacob
2014-01-01
Well-established companies are currently struggling to secure profits due to the pressure from new players' business models as they take advantage of communication technology and new business-model configurations. Because of this, the business model research field flourishes currently; however......, the modelling approaches proposed still rely on linear, rational conceptions and causal reasoning. Through six business cases we argue that participatory design has a role to play, and indeed, can lead the way into another approach to business modelling, which we call business model making. The paper...... illustrates how the application of participatory business model design toolsets can open up discussions on alternative scenarios through improvisation, mock-up making and design game playing, before qualitative judgment on the most promising scenario is carried out....
Energy Technology Data Exchange (ETDEWEB)
Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann
2008-09-01
In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses an homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
Energy Technology Data Exchange (ETDEWEB)
Shipman, Galen M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-13
These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
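The SPMD (single program multiple data) pattern named in these slides can be illustrated in a few lines: every worker executes the same function, and only its data partition distinguishes the "ranks". A thread-based sketch for illustration only; real HPC codes would use MPI, OpenMP, Kokkos, or the other models the slides cover:

```python
from concurrent.futures import ThreadPoolExecutor

def spmd_task(rank, chunk):
    """The same program runs on every worker; only the data differ per rank."""
    return sum(x * x for x in chunk)  # each rank's partial sum of squares

data = list(range(8))
chunks = [data[:4], data[4:]]         # one slice per "rank"
with ThreadPoolExecutor(max_workers=2) as ex:
    partials = list(ex.map(spmd_task, range(2), chunks))
total = sum(partials)                 # combine partials, like an MPI reduce
```

The final summation plays the role of a reduction collective; in MPI the same step would be an `MPI_Reduce` over the per-rank partial results.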
Directory of Open Access Journals (Sweden)
Alexander Fedorov
2011-03-01
Full Text Available The author supposed that media education models can be divided into the following groups: educational-information models (the study of the theory, history, and language of media culture, etc.), based on the cultural, aesthetic, semiotic, and socio-cultural theories of media education; educational-ethical models (the study of moral, religious, and philosophical problems), relying on the ethic, religious, ideological, ecological, and protectionist theories of media education; pragmatic models (practical media technology training), based on the uses-and-gratifications and ‘practical’ theories of media education; aesthetical models (aimed above all at the development of artistic taste and at enriching the skills of analysis of the best examples of media culture), relying on the aesthetical (art) and cultural studies theories; and socio-cultural models (socio-cultural development of a creative personality as to perception, imagination, visual memory, interpretation, analysis, and autonomous critical thinking), relying on the cultural studies, semiotic, and ethic models of media education.
Hydrological land surface modelling
DEFF Research Database (Denmark)
Ridler, Marc-Etienne Francois
Recent advances in integrated hydrological and soil-vegetation-atmosphere transfer (SVAT) modelling have led to improved water resource management practices, greater crop production, and better flood forecasting systems. However, uncertainty is inherent in all numerical models ultimately leading...... and disaster management. The objective of this study is to develop and investigate methods to reduce hydrological model uncertainty by using supplementary data sources. The data is used either for model calibration or for model updating using data assimilation. Satellite estimates of soil moisture and surface...... hydrological and tested by assimilating synthetic hydraulic head observations in a catchment in Denmark. Assimilation led to a substantial reduction of model prediction error, and better model forecasts. Also, a new assimilation scheme is developed to downscale and bias-correct coarse satellite derived soil...
DEFF Research Database (Denmark)
Tamke, Martin
2015-01-01
Appearing almost alive, a novel set of computational design models can become an active counterpart for architects in the design process. The ability to loop, sense and query and the integration of near real-time simulation provide these models with a depth and agility that allows for instant...... and informed feedback. Introducing the term "Aware models", the paper investigates how computational models become an enabler for a better informed architectural design practice, through the embedding of knowledge about constraints, behaviour and processes of formation and making into generative design models....... The inspection of several computational design projects in architectural research highlights three different types of awareness a model can possess and devises strategies to establish and finally design with aware models. This design practice is collaborative in nature and characterized by a bidirectional flow...
Garcia, Jesus E.; Gonzalez-Lopez, Veronica A.
2010-01-01
In this work we introduce a new and richer class of finite order Markov chain models and address the following model selection problem: find the Markov model with the minimal set of parameters (minimal Markov model) which is necessary to represent a source as a Markov chain of finite order. Let us call $M$ the order of the chain and $A$ the finite alphabet. To determine the minimal Markov model, we define an equivalence relation on the state space $A^{M}$, such that all the sequences of size $M$ with the same transition probabilities are put in the same category. In this way we have one set of $(|A|-1)$ transition probabilities for each category, obtaining a model with a minimal number of parameters. We show that the model can be selected consistently using the Bayesian information criterion.
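The order-selection idea above can be sketched with a plain maximum-likelihood fit and the Bayesian information criterion. This is a simplified illustration that scores full orders rather than the paper's context equivalence classes, so the parameter count is $|A|^M(|A|-1)$ instead of the minimal one:

```python
from collections import Counter, defaultdict
import math

def fit_counts(seq, order):
    """Transition counts for each length-`order` context in the sequence."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[seq[i - order:i]][seq[i]] += 1
    return counts

def bic(seq, order, alphabet):
    """BIC = -2 log L + k log n for a full Markov chain of the given order."""
    counts = fit_counts(seq, order)
    ll = 0.0
    for i in range(order, len(seq)):
        ctx = seq[i - order:i]
        ll += math.log(counts[ctx][seq[i]] / sum(counts[ctx].values()))
    k = len(alphabet) ** order * (len(alphabet) - 1)  # free parameters
    return -2.0 * ll + k * math.log(len(seq) - order)

# A sequence with a clear order-1 structure: "ab" repeated.
seq = "ab" * 200
best = min((0, 1, 2), key=lambda m: bic(seq, m, "ab"))  # selected order
```

Order 1 fits this sequence perfectly, so higher orders gain no likelihood and only pay the BIC parameter penalty; the criterion therefore selects order 1.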
Multiscale Modeling of Recrystallization
Energy Technology Data Exchange (ETDEWEB)
Godfrey, A.W.; Holm, E.A.; Hughes, D.A.; Lesar, R.; Miodownik, M.A.
1998-12-07
We propose a multi-length-scale approach to modeling recrystallization which links a dislocation model, a cell growth model and a macroscopic model. Although this methodology and linking framework will be applied to recrystallization, it is also applicable to other types of phase transformations in bulk and layered materials. Critical processes such as the dislocation structure evolution, nucleation, the evolution of crystal orientations into a preferred texture, and grain size evolution all operate at different length scales. In this paper we focus on incorporating experimental measurements of dislocation substructures, misorientation measurements of dislocation boundaries, and dislocation simulations into a mesoscopic model of cell growth. In particular, we show how feeding information from the dislocation model into the cell growth model can create a realistic initial microstructure.
CREDIT RISK. DETERMINATION MODELS
Directory of Open Access Journals (Sweden)
MIHAELA GRUIESCU
2012-01-01
Full Text Available The internationalization of financial flows and banking and the rapid development of markets have changed the financial sector, causing it to respond with force and imagination. Under these conditions, financial and banking institutions and rating institutions are increasingly turning to find the best solutions to hedge risks and maximize profits. This paper aims to present a number of advantages, but also the limits, of the Merton model, the first structural model for modeling credit risk. Also presented are some extensions of the model, some empirical research on its known performance, and others such as state-dependent models (SDM), which together with liquidation process models (LPM) are two recent efforts among structural models that capture different phenomena in real life.
Pre-collapse evolution of galactic globular clusters
Fukushige, Toshiyuki; Heggie, Douglas C
1994-01-01
Abstract: This paper is concerned with collisionless aspects of the early evolution of model star clusters. The effects of mass loss through stellar evolution and of a steady tidal field are modelled using N-body simulations. Our results (which depend on the assumed initial structure and the mass spectrum) agree qualitatively with those of Chernoff & Weinberg (1990), who used a Fokker-Planck model with a spherically symmetric tidal cutoff. For those systems which are disrupted, the lifetime to disruption generally exceeds that found by Chernoff & Weinberg, sometimes by as much as an order of magnitude. Because we do not model collisional effects correctly we cannot establish the fate of the survivors. In terms of theoretical interpretation, we find that tidal disruption must be understood as a loss of equilibrium, and not a loss of stability, as is sometimes stated.
Phyloclimatic modeling: combining phylogenetics and bioclimatic modeling.
Yesson, C; Culham, A
2006-10-01
We investigate the impact of past climates on plant diversification by tracking the "footprint" of climate change on a phylogenetic tree. Diversity within the cosmopolitan carnivorous plant genus Drosera (Droseraceae) is focused within Mediterranean climate regions. We explore whether this diversity is temporally linked to Mediterranean-type climatic shifts of the mid-Miocene and whether climate preferences are conservative over phylogenetic timescales. Phyloclimatic modeling combines environmental niche (bioclimatic) modeling with phylogenetics in order to study evolutionary patterns in relation to climate change. We present the largest and most complete such example to date using Drosera. The bioclimatic models of extant species demonstrate clear phylogenetic patterns; this is particularly evident for the tuberous sundews from southwestern Australia (subgenus Ergaleium). We employ a method for establishing confidence intervals of node ages on a phylogeny using replicates from a Bayesian phylogenetic analysis. This chronogram shows that many clades, including subgenus Ergaleium and section Bryastrum, diversified during the establishment of the Mediterranean-type climate. Ancestral reconstructions of bioclimatic models demonstrate a pattern of preference for this climate type within these groups. Ancestral bioclimatic models are projected into palaeo-climate reconstructions for the time periods indicated by the chronogram. We present two such examples that each generate plausible estimates of ancestral lineage distribution, which are similar to their current distributions. This is the first study to attempt bioclimatic projections on evolutionary time scales. The sundews appear to have diversified in response to local climate development. Some groups are specialized for Mediterranean climates, others show wide-ranging generalism. This demonstrates that Phyloclimatic modeling could be repeated for other plant groups and is fundamental to the understanding of
Modeling agriculture in the Community Land Model
Directory of Open Access Journals (Sweden)
B. Drewniak
2012-12-01
Full Text Available The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types – maize, soybean, and spring wheat – into the coupled carbon-nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements. CLM-Crop yields were comparable with observations in some regions, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation. In case studies with the new crop model looking at impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when a low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased. Our improvements in CLM demonstrate a new capability in the model – simulating agriculture in a realistic way, complete with fertilizer and residue management practices. Results are encouraging, with improved representation of human influences on the land ...
Modeling local dependence in longitudinal IRT models
DEFF Research Database (Denmark)
Larsen, Maja Olsbjerg; Christensen, Karl Bang
2015-01-01
Measuring change in a latent variable over time is often done using the same instrument at several time points. This can lead to dependence between responses across time points for the same person, yielding within-person correlations that are stronger than what can be attributed to the latent variable. Ignoring this can lead to biased estimates of changes in the latent variable. In this paper we propose a method for modeling local dependence in the longitudinal 2PL model. It is based on the concept of item splitting, and makes it possible to correctly estimate change in the latent variable.
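A sketch of the item-splitting idea in a 2PL setting: the repeated item at time 2 is split into virtual items whose difficulty depends on the time-1 response, so the extra within-person dependence is absorbed into item parameters. All parameter values below are illustrative, not estimates from the paper:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response given ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

a, b = 1.2, 0.5    # discrimination and difficulty of the item at time 1
delta = -0.4       # hypothetical dependence shift after a time-1 success

theta = 0.8
p_t1 = p_2pl(theta, a, b)                   # time 1
p_t2_correct = p_2pl(theta, a, b + delta)   # time 2, item solved at time 1
p_t2_wrong = p_2pl(theta, a, b)             # time 2, item failed at time 1
# A negative delta makes the item easier after an earlier success, which is
# one way local dependence across time points can manifest.
```

Estimating `delta` alongside the usual item parameters is what allows change in the latent variable to be estimated without bias from the dependence.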
Energy Technology Data Exchange (ETDEWEB)
Hammerand, Daniel Carl; Scherzinger, William Mark
2007-09-01
The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms have been implemented in LAME. The structure and testing of LAME is described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models which have already been implemented into LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies and rubber. A complete description of each model is outside the scope of the current report. Rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities. The state variable aliases available for results output have been listed in this report. However, not all models use these aliases. For those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model. This report lists which of these four functions are employed in each material model. As far as analysts are concerned, this information is included only for awareness purposes. The analyst can take confidence in the fact that each model has been properly implemented.
Directory of Open Access Journals (Sweden)
Dan Alexandru Anghel
2012-01-01
Full Text Available In semiconductor laser modeling, a good mathematical model gives near-reality results. Three methods of modeling solutions from the rate equations are presented and analyzed. A method based on the rate equations, modeled in Simulink, is used to describe quantum well lasers. For different input signal types, such as step, sawtooth, and sine functions, the equations give a good response. A circuit model resulting from one of the rate-equation models is presented and simulated in SPICE. Results show good modeling behavior. Numerical simulation in MathCad gives satisfactory results for the study of the transitory and dynamic operation at small levels of the injection current. The obtained numerical results show the specific limits of each model, according to theoretical analysis. Based on these results, software can be built that integrates circuit simulation and other modeling methods for quantum well lasers, providing a tool that models and analyzes these devices from all points of view.
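The qualitative behaviour such rate-equation models capture can be sketched without Simulink or SPICE. Below, dimensionless single-mode laser rate equations are integrated with forward Euler; the parameter values are illustrative, not those of the paper:

```python
# Dimensionless single-mode rate equations (time in units of the carrier
# lifetime):
#   dn/dt = p - n - n*s            carrier density n, pump p
#   ds/dt = g*(n*s - s + beta*n)   photon density s, cavity decay rate g
p, g, beta = 2.0, 50.0, 1e-4   # pump, cavity rate, spontaneous-emission factor
n, s = 0.0, 0.0
dt = 1e-4
for _ in range(300_000):       # integrate over 30 carrier lifetimes
    dn = p - n - n * s
    ds = g * (n * s - s + beta * n)
    n, s = n + dt * dn, s + dt * ds

# After the relaxation oscillations die out, the photon density settles
# near s = p - 1 for pumping above threshold (p > 1)
print(round(s, 2))
```

A step, sawtooth, or sine input is obtained by replacing the constant `p` with a time-dependent function inside the loop, which mimics the signal-response experiments described in the abstract.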
Geochemical modeling: a review
Energy Technology Data Exchange (ETDEWEB)
Jenne, E.A.
1981-06-01
Two general families of geochemical models presently exist. The ion speciation-solubility group of geochemical models contains submodels that first calculate a distribution of aqueous species and then test the hypothesis that the water is near equilibrium with particular solid phases. These models may or may not calculate the adsorption of dissolved constituents and simulate the dissolution and precipitation (mass transfer) of solid phases. Another family of geochemical models, the reaction path models, simulates the stepwise precipitation of solid phases as a result of reacting specified amounts of water and rock. Reaction path models first perform an aqueous speciation of the dissolved constituents of the water, test solubility hypotheses, then perform the reaction path modeling. Certain improvements in the present versions of these models would enhance their value and usefulness to applications in nuclear-waste isolation, etc. Mass-transfer calculations of limited extent are certainly within the capabilities of state-of-the-art models. However, the reaction path models require an expansion of their thermodynamic databases and systematic validation before they are generally accepted.
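The solubility-test step of an ion speciation-solubility model reduces to comparing an ion activity product (IAP) with a solubility product (Ksp) via the saturation index SI = log10(IAP/Ksp). A minimal sketch using calcite; the activities are hypothetical, while log Ksp ≈ -8.48 at 25 °C is the commonly tabulated value:

```python
import math

def saturation_index(iap, ksp):
    """SI > 0: supersaturated (precipitation possible); SI < 0: undersaturated."""
    return math.log10(iap / ksp)

# Calcite, CaCO3: IAP = a(Ca2+) * a(CO3 2-), with hypothetical activities
a_ca, a_co3 = 10**-3.2, 10**-5.1
si = saturation_index(a_ca * a_co3, 10**-8.48)
# si = (-3.2) + (-5.1) + 8.48 = 0.18, i.e., slightly supersaturated
```

A full speciation code would first compute the activities from total concentrations and activity coefficients; this sketch shows only the hypothesis test applied afterwards.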
Modelling Farm Animal Welfare.
Collins, Lisa M; Part, Chérie E
2013-05-16
The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested.
Directory of Open Access Journals (Sweden)
Chérie E. Part
2013-05-01
Full Text Available The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested.
Some phenomenological predictions of charged Higgs bosons in electroweak interactions
Energy Technology Data Exchange (ETDEWEB)
Garcia Canal, C.A.; Santangelo, E.M.
1984-05-01
Some phenomenological consequences of an extended Salam-Weinberg model are studied. In particular, the existence, or absence, of e-μ asymmetry in beam-dump experiments is analyzed and an increase in same-sign dilepton cross sections is shown to exist due to the contribution of charged-Higgs-mediated diagrams. The model is shown to be compatible with experimental results for other processes.
Final state interactions at the threshold of Higgs boson pair production
Zhang, Zhentao
2015-01-01
We study the effect of final state interactions at the threshold of Higgs boson pair production in the Glashow-Weinberg-Salam model. We consider three major processes of the pair production in the model: lepton pair annihilation, ZZ fusion, and WW fusion. We find that the corrections caused by the effect for these processes are markedly different. According to our results, the effect can cause non-negligible corrections to the cross sections for lepton pair annihilation and small corrections ...
Product Modelling for Model-Based Maintenance
Houten, van F.J.A.M.; Tomiyama, T.; Salomons, O.W.
1998-01-01
The paper describes the fundamental concepts of maintenance and the role that information technology can play in the support of maintenance activities. Function-Behaviour-State modelling is used to describe faults and deterioration of mechanisms in terms of user perception and measurable quantities.
Hydrological land surface modelling
DEFF Research Database (Denmark)
Ridler, Marc-Etienne Francois
to imperfect model forecasts. It remains a crucial challenge to account for system uncertainty, so as to provide model outputs accompanied by a quantified confidence interval. Properly characterizing and reducing uncertainty opens up the opportunity for risk-based decision-making and more effective emergency and disaster management. The objective of this study is to develop and investigate methods to reduce hydrological model uncertainty by using supplementary data sources. The data is used either for model calibration or for model updating using data assimilation. Satellite estimates of soil moisture and surface temperature are explored in a multi-objective calibration experiment to optimize the parameters in a SVAT model in the Sahel. The two satellite-derived variables were effective at constraining most land-surface and soil parameters. A data assimilation framework is developed and implemented with an integrated ...
Aeroservoelasticity modeling and control
Tewari, Ashish
2015-01-01
This monograph presents the state of the art in aeroservoelastic (ASE) modeling and analysis and develops a systematic theoretical and computational framework for use by researchers and practicing engineers. It is the first book to focus on the mathematical modeling of structural dynamics, unsteady aerodynamics, and control systems to evolve a generic procedure to be applied for ASE synthesis. Existing robust, nonlinear, and adaptive control methodology is applied and extended to some interesting ASE problems, such as transonic flutter and buffet, post-stall buffet and maneuvers, and flapping flexible wing. The author derives a general aeroservoelastic plant via the finite-element structural dynamic model, unsteady aerodynamic models for various regimes in the frequency domain, and the associated state-space model by rational function approximations. For more advanced models, the full-potential, Euler, and Navier-Stokes methods for treating transonic and separated flows are also briefly addressed. Essential A...
DEFF Research Database (Denmark)
Borlund, Pia
2003-01-01
An alternative approach to evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation of IIR systems as realistically as possible with reference to actual information searching and retrieval processes, though still in a relatively controlled evaluation environment; and 2) to calculate the IIR system performance taking into account the non-binary nature of the assigned relevance assessments. The IIR evaluation model is presented as an alternative to the system-driven Cranfield model (Cleverdon, Mills & Keen, 1966; Cleverdon & Keen, 1966), which is still the dominant approach to the evaluation of IR and IIR systems. Key elements of the IIR evaluation model are the use of realistic ...
Deisboeck, Thomas S; Wang, Zhihui; Macklin, Paul; Cristini, Vittorio
2011-08-15
Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insights into the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community.
Validation of simulation models
DEFF Research Database (Denmark)
Rehman, Muniza; Pedersen, Stig Andur
2012-01-01
In philosophy of science, the interest in computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged, but the views have not succeeded in capturing the diversity of validation methods. The wide variety of models with regard to their purpose, character, field of application and time dimension inherently calls for a similar diversity in validation approaches. A classification of models in terms of the mentioned elements is presented and used to shed light on possible types of validation. The discussion of models has been somewhat narrow-minded, reducing the notion of validation to establishment of truth. This article puts forward the diversity in applications of simulation models that demands a corresponding diversity in the notion of validation.
Lawson, Andrew B
2002-01-01
Research has generated a number of advances in methods for spatial cluster modelling in recent years, particularly in the area of Bayesian cluster modelling. Along with these advances has come an explosion of interest in the potential applications of this work, especially in epidemiology and genome research. In one integrated volume, this book reviews the state-of-the-art in spatial clustering and spatial cluster modelling, bringing together research and applications previously scattered throughout the literature. It begins with an overview of the field, then presents a series of chapters that illuminate the nature and purpose of cluster modelling within different application areas, including astrophysics, epidemiology, ecology, and imaging. The focus then shifts to methods, with discussions on point and object process modelling, perfect sampling of cluster processes, partitioning in space and space-time, spatial and spatio-temporal process modelling, nonparametric methods for clustering, and spatio-temporal ...
Energy Technology Data Exchange (ETDEWEB)
Huemmer, Matthias [AREVA NP GmbH, Paul-Gossen Strasse 100, Erlangen (Germany)
2008-07-01
The safety of Reactor Pressure Vessels (RPV) must be assured and demonstrated by safety assessments against brittle fracture according to the codes and standards. In addition to these deterministic methods, researchers have developed statistical methods, so-called local approach (LA) models, to predict specimen or component failure. These models transfer the microscopic fracture events to the macro scale by means of Weibull stresses and can therefore describe the fracture behavior more accurately. This paper proposes a recently developed LA model. After calibration of the model parameters, the wide applicability of the model is demonstrated. To this end, a large number of computations based on 3D finite element simulations have been conducted, covering different specimen types and materials in unirradiated and irradiated condition. Comparison of the experimental data with the predictions obtained by means of the LA model shows that the fracture behavior can be well described. (authors)
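A sketch of the Weibull-stress computation behind such local approach models, in the spirit of the common two-parameter Beremin-type formulation: the maximum principal stress is aggregated over the plastic process zone, and cleavage failure probability follows a Weibull law. Element stresses, volumes, and Weibull parameters below are illustrative, not the paper's calibrated values:

```python
import numpy as np

m, sigma_u = 20.0, 2500.0   # Weibull modulus and scale parameter (MPa)
V0 = 1.0e-3                 # reference volume (mm^3)

# Hypothetical element-wise maximum principal stresses (MPa) and element
# volumes (mm^3) inside the plastic zone of a 3D finite element model
sigma_1 = np.array([1800.0, 2000.0, 2200.0, 2100.0])
vol = np.array([2e-3, 1e-3, 5e-4, 8e-4])

# Weibull stress: volume-weighted aggregation of the principal stresses
sigma_w = np.sum(sigma_1**m * vol / V0) ** (1.0 / m)

# Two-parameter Weibull law for the cumulative failure probability
p_f = 1.0 - np.exp(-(sigma_w / sigma_u) ** m)
```

Calibrating `m` and `sigma_u` against fracture-toughness test series, then re-evaluating `p_f` for other specimen geometries, is the transferability exercise the abstract describes.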
Bobyn, Justin D; Little, David G; Gray, Randolph; Schindeler, Aaron
2015-04-01
Multiple techniques designed to induce scoliotic deformity have been applied across many animal species. We have undertaken a review of the literature regarding experimental models of scoliosis in animals to discuss their utility in comprehending disease aetiology and treatment. Models of scoliosis in animals can be broadly divided into quadrupedal and bipedal experiments. Quadrupedal models, in the absence of an axial gravitational force, depend upon development of a mechanical asymmetry along the spine to initiate a scoliotic deformity. Bipedal models more accurately mimic human posture and consequently are subject to similar forces due to gravity, which have long been appreciated to be a contributing factor in the development of scoliosis. Many effective models of scoliosis in smaller animals have not been successfully translated to primates and humans. Though these models may not clarify the aetiology of human scoliosis, by providing a reliable and reproducible deformity in the spine they are a useful means with which to test interventions designed to correct and prevent deformity.
DEFF Research Database (Denmark)
Andersen, Kasper Winther
Three main topics are presented in this thesis. The first and largest topic concerns network modelling of functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI). In particular, nonparametric Bayesian methods are used to model brain networks derived from resting state fMRI, and models are compared for their ability to reproduce node clustering and predict unseen data. Comparing the models on whole brain networks, BCD and IRM showed better reproducibility and predictability than IDM, suggesting that resting state networks exhibit community structure. This also points to the importance of using models which allow for complex interactions between all pairs of clusters. In addition, it is demonstrated how the IRM can be used for segmenting brain structures into functionally coherent clusters. A new nonparametric Bayesian network model is presented. The model builds upon the IRM and can be used to infer ...
1985-01-01
The outside users payload model, which is a continuation of earlier documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-bloc countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.
Developing mathematical modelling competence
DEFF Research Database (Denmark)
Blomhøj, Morten; Jensen, Tomas Højgaard
2003-01-01
In this paper we introduce the concept of mathematical modelling competence, by which we mean being able to carry through a whole mathematical modelling process in a certain context. Analysing the structure of this process, six sub-competences are identified. Mathematical modelling competence cannot be reduced to these six sub-competences, but they are necessary elements in the development of mathematical modelling competence. Experience from the development of a modelling course is used to illustrate how the different nature of the sub-competences can be used as a tool for finding the balance between different kinds of activities in a particular educational setting. Obstacles of social, cognitive and affective nature for the students' development of mathematical modelling competence are reported and discussed in relation to the sub-competences.
Identification of physical models
DEFF Research Database (Denmark)
Melgaard, Henrik
1994-01-01
The problem of identification of physical models is considered within the frame of stochastic differential equations. Methods for estimation of parameters of these continuous time models based on discrete time measurements are discussed. The important algorithms of a computer program for ML or MAP estimation are described, as is the design of experiments, which is for instance the design of an input signal that is optimal according to a criterion based on the information provided by the experiment. Model validation is also discussed. An important verification of a physical model is to compare the physical characteristics of the model with the available prior knowledge. The methods for identification of physical models have been applied in two different case studies. One case is the identification of thermal dynamics of building components. The work is related to a CEC research project called PASSYS (Passive Solar Components ...
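A minimal sketch of ML parameter estimation for a continuous-time model from discrete-time measurements, using an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW whose exact discretization is a Gaussian AR(1). All numerical values are illustrative:

```python
import math
import random

random.seed(1)
theta_true, sigma, dt, n = 2.0, 0.5, 0.01, 20_000

# Exact discretization: X[k+1] = a*X[k] + e, a = exp(-theta*dt),
# with innovation variance q = sigma^2 * (1 - a^2) / (2*theta)
a_true = math.exp(-theta_true * dt)
q = sigma**2 * (1 - a_true**2) / (2 * theta_true)

x = [0.0]
for _ in range(n):
    x.append(a_true * x[-1] + math.sqrt(q) * random.gauss(0.0, 1.0))

# Conditional ML estimate of a (least squares, since innovations are
# Gaussian), then map back to the continuous-time rate theta
num = sum(x[k] * x[k + 1] for k in range(n))
den = sum(x[k] * x[k] for k in range(n))
theta_hat = -math.log(num / den) / dt
```

The same pattern (exact or approximate discretization, then maximize the Gaussian likelihood of the innovations) underlies ML/MAP estimation for more general stochastic differential equation models such as the thermal-dynamics case mentioned above.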
Institute of Scientific and Technical Information of China (English)
吴宁; 阮图南
1996-01-01
A quantum mechanical model with one bosonic degree of freedom is discussed in detail. Conventionally, when a quantum mechanical model is constructed, one must know the corresponding classical model, and by applying the correspondence between the classical Poisson brackets and the canonical commutator, the canonical quantization condition can be obtained. In the quantum model, study of the corresponding classical model is needed first. In this model, the Lagrangian is operator gauge invariant. After localization, in order to keep gauge invariance, the operator gauge potential must be introduced. The Euler-Lagrange equation of motion of the dynamical argument gives the usual operator equation of motion, and the operator gauge potential just gives a constraint. This constraint is just the usual canonical quantization condition.
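The canonical quantization condition [q, p] = iħ that appears here as a constraint can be illustrated numerically in a truncated harmonic-oscillator basis (a standard textbook check, not the paper's operator-gauge construction; the truncation is exact except in the last basis state):

```python
import numpy as np

N, hbar = 8, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator (N x N)
q = np.sqrt(hbar / 2.0) * (a + a.T)          # position operator
p = 1j * np.sqrt(hbar / 2.0) * (a.T - a)     # momentum operator

comm = q @ p - p @ q                         # canonical commutator [q, p]
# comm equals i*hbar*I on every basis state except at the truncation edge,
# where the finite matrix cannot represent the full ladder algebra
```

In any finite truncation the trace of [q, p] must vanish, which is why the last diagonal entry compensates the identity block; the condition holds exactly only in the infinite-dimensional limit.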